This PR removes the need to use `/rustdoc --index <CRATE_NAME>` and
instead indexes the crates once they are referenced.
As soon as the first `:` is added after the crate name, the indexing
will kick off in the background and update the index as it goes.
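Roughly the shape of the trigger, as an illustrative sketch (not the actual
implementation; `crawl_rustdoc_items` and the shared index are hypothetical):
```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Once the argument contains a `:` after the crate name, start indexing on a
// background thread; completions read from the shared index as it fills in.
// A real implementation would only kick this off once per crate.
fn on_argument_changed(argument: &str, index: Arc<Mutex<Vec<String>>>) {
    if let Some((crate_name, _rest)) = argument.split_once(':') {
        let crate_name = crate_name.to_string();
        thread::spawn(move || {
            for item in crawl_rustdoc_items(&crate_name) {
                index.lock().unwrap().push(item);
            }
        });
    }
}

fn crawl_rustdoc_items(crate_name: &str) -> Vec<String> {
    // Placeholder for the real crawler.
    vec![format!("{crate_name}::SomeItem")]
}
```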
Release Notes:
- N/A
This PR updates the `/rustdoc` command with persistence for the
documented rustdoc items.
Now when you run `/rustdoc --index <CRATE_NAME>` it will index the crate
and store the results in LMDB.
The documented items will then be read from the database when searching
using `/rustdoc` and persist across restarts of Zed.
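For illustration, the store behaves like a key-value map keyed by the fully
qualified item path; here is a sketch with a `BTreeMap` standing in for the
LMDB database (names are hypothetical, not the actual implementation):
```rust
use std::collections::BTreeMap;

struct RustdocStore {
    // In the real store this is an LMDB database, so entries survive restarts.
    db: BTreeMap<String, String>,
}

impl RustdocStore {
    fn save(&mut self, crate_name: &str, item_path: &str, docs: String) {
        // e.g. key: "serde::de::Deserialize"
        self.db.insert(format!("{crate_name}::{item_path}"), docs);
    }

    fn load(&self, crate_name: &str, item_path: &str) -> Option<&String> {
        self.db.get(&format!("{crate_name}::{item_path}"))
    }

    fn search(&self, query: &str) -> Vec<&String> {
        // A prefix scan over the keys backs the `/rustdoc` completions.
        self.db
            .range(query.to_string()..)
            .take_while(|(key, _)| key.starts_with(query))
            .map(|(key, _)| key)
            .collect()
    }
}
```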
Release Notes:
- N/A
This PR adds an MVP of retrieving docs using the `/rustdoc` command from
an indexed set of docs.
To try this out:
1. Build local docs using `cargo doc`
2. Index the docs for the crate you want to search using `/rustdoc --index <CRATE_NAME>`
   - Note: This may take a while, depending on the size of the crate
3. Search for docs using `/rustdoc my_crate::path::to::item`
   - You should get completions for the available items
Here are some screenshots of it in action:
<img width="640" alt="Screenshot 2024-06-12 at 6 19 20 PM"
src="https://github.com/zed-industries/zed/assets/1486634/6c49bec9-d084-4dcb-a92c-1b4c557ee9ce">
<img width="636" alt="Screenshot 2024-06-12 at 6 52 56 PM"
src="https://github.com/zed-industries/zed/assets/1486634/636a651c-7d02-48dc-b05c-931f33c49f9c">
Release Notes:
- N/A
This PR adds a first pass at a rustdoc crawler.
We'll be using this to get information about a crate from the rustdoc
artifacts for use in the Assistant.
Release Notes:
- N/A
---------
Co-authored-by: Richard <richard@zed.dev>
Closes #4424.
A few design decisions that may need some rethinking or later PRs:
* Other providers have a check for authentication. I use this
opportunity to fetch the models, which doubles as a way of finding out
whether the Ollama server is running (see the sketch below).
* Ollama has _no_ API for getting the max tokens per model
* Ollama has _no_ API for getting the current token count
(https://github.com/ollama/ollama/issues/1716)
* Ollama does allow setting `num_ctx`, so I've defaulted this to 4096.
It can be overridden in settings.
* Ollama models will be "slow" to start inference because they're
loading the model into memory. It's faster after that. There's no UI
affordance to show that the model is being loaded.
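A minimal sketch of the models-as-health-check idea, assuming `reqwest` and
`serde` purely for illustration (Zed's actual HTTP client and types differ);
`GET /api/tags` is Ollama's endpoint for listing local models:
```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct TagsResponse {
    models: Vec<ModelEntry>,
}

#[derive(Deserialize)]
struct ModelEntry {
    name: String,
}

/// Lists the locally available models. If this request fails, the Ollama
/// server isn't running, so it doubles as the authentication/health check.
fn fetch_models(host: &str) -> anyhow::Result<Vec<String>> {
    let response: TagsResponse =
        reqwest::blocking::get(format!("{host}/api/tags"))?.json()?;
    Ok(response.models.into_iter().map(|m| m.name).collect())
}

fn main() -> anyhow::Result<()> {
    let models = fetch_models("http://localhost:11434")?;
    println!("Ollama is running; available models: {models:?}");
    Ok(())
}
```
The 4096 `num_ctx` default can then be sent per request via Ollama's
`options` field.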
Release Notes:
- Added an Ollama Provider for the assistant. If you have
[Ollama](https://ollama.com/) running locally on your machine, you can
enable it in your settings under:
```jsonc
"assistant": {
"version": "1",
"provider": {
"name": "ollama",
// Recommended setting to allow for model startup
"low_speed_timeout_in_seconds": 30,
}
}
```
Chat like usual
<img width="1840" alt="image"
src="https://github.com/zed-industries/zed/assets/836375/4e0af266-4c4f-4d9e-9d74-1a91f76a12fe">
Interact with any model from the [Ollama
Library](https://ollama.com/library)
<img width="587" alt="image"
src="https://github.com/zed-industries/zed/assets/836375/87433ac6-bf87-4a99-89e1-96a93bf8de8a">
Open up the terminal to download new models via `ollama pull`:
![image](https://github.com/zed-industries/zed/assets/836375/af7ec411-76bf-41c7-ba81-64bbaeea98a8)
This PR adds a tag handler for collecting crate items from rustdoc's
HTML output.
This will serve as the foundation for getting more insight into a
crate's contents.
Release Notes:
- N/A
TODO:
- [x] Finish GPUI changes on other operating systems
This is a largely internal change to how we report data to our
diagnostics and telemetry. This PR also includes an update to our blade
backend which allows us to report errors in a more useful way when
failing to initialize blade.
Release Notes:
- N/A
---------
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
This PR removes the `color` crate, as it was not used anywhere.
We had added this experimentally, but right now its existence is just a
source of confusion.
Release Notes:
- N/A
This PR adjusts the extension download counts to be displayed using
thousands separators.
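For reference, the formatting itself is just a digit-grouping pass; a
standalone sketch (not necessarily the helper used in the extensions page):
```rust
/// Formats an integer with thousands separators, e.g. 1000000 -> "1,000,000".
fn format_with_separators(n: u64) -> String {
    let digits = n.to_string();
    let mut out = String::with_capacity(digits.len() + digits.len() / 3);
    for (i, ch) in digits.chars().enumerate() {
        // Insert a comma before every remaining group of three digits.
        if i > 0 && (digits.len() - i) % 3 == 0 {
            out.push(',');
        }
        out.push(ch);
    }
    out
}

fn main() {
    assert_eq!(format_with_separators(1_000_000), "1,000,000");
    assert_eq!(format_with_separators(999), "999");
}
```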
Release Notes:
- Adjusted extension download counts to display with thousands
separators (e.g., `1,000,000`).
This PR is an internal refactor in preparation for remote editing. It
restructures the public interface of `Worktree`, reducing the number of
call sites that assume that a worktree is local or remote.
* The Project no longer calls `worktree.as_local_mut().unwrap()` in code
paths related to basic file operations
* Fewer code paths in the app rely on the worktree's `LocalSnapshot`
* Worktree-related RPC message handling is more fully encapsulated by
the `Worktree` type.
to do:
* [x] file manipulation operations
* [x] sending worktree updates when sharing
for later
* opening buffers
* updating open buffers upon worktree changes
Release Notes:
- N/A
The `worktree` crate mainly provides an in-memory model of a directory
and its git repositories. But because it was originally extracted from
the Project crate, it also contained lingering bits of code that were
outside of that area:
* it had a little bit of logic related to buffers (though most buffer
management lives in `project`)
* it had a *little* bit of logic for storing diagnostics (though the
vast majority of LSP and diagnostic logic lives in `project`)
* it had a little bit of logic for sending RPC messages (though the
*receiving* logic for those RPC messages lived in `project`)
In this PR, I've moved those concerns entirely to the project crate
(where they were already dealt with for the most part), so that the
worktree crate can be more focused on its main job, and have fewer
dependencies.
Worktree no longer depends on `client` or `lsp`. It still depends on
`language`, but only because of `impl language::File for
worktree::File`.
Release Notes:
- N/A
This PR overhauls the HTML to Markdown conversion functionality in order
to make it more pluggable. This will ultimately allow for supporting a
variety of different HTML input structures (both natively and via
extensions).
As part of this, the `rustdoc_to_markdown` crate has been renamed to
`html_to_markdown`.
The `MarkdownWriter` now accepts a list of trait objects that can be
used to drive the conversion of the HTML into Markdown. Right now we
have some generic handler implementations for going from plain HTML
elements to their Markdown equivalents, as well as some rustdoc-specific
ones.
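Conceptually, each handler declares which tags it cares about and writes the
Markdown for them; something along these lines (illustrative trait and names,
not the crate's exact API):
```rust
/// Illustrative handler trait: a handler opts into specific tags and emits
/// the corresponding Markdown.
trait HandleTag {
    fn should_handle(&self, tag: &str) -> bool;
    fn handle_tag_start(&mut self, tag: &str, out: &mut String);
    fn handle_tag_end(&mut self, tag: &str, out: &mut String);
}

/// A generic handler that turns <h1>..<h3> into Markdown headings.
struct HeadingHandler;

impl HandleTag for HeadingHandler {
    fn should_handle(&self, tag: &str) -> bool {
        matches!(tag, "h1" | "h2" | "h3")
    }

    fn handle_tag_start(&mut self, tag: &str, out: &mut String) {
        let level = tag[1..].parse().unwrap_or(1);
        out.push_str(&"#".repeat(level));
        out.push(' ');
    }

    fn handle_tag_end(&mut self, _tag: &str, out: &mut String) {
        out.push_str("\n\n");
    }
}

/// The writer walks the DOM and offers each tag to its handlers in order.
struct MarkdownWriter {
    handlers: Vec<Box<dyn HandleTag>>,
}
```
Rustdoc-specific behavior then lives in its own handlers rather than in the
core writer.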
Release Notes:
- N/A
This could still use some improvement UI-wise but the user experience
should be a lot better.
- [x] Show in "Window" application menu
- [x] Load prompt as it's selected in the picker
- [x] Refocus picker on `esc`
- [x] When creating a new prompt, if a new prompt already exists and is
unedited, activate it instead
- [x] Add `/default` command
- [x] Evaluate /commands on prompt insertion
- [x] Autocomplete /commands (but don't evaluate) during prompt editing
- [x] Show token count using the settings model, right-aligned in the
editor
- [x] Picker
  - [x] Sorted alpha
  - [x] 2 sublists
    - Default
      - Empty state: Star a prompt to add it to your default prompt
      - Otherwise show prompts with star on hover
    - All
      - Move prompts with star on hover
Release Notes:
- N/A
Fixes https://github.com/zed-industries/zed/issues/10890
* removes the `unwrap()` that caused panics for text elements with no
text, which remained after the edit state was cleared but project
entries were not yet updated, leaving a fake "new entry"
* improves discoverability of FS errors during file/directory creation:
those are now shown as workspace notifications
* stops printing anyhow backtraces in workspace notifications, printing
the more readable chain of contexts instead (sketched below)
* better indicates when new entries are created as excluded ones
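The context-chain formatting is essentially this (a small sketch using
`anyhow::Error::chain`, not the exact notification code):
```rust
use anyhow::{Context, Result};

fn create_file() -> Result<()> {
    std::fs::File::create("/nonexistent/dir/file.txt")
        .context("failed to create file.txt")?;
    Ok(())
}

fn main() {
    if let Err(error) = create_file() {
        // Join the chain of contexts into a single readable line instead of
        // printing the (much noisier) backtrace.
        let message = error
            .chain()
            .map(|err| err.to_string())
            .collect::<Vec<_>>()
            .join(": ");
        eprintln!("{message}");
        // e.g. "failed to create file.txt: No such file or directory (os error 2)"
    }
}
```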
Release Notes:
- Improve excluded entry creation workflow in the project panel
([10890](https://github.com/zed-industries/zed/issues/10890))
Using the file system as a database seems like it's easy, but it's
actually a real pain. I'd like to use LMDB to store the prompts locally
so we have more control. We can always add an export option, but I want
the source of truth to be somewhere other than the file system.
So far, I have a PromptStore which is global to the application and can
be initialized on startup. Then there's a `PromptLibrary` which is
intended to be the root of a new kind of Zed window. I haven't actually
seen pixels yet, but I've sketched out the basics needed to create a new
prompt, save, etc.
Still lots to figure out but the foundations of being backed by a DB and
rendering in an independent window are in place.
/cc @iamnbutler @as-cii
Release Notes:
- N/A
---------
Co-authored-by: Antonio Scandurra <me@as-cii.com>
This PR fixes an issue in `rustdoc_to_markdown` with code blocks being
trimmed incorrectly.
We were erroneously popping from the current element stack even if we
didn't push an element onto the stack.
Added test coverage for this case as well, so we don't regress.
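The gist of the fix, in sketch form (illustrative names, not the actual crate
internals):
```rust
// Only pop the element stack when we actually pushed onto it, so unhandled
// tags can't unbalance the stack and trim surrounding code blocks.
struct Walker {
    element_stack: Vec<String>,
}

impl Walker {
    fn visit_element(&mut self, tag: &str, children: &[&str]) {
        let pushed = self.should_track(tag);
        if pushed {
            self.element_stack.push(tag.to_string());
        }

        for child in children {
            // ...recurse into child nodes...
            let _ = child;
        }

        // Previously this pop happened unconditionally, which removed an
        // ancestor's entry whenever `pushed` was false.
        if pushed {
            self.element_stack.pop();
        }
    }

    fn should_track(&self, tag: &str) -> bool {
        matches!(tag, "pre" | "code" | "div")
    }
}
```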
Release Notes:
- N/A
This is still pretty raw, so I'd like to hold off on shipping it to all
users.
Release Notes:
- Hide the prompt library for non-staff until it is in a more complete
state.
This PR adds a `/rustdoc` slash command for retrieving and inserting
rustdoc docs into the Assistant.
Right now the command accepts the crate name as an argument and will
return the top-level docs from `docs.rs`.
Release Notes:
- N/A
- Added support for the XDG trash when deleting files on Linux
- Moved the `ashpd` dependency to the workspace top level to use it in
both `fs` and `gpui`
If I need to add tests, or change anything, please let me know. I
tested locally by creating and deleting a file and confirming it showed
up in my trash can, but that's probably a less-than-ideal method of
confirming correct behavior.
Also, I could remove the delete-directory function for Linux and change
the one configured for macOS to compile for both macOS and Linux (they
are the same; only the version of the function they call differs).
Release Notes:
- N/A
This PR adds a new crate for converting rustdoc output to Markdown.
We're leveraging Servo's `html5ever` to parse the HTML content, and
then walking the DOM nodes to convert the document to a Markdown string.
The Markdown output will continue to be refined, but it's in a place
where it should be reasonable.
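At a high level the pipeline looks like this; a simplified sketch assuming the
companion `markup5ever_rcdom` DOM (the real converter dispatches to per-tag
handling rather than a single match):
```rust
use html5ever::parse_document;
use html5ever::tendril::TendrilSink;
use markup5ever_rcdom::{Handle, NodeData, RcDom};

fn html_to_markdown(html: &str) -> String {
    // Parse the HTML into an in-memory DOM.
    let dom = parse_document(RcDom::default(), Default::default())
        .from_utf8()
        .read_from(&mut html.as_bytes())
        .unwrap();

    let mut out = String::new();
    walk(&dom.document, &mut out);
    out
}

fn walk(node: &Handle, out: &mut String) {
    match &node.data {
        NodeData::Text { contents } => out.push_str(&contents.borrow()),
        NodeData::Element { name, .. } => {
            // A fuller converter dispatches on the tag name here
            // (e.g. `#` for <h1>, backticks for <code>, etc.).
            if name.local.as_ref() == "p" {
                out.push_str("\n\n");
            }
        }
        _ => {}
    }
    for child in node.children.borrow().iter() {
        walk(child, out);
    }
}
```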
Release Notes:
- N/A
Release Notes:
- N/A
Picks up https://github.com/kvark/blade/pull/118
Fixes #10351
Seeing that Zed with the Blade repository loaded is consuming 260 MB of RAM.
We can tune this to be lower, but ultimately it doesn't matter: this
memory isn't wasted, it's just pools for memory and descriptors, which
may be used by bigger and more complex views of Zed.
- Confirming a completion now runs the command immediately
- Hitting `enter` on a line with a command now runs it
- The output of commands gets folded away and replaced with a custom
placeholder
- Eliminated ambient context
<img width="1588" alt="image"
src="https://github.com/zed-industries/zed/assets/482957/b1927a45-52d6-4634-acc9-2ee539c1d89a">
Release Notes:
- N/A
---------
Co-authored-by: Nathan Sobo <nathan@zed.dev>
Fixes https://github.com/zed-industries/zed/issues/9575
Fixes https://github.com/zed-industries/zed/issues/4294
### Problem
When a large git repository's `.git` folder changes (due to a `git
commit`, `git reset` etc), Zed needs to recompute the git status for
every file in that git repository. Part of computing the git status is
the *unstaged* part - the comparison between the content of the file and
the version in the git index. In a large git repository like `chromium`
or `linux`, this is inherently pretty slow.
Previously, we performed this git status all at once, and held a lock on
our `BackgroundScanner`'s state for the entire time. On my laptop, in
the `linux` repo, this would often take around 13 seconds.
When opening a file, Zed always refreshes the metadata for that file in
its in-memory snapshot of worktree. This is normally very fast, but if
another task is holding a lock on the `BackgroundScanner`, it blocks.
### Solution
I've restructured how Zed handles Git statuses, so that when a git
repository is updated, we recompute files' git statuses in fixed-sized
batches. In between these batches, the `BackgroundScanner` is free to
perform other work, so that file operations coming from the main thread
will still be responsive.
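Schematically, the batching looks like this (an illustrative sketch, not the
actual `BackgroundScanner` code; the batch size, types, and helper are made
up):
```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Arc, Mutex};

const GIT_STATUS_BATCH_SIZE: usize = 100; // hypothetical batch size

fn refresh_git_statuses(
    paths: Vec<PathBuf>,
    state: Arc<Mutex<HashMap<PathBuf, String>>>,
) {
    for batch in paths.chunks(GIT_STATUS_BATCH_SIZE) {
        // Compute statuses for this batch without holding the lock.
        let statuses: Vec<_> = batch
            .iter()
            .map(|path| (path.clone(), compute_unstaged_status(path)))
            .collect();

        // Hold the lock only long enough to write one batch of results,
        // then release it so file operations from the main thread can run.
        let mut state = state.lock().unwrap();
        state.extend(statuses);
    }
}

fn compute_unstaged_status(_path: &Path) -> String {
    // Placeholder for the comparison against the git index.
    "unmodified".to_string()
}
```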
Release Notes:
- Fixed a bug that caused long delays in opening files right after
performing a commit in very large git repositories.