This allows us to drop the context *after* all futures have run to
completion, which is crucial: otherwise we would never drop entities
and/or flush effects.
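As a minimal, generic sketch of the ordering this implies (the context type here is a stand-in for illustration, not the actual gpui API):

```rust
use futures::join;

struct TestContext; // illustrative stand-in for the real app context

impl Drop for TestContext {
    fn drop(&mut self) {
        // In the real code, dropping the context is what finally drops
        // entities and flushes any pending effects.
        println!("context dropped: entities released, effects flushed");
    }
}

async fn run_test() {
    let cx = TestContext;

    let task_a = async { /* simulated operation */ };
    let task_b = async { /* simulated operation */ };

    // Run all futures to completion first...
    join!(task_a, task_b);

    // ...and only drop the context afterwards.
    drop(cx);
}
```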
However, the randomized integration test is still failing:
```
ITERATIONS=100000 SEED=3027 OPERATIONS=200 cargo test --release test_random --package=zed-server -- --nocapture
```
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Instead, create an empty worktree on guests when a worktree is first *registered*, then update it via an initial UpdateWorktree message.
This prevents the host from referencing, in definition RPC responses, a worktree that the guest hasn't yet observed. We could have waited until the entire worktree was shared, but that could take a long time, so instead we create an empty worktree on guests and proceed from there.
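Roughly, the guest-side flow looks like the following sketch (the message and struct names are illustrative stand-ins, not the actual RPC types):

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the real worktree and RPC message types.
#[derive(Default)]
struct Worktree {
    entries: HashMap<u64, String>, // entry id -> path
}

struct RegisterWorktree { worktree_id: u64 }
struct UpdateWorktree { worktree_id: u64, inserted: Vec<(u64, String)> }

#[derive(Default)]
struct Guest {
    worktrees: HashMap<u64, Worktree>,
}

impl Guest {
    // As soon as the worktree is *registered*, create an empty placeholder so
    // any later RPC response can reference it without racing the full share.
    fn handle_register(&mut self, msg: RegisterWorktree) {
        self.worktrees.entry(msg.worktree_id).or_default();
    }

    // The initial (and subsequent) UpdateWorktree messages then fill it in.
    fn handle_update(&mut self, msg: UpdateWorktree) {
        if let Some(worktree) = self.worktrees.get_mut(&msg.worktree_id) {
            worktree.entries.extend(msg.inserted);
        }
    }
}
```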
We still have randomized test failures as of this commit:
```
SEED=9519 MAX_PEERS=2 ITERATIONS=10000 OPERATIONS=7 ct -p zed-server test_random_collaboration
```
Co-Authored-By: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Also, add completion requests to the randomized collaboration integration test,
to demonstrate that this is valid.
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
Some types of messages, namely those that entail state updates on the
host, should be processed in the order in which they were sent. Other
types of messages should not block the processing of subsequent messages.
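A hedged sketch of that split on the receiving side, using tokio channels for illustration (the message types and the `is_ordered` predicate are assumptions, not the actual zed-server API):

```rust
use tokio::sync::mpsc;
use tokio::task;

#[derive(Debug)]
enum Message {
    // Messages that mutate host state and must be applied in send order.
    UpdateBuffer { text: String },
    // Messages that can be handled concurrently without blocking others.
    GetCompletions { position: usize },
}

fn is_ordered(message: &Message) -> bool {
    matches!(message, Message::UpdateBuffer { .. })
}

async fn dispatch(mut incoming: mpsc::Receiver<Message>) {
    // State-updating messages funnel through one sequential queue...
    let (ordered_tx, mut ordered_rx) = mpsc::channel::<Message>(64);
    task::spawn(async move {
        while let Some(message) = ordered_rx.recv().await {
            // ...so they are applied in exactly the order they were sent.
            println!("applying in order: {message:?}");
        }
    });

    while let Some(message) = incoming.recv().await {
        if is_ordered(&message) {
            ordered_tx.send(message).await.ok();
        } else {
            // Everything else gets its own task and never blocks the queue.
            task::spawn(async move {
                println!("handling concurrently: {message:?}");
            });
        }
    }
}
```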
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
* On the server, spawn a separate task for each incoming message
* In the peer, eliminate the barrier that was used to enforce ordering
of responses with respect to other incoming messages
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Using a bounded channel may have blocked the collaboration server
from making progress handling RPC traffic.
There's no need to apply backpressure to calling code within the
same process - suspending a task that is attempting to call `send` has
an even greater memory cost than just buffering a protobuf message.
We do still want a bounded channel for incoming messages, so that
we apply backpressure to noisy peers - blocking their writes rather
than allowing them to buffer arbitrarily many messages in our server.
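A small sketch of this split using futures channels (the type names and buffer capacity are illustrative):

```rust
use futures::channel::mpsc;

// Stand-in for a serialized protobuf message.
struct Envelope {
    payload: Vec<u8>,
}

struct ConnectionChannels {
    // Outgoing: unbounded. Callers in the same process never block on `send`;
    // buffering an envelope is cheaper than suspending the sending task.
    outgoing_tx: mpsc::UnboundedSender<Envelope>,
    // Incoming: bounded. A noisy peer ends up blocked on its own writes
    // instead of buffering arbitrarily many messages in our server.
    incoming_rx: mpsc::Receiver<Envelope>,
}

fn build_channels() -> (
    ConnectionChannels,
    mpsc::UnboundedReceiver<Envelope>,
    mpsc::Sender<Envelope>,
) {
    let (outgoing_tx, outgoing_rx) = mpsc::unbounded();
    let (incoming_tx, incoming_rx) = mpsc::channel(256); // illustrative capacity
    (
        ConnectionChannels { outgoing_tx, incoming_rx },
        outgoing_rx,
        incoming_tx,
    )
}
```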
Co-Authored-By: Antonio Scandurra <me@as-cii.com>
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
We hold these locks for a short amount of time anyway, and using an
async lock could cause parallel sends to happen in an order different
from the order in which `send`/`request` was called.
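For illustration, a sketch of the synchronous-lock approach (the queue type here is hypothetical):

```rust
use std::collections::VecDeque;
use std::sync::Mutex;

struct OutgoingQueue {
    messages: Mutex<VecDeque<Vec<u8>>>,
}

impl OutgoingQueue {
    // The lock is held only long enough to push one message, so a plain
    // synchronous mutex is fine. Because this never awaits, the message is
    // enqueued before `send` returns, and messages land in the queue in the
    // same order in which `send` was called.
    fn send(&self, message: Vec<u8>) {
        self.messages.lock().unwrap().push_back(message);
    }
}
```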
Co-Authored-By: Nathan Sobo <nathan@zed.dev>
The main reason for this is that we need to include information about
a buffer's UndoMap in its protobuf representation, but it's not
straightforward to incorporate this information correctly into the
current protobuf representation.
If we want to continue reusing `Buffer::apply_remote_edit` for
incorporating the historical operations, we need to either make
that method capable of incorporating already-undone edits, or
serialize the UndoMap into undo *operations*, so that we can apply
these undo operations after the fact when deserializing. But this is
not trivial, because an UndoOperation requires information about
the full offset ranges that were undone.
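For reference, a rough sketch of the shape of the data involved, based only on the description above (these are illustrative types, not the actual zed types):

```rust
use std::collections::HashMap;
use std::ops::Range;

// A replica-scoped id for an edit operation.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct OperationId {
    replica_id: u16,
    local_timestamp: u32,
}

// The UndoMap, roughly: how many times each edit has been undone.
// Serializing this directly means the deserializer must be able to mark
// edits as already undone rather than replaying separate undo operations.
struct UndoMap(HashMap<OperationId, u32>);

// The alternative: serialize undo *operations* to apply after the fact.
// This is the non-trivial part, because each one needs the full offset
// ranges that were undone, which the UndoMap alone does not record.
struct UndoOperation {
    undone_edit: OperationId,
    ranges: Vec<Range<usize>>,
}
```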