Since we abuse TOML table syntax to define function aliases, an identical
function alias can be found more than once in the merged config. The merged
config doesn't preserve the definition order, so we need to load the aliases
table per layer.
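For illustration, a minimal sketch of the per-layer loading, using simplified
stand-in types rather than jj's real config code:

```rust
use std::collections::HashMap;

// Simplified stand-in for one config layer (e.g. default, user, or repo config);
// the real jj types differ.
struct ConfigLayer {
    revset_aliases: HashMap<String, String>,
}

// Collect aliases layer by layer, in precedence order, so a later layer reliably
// overrides an identical alias from an earlier one. Reading the merged config
// instead would lose that ordering.
fn load_aliases(layers: &[ConfigLayer]) -> HashMap<String, String> {
    let mut aliases = HashMap::new();
    for layer in layers {
        for (name, definition) in &layer.revset_aliases {
            aliases.insert(name.clone(), definition.clone());
        }
    }
    aliases
}
```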
`jj sparse` is a bit different from other commands in that its `jj
sparse --list` is practically a separate command. Let's make it an
actual subcommand for consistency, and so we can more cleanly add
additional flags for `jj sparse list` in the future. I moved all the
other arguments to `jj sparse set`. I'm not sure if `jj sparse set
--reset` would have been better as `jj sparse reset`, but it
technically just updates the sparse patterns, like the other
arguments (`--clear`, `--add`, `--remove`).
It's easier to follow the test if we try to resolve commits from the
`jj log` output just above. That's what we did until I thoughtlessly
changed it in fcda05c69b.
When I added `revsets.short-prefixes` in 20ef171d7a, I didn't notice
that I made the test of commit prefixes among hidden commits
ineffective. This restores the test's coverage by disabling short
prefixes in the test.
Otherwise, "jj init --git-repo ." would create extra table files per commit,
and merge them.
I considered adding an explicit GitBackend method to be called from
`git::import_refs()`, but the call order matters. The method should be invoked
before calling `store.get_commit(..)` or `mut_repo.add_head(..)`. Since commits
are likely to be loaded starting from the head, we can instead make
`read_commit()` import ancestor metadata as well.
Alternatively, we could make a Git commit hidden until it's inserted into
the extra table. It's a rather big change, and I wouldn't want to do that
without thinking it through more thoroughly.
I'm going to extract a helper function that converts a `git2::Commit` to a
`backend::Commit` struct; the commit id can also be obtained from the
`git2::Commit` object.
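As a rough sketch of what that helper could look like (using a trimmed-down
stand-in struct; the real `backend::Commit` has more fields such as the change
id, tree, and signatures):

```rust
// Sketch only: a simplified stand-in for jj's backend::Commit.
struct SimpleCommit {
    id: Vec<u8>,
    parents: Vec<Vec<u8>>,
    description: String,
}

fn commit_from_git(git_commit: &git2::Commit) -> SimpleCommit {
    SimpleCommit {
        // The commit id is available directly on the git2::Commit object, so it
        // doesn't need to be passed in separately.
        id: git_commit.id().as_bytes().to_vec(),
        parents: git_commit
            .parent_ids()
            .map(|oid| oid.as_bytes().to_vec())
            .collect(),
        description: git_commit.message().unwrap_or("").to_owned(),
    }
}
```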
My first attempt was to fix up the corrupted index when merging, but that
turned out not to be easy because the self side may contain corrupted data.
It's also possible that two concurrent commit operations have exactly the same
view state (because the change id isn't hashed into the commit id), and only
the table heads diverge.
#924
GitBackend will reuse this lock to avoid assigning multiple change ids to a
single commit. We could add a separate lock file that covers the section
from `get_head()` to `save_table()`, but I think reusing the table lock is good
enough.
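The invariant the lock protects can be illustrated with an in-memory stand-in
(jj's real code uses the table file lock, not a mutex, and different types):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Stand-in table mapping commit id -> change id.
struct ChangeIdTable {
    entries: Mutex<HashMap<Vec<u8>, Vec<u8>>>,
}

impl ChangeIdTable {
    // The lookup and the insert happen under one lock, which is the equivalent
    // of holding the lock from get_head() through save_table(): two concurrent
    // writers can't both assign a change id to the same commit.
    fn get_or_assign(&self, commit_id: &[u8], new_change_id: Vec<u8>) -> Vec<u8> {
        let mut entries = self.entries.lock().unwrap();
        entries
            .entry(commit_id.to_vec())
            .or_insert(new_change_id)
            .clone()
    }
}
```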
This bug concerns the way `import_refs`, which gets called by `fetch`, computes
the heads that should be visible after the import.
Previously, the list of such heads was computed *before* local branches were
updated based on changes to the remote branches. So, commits that should have
been abandoned based on this update of the local branches weren't properly
abandoned.
Now, `import_refs` tracks, in a mapping keyed by the ref, the heads that need
to be visible because of that ref. If the ref moves or is deleted, the
corresponding heads are updated.
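A sketch of that bookkeeping, with simplified stand-in types (the real code
uses jj's own ref and commit id types):

```rust
use std::collections::HashMap;

type RefName = String;
type CommitId = Vec<u8>;

// Heads that must stay visible, keyed by the ref that requires them.
fn update_ref(
    heads_per_ref: &mut HashMap<RefName, Vec<CommitId>>,
    name: &str,
    new_target: Option<CommitId>,
) {
    match new_target {
        // The ref moved: the heads it pinned are replaced by the new target.
        Some(id) => {
            heads_per_ref.insert(name.to_owned(), vec![id]);
        }
        // The ref was deleted: it no longer keeps any head visible, so commits
        // reachable only from it can be abandoned.
        None => {
            heads_per_ref.remove(name);
        }
    }
}
```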
Fixes #864
Before this, it was difficult to run an integration test after adding any
printf-style debugging statements to jj (e.g. `err!`, `eprintln!`,
`println!`), since `jj_cmd_success` fails if `jj` outputs anything to stderr,
while `jj_cmd_failure` fails if stdout is not empty.
This adds a `TestEnvironment::debug_allow_stderr` variable that lifts this
restriction for `jj_cmd_success` and instead makes it print anything `jj`
wrote to stderr. You can set it directly or by running the test with the
`DEBUG_ALLOW_STDERR` environment variable set. You can then add `err!`
anywhere.
You do need to run the test in a somewhat special way, as described in the
docstring.
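For example, something along these lines (the exact way to set the variable is
described in the docstring; the direct assignment here is just illustrative):

```rust
#[test]
fn test_something_with_printf_debugging() {
    // Assumes jj's usual test helper import, e.g. `use common::TestEnvironment;`.
    let mut test_env = TestEnvironment::default();
    // Same effect as running the test with DEBUG_ALLOW_STDERR set.
    test_env.debug_allow_stderr = true;
    // jj_cmd_success no longer fails if jj writes to stderr; it prints the
    // stderr output instead, so stray err!/eprintln! calls don't break the test.
    let stdout = test_env.jj_cmd_success(test_env.env_root(), &["--version"]);
    assert!(!stdout.is_empty());
}
```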
Now that we return the written commit from `write_commit()`, let's
make the timestamps match what was actually written, accounting for
the whole-second precision and the adjustment we do to avoid
collisions.
The internal backend at Google doesn't let you write any value you
want in the committer field. The `Store` type still caches the
value it attempted to write, which gets a little weird when the
written value is not what we tried to write. We should use the value
the backend actually wrote. However, we don't know if the backend
changed anything without reading the value back, which is often
wasteful. This commit changes the API to return the written value.
I only changed the signature of `write_commit()` for now. Maybe we
should make a similar change to `write_tree()`.
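Roughly, the change looks like this (stub types stand in for jj's real ones to
keep the sketch self-contained, and the exact signature may differ in detail):

```rust
// Stubs standing in for jj's real types.
struct Commit;
struct CommitId;
type BackendResult<T> = Result<T, String>;

pub trait Backend {
    // Old shape (roughly): only the new id came back, so callers had to assume
    // the commit was stored exactly as passed in.
    // fn write_commit(&self, contents: &Commit) -> BackendResult<CommitId>;

    // New shape (roughly): the backend also returns the commit as it was
    // actually written, so the Store can cache the real values instead of the
    // attempted ones.
    fn write_commit(&self, contents: Commit) -> BackendResult<(CommitId, Commit)>;
}
```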
This has several advantages:
* Makes it possible to downcast to non-Git custom backends (might be
useful at Google, but we haven't needed it yet)
* Lets us access more specific functionality on the `GitBackend`,
making it possible to access the `git2::Repository` without
creating a copy of it.
* Removes the dependency on Git from the backend
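For instance, a Git-specific caller can now attempt a downcast along these
lines (a hypothetical sketch; the accessor that exposes the backend as
`&dyn Any` and the stub type are assumptions, not the real API):

```rust
use std::any::Any;

// Stub standing in for jj's real GitBackend, just to keep the sketch self-contained.
struct GitBackend;

// Given the backend as `&dyn Any`, Git-specific code can recover the concrete
// type, while custom non-Git backends simply yield None here.
fn as_git_backend(backend: &dyn Any) -> Option<&GitBackend> {
    backend.downcast_ref::<GitBackend>()
}
```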
This adds a config called `revsets.short-prefixes`, which lets the
user specify a revset in which to disambiguate otherwise ambiguous
change/commit ids. It defaults to the value of `revsets.log`.
I made it so you can disable the feature by setting
`revsets.short-prefixes = ""`. I don't like that the default value
(using `revsets.log`) cannot be configured explicitly by the
user. That will be addressed if we decide to merge the `[revsets]` and
`[revset-aliases]` sections some day.
In large repos, the unique prefixes can get somewhat long (~6 hex
digits seems typical in the Linux repo), which makes them less
convenient to enter manually on the CLI. The user typically cares most about
a small set of commits, so it would be nice to give shorter unique ids
to those. That's what Mercurial enables with its
`experimental.revisions.disambiguatewithin` config. This commit
provides an implementation of that feature in `IdPrefixContext`.
In very large repos, it can also be slow to calculate the unique
prefixes, especially if it involves a request to a server. This
feature becomes much more important in such repos.
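Conceptually, the disambiguation boils down to something like the following (a
sketch of the idea, not the `IdPrefixContext` API):

```rust
// Find the shortest prefix of `id` (a hex id string) that no other id in the
// disambiguation set shares. Ids outside that set still need a prefix that is
// unique repo-wide.
fn shortest_unique_prefix_len(id: &str, others_in_set: &[&str]) -> usize {
    (1..=id.len())
        .find(|&len| !others_in_set.iter().any(|other| other.starts_with(&id[..len])))
        .unwrap_or(id.len())
}
```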