About salsa
Salsa is a Rust framework for writing incremental, on-demand programs -- that is, programs that want to adapt to changes in their inputs and continuously produce output that is up-to-date. Salsa is based on the incremental recompilation techniques that we built for rustc, and many (but not all) of its users are building compilers or other similar tooling.
If you'd like to learn more about Salsa, you can check out the Hello World example in the repository, or watch some of our videos.
If you'd like to chat about Salsa, or you think you might like to contribute, please jump on to our Zulip instance at salsa.zulipchat.com.
How to use Salsa
Common patterns
This section documents patterns for using Salsa.
Selection
The "selection" (or "firewall") pattern is when you have a query Qsel that reads from some other query Qbase and extracts some small bit of information from Qbase that it returns. In particular, Qsel does not combine values from other queries. In some sense, then, Qsel is redundant -- you could have just extracted the information from Qbase yourself, and done without the salsa machinery. But Qsel serves a role in that it limits the amount of re-execution that is required when Qbase changes.
Example: the base query
For example, imagine that you have a query `parse` that parses the input text of a request and returns a `ParsedResult`, which contains a header and a body:
#[derive(Clone, Debug, PartialEq, Eq)]
struct ParsedResult {
    header: Vec<ParsedHeader>,
    body: String,
}

#[derive(Clone, Debug, PartialEq, Eq)]
struct ParsedHeader {
    key: String,
    value: String,
}
#[salsa::query_group(Request)]
trait RequestParser {
    /// The base text of the request.
    #[salsa::input]
    fn request_text(&self) -> String;

    /// The parsed form of the request.
    fn parse(&self) -> ParsedResult;
}
Example: a selecting query
And now you have a number of derived queries that only look at the header. For example, one might extract the "content-type" header:
#[salsa::query_group(Request)]
trait RequestUtil: RequestParser {
    fn content_type(&self) -> Option<String>;
}

fn content_type(db: &dyn RequestUtil) -> Option<String> {
    db.parse()
        .header
        .iter()
        .find(|header| header.key == "content-type")
        .map(|header| header.value.clone())
}
Why prefer a selecting query?
This `content_type` query is an instance of the selection pattern. It only "selects" a small bit of information from the `ParsedResult`. You might not have made it a query at all, but instead made it a method on `ParsedResult`.
But using a query for `content_type` has an advantage: now if there are downstream queries that only depend on the `content_type` (or perhaps on other headers extracted via a similar pattern), those queries will not have to be re-executed when the request changes unless the content-type header changes. Consider the dependency graph:
request_text --> parse --> content_type --> (other queries)
When the `request_text` changes, we are always going to have to re-execute `parse`. If that produces a new parsed result, we are also going to re-execute `content_type`. But if the result of `content_type` has not changed, then we will not re-execute the other queries.
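The effect can be seen with an ordinary function standing in for the query chain (no salsa involved): the body of the request changes, but the selected value does not, so a memoizing system that compares old and new values could skip all downstream work:

```rust
// Hand-rolled illustration of the firewall effect (not salsa itself):
// downstream work needs to re-run only when the *selected* value changes.
// The header format here is invented for illustration.
fn content_type(request_text: &str) -> Option<String> {
    request_text
        .lines()
        .take_while(|l| !l.is_empty()) // headers end at the blank line
        .find_map(|l| l.strip_prefix("content-type:"))
        .map(|v| v.trim().to_string())
}

fn main() {
    let old = content_type("content-type: json\n\nold body");
    let new = content_type("content-type: json\n\nnew body");
    // The body changed, but the selected value did not, so a salsa-like
    // system could reuse memoized results of queries downstream of it.
    let downstream_must_rerun = old != new;
    println!("re-run downstream? {}", downstream_must_rerun);
}
```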
More levels of selection
In fact, in our example we might consider introducing another level of selection.
Instead of having `content_type` directly access the results of `parse`, it might be better to insert a selecting query that just extracts the header:
#[salsa::query_group(Request)]
trait RequestUtil: RequestParser {
    fn header(&self) -> Vec<ParsedHeader>;
    fn content_type(&self) -> Option<String>;
}

fn header(db: &dyn RequestUtil) -> Vec<ParsedHeader> {
    db.parse().header
}

fn content_type(db: &dyn RequestUtil) -> Option<String> {
    db.header()
        .iter()
        .find(|header| header.key == "content-type")
        .map(|header| header.value.clone())
}
This will result in a dependency graph like so:
request_text --> parse --> header --> content_type --> (other queries)
The advantage of this is that changes that only affect the "body", or that only touch other parts of the request, will not require us to re-execute `content_type` at all. This would be particularly valuable if there are a lot of dependent headers.
A note on cloning and efficiency
In this example, we used common Rust types like Vec
and String
,
and we cloned them quite frequently. This will work just fine in Salsa,
but it may not be the most efficient choice. This is because each clone
is going to produce a deep copy of the result. As a simple fix, you
might convert your data structures to use Arc
(e.g., Arc<Vec<ParsedHeader>>
),
which makes cloning cheap.
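A quick demonstration of why `Arc` helps: cloning an `Arc` only increments a reference count rather than copying the underlying data.

```rust
use std::sync::Arc;

#[derive(Clone, Debug, PartialEq, Eq)]
struct ParsedHeader { key: String, value: String }

fn main() {
    let headers: Arc<Vec<ParsedHeader>> = Arc::new(vec![ParsedHeader {
        key: "content-type".to_string(),
        value: "text/plain".to_string(),
    }]);
    // Cloning an Arc only bumps a reference count; no deep copy is made.
    let cheap = Arc::clone(&headers);
    // Both handles point at the same allocation.
    assert!(Arc::ptr_eq(&headers, &cheap));
    println!("both handles share one allocation");
}
```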
On-Demand (Lazy) Inputs
Salsa input queries work best if you can easily provide all of the inputs upfront. However, sometimes the set of inputs is not known beforehand.
A typical example is reading files from disk. While it is possible to eagerly scan a particular directory and create an in-memory file tree in a salsa input query, a more straightforward approach is to read the files lazily. That is, when someone requests the text of a file for the first time:
- Read the file from disk and cache it.
- Set up a file-system watcher for this path.
- Invalidate the cached file once the watcher sends a change notification.
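These three steps can be sketched without any salsa machinery; here a `HashMap` stands in for the disk, and `did_change_file` plays the role of the watcher's change notification (all names are illustrative):

```rust
use std::collections::HashMap;

// A salsa-free sketch of the three steps above: read lazily, cache,
// and invalidate on change notifications. `disk` stands in for the
// real file system; a real implementation would also register a watcher.
struct Vfs {
    disk: HashMap<String, String>,
    cache: HashMap<String, String>,
    disk_reads: usize,
}

impl Vfs {
    fn read(&mut self, path: &str) -> String {
        if let Some(text) = self.cache.get(path) {
            return text.clone(); // cache hit: no disk access
        }
        self.disk_reads += 1; // cache miss: hit the "disk"
        let text = self.disk.get(path).cloned().unwrap_or_default();
        self.cache.insert(path.to_string(), text.clone());
        text
    }

    fn did_change_file(&mut self, path: &str) {
        self.cache.remove(path); // invalidate; the next read goes to disk
    }
}

fn main() {
    let mut vfs = Vfs { disk: HashMap::new(), cache: HashMap::new(), disk_reads: 0 };
    vfs.disk.insert("a.txt".to_string(), "v1".to_string());
    assert_eq!(vfs.read("a.txt"), "v1");
    assert_eq!(vfs.read("a.txt"), "v1"); // served from cache
    assert_eq!(vfs.disk_reads, 1);
    vfs.disk.insert("a.txt".to_string(), "v2".to_string());
    vfs.did_change_file("a.txt"); // watcher notification
    assert_eq!(vfs.read("a.txt"), "v2");
    assert_eq!(vfs.disk_reads, 2);
    println!("lazy read + invalidation works");
}
```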
This is possible to achieve in salsa using a derived query together with `report_synthetic_read` and `invalidate`.
The setup looks roughly like this:
#[salsa::query_group(VfsDatabaseStorage)]
trait VfsDatabase: salsa::Database + FileWatcher {
    fn read(&self, path: PathBuf) -> String;
}

trait FileWatcher {
    fn watch(&self, path: &Path);
    fn did_change_file(&mut self, path: &Path);
}

fn read(db: &dyn VfsDatabase, path: PathBuf) -> String {
    db.salsa_runtime()
        .report_synthetic_read(salsa::Durability::LOW);
    db.watch(&path);
    std::fs::read_to_string(&path).unwrap_or_default()
}
#[salsa::database(VfsDatabaseStorage)]
struct MyDatabase { ... }

impl FileWatcher for MyDatabase {
    fn watch(&self, path: &Path) { ... }
    fn did_change_file(&mut self, path: &Path) {
        ReadQuery.in_db_mut(self).invalidate(path);
    }
}
- We declare the query as a derived query (which is the default).
- In the query implementation, we don't call any other queries; we just read the file directly from disk.
- Because the query doesn't read any inputs, it would be assigned a HIGH durability by default, which we override with `report_synthetic_read`.
- The result of the query is cached, and we must call `invalidate` to clear this cache.
A complete, runnable file-watching example can be found in this git repo along with a write-up that explains more about the code and what it is doing.
Cycle handling
By default, when Salsa detects a cycle in the computation graph, it will panic with a `salsa::Cycle` as the panic value. The `salsa::Cycle` structure describes the cycle, which can be useful for diagnosing what went wrong.
Recovering via fallback
Panicking when a cycle occurs is ok for situations where you believe a cycle is impossible. But sometimes cycles can result from illegal user input and cannot be statically prevented. In these cases, you might prefer to gracefully recover from a cycle rather than panicking the entire query. Salsa supports that with the idea of cycle recovery.
To use cycle recovery, you annotate potential participants in the cycle with a `#[salsa::recover(my_recover_fn)]` attribute. When a cycle occurs, if any participant P has recovery information, then no panic occurs. Instead, the execution of P is aborted and P will execute the recovery function to generate its result. Participants in the cycle that do not have recovery information continue executing as normal, using this recovery result.
The recovery function has a similar signature to a query function. It is given a reference to your database along with a `salsa::Cycle` describing the cycle that occurred; it returns the result of the query. Example:
fn my_recover_fn(
    db: &dyn MyDatabase,
    cycle: &salsa::Cycle,
) -> MyResultValue
The `db` and `cycle` arguments can be used to prepare a useful error message for your users.
Important: Although the recovery function is given a `db` handle, you should be careful to avoid creating a cycle from within recovery, or invoking queries that may be participating in the current cycle. Attempting to do so can result in inconsistent results.
Figuring out why recovery did not work
If a cycle occurs and some of the participant queries have `#[salsa::recover]` annotations and others do not, then the query will be treated as irrecoverable and will simply panic. You can use the `Cycle::unexpected_participants` method to figure out why recovery did not succeed and add the appropriate `#[salsa::recover]` annotations.
How Salsa works
Video available
To get the most complete introduction to Salsa's inner works, check out the "How Salsa Works" video. If you'd like a deeper dive, the "Salsa in more depth" video digs into the details of the incremental algorithm.
If you're in China, watch videos on "How Salsa Works", "Salsa In More Depth".
Key idea
The key idea of `salsa` is that you define your program as a set of queries. Every query is used like a function `K -> V` that maps from some key of type `K` to a value of type `V`. Queries come in two basic varieties:
- Inputs: the base inputs to your system. You can change these whenever you like.
- Functions: pure functions (no side effects) that transform your inputs into other values. The results of queries are memoized to avoid recomputing them a lot. When you make changes to the inputs, we'll figure out (fairly intelligently) when we can reuse these memoized values and when we have to recompute them.
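The memoization half of this can be illustrated with a toy cache (no salsa involved; all names are invented):

```rust
use std::collections::HashMap;

// A toy memoized "derived query": a pure function from key to value
// whose results are cached, as the Functions bullet above describes.
struct Memo {
    cache: HashMap<u32, u64>,
    executions: usize,
}

impl Memo {
    fn length_query(&mut self, key: u32) -> u64 {
        if let Some(&v) = self.cache.get(&key) {
            return v; // memoized: reuse the previous result
        }
        self.executions += 1;
        let v = (key as u64) * 2; // stand-in for real work
        self.cache.insert(key, v);
        v
    }
}

fn main() {
    let mut m = Memo { cache: HashMap::new(), executions: 0 };
    assert_eq!(m.length_query(21), 42);
    assert_eq!(m.length_query(21), 42); // second call: no re-execution
    assert_eq!(m.executions, 1);
    println!("executed once, answered twice");
}
```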
How to use Salsa in three easy steps
Using salsa is as easy as 1, 2, 3...
- Define one or more query groups that contain the inputs and queries you will need. We'll start with one such group, but later on you can use more than one to break up your system into components (or spread your code across crates).
- Define the query functions where appropriate.
- Define the database struct, which contains the storage for all of the inputs/queries you will be using and may also contain anything else that your code needs (e.g., configuration data).
To see an example of this in action, check out the `hello_world` example, which has a number of comments explaining how things work.
Digging into the plumbing
Check out the plumbing chapter to see a deeper explanation of the code that salsa generates and how it connects to the salsa library.
Videos
There are currently two videos about Salsa available:
- How Salsa Works, which gives a high-level introduction to the key concepts involved and shows how to use salsa;
- Salsa In More Depth, which digs into the incremental algorithm and explains -- at a high level -- how Salsa is implemented.
If you're in China, watch videos on How Salsa Works, Salsa In More Depth.
Plumbing
This chapter documents the code that salsa generates and its "inner workings". We refer to this as the "plumbing".
History
- 2020-07-05: Updated to take RFC 6 into account.
- 2020-06-24: Initial version.
Generated code
This page walks through the "Hello, World!" example and explains the code that it generates. Please take it with a grain of salt: while we make an effort to keep this documentation up to date, this sort of thing can fall out of date easily. See the page history below for major updates.
If you'd like to see for yourself, you can set the environment variable `SALSA_DUMP` to 1 while the procedural macro runs, and it will dump the full output to stdout. I recommend piping the output through rustfmt.
Sources
The main parts of the source that we are focused on are as follows.
Query group
#[salsa::query_group(HelloWorldStorage)]
trait HelloWorld {
    // For each query, we give the name, some input keys (here, we
    // have one key, `()`) and the output type `Arc<String>`. We can
    // use attributes to give other configuration:
    //
    // - `salsa::input` indicates that this is an "input" to the system,
    //   which must be explicitly set. The `salsa::query_group` macro
    //   will autogenerate a `set_input_string` method that can be
    //   used to set the input.
    #[salsa::input]
    fn input_string(&self, key: ()) -> Arc<String>;

    // This is a *derived query*, meaning its value is specified by
    // a function (see Step 2, below).
    fn length(&self, key: ()) -> usize;
}
Database
#[salsa::database(HelloWorldStorage)]
#[derive(Default)]
struct DatabaseStruct {
    storage: salsa::Storage<Self>,
}

impl salsa::Database for DatabaseStruct {}
Diagram
This diagram shows the items that get generated from the Hello World query group and database struct. On the query group side: the `HelloWorld` trait (with its `Database` and `HasQueryGroup<HelloWorldStorage>` supertraits), a blanket `impl<DB> HelloWorld for DB where DB: HasQueryGroup<HelloWorldStorage>`, the `HelloWorldStorage` group struct with its `QueryGroup` impl, and the `HelloWorldGroupStorage__` group storage struct; plus, for each query, a query struct such as `LengthQuery` with its `Query` and `QueryFunction` impls (backed by `DerivedStorage` from the salsa crate). On the database side: the database struct, the `__SalsaDatabaseStorage` struct, and impls of `HasQueryGroup<HelloWorldStorage>`, `DatabaseStorageTypes`, and `DatabaseOps` for the database struct. Each of these items is explained in the sections that follow.
Query groups and query group structs
When you define a query group trait:
#[salsa::query_group(HelloWorldStorage)]
trait HelloWorld {
    // For each query, we give the name, some input keys (here, we
    // have one key, `()`) and the output type `Arc<String>`. We can
    // use attributes to give other configuration:
    //
    // - `salsa::input` indicates that this is an "input" to the system,
    //   which must be explicitly set. The `salsa::query_group` macro
    //   will autogenerate a `set_input_string` method that can be
    //   used to set the input.
    #[salsa::input]
    fn input_string(&self, key: ()) -> Arc<String>;

    // This is a *derived query*, meaning its value is specified by
    // a function (see Step 2, below).
    fn length(&self, key: ()) -> usize;
}
the `salsa::query_group` macro generates a number of things, shown in the sample generated code below (details in the sections to come).

Note that there are a number of structs and types (e.g., the group descriptor and associated storage struct) that represent things which don't have "public" names. We currently generate mangled names with `__` appended, but those names are not meant to be exposed to the user (ideally we'd use hygiene to enforce this).
// First, a copy of the trait, though with extra supertraits and
// sometimes with some extra methods (e.g., `set_input_string`)
trait HelloWorld:
    salsa::Database +
    salsa::plumbing::HasQueryGroup<HelloWorldStorage>
{
    fn input_string(&self, key: ()) -> Arc<String>;
    fn set_input_string(&mut self, key: (), value: Arc<String>);
    fn length(&self, key: ()) -> usize;
}

// Next, the "query group struct", whose name was given by the
// user. This struct implements the `QueryGroup` trait which
// defines a few associated types common to the entire group.
struct HelloWorldStorage { }
impl salsa::plumbing::QueryGroup for HelloWorldStorage {
    type DynDb = dyn HelloWorld;
    type GroupStorage = HelloWorldGroupStorage__;
}

// Next, a blanket impl of the `HelloWorld` trait. This impl
// works for any database `DB` that implements the
// appropriate `HasQueryGroup`.
impl<DB> HelloWorld for DB
where
    DB: salsa::Database,
    DB: salsa::plumbing::HasQueryGroup<HelloWorldStorage>,
{
    ...
}

// Next, for each query, a "query struct" that represents it.
// The query struct has inherent methods like `in_db` and
// implements the `Query` trait, which defines various
// details about the query (e.g., its key, value, etc).
pub struct InputQuery { }
impl InputQuery { /* definition for `in_db`, etc */ }
impl salsa::Query for InputQuery {
    /* associated types */
}

// Same as above, but for the derived query `length`.
// For derived queries, we also implement `QueryFunction`
// which defines how to execute the query.
pub struct LengthQuery { }
impl salsa::Query for LengthQuery {
    ...
}
impl salsa::QueryFunction for LengthQuery {
    ...
}

// Finally, the group storage, which contains the actual
// hashmaps and other data used to implement the queries.
struct HelloWorldGroupStorage__ { .. }
The group struct and `QueryGroup` trait
The group struct is the only thing we generate whose name is known to the user. For a query group named `Foo`, it is conventionally called `FooStorage`, hence the name `HelloWorldStorage` in our example.
Despite the name "Storage", the struct itself has no fields. It exists only to implement the `QueryGroup` trait. This trait has a number of associated types that reference various bits of the query group, including the actual "group storage" struct:
struct HelloWorldStorage { }
impl salsa::plumbing::QueryGroup for HelloWorldStorage {
    type DynDb = dyn HelloWorld;
    type GroupStorage = HelloWorldGroupStorage__; // generated struct
}
We'll go into detail on these types below and the role they play, but one that we didn't mention yet is `GroupData`. That is a kind of hack used to manage send/sync around slots, and it gets covered in the section on slots.
Impl of the hello world trait
Ultimately, every salsa query group is going to be implemented by your final database type, which is not currently known to us (it is created by combining multiple salsa query groups). In fact, this salsa query group could be composed into multiple database types. However, we want to generate the impl of the query-group trait here in this crate, because this is the point where the trait definition is visible and known to us (otherwise, we'd have to duplicate the method definitions).
So what we do is define a different trait, called `plumbing::HasQueryGroup<G>`, that can be implemented by the database type. `HasQueryGroup` is generic over the query group struct. So then we can provide an impl of `HelloWorld` for any database type `DB` where `DB: HasQueryGroup<HelloWorldStorage>`. This `HasQueryGroup` defines a few methods that, given a `DB`, give access to the data for the query group and a few other things.
Thus we can generate an impl that looks like:
impl<DB> HelloWorld for DB
where
    DB: salsa::Database,
    DB: salsa::plumbing::HasQueryGroup<HelloWorldStorage>
{
    ...

    fn length(&self, key: ()) -> usize {
        <Self as salsa::plumbing::GetQueryTable<HelloWorldLength__>>::get_query_table(self).get(())
    }
}
You can see that the various methods just hook into generic functions in the `salsa::plumbing` module. These functions are generic over the query types (`HelloWorldLength__`) that will be described shortly. The details of the "query table" are covered in a future section, but in short this code pulls out the hashmap for storing the `length` results and invokes the generic salsa logic to check for a valid result, etc.
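The blanket-impl-through-a-`Has`-trait pattern is easier to see in miniature, stripped of all salsa machinery (every name here is illustrative):

```rust
// Group storage holding a single precomputed value.
struct HelloWorldGroupStorage { greeting: String }

// The `HasQueryGroup`-style trait: any database that can hand out this
// group's storage gets the group trait implemented for free below.
trait HasHelloWorldGroup {
    fn group_storage(&self) -> &HelloWorldGroupStorage;
}

trait HelloWorld {
    fn greeting(&self) -> String;
}

// Blanket impl: works for *any* DB that implements the Has trait,
// without knowing the concrete database type in advance.
impl<DB: HasHelloWorldGroup> HelloWorld for DB {
    fn greeting(&self) -> String {
        self.group_storage().greeting.clone()
    }
}

// The final database type, defined elsewhere, just supplies the storage.
struct MyDatabase { hello_world: HelloWorldGroupStorage }

impl HasHelloWorldGroup for MyDatabase {
    fn group_storage(&self) -> &HelloWorldGroupStorage {
        &self.hello_world
    }
}

fn main() {
    let db = MyDatabase {
        hello_world: HelloWorldGroupStorage { greeting: "hi".to_string() },
    };
    assert_eq!(db.greeting(), "hi");
    println!("blanket impl resolved through the Has trait");
}
```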
For each query, a query struct
As we referenced in the previous section, each query in the trait gets a struct that represents it. This struct is named after the query, converted into camel case and with the word `Query` appended. In typical Salsa workflows, these structs are not meant to be named or used, but in some cases it may be required. For the `length` query, e.g., this struct might look something like:
struct LengthQuery { }
The struct also implements the `plumbing::Query` trait, which defines a bunch of metadata about the query (and repeats, for convenience, some of the data about the group that the query is in):
impl salsa::Query for #qt
{
    type Key = (#(#keys),*);
    type Value = #value;
    type Storage = #storage;

    const QUERY_INDEX: u16 = #query_index;
    const QUERY_NAME: &'static str = #query_name;

    fn query_storage<'a>(
        group_storage: &'a <Self as salsa::QueryDb<'_>>::GroupStorage,
    ) -> &'a std::sync::Arc<Self::Storage> {
        &group_storage.#fn_name
    }

    fn query_storage_mut<'a>(
        group_storage: &'a <Self as salsa::QueryDb<'_>>::GroupStorage,
    ) -> &'a std::sync::Arc<Self::Storage> {
        &group_storage.#fn_name
    }
}
Depending on the kind of query, we may also generate other impls, such as an impl of `salsa::plumbing::QueryFunction`, which defines the methods for executing the body of a query. This impl would then include a call to the user's actual function.
impl salsa::plumbing::QueryFunction for #qt
{
    fn execute(db: &<Self as salsa::QueryDb<'_>>::DynDb, #key_pattern: <Self as salsa::Query>::Key)
        -> <Self as salsa::Query>::Value {
        #invoke(db, #(#key_names),*)
    }

    #recover
}
Group storage
The "group storage" is the actual struct that contains all the hashtables and so forth for each query. The types of these are ultimately defined by the `Storage` associated type for each query type. The struct is generic over the final database type:
struct HelloWorldGroupStorage__ {
    input: <InputQuery as Query>::Storage,
    length: <LengthQuery as Query>::Storage,
}
We also generate some inherent methods. First, a `new` method that takes the group index as a parameter and passes it along to each of the query storage `new` methods:
impl #group_storage {
    #trait_vis fn new(group_index: u16) -> Self {
        #group_storage {
            #(
                #queries_with_storage:
                std::sync::Arc::new(salsa::plumbing::QueryStorageOps::new(group_index)),
            )*
        }
    }
}
And then various methods that will dispatch from a `DatabaseKeyIndex` that corresponds to this query group into the appropriate query within the group. Each has a similar structure of matching on the query index and then delegating to some method defined by the query storage:
impl #group_storage {
    #trait_vis fn fmt_index(
        &self,
        db: &(#dyn_db + '_),
        input: salsa::DatabaseKeyIndex,
        fmt: &mut std::fmt::Formatter<'_>,
    ) -> std::fmt::Result {
        match input.query_index() {
            #fmt_ops
            i => panic!("salsa: impossible query index {}", i),
        }
    }

    #trait_vis fn maybe_changed_after(
        &self,
        db: &(#dyn_db + '_),
        input: salsa::DatabaseKeyIndex,
        revision: salsa::Revision,
    ) -> bool {
        match input.query_index() {
            #maybe_changed_ops
            i => panic!("salsa: impossible query index {}", i),
        }
    }

    #trait_vis fn cycle_recovery_strategy(
        &self,
        db: &(#dyn_db + '_),
        input: salsa::DatabaseKeyIndex,
    ) -> salsa::plumbing::CycleRecoveryStrategy {
        match input.query_index() {
            #cycle_recovery_strategy_ops
            i => panic!("salsa: impossible query index {}", i),
        }
    }

    #trait_vis fn for_each_query(
        &self,
        _runtime: &salsa::Runtime,
        mut op: &mut dyn FnMut(&dyn salsa::plumbing::QueryStorageMassOps),
    ) {
        #for_each_ops
    }
}
Database
Continuing our dissection, the other thing which a user must define is a database, which looks something like this:
#[salsa::database(HelloWorldStorage)]
#[derive(Default)]
struct DatabaseStruct {
    storage: salsa::Storage<Self>,
}

impl salsa::Database for DatabaseStruct {}
The `salsa::database` procedural macro takes a list of query group structs (like `HelloWorldStorage`) and generates the following items:
- a copy of the database struct it is applied to
- a struct `__SalsaDatabaseStorage` that contains all the storage structs for each query group. Note: these are the structs full of hashmaps etc. that are generated by the query group procedural macro, not the `HelloWorldStorage` struct itself.
- an impl of `HasQueryGroup<G>` for each query group `G`
- an impl of `salsa::plumbing::DatabaseStorageTypes` for the database struct
- an impl of `salsa::plumbing::DatabaseOps` for the database struct
Key constraint: we do not know the names of individual queries
There is one key constraint in the design here. None of this code knows the names of individual queries. It only knows the name of the query group storage struct. This means that we often delegate things to the group -- e.g., the database key is composed of group keys. This is similar to how none of the code in the query group knows the full set of query groups, and so it must use associated types from the `Database` trait whenever it needs to put something in a "global" context.
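The resulting two-level dispatch (the database matches on the group index, the group matches on the query index) looks roughly like this in miniature (indices and names invented for illustration):

```rust
// Miniature of the two-level dispatch: the database only knows group
// indices; each group only knows its own query indices.
struct DatabaseKeyIndex {
    group_index: u16,
    query_index: u16,
}

// What a group's generated dispatch method might do: resolve its own
// query indices to query-specific behavior (here, just a name).
fn group_fmt(query_index: u16) -> &'static str {
    match query_index {
        0 => "input_string",
        1 => "length",
        i => panic!("salsa: impossible query index {}", i),
    }
}

// What the database's generated dispatch method might do: match on the
// group index and delegate to the group, never naming individual queries.
fn database_fmt(key: DatabaseKeyIndex) -> &'static str {
    match key.group_index {
        0 => group_fmt(key.query_index), // delegate to the group
        i => panic!("salsa: invalid group index {}", i),
    }
}

fn main() {
    let key = DatabaseKeyIndex { group_index: 0, query_index: 1 };
    assert_eq!(database_fmt(key), "length");
    println!("dispatched through group 0, query 1");
}
```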
The database storage struct
The `__SalsaDatabaseStorage` struct concatenates all of the query group storage structs. In the hello world example, it looks something like:
struct __SalsaDatabaseStorage {
    hello_world: <HelloWorldStorage as salsa::plumbing::QueryGroup>::GroupStorage
}
We also generate a `Default` impl for `__SalsaDatabaseStorage`. It invokes a `new` method on each group storage with the unique index assigned to that group. This invokes the inherent `new` method generated by the `#[salsa::query_group]` macro.
The `HasQueryGroup` impl
The `HasQueryGroup` trait allows a given query group to access its definition within the greater database. The impl is generated here:
has_group_impls.extend(quote! {
impl salsa::plumbing::HasQueryGroup<#group_path> for #database_name {
fn group_storage(&self) -> &#group_storage {
&self.#db_storage_field.query_store().#group_name_snake
}
fn group_storage_mut(&mut self) -> (&#group_storage, &mut salsa::Runtime) {
let (query_store_mut, runtime) = self.#db_storage_field.query_store_mut();
(&query_store_mut.#group_name_snake, runtime)
}
}
});
The `HasQueryGroup` impl combines with the blanket impl from the `#[salsa::query_group]` macro so that the database can implement the query group trait (e.g., the `HelloWorld` trait) but without knowing all the names of the query methods and the like.
The `DatabaseStorageTypes` impl
Then there are a variety of other impls, like this one for `DatabaseStorageTypes`:
output.extend(quote! {
impl salsa::plumbing::DatabaseStorageTypes for #database_name {
type DatabaseStorage = __SalsaDatabaseStorage;
}
});
The `DatabaseOps` impl
Or this one for `DatabaseOps`, which defines the for-each method to invoke an operation on every kind of query in the database. It ultimately delegates to the `for_each` methods for the groups:
let mut fmt_ops = proc_macro2::TokenStream::new();
let mut maybe_changed_ops = proc_macro2::TokenStream::new();
let mut cycle_recovery_strategy_ops = proc_macro2::TokenStream::new();
let mut for_each_ops = proc_macro2::TokenStream::new();
for ((QueryGroup { group_path }, group_storage), group_index) in query_groups
.iter()
.zip(&query_group_storage_names)
.zip(0_u16..)
{
fmt_ops.extend(quote! {
#group_index => {
let storage: &#group_storage =
<Self as salsa::plumbing::HasQueryGroup<#group_path>>::group_storage(self);
storage.fmt_index(self, input, fmt)
}
});
maybe_changed_ops.extend(quote! {
#group_index => {
let storage: &#group_storage =
<Self as salsa::plumbing::HasQueryGroup<#group_path>>::group_storage(self);
storage.maybe_changed_after(self, input, revision)
}
});
cycle_recovery_strategy_ops.extend(quote! {
#group_index => {
let storage: &#group_storage =
<Self as salsa::plumbing::HasQueryGroup<#group_path>>::group_storage(self);
storage.cycle_recovery_strategy(self, input)
}
});
for_each_ops.extend(quote! {
let storage: &#group_storage =
<Self as salsa::plumbing::HasQueryGroup<#group_path>>::group_storage(self);
storage.for_each_query(runtime, &mut op);
});
}
output.extend(quote! {
impl salsa::plumbing::DatabaseOps for #database_name {
fn ops_database(&self) -> &dyn salsa::Database {
self
}
fn ops_salsa_runtime(&self) -> &salsa::Runtime {
self.#db_storage_field.salsa_runtime()
}
fn ops_salsa_runtime_mut(&mut self) -> &mut salsa::Runtime {
self.#db_storage_field.salsa_runtime_mut()
}
fn fmt_index(
&self,
input: salsa::DatabaseKeyIndex,
fmt: &mut std::fmt::Formatter<'_>,
) -> std::fmt::Result {
match input.group_index() {
#fmt_ops
i => panic!("salsa: invalid group index {}", i)
}
}
fn maybe_changed_after(
&self,
input: salsa::DatabaseKeyIndex,
revision: salsa::Revision
) -> bool {
match input.group_index() {
#maybe_changed_ops
i => panic!("salsa: invalid group index {}", i)
}
}
fn cycle_recovery_strategy(
&self,
input: salsa::DatabaseKeyIndex,
) -> salsa::plumbing::CycleRecoveryStrategy {
match input.group_index() {
#cycle_recovery_strategy_ops
i => panic!("salsa: invalid group index {}", i)
}
}
fn for_each_query(
&self,
mut op: &mut dyn FnMut(&dyn salsa::plumbing::QueryStorageMassOps),
) {
let runtime = salsa::Database::salsa_runtime(self);
#for_each_ops
}
}
});
Runtime
This section documents the contents of the salsa crate. The salsa crate contains code that interacts with the generated code to create the complete "salsa experience".
Major types
The crate has a few major types.
The `salsa::Storage` struct

The `salsa::Storage` struct is what users embed into their database. It consists of two main parts:

- The "query store", which is the generated storage struct.
- The `salsa::Runtime`.
The `salsa::Runtime` struct

The `salsa::Runtime` struct stores the data that is used to track which queries are being executed and to coordinate between them. The `Runtime` is embedded within the `salsa::Storage` struct.

Important: The `Runtime` does not store the actual data from the queries; that data lives alongside it in the `salsa::Storage` struct. This ensures that the `Runtime` type is not generic, which is needed to ensure dyn safety.
Threading
There is one `salsa::Runtime` for each active thread, and each of them has a unique `RuntimeId`. The `Runtime` state itself is divided into:

- `SharedState`, accessible from all runtimes;
- `LocalState`, accessible only from this runtime.
Query storage implementations and support code
For each kind of query (input, derived, interned, etc.) there is a corresponding "storage struct" that contains the code to implement it. For example, derived queries are implemented by the `DerivedStorage` struct found in the `salsa::derived` module.
Storage structs like `DerivedStorage` are generic over a query type `Q`, which corresponds to the query structs in the generated code. The query structs implement the `Query` trait, which gives basic info such as the key and value type of the query and its ability to recover from cycles. In some cases, the `Q` type is expected to implement additional traits: derived queries, for example, implement `QueryFunction`, which defines the code that will execute when the query is called.
The storage structs, in turn, implement key traits from the plumbing module. The most notable is `QueryStorageOps`, which defines the basic operations that can be done on a query.
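The shape of this arrangement (a `Query`-like trait plus a storage struct generic over it) can be sketched in a few lines; this is an illustration of the pattern, not salsa's actual API:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Minimal analogue of the `Query` trait: key/value types plus a way to
// execute the query (roughly what salsa splits into `QueryFunction`).
trait Query {
    type Key: Eq + Hash + Clone;
    type Value: Clone;
    fn execute(key: &Self::Key) -> Self::Value;
}

// A storage struct generic over the query type, as DerivedStorage is.
struct Storage<Q: Query> {
    memos: HashMap<Q::Key, Q::Value>,
}

impl<Q: Query> Storage<Q> {
    // Fetch a memoized value, executing the query on a cache miss.
    fn fetch(&mut self, key: Q::Key) -> Q::Value {
        self.memos
            .entry(key.clone())
            .or_insert_with(|| Q::execute(&key))
            .clone()
    }
}

// A "query struct" standing in for the generated LengthQuery.
struct LengthQuery;
impl Query for LengthQuery {
    type Key = String;
    type Value = usize;
    fn execute(key: &String) -> usize {
        key.len()
    }
}

fn main() {
    let mut storage = Storage::<LengthQuery> { memos: HashMap::new() };
    assert_eq!(storage.fetch("hello".to_string()), 5);
    println!("generic storage dispatched through the query type");
}
```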
Query operations
Each query storage struct implements the QueryStorageOps trait found in the plumbing module:
pub trait QueryStorageOps<Q>
where
    Self: QueryStorageMassOps,
    Q: Query,
{
which defines the basic operations that all queries support. The most important are these two:
- maybe changed after: returns true if the value of the query (for the given key) may have changed since the given revision.
- fetch: returns the up-to-date value for the given key (or an error in the case of an "unrecovered" cycle).
Maybe changed after
/// True if the value of `input`, which must be from this query, may have
/// changed after the given revision ended.
///
/// This function should only be invoked with a revision less than the current
/// revision.
fn maybe_changed_after(
    &self,
    db: &<Q as QueryDb<'_>>::DynDb,
    input: DatabaseKeyIndex,
    revision: Revision,
) -> bool;
The maybe_changed_after operation computes whether a query's value may have changed after the given revision. In other words, Q.maybe_changed_after(R) is true if the value of the query Q may have changed in the revisions (R+1)..=R_now, where R_now is the current revision. Note that it doesn't make sense to ask maybe_changed_after(R_now).
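As a minimal illustration of this contract, a memo that records the revision in which its value last changed can answer the question with a single comparison. The names here (Memo, changed_at) are illustrative, not salsa's actual types:

```rust
// Hypothetical sketch: a memo tracks the revision in which its
// value last changed; "may have changed after R" is then just a
// comparison against that revision.
type Revision = u64;

struct Memo {
    changed_at: Revision,
}

impl Memo {
    /// True if the value may have changed in revisions `(revision + 1)..=r_now`.
    fn maybe_changed_after(&self, revision: Revision) -> bool {
        self.changed_at > revision
    }
}

fn main() {
    let memo = Memo { changed_at: 5 };
    assert!(memo.maybe_changed_after(3)); // changed in revision 5 > 3
    assert!(!memo.maybe_changed_after(5)); // unchanged since revision 5
}
```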
Input queries
Input queries are set explicitly by the user. maybe_changed_after can therefore just check when the value was last set and compare.
Interned queries
Derived queries
The logic for derived queries is more complex. We summarize the high-level ideas here, but you may find the flowchart useful to dig deeper. The terminology section may also be useful; in some cases, we link to that section on the first usage of a word.
- If an existing memo is found, then we check if the memo was verified in the current revision. If so, we can compare its changed at revision and return true or false appropriately.
- Otherwise, we must check whether dependencies have been modified:
- Let R be the revision in which the memo was last verified; we wish to know if any of the dependencies have changed since revision R.
- First, we check the durability. For each memo, we track the minimum durability of the memo's dependencies. If the memo has durability D, and there have been no changes to an input with durability D since the last time the memo was verified, then we can consider the memo verified without any further work.
- If the durability check is not sufficient, then we must check the dependencies individually. For this, we iterate over each dependency D and invoke the maybe changed after operation to check whether D has changed since the revision R.
- If no dependency was modified:
- We can mark the memo as verified and use its changed at revision to return true or false.
- Assuming dependencies have been modified:
- Then we execute the user's query function (same as in fetch), which potentially backdates the resulting value.
- Compare the changed at revision in the resulting memo and return true or false.
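The steps above can be sketched in simplified form. Everything here is illustrative: real salsa tracks considerably more state, and the names (Memo, high_water, deps_changed) are invented for the example. The key idea shown is the durability shortcut followed by the per-dependency fallback:

```rust
// Illustrative sketch of derived-query verification (not salsa's real code).
type Revision = u64;
type Durability = u8;

struct Memo {
    verified_at: Revision,
    changed_at: Revision,
    durability: Durability, // min durability of the memo's dependencies
}

/// Last revision in which any input of at least durability `d` changed.
fn last_changed(d: Durability, high_water: &[Revision]) -> Revision {
    high_water[d as usize]
}

fn maybe_changed_after(
    memo: &mut Memo,
    asked_revision: Revision,
    current: Revision,
    high_water: &[Revision],
    deps_changed: impl Fn(Revision) -> bool,
) -> bool {
    if memo.verified_at == current {
        // Already verified in this revision: just compare changed-at.
        return memo.changed_at > asked_revision;
    }
    // Durability shortcut: no input of this durability changed since the
    // memo was last verified, so the memo can be considered verified.
    if last_changed(memo.durability, high_water) <= memo.verified_at {
        memo.verified_at = current;
        return memo.changed_at > asked_revision;
    }
    // Otherwise check the dependencies individually (stubbed out here).
    if !deps_changed(memo.verified_at) {
        memo.verified_at = current;
        return memo.changed_at > asked_revision;
    }
    // A dependency changed: the real code would re-execute the query
    // function and possibly backdate; we conservatively report "changed".
    true
}

fn main() {
    // Durability-0 inputs last changed at R7; durability-1 (or higher) at R2.
    let high_water = [7, 2];
    let mut memo = Memo { verified_at: 4, changed_at: 3, durability: 1 };
    // Durability-1 inputs unchanged since R2 <= R4: memo verifies cheaply.
    assert!(!maybe_changed_after(&mut memo, 3, 9, &high_water, |_| true));
    assert_eq!(memo.verified_at, 9);
}
```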
Fetch
/// Execute the query, returning the result (often, the result
/// will be memoized). This is the "main method" for
/// queries.
///
/// Returns `Err` in the event of a cycle, meaning that computing
/// the value for this `key` is recursively attempting to fetch
/// itself.
fn fetch(&self, db: &<Q as QueryDb<'_>>::DynDb, key: &Q::Key) -> Q::Value;
The fetch operation computes the value of a query. It prefers to reuse memoized values when it can.
Input queries
Input queries simply load the result from the table.
Interned queries
Interned queries look up the input in a hashmap to find an existing integer. If none is present, a new one is assigned.
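A minimal sketch of that interning scheme, using a plain HashMap and u32 indices in place of salsa's actual interning tables (names and types here are illustrative):

```rust
use std::collections::HashMap;

// Toy interner: values get sequential indices, and a reverse table
// supports the lookup direction (cf. the generated `lookup_foo` queries).
#[derive(Default)]
struct Interner {
    map: HashMap<String, u32>,
    values: Vec<String>,
}

impl Interner {
    fn intern(&mut self, key: &str) -> u32 {
        if let Some(&id) = self.map.get(key) {
            return id; // already interned: reuse the existing index
        }
        let id = self.values.len() as u32;
        self.values.push(key.to_string());
        self.map.insert(key.to_string(), id);
        id
    }

    fn lookup(&self, id: u32) -> &str {
        &self.values[id as usize]
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("foo");
    let b = interner.intern("foo");
    assert_eq!(a, b); // interning is idempotent
    assert_eq!(interner.lookup(a), "foo");
}
```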
Derived queries
The logic for derived queries is more complex. We summarize the high-level ideas here, but you may find the flowchart useful to dig deeper. The terminology section may also be useful; in some cases, we link to that section on the first usage of a word.
- If an existing memo is found, then we check if the memo was verified in the current revision. If so, we can directly return the memoized value.
- Otherwise, if the memo contains a memoized value, we must check whether dependencies have been modified:
- Let R be the revision in which the memo was last verified; we wish to know if any of the dependencies have changed since revision R.
- First, we check the durability. For each memo, we track the minimum durability of the memo's dependencies. If the memo has durability D, and there have been no changes to an input with durability D since the last time the memo was verified, then we can consider the memo verified without any further work.
- If the durability check is not sufficient, then we must check the dependencies individually. For this, we iterate over each dependency D and invoke the maybe changed after operation to check whether D has changed since the revision R.
- If no dependency was modified:
- We can mark the memo as verified and return its memoized value.
- Assuming dependencies have been modified or the memo does not contain a memoized value:
- Then we execute the user's query function.
- Next, we compute the revision in which the memoized value last changed:
- Backdate: If there was a previous memoized value, and the new value is equal to that old value, then we can backdate the memo, which means to use the 'changed at' revision from before.
- Thanks to backdating, it is possible for a dependency of the query to have changed in some revision R1 but for the output of the query to have changed in some revision R2 where R2 predates R1.
- Otherwise, we use the current revision.
- Construct a memo for the new value and return it.
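The backdating step can be sketched as follows. The names are hypothetical and the real memo bookkeeping is more involved; the point is only that an equal value keeps the old changed-at revision:

```rust
// Sketch of backdating during fetch: after re-executing the query
// function, the new value is compared with the old memoized value;
// if they are equal, the old changed-at revision is retained.
type Revision = u64;

struct Memo<V> {
    value: V,
    changed_at: Revision,
}

fn store_new_value<V: PartialEq>(
    old: Option<Memo<V>>,
    new_value: V,
    current: Revision,
) -> Memo<V> {
    match old {
        // Backdate: value unchanged, so keep the earlier changed-at.
        Some(old) if old.value == new_value => Memo {
            value: new_value,
            changed_at: old.changed_at,
        },
        // Otherwise the value changed in the current revision.
        _ => Memo { value: new_value, changed_at: current },
    }
}

fn main() {
    let memo = store_new_value(Some(Memo { value: 42, changed_at: 3 }), 42, 8);
    assert_eq!(memo.changed_at, 3); // backdated: same value as before
    let memo = store_new_value(Some(memo), 43, 9);
    assert_eq!(memo.changed_at, 9); // value changed: current revision
}
```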
Derived queries flowchart
Derived queries are by far the most complex. The flow of the maybe changed after and fetch operations is documented in a flowchart, which can be edited on draw.io.
Cycles
Cross-thread blocking
The interface for blocking across threads now works as follows:
- When one thread T1 wishes to block on a query Q being executed by another thread T2, it invokes Runtime::try_block_on. This will check for cycles. Assuming no cycle is detected, it will block T1 until T2 has completed with Q. At that point, T1 reawakens. However, we don't know the result of executing Q, so T1 now has to "retry". Typically, this will result in successfully reading the cached value.
- While T1 is blocking, the runtime moves its query stack (a Vec) into the shared dependency graph data structure. When T1 reawakens, it recovers ownership of its query stack before returning from try_block_on.
Cycle detection
When a thread T1 attempts to execute a query Q, it will try to load the value for Q from the memoization tables. If it finds an InProgress marker, that indicates that Q is currently being computed and signals a potential cycle. T1 will then try to block on the query Q:
- If Q is also being computed by T1, then there is a cycle.
- Otherwise, if Q is being computed by some other thread T2, we have to check whether T2 is (transitively) blocked on T1. If so, there is a cycle.
These two cases are handled internally by the Runtime::try_block_on function. Detecting the intra-thread cycle case is easy; to detect cross-thread cycles, the runtime maintains a dependency DAG between threads (identified by RuntimeId). Before adding an edge T1 -> T2 (i.e., T1 is blocked waiting for T2) into the DAG, it checks whether a path exists from T2 to T1. If so, we have a cycle and the edge cannot be added (the DAG would no longer be acyclic).
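The edge-insertion check can be sketched with a toy dependency graph. This simplification exploits the fact that a blocked thread waits on at most one other thread, so reachability is a simple walk; the names are illustrative, not salsa's internals:

```rust
use std::collections::HashMap;

// Toy thread-dependency graph: before recording that `from` blocks on
// `to`, verify that no path already leads from `to` back to `from`.
type RuntimeId = u32;

#[derive(Default)]
struct DependencyGraph {
    // Each thread blocks on at most one other thread.
    edges: HashMap<RuntimeId, RuntimeId>,
}

impl DependencyGraph {
    fn reachable(&self, from: RuntimeId, to: RuntimeId) -> bool {
        let mut current = from;
        while let Some(&next) = self.edges.get(&current) {
            if next == to {
                return true;
            }
            current = next;
        }
        false
    }

    /// Returns false (refusing the edge) if adding it would create a cycle.
    fn try_add_edge(&mut self, from: RuntimeId, to: RuntimeId) -> bool {
        if from == to || self.reachable(to, from) {
            return false; // cycle detected
        }
        self.edges.insert(from, to);
        true
    }
}

fn main() {
    let mut graph = DependencyGraph::default();
    assert!(graph.try_add_edge(1, 2)); // T1 blocks on T2
    assert!(graph.try_add_edge(2, 3)); // T2 blocks on T3
    assert!(!graph.try_add_edge(3, 1)); // would close the cycle T1 -> T2 -> T3 -> T1
}
```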
When a cycle is detected, the current thread T1 has full access to the query stacks that are participating in the cycle. Consider: naturally, T1 has access to its own stack. There is also a path T2 -> ... -> Tn -> T1 of blocked threads. Each of the blocked threads T2 ..= Tn will have moved their query stacks into the dependency graph, so those query stacks are available for inspection.
Using the available stacks, we can create a list of cycle participants Q0 ... Qn and store it in a Cycle struct. If none of the participants Q0 ... Qn have cycle recovery enabled, we panic with the Cycle struct, which will trigger all the queries on this thread to panic.
Cycle recovery via fallback
If any of the cycle participants Q0 ... Qn has cycle recovery set, we recover from the cycle. To help explain how this works, we will use this example cycle, which involves three threads. Beginning with the current query, the cycle participants are QA3, QB2, QB3, QC2, QC3, and QA2.
          The cyclic
          edge we have
          failed to add.
               :
     A         :        B        C
               :
    QA1        v       QB1      QC1
 ┌► QA2    ┌──► QB2  ┌─► QC2
 │  QA3 ───┘   QB3 ──┘  QC3 ───┐
 │                             │
 └─────────────────────────────┘
Recovery works in phases:
- Analyze: As we enumerate the cycle participants, we collect their collective inputs (all queries invoked so far by any cycle participant) along with the maximum changed-at and minimum durability. We then remove the cycle participants themselves from this list of inputs, leaving only the queries external to the cycle.
- Mark: For each query Q that is annotated with #[salsa::recover], we mark it and all of its successors on the same thread by setting its cycle flag to the c: Cycle we constructed earlier; we also reset its inputs to the collective inputs gathered during the analysis. If those queries resume execution later, those marks will trigger them to immediately unwind and use cycle recovery, and the inputs will be used as the inputs to the recovery value.
  - Note that we mark all the successors of Q on the same thread, whether or not they have recovery set. We'll discuss later how this is important in the case where the active thread (A, here) doesn't have any recovery set.
- Unblock: Each blocked thread T that has a recovering query is forcibly reawoken; the outgoing edge from that thread to its successor in the cycle is removed. Its condvar is signalled with a WaitResult::Cycle(c). When the thread reawakens, it will see that result and start unwinding with the cycle c.
- Handle the current thread: Finally, we have to choose how to have the current thread proceed. If the current thread includes any query with recovery information, then we can begin unwinding. Otherwise, the current thread simply continues as if there had been no cycle, and so the cyclic edge is added to the graph and the current thread blocks. This is possible because some other thread had recovery information and therefore has been awoken.
Let's walk through the process with a few examples.
Example 1: Recovery on the detecting thread
Consider the case where only the query QA2 has recovery set. It and QA3 will be marked with their cycle flag set to c: Cycle. Threads B and C will not be unblocked, as they do not have any cycle recovery nodes. The current thread (thread A) will initiate unwinding with the cycle c as the value. Unwinding will pass through QA3 and be caught by QA2. QA2 will substitute the recovery value and return normally. QA1 and QC3 will then complete normally, and so forth, on up until all queries have completed.
Example 2: Recovery in two queries on the detecting thread
Consider the case where both QA2 and QA3 have recovery set. It proceeds the same as Example 1 until the current thread initiates unwinding. When QA3 receives the cycle, it stores its recovery value and completes normally. QA2 then adds QA3 as an input dependency: at that point, QA2 observes that it too has the cycle mark set, and so it initiates unwinding. The rest of QA2 therefore never executes. This unwinding is caught by QA2's entry point, and it stores the recovery value and returns normally. QA1 and QC3 then continue normally, as they have not had their cycle flag set.
Example 3: Recovery on another thread
Now consider the case where only the query QB2 has recovery set. It and QB3 will be marked with the cycle c: Cycle and thread B will be unblocked; the edge QB3 -> QC2 will be removed from the dependency graph. Thread A will then add an edge QA3 -> QB2 and block on thread B. At that point, thread A releases the lock on the dependency graph, and so thread B is re-awoken. It observes the WaitResult::Cycle and initiates unwinding. Unwinding proceeds through QB3 and into QB2, which recovers. QB1 is then able to execute normally, as is QA3, and execution proceeds from there.
Example 4: Recovery on all queries
Now consider the case where all the queries have recovery set. In that case, they are all marked with the cycle, and all the cross-thread edges are removed from the graph. Each thread will independently awaken and initiate unwinding. Each query will recover.
Terminology
Backdate
Backdating is when we mark a value that was computed in revision R as having last changed in some earlier revision. This is done when we have an older memo M and we can compare the two values to see that, while the dependencies of M may have changed, the result of the query function did not.
Changed at
The changed at revision for a memo is the revision in which that memo's value last changed. Typically, this is the same as the revision in which the query function was last executed, but it may be an earlier revision if the memo was backdated.
Dependency
A dependency of a query Q is some other query Q1 that was invoked as part of computing the value for Q (typically, invoked by Q's query function).
Derived query
A derived query is a query whose value is defined by the result of a user-provided query function. That function is executed to get the result of the query. Unlike input queries, the result of a derived query can always be recomputed whenever needed simply by re-executing the function.
Durability
Durability is an optimization that we use to avoid checking the dependencies of a query individually. It was introduced in RFC #5.
Input query
An input query is a query whose value is explicitly set by the user. When that value is set, a durability can also be provided.
LRU
The set_lru_capacity method can be used to fix the maximum capacity for a query at a specific number of values. If more values are added after that point, then salsa will drop the values from older memos to conserve memory (we always retain the dependency information for those memos, however, so that we can still compute whether a value may have changed, even if we don't know what that value is). The LRU mechanism was introduced in RFC #4.
Memo
A memo stores information about the last time that a query function for some query Q was executed:
- Typically, it contains the value that was returned from that function, so that we don't have to execute it again.
- However, this is not always true: some queries don't cache their result values, and values can also be dropped as a result of LRU collection. In those cases, the memo just stores dependency information, which can still be useful to determine if other queries that have Q as a dependency may have changed.
- The revision in which the memo was last verified.
- The changed at revision in which the memo's value last changed. (Note that it may be backdated.)
- The minimum durability of the memo's dependencies.
- The complete set of dependencies, if available, or a marker that the memo has an untracked dependency.
Query
Query function
The query function is the user-provided function that we execute to compute the value of a derived query. Salsa assumes that all query functions are a 'pure' function of their dependencies unless the user reports an untracked read. Salsa always assumes that functions have no important side-effects (i.e., that they don't send messages over the network whose results you wish to observe) and thus that it doesn't have to re-execute functions unless it needs their return value.
Revision
A revision is a monotonically increasing integer that we use to track the "version" of the database. Each time the value of an input query is modified, we create a new revision.
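A minimal sketch of that counter (illustrative names only): every input write bumps the revision, and the new revision is recorded as that input's changed-at.

```rust
use std::collections::HashMap;

// Toy runtime: a monotonically increasing revision counter, plus a
// per-input record of the revision in which it was last set.
struct Runtime {
    current_revision: u64,
    input_changed_at: HashMap<&'static str, u64>,
}

impl Runtime {
    fn set_input(&mut self, key: &'static str) {
        self.current_revision += 1; // every input write creates a new revision
        self.input_changed_at.insert(key, self.current_revision);
    }
}

fn main() {
    let mut runtime = Runtime { current_revision: 0, input_changed_at: HashMap::new() };
    runtime.set_input("request_text");
    runtime.set_input("request_text");
    assert_eq!(runtime.current_revision, 2);
    assert_eq!(runtime.input_changed_at["request_text"], 2);
}
```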
Untracked dependency
An untracked dependency is an indication that the result of a derived query depends on something not visible to the salsa database. Untracked dependencies are created by invoking report_untracked_read or report_synthetic_read. When an untracked dependency is present, derived queries are always re-executed if the durability check fails (see the description of the fetch operation for more details).
Verified
A memo is verified in a revision R if we have checked that its value is still up-to-date (i.e., if we were to reexecute the query function, we are guaranteed to get the same result). Each memo tracks the revision in which it was last verified to avoid repeatedly checking whether dependencies have changed during the fetch and maybe changed after operations.
RFCs
The Salsa RFC process is used to describe the motivations for major changes made to Salsa. RFCs are recorded here in the Salsa book as a historical record of the considerations that were raised at the time. Note that the contents of RFCs, once merged, are typically not updated to match further changes. Instead, the rest of the book is updated to include the RFC text and then kept up to date as more PRs land and so forth.
Creating an RFC
If you'd like to propose a major new Salsa feature, simply clone the repository and create a new chapter under the list of RFCs based on the RFC template. Then open a PR with a subject line that starts with "RFC:".
RFC vs Implementation
The RFC can be in its own PR, or it can include work on the implementation as well, whatever works best for you.
Does my change need an RFC?
Not all PRs require RFCs. RFCs are only needed for larger features or major changes to how Salsa works. And they don't have to be super complicated, but they should capture the most important reasons you would like to make the change. When in doubt, it's ok to just open a PR, and we can always request an RFC if we want one.
Description/title
Metadata
- Author: (Github username(s) or real names, as you prefer)
- Date: (today's date)
- Introduced in: https://github.com/salsa-rs/salsa/pull/1 (please update once you open your PR)
Summary
Summarize the effects of the RFC in bullet point form.
Motivation
Say something about your goals here.
User's guide
Describe effects on end users here.
Reference guide
Describe implementation details or other things here.
Frequently asked questions
Use this section to add in design notes, downsides, rejected approaches, or other considerations.
Query group traits
Metadata
- Author: nikomatsakis
- Date: 2019-01-15
- Introduced in: https://github.com/salsa-rs/salsa-rfcs/pull/1
Motivation
- Support dyn QueryGroup for each query group trait as well as impl QueryGroup
  - dyn QueryGroup will be much more convenient, at the cost of runtime efficiency
- Don't require you to redeclare each query in the final database, just the query groups
User's guide
Declaring a query group
Users will declare query groups by decorating a trait with salsa::query_group:
#[salsa::query_group(MyGroupStorage)]
trait MyGroup {
    // Inputs are annotated with `#[salsa::input]`. For inputs, the final trait will include
    // a `set_my_input(&mut self, key: K1, value: V1)` method automatically added,
    // as well as possibly other mutation methods.
    #[salsa::input]
    fn my_input(&self, key: K1) -> V1;

    // "Derived" queries are just a getter.
    fn my_query(&self, key: K2) -> V2;
}
The query_group attribute is a procedural macro. It takes as argument the name of the storage struct for the query group -- this is a struct, generated by the macro, which represents the query group as a whole. It is attached to a trait definition which defines the individual queries in the query group.
The macro generates three things that users interact with:
- the trait, here named MyGroup. This will be used when writing the definitions for the queries and other code that invokes them.
- the storage struct, here named MyGroupStorage. This will be used later when constructing the final database.
- query structs, named after each query but converted to camel-case and with the word "Query" appended (e.g., MyInputQuery for my_input). These types are rarely needed, but are presently useful for things like invoking the GC. These types violate our rule that "things the user needs to name should be given names by the user", but we choose not to fully resolve this question in this RFC.
In addition, the macro generates a number of structs that users should not have to be aware of. These are described in the "reference guide" section.
Controlling query modes
Input queries, as described in the trait, are specified via the #[salsa::input] attribute.
Derived queries can be customized by the following attributes, attached to the getter method (e.g., fn my_query(..)):
- #[salsa::invoke(foo::bar)] specifies the path to the function to invoke when the query is called (the default is my_query).
- #[salsa::volatile] specifies a "volatile" query, which is assumed to read untracked input and hence must be re-executed on every revision.
- #[salsa::dependencies] specifies a "dependencies-only" query, which tracks its dependencies but does not memoize its value.
Creating the database
Creating a salsa database works by using a #[salsa::database(..)] attribute. The .. content should be a list of paths leading to the storage structs for each query group that the database will implement. It is no longer necessary to list the individual queries. In addition to the salsa::database attribute, the struct must have access to a salsa::Runtime and implement the salsa::Database trait. Hence the complete declaration looks roughly like so:
#[salsa::database(MyGroupStorage)]
struct MyDatabase {
    runtime: salsa::Runtime<MyDatabase>,
}

impl salsa::Database for MyDatabase {
    fn salsa_runtime(&self) -> &salsa::Runtime<MyDatabase> {
        &self.runtime
    }
}
This (procedural) macro generates various impls and types that cause MyDatabase to implement all the traits for the query groups it supports, and which customize the storage in the runtime to have all the data needed. Users should not have to interact with these details, and they are written out in the reference guide section.
Reference guide
The goal here is not to give the full details of how to do the lowering, but to describe the key concepts. Throughout the text, we will refer to names (e.g., MyGroup or MyGroupStorage) that appear in the example from the User's Guide -- this indicates that we use whatever name the user provided.
The plumbing::QueryGroup trait
The QueryGroup trait is a new trait added to the plumbing module. It is implemented by the query group storage struct MyGroupStorage. Its role is to link from that struct to the various bits of data that the salsa runtime needs:
pub trait QueryGroup<DB: Database> {
    type GroupStorage;
    type GroupKey;
}
This trait is implemented by the storage struct (MyGroupStorage) in our example. You can see there is a bit of confusing naming going on here -- what we call (for users) the "storage struct" does not actually wind up containing the true storage (that is, the hashmaps and things salsa uses). Instead, it merely implements the QueryGroup trait, which has associated types that lead us to the structs we need:
- the group storage contains the hashmaps and things for all the queries in the group
- the group key is an enum with variants for each of the queries. It basically stores all the data needed to identify some particular query value from within the group -- that is, the name of the query, plus the keys used to invoke it.
As described further on, the #[salsa::query_group] macro will generate an impl of this trait for the MyGroupStorage struct, along with the group storage and group key type definitions.
The plumbing::HasQueryGroup<G> trait
HasQueryGroup<G> is a new trait added to the plumbing module. It is implemented by the database struct MyDatabase for every query group that MyDatabase supports. Its role is to offer methods that move back and forth between the context of the full database and the context of an individual query group:
pub trait HasQueryGroup<G>: Database
where
    G: QueryGroup<Self>,
{
    /// Access the group storage struct from the database.
    fn group_storage(db: &Self) -> &G::GroupStorage;

    /// "Upcast" a group key into a database key.
    fn database_key(group_key: G::GroupKey) -> Self::DatabaseKey;
}
Here the "database key" is an enum that contains variants for each group. Its role is to take a group key and put it into the context of the entire database.
The Query trait
The query trait (pre-existing) is extended to include links to its group, and methods to convert from the group storage to the query storage, plus methods to convert from a query key up to the group key:
pub trait Query<DB: Database>: Debug + Default + Sized + 'static {
    /// Type that you give as a parameter -- for queries with zero
    /// or more than one input, this will be a tuple.
    type Key: Clone + Debug + Hash + Eq;

    /// What value does the query return?
    type Value: Clone + Debug;

    /// Internal struct storing the values for the query.
    type Storage: plumbing::QueryStorageOps<DB, Self> + Send + Sync;

    /// Associated query group struct.
    type Group: plumbing::QueryGroup<
        DB,
        GroupStorage = Self::GroupStorage,
        GroupKey = Self::GroupKey,
    >;

    /// Generated struct that contains storage for all queries in a group.
    type GroupStorage;

    /// Type that identifies a particular query within the group + its key.
    type GroupKey;

    /// Extract storage for this query from the storage for its group.
    fn query_storage(group_storage: &Self::GroupStorage) -> &Self::Storage;

    /// Create the group key for this query.
    fn group_key(key: Self::Key) -> Self::GroupKey;
}
Converting to/from the context of the full database generically
Putting all the previous plumbing traits together, this means that given:
- a database DB that implements HasQueryGroup<G>;
- a group struct G that implements QueryGroup<DB>; and,
- a query struct Q that implements Query<DB, Group = G>,
we can (generically) get the storage for the individual query Q out from the database db via a two-step process:
let group_storage = HasQueryGroup::group_storage(db);
let query_storage = Query::query_storage(group_storage);
Similarly, we can convert from the key to an individual query up to the "database key" in a two-step process:
let group_key = Query::group_key(key);
let db_key = HasQueryGroup::database_key(group_key);
Lowering query groups
The role of the #[salsa::query_group(MyGroupStorage)] trait MyGroup { .. } macro is primarily to generate the group storage struct and the impl of QueryGroup. That involves generating the following things:
- the query trait MyGroup itself, but with:
  - salsa::foo attributes stripped
  - #[salsa::input] methods expanded to include setters:
    - fn set_my_input(&mut self, key: K1, value__: V1);
    - fn set_constant_my_input(&mut self, key: K1, value__: V1);
- the query group storage struct MyGroupStorage
  - We also generate an impl of QueryGroup<DB> for MyGroupStorage, linking to the internal storage struct and group key enum
- the individual query types
  - Ideally, we would use Rust hygiene to hide these structs, but as that is not currently possible they are given names based on the queries, converted to camel-case (e.g., MyInputQuery and MyQueryQuery).
  - They implement the salsa::Query trait.
- the internal group storage struct
  - Ideally, we would use Rust hygiene to hide this struct, but as that is not currently possible it is entitled MyGroupGroupStorage<DB>. Note that it is generic with respect to the database DB. This is because the actual query storage sometimes requires storing database keys, and hence we need to know the final database type.
  - It contains one field per query with a link to the storage information for that query:
    - my_query: <MyQueryQuery as salsa::plumbing::Query<DB>>::Storage
    - (the MyQueryQuery type is also generated; see the "individual query types" above)
  - The internal group storage struct offers a public, inherent method for_each_query:
    - fn for_each_query(db: &DB, op: &mut dyn FnMut(...))
    - This is invoked by the code generated by #[salsa::database] when implementing the for_each_query method of the plumbing::DatabaseOps trait.
- the group key
  - Again, ideally we would use hygiene to hide the name of this struct, but since we cannot, it is entitled MyGroupGroupKey.
  - It is an enum which contains one variant per query, with the value being the key:
    - my_query(<MyQueryQuery as salsa::plumbing::Query<DB>>::Key)
  - The group key enum offers a public, inherent method maybe_changed_after:
    - fn maybe_changed_after<DB>(db: &DB, db_descriptor: &DB::DatabaseKey, revision: Revision)
    - It is invoked when implementing maybe_changed_after for the database key.
Lowering database storage
The #[salsa::database(MyGroupStorage)] attribute macro creates the links to the query groups. It generates the following things:
- impl of HasQueryGroup<MyGroupStorage> for MyDatabase
  - Naturally, there is one such impl for each query group.
- the database key enum
  - Ideally, we would use Rust hygiene to hide this enum, but currently it is called __SalsaDatabaseKey.
  - The database key is an enum with one variant per query group:
    - MyGroupStorage(<MyGroupStorage as QueryGroup<MyDatabase>>::GroupKey)
- the database storage struct
  - Ideally, we would use Rust hygiene to hide this struct, but currently it is called __SalsaDatabaseStorage.
  - The database storage struct contains one field per query group, storing its internal storage:
    - my_group_storage: <MyGroupStorage as QueryGroup<MyDatabase>>::GroupStorage
- impl of plumbing::DatabaseStorageTypes for MyDatabase
  - This is a plumbing trait that links to the database storage / database key types.
  - The salsa::Runtime uses it to determine what data to include. The query types use it to determine a database key.
- impl of plumbing::DatabaseOps for MyDatabase
  - This contains a for_each_query method, which is implemented by invoking, in turn, the inherent methods defined on each query group storage struct.
- impl of plumbing::DatabaseKey for the database key enum
  - This contains a method maybe_changed_after. We implement this by matching to get a particular group key, and then invoking the inherent method on the group key struct.
Alternatives
This proposal results from a fair amount of iteration. Compared to the status quo, there is one primary downside. We also explain a few things here that may not be obvious.
Why include a group storage struct?
You might wonder why we need the MyGroupStorage struct at all. It is a touch of boilerplate, but there are several advantages to it:
- You can't attach associated types to the trait itself. This is because the "type version" of the trait (dyn MyGroup) may not be available, since not all traits are dyn-capable.
- We try to keep to the principle that "any type that might be named externally from the macro is given its name by the user". In this case, the #[salsa::database] attribute needs to name the group storage structs.
  - In earlier versions, we tried to auto-generate these names, but this failed because sometimes users would want to pub use the query traits and hide their original paths.
  - (One exception to this principle today are the per-query structs.)
- We expect that we can use the MyGroupStorage struct to achieve more encapsulation in the future. While the struct must be public and named from the database, the trait (and the query key/value types) actually does not have to be.
Downside: Size of a database key
Database keys now wind up with two discriminants: one to identify the
group, and one to identify the query. That's a bit sad. This could be
overcome by using unsafe code: the idea would be that a group/database
key would be stored as the pair of an integer and a union. Each
group within a given database would be assigned a range of integer
values, and the unions would store the actual key values. We leave
such a change for future work.
Future possibilities
Here are some ideas we might want to pursue later.
No generics
We leave generic parameters on the query group trait etc for future work.
Public / private
We'd like the ability to make more details from the query groups private. This will require some tinkering.
Inline query definitions
Instead of defining queries in separate functions, it might be nice to have the option of defining query methods in the trait itself:
#[salsa::query_group(MyGroupStorage)]
trait MyGroup {
#[salsa::input]
fn my_input(&self, key: K1) -> V1;
fn my_query(&self, key: K2) -> V2 {
// define my-query right here!
}
}
It's a bit tricky to figure out how to handle this, so that is left for future work. Also, it would mean that the method body itself is inside of a macro (the procedural macro) which can make IDE integration harder.
Non-query functions
It might be nice to be able to include functions in the trait that are
not queries, but rather helpers that compose queries. This should be
pretty easy, just need a suitable #[salsa]
attribute.
Summary
- We introduce #[salsa::interned] queries which convert a Key type into a numeric index of type Value, where Value is either the type InternId (defined by salsa) or some newtype thereof.
- Each interned query foo also produces an inverse lookup_foo method that converts back from the Value to the Key that was interned.
- The InternId type (defined by salsa) is basically a newtype'd integer, but it internally uses NonZeroU32 to enable space-saving optimizations in memory layout.
- The Value types can be any type that implements the salsa::InternKey trait, also introduced by this RFC. This trait has two methods, from_intern_id and as_intern_id.
- The interning is integrated into the GC and tracked like any other query, which means that interned values can be garbage-collected, and any computation that was dependent on them will be collected.
Motivation
The need for interning
Many salsa applications wind up needing the ability to construct
"interned keys". Frequently this pattern emerges because we wish to
construct identifiers for things in the input. These identifiers
generally have a "tree-like shape". For example, in a compiler, there
may be some set of input files -- these are enumerated in the inputs
and serve as the "base" for a path that leads to items in the user's
input. But within an input file, there are additional structures, such
as struct
or impl
declarations, and these structures may contain
further structures within them (such as fields or methods). This gives
rise to a path grammar like the following, which can be used to identify a given item:
PathData = <file-name>
| PathData / <identifier>
These paths could be represented in the compiler with an Arc
, but
because they are omnipresent, it is convenient to intern them instead
and use an integer. Integers are Copy
types, which is convenient,
and they are also small (32 bits typically suffices in practice).
Why interning is difficult today: garbage collection
Unfortunately, integrating interning into salsa at present presents some hard choices, particularly with a long-lived application. You can easily add an interning table into the database, but unless you do something clever, it will simply grow and grow forever. But as the user edits their programs, some paths that used to exist will no longer be relevant -- for example, a given file or impl may be removed, invalidating all those paths that were based on it.
Due to the nature of salsa's recomputation model, it is not easy to detect when paths that used to exist in a prior revision are no longer relevant in the next revision. This is because salsa never explicitly computes "diffs" of this kind between revisions -- it just finds subcomputations that might have gone differently and re-executes them. Therefore, if the code that created the paths (e.g., that processed the result of the parser) is part of a salsa query, it will simply not re-create the invalidated paths -- there is no explicit "deletion" point.
In fact, the same is true of all of salsa's memoized query values. We
may find that in a new revision, some memoized query values are no
longer relevant. For example, in revision R1, perhaps we computed
foo(22)
and foo(44)
, but in the new input, we now only need to
compute foo(22)
. The foo(44)
value is still memoized, we just
never asked for its value. This is why salsa includes a garbage
collector, which can be used to clean up these memoized values that are
no longer relevant.
But using a garbage collection strategy with a hand-rolled interning scheme is not easy. You could trace through all the values in salsa's memoization tables to implement a kind of mark-and-sweep scheme, but that would require salsa to add such a mechanism. It might also be quite a lot of tracing! The current salsa GC mechanism has no need to walk through the values themselves in a memoization table; it only examines the keys and the metadata (unless we are freeing a value, of course).
How this RFC changes the situation
This RFC presents an alternative. The idea is to move the interning into salsa itself by creating special "interning queries". Dependencies on these queries are tracked like any other query and hence they integrate naturally with salsa's garbage collection mechanisms.
User's guide
This section covers how interned queries are expected to be used.
Declaring an interned query
You can declare an interned query like so:
#[salsa::query_group]
trait Foo {
    #[salsa::interned]
    fn intern_path_data(&self, data: PathData) -> salsa::InternId;
}
Query keys. Like any query, these queries can take any number of keys. If multiple keys are provided, then the interned key is a tuple of each key value. In order to be interned, the keys must implement Clone, Hash and Eq.
Return type. The return type of an interned key may be of any type that implements salsa::InternKey: salsa provides an impl for the type salsa::InternId, but you can implement it for your own types.
Inverse query. For each interning query, we automatically generate a reverse query that will invert the interning step. It is named lookup_XXX, where XXX is the name of the query. Hence here it would be fn lookup_intern_path_data(&self, key: salsa::InternId) -> PathData.
The expected usage
Using an interned query is quite straightforward. You simply invoke it
with a key, and you will get back an integer, and you can use the
generated lookup
method to convert back to the original value:
let key = db.intern_path_data(path_data1);
let path_data2 = db.lookup_intern_path_data(key);
Note that the interned value will be cloned -- so, like all Salsa values, it is best if that is a cheap operation. Interestingly, interning can help to keep recursive, tree-shaped values cheap, because the "pointers" within can be replaced with interned keys.
Custom return types
The return type for an intern query does not have to be an InternId. It can
be any type that implements the salsa::InternKey
trait:
pub trait InternKey {
/// Create an instance of the intern-key from a `InternId` value.
fn from_intern_id(v: InternId) -> Self;
/// Extract the `InternId` with which the intern-key was created.
fn as_intern_id(&self) -> InternId;
}
Recommended practice
This section shows the recommended practice for using interned keys,
building on the Path
and PathData
example that we've been working
with.
Naming Convention
First, note the recommended naming convention: the intern key is Foo and the key's associated data is FooData (in our case, Path and PathData). The intern key is given the shorter name because it is used far more often. Moreover, other types should never store the full data, but rather should store the interned key.
Defining the intern key
The intern key should always be a newtype struct that implements
the InternKey
trait. So, something like this:
pub struct Path(InternId);
impl salsa::InternKey for Path {
fn from_intern_id(v: InternId) -> Self {
Path(v)
}
fn as_intern_id(&self) -> InternId {
self.0
}
}
Convenient lookup method
It is often convenient to add a lookup
method to the newtype key:
impl Path {
// Adding this method is often convenient, since you can then
// write `path.lookup(db)` to access the data, which reads a bit better.
pub fn lookup(&self, db: &impl MyDatabase) -> PathData {
db.lookup_intern_path_data(*self)
}
}
Defining the data type
Recall that our paths were defined by a recursive grammar like so:
PathData = <file-name>
| PathData / <identifier>
This recursion is quite typical of salsa applications. The recommended
way to encode it in the PathData
structure itself is to build on other
intern keys, like so:
#[derive(Clone, Hash, Eq, ..)]
enum PathData {
Root(String),
Child(Path, String),
// ^^^^ Note that the recursive reference here
// is encoded as a Path.
}
Note though that the PathData
type will be cloned whenever the value
for an interned key is looked up, and it may also be cloned to store
dependency information between queries. So, as an optimization, you
might prefer to avoid String
in favor of Arc<String>
-- or even
intern the strings as well.
Interaction with the garbage collector
Interned keys can be garbage collected as normal, with one caveat. Even if requested, Salsa will never collect the results generated in the current revision. This is because it would permit the same key to be interned twice in the same revision, possibly mapping to distinct intern keys each time.
Note that if an interned key is collected, its index will be re-used. Salsa's dependency tracking system should ensure that anything incorporating the older value is considered dirty, but you may see the same index showing up more than once in the logs.
Reference guide
Interned keys are implemented using a hash-map that maps from the interned data to its index, as well as a vector containing (for each index) various bits of data. In addition to the interned data, we must track the revision in which the value was interned and the revision in which it was last accessed, to help manage the interaction with the GC. Finally, we have to track some sort of free list that tracks the keys that are being re-used. The current implementation never actually shrinks the vectors and maps from their maximum size, but this might be a useful thing to be able to do (this is effectively a memory allocator, so standard allocation strategies could be used here).
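To make the scheme above concrete, here is a toy sketch of an interning table in plain Rust. It is not salsa's actual code: the revision stamps, last-access tracking, and free list described above are omitted, keeping only the map-plus-vector core and the clone-on-lookup behavior noted earlier.

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Toy interner: a map from data to index, plus a vector holding the data.
struct Interner<K: Clone + Hash + Eq> {
    map: HashMap<K, u32>,
    values: Vec<K>,
}

impl<K: Clone + Hash + Eq> Interner<K> {
    fn new() -> Self {
        Interner { map: HashMap::new(), values: Vec::new() }
    }

    // Return the existing index for `key`, or assign the next free one.
    fn intern(&mut self, key: K) -> u32 {
        if let Some(&ix) = self.map.get(&key) {
            return ix;
        }
        let ix = self.values.len() as u32;
        self.values.push(key.clone());
        self.map.insert(key, ix);
        ix
    }

    // The inverse lookup clones the stored data, as the text notes.
    fn lookup(&self, ix: u32) -> K {
        self.values[ix as usize].clone()
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern("src/lib.rs".to_string());
    let b = interner.intern("src/main.rs".to_string());
    let a2 = interner.intern("src/lib.rs".to_string());
    assert_eq!(a, a2); // interning the same data yields the same index
    assert_ne!(a, b);
    assert_eq!(interner.lookup(b), "src/main.rs");
    println!("ok");
}
```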
InternId
Presently the InternId
type is implemented to wrap a NonZeroU32
:
pub struct InternId {
value: NonZeroU32,
}
This means that Option<InternId> (or Option<Path>, continuing our example from before) will only be a single word. To accommodate this, the InternId constructors require that the value is less than InternId::MAX; the value is deliberately set low (currently to 0xFFFF_FF00) to allow for more sentinel values in the future (Rust doesn't presently expose the capability of having sentinel values other than zero on stable, but it is possible on nightly).
Alternatives and future work
None at present.
Summary
Allow specifying a dependency on a query group without making it a super-trait.
Motivation
Currently, there's only one way to express that queries from group A can use another group B: namely, B can be a super-trait of A:
#[salsa::query_group(AStorage)]
trait A: B {
}
This approach works and allows one to express complex dependencies. However, this approach falls down when one wants to make a dependency a private implementation detail: clients with db: &impl A can freely call B methods on the db.
This is a bad situation from a software engineering point of view: if everything is accessible, it's hard to make a distinction between public API and private implementation details. In the context of salsa the situation is even worse, because it breaks the "firewall" pattern. It's customary to wrap low-level frequently-changing or volatile queries into higher-level queries which produce stable results and contain invalidation. In the current salsa, however, it's very easy to accidentally call a low-level volatile query instead of a wrapper, introducing an undesired dependency.
User's guide
To specify query dependencies, a requires
attribute should be used:
#[salsa::query_group(SymbolsDatabaseStorage)]
#[salsa::requires(SyntaxDatabase)]
#[salsa::requires(EnvDatabase)]
pub trait SymbolsDatabase {
fn get_symbol_by_name(&self, name: String) -> Symbol;
}
The argument of requires
is a path to a trait. The traits from all requires
attributes are available when implementing the query:
fn get_symbol_by_name(
db: &(impl SymbolsDatabase + SyntaxDatabase + EnvDatabase),
name: String,
) -> Symbol {
// ...
}
However, these traits are not available without explicit bounds:
fn fuzzy_find_symbol(db: &impl SymbolsDatabase, name: String) {
// Can't accidentally call methods of the `SyntaxDatabase`
}
Note that, while the RFC does not propose to add per-query dependencies, a query implementation can voluntarily specify only a subset of the traits from the requires attribute:
fn get_symbol_by_name(
// Purposefully don't depend on EnvDatabase
db: &(impl SymbolsDatabase + SyntaxDatabase),
name: String,
) -> Symbol {
// ...
}
Reference guide
The implementation is straightforward and consists of adding traits from
requires
attributes to various where
bounds. For example, we would generate
the following blanket impl for the above example:
impl<T> SymbolsDatabase for T
where
T: SyntaxDatabase + EnvDatabase,
T: salsa::plumbing::HasQueryGroup<SymbolsDatabaseStorage>
{
...
}
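The encapsulation this blanket impl buys can be sketched in plain Rust, without salsa's macros. The trait and method names below are illustrative stand-ins for the generated code, but the mechanism is the same: the implementation side sees the required traits, while clients bounded only by the public trait cannot reach them.

```rust
// A "private" dependency group, analogous to SyntaxDatabase above.
trait SyntaxDatabase {
    fn parse(&self) -> String;
}

// The public query group, analogous to SymbolsDatabase.
trait SymbolsDatabase {
    fn get_symbol_by_name(&self, name: &str) -> String;
}

// Blanket impl: the implementation may freely use SyntaxDatabase...
impl<T: SyntaxDatabase> SymbolsDatabase for T {
    fn get_symbol_by_name(&self, name: &str) -> String {
        format!("{} in {}", name, self.parse())
    }
}

struct Database;
impl SyntaxDatabase for Database {
    fn parse(&self) -> String {
        "tree".to_string()
    }
}

// ...but this client sees only SymbolsDatabase. Calling `db.parse()`
// here would fail to compile, which is exactly the point of `requires`.
fn fuzzy_find_symbol(db: &impl SymbolsDatabase, name: &str) -> String {
    db.get_symbol_by_name(name)
}

fn main() {
    assert_eq!(fuzzy_find_symbol(&Database, "foo"), "foo in tree");
    println!("ok");
}
```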
Alternatives and future work
The semantics of requires
closely resembles where
, so we could imagine a
syntax based on magical where clauses:
#[salsa::query_group(SymbolsDatabaseStorage)]
pub trait SymbolsDatabase
where ???: SyntaxDatabase + EnvDatabase
{
fn get_symbol_by_name(&self, name: String) -> Symbol;
}
However, it's not obvious what should stand for ???. Self won't be ideal,
because supertraits are a sugar for bounds on Self
, and we deliberately want
different semantics. Perhaps picking a magical identifier like DB
would work
though?
One potential future development here is per-query-function bounds, but they can already be simulated by voluntarily requiring fewer bounds in the implementation function.
Another direction for future work is privacy: because traits from the requires clause are not part of the public interface, in theory it should be possible to restrict their visibility. In practice, this still hits the public-in-private lint, at least with a trivial implementation.
Summary
Add Least Recently Used values eviction as a supplement to garbage collection.
Motivation
Currently, the single mechanism for controlling memory usage in salsa is garbage collection. Experience with rust-analyzer has shown that it is insufficient for two reasons:
- It's hard to determine which values should be collected. The current implementation in rust-analyzer just periodically clears all values of specific queries.
- GC is generally run in between revisions. However, especially after just opening the project, the number of values within a single revision can be high. In other words, GC doesn't really help with keeping peak memory usage under control. While it is possible to run GC concurrently with calculations (and this is in fact what rust-analyzer is doing right now to try to keep the high-water mark of memory lower), this is highly unreliable and inefficient.
The mechanism of LRU targets both of these weaknesses:
- LRU tracks which values are accessed, and uses this information to determine which values are actually unused.
- LRU has a fixed cap on the maximal number of entries, thus bounding the memory usage.
User's guide
It is possible to call the set_lru_capacity(n) method on any non-input query. The effect of this is that the table for the query stores at most n values in the database. If a new value is computed, and there are already n existing ones in the database, the least recently used one is evicted. Note that information about query dependencies is not evicted. It is possible to change the LRU capacity at runtime at any time. n == 0 is a special case, which completely disables LRU logic. LRU is not enabled by default.
Reference guide
Implementation-wise, we store a linked hash map of keys, in the recently-used order. Because reads of the queries are considered uses, we now need to write-lock the query map even if the query is fresh. However, we don't do this bookkeeping if LRU is disabled, so you don't have to pay for it unless you use it.
A slight complication arises with volatile queries (and, in general, with any query with an untracked input). Similarly to GC, evicting such a query could lead to an inconsistent database. For this reason, volatile queries are never evicted.
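The eviction policy itself can be sketched in a few lines. This is a toy model, not salsa's implementation: salsa uses a linked hash map under a lock, while the sketch below tracks recency with a logical clock and scans for the minimum on eviction (the n == 0 "disabled" case is omitted).

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Toy LRU cache: reads count as uses; inserting beyond capacity
// evicts the least recently used entry.
struct LruCache<K: Hash + Eq + Clone, V> {
    capacity: usize,
    clock: u64,
    entries: HashMap<K, (V, u64)>, // value + last-use time
}

impl<K: Hash + Eq + Clone, V> LruCache<K, V> {
    fn new(capacity: usize) -> Self {
        LruCache { capacity, clock: 0, entries: HashMap::new() }
    }

    // A read updates the entry's last-use time.
    fn get(&mut self, key: &K) -> Option<&V> {
        self.clock += 1;
        let now = self.clock;
        self.entries.get_mut(key).map(|(v, t)| {
            *t = now;
            &*v
        })
    }

    fn insert(&mut self, key: K, value: V) {
        self.clock += 1;
        if self.entries.len() >= self.capacity && !self.entries.contains_key(&key) {
            // Evict the entry with the oldest last-use time.
            if let Some(lru) = self
                .entries
                .iter()
                .min_by_key(|(_, (_, t))| *t)
                .map(|(k, _)| k.clone())
            {
                self.entries.remove(&lru);
            }
        }
        self.entries.insert(key, (value, self.clock));
    }
}

fn main() {
    let mut cache = LruCache::new(2);
    cache.insert("a", 1);
    cache.insert("b", 2);
    cache.get(&"a"); // touching "a" makes "b" the least recently used
    cache.insert("c", 3); // evicts "b"
    assert!(cache.get(&"a").is_some());
    assert!(cache.get(&"b").is_none());
    assert!(cache.get(&"c").is_some());
    println!("ok");
}
```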
Alternatives and future work
LRU is a compromise, as it is prone to both accidentally evicting useful queries and needlessly holding onto useless ones. In particular, in the steady state and without additional GC, memory usage will be proportional to the LRU capacity: it is not only an upper bound, but a lower bound as well!
In theory, some deterministic way of evicting values when you know for sure that you don't need them anymore may be more efficient. However, it is unclear how exactly that would work! Experiments in rust-analyzer show that it's not easy to tame a dynamic crate graph, and that simplistic phase-based strategies fall down.
It's also worth noting that, unlike GC, LRU can in theory be more memory efficient than deterministic memory management. Unlike a traditional GC, we can safely evict "live" objects and recalculate them later. That makes it possible to use LRU for problems whose working set of "live" queries is larger than the available memory, at the cost of guaranteed recomputations.
Currently, eviction is strictly LRU-based. It should be possible to be smarter and to take the size of values and the time required to recompute them into account when making decisions about eviction.
Summary
- Introduce a user-visible concept of Durability
- Adjusting the "durability" of an input can allow salsa to skip a lot of validation work
- Garbage collection -- particularly of interned values -- however becomes more complex
- Possible future expansion: automatic detection of more "durable" input values
Motivation
Making validation faster by optimizing for "durability"
Presently, salsa's validation logic requires traversing all dependencies to check that they have not changed. This can sometimes be quite costly in practice: rust-analyzer for example sometimes spends as much as 90ms revalidating the results from a no-op change. One option to improve this is simply optimization -- salsa#176 for example reduces validation times significantly, and there remains opportunity to do better still. However, even if we are able to traverse the dependency graph more efficiently, it will still be an O(n) process. It would be nice if we could do better.
One observation is that, in practice, there are often input values
that are known to change quite infrequently. For example, in
rust-analyzer, the standard library and crates downloaded from
crates.io are unlikely to change (though changes are possible; see
below). Similarly, the Cargo.toml
file for a project changes
relatively infrequently compared to the sources. We say then that
these inputs are more durable -- that is, they change less frequently.
This RFC proposes a mechanism to take advantage of durability for optimization purposes. Imagine that we have some query Q that depends solely on the standard library. The idea is that we can track the last revision R when the standard library was changed. Then, when traversing dependencies, we can skip traversing the dependencies of Q if it was last validated after the revision R. Put another way, we only need to traverse the dependencies of Q when the standard library changes -- which is unusual. If the standard library does change, for example by the user tinkering with its internal sources, then yes, we walk the dependencies of Q to see if it is affected.
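The check described above can be modeled in a few lines. This is a toy sketch, not salsa's API: the names Runtime, last_changed, and needs_revalidation are illustrative. The idea is simply that each durability level records the revision in which an input of that level last changed, and a memoized value verified after that revision can be trusted without traversing its dependencies.

```rust
// Durability levels, mirroring the constants introduced in the user's guide.
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum Durability {
    Low,
    Medium,
    High,
}

// Toy runtime state: for each durability level, the revision in which
// an input of that durability last changed.
struct Runtime {
    last_changed: [u64; 3],
}

impl Runtime {
    // A query verified at `verified_at` must have its dependencies
    // re-traversed only if an input of its durability changed since then.
    fn needs_revalidation(&self, durability: Durability, verified_at: u64) -> bool {
        verified_at < self.last_changed[durability as usize]
    }
}

fn main() {
    // Low-durability inputs changed recently (revision 10); high-durability
    // inputs have not changed since revision 1.
    let rt = Runtime { last_changed: [10, 4, 1] };
    // A high-durability query verified at revision 5 can be skipped...
    assert!(!rt.needs_revalidation(Durability::High, 5));
    // ...but a low-durability query verified then must be re-traversed.
    assert!(rt.needs_revalidation(Durability::Low, 5));
    println!("ok");
}
```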
User's guide
The durability type
We add a new type salsa::Durability which has three associated constants:
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct Durability(..);
impl Durability {
// Values that change regularly, like the source to the current crate.
pub const LOW: Durability;
// Values that change infrequently, like Cargo.toml.
pub const MEDIUM: Durability;
// Values that are not expected to change, like sources from crates.io or the stdlib.
pub const HIGH: Durability;
}
Specifying the durability of an input
When setting an input foo
, one can now invoke a method
set_foo_with_durability
, which takes a Durability
as the final
argument:
// db.set_foo(key, value) is equivalent to:
db.set_foo_with_durability(key, value, Durability::LOW);
// This would indicate that `foo` is not expected to change:
db.set_foo_with_durability(key, value, Durability::HIGH);
Durability of interned values
Interned values are always considered Durability::HIGH
. This makes
sense as many queries that only use high durability inputs will also
make use of interning internally. A consequence of this is that they
will not be garbage collected unless you use the specific patterns
recommended below.
Synthetic writes
Finally, we add one new method, synthetic_write(durability)
,
available on the salsa runtime:
db.salsa_runtime().synthetic_write(Durability::HIGH)
As the name suggests, synthetic_write
causes salsa to act as
though a write to an input of the given durability had taken
place. This can be used for benchmarking, but it is also important for controlling what values get garbage collected, as described below.
Tracing and garbage collection
Durability affects garbage collection. The SweepStrategy
struct is
modified as follows:
/// Sweeps values which may be outdated, but which have not
/// been verified since the start of the current collection.
/// These are typically memoized values from previous computations
/// that are no longer relevant.
pub fn sweep_outdated(self) -> SweepStrategy;
/// Sweeps values which have not been verified since the start
/// of the current collection, even if they are known to be
/// up to date. This can be used to collect "high durability" values
/// that are not *directly* used by the main query.
///
/// So, for example, imagine a main query `result` which relies
/// on another query `threshold` and (indirectly) on a `threshold_inner`:
///
/// ```
/// result(10) [durability: Low]
/// |
/// v
/// threshold(10) [durability: High]
/// |
/// v
/// threshold_inner(10) [durability: High]
/// ```
///
/// If you modify a low durability input and then access `result`,
/// then `result(10)` and its *immediate* dependencies will
/// be considered "verified". However, because `threshold(10)`
/// has high durability and no high durability input was modified,
/// we will not verify *its* dependencies, so `threshold_inner` is not
/// verified (but it is also not outdated).
///
/// Collecting unverified things would therefore collect `threshold_inner(10)`.
/// Collecting only *outdated* things (i.e., with `sweep_outdated`)
/// would collect nothing -- but this does mean that some high durability
/// queries that are no longer relevant to your main query may stick around.
///
/// To get the most precise garbage collection, do a synthetic write with
/// high durability -- this will force us to verify *all* values. You can then
/// sweep unverified values.
pub fn sweep_unverified(self) -> SweepStrategy;
Reference guide
Review: The need for GC to collect outdated values
In general, salsa's lazy validation scheme can lead to the accumulation of garbage that is no longer needed. Consider a query like this one:
fn derived1(db: &impl Database, start: usize) -> usize {
    let middle = db.input(start);
    db.derived2(middle)
}
Now imagine that, on some particular run, we compute derived1(22):
- derived1(22)
  - executes input(22), which returns 44
  - then executes derived2(44)
The end result of this execution will be a dependency graph like:
derived1(22) -> derived2(44)
|
v
input(22)
Now, imagine that the user modifies input(22) to have the value 45.
The next time derived1(22)
executes, it will load input(22)
as before,
but then execute derived2(45)
. This leaves us with a dependency
graph as follows:
derived1(22) -> derived2(45)
|
v
input(22) derived2(44)
Notice that we still see derived2(44)
in the graph. This is because
we memoized the result in last round and then simply had no use for it
in this round. The role of GC is to collect "outdated" values like
this one.
Review: Tracing and GC before durability
In the absence of durability, when you execute a query Q in some new revision where Q has not previously executed, salsa must trace back through all the queries that Q depends on to ensure that they are still up to date. As each of Q's dependencies is validated, we mark it to indicate that it has been checked in the current revision (and thus, within a particular revision, we would never validate or trace a particular query twice).
So, to continue our example, when we first executed derived1(22)
in revision R1, we might have had a graph like:
derived1(22) -> derived2(44)
[verified: R1] [verified: R1]
|
v
input(22)
Now, after we modify input(22)
and execute derived1(22)
again, we
would have a graph like:
derived1(22) -> derived2(45)
[verified: R2] [verified: R2]
|
v
input(22) derived2(44)
[verified: R1]
Note that derived2(44)
, the outdated value, never had its "verified"
revision updated, because we never accessed it.
Salsa leverages this validation stamp to serve as the "marking" phase of a simple mark-sweep garbage collector. The idea is that the sweep method can collect any values that are "outdated" (whose "verified" revision is less than the current revision).
The intended model is that one can do a "mark-sweep" style garbage collection like so:
// Modify some input, triggering a new revision.
db.set_input(22, 45);
// The **mark** phase: execute the "main query", with the intention
// that we wish to retain all the memoized values needed to compute
// this main query, but discard anything else. For example, in an IDE
// context, this might be a "compute all errors" query.
db.derived1(22);
// The **sweep** phase: discard anything that was not traced during
// the mark phase.
db.sweep_all(...);
In the case of our example, when we execute sweep_all
, it would
collect derived2(44)
.
Challenge: Durability lets us avoid tracing
This tracing model is affected by the move to durability. Now, if some derived value has a high durability, we may skip tracing its descendants altogether. This means that they would never be "verified" -- that is, their "verified date" would never be updated.
This is why we modify the definition of "outdated" as follows:
- For a query value Q with durability D, let R_lc be the revision when values of durability D last changed. Let R_v be the revision when Q was last verified. Q is outdated if R_v < R_lc.
  - In other words, Q is outdated if it may have changed since it was last verified.
Collecting interned and untracked values
Most values can be collected whenever we like without influencing correctness. However, interned values and those with untracked dependencies are an exception -- they can only be collected when outdated. This is because their values may not be reproducible -- in other words, re-executing an interning query (or one with untracked dependencies, which can read arbitrary program state) twice in a row may produce a different value. In the case of an interning query, for example, we may wind up using a different integer than we did before. If the query is outdated, this is not a problem: anything that depended on its result must also be outdated, and hence would be re-executed and would observe the new value. But if the query is not outdated, then we could get inconsistent results.
Alternatives and future work
Rejected: Arbitrary durabilities
We considered permitting arbitrary "levels" of durability -- for example, allowing the user to specify a number -- rather than offering just three. Ultimately it seemed like that level of control wasn't really necessary and that having just three levels would be sufficient and simpler.
Rejected: Durability lattices
We also considered permitting a "lattice" of durabilities -- e.g., to mirror the crate DAG in rust-analyzer -- but this is tricky because the lattice itself would be dependent on other inputs.
Dynamic databases
Metadata
- Author: nikomatsakis
- Date: 2020-06-29
- Introduced in: salsa-rs/salsa#1 (please update once you open your PR)
Summary
- Retool Salsa's setup so that the generated code for a query group is not dependent on the final database type, and interacts with the database only through dyn trait values.
- This imposes a certain amount of indirection but has the benefit that when a query group definition changes, less code must be recompiled as a result.
- Key changes include:
  - Database keys are "interned" in the database to produce a DatabaseKeyIndex.
  - The values for cached queries are stored directly in the hashtable instead of in an Arc. There is still an Arc per cached query, but it stores the dependency information.
  - The various traits are changed to make salsa::Database dyn-safe. Invoking methods on the runtime must now go through a salsa::Runtime trait.
  - The salsa::requires functionality is removed.
- Upsides of the proposal:
  - Potentially improved recompilation time. Minimal code is regenerated.
  - Removing the DatabaseData unsafe code hack that was required by slots.
- Downsides of the proposal:
  - The effect on runtime performance must be measured.
  - DatabaseKeyIndex values will leak, as we propose no means to reclaim them. However, the same is true of Slot values today.
  - Storing values for the tables directly in the hashtable makes it less obvious how we would return references to them in a safe fashion (before, I had planned to have a separate module that held onto the Arc for the slot, so we were sure the value would not be deallocated; one can still imagine supporting this feature, but it would require some fancier unsafe code reasoning, although it would be more efficient).
  - The salsa::requires functionality is removed.
Motivation
Under the current salsa setup, all of the "glue code" that manages cache
invalidation and other logic is ultimately parameterized by a type DB
that
refers to the full database. The problem is that, if you consider a typical
salsa crate graph, the actual value for that type is not available until the
final database crate is compiled:
graph TD;
  Database["Crate that defines the database"];
  QueryGroupA["Crate with query group A"];
  QueryGroupB["Crate with query group B"];
  SalsaCrate["the `salsa` crate"];
  Database -- depends on --> QueryGroupA;
  Database -- depends on --> QueryGroupB;
  QueryGroupA -- depends on --> SalsaCrate;
  QueryGroupB -- depends on --> SalsaCrate;
The result is that we do not actually compile a good part of the code from
QueryGroupA
or QueryGroupB
until we build the final database crate.
What you can do today: dyn traits
What you can do today is to define a "dyn-compatible" query group
trait and then write your derived functions using a dyn
type as the
argument:
```rust
#[salsa::query_group(QueryGroupAStorage)]
trait QueryGroupA {
    fn derived(&self, key: usize) -> usize;
}

fn derived(db: &dyn QueryGroupA, key: usize) -> usize {
    key * 2
}
```
This has the benefit that the `derived` function is not generic. However, it's still true that the glue code salsa makes will be generic over a `DB` type -- this includes the impl of `QueryGroupA` but also the `Slot` and other machinery.
This means that even if the only change is to query group B, in a different
crate, the glue code for query group A ultimately has to be recompiled whenever
the Database
crate is rebuilt (though incremental compilation may help here).
Moreover, as reported in salsa-rs/salsa#220, measurements of rust-analyzer suggest that this code may be duplicated, accounting for more of the binary than we would expect.
FIXME: I'd like to have better measurements on the above!
Our goal
The primary goal of this RFC is to make it so that the glue code we generate for query groups is not dependent on the database type, thus enabling better incremental rebuilds.
User's guide
Most of the changes in this RFC are "under the hood". But there are various user-visible changes proposed here.
All query groups must be dyn safe
The largest one is that all Salsa query groups must now be dyn-safe. The existing salsa query methods are all dyn-safe, so what this really implies is that one cannot have super-traits that use generic methods or other things that are not dyn safe. For example, this query group would be illegal:
#[salsa::query_group(QueryGroupAStorage)]
trait QueryGroupA: Foo {
}
trait Foo {
fn method<T>(t: T) { }
}
We could support query groups that are not dyn safe, but it would require us to have two "similar but different" ways of generating plumbing, and I'm not convinced that it's worth it. Moreover, it would require some form of opt-in so that would be a measure of user complexity as well.
All query functions must take a dyn database
You used to be able to implement queries by using `impl MyDatabase`, like so:

```rust
fn my_query(db: &impl MyDatabase, ...) { .. }
```

but you must now use `dyn MyDatabase`:

```rust
fn my_query(db: &dyn MyDatabase, ...) { .. }
```
Databases embed a `Storage<DB>` with a fixed field name
The "Hello World" database becomes the following:
```rust
#[salsa::database(QueryGroup1, ..., QueryGroupN)]
struct MyDatabase {
    storage: salsa::Storage<Self>,
}

impl salsa::Database for MyDatabase {}
```
In particular:
- You now embed a `salsa::Storage<Self>` instead of a `salsa::Runtime<Self>`.
- The field must be named `storage` by default; we can include a `#[salsa::storage_field(xxx)]` annotation to change that default if desired.
  - Or we could scrape the struct declaration and infer it, I suppose.
- You no longer have to define the `salsa_runtime` and `salsa_runtime_mut` methods; they move to the `DatabaseOps` trait and are manually implemented by doing `self.storage.runtime()` and so forth.
Why these changes, and what is this `Storage` struct? The reason is that the actual storage for queries is moving outside of the runtime. The `Storage` struct just combines the `Runtime` (whose type no longer references `DB` directly) with an `Arc<DB::Storage>`. The full type of `Storage`, since it includes the database type, cannot appear in any public interface; it is just used by the various implementations that are created by `salsa::database`.
Instead of `db.query(Q)`, you write `Q.in_db(&db)`
As a consequence of the previous point, the existing `query` and `query_mut` methods on the `salsa::Database` trait are changed to methods on the query types themselves. So instead of `db.query(SomeQuery)`, one would write `SomeQuery.in_db(&db)` (or `in_db_mut`). This both helps by making the `salsa::Database` trait dyn-safe and also works better with the new use of `dyn` types, since it permits a coercion from `&db` to the appropriate `dyn` database type at the point of call.
The salsa-event mechanism will move to dynamic dispatch
A further consequence is that the existing salsa_event
method will be
simplified and made suitable for dynamic dispatch. It used to take a closure
that would produce the event if necessary; it now simply takes the event itself.
This is partly because events themselves no longer contain complex information:
they used to have database-keys, which could require expensive cloning, but they
now have simple indices.
```rust
fn salsa_event(&self, event: Event) {
    #![allow(unused_variables)]
}
```
This may imply some runtime cost, since various parts of the machinery invoke
salsa_event
, and those calls will now be virtual calls. They would previously
have been static calls that would likely have been optimized away entirely.
It is, however, possible that ThinLTO or other such optimizations could remove those calls. This has not been tested, and in any case the runtime effects are not expected to be high, since all the calls will always go to the same function.
The `salsa::requires` function is removed
We currently offer a feature for "private" dependencies between query groups
called #[salsa::requires(ExtraDatabase)]
. This then requires query
functions to be written like:
```rust
fn query_fn(db: &impl Database + ExtraDatabase, ...) { }
```
This format is not compatible with dyn
, so this feature is removed.
Reference guide
Example
To explain the proposal, we'll use the Hello World example, lightly adapted:
```rust
#[salsa::query_group(HelloWorldStorage)]
trait HelloWorld: salsa::Database {
    #[salsa::input]
    fn input_string(&self, key: ()) -> Arc<String>;

    fn length(&self, key: ()) -> usize;
}

fn length(db: &dyn HelloWorld, (): ()) -> usize {
    // Read the input string:
    let input_string = db.input_string(());

    // Return its length:
    input_string.len()
}

#[salsa::database(HelloWorldStorage)]
struct DatabaseStruct {
    runtime: salsa::Runtime<DatabaseStruct>,
}

impl salsa::Database for DatabaseStruct {
    fn salsa_runtime(&self) -> &salsa::Runtime<Self> {
        &self.runtime
    }

    fn salsa_runtime_mut(&mut self) -> &mut salsa::Runtime<Self> {
        &mut self.runtime
    }
}
```
Identifying queries using the DatabaseKeyIndex
We introduce the following struct that represents a database key using a series of indices:
```rust
struct DatabaseKeyIndex {
    /// Identifies the query group.
    group_index: u16,

    /// Identifies the query within the group.
    query_index: u16,

    /// Identifies the key within the query.
    key_index: u32,
}
```
This struct allows the various query group structs to refer to database keys
without having to use a type like DB::DatabaseKey
that is dependent on the
DB
.
The group/query indices will be assigned by the salsa::database
and
salsa::query_group
macros respectively. When query group storage is created,
it will be passed in its group index by the database. Each query will be able to
access its query-index through the Query
trait, as they are statically known
at the time that the query is compiled (the group index, in contrast, depends on
the full set of groups for the database).
The key index can be assigned by the query as it executes without any central coordination. Each query will use an `IndexMap` (from the `indexmap` crate) mapping `Q::Key -> QueryState`. Inserting new keys into this map also creates new indices, and it is possible to index into the map in O(1) time later to obtain the state (or key) for a given index. This map replaces the existing `Q::Key -> Arc<Slot<..>>` map that is used today.
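As a sketch of this scheme (using a plain `HashMap` plus a `Vec` in place of the real `IndexMap`, so the example stays dependency-free; all names are illustrative), inserting a new key assigns it the next index, and that index later gives O(1) access back to the key:

```rust
use std::collections::HashMap;

/// Minimal stand-in for the insertion-order map described above:
/// inserting a new key assigns it the next `key_index`, and the index
/// gives O(1) access to the key later. (The real implementation would
/// use `IndexMap` from the `indexmap` crate.)
struct KeyTable<K> {
    indices: HashMap<K, u32>,
    keys: Vec<K>,
}

impl<K: std::hash::Hash + Eq + Clone> KeyTable<K> {
    fn new() -> Self {
        KeyTable { indices: HashMap::new(), keys: Vec::new() }
    }

    /// Returns the existing index for `key`, or assigns a fresh one.
    fn intern(&mut self, key: K) -> u32 {
        if let Some(&i) = self.indices.get(&key) {
            return i;
        }
        let i = self.keys.len() as u32;
        self.keys.push(key.clone());
        self.indices.insert(key, i);
        i
    }

    /// O(1) lookup from index back to key.
    fn key(&self, index: u32) -> &K {
        &self.keys[index as usize]
    }
}

fn main() {
    let mut table = KeyTable::new();
    assert_eq!(table.intern("parse.rs"), 0);
    assert_eq!(table.intern("lib.rs"), 1);
    // An existing key keeps its original index:
    assert_eq!(table.intern("parse.rs"), 0);
    assert_eq!(table.key(1), &"lib.rs");
}
```

Note that, as the next paragraph discusses, removing entries would invalidate the outstanding indices; this sketch therefore never removes keys.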
One notable implication: we cannot remove entries from the query index map (e.g., for GC) because that would invalidate the existing indices. We can however replace the query-state with a "not computed" value. This is not new: slots already take this approach today. In principle, we could extend the tracing GC to permit compressing and perhaps even rewriting indices, but it's not clear that this is a problem in practice.
The `DatabaseKeyIndex` also supports a `debug` method that returns a value with a human-readable `Debug` output, so that you can do `debug!("{:?}", index.debug(db))`. This works by generating a `fmt_debug` method that is supported by the various query groups.
The various query traits are not generic over a database
Today, the Query
, QueryFunction
, and QueryGroup
traits are generic over
the database DB
, which allows them to name the final database type and
associated types derived from it. In the new scheme, we never want to do that,
and so instead they will now have an associated type, DynDb
, that maps to the
dyn
version of the query group trait that the query is associated with.
Therefore QueryFunction
for example can become:
```rust
pub trait QueryFunction: Query {
    fn execute(db: &<Self as QueryDb<'_>>::DynDb, key: Self::Key) -> Self::Value;

    fn recover(db: &<Self as QueryDb<'_>>::DynDb, cycle: &[DatabaseKeyIndex], key: &Self::Key) -> Option<Self::Value> {
        let _ = (db, cycle, key);
        None
    }
}
```
Storing query results and tracking dependencies
In today's setup, we have all the data for a particular query stored in a
Slot<Q, DB, MP>
, and these slots hold references to one another to track
dependencies. Because the type of each slot is specific to the particular query
Q
, the references between slots are done using a Arc<dyn DatabaseSlot<DB>>
handle. This requires some unsafe hacks, including the DatabaseData
associated
type.
This RFC proposes to alter this setup. Dependencies will store a `DatabaseKeyIndex` instead. This means that validating dependencies is less efficient, as we no longer have a direct pointer to the dependency information but instead must execute three index lookups (one to find the query group, one to locate the query, and then one to locate the key). Similarly, the LRU list can be converted to a `LinkedHashMap` of indices.
We may tinker with other approaches too: the key change in the RFC is that we do not need to store a `DB::DatabaseKey` or `Slot<..DB..>`, but instead can use some type for dependencies that is independent of the database type `DB`.
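To make the three lookups concrete, here is a toy model (hypothetical names; plain `Vec`s stand in for the real storage structs) of resolving a `DatabaseKeyIndex` down to per-key state:

```rust
/// Toy model of the three-step lookup described above. The field names
/// mirror `DatabaseKeyIndex`; everything else is illustrative.
struct DatabaseKeyIndex {
    group_index: u16,
    query_index: u16,
    key_index: u32,
}

struct Database { groups: Vec<GroupStorage> }
struct GroupStorage { queries: Vec<QueryStorage> }
struct QueryStorage { states: Vec<String> }

/// One lookup per level: group, then query, then key.
fn query_state<'a>(db: &'a Database, index: &DatabaseKeyIndex) -> &'a str {
    let group = &db.groups[index.group_index as usize];
    let query = &group.queries[index.query_index as usize];
    &query.states[index.key_index as usize]
}

fn main() {
    let db = Database {
        groups: vec![GroupStorage {
            queries: vec![QueryStorage {
                states: vec!["memoized".to_string(), "in-progress".to_string()],
            }],
        }],
    };
    let index = DatabaseKeyIndex { group_index: 0, query_index: 0, key_index: 1 };
    assert_eq!(query_state(&db, &index), "in-progress");
}
```

This is the cost trade-off the paragraph above describes: three array indexings in place of chasing a single `Arc` pointer.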
Dispatching methods from a DatabaseKeyIndex
There are a number of methods that can be dispatched through the database
interface on a DatabaseKeyIndex
. For example, we already mentioned
fmt_debug
, which emits a debug representation of the key, but there is also
maybe_changed_after
, which checks whether the value for a given key may have
changed since the given revision. Each of these methods is a member of the
DatabaseOps
trait and they are dispatched as follows.
First, the #[salsa::database]
procedural macro is the one which
generates the DatabaseOps
impl for the database. This base method
simply matches on the group index to determine which query group
contains the key, and then dispatches to an inherent
method defined on the appropriate query group struct:
```rust
impl salsa::plumbing::DatabaseOps for DatabaseStruct {
    // We'll use the `fmt_debug` method as an example
    fn fmt_debug(&self, index: DatabaseKeyIndex, fmt: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match index.group_index() {
            0 => {
                let storage = <Self as HasQueryGroup<HelloWorld>>::group_storage(self);
                storage.fmt_debug(index, fmt)
            }
            _ => panic!("Invalid index")
        }
    }
}
```
The query group struct has a very similar inherent method that dispatches based on the query index and invokes a method on the query storage:
```rust
impl HelloWorldGroupStorage__ {
    // We'll use the `fmt_debug` method as an example
    fn fmt_debug(&self, index: DatabaseKeyIndex, fmt: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match index.query_index() {
            0 => self.appropriate_query_field.fmt_debug(index, fmt),
            1 => ...,
            _ => panic!("Invalid index")
        }
    }
}
```
Finally, the query storage can use the key index to look up the appropriate data from the `FxIndexSet`.
Wrap runtime in a `Storage<DB>` type
The Salsa runtime is currently Runtime<DB>
but it will change to just
Runtime
and thus not be generic over the database. This means it can be
referenced directly by query storage implementations. This is very useful
because it allows that type to have a number of pub(crate)
details that query
storage implementations make use of but which are not exposed as part of our
public API.
However, the `Runtime` type used to contain a `DB::Storage`, and without the `DB` in its type, it no longer can. Therefore, we will introduce a new `Storage<DB>` type which is defined like so:
```rust
pub struct Storage<DB: DatabaseImpl> {
    query_store: Arc<DB::DatabaseStorage>,
    runtime: Runtime,
}

impl<DB: DatabaseImpl> Storage<DB> {
    pub fn query_store(&self) -> &DB::DatabaseStorage {
        &self.query_store
    }

    pub fn salsa_runtime(&self) -> &Runtime {
        &self.runtime
    }

    pub fn salsa_runtime_mut(&mut self) -> &mut Runtime {
        &mut self.runtime
    }

    /// Used for parallel queries
    pub fn snapshot(&self) -> Self {
        Storage {
            query_store: self.query_store.clone(),
            runtime: self.runtime.snapshot(),
        }
    }
}
```
The user is expected to include a field storage: Storage<DB>
in their database
definition. The salsa::database
procedural macro, when it generates impls of
traits like HasQueryGroup
, will embed code like self.storage
that looks for
that field.
`salsa_runtime` methods move to `DatabaseOps` trait
The salsa_runtime
methods used to be manually implemented by users to define
the field that contains the salsa runtime. This was always boilerplate. The
salsa::database
macro now handles that job by defining them to invoke the
corresponding methods on Storage
.
Salsa database trait becomes dyn safe
Under this proposal, the Salsa database must be dyn safe. This implies that we have to make a few changes:
- The `query` and `query_mut` methods move to an extension trait.
- The `DatabaseStorageTypes` supertrait is removed (that trait is renamed and altered; see next section).
- The `salsa_event` method changes, as described in the User's guide.
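The extension-trait move can be sketched as follows. The names here are illustrative, not salsa's actual signatures: the base trait keeps only dyn-safe methods, while generic methods live in a blanket-implemented extension trait:

```rust
/// The base trait contains only dyn-safe methods...
trait Database {
    fn salsa_event(&self, event: String);
}

/// ...while a generic method like `query` (which would make `Database`
/// non-dyn-safe if declared there directly) moves to an extension trait.
trait DatabaseExt: Database {
    fn query<Q: Default>(&self) -> Q {
        Q::default()
    }
}

/// Blanket impl: every database automatically gets the extension methods.
impl<T: ?Sized + Database> DatabaseExt for T {}

struct MyDb;

impl Database for MyDb {
    fn salsa_event(&self, _event: String) {}
}

fn main() {
    let db = MyDb;
    // The generic method is still usable on concrete types...
    let n: u32 = db.query();
    assert_eq!(n, 0);
    // ...and the base trait itself remains dyn-safe.
    let _dyn_db: &dyn Database = &db;
}
```

The blanket impl means users never implement `DatabaseExt` by hand; importing it is enough.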
Salsa database trait requires `'static`, at least for now
One downside of this proposal is that the `salsa::Database` trait now has a `'static` bound. This is a result of the lack of GATs -- in particular, the queries expect a `<Q as QueryDb<'_>>::DynDb` as argument. In the query definition, we have something like `type DynDb = dyn QueryGroupDatabase`, which in turn defaults to `dyn QueryGroupDatabase + 'static`.
At the moment, this limitation is harmless, since salsa databases don't support generic parameters. But it would be good to lift in the future, especially as we would like to support arena allocation and other such patterns. The limitation could be overcome in the future by:
- converting to a GAT like `DynDb<'a>`, if those were available;
- or by simulating GATs by introducing a trait to carry the `DynDb` definition, like `QueryDb<'a>`, where `Query` has the supertrait `for<'a> Self: QueryDb<'a>`. This would permit the `DynDb` type to be referenced by writing `<Q as QueryDb<'a>>::DynDb`.
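The second option can be sketched in today's Rust. The trait and type names below are illustrative stand-ins for the salsa plumbing:

```rust
/// Helper trait carrying the `DynDb` definition, simulating the GAT
/// `DynDb<'a>` with a lifetime-parameterized trait.
trait QueryDb<'d> {
    type DynDb: ?Sized + 'd;
}

/// The supertrait `for<'d> QueryDb<'d>` lets `DynDb` be referenced for
/// any lifetime via `<Q as QueryDb<'a>>::DynDb`.
trait Query: for<'d> QueryDb<'d> {
    fn execute(db: &<Self as QueryDb<'_>>::DynDb) -> usize;
}

trait HelloWorld {
    fn input_len(&self) -> usize;
}

struct LengthQuery;

impl<'d> QueryDb<'d> for LengthQuery {
    // No `'static` requirement: the dyn type borrows for `'d`.
    type DynDb = dyn HelloWorld + 'd;
}

impl Query for LengthQuery {
    fn execute(db: &<Self as QueryDb<'_>>::DynDb) -> usize {
        db.input_len() * 2
    }
}

struct Db;

impl HelloWorld for Db {
    fn input_len(&self) -> usize {
        21
    }
}

fn main() {
    assert_eq!(LengthQuery::execute(&Db), 42);
}
```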
Salsa query group traits are extended with `Database` and `HasQueryGroup` supertraits
When #[salsa::query_group]
is applied to a trait, we currently generate a copy
of the trait that is "more or less" unmodified (although we sometimes add
additional synthesized methods, such as the set
method for an input). Under
this proposal, we will also introduce a HasQueryGroup<QueryGroupStorage>
supertrait. Therefore the following input:
```rust
#[salsa::query_group(HelloWorldStorage)]
trait HelloWorld { .. }
```
will generate a trait like:
```rust
trait HelloWorld:
    salsa::Database +
    salsa::plumbing::HasQueryGroup<HelloWorldStorage>
{
    ..
}
```
The Database
trait is the standard salsa::Database
trait and contains
various helper methods. The HasQueryGroup
trait is implemented by the database
and defines various plumbing methods that are used by the storage
implementations.
One downside of this is that salsa::Database
methods become available on the
trait; we might want to give internal plumbing methods more obscure names.
Bounds were already present on the blanket impl of salsa query group trait
The new bounds that are appearing on the trait were always present on the
blanket impl that the salsa::query_group
procedural macro generated, which
looks like so (and continues unchanged under this RFC):
```rust
impl<DB> HelloWorld for DB
where
    DB: salsa::Database,
    DB: salsa::plumbing::HasQueryGroup<HelloWorldStorage>,
{
    ...
}
```
The reason we generate the impl is so that the salsa::database
procedural
macro can simply create the HasQueryGroup
impl and never needs to know the
name of the HelloWorld
trait, only the HelloWorldStorage
type.
Storage types no longer parameterized by the database
Today's storage types, such as Derived
, are parameterized over both a query Q
and a DB
(along with the memoization policy MP
):
```rust
// Before this RFC:
pub struct DerivedStorage<DB, Q, MP>
where
    Q: QueryFunction<DB>,
    DB: Database + HasQueryGroup<Q::Group>,
    MP: MemoizationPolicy<DB, Q>,
```
The DB
parameter should no longer be needed after the previously described
changes are made, so that the signature looks like:
```rust
// After this RFC:
pub struct DerivedStorage<Q, MP>
where
    Q: QueryFunction,
    MP: MemoizationPolicy<Q>,
```
Alternatives and future work
The linchpin of this design is the `DatabaseKeyIndex` type, which allows signatures to refer to "any query in the system" without reference to the `DB` type. The biggest downside of the system is that this type is an integer, which then requires a tracing GC to recover index values. The primary alternative would be to use an `Arc`-like scheme, but this has some severe downsides:
- Requires reference counting and allocation.
- Hashing and equality comparisons have more data to process versus an integer.
- Equality comparisons must still be deep, since you may have older and newer keys co-existing.
- Requires an `Arc<dyn DatabaseKey>`-like setup, which then encounters the problems that this type is not `Send` or `Sync`, leading to hacks like the `DB::DatabaseData` we use today.
Opinionated cancelation
Metadata
- Author: nikomatsakis
- Date: 2021-05-15
- Introduced in: salsa-rs/salsa#265
Summary
- Define stack unwinding as the one true way to handle cancelation in salsa queries
- Modify salsa queries to automatically initiate unwinding when they are canceled
- Use a distinguished value for this panic so that people can test if the panic was a result of cancelation
Motivation
Salsa's database model is fundamentally like a read-write lock. There is always a single master copy of the database which supports writes, and any number of concurrent snapshots that support reads. Whenever a write to the database occurs, any queries executing in those snapshots are considered canceled, because their results are based on stale data. The write blocks until they complete before it actually takes effect. It is therefore advantageous for those reads to complete as quickly as possible.
Cancelation in salsa is currently quite minimal. Effectively, a flag becomes true, and queries can manually check for this flag. This is easy to forget to do. Moreover, we support two modes of cancelation: you can either use `Result` values or use unwinding. In practice, though, there isn't much point to using `Result`: you can't really "recover" from cancelation.
The largest user of salsa, rust-analyzer, uses a fairly opinionated and aggressive form of cancelation:
- Every query is instrumented, using salsa's various hooks, to check for cancelation before it begins.
- If a query is canceled, then it immediately panics, using a special sentinel value.
- Any worker threads holding a snapshot of the DB recognize this value and go back to waiting for work.
We propose to make this model of cancelation the only model of cancelation.
User's guide
When you do a write to the salsa database, that write will block until any queries running in background threads have completed. You really want those queries to complete quickly, though, because they are now operating on stale data and their results are therefore not meaningful. To expedite the process, salsa will cancel those queries. That means that the queries will panic as soon as they try to execute another salsa query. Those panics occur using a sentinel value that you can check for if you wish. If you have a query that contains a long loop which does not execute any intermediate queries, salsa won't be able to cancel it automatically. You may wish to check for cancelation yourself by invoking the unwind_if_canceled
method.
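The mechanism can be sketched without salsa itself: a shared flag stands in for the pending write, and `resume_unwind` carries a sentinel payload that workers can check for. All names here are illustrative, not salsa's real API:

```rust
use std::panic;
use std::sync::atomic::{AtomicBool, Ordering};

/// Hypothetical sentinel used as the panic payload for cancelation;
/// salsa would provide its own distinguished value.
#[derive(Debug)]
struct Canceled;

static PENDING_WRITE: AtomicBool = AtomicBool::new(false);

/// Stand-in for the proposed `unwind_if_canceled` method.
fn unwind_if_canceled() {
    if PENDING_WRITE.load(Ordering::SeqCst) {
        // `resume_unwind` skips the panic hook, so cancelation is quiet.
        panic::resume_unwind(Box::new(Canceled));
    }
}

/// A long-running "query" with a loop that cooperatively checks
/// for cancelation, as the text above recommends.
fn long_query() -> u64 {
    let mut total = 0;
    for i in 0..1_000_000u64 {
        if i % 10_000 == 0 {
            unwind_if_canceled();
        }
        total += i;
    }
    total
}

fn main() {
    PENDING_WRITE.store(true, Ordering::SeqCst); // a "write" arrives
    let result = panic::catch_unwind(long_query);
    // A worker thread recognizes the sentinel and goes back to waiting.
    let payload = result.unwrap_err();
    assert!(payload.downcast_ref::<Canceled>().is_some());
}
```

The key point is the last assertion: because the payload is a distinguished type, a worker can tell cancelation apart from a genuine bug panic.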
Reference guide
The changes required to implement this RFC are as follows:
- Remove `is_current_revision_canceled`.
- Introduce a sentinel cancelation token that can be used with `resume_unwind`.
- Introduce an `unwind_if_canceled` method on the `Database` trait which checks whether cancelation has occurred and panics if so.
  - This method also triggers a `salsa_event` callback.
  - This should probably be inlined for the `if`, with an outlined function to do the actual panic.
- Modify the code for the various queries to invoke `unwind_if_canceled` when they are invoked or validated.
Frequently asked questions
Isn't it hard to write panic-safe code?
It is. However, the salsa runtime is panic-safe, and all salsa queries must already avoid side-effects for other reasons, so in our case, being panic-safe happens by default.
Isn't recovering from panics a bad idea?
No. It's a bad idea to do "fine-grained" recovery from panics, but catching a panic at a high-level of your application and soldiering on is actually exactly how panics were meant to be used. This is especially true in salsa, since all code is already panic-safe.
Does this affect users of salsa who do not use threads?
No. Cancelation in salsa only occurs when there are parallel readers and writers.
What about people using panic-as-abort?
This does mean that salsa is not compatible with panic-as-abort. Strictly speaking, you could still use salsa in single-threaded mode, so that cancelation is not possible.
Remove garbage collection
Metadata
- Author: nikomatsakis
- Date: 2021-06-06
- Introduced in: https://github.com/salsa-rs/salsa/pull/267
Summary
- Remove support for tracing garbage collection
- Make interned keys immortal, for now at least
Motivation
Salsa has traditionally supported "tracing garbage collection", which allowed the user to remove values that were not used in the most recent revision. While this feature is nice in theory, it is not used in practice. Rust Analyzer, for example, prefers to use the LRU mechanism, which offers stricter limits. Considering that it is not used, supporting the garbage collector involves a decent amount of complexity and makes it harder to experiment with Salsa's structure. Therefore, this RFC proposes to remove support for tracing garbage collection. If desired, it can be added back at some future date in an altered form.
User's guide
The primary effect for users is that the various 'sweep' methods from the database and queries are removed. The only way to control memory usage in Salsa now is through the LRU mechanisms.
Reference guide
Removing the GC involves deleting a fair bit of code. The most interesting and subtle code is in the interning support. Previously, interned keys tracked the revision in which they were interned, but also the revision in which they were last accessed. When the sweeping method ran, any interned keys that had not been accessed in the current revision were collected. Since we permitted the GC to run concurrently with readers, we had to be prepared for accesses to interned keys to occur concurrently with the GC, and thus for the possibility that various operations could fail. This complexity is removed, but it means that there is no way to remove interned keys at present.
Frequently asked questions
Why not just keep the GC?
The complexity is not worth it: the GC is not used in practice, and supporting it makes it harder to experiment with Salsa's structure.
Are any users relying on the sweeping functionality?
Hard to say for sure, but none that we know of.
Don't we want some mechanism to control memory usage?
Yes, but we don't quite know what it looks like. LRU seems to be adequate in practice for the present.
What about for interned keys in particular?
We could add an LRU-like mechanism to interning.
Cycle recovery
Metadata
- Author: nikomatsakis
- Date: 2021-10-31
- Introduced in: https://github.com/salsa-rs/salsa/pull/285
Summary
- Permit cycle recovery as long as at least one participant has recovery enabled.
- Modify cycle recovery to take a `&Cycle`.
- Introduce a `Cycle` type that carries information about a cycle and lists participants in a deterministic order.
Motivation
Cycle recovery has been found to have some subtle bugs that could lead to panics. Furthermore, the existing cycle recovery APIs require all participants in a cycle to have recovery enabled and give limited and non-deterministic information. This RFC tweaks the user exposed APIs to correct these shortcomings. It also describes a major overhaul of how cycles are handled internally.
User's guide
By default, cycles in the computation graph are considered a "programmer bug" and result in a panic. Sometimes, though, cycles are outside of the programmer's control. Salsa provides mechanisms to recover from cycles that can help in those cases.
Default cycle handling: panic
By default, when Salsa detects a cycle in the computation graph, Salsa will panic with a salsa::Cycle
as the panic value. Your queries should not attempt to catch this value; rather, the salsa::Cycle
is meant to be caught by the outermost thread, which can print out information from it to diagnose what went wrong. The Cycle
type offers a few methods for inspecting the participants in the cycle:
- `participant_keys` -- returns an iterator over the `DatabaseKeyIndex` for each participant in the cycle.
- `all_participants` -- returns an iterator over `String` values for each participant in the cycle (debug output).
- `unexpected_participants` -- returns an iterator over `String` values for each participant in the cycle that doesn't have recovery information (see next section).
`Cycle` implements `Debug`, but because the standard trait doesn't provide access to the database, the output can be kind of inscrutable. To get more readable `Debug` values, use the method `cycle.debug(db)`, which returns an `impl Debug` that is more readable.
Cycle recovery
Panicking when a cycle occurs is ok for situations where you believe a cycle is impossible. But sometimes cycles can result from illegal user input and cannot be statically prevented. In these cases, you might prefer to gracefully recover from a cycle rather than panicking the entire query. Salsa supports that with the idea of cycle recovery.
To use cycle recovery, you annotate potential participants in the cycle with a #[salsa::recover(my_recover_fn)]
attribute. When a cycle occurs, if any participant P has recovery information, then no panic occurs. Instead, the execution of P is aborted and P will execute the recovery function to generate its result. Participants in the cycle that do not have recovery information continue executing as normal, using this recovery result.
The recovery function has a similar signature to a query function. It is given a reference to your database along with a salsa::Cycle
describing the cycle that occurred; it returns the result of the query. Example:
```rust
fn my_recover_fn(
    db: &dyn MyDatabase,
    cycle: &salsa::Cycle,
) -> MyResultValue
```
The `db` and `cycle` arguments can be used to prepare a useful error message for your users.
Important: Although the recovery function is given a db
handle, you should be careful to avoid creating a cycle from within recovery or invoking queries that may be participating in the current cycle. Attempting to do so can result in inconsistent results.
Figuring out why recovery did not work
If a cycle occurs and some of the participant queries have #[salsa::recover]
annotations and others do not, then the query will be treated as irrecoverable and will simply panic. You can use the Cycle::unexpected_participants
method to figure out why recovery did not succeed and add the appropriate #[salsa::recover]
annotations.
Reference guide
This RFC accompanies a rather long and complex PR with a number of changes to the implementation. We summarize the most important points here.
Cycles
Cross-thread blocking
The interface for blocking across threads now works as follows:
- When one thread `T1` wishes to block on a query `Q` being executed by another thread `T2`, it invokes `Runtime::try_block_on`. This will check for cycles. Assuming no cycle is detected, it will block `T1` until `T2` has completed with `Q`. At that point, `T1` reawakens. However, we don't know the result of executing `Q`, so `T1` now has to "retry". Typically, this will result in successfully reading the cached value.
- While `T1` is blocking, the runtime moves its query stack (a `Vec`) into the shared dependency graph data structure. When `T1` reawakens, it recovers ownership of its query stack before returning from `try_block_on`.
Cycle detection
When a thread T1
attempts to execute a query Q
, it will try to load the value for Q
from the memoization tables. If it finds an InProgress
marker, that indicates that Q
is currently being computed. This indicates a potential cycle. T1
will then try to block on the query Q
:
- If `Q` is also being computed by `T1`, then there is a cycle.
- Otherwise, if `Q` is being computed by some other thread `T2`, we have to check whether `T2` is (transitively) blocked on `T1`. If so, there is a cycle.
These two cases are handled internally by the `Runtime::try_block_on` function. Detecting the intra-thread cycle case is easy; to detect cross-thread cycles, the runtime maintains a dependency DAG between threads (identified by `RuntimeId`). Before adding an edge `T1 -> T2` (i.e., `T1` is blocked waiting for `T2`) into the DAG, it checks whether a path exists from `T2` to `T1`. If so, we have a cycle and the edge cannot be added (otherwise the DAG would no longer be acyclic).
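The edge-insertion check can be sketched as follows. This is a simplified model of the idea only; the real `Runtime` also tracks per-edge query information and the moved query stacks:

```rust
use std::collections::HashMap;

type RuntimeId = u32;

/// Simplified model of the cross-thread dependency DAG.
#[derive(Default)]
struct DependencyGraph {
    /// edges[t] = the threads that `t` is blocked on
    edges: HashMap<RuntimeId, Vec<RuntimeId>>,
}

impl DependencyGraph {
    /// Depth-first search for a path `from -> ... -> to`. This
    /// terminates because `try_add_edge` keeps the graph acyclic.
    fn path_exists(&self, from: RuntimeId, to: RuntimeId) -> bool {
        if from == to {
            return true;
        }
        self.edges
            .get(&from)
            .map_or(false, |succs| succs.iter().any(|&s| self.path_exists(s, to)))
    }

    /// Adds `from -> to` unless doing so would close a cycle;
    /// returns `false` when a cycle is detected.
    fn try_add_edge(&mut self, from: RuntimeId, to: RuntimeId) -> bool {
        if self.path_exists(to, from) {
            return false; // a path `to -> ... -> from` already exists: cycle!
        }
        self.edges.entry(from).or_default().push(to);
        true
    }
}

fn main() {
    let mut graph = DependencyGraph::default();
    assert!(graph.try_add_edge(1, 2)); // T1 blocks on T2
    assert!(graph.try_add_edge(2, 3)); // T2 blocks on T3
    assert!(!graph.try_add_edge(3, 1)); // would close the cycle: rejected
}
```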
When a cycle is detected, the current thread T1
has full access to the query stacks that are participating in the cycle. Consider: naturally, T1
has access to its own stack. There is also a path T2 -> ... -> Tn -> T1
of blocked threads. Each of the blocked threads T2 ..= Tn
will have moved their query stacks into the dependency graph, so those query stacks are available for inspection.
Using the available stacks, we can create a list of cycle participants Q0 ... Qn
and store that into a Cycle
struct. If none of the participants Q0 ... Qn
have cycle recovery enabled, we panic with the Cycle
struct, which will trigger all the queries on this thread to panic.
Cycle recovery via fallback
If any of the cycle participants Q0 ... Qn
has cycle recovery set, we recover from the cycle. To help explain how this works, we will use this example cycle which contains three threads. Beginning with the current query, the cycle participants are QA3
, QB2
, QB3
, QC2
, QC3
, and QA2
.
```text
          The cyclic
          edge we have
          failed to add.
              :
     A        :      B             C
              :
    QA1       v     QB1           QC1
 ┌► QA2     ┌────► QB2    ┌────► QC2
 │  QA3 ────┘       QB3 ──┘       QC3 ───┐
 │                                       │
 └───────────────────────────────────────┘
```
Recovery works in phases:
- Analyze: As we enumerate the query participants, we collect their collective inputs (all queries invoked so far by any cycle participant) and the max changed-at and min duration. We then remove the cycle participants themselves from this list of inputs, leaving only the queries external to the cycle.
- Mark: For each query Q that is annotated with `#[salsa::recover]`, we mark it and all of its successors on the same thread by setting its `cycle` flag to the `c: Cycle` we constructed earlier; we also reset its inputs to the collective inputs gathered during analysis. If those queries resume execution later, those marks will trigger them to immediately unwind and use cycle recovery, and the inputs will be used as the inputs to the recovery value.
  - Note that we mark all the successors of Q on the same thread, whether or not they have recovery set. We'll discuss later how this is important in the case where the active thread (A, here) doesn't have any recovery set.
- Unblock: Each blocked thread T that has a recovering query is forcibly reawoken; the outgoing edge from that thread to its successor in the cycle is removed. Its condvar is signalled with a `WaitResult::Cycle(c)`. When the thread reawakens, it will see that and start unwinding with the cycle `c`.
- Handle the current thread: Finally, we have to choose how to have the current thread proceed. If the current thread includes any query with recovery information, then we can begin unwinding. Otherwise, the current thread simply continues as if there had been no cycle, and so the cyclic edge is added to the graph and the current thread blocks. This is possible because some other thread had recovery information and therefore has been awoken.
Let's walk through the process with a few examples.
Example 1: Recovery on the detecting thread
Consider the case where only the query QA2 has recovery set. It and QA3 will be marked with their `cycle` flag set to `c: Cycle`. Threads B and C will not be unblocked, as they do not have any cycle recovery nodes. The current thread (thread A) will initiate unwinding with the cycle `c` as the value. Unwinding will pass through QA3 and be caught by QA2. QA2 will substitute the recovery value and return normally. QA1 and QC3 will then complete normally, and so forth, on up until all queries have completed.
Example 2: Recovery in two queries on the detecting thread
Consider the case where both QA2 and QA3 have recovery set. Everything proceeds as in Example 1 until the current thread initiates unwinding. When QA3 receives the cycle, it stores its recovery value and completes normally. QA2 then adds QA3 as an input dependency: at that point, QA2 observes that it too has the `cycle` mark set, and so it initiates unwinding. The rest of QA2 therefore never executes. This unwinding is caught by QA2's entry point, which stores the recovery value and returns normally. QA1 and QC3 then continue normally, as they have not had their `cycle` flag set.
Example 3: Recovery on another thread
Now consider the case where only the query QB2 has recovery set. It and QB3 will be marked with the cycle `c: Cycle`, and thread B will be unblocked; the edge `QB3 -> QC2` will be removed from the dependency graph. Thread A will then add an edge `QA3 -> QB2` and block on thread B. At that point, thread A releases the lock on the dependency graph, and so thread B is reawoken. It observes the `WaitResult::Cycle` and initiates unwinding. Unwinding proceeds through QB3 and into QB2, which recovers. QB1 is then able to execute normally, as is QA3, and execution proceeds from there.
Example 4: Recovery on all queries
Now consider the case where all the queries have recovery set. In that case, they are all marked with the cycle, and all the cross-thread edges are removed from the graph. Each thread will independently awaken and initiate unwinding. Each query will recover.
Frequently asked questions
Why have other threads retry instead of giving them the value?
In the past, when one thread T1 blocked on some query Q being executed by another thread T2, we would create a custom channel between the threads. T2 would then send the result of Q directly to T1, and T1 had no need to retry. This mechanism was simplified in this RFC because we don't always have a value available: sometimes the cycle results when T2 is just verifying whether a memoized value is still valid. In that case, the value may not have been computed, and so when T1 retries it will in fact go on to compute the value. (Previously, this case was overlooked by the cycle handling logic and resulted in a panic.)
Why do we use unwinding to manage cycle recovery?
When a query Q participates in cycle recovery, we use unwinding to get from the point where the cycle is detected back to the query's execution function. This ensures that the rest of Q never runs. This is important because Q might otherwise go on to create new cycles even while recovery is proceeding. Consider an example like:
```rust
#[salsa::recovery]
fn query_q1(db: &dyn Database) {
    db.query_q2();
    db.query_q3(); // <-- this never runs, thanks to unwinding
}

#[salsa::recovery]
fn query_q2(db: &dyn Database) {
    db.query_q1();
}

#[salsa::recovery]
fn query_q3(db: &dyn Database) {
    db.query_q1();
}
```
Why not invoke the recovery functions all at once?
The code currently unwinds frame by frame and invokes recovery as it goes. Another option might be to invoke the recovery function for all participants in the cycle up front. This would be fine, but it's a bit difficult to do, since the types for each cycle are different, and the `Runtime` code doesn't know what they are. We also don't have access to the memoization tables and so forth.
Parallel friendly caching
Metadata
- Author: nikomatsakis
- Date: 2021-05-29
- Introduced in: (please update once you open your PR)
Summary
- Rework query storage to be based on concurrent hashmaps instead of slots with read-write locked state.
Motivation
Two-fold:
- Simpler, cleaner, and hopefully faster algorithm.
- Enables some future developments that are not part of this RFC:
- Derived queries whose keys are known to be integers.
- Fixed point cycles so that salsa and chalk can be deeply integrated.
- Non-synchronized queries that potentially execute on many threads in parallel (required for fixed point cycles, but potentially valuable in their own right).
User's guide
No user visible changes.
Reference guide
Background: Current structure
Before this RFC, the overall structure of derived queries is as follows:
- Each derived query has a `DerivedStorage<Q>` (stored in the database) that contains:
  - the `slot_map`, a monotonically growing, indexable map from keys (`Q::Key`) to the `Slot<Q>` for the given key
  - lru list
- Each `Slot<Q>` has:
  - r-w locked query-state that can be:
    - not-computed
    - in-progress with synchronization storage:
      - `id` of the runtime computing the value
      - `anyone_waiting`: `AtomicBool` set to true if other threads are awaiting the result
    - a `Memo<Q>`
- A `Memo<Q>` has:
  - an optional value `Option<Q::Value>`
  - dependency information:
    - verified-at
    - changed-at
    - durability
    - input set (typically an `Arc<[DatabaseKeyIndex]>`)
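In code, the pre-RFC layout corresponds roughly to the following. This is a simplified sketch using std types with a hypothetical non-generic shape; the real implementation is generic over the query `Q` and uses different lock types.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

type Revision = u64;
type DatabaseKeyIndex = u32; // stand-in; the real type holds three indices

// Dependency information recorded alongside a memoized value.
struct Memo<V> {
    value: Option<V>,
    verified_at: Revision,
    changed_at: Revision,
    durability: u8,
    inputs: Arc<[DatabaseKeyIndex]>,
}

// Per-key state, guarded by the slot's internal read-write lock.
enum QueryState<V> {
    NotComputed,
    InProgress { runtime_id: usize, anyone_waiting: bool },
    Memoized(Memo<V>),
}

struct Slot<V> {
    state: RwLock<QueryState<V>>,
}

// Pre-RFC storage: one big r-w locked map from key to slot.
// (The lru list is omitted from this sketch.)
struct DerivedStorage<K, V> {
    slot_map: RwLock<HashMap<K, Arc<Slot<V>>>>,
}

fn main() {
    let storage: DerivedStorage<&'static str, u32> = DerivedStorage {
        slot_map: RwLock::new(HashMap::new()),
    };
    // Fetch path: read-lock the slot_map and hash the key to find the slot;
    // if absent, take the write lock and insert a fresh slot.
    storage.slot_map.write().unwrap().insert(
        "request_text",
        Arc::new(Slot { state: RwLock::new(QueryState::NotComputed) }),
    );
    let map = storage.slot_map.read().unwrap();
    assert!(map.contains_key("request_text"));
}
```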
Fetching the value for a query currently works as follows:
- Acquire the read lock on the (indexable) `slot_map` and hash the key to find the slot.
  - If no slot exists, acquire the write lock and insert.
- Acquire the slot's internal lock to perform the fetch operation.
Verifying a dependency uses a scheme introduced in RFC #6. Each dependency is represented as a `DatabaseKeyIndex`, which contains three indices (group, query, and key). The group and query indices are used to find the query storage via `match` statements, and then the next operation depends on the query type:

- Acquire the read lock on the (indexable) `slot_map` and use the key index to load the slot. The read lock is released afterwards.
- Acquire the slot's internal lock to perform the maybe-changed-after operation.
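The `DatabaseKeyIndex` dispatch can be pictured like this. This is a schematic sketch: in real salsa the `match` arms are generated from the query groups, and the per-query functions here (`request_text_maybe_changed_after`, `parse_maybe_changed_after`) are hypothetical.

```rust
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct DatabaseKeyIndex {
    group_index: u16,
    query_index: u16,
    key_index: u32,
}

// "Did this dependency maybe change after `revision`?" The group and query
// indices route us to the right storage; the key index then identifies the
// slot within that storage.
fn maybe_changed_after(index: DatabaseKeyIndex, revision: u64) -> bool {
    match (index.group_index, index.query_index) {
        (0, 0) => request_text_maybe_changed_after(index.key_index, revision),
        (0, 1) => parse_maybe_changed_after(index.key_index, revision),
        _ => panic!("unknown query"),
    }
}

// Hypothetical per-query implementations: pretend the input last changed
// in revision 1, so it "maybe changed after" any earlier revision.
fn request_text_maybe_changed_after(_key: u32, revision: u64) -> bool {
    revision < 1
}
fn parse_maybe_changed_after(key: u32, revision: u64) -> bool {
    request_text_maybe_changed_after(key, revision)
}

fn main() {
    let parse = DatabaseKeyIndex { group_index: 0, query_index: 1, key_index: 0 };
    assert!(maybe_changed_after(parse, 0));  // changed in rev 1, which is after rev 0
    assert!(!maybe_changed_after(parse, 1)); // no change after rev 1
}
```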
New structure (introduced by this RFC)
The overall structure of derived queries after this RFC is as follows:
- Each derived query has a `DerivedStorage<Q>` (stored in the database) that contains:
  - a set of concurrent hashmaps:
    - `key_map`: maps from `Q::Key` to an internal key index `K`
    - `memo_map`: maps from `K` to the cached memo `ArcSwap<Memo<Q>>`
    - `sync_map`: maps from `K` to a `Sync<Q>` synchronization value
  - lru set
- A `Memo<Q>` has:
  - an immutable optional value `Option<Q::Value>`
  - dependency information:
    - updatable verified-at (`AtomicCell<Option<Revision>>`)
    - immutable changed-at (`Revision`)
    - immutable durability (`Durability`)
    - immutable input set (typically an `Arc<[DatabaseKeyIndex]>`)
  - information for LRU:
    - `DatabaseKeyIndex`
    - `lru_index`, an `AtomicUsize`
- A `Sync<Q>` has:
  - `id` of the runtime computing the value
  - `anyone_waiting`: `AtomicBool` set to true if other threads are awaiting the result
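A rough sketch of the post-RFC layout, using std types as stand-ins: the real code uses concurrent hashmaps (e.g. dashmap) where this sketch uses `RwLock<HashMap>`, `ArcSwap` where it uses a plain `Arc`, and an `AtomicCell` where it uses a `Mutex` for verified-at. `SyncState` here plays the role of the RFC's `Sync<Q>`.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, AtomicUsize};
use std::sync::{Arc, Mutex, RwLock};

type Revision = u64;
type Durability = u8;
type DatabaseKeyIndex = u32; // stand-in

struct Memo<V> {
    value: Option<V>,                     // immutable
    verified_at: Mutex<Option<Revision>>, // updatable (AtomicCell in the real code)
    changed_at: Revision,                 // immutable
    durability: Durability,               // immutable
    inputs: Arc<[DatabaseKeyIndex]>,      // immutable
    lru_index: AtomicUsize,               // LRU bookkeeping
}

// The RFC's Sync<Q>: who is computing the value, and is anyone waiting?
struct SyncState {
    runtime_id: usize,
    anyone_waiting: AtomicBool,
}

struct DerivedStorage<K, V> {
    // Concurrent maps in the real implementation; RwLock<HashMap> here.
    key_map: RwLock<HashMap<K, u32>>,              // Q::Key -> internal index K
    memo_map: RwLock<HashMap<u32, Arc<Memo<V>>>>,  // K -> cached memo
    sync_map: RwLock<HashMap<u32, Arc<SyncState>>>, // K -> in-flight execution
}

fn main() {
    let storage: DerivedStorage<&'static str, String> = DerivedStorage {
        key_map: RwLock::new(HashMap::new()),
        memo_map: RwLock::new(HashMap::new()),
        sync_map: RwLock::new(HashMap::new()),
    };
    storage.key_map.write().unwrap().insert("request", 0);
    storage.memo_map.write().unwrap().insert(0, Arc::new(Memo {
        value: Some("parsed".to_string()),
        verified_at: Mutex::new(Some(1)),
        changed_at: 1,
        durability: 0,
        inputs: Arc::from(Vec::<DatabaseKeyIndex>::new()),
        lru_index: AtomicUsize::new(0),
    }));
    // Fetch path: hash the key to its internal index, then load the memo.
    let k = *storage.key_map.read().unwrap().get("request").unwrap();
    let memo = storage.memo_map.read().unwrap().get(&k).unwrap().clone();
    assert_eq!(memo.value.as_deref(), Some("parsed"));
}
```

Note how the memo's dependency fields are immutable apart from verified-at, which is what makes a cheap lock-free-style read path possible.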
Fetching the value for a derived query will work as follows:
1. Find the internal index `K` by hashing the key, as today.
   - The precise operation for this will depend on the concurrent hashmap implementation.
2. Load the memo `M: Arc<Memo<Q>>` from `memo_map[K]` (if present):
   - If verified-at is `None`, then another thread has found this memo to be invalid; ignore it.
   - Else, let `Rv` be the "last verified revision".
   - If `Rv` is the current revision, or the last change to an input with durability `M.durability` was before `Rv`:
     - Update the "last verified revision" and return the memoized value.
3. Atomically check the `sync_map` for an existing `Sync<Q>`:
   - If one exists, block on the thread within and return to step 2 after it completes:
     - If this results in a cycle, unwind as today.
   - If none exists, insert a new entry with the current runtime-id.
4. Check dependencies deeply:
   - Iterate over each dependency `D` and check `db.maybe_changed_after(D, Rv)`.
     - If no dependency has changed, update `verified_at` to the current revision and return the memoized value.
   - Otherwise, mark the memo as invalid by storing `None` in the verified-at.
5. Construct the new memo:
   - Push the query onto the local stack and execute the query function:
     - If this query is found to be a cycle participant, execute the recovery function.
   - Backdate the result if it is equal to the old memo's value.
   - Allocate the new memo.
6. Store the results:
   - Store the new memo into `memo_map[K]`.
   - Remove the query from the `sync_map`.
7. Return the newly constructed value.
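The shallow revision/durability check above (before any deep dependency verification) can be sketched as follows. The `Runtime` struct and its per-durability `last_changed` table are hypothetical stand-ins for the runtime's revision tracking.

```rust
type Revision = u64;
type Durability = usize;

// Hypothetical runtime bookkeeping: for each durability level, the revision
// in which an input of that durability last changed.
struct Runtime {
    current_revision: Revision,
    last_changed: [Revision; 3], // indexed by durability: LOW, MEDIUM, HIGH
}

// Shallow verification: a memo last verified at `Rv` is still valid if `Rv`
// is the current revision, or if no input at the memo's durability has
// changed since `Rv`. A verified-at of `None` means another thread already
// found the memo invalid.
fn shallow_verified(rt: &Runtime, verified_at: Option<Revision>, durability: Durability) -> bool {
    match verified_at {
        None => false,
        Some(rv) => rv == rt.current_revision || rt.last_changed[durability] <= rv,
    }
}

fn main() {
    let rt = Runtime { current_revision: 10, last_changed: [9, 4, 1] };
    assert!(shallow_verified(&rt, Some(10), 0)); // verified this revision: valid
    assert!(shallow_verified(&rt, Some(5), 1));  // durability-1 inputs last changed at rev 4
    assert!(!shallow_verified(&rt, Some(5), 0)); // a durability-0 input changed at rev 9
    assert!(!shallow_verified(&rt, None, 2));    // invalidated by another thread
}
```

Only when this shallow check fails does the algorithm fall back to iterating over the dependencies.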
Verifying a dependency for a derived query will work as follows:
1. Find the internal index `K` by hashing the key, as today.
   - The precise operation for this will depend on the concurrent hashmap implementation.
2. Load the memo `M: Arc<Memo<Q>>` from `memo_map[K]` (if present):
   - If verified-at is `None`, then another thread has found this memo to be invalid; ignore it.
   - Else, let `Rv` be the "last verified revision".
   - If `Rv` is the current revision, return true or false based on the memo's changed-at revision.
   - If the last change to an input with durability `M.durability` was before `Rv`:
     - Update `verified_at` to the current revision and return true or false based on the memo's changed-at revision.
   - Iterate over each dependency `D` and check `db.maybe_changed_after(D, Rv)`.
     - If no dependency has changed, update `verified_at` to the current revision and return true or false based on the memo's changed-at revision.
   - Otherwise, mark the memo as invalid by storing `None` in the verified-at.
3. Atomically check the `sync_map` for an existing `Sync<Q>`:
   - If one exists, block on the thread within and return to step 2 after it completes:
     - If this results in a cycle, unwind as today.
   - If none exists, insert a new entry with the current runtime-id.
4. Construct the new memo:
   - Push the query onto the local stack and execute the query function:
     - If this query is found to be a cycle participant, execute the recovery function.
   - Backdate the result if it is equal to the old memo's value.
   - Allocate the new memo.
5. Store the results:
   - Store the new memo into `memo_map[K]`.
   - Remove the query from the `sync_map`.
6. Return true or false depending on whether the memo was backdated.
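The backdating mentioned above ("backdate the result if it is equal to the old memo's value") can be sketched as follows. The `changed_at_for` helper is hypothetical; real salsa does this inline while building the new memo.

```rust
type Revision = u64;

// Backdating: if recomputation produced a value equal to the old memo's,
// keep the old changed-at revision, so that queries depending on this one
// can still consider themselves up to date.
fn changed_at_for<V: PartialEq>(
    old_value: Option<&V>,
    old_changed_at: Revision,
    new_value: &V,
    current_revision: Revision,
) -> Revision {
    if old_value == Some(new_value) {
        old_changed_at // backdated
    } else {
        current_revision
    }
}

fn main() {
    // Same value as before: changed-at stays at the old revision (backdated).
    assert_eq!(changed_at_for(Some(&42), 3, &42, 10), 3);
    // Different value: changed-at advances to the current revision.
    assert_eq!(changed_at_for(Some(&41), 3, &42, 10), 10);
    // No previous value: cannot backdate.
    assert_eq!(changed_at_for(None, 3, &42, 10), 10);
}
```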
Frequently asked questions
Why use `ArcSwap`?
It's a relatively minor implementation detail, but the code in this PR uses `ArcSwap` to store the values in the memo-map. In the case of a cache hit or other transient operations, this allows us to read from the arc while avoiding a full increment of the ref count. It adds a small bit of complexity, because we have to be careful to do a full load before any recursive operations, since arc-swap only gives a fixed number of "guards" per thread before falling back to more expensive loads.
Do we really need `maybe_changed_after` and `fetch`?
Yes, we do. "maybe changed after" is very similar to "fetch", but it doesn't require that we have a memoized value. This is important for LRU.
The LRU map in the code is just a big lock!
That's not a question. But it's true: I simplified the LRU code to just use a mutex. My assumption is that there are relatively few LRU-ified queries, and their values are relatively expensive to compute, so this is ok. If we find it's a bottleneck, though, I believe we could improve it by using a similar "zone scheme" to what we use now. We would add an `lru_index` to the `Memo` so that we can easily check if the memo is in the "green zone" when reading (if so, no updates are needed). The complexity there is that when we produce a replacement memo, we have to install it and swap the index. Thinking about that made my brain hurt a little, so I decided to just take the simple option for now.
How do the synchronized / atomic operations compare after this RFC?
After this RFC, to perform a read, in the best case:
- We do one "dashmap get" to map key to key index.
- We do another "dashmap get" from key index to memo.
- We do an "arcswap load" to get the memo.
- We do an "atomiccell read" to load the current revision or durability information.
dashmap is implemented with a striped set of read-write locks, so this is roughly the same (two read locks) as before this RFC. However:
- We no longer do any atomic ref count increments.
- It is theoretically possible to replace dashmap with something that doesn't use locks.
- The first dashmap get should be removable, if we know that the key is a 32 bit integer.
- I plan to propose this in a future RFC.
Yeah yeah, show me some benchmarks!
I didn't run any. I'll get on that.
Meta: about the book itself
Linking policy
We try to avoid links that easily become fragile.
Do:
- Link to `docs.rs` types to document the public API, but modify the link to use `latest` as the version.
- Link to modules in the source code.
- Create "named anchors" and embed source code directly.
Don't:
- Link to direct lines on github, even within a specific commit, unless you are trying to reference a historical piece of code ("how things were at the time").