Mirror of https://github.com/loro-dev/loro.git, synced 2024-11-28 17:41:49 +00:00
# compact-bytes

An append-only bytes arena. Appending new bytes returns a pointer to a slice of the underlying append-only buffer, and the arena reuses previously allocated bytes to reduce memory usage when possible.
## Example
```rust
use compact_bytes::CompactBytes;

let mut arena = CompactBytes::new();
let bytes1 = arena.alloc(b"hello");
let bytes2 = arena.alloc(b"world");
assert_eq!(bytes1.as_bytes(), b"hello");
assert_eq!(bytes2.as_bytes(), b"world");

// bytes3 points to the same underlying bytes as bytes1
let bytes3 = arena.alloc(b"hello");
assert_eq!(bytes3.as_bytes(), b"hello");
assert_eq!(bytes3.start(), bytes1.start());
assert_eq!(bytes3.start(), 0);
assert_eq!(bytes3.end(), 5);

// Allocating short byte sequences does not reuse old bytes.
// This makes it easier to merge neighboring slices, so the
// serialized form is more compact.
let mut bytes4 = arena.alloc(b"h");
assert_eq!(bytes4.start(), 10);
let bytes5 = arena.alloc(b"e");
assert_eq!(bytes5.start(), 11);

// bytes4 and bytes5 can be merged
assert!(bytes4.can_merge(&bytes5));
assert!(bytes4.try_merge(&bytes5).is_ok());
```
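The deduplication behavior above can be illustrated with a toy version. This is a hypothetical sketch (the `ToyArena` type and its fields are invented here, not the crate's internals): it deduplicates only whole identical slices via a `HashMap` index, and skips the short-bytes heuristic and substring reuse that the real crate applies.

```rust
use std::collections::HashMap;
use std::ops::Range;

// Toy append-only arena with whole-slice deduplication (hypothetical).
struct ToyArena {
    buf: Vec<u8>,
    index: HashMap<Vec<u8>, Range<usize>>,
}

impl ToyArena {
    fn new() -> Self {
        ToyArena { buf: Vec::new(), index: HashMap::new() }
    }

    /// Return the range of `bytes` inside the arena, reusing an existing
    /// allocation when an identical slice was stored before.
    fn alloc(&mut self, bytes: &[u8]) -> Range<usize> {
        if let Some(r) = self.index.get(bytes) {
            return r.clone();
        }
        let start = self.buf.len();
        self.buf.extend_from_slice(bytes);
        let range = start..self.buf.len();
        self.index.insert(bytes.to_vec(), range.clone());
        range
    }
}

fn main() {
    let mut arena = ToyArena::new();
    let a = arena.alloc(b"hello");
    let b = arena.alloc(b"world");
    let c = arena.alloc(b"hello"); // hits the index, reuses a's storage
    assert_eq!(a, 0..5);
    assert_eq!(b, 5..10);
    assert_eq!(c, a);
}
```

The append-only buffer is what makes returned ranges stable: bytes are never moved or freed, so a `Range` handed out earlier remains valid for the arena's lifetime.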
In advance mode, the arena reuses old bytes as aggressively as possible, breaking the input into small pieces so that existing slices can be reused.
```rust
use compact_bytes::CompactBytes;
use std::ops::Range;

let mut arena = CompactBytes::new();
let bytes1 = arena.alloc(b"hello");
// The input is broken into 3 pieces: "hi ", "hello", and " world",
// where "hello" reuses the existing allocation.
let bytes2: Vec<Range<usize>> = arena.alloc_advance(b"hi hello world");
```
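One plausible way to implement this splitting is a greedy longest-match scan. The sketch below is an assumption for illustration only (the function name `alloc_advance`, the `min_match` threshold, and the naive O(n·m) search are all invented here; the real crate uses its own index): at each input position it finds the longest run already present in the buffer, reuses it if long enough, and otherwise appends.

```rust
use std::ops::Range;

// Naive greedy substring reuse (hypothetical sketch, not the crate's code).
fn alloc_advance(buf: &mut Vec<u8>, input: &[u8], min_match: usize) -> Vec<Range<usize>> {
    let mut out: Vec<Range<usize>> = Vec::new();
    let mut i = 0;
    while i < input.len() {
        // Longest match of input[i..] anywhere in buf (O(n*m), for clarity).
        let (mut best_start, mut best_len) = (0, 0);
        for s in 0..buf.len() {
            let mut l = 0;
            while s + l < buf.len() && i + l < input.len() && buf[s + l] == input[i + l] {
                l += 1;
            }
            if l > best_len {
                best_start = s;
                best_len = l;
            }
        }
        if best_len >= min_match {
            // Reuse the existing slice instead of appending a copy.
            push_range(&mut out, best_start..best_start + best_len);
            i += best_len;
        } else {
            // No long-enough match: append one byte to the arena.
            let start = buf.len();
            buf.push(input[i]);
            push_range(&mut out, start..start + 1);
            i += 1;
        }
    }
    out
}

// Merge adjacent ranges so appended runs like "hi " come back as one piece.
fn push_range(out: &mut Vec<Range<usize>>, r: Range<usize>) {
    if let Some(last) = out.last_mut() {
        if last.end == r.start {
            last.end = r.end;
            return;
        }
    }
    out.push(r);
}

fn main() {
    let mut buf = b"hello".to_vec();
    let pieces = alloc_advance(&mut buf, b"hi hello world", 4);
    // "hi " and " world" are appended; "hello" reuses the existing bytes.
    assert_eq!(pieces, vec![5..8, 0..5, 8..14]);
}
```

The trade-off mirrors the README's note about short bytes: a low `min_match` reuses more aggressively but fragments the output into many small ranges, while a higher threshold keeps ranges mergeable and the serialized form compact.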
Alternatively, use `append` to skip reuse entirely:

```rust
use compact_bytes::CompactBytes;

let mut arena = CompactBytes::new();
let bytes1 = arena.alloc(b"hello");
let bytes2 = arena.append(b"hello");
assert_ne!(bytes1.start(), bytes2.start());
```
## TODO

- A more memory-efficient implementation