feat(event cache): introduce an absolute local event ordering #5225
Conversation
Codecov Report

✅ All tests successful. No failed tests found.

```
@@            Coverage Diff             @@
##             main    #5225      +/-   ##
==========================================
+ Coverage   90.13%   90.15%   +0.02%
==========================================
  Files         334      335       +1
  Lines      104717   105687     +970
  Branches   104717   105687     +970
==========================================
+ Hits        94387    95286     +899
- Misses       6277     6329      +52
- Partials     4053     4072      +19
```

☔ View full report in Codecov by Sentry.
…ferent accumulators

In the next patch, we're going to introduce another user of `UpdatesToVectorDiff` which doesn't require accumulating the `VectorDiff` updates. To make that accumulation optional, let's generalize the algorithm with a trait that carries the same semantics. No changes in functionality.
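The generalization described in the commit message can be sketched as follows. This is a minimal, self-contained illustration (the `VectorDiff` enum here is a hypothetical stand-in for `eyeball_im::VectorDiff`, and the accumulator names mirror the ones discussed in this PR rather than quoting its exact code): the traversal algorithm feeds updates into an accumulator, and what the accumulator does with them becomes a trait decision.

```rust
use std::marker::PhantomData;

// Hypothetical stand-in for `eyeball_im::VectorDiff`, to keep the sketch self-contained.
#[derive(Debug, Clone, PartialEq)]
enum VectorDiff<T> {
    PushBack { value: T },
}

// The generalized semantics: create with a size hint, then fold updates in.
trait UpdatesAccumulator<Item> {
    fn new(num_updates_hint: usize) -> Self;
    fn fold(&mut self, updates: impl IntoIterator<Item = VectorDiff<Item>>);
}

// Existing behavior: accumulate every diff into a vector (what `AsVector` needs).
struct VecAccumulator<Item>(Vec<VectorDiff<Item>>);

impl<Item> UpdatesAccumulator<Item> for VecAccumulator<Item> {
    fn new(num_updates_hint: usize) -> Self {
        Self(Vec::with_capacity(num_updates_hint))
    }
    fn fold(&mut self, updates: impl IntoIterator<Item = VectorDiff<Item>>) {
        self.0.extend(updates);
    }
}

// New user: drop the diffs entirely; only the traversal's side effects matter.
struct NullAccumulator<Item>(PhantomData<Item>);

impl<Item> UpdatesAccumulator<Item> for NullAccumulator<Item> {
    fn new(_num_updates_hint: usize) -> Self {
        Self(PhantomData)
    }
    fn fold(&mut self, _updates: impl IntoIterator<Item = VectorDiff<Item>>) {}
}

fn main() {
    let mut acc = VecAccumulator::<u32>::new(1);
    acc.fold([VectorDiff::PushBack { value: 42 }]);
    assert_eq!(acc.0.len(), 1);

    let mut null = NullAccumulator::<u32>::new(0);
    null.fold([VectorDiff::PushBack { value: 42 }]); // silently discarded
}
```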
…ordering of the current items

This is a new data structure that will help figure out a local, absolute ordering for events in the current linked chunk. It's designed to work even if the linked chunk is being lazily loaded, and it provides a few high-level primitives that make it possible to work nicely with the event cache.
…by the event cache

The one hardship is that lazy-loading updates must NOT affect the order tracker, otherwise its internal state will be incorrect (desynchronized from the store) and thus return incorrect values upon shrink/lazy-load. In this specific case, some updates must be ignored, the same way we do it for the store using `let _ = store_updates().take()` in a few places. A natural place to flush the pending updates seemed to be at the same time we flush the updates-as-vector-diffs, since they would be observable at the same time.
Thanks for the code, it's clear and well explained.
I'm wondering how the event order will be used. I'm actually intrigued by the complexity induced by a lazy-loaded vs. a fully-loaded linked chunk. I have a feeling that having negative or positive positions would work and could simplify stuff. I'm not super comfortable with the idea of loading all the chunks (even without the events): it defeats the purpose of having such a light structure.
Are you loading all the chunks to always return the position of an event? It means you want the position of an event from the store, not from the in-memory room event cache? If that is the case, I think the strategy should entirely move onto the `EventCacheStore` store trait. If that's not the case, I don't understand why we need to load all the chunks. Can you explain to me please?
```rust
pub(super) trait UpdatesAccumulator<Item> {
    /// Create a new accumulator with a rough estimation of the number of
    /// updates this accumulator is going to receive.
    fn new(num_updates_hint: usize) -> Self;

    /// Fold the accumulator with the given new updates.
    fn fold(&mut self, updates: impl IntoIterator<Item = VectorDiff<Item>>);
}
```
Couldn't we use `std::default::Default` and `std::iter::Extend` here? It would probably avoid re-inventing the wheel. Just a personal taste.

It should roughly look like:

```rust
pub(super) trait UpdatesAccumulator<Item>: Default + Extend<Item> {}
```

(a trait might even not be necessary, actually)

We lose the `num_updates_hint` though; I don't know how impactful that is.
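To make the suggestion concrete, here is a rough sketch of the `Default` + `Extend` shape (names and the `drain_into` driver are illustrative, not the PR's actual code): the std traits cover construction and folding, at the cost of the capacity hint.

```rust
// Hypothetical stand-in for `eyeball_im::VectorDiff`, to keep the sketch self-contained.
#[derive(Debug, Clone, PartialEq)]
enum VectorDiff<T> {
    PushBack { value: T },
}

// The suggested shape: a marker trait over std machinery.
trait UpdatesAccumulator<Item>: Default + Extend<VectorDiff<Item>> {}

// `Vec` already implements `Default` and `Extend`, so it qualifies as-is.
impl<Item> UpdatesAccumulator<Item> for Vec<VectorDiff<Item>> {}

// Generic driver code then only needs the two std traits.
fn drain_into<Item, A: UpdatesAccumulator<Item>>(
    updates: impl IntoIterator<Item = VectorDiff<Item>>,
) -> A {
    // Note: the `num_updates_hint` capacity optimization is lost here,
    // since `Default` takes no arguments.
    let mut acc = A::default();
    acc.extend(updates);
    acc
}

fn main() {
    let acc: Vec<VectorDiff<u32>> = drain_into([VectorDiff::PushBack { value: 7 }]);
    assert_eq!(acc.len(), 1);
}
```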
```rust
        Vec::with_capacity(num_updates_hint)
    }

    fn fold(&mut self, updates: impl IntoIterator<Item = VectorDiff<Item>>) {
```
If you prefer to not use `Default` and `Extend`, please rename `fold` to `extend`:

- `fold` produces a single value by combining a previous value and a new one,
- `extend` takes a mutable reference to a collection and pushes items at its end.

The meanings are too different.
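The distinction the review draws maps directly onto the std library's own use of the two names:

```rust
fn main() {
    // `Iterator::fold` consumes an iterator and combines it into one new value...
    let sum = [1, 2, 3].into_iter().fold(0, |acc, x| acc + x);
    assert_eq!(sum, 6);

    // ...while `Extend::extend` mutates an existing collection by appending items.
    let mut v = vec![1];
    v.extend([2, 3]);
    assert_eq!(v, vec![1, 2, 3]);
}
```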
```rust
    pub fn order_tracker(
        &mut self,
        all_chunks_iterator: Option<Iter<'_, CAP, Item, Gap>>,
    ) -> Option<OrderTracker<Item, Gap>>
```
Idea 💡: If the order is defined as `i64` (which can represent negative indexes), you may not need to get an overview of all the chunks or items, thus supporting lazy-loaded as well as fully-loaded linked chunks seamlessly.
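A toy sketch of this idea (hypothetical, not what the PR implements): anchor order `0` at the first item initially loaded in memory, and hand out negative orders as older chunks are lazily paginated in, so that already-assigned orders never need renumbering.

```rust
fn main() {
    // Initially loaded items get non-negative orders 0, 1, 2, ...
    let mut orders: Vec<(i64, &str)> = vec![(0, "c"), (1, "d")];

    // Lazy-loading two older events extends the range to -1, then -2,
    // without touching the existing entries.
    for (i, ev) in ["a", "b"].iter().rev().enumerate() {
        orders.insert(0, (-(i as i64) - 1, *ev));
    }

    orders.sort_by_key(|(order, _)| *order);
    assert_eq!(orders, vec![(-2, "a"), (-1, "b"), (0, "c"), (1, "d")]);
}
```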
```rust
use crate::linked_chunk::UpdateToVectorDiff;

#[derive(Debug)]
pub struct OrderTracker<Item, Gap> {
```
Can we have a documentation for this type please 🥹?
```rust
impl<Item> super::UpdatesAccumulator<Item> for NullAccumulator<Item> {
    fn new(_num_updates_hint: usize) -> Self {
        Self { _phantom: std::marker::PhantomData }
    }

    fn fold(&mut self, _updates: impl IntoIterator<Item = eyeball_im::VectorDiff<Item>>) {
        // Nothing to do, this is a no-op accumulator.
    }
}
```
If we use `Default` + `Extend` as suggested earlier, `NullAccumulator` can be `()` (because, yeah, `Extend` is implemented for `()`).

Also, removing the need for the special trait `UpdatesAccumulator` mechanically removes the need for the `PhantomData` (for the `Item` generic).
```diff
@@ -294,6 +294,12 @@ impl<Item, Gap> UpdatesInner<Item, Gap> {
         slice
     }

+    /// Has the given reader, identified by its [`ReaderToken`], some pending
+    /// updates, or has it consumed all the pending updates?
+    pub(super) fn is_reader_up_to_date(&self, token: ReaderToken) -> bool {
```
Can we use `debug_assertions` here please?

```rust
#[cfg(debug_assertions)]
```

So that it exists only for non-release builds. That's what `debug_assert!` uses internally.
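For illustration, here is what that gating looks like on a stand-in type (the struct and field are hypothetical, they only mimic the shape of the method under review): the helper is compiled only in non-release builds, so any call site must be gated the same way.

```rust
struct Updates {
    pending: usize,
}

impl Updates {
    // Only exists when debug assertions are enabled (i.e. non-release builds).
    #[cfg(debug_assertions)]
    fn is_reader_up_to_date(&self) -> bool {
        self.pending == 0
    }
}

fn main() {
    let updates = Updates { pending: 0 };

    // The call site must be gated too, otherwise release builds won't compile.
    #[cfg(debug_assertions)]
    assert!(updates.is_reader_up_to_date());

    // Keep the field "used" in release builds as well.
    let _ = updates.pending;
}
```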
```rust
    /// If `inhibit` is `true`, the updates are ignored; otherwise, they are
    /// consumed normally.
```
When do we need to use `inhibit = true`?
```rust
    ) -> Self {
        let mut linked_chunk = linked_chunk.unwrap_or_else(LinkedChunk::new_with_update_history);

        let chunks_updates_as_vectordiffs = linked_chunk
            .as_vector()
            // SAFETY: The `LinkedChunk` has been built with `new_with_update_history`, so
```
Why remove the `SAFETY` comment?
Because the `expect()` says the same thing one line below, so there's no real value in repeating it in a code comment IMO.
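The point being made, on a toy example (the message text is illustrative): the `expect()` message itself documents the invariant, so a separate `SAFETY`-style comment above it would merely duplicate it.

```rust
fn main() {
    let history: Option<Vec<u32>> = Some(vec![1, 2]);

    // The invariant lives in the expect message, not in a comment above it.
    let diffs = history
        .expect("the `LinkedChunk` must have been built with `new_with_update_history`");

    assert_eq!(diffs.len(), 2);
}
```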
```diff
@@ -231,6 +255,35 @@ impl RoomEvents {
         self.chunks.items()
     }

+    /// Return the order of an event in the room (main) linked chunk.
```
You can remove “(main)” I think:
- it can only create confusion,
- to me, it's clear that a room has a linked chunk, and that each thread will have its own linked chunk,
- you are saying “in the room”, so we know it's not for the thread.
The comment is already crystal clear, you don't need “(main)”.
You're right! I suppose I've confused myself with respect to the "main conversation thread", but the room is so much more than that…
```rust
        // If loading the full linked chunk failed, we'll clear the event cache, as it
        // indicates that at some point, there's some malformed data.
        let fully_loaded_linked_chunk =
            match Self::load_full_linked_chunk(&*store_lock, linked_chunk_id).await {
```
It seems we are loading all chunks with all their items. That's quite annoying and quite a blocker to me: it will impact performance too much. At best, we must only load the chunks without their items.
Thanks for the review! These are very valid questions.
The requirement is that we want to be able to compare the relative position of two events, be they loaded in the in-memory linked chunk or not. So, this must work, independently of the actual position of the event. Some alternative ways to build this:
It sounds like the first and last solutions might be preferable in the short term, and would both require loading only the metadata of a chunk. So I could start with this. Then, I find the concern around performance totally legit. The best way to resolve it would be benchmarking or using some real-world measures, so I may look into this and get back to you here.
As said above, loading all the chunks at start seemed like a reasonable solution to build the initial state, and then maintain it cleanly over time. But yeah, at the very least we'd need to load only the minimal amount of data for the order tracker to work correctly.
We've discussed this, and lazy loading the order tracker brings many other complications, so we're going to roll with the first suggestion only (load a limited set of metadata about each chunk of a linked chunk, and use that instead of the fully loaded linked chunk), and see what comes out of that in terms of performance. We can load the entire metadata in a single SQL query, so that ought to be rather efficient.
This PR introduces a local absolute ordering for items of a linked chunk, or equivalently, for events within a room's timeline. The idea is to reuse the same underlying mechanism we had for `AsVector`, but restricting it to only counting the number of items in a chunk; given an item's `Position`, we can then compute its absolute order as the total number of items before its containing chunk + its index within the chunk.

This will help us order edits that would apply to a thread event, for instance; this is deferred to a future PR, to not make this one too heavyweight.

Attention to reviewers: sorry, this is a bulky PR (mostly because of tests), but I think it's important to see how the `OrderTracker` methods are used in 280bd32, to make sense of their raison d'être.

Part of #4869 / #5123.
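The counting scheme the description outlines can be reduced to a toy sketch (the `Position` struct and function names here are illustrative, not the crate's actual API): a position is a pair of chunk index and in-chunk index, and its absolute order is the number of items in all earlier chunks plus the in-chunk index.

```rust
/// Illustrative stand-in for the linked chunk's position type.
struct Position {
    chunk: usize,
    index: usize,
}

/// `items_per_chunk[i]` is the number of items in the i-th chunk
/// (a gap chunk would contribute 0).
fn absolute_order(items_per_chunk: &[usize], pos: &Position) -> usize {
    // Total number of items before the containing chunk, plus the index within it.
    items_per_chunk[..pos.chunk].iter().sum::<usize>() + pos.index
}

fn main() {
    let items_per_chunk = [3, 2, 4];

    // Second item of the third chunk: 3 + 2 items precede its chunk, plus index 1.
    let pos = Position { chunk: 2, index: 1 };
    assert_eq!(absolute_order(&items_per_chunk, &pos), 6);

    // First item of the first chunk is order 0.
    assert_eq!(absolute_order(&items_per_chunk, &Position { chunk: 0, index: 0 }), 0);
}
```

Note that this order is only stable until items are inserted or removed in earlier chunks, which is why the PR maintains the counts incrementally via the update mechanism instead of recomputing them.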