feat: add value support to Recon #217

Merged · merged 1 commit into main on Jan 23, 2024
Conversation

nathanielc (Collaborator)

With this change Recon is a key-value store instead of simply a key store.

Recon synchronization is the same in principle, but its underlying protocol has been refactored.

The protocol design now has clear initiator and responder roles. Generally the initiator makes requests and the responder responds. Messages themselves are small and represent a small amount of work, so both nodes can be actively working concurrently to synchronize. Prior to this change, each node would be idle while the other was working, and the exchange would frequently deadlock if a single message grew too large.

Values are synchronized by sending ValueRequests and ValueResponses alongside synchronization messages.
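As a rough sketch of what those paired messages might look like (only ValueRequest and ValueResponse are named in this PR; the other variants and all fields are assumptions for illustration):

```rust
/// Illustrative initiator messages: each is small, so both sides can
/// stay busy concurrently. Variant fields are assumptions.
pub enum InitiatorMessage<K, H> {
    /// Ask the responder to compare a range of keys by its hash.
    RangeRequest { first: K, hash: H, last: K },
    /// Ask for the value stored under a key.
    ValueRequest { key: K },
    /// Signal that synchronization is complete.
    Finished,
}

/// Illustrative responder messages: values flow alongside range traffic.
pub enum ResponderMessage<K, H> {
    /// Report matching or split ranges back to the initiator.
    RangeResponse { ranges: Vec<(K, H, K)> },
    /// Deliver the value for a previously requested key.
    ValueResponse { key: K, value: Vec<u8> },
}
```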

The API into the Recon protocol is now a Stream + Sink, so any system that can transport messages in full duplex is a possible transport for Recon. The protocol implementation is no longer specific to libp2p, which means we could use HTTP in the future. However, this PR removes the HTTP implementation, as creating a full-duplex channel is not trivial with the HTTP server we are using.
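As an illustration of that shape (the Message type and synchronize function below are hypothetical, not the crate's actual API), a transport only needs to satisfy Stream + Sink bounds:

```rust
use futures::{Sink, SinkExt, Stream, StreamExt};

/// Placeholder for Recon's protocol messages.
pub struct Message;

/// Drive a synchronization over any full-duplex channel of Messages:
/// a libp2p stream today, perhaps an HTTP channel later.
pub async fn synchronize<T, E>(mut transport: T) -> Result<(), E>
where
    T: Stream<Item = Result<Message, E>> + Sink<Message, Error = E> + Unpin,
{
    while let Some(msg) = transport.next().await {
        let msg = msg?;
        // ... perform the small unit of work this message represents ...
        let reply = msg; // placeholder: real code computes a response
        transport.send(reply).await?;
    }
    Ok(())
}
```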

Resolved review threads: api/src/server.rs, one/src/lib.rs, p2p/src/node.rs (two threads), recon/src/libp2p.rs (two threads), recon/src/libp2p/handler.rs, recon/state.mmd

```rust
// Excerpt from the Range type under review; the surrounding struct is
// reconstructed from context.
pub struct Range<K, H> {
    pub first: K,
    /// Hash of all keys in between first and last exclusive.
    /// Hash is zero if there are no keys within the range.
    pub hash: HashCount<H>,
    pub last: K,
}
```
Contributor

Should this be an Option<HashCount>?

core::RangeOpen and recon::Range feel like they are similar; do we want different types for ranges based on whether the hash and count are known?

Obviously we need the hash if we want to send it across the wire.
Maybe hash_range should take a RangeOpen and return a Range?

Collaborator Author

I actually started with an Option, but it turns out that since Hash has a well-defined zero value, always expecting the hash was better. This way the SQLite code that computes a sum doesn't need logic to check for a zero and return a None instead. In other words, there is no meaningful difference between a None hash and a zero hash, so an Option only adds unneeded complexity.

Having two types works well for me. However, we could maybe have better names? And yes, a nice refactor later would be to change all the methods that accept left_fencepost and right_fencepost to simply accept a RangeOpen.
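A minimal sketch of that later refactor, assuming field and method names beyond RangeOpen, Range, HashCount, and hash_range that are not in the discussion:

```rust
/// Open range of keys; field names are assumptions.
pub struct RangeOpen<K> {
    pub start: K,
    pub end: K,
}

/// Hash plus the count of keys it covers. The hash of an empty range is
/// the well-defined zero value, so no Option is needed.
pub struct HashCount<H> {
    pub hash: H,
    pub count: u64,
}

/// A range paired with the hash and count of the keys it contains.
pub struct Range<K, H> {
    pub first: K,
    pub hash: HashCount<H>,
    pub last: K,
}

pub trait Store<K, H> {
    /// Compute the hash and count of keys strictly between start and end.
    fn hash_keys(&self, start: &K, end: &K) -> (H, u64);

    /// The suggested shape: take a RangeOpen, return a Range.
    fn hash_range(&self, range: RangeOpen<K>) -> Range<K, H> {
        let (hash, count) = self.hash_keys(&range.start, &range.end);
        Range {
            first: range.start,
            hash: HashCount { hash, count },
            last: range.end,
        }
    }
}
```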

Contributor

The None would mean unknown, not zero. But looking at it a second time, having one type for an open range and a different one for a range with its hash and count is clearer. There is no place where we have a range but don't yet know whether we have retrieved the hash and count.

smrz2001 (Collaborator) left a comment

Thanks again for the detailed overview!! LGTM 🚀🚀

Resolved review thread: recon/state.mmd

```rust
// Excerpt from the sqlx query chain under review:
.bind(&self.sort_key)
.bind(key.as_bytes())
.fetch_all(self.pool.writer())
.await;
```
Collaborator

Is it appropriate to use `?` here?

Collaborator Author

Yep, that would simplify it; I'll make the change.
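A hypothetical before/after of that simplification, with table and column names assumed for illustration: `?` propagates a sqlx::Error to the caller instead of requiring a local match on the awaited Result.

```rust
use sqlx::sqlite::{SqlitePool, SqliteRow};

/// Assumed helper showing the `?` operator on the awaited query.
async fn count_matching(
    pool: &SqlitePool,
    sort_key: &str,
    key: &[u8],
) -> Result<usize, sqlx::Error> {
    let rows: Vec<SqliteRow> =
        sqlx::query("SELECT * FROM recon WHERE sort_key = ? AND key = ?")
            .bind(sort_key)
            .bind(key)
            .fetch_all(pool)
            .await?; // was `.await;` followed by a match on the Result
    Ok(rows.len())
}
```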

nathanielc enabled auto-merge January 23, 2024 15:45
nathanielc added this pull request to the merge queue Jan 23, 2024
Merged via the queue into main with commit 5284db8 Jan 23, 2024
4 checks passed
nathanielc deleted the feat/recon-values branch January 23, 2024 16:06
The github-actions bot mentioned this pull request Feb 6, 2024