[DONT_MERGE] Rocksdb storage #527
Open
mafintosh wants to merge 61 commits into main from rocksdb
Conversation
* methods use atomic write/read batches, remove flush
* update test to teardown and use async commit
* tree.get can be passed batch
* add getByteRange helper
* proof helpers all support batches
* remove stale code
* update to new storage api
* update to use peakLastTreeNode
* use deleteRange for truncations
* reopen a tree from db storage
* remove cache
* right span available nodes
* skip cache test
* skip all but merkle tree tests
* update for batch api
* rename to dbBatch for clarity
* standard
* kill stale code
* rename to read/writeBatch
* truncation should delete parent nodes

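For context on the batch-oriented API these commits describe, here is a minimal sketch of the read/write-batch pattern, assuming a generic key-value store. The interfaces and key layout below are illustrative stand-ins, not the actual hypercore storage API.

```ts
// Illustrative read/write batch shapes (not the real storage API).
interface ReadBatch {
  get (key: string): Promise<Uint8Array | null>
  tryFlush (): void // execute all queued reads
}

interface WriteBatch {
  put (key: string, value: Uint8Array): void
  deleteRange (start: string, end: string): void // e.g. truncations delete a key range
  flush (): Promise<void> // commit everything atomically
}

interface Storage {
  createReadBatch (): ReadBatch
}

// Mirrors "tree.get can be passed batch": reuse the caller's batch if given,
// otherwise open (and flush) our own.
async function getTreeNode (storage: Storage, index: number, batch?: ReadBatch): Promise<Uint8Array | null> {
  const b = batch ?? storage.createReadBatch()
  const node = b.get('tree/' + index)
  if (!batch) b.tryFlush() // only flush batches we created ourselves
  return node
}
```
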
* use rocks db storage for bitfield pages
* update tests to use rocksdb storage
* rocksdb returns buffer with length
* enable bitfield tests

* remove oplog and load header from rocks
* move block store to rocksdb
* remove oplog entries
* create write batches and pass to persisting methods
* comment out copyPrologue and insertBatch for now
* enable lib/core test
* write header info directly to batch
* batch updates and flush to onupdate later
* update user data api
* userData key is string in storage
* merkle tree does not own storage
* merkle tree write operations take in batch
* block operations are sync
* bitfield flush is sync and takes write batch
* pass db directly to Core.open
* set user data operates on a batch
* bitfield does not ref storage

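A rough sketch of the flow these commits point at, where subsystems no longer own storage but append to one shared write batch that is flushed once. The key names and helper functions below are hypothetical, purely for illustration.

```ts
// Hypothetical write batch, matching the sketch earlier in this list.
interface WriteBatch {
  put (key: string, value: Uint8Array): void
  flush (): Promise<void>
}

// "write header info directly to batch"
function writeHeader (batch: WriteBatch, header: Uint8Array): void {
  batch.put('header', header)
}

// "merkle tree write operations take in batch" (node encoding elided)
function writeTreeNode (batch: WriteBatch, treeIndex: number, node: Uint8Array): void {
  batch.put('tree/' + treeIndex, node)
}

// "bitfield flush is sync and takes write batch"
function writeBitfieldPage (batch: WriteBatch, page: number, buf: Uint8Array): void {
  batch.put('bitfield/' + page, buf)
}

// One append touches several keyspaces but lands in a single atomic flush.
async function appendBlock (batch: WriteBatch, index: number, block: Uint8Array, node: Uint8Array, page: Uint8Array): Promise<void> {
  batch.put('block/' + index, block)
  writeTreeNode(batch, 2 * index, node) // flat-tree leaves sit at even indexes
  writeBitfieldPage(batch, Math.floor(index / 32768), page) // page size assumed for illustration
  await batch.flush()
}
```
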
* update index.js to use new core api
* encode manifest when storing
* tweak basic tests
* flush reader after caching
* tryFlush

---------
Co-authored-by: Mathias Buus <[email protected]>

* update index.js to use new core api
* encode manifest when storing
* tweak basic tests
* use opts.discoveryKey if passed
* merkle tree proof settled externally
* enable basic tests
* replicator creates batch and fulfills request separately
* ensure verify ops are using top level write batches
* bugfix: tree indexes should be multiplied by 2
* always return tree proof and await handleRequests
* update replicate tests
* always return proof request
* clear blocks now takes start and end explicitly
* update lib/multisig with new proof api
* flush manifest to batch during append
* standard
* update tests and enable as many as possible
* only finalise batch after write is flushed
* persist correct value for fork
* tidy up tests
* fix top level user data api
* update snapshot tests
* finalise reorg batch
* standard
* reorg batch is not finalised
* only clear parent nodes if ancestors > 0
* only upgrade batches need be finalised
* verifyBatchUpgrade optionally does not write
* tidy up todos
* always write manifest if we can
* fixup
* enable cache tests
* assert that merkle tree is committed before finalising
* ensure stream is drained before handling requests
* missing new keyword
* add todo for critical bugfix
* store user updates on write batch
* store pending bit sets until after flush
* use sorted list of intervals instead of map
* verify methods create their own batches
* allow updateContig to check dirty bitfield
* only one batch is allowed to be active at any time
* await flushing before processing get
* add @mafintosh interval tracker
* alloc and set using uintarray
* reenable bitfield tests
* tidy update flow and redirect for easier refactor
* add BitInterlude class
* bit interlude wraps bitfield and generates pages
* use bit interlude to calculate contig length
* optimise setting bitfield buffers
* use quickbit in bit interlude
* move all batch and flush logic into handle request
* add initial update abstraction
* extend usage of update
* move clear and clearBatch to update
* contiguous length is drop dependent
* remove last methods using write batch directly
* tidy up
* no longer need to mark pages as dirty
* remove dead props
* flushed returns if no active batch

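One of the ideas above, "use sorted list of intervals instead of map", can be sketched as a small interval set that keeps sorted, non-overlapping ranges. This is a simplified stand-in, not the actual BitInterlude class or the @mafintosh interval tracker.

```ts
// Sorted, non-overlapping [start, end) ranges of set bits.
class IntervalSet {
  private ranges: Array<[number, number]> = []

  add (start: number, end: number): void {
    const merged: Array<[number, number]> = []
    let [s, e] = [start, end]
    for (const [a, b] of this.ranges) {
      if (b < s || a > e) { merged.push([a, b]); continue } // disjoint: keep as-is
      s = Math.min(s, a) // overlapping or touching: absorb into the new range
      e = Math.max(e, b)
    }
    merged.push([s, e])
    merged.sort((x, y) => x[0] - y[0])
    this.ranges = merged
  }

  has (index: number): boolean {
    return this.ranges.some(([a, b]) => index >= a && index < b)
  }
}

const set = new IntervalSet()
set.add(0, 4)
set.add(4, 10) // merges with [0, 4) into [0, 10)
console.log(set.has(7)) // true
```
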
* implement copyPrologue
* standard fixes
* review by @mafintosh

* define scoped states per session
* return byteLength from updated batch
* pass length to createNamedSession
* add tree method for merging one tree into another
* add core method for committing state
* state stores treeLength
* session gets from state
* handle case when no tree nodes are added
* fix session get and truncate
* core truncate passes entire state down
* batch has all info after being reconciled
* update batch tests
* session uses state props
* pass whole batch to be flushed
* copy over bitfield to named session
* minor corrections
* update batch tests to be on named sessions
* update core tests
* make sure we pass state to createUpdate
* add checkout option for batches
* optionally pass discovery key to core
* each state has an independent mutex
* bitfield should full copy entire buffer
* user data is available sync for default state
* named session is always writable
* commit can specify length
* corestore owns storage
* update flushed length prop
* expose restore batch on main class
* encryption should use session state
* rebase main
* fix storage usage
* test core length is increasing
* standard fixes
* use state props and pass tree to multisig
* fix bitfield page copy
* update tests
* move to session state to dedicated abstraction
* pass parent state as capability
* ensure we have correct treeLength and up to date bitfield when opening state
* only write each bitfield page once per flush
* truncate and clear mutate treeLength
* fixes for batch tests
* enable batch tests
* overwrite if session when opts.refresh is set
* enable batch clear test
* close storage when we close the core
* storage closes automatically
* auto teardown any created core
* we have to pass an error to mutex destruction...
* close all test cores
* more missing close
* more closes
* missing session close
* close db in createIfMissing test
* more closes
* make sure all sessions are closed too
* checkout should only truncate named session
* state tracks active sessions on open and close
* screen for failing test
* core closes state
* close existing state when creating named session
* missing session close
* more closes
* pass runner to helper
* close core instead
* close state first and fix teardown order
* close state last
* missing close
* missing close
* missing close

---------
Co-authored-by: Mathias Buus <[email protected]>

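A hedged sketch of the per-session scoped state shape these commits gesture at: each state tracks its own tree length, its own mutex, and the sessions referencing it. The field and method names are guesses for illustration, not the real session-state implementation.

```ts
interface Mutex {
  lock (): Promise<() => void> // resolves with a release function
}

class ScopedState {
  treeLength = 0      // "state stores treeLength"
  activeSessions = 0  // "state tracks active sessions on open and close"

  constructor (public name: string | null, private readonly mutex: Mutex) {}

  ref (): this {
    this.activeSessions++
    return this
  }

  unref (): void {
    this.activeSessions--
  }

  // "each state has an independent mutex": truncation only serialises against
  // writes on this state, not against other sessions' states.
  async truncate (length: number): Promise<void> {
    const release = await this.mutex.lock()
    try {
      this.treeLength = Math.min(this.treeLength, length)
    } finally {
      release()
    }
  }
}
```
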
* wip
* adjust tests
* truncate issues a deletion of the range also
* start + 1 -> end

* rename clone/close to ref/unref
* add snapshot method to session state
* snapshots are not writable
* snapshotted session takes state snapshot
* fall back to core if snapshot does not have data
* review by @mafintosh
* gc snapshot
* fix snapshot teardown
* _snapshot -> storageSnapshot
* dry it a bit

---------
Co-authored-by: Mathias Buus <[email protected]>

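The "fall back to core if snapshot does not have data" behaviour reads roughly like the sketch below, with stand-in types rather than the real session and snapshot classes.

```ts
interface BlockSource {
  get (index: number): Promise<Uint8Array | null>
}

// Prefer the storage snapshot taken when the session was snapshotted;
// fall back to the live core for blocks the snapshot never saw.
async function snapshotGet (snapshot: BlockSource, core: BlockSource, index: number): Promise<Uint8Array | null> {
  const block = await snapshot.get(index)
  if (block !== null) return block
  return core.get(index)
}
```
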
chm-diederichs force-pushed the rocksdb branch from e9b2162 to 6c213d7 on September 10, 2024 12:28
---------
Co-authored-by: HDegroote <[email protected]>
Co-authored-by: Mathias Buus <[email protected]>
Co-authored-by: rafapaezbas <[email protected]>

* add failing test
* fix the active counter with the fix from rocks branch
* do not allow core teardown during peer attachment if sessioned
* can just bump sessions instead of the extra flag
* move peer session flow fully to the makepeer lifecycle

* block reqs are not readded while being processed
* check we do not have block locally before requesting
* check bitfield before deciding to wait
* add comment

---------
Co-authored-by: Mathias Buus <[email protected]>

* add force close option
* do not handle requests if core is closed
* check replicator.destroyed
* close take opts object
* _close is called with force arg instead of opts obj
* add test for force close
* force close explicitly closes all sessions

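A hedged sketch of what a force-close option along these lines could look like; the core and session shapes below are stand-ins, not the actual hypercore classes.

```ts
interface Session {
  close (): Promise<void>
}

class CoreLike {
  sessions: Session[] = []
  closed = false

  async close ({ force = false }: { force?: boolean } = {}): Promise<void> {
    if (force) {
      // "force close explicitly closes all sessions" instead of waiting for them
      await Promise.all(this.sessions.map(s => s.close()))
      this.sessions = []
    } else if (this.sessions.length > 0) {
      return // stay open while other sessions still reference the core
    }
    this.closed = true
  }
}
```
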
* only create single batch per bg call
* iterate over blocks as they were read
* catch error in case uncaught
* skip read if we do not have block

* commit acquires lock directly
* check conditions after acquiring lock

* remove unused args
* fix own length when core is empty

* add failing batch test
* simplify

* new atomic single sweep copy prologue
* remove tests that use additional which is gone now
* review by @andrewosh
* helper comment
* also fwd user data to header userData

* cross reference current and previous tree when opening state
* we only need to check sharedLength

* wip
* checkpoint
* tests pass now
* add safe close api on core and move replicator there
* more stuff in core
* all hooks live in core
* inline replicator
* premerge
* no need to skip
* add static helper for making core cores
* bye bye non-sparse mode
* Core manages replicator lifecycle (#592)
* state unref is sync
* core closes storage
* replicator no longer needs session
* remove force close from session
* fix bad condition
* core being set is invariant
* remove stale method
* query sessions on underlying core
* move autoClose onto Core
* rename close to destroy
* core destroys state
* preload is now simply a promise
* preload promise can return opts
* no need to check if state is active
* add onidle hook
* idle waits for mutex to be free also
* remove from option
* exclusive sessions
* always emit close
* fix test
* pass userdata to create for atomicity
* core always refs default state
* session explicitly refs state

---------
Co-authored-by: Christophe Diederichs <[email protected]>
Co-authored-by: Christophe Diederichs <[email protected]>

* add memory overlay test
* fix put
* implement memory overlay block deletion
* add memory overlay block deletion tests
* move tip list to abstraction and add tests
* treeNodes use tip list
* move memory overlay to hypercore-on-the-rocks
* rename peak to peek

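A simplified sketch of a "tip list" in the spirit of these commits: an in-memory list that only holds entries past an already-flushed offset, so the memory overlay can answer reads for the unflushed tip. The real abstraction may differ.

```ts
class TipList<T> {
  private offset = 0       // everything below this index already lives in storage
  private entries: T[] = []

  push (value: T): number { // append at the tip, return the absolute index
    this.entries.push(value)
    return this.offset + this.entries.length - 1
  }

  get (index: number): T | undefined { // absolute index into the overlay
    if (index < this.offset) return undefined // flushed: caller falls back to storage
    return this.entries[index - this.offset]
  }

  flushed (upTo: number): void { // drop entries that have been persisted
    if (upTo <= this.offset) return
    this.entries.splice(0, upTo - this.offset)
    this.offset = upTo
  }
}
```
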
* commit acquires lock directly
* check conditions after acquiring lock
* update dependency in place and delete blocks
* update dependency on truncate and clear if necessary
* move memory overlay to hypercore-on-the-rocks
* rename peak to peek
* use dependencies to compute flushed length
* bit-interlude supports multiple deletions
* add explicit state overwrite method
* memory overlay always has flushedLength as -1
* signature is verified against committed tree
* use flat-tree patch method
* review from @mafintosh
* only one call to splice

* change bitfield setRange api to start + end
* update usage of remote bitfield

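The setRange change reads as moving from a (start, length) style call to (start, end); a small illustration with a stand-in Bitfield interface, assuming that is the shape of the change.

```ts
interface Bitfield {
  setRange (start: number, end: number, value: boolean): void // end is exclusive
}

// Old call sites that passed a length now pass an exclusive end index.
function markDownloaded (bitfield: Bitfield, start: number, length: number): void {
  bitfield.setRange(start, start + length, true)
}
```
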
Opening the main PR so CI can run explicitly and we can follow