
GD Optimizations #8

Closed · wants to merge 91 commits from the gd-optimization branch

Conversation

coderofstuff (Owner):

Initial changes from Ori's branch, to easily view it.

coderofstuff commented Jun 1, 2024

Errors Encountered

Error 1

Running this code as it is will result in the error:

2024-05-31 23:21:55.059-06:00 [INFO ] Validating level 2 from the pruning point proof (2000 headers)
2024-05-31 23:21:55.463-06:00 [INFO ] Validating level 1 from the pruning point proof (2000 headers)
2024-05-31 23:21:55.882-06:00 [INFO ] Validating level 0 from the pruning point proof (2000 headers)
2024-05-31 23:22:01.507-06:00 [ERROR] thread 'tokio-runtime-worker' panicked at consensus/src/processes/pruning_proof/mod.rs:291:83: called `Result::unwrap()` on an `Err` value: HashAlreadyExists(c0cf645d4d638738bed2bb6c65c72f2b347bda57e958df24b8f921ff9f6cb0d6)
thread 'tokio-runtime-worker' panicked at consensus/src/processes/pruning_proof/mod.rs:291:83:
called `Result::unwrap()` on an `Err` value: HashAlreadyExists(c0cf645d4d638738bed2bb6c65c72f2b347bda57e958df24b8f921ff9f6cb0d6)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Exiting...

This first one is resolved by doing the following (this assumes self.ghostdag_stores[0] == self.ghostdag_primary_store is the cause of the HashAlreadyExists):

    if level == 0 {
        let gd = if let Some(gd) = trusted_gd_map.get(&header.hash) {
            gd.clone()
        } else {
            let calculated_gd = self.ghostdag_managers[level].ghostdag(&parents);
            // Override the ghostdag data with the real blue score and blue work
            GhostdagData {
                blue_score: header.blue_score,
                blue_work: header.blue_work,
                selected_parent: calculated_gd.selected_parent,
                mergeset_blues: calculated_gd.mergeset_blues.clone(),
                mergeset_reds: calculated_gd.mergeset_reds.clone(),
                blues_anticone_sizes: calculated_gd.blues_anticone_sizes.clone(),
            }
        };

        self.ghostdag_primary_store.insert(header.hash, Arc::new(gd)).unwrap();
    } else {
        // This line used to be above the if-statement
        self.ghostdag_stores[level].insert(header.hash, Arc::new(gd)).unwrap();
    }

Error 2

Next, I get an error later on related to relations_store/relations_service:

2024-05-31 13:40:11.804-06:00 [WARN ] Failed to find sufficient root for level 38 after 12 tries. Retrying again to find with depth 10240000
2024-05-31 13:40:11.805-06:00 [ERROR] thread 'tokio-runtime-worker' panicked at consensus/src/processes/pruning_proof/mod.rs:878:63: called `Result::unwrap()` on an `Err` value: KeyNotFound(RelationsChildren/38/5d6d18f0ca4620fc2612b3686ddbc026fcf6385c5c32b679f81b03757554ca9c)
thread 'tokio-runtime-worker' panicked at consensus/src/processes/pruning_proof/mod.rs:878:63:
called `Result::unwrap()` on an `Err` value: KeyNotFound(RelationsChildren/38/5d6d18f0ca4620fc2612b3686ddbc026fcf6385c5c32b679f81b03757554ca9c)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

coderofstuff left a review comment:

Initial thoughts on the code

Review comments on:
  • consensus/src/processes/ghostdag/protocol.rs
  • consensus/src/processes/pruning_proof/mod.rs (3 threads)
coderofstuff force-pushed the gd-optimization branch 3 times, most recently from ad7ef2c to 1645929 on June 4, 2024 05:36
Inline review comment from coderofstuff on this hunk:

            selected_tip,
            level,
            proof_selected_tip_gd,
        ) {
            let selected_tip_blue_work_diff =
                SignedInteger::from(proof_selected_tip_gd.blue_work) - SignedInteger::from(proof_common_ancestor_gd.blue_work);
            for parent in self.parents_manager.parents_at_level(&current_pp_header, level).iter().copied() {

The line

    let parent_blue_work = self.ghostdag_stores[level_idx].get_blue_work(parent).unwrap();

needs a better substitute.

Inline review comment from coderofstuff on this hunk:

    if proof_selected_tip_gd.blue_score < 2 * self.pruning_proof_m {
        continue;
    }

    match relations_read[level_idx].get_parents(current_pp).unwrap_option() {
        Some(parents) => {
            if parents

Rethink:

    self.ghostdag_stores[level_idx].get_blue_score(parent).unwrap() < 2 * self.pruning_proof_m)
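
Both inline comments above flag the same hazard that later surfaces as Error 2: calling .unwrap() on a GHOSTDAG-store read for a parent that may never have been inserted at that level. Below is a toy sketch of one defensive direction (looking blue scores up as Options and skipping parents with no entry), purely for illustration; the function shape, the toy hash type, and the lookup closure are assumptions, not the PR's code:

    /// Toy sketch: pair each level-parent with its blue score, skipping parents
    /// that have no GHOSTDAG entry at this level rather than unwrapping.
    /// What to do with parents that fail the 2M bar is left to the caller; the
    /// point here is only the non-panicking lookup.
    fn parents_with_scores(
        level_parents: impl Iterator<Item = u64>,    // toy stand-in for block hashes
        blue_score_of: impl Fn(u64) -> Option<u64>,  // None means: not in this level's GD store
    ) -> impl Iterator<Item = (u64, u64)> {
        level_parents.filter_map(move |p| blue_score_of(p).map(|score| (p, score)))
    }

Callers can then apply the 2 * pruning_proof_m comparison to the returned scores without risking a KeyNotFound panic.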

@coderofstuff

In 01befd3 I added a fix to the find_selected_parent_header_at_level logic, which resolves Error 2 described above. apply_proof takes only the parents at a level of a header, filters them to those parents that have GD entries (in the old GD stores), and inserts the entries into the relations store at that level. Error 2 occurs because the old find_selected_parent_header_at_level code attempts to use all the parents at a level, even those that were not inserted via apply_proof. The fix for now is to make the filtering consistent; a sketch of the idea follows. There may be a different fix where apply_proof is also updated, but I didn't want to tackle that right now.
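
A minimal toy sketch of that consistent-filtering idea (the function shape, the toy hash type, and the Option-returning lookup are assumptions for illustration; the real change lives in find_selected_parent_header_at_level):

    /// Toy sketch: choose the selected parent at a level the same way apply_proof
    /// populated that level, i.e. only among parents that actually have a GD entry
    /// there, taking the one with maximum blue work.
    /// (Ties in blue work, and the hash tiebreak the real rule uses, are ignored here.)
    fn selected_parent_at_level(
        level_parents: &[u64],                       // toy stand-in for block hashes
        blue_work_of: impl Fn(u64) -> Option<u128>,  // None means: no GD entry at this level
    ) -> Option<u64> {
        level_parents
            .iter()
            .copied()
            // Keep only parents that were inserted for this level via apply_proof.
            .filter_map(|p| blue_work_of(p).map(|bw| (p, bw)))
            // Selected parent: the filtered parent with the maximum blue work.
            .max_by_key(|&(_, bw)| bw)
            .map(|(p, _)| p)
    }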

With this fix, calc_gd_for_all_levels now completes, but I'm encountering a new error, which is described below.

Error 3

[consensus/src/processes/pruning_proof/mod.rs:1078:25] next_level_chain_m.len() = 585
[consensus/src/processes/pruning_proof/mod.rs:1079:25] ghostdag_stores[level +
                    1].get_compact_data(next_level_tip).unwrap().blue_score = 2001
[consensus/src/processes/pruning_proof/mod.rs:1080:25] ghostdag_stores[level +
                    1].get_compact_data(next_level_block_m).unwrap().blue_score = 998
[consensus/src/processes/pruning_proof/mod.rs:1081:25] ghostdag_stores[level].get_compact_data(selected_tip).unwrap().blue_score = 79920001
[consensus/src/processes/pruning_proof/mod.rs:1082:25] ghostdag_stores[level].get_compact_data(block_at_depth_2m).unwrap().blue_score = 79918000
[consensus/src/processes/pruning_proof/mod.rs:1083:25] level = 0
[consensus/src/processes/pruning_proof/mod.rs:1083:25] selected_tip = e3470e13009f2f42c3935d39035d87de6e18ccff819fe563a99aa0122a812f74
[consensus/src/processes/pruning_proof/mod.rs:1083:25] block_at_depth_2m = be761d59dd509e84d821b88a8a95e1075ff628a8322aeb66a3279aafb2039a98
[consensus/src/processes/pruning_proof/mod.rs:1083:25] root = d21241657e7139b1a23baa90e13fb1405eac488ae82b78d2f2ca1c340b8a1dd2
2024-06-10 23:41:37.683-06:00 [ERROR] thread 'tokio-runtime-worker' panicked at consensus/src/processes/pruning_proof/mod.rs:1084:25: Assert 2M chain -- missing block be761d59dd509e84d821b88a8a95e1075ff628a8322aeb66a3279aafb2039a98 at index 1156 out of 1157 chain blocks
thread 'tokio-runtime-worker' panicked at consensus/src/processes/pruning_proof/mod.rs:1084:25:
Assert 2M chain -- missing block be761d59dd509e84d821b88a8a95e1075ff628a8322aeb66a3279aafb2039a98 at index 1156 out of 1157 chain blocks
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Exiting...

Notes:

  • When I uncomment the line assert!(self.reachability_service.is_dag_ancestor_of(root, old_root));, the assertion fails, which means that the new root found by find_sufficient_root is above the old root.
  • Looking at the hashes in the error above, this is indeed the case: the new root found is a child of the old root.
  • I think this means that either find_sufficient_root currently has some off-by-one-like issue, or the chain_up_to_depth logic needs tweaking (see the sketch after this list). I'm exploring both routes.
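
As an aside on the second suspect, here is a toy sketch of a chain_up_to_depth-style walk with the boundary comparison called out, since that is where an off-by-one would hide. The function shape, the toy hash type, and the lookup closures are assumptions for illustration, not the repository's implementation:

    /// Toy sketch: walk the selected-parent chain from `high` downward, collecting
    /// blocks until we are at least `depth` blue-score units below `high`.
    /// `selected_parent_of` returns None once the chain runs out.
    fn chain_up_to_depth(
        high: u64,                                        // toy stand-in for a block hash
        blue_score_of: impl Fn(u64) -> u64,
        selected_parent_of: impl Fn(u64) -> Option<u64>,
        depth: u64,
    ) -> Vec<u64> {
        let high_score = blue_score_of(high);
        let mut chain = vec![high];
        let mut current = high;
        // Continuing on `< depth` versus `<= depth` decides whether the block exactly
        // at the 2M boundary ends up in the chain, which is the kind of off-by-one
        // being investigated above.
        while high_score - blue_score_of(current) < depth {
            match selected_parent_of(current) {
                Some(parent) => {
                    chain.push(parent);
                    current = parent;
                }
                None => break, // ran out of chain before reaching the requested depth
            }
        }
        chain
    }

The uncommented assertion is the complementary invariant: the root returned by find_sufficient_root must not be above (a descendant of) the old root. When it is, blocks such as block_at_depth_2m can fall outside the rebuilt chain, which matches the "missing block ... at index 1156 out of 1157 chain blocks" panic.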

coderofstuff force-pushed the gd-optimization branch 4 times, most recently from 03004e4 to e5ff82b on June 19, 2024 23:06
coderofstuff and others added 17 commits June 25, 2024 18:56
Co-authored-by: Ori Newman <[email protected]>
Co-authored-by: Ori Newman <[email protected]>
Co-authored-by: Ori Newman <[email protected]>
apply_proof only inserts parent entries for a header from the proof into
the relations store for a level if there was GD data in the old stores
for that header.

This adds a check to filter out parent records that are not in the relations store.
…cient root

This happens when there aren't enough headers in the pruning proof, but it still satisfies validation.
relations.get_parents on GD gets extra parents that aren't in the current GD store, so get_blue_work throws an error.

Next, ORIGIN was missing from the GD, so add that.
gvbgduh and others added 27 commits October 3, 2024 00:36
* replace statrs and statest deps.

* remove todo in Cargo.toml and fmt & lints.

* do a run of `cargo audit fix` for some miscellaneous reports.

* use maintained alt ks crate.

* add cargo.lock.

* update

* use new command

* newline

* refresh cargo lock with a few more version updates

* fix minor readme glitches

---------

Co-authored-by: Michael Sutton <[email protected]>
* sighash reused trait

* benches are implemented

* use cache per iteration per function

* fix par versions

* fix benches

* use upgradable read

* use concurrent cache

* use hashcache

* dont apply cache

* rollback rwlock and indexmap.

* remove scc

* apply par iter to `check_scripts`

* refactor check_scripts fn, fix tests

* fix clippy

* add bench with custom threadpool

* style: fmt

* suppress warnings

* Merge branch 'master' into bcm-parallel-processing

* renames + map err

* reuse code

* bench: avoid exposing cache map + iter pools in powers of 2

* simplify check_sig_op_counts

* use thread pool also if a single input
1. to avoid confusion
2. since tokio blocking threads are not meant to be used for processing anyway

* remove todo

* clear cache instead of recreate

* use and_then (so map_err can be called in a single location)

* extend check scripts tests for better coverage of the par_iter case

---------

Co-authored-by: Michael Sutton <[email protected]>
* Parallelize MuHash calculations

MuHash calculations are additive and can be done in chunks then later combined

* Reimplement validate tx with muhash as a separate fn

* Use smallvec for muhash parallel

Co-authored-by: Michael Sutton <[email protected]>

* Add independent rayon order test

* Filter some data

* Use tuple_windows for test iter

---------

Co-authored-by: Michael Sutton <[email protected]>
…#581)

* semantic: add `from` ext methods

* muhash from txs benchmark

* optimization: in u3072 mul test if lhs is one

* extract `parallelism_in_power_steps`

* comment
- use BlueWorkType
- fix some comments
* rust 1.82 fixes

* sig op count std check
Co-authored-by: Michael Sutton <[email protected]>
@coderofstuff

Merged in mainline

coderofstuff deleted the gd-optimization branch on November 11, 2024 03:17