
chore(DO NOT MERGE):skunkworks #267

Draft
wants to merge 27 commits into kw/msm-ref-pippenger

Conversation

kevaundray
Contributor

@kevaundray kevaundray commented Sep 6, 2024

This is based on #247, as I needed the batch_add method.

This scales better than the blst method thanks to batch_multi_add, though I'd like to see if we can port the way blst does it over to this and then build batch_multi_add on top of that ported method. The blst method seems to be a lot more cache-friendly.
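For context, the trick that usually makes a batch_add for affine points pay off is simultaneous inversion (Montgomery's trick): the n field inversions needed to add n pairs of affine points are traded for a single inversion plus roughly 3n multiplications. Below is a minimal, self-contained sketch of that trick over a toy 64-bit prime field; the field, names, and code are illustrative only and are not taken from this PR or from blst.

```rust
// Toy prime field (Goldilocks prime), used only so the example runs on its own.
const P: u64 = 0xffff_ffff_0000_0001;

fn mul(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P as u128) as u64
}

fn inv(a: u64) -> u64 {
    // Fermat's little theorem: a^(p-2) mod p.
    let (mut base, mut exp, mut acc) = (a, P - 2, 1u64);
    while exp > 0 {
        if exp & 1 == 1 {
            acc = mul(acc, base);
        }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

/// Invert every element of `xs` in place using a single field inversion
/// (Montgomery's trick). Assumes all elements are nonzero.
fn batch_inverse(xs: &mut [u64]) {
    // prefix[i] = xs[0] * ... * xs[i-1]
    let mut prefix = Vec::with_capacity(xs.len());
    let mut acc = 1u64;
    for &x in xs.iter() {
        prefix.push(acc);
        acc = mul(acc, x);
    }
    // One inversion of the running product of all elements.
    let mut inv_acc = inv(acc);
    // Walk backwards, peeling off one inverse per element.
    for i in (0..xs.len()).rev() {
        let x = xs[i];
        xs[i] = mul(inv_acc, prefix[i]);
        inv_acc = mul(inv_acc, x);
    }
}

fn main() {
    let mut xs = vec![3u64, 7, 11, 12_345];
    let original = xs.clone();
    batch_inverse(&mut xs);
    for (x, x_inv) in original.iter().zip(xs.iter()) {
        assert_eq!(mul(*x, *x_inv), 1);
    }
    println!("all inverses check out");
}
```

The same structure is what lets a batched addition amortize one inversion across many independent affine additions, which is presumably where the cache-friendliness discussion above comes in.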

EDIT:

This branch has been modified to be skunkworks; overall we get a 50% decrease in time. It implements the modern papers on scalar multiplication and extends them to do simultaneous scalar multiplications. The Hunter algorithm seems to be the best, though it is more complex than the Strauss-like method used by blst.
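For readers unfamiliar with the Strauss-like method mentioned above: the idea (often called Shamir's trick) is to compute a*P + b*Q in a single double-and-add pass over the bits of both scalars, using a small precomputed table of combinations. Here is a minimal sketch in a toy additive group, where plain integers stand in for curve points so the example runs on its own; none of the names or types come from this branch.

```rust
/// Toy group element: integers under addition stand in for curve points,
/// so "scalar multiplication" is ordinary multiplication.
type Point = i128;

fn double(p: Point) -> Point {
    p + p
}

fn add(p: Point, q: Point) -> Point {
    p + q
}

/// Compute a*p + b*q with one shared left-to-right double-and-add pass
/// over the bits of both scalars (Strauss/Shamir).
fn strauss_shamir(a: u64, p: Point, b: u64, q: Point) -> Point {
    // Precompute the four possible per-bit contributions,
    // indexed by (bit of b, bit of a).
    let table: [Point; 4] = [0, p, q, add(p, q)];
    let mut acc: Point = 0;
    for i in (0..64).rev() {
        acc = double(acc);
        let idx = (((b >> i) & 1) << 1 | ((a >> i) & 1)) as usize;
        if idx != 0 {
            acc = add(acc, table[idx]);
        }
    }
    acc
}

fn main() {
    let (p, q): (Point, Point) = (17, 101);
    let (a, b) = (123_456u64, 789_012u64);
    assert_eq!(strauss_shamir(a, p, b, q), a as i128 * p + b as i128 * q);
    println!("joint double-and-add matches the naive result");
}
```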

@kevaundray kevaundray changed the base branch from master to kw/msm-ref-pippenger September 6, 2024 12:40
Comment on lines +16 to +18
pub struct FixedBaseMSMPrecompBLST {
    table: Vec<Vec<G1Affine>>, // TODO: Make this a Vec<> and then just do the maths in msm function for offsetting
    // table: Vec<G1Affine>,
Contributor Author


Both layouts seem to perform the same.
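To spell out what "both" refers to here: the table can be kept as a nested Vec<Vec<G1Affine>> indexed per window, or flattened into a single Vec with the offset arithmetic done in the msm routine, as the TODO suggests. Below is a minimal sketch of the two layouts with a placeholder Point type instead of G1Affine; it is illustrative only and is not the PR's code.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Point([u64; 6]); // stand-in for an affine G1 point

/// Layout 1: one inner Vec per window.
struct NestedTable {
    table: Vec<Vec<Point>>, // table[window][entry]
}

/// Layout 2: the same entries stored contiguously, indexed by hand.
struct FlatTable {
    table: Vec<Point>,
    entries_per_window: usize,
}

impl NestedTable {
    fn get(&self, window: usize, entry: usize) -> &Point {
        &self.table[window][entry]
    }
}

impl FlatTable {
    fn get(&self, window: usize, entry: usize) -> &Point {
        // The "maths in the msm function for offsetting" from the TODO.
        &self.table[window * self.entries_per_window + entry]
    }
}

fn main() {
    let windows = 4usize;
    let per_window = 8usize;
    let entries: Vec<Point> = (0..windows * per_window)
        .map(|i| Point([i as u64; 6]))
        .collect();

    let nested = NestedTable {
        table: entries.chunks(per_window).map(|c| c.to_vec()).collect(),
    };
    let flat = FlatTable {
        table: entries,
        entries_per_window: per_window,
    };

    assert_eq!(nested.get(3, 5), flat.get(3, 5));
}
```

Either way the data is the same; the flat layout just trades the inner Vec indirection for a multiply-and-add when indexing.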

@kevaundray kevaundray changed the title from "chore: add precomp method (blst)" to "skunkworks" Sep 23, 2024
@kevaundray kevaundray changed the title from "skunkworks" to "chore(DO NOT MERGE):skunkworks" Sep 23, 2024