Add Brakedown multilinear PCS (#131)
* added hyrax PCS * Add univariate and multilinear Ligero PCS Co-authored-by: Hossein Moghaddas <[email protected]> Co-authored-by: Antonio Mejías Gil <[email protected]> * Add Brakedown * adapt the scheme to arkworks-rs/algebra#691 * move tests shared across univariate and ML ligero to utils * adapt the scheme to arkworks-rs/algebra#691 * move tests shared across schemes to utils * remove unused no-std import * adapt the scheme to arkworks-rs/algebra#691 * remove unused code in hyrax * Improve the choice of dimensions for polynomial matrix * Update comments * parallelised row encoding and col-to-leaf hashing; significant performance gains * parallelised row encoding and col-to-leaf hashing; significant performance gains * expanded on Future Optimisations section * fixed GH action failures: formatted and added feature flag * fixed GH action failures: formatted and added feature flag * remove Prepared data types from `PolynomialCommitment` trait * remove Prepared data types from `PolynomialCommitment` trait * Remove Prepared data types from `PolynomialCommitment` trait impl * added necessary dependencies overwritten by previous merge commit * fixed hashbrown version * Add back the cfg dependency for no-std build * fixed hashbrown version * pulled * created separate benchmark files * fixed duplicate dependency to match other branches * patched bn254 dep * restructured benchmark macros to accept ML schemes; benches working * moved hashing structures to bench-templates crate, started ligero bench coding * completed ligero benchmarks * added ligero benchmark file * adapted to new crate structure and created benchmark for ML brakedown * Hyrax fix bench (#42) * fix bench call * set num vars from 12-20 * Brakedown fix bench (#41) * fix bench call * set num vars from 12-20 * Ligero fix benches (#40) * fix bench call * set num vars from 12-20 * Hyrax parallel `commit` (#39) * Enable parallel commitment in hyrax amend * make `rand` optional * remove dead code * Make Hyrax hiding again (#43) * removed evaluation randomness from proof and ignored claimed value in check to make scheme hiding * fmt * removed unnecessary usage of argument in check, added _ * remove cfg(benches) attributes as that feature is no longer used * Fix tests: sponge config for univariate ligero * Fix the comment Co-authored-by: Marcin <[email protected]> * Delete `IOPTranscript`, update with master (#44) (aka Brakedown++) * Add the trait bounds * Add `CommitmentState` * Update benches for the new type * Fix the name of local variable * Merge `PCCommitmentState` with `PCRandomness` * Update `README.md` * Fix a bug * Complete the merge * Simplify `hash_column` * Delete comments * Add `CommitmentState` * Make `fmt` happy * Refactor, remove `hash_columns` * Rename all params * remove cfg(benches) attributes as that feature is no longer used * Brakedown+++ (#46) * conversion to `into_iter` is a no-op * remove explicit casts to vecs * rename to use singular of `labeled_commitment` * simplify the iterators even further by zipping two iters * Apply suggestions from code review * Maybe `empty` not return `Self` * Make `empty` return `Self` * Rename `rand` to `state` * Add the type `Randomness` * Rename nonnative to emulated, as in `r1cs-std` (#137) * Rename nonnative to emulated, as in `r1cs-std` * Run `fmt` * Temporarily change `Cargo.toml` * Revert `Cargo.toml` * Refactor `FoldedPolynomialStream` partially * Substitute `ChallengeGenerator` by the generic sponge (#139) * Rename nonnative to emulated, as in `r1cs-std` * Run `fmt` * 
Temporarily change `Cargo.toml` * Substitute `ChallengeGenerator` with the generic sponge * Run `fmt` * Remove the extra file * Update modules * Delete the unnecessary loop * Revert `Cargo.toml` * Refactor `FoldedPolynomialStream` partially * Update README * Make the diff more readable * Bring the whitespace back * Make diff more readable, 2 * Fix according to breaking changes in `ark-ec` (#141) * Fix for KZG10 * Fix the breaking changes in `ark-ec` * Remove the extra loop * Fix the loop range * re-use the preprocessing table * also re-use the preprocessing table for multilinear_pc --------- Co-authored-by: mmagician <[email protected]> * Auxiliary opening data (#134) * Add the trait bounds * Add `CommitmentState` * Update benches for the new type * Fix the name of local variable * Merge `PCCommitmentState` with `PCRandomness` * Update `README.md` * Fix a bug * Put `Randomness` in `CommitmentState` * Add a comment * Remove the extra loop * Update the comment for `CommitmentState` Co-authored-by: Marcin <[email protected]> * cargo fmt --------- Co-authored-by: Marcin <[email protected]> * `batch_mul_with_preprocessing` no longer takes `self` as argument (#142) * batch_mul_with_preprocessing no longer takes `self` as argument * Apply suggestions from code review Co-authored-by: Pratyush Mishra <[email protected]> * fix variable name --------- Co-authored-by: Pratyush Mishra <[email protected]> * Remove `ChallengeGenerator` for Brakedown (#53) * Squash and merge `delete-chalgen` onto here * Fix Brakedown for `ChallengeGenerator` and `AsRef` for Merkle tree * Remove `IOPTranscript` (#52) * Replace the `IOPTranscript` with `CryptographicSponge` * Delete extra comments * Delete TODOs and do not absorb what you just squeezed * Remove the extra loop * Revert the incorrect changes in `bench-tamplates` --------- Co-authored-by: mmagician <[email protected]> Co-authored-by: Pratyush Mishra <[email protected]> * Update a comment * Delete `IOPTranscript`, update with master (#50) (aka Hyrax++) * Add the trait bounds * Add `CommitmentState` * Update benches for the new type * Fix the name of local variable * Merge `PCCommitmentState` with `PCRandomness` * Update `README.md` * Fix a bug * Change `Randomness` to `CommitmentState` * Maybe `empty` not return `Self` * Make `empty` return `Self` * Rename `rand` to `state` * Partially integrate the new design into Hyrax * Update Hyrax with the shared state * Rename nonnative to emulated, as in `r1cs-std` (#137) * Rename nonnative to emulated, as in `r1cs-std` * Run `fmt` * Temporarily change `Cargo.toml` * Revert `Cargo.toml` * Refactor `FoldedPolynomialStream` partially * Substitute `ChallengeGenerator` by the generic sponge (#139) * Rename nonnative to emulated, as in `r1cs-std` * Run `fmt` * Temporarily change `Cargo.toml` * Substitute `ChallengeGenerator` with the generic sponge * Run `fmt` * Remove the extra file * Update modules * Delete the unnecessary loop * Revert `Cargo.toml` * Refactor `FoldedPolynomialStream` partially * Update README * Make the diff more readable * Bring the whitespace back * Make diff more readable, 2 * Fix according to breaking changes in `ark-ec` (#141) * Fix for KZG10 * Fix the breaking changes in `ark-ec` * Remove the extra loop * Fix the loop range * re-use the preprocessing table * also re-use the preprocessing table for multilinear_pc --------- Co-authored-by: mmagician <[email protected]> * Auxiliary opening data (#134) * Add the trait bounds * Add `CommitmentState` * Update benches for the new type * Fix the name of 
local variable * Merge `PCCommitmentState` with `PCRandomness` * Update `README.md` * Fix a bug * Put `Randomness` in `CommitmentState` * Add a comment * Remove the extra loop * Update the comment for `CommitmentState` Co-authored-by: Marcin <[email protected]> * cargo fmt --------- Co-authored-by: Marcin <[email protected]> * `batch_mul_with_preprocessing` no longer takes `self` as argument (#142) * batch_mul_with_preprocessing no longer takes `self` as argument * Apply suggestions from code review Co-authored-by: Pratyush Mishra <[email protected]> * fix variable name --------- Co-authored-by: Pratyush Mishra <[email protected]> * Remove ChallengeGenerator for Ligero (#56) * Squash and merge `delete-chalgen` onto here * Fix for `ChallengeGenerator` * Delete `IOPTranscript` for Hyrax (#55) * Use the sponge generic and rearrange `use`s * Use sponge instead of `IOPTransript` * Fix benches * Remove the extra loop --------- Co-authored-by: mmagician <[email protected]> Co-authored-by: Pratyush Mishra <[email protected]> * Delete `merlin` from dependencies * Delete `IOPTranscript`, update with master (#51) (aka Ligero++) * Add the trait bounds * Add `CommitmentState` * Update benches for the new type * Fix the name of local variable * Merge `PCCommitmentState` with `PCRandomness` * Update `README.md` * Fix a bug * Simplify `hash_column` * Delete comments * Add `CommitmentState` * Make `fmt` happy * Refactor, remove `hash_columns` * Rename all params * Maybe `empty` not return `Self` * Make `empty` return `Self` * Rename `rand` to `state` * Add type `Randomness` * Ligero+++ (#46) * conversion to `into_iter` is a no-op * remove explicit casts to vecs * rename to use singular of `labeled_commitment` * simplify the iterators even further by zipping two iters * Apply suggestions from code review * Fix tests: sponge config for univariate ligero * Rename nonnative to emulated, as in `r1cs-std` (#137) * Rename nonnative to emulated, as in `r1cs-std` * Run `fmt` * Temporarily change `Cargo.toml` * Revert `Cargo.toml` * Refactor `FoldedPolynomialStream` partially * Substitute `ChallengeGenerator` by the generic sponge (#139) * Rename nonnative to emulated, as in `r1cs-std` * Run `fmt` * Temporarily change `Cargo.toml` * Substitute `ChallengeGenerator` with the generic sponge * Run `fmt` * Remove the extra file * Update modules * Delete the unnecessary loop * Revert `Cargo.toml` * Refactor `FoldedPolynomialStream` partially * Update README * Make the diff more readable * Bring the whitespace back * Make diff more readable, 2 * Fix according to breaking changes in `ark-ec` (#141) * Fix for KZG10 * Fix the breaking changes in `ark-ec` * Remove the extra loop * Fix the loop range * re-use the preprocessing table * also re-use the preprocessing table for multilinear_pc --------- Co-authored-by: mmagician <[email protected]> * Auxiliary opening data (#134) * Add the trait bounds * Add `CommitmentState` * Update benches for the new type * Fix the name of local variable * Merge `PCCommitmentState` with `PCRandomness` * Update `README.md` * Fix a bug * Put `Randomness` in `CommitmentState` * Add a comment * Remove the extra loop * Update the comment for `CommitmentState` Co-authored-by: Marcin <[email protected]> * cargo fmt --------- Co-authored-by: Marcin <[email protected]> * `batch_mul_with_preprocessing` no longer takes `self` as argument (#142) * batch_mul_with_preprocessing no longer takes `self` as argument * Apply suggestions from code review Co-authored-by: Pratyush Mishra <[email protected]> * fix 
variable name --------- Co-authored-by: Pratyush Mishra <[email protected]> * Remove `ChallengeGenerator` and `IOPTranscript` for Ligero (#57) * Squash and merge `delete-chalgen` onto here * Fix Ligero for `ChallengeGenerator` and `AsRef` for Merkle tree * Fix tests: sponge config for univariate ligero * Delete `IOPTranscript` for Ligero (#54) * Replace the `IOPTranscript` with `CryptographicSponge` * Delete extra comments * Run fmt * Fix tests: sponge config for univariate ligero * Delete TODOs and do not absorb what you just squeezed * Fix unused import * Revert "Fix unused import" This reverts commit e85af90. * Try to fix * Remove the extra loop --------- Co-authored-by: mmagician <[email protected]> Co-authored-by: Pratyush Mishra <[email protected]> * Add a few comments and update `Cargo.toml` * Remove extra `cfg_iter!` Co-authored-by: Pratyush Mishra <[email protected]> * Change `pedersen_commit` and add `cfg_into_iter!` * Hash and absorb * BrakedownPCSParams need to be exported publicly * only enable num-traits on aarch (#58) * added Sync trait bound Co-authored-by: Cesar Descalzo <[email protected]> * removed TODO * Fixed error whereby boolean value returned by path.verify was neglected Co-authored-by: Cesar Descalzo <[email protected]> Co-authored-by: mmagician <[email protected]> * removed unnecessary qualification which linter didn't like * changed potential panic to returning Err, stopping early Co-authored-by: Cesar Descalzo <[email protected]> * removed unnecessary function defined inside check() Co-authored-by: Cesar Descalzo <[email protected]> * various minor fixes * Add `ark-std` to patch * Reorder Hyrax checks Co-authored-by: Antonio Mejías Gil <[email protected]> * Add `ark-std` to patch * Downgrade `hashbrown` * Fix breaking change from algebra/poly (#72) * Reorder deps * Add dummy doc for nightly * Fix `hashbrown` + Replace Blake2 by Blake3 * Revert to Blake2 * Fix merging issues * Test if CI is happy * Revert and cleanup * Delete dummy doc * Bring back `num_traits` * Fix merge conflict for README.md Co-authored-by: Pratyush Mishra <[email protected]> * Add `/` to Cargo.toml --------- Co-authored-by: Antonio Mejías Gil <[email protected]> Co-authored-by: mmagician <[email protected]> Co-authored-by: Pratyush Mishra <[email protected]> Co-authored-by: Cesar Descalzo <[email protected]> Co-authored-by: Cesar199999 <[email protected]>
1 parent 78aa1d7, commit 1329599
Showing 15 changed files with 1,146 additions and 58 deletions.
@@ -0,0 +1,59 @@
use ark_crypto_primitives::{
    crh::{sha256::Sha256, CRHScheme, TwoToOneCRHScheme},
    merkle_tree::{ByteDigestConverter, Config},
};
use ark_pcs_bench_templates::*;
use ark_poly::{DenseMultilinearExtension, MultilinearExtension};

use ark_bn254::Fr;
use ark_ff::PrimeField;

use ark_poly_commit::linear_codes::{LinearCodePCS, MultilinearBrakedown};
use blake2::Blake2s256;
use rand_chacha::ChaCha20Rng;

// Brakedown PCS over BN254
struct MerkleTreeParams;
type LeafH = LeafIdentityHasher;
type CompressH = Sha256;
impl Config for MerkleTreeParams {
    type Leaf = Vec<u8>;

    type LeafDigest = <LeafH as CRHScheme>::Output;
    type LeafInnerDigestConverter = ByteDigestConverter<Self::LeafDigest>;
    type InnerDigest = <CompressH as TwoToOneCRHScheme>::Output;

    type LeafHash = LeafH;
    type TwoToOneHash = CompressH;
}

pub type MLE<F> = DenseMultilinearExtension<F>;
type MTConfig = MerkleTreeParams;
type ColHasher<F> = FieldToBytesColHasher<F, Blake2s256>;
type Brakedown<F> = LinearCodePCS<
    MultilinearBrakedown<F, MTConfig, MLE<F>, ColHasher<F>>,
    F,
    MLE<F>,
    MTConfig,
    ColHasher<F>,
>;

fn rand_poly_brakedown_ml<F: PrimeField>(
    num_vars: usize,
    rng: &mut ChaCha20Rng,
) -> DenseMultilinearExtension<F> {
    DenseMultilinearExtension::rand(num_vars, rng)
}

fn rand_point_brakedown_ml<F: PrimeField>(num_vars: usize, rng: &mut ChaCha20Rng) -> Vec<F> {
    (0..num_vars).map(|_| F::rand(rng)).collect()
}

const MIN_NUM_VARS: usize = 12;
const MAX_NUM_VARS: usize = 22;

bench!(
    Brakedown<Fr>,
    rand_poly_brakedown_ml,
    rand_point_brakedown_ml
);
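For scale, the benchmarked range above corresponds to dense evaluation tables of 2^12 = 4,096 up to 2^22 = 4,194,304 field elements. A small sketch (not part of the benchmark file; the seed and the assertions are purely illustrative) of what the two samplers produce at the smallest size:

fn sample_smallest_instance() {
    use rand_chacha::rand_core::SeedableRng;

    let mut rng = ChaCha20Rng::seed_from_u64(0);
    // A random dense multilinear polynomial in 12 variables...
    let poly = rand_poly_brakedown_ml::<Fr>(MIN_NUM_VARS, &mut rng);
    // ...and a random evaluation point with one coordinate per variable.
    let point = rand_point_brakedown_ml::<Fr>(MIN_NUM_VARS, &mut rng);
    assert_eq!(poly.to_evaluations().len(), 1 << MIN_NUM_VARS); // 4096 evaluations
    assert_eq!(point.len(), MIN_NUM_VARS);
}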
@@ -0,0 +1,353 @@
use super::utils::SprsMat;
use super::BrakedownPCParams;
use super::LinCodeParametersInfo;
use crate::linear_codes::utils::calculate_t;
use crate::utils::ceil_div;
use crate::utils::{ceil_mul, ent};
use crate::{PCCommitterKey, PCUniversalParams, PCVerifierKey};

use ark_crypto_primitives::crh::{CRHScheme, TwoToOneCRHScheme};
use ark_crypto_primitives::merkle_tree::{Config, LeafParam, TwoToOneParam};
use ark_ff::PrimeField;
use ark_std::log2;
use ark_std::rand::RngCore;
use ark_std::vec::Vec;
#[cfg(all(not(feature = "std"), target_arch = "aarch64"))]
use num_traits::Float;

impl<F, C, H> PCUniversalParams for BrakedownPCParams<F, C, H>
where
    F: PrimeField,
    C: Config,
    H: CRHScheme,
{
    fn max_degree(&self) -> usize {
        usize::MAX
    }
}

impl<F, C, H> PCCommitterKey for BrakedownPCParams<F, C, H>
where
    F: PrimeField,
    C: Config,
    H: CRHScheme,
{
    fn max_degree(&self) -> usize {
        usize::MAX
    }

    fn supported_degree(&self) -> usize {
        <BrakedownPCParams<F, C, H> as PCCommitterKey>::max_degree(self)
    }
}

impl<F, C, H> PCVerifierKey for BrakedownPCParams<F, C, H>
where
    F: PrimeField,
    C: Config,
    H: CRHScheme,
{
    fn max_degree(&self) -> usize {
        usize::MAX
    }

    fn supported_degree(&self) -> usize {
        <BrakedownPCParams<F, C, H> as PCVerifierKey>::max_degree(self)
    }
}

impl<F, C, H> LinCodeParametersInfo<C, H> for BrakedownPCParams<F, C, H>
where
    F: PrimeField,
    C: Config,
    H: CRHScheme,
{
    fn check_well_formedness(&self) -> bool {
        self.check_well_formedness
    }

    fn distance(&self) -> (usize, usize) {
        (self.rho_inv.1 * self.beta.0, self.rho_inv.0 * self.beta.1)
    }

    fn sec_param(&self) -> usize {
        self.sec_param
    }

    fn compute_dimensions(&self, _n: usize) -> (usize, usize) {
        (self.n, self.m)
    }

    fn leaf_hash_param(&self) -> &<<C as Config>::LeafHash as CRHScheme>::Parameters {
        &self.leaf_hash_param
    }

    fn two_to_one_hash_param(
        &self,
    ) -> &<<C as Config>::TwoToOneHash as TwoToOneCRHScheme>::Parameters {
        &self.two_to_one_hash_param
    }

    fn col_hash_params(&self) -> &<H as CRHScheme>::Parameters {
        &self.col_hash_params
    }
}
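// Example with the Fig. 2 defaults used by `default` below (beta = 61/1000,
// rho_inv = 1521/1000): `distance()` returns (1000 * 61, 1521 * 1000), i.e. a relative
// distance of 61/1521, roughly 0.04.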

impl<F, C, H> BrakedownPCParams<F, C, H>
where
    F: PrimeField,
    C: Config,
    H: CRHScheme,
{
    /// Create a default UniversalParams, with the values from Fig. 2 of the paper.
    pub fn default<R: RngCore>(
        rng: &mut R,
        poly_len: usize,
        check_well_formedness: bool,
        leaf_hash_param: LeafParam<C>,
        two_to_one_hash_param: TwoToOneParam<C>,
        col_hash_params: H::Parameters,
    ) -> Self {
        let sec_param = 128;
        let a = (178, 1000);
        let b = (61, 1000);
        let r = (1521, 1000);
        let base_len = 30;
        let t = calculate_t::<F>(sec_param, (b.0 * r.1, b.1 * r.0), poly_len).unwrap(); // we want to get a rough idea of what t is
        let n = 1 << log2((ceil_div(2 * poly_len, t) as f64).sqrt().ceil() as usize);
        let m = ceil_div(poly_len, n);
        let c = Self::cn_const(a, b);
        let d = Self::dn_const(a, b, r);
        let ct = Constants { a, b, r, c, d };
        let (a_dims, b_dims) = Self::mat_size(m, base_len, &ct);
        let a_mats = Self::make_all(rng, &a_dims);
        let b_mats = Self::make_all(rng, &b_dims);

        Self::new(
            sec_param,
            a,
            b,
            r,
            base_len,
            n,
            m,
            a_dims,
            b_dims,
            a_mats,
            b_mats,
            check_well_formedness,
            leaf_hash_param,
            two_to_one_hash_param,
            col_hash_params,
        )
    }
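    // For reference, the constants above read as decimals: alpha = 0.178, beta = 0.061,
    // rho_inv = 1.521, at a 128-bit security level, with the recursion bottoming out once a
    // block is shorter than `base_len` = 30. Here `t` (as computed by `calculate_t`) is the
    // number of column openings targeted for that security level.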

    /// This function creates a UniversalParams. It does not check whether the parameters
    /// are consistent/correct.
    pub fn new(
        sec_param: usize,
        a: (usize, usize),
        b: (usize, usize),
        r: (usize, usize),
        base_len: usize,
        n: usize,
        m: usize,
        a_dims: Vec<(usize, usize, usize)>,
        b_dims: Vec<(usize, usize, usize)>,
        a_mats: Vec<SprsMat<F>>,
        b_mats: Vec<SprsMat<F>>,
        check_well_formedness: bool,
        leaf_hash_param: LeafParam<C>,
        two_to_one_hash_param: TwoToOneParam<C>,
        col_hash_params: H::Parameters,
    ) -> Self {
        let m_ext = if a_dims.is_empty() {
            ceil_mul(m, r)
        } else {
            Self::codeword_len(&a_dims, &b_dims)
        };
        let start = a_dims
            .iter()
            .scan(0, |acc, &(row, _, _)| {
                *acc += row;
                Some(*acc)
            })
            .collect::<Vec<_>>();
        let end = b_dims
            .iter()
            .scan(m_ext, |acc, &(_, col, _)| {
                *acc -= col;
                Some(*acc)
            })
            .collect::<Vec<_>>();

        Self {
            sec_param,
            alpha: a,
            beta: b,
            rho_inv: r,
            base_len,
            n,
            m,
            m_ext,
            a_dims,
            b_dims,
            start,
            end,
            a_mats,
            b_mats,
            check_well_formedness,
            leaf_hash_param,
            two_to_one_hash_param,
            col_hash_params,
        }
    }

    /// mu = rho_inv - 1 - rho_inv * alpha
    fn mu(a: (usize, usize), r: (usize, usize)) -> f64 {
        let nom = r.0 * (a.1 - a.0) - r.1 * a.1;
        let den = r.1 * a.1;
        nom as f64 / den as f64
    }

    /// nu = beta + alpha * beta + 0.03
    fn nu(a: (usize, usize), b: (usize, usize)) -> f64 {
        let c = (3usize, 100usize);
        let nom = b.0 * (a.1 + a.0) * c.1 + c.0 * b.1 * a.1;
        let den = b.1 * a.1 * c.1;
        nom as f64 / den as f64
    }
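    // Worked example at the Fig. 2 defaults (alpha = 178/1000, beta = 61/1000,
    // rho_inv = 1521/1000):
    //   mu = 1.521 - 1 - 1.521 * 0.178 = 0.250262
    //   nu = 0.061 + 0.178 * 0.061 + 0.03 = 0.101858
    // These two quantities only enter `dn_const` below.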

    /// cn_const
    fn cn_const(a: (usize, usize), b: (usize, usize)) -> (f64, f64) {
        let a = div(a);
        let b = div(b);
        let arg = 1.28 * b / a;
        let nom = ent(b) + a * ent(arg);
        let den = -b * arg.log2();
        (nom, den)
    }

    /// cn
    fn cn(n: usize, ct: &Constants) -> usize {
        use ark_std::cmp::{max, min};
        let b = ct.b;
        let c = ct.c;
        min(
            max(ceil_mul(n, (32 * b.0, 25 * b.1)), 4 + ceil_mul(n, b)),
            ((110f64 / (n as f64) + c.0) / c.1).ceil() as usize,
        )
    }

    /// dn_const
    fn dn_const(a: (usize, usize), b: (usize, usize), r: (usize, usize)) -> (f64, f64) {
        let m = Self::mu(a, r);
        let n = Self::nu(a, b);
        let a = div(a);
        let b = div(b);
        let r = div(r);
        let nm = n / m;
        let nom = r * a * ent(b / r) + m * ent(nm);
        let den = -a * b * nm.log2();
        (nom, den)
    }

    /// dn
    fn dn(n: usize, ct: &Constants) -> usize {
        use ark_std::cmp::min;
        let b = ct.b;
        let r = ct.r;
        let d = ct.d;
        min(
            ceil_mul(n, (2 * b.0, b.1))
                + ((ceil_mul(n, r) - n + 110) as f64 / F::MODULUS_BIT_SIZE as f64).ceil() as usize, // 2 * beta * n + n * (r - 1 + 110/n)
            ((110f64 / (n as f64) + d.0) / d.1).ceil() as usize,
        )
    }

    fn mat_size(
        mut n: usize,
        base_len: usize,
        ct: &Constants,
    ) -> (Vec<(usize, usize, usize)>, Vec<(usize, usize, usize)>) {
        let mut a_dims: Vec<(usize, usize, usize)> = Vec::default();
        let a = ct.a;
        let r = ct.r;

        while n >= base_len {
            let m = ceil_mul(n, a);
            let cn = Self::cn(n, ct);
            let cn = if cn < m { cn } else { m }; // can't generate more nonzero entries than there are columns
            a_dims.push((n, m, cn));
            n = m;
        }

        let b_dims = a_dims
            .iter()
            .map(|&(an, am, _)| {
                let n = ceil_mul(am, r);
                let m = ceil_mul(an, r) - an - n;
                let dn = Self::dn(n, ct);
                let dn = if dn < m { dn } else { m }; // can't generate more nonzero entries than there are columns
                (n, m, dn)
            })
            .collect::<Vec<_>>();
        (a_dims, b_dims)
    }

    /// This function computes the codeword length.
    /// Note that it assumes the input is bigger than `base_len` (i.e., `a_dims` is not empty).
    pub(crate) fn codeword_len(
        a_dims: &[(usize, usize, usize)],
        b_dims: &[(usize, usize, usize)],
    ) -> usize {
        b_dims.iter().map(|(_, col, _)| col).sum::<usize>() + // Output v of the recursive encoding
        a_dims.iter().map(|(row, _, _)| row).sum::<usize>() + // Input x to the recursive encoding
        b_dims.last().unwrap().0 // Output z of the last step of recursion
    }

    /// Create a matrix with `n` rows and `m` columns and `d` non-zero entries in each row.
    /// This function creates a list of entries for each column and calls the constructor
    /// of `SprsMat`. It uses the Fisher–Yates shuffle to choose `d` indices in each row.
    fn make_mat<R: RngCore>(n: usize, m: usize, d: usize, rng: &mut R) -> SprsMat<F> {
        let mut tmp: Vec<usize> = (0..m).collect();
        let mut mat: Vec<Vec<(usize, F)>> = vec![vec![]; m];
        for i in 0..n {
            // Fisher–Yates shuffle algorithm
            let idxs = {
                (0..d)
                    .map(|j| {
                        let r = rng.next_u64() as usize % (m - j);
                        tmp.swap(r, m - 1 - j);
                        tmp[m - 1 - j]
                    })
                    .collect::<Vec<usize>>()
            };
            // Sampling values for each non-zero entry
            for j in idxs {
                mat[j].push((
                    i,
                    loop {
                        let r = F::rand(rng);
                        if r != F::zero() {
                            break r;
                        }
                    },
                ))
            }
        }
        SprsMat::<F>::new_from_columns(n, m, d, &mat)
    }

    fn make_all<R: RngCore>(rng: &mut R, dims: &[(usize, usize, usize)]) -> Vec<SprsMat<F>> {
        dims.iter()
            .map(|(n, m, d)| Self::make_mat(*n, *m, *d, rng))
            .collect::<Vec<_>>()
    }
}

#[inline]
fn div(a: (usize, usize)) -> f64 {
    a.0 as f64 / a.1 as f64
}

struct Constants {
    a: (usize, usize),
    b: (usize, usize),
    r: (usize, usize),
    c: (f64, f64),
    d: (f64, f64),
}
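// `Constants` bundles the parameters threaded through `mat_size`: `a`, `b`, `r` are the
// rationals alpha, beta and rho_inv (stored under those names by `new`), while `c` and `d`
// are the pairs returned by `cn_const` and `dn_const` and consumed by `cn` and `dn`.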
poly-commit/src/linear_codes/multilinear_brakedown/mod.rs (122 additions, 0 deletions)
@@ -0,0 +1,122 @@
use crate::Error;

use super::utils::tensor_vec;
use super::{BrakedownPCParams, LinearEncode};
use ark_crypto_primitives::{
    crh::{CRHScheme, TwoToOneCRHScheme},
    merkle_tree::Config,
};
use ark_ff::{Field, PrimeField};
use ark_poly::{MultilinearExtension, Polynomial};
#[cfg(not(feature = "std"))]
use ark_std::vec::Vec;
use ark_std::{log2, marker::PhantomData, rand::RngCore};

mod tests;

/// The multilinear Brakedown polynomial commitment scheme based on [[Brakedown]][bd].
/// The scheme defaults to the naive batching strategy.
///
/// Note: The scheme currently does not support hiding.
///
/// [bd]: https://eprint.iacr.org/2021/1043.pdf
pub struct MultilinearBrakedown<F: PrimeField, C: Config, P: MultilinearExtension<F>, H: CRHScheme>
{
    _phantom: PhantomData<(F, C, P, H)>,
}

impl<F, C, P, H> LinearEncode<F, C, P, H> for MultilinearBrakedown<F, C, P, H>
where
    F: PrimeField,
    C: Config,
    P: MultilinearExtension<F>,
    <P as Polynomial<F>>::Point: Into<Vec<F>>,
    H: CRHScheme,
{
    type LinCodePCParams = BrakedownPCParams<F, C, H>;

    fn setup<R: RngCore>(
        _max_degree: usize,
        num_vars: Option<usize>,
        rng: &mut R,
        leaf_hash_param: <<C as Config>::LeafHash as CRHScheme>::Parameters,
        two_to_one_hash_param: <<C as Config>::TwoToOneHash as TwoToOneCRHScheme>::Parameters,
        col_hash_params: H::Parameters,
    ) -> Self::LinCodePCParams {
        Self::LinCodePCParams::default(
            rng,
            1 << num_vars.unwrap(),
            true,
            leaf_hash_param,
            two_to_one_hash_param,
            col_hash_params,
        )
    }

    fn encode(msg: &[F], pp: &Self::LinCodePCParams) -> Result<Vec<F>, Error> {
        if msg.len() != pp.m {
            return Err(Error::EncodingError);
        }
        let cw_len = pp.m_ext;
        let mut cw = Vec::with_capacity(cw_len);
        cw.extend_from_slice(msg);

        // Multiply by matrices A
        for (i, &s) in pp.start.iter().enumerate() {
            let mut src = pp.a_mats[i].row_mul(&cw[s - pp.a_dims[i].0..s]);
            cw.append(&mut src);
        }

        // Later we don't necessarily mutate in order, so we need the full vec now.
        cw.resize(cw_len, F::zero());
        // RS encode the last one
        let rss = *pp.start.last().unwrap_or(&0);
        let rsie = rss + pp.a_dims.last().unwrap_or(&(0, pp.m, 0)).1;
        let rsoe = *pp.end.last().unwrap_or(&cw_len);
        naive_reed_solomon(&mut cw, rss, rsie, rsoe);

        // Come back
        for (i, (&s, &e)) in pp.start.iter().zip(&pp.end).enumerate() {
            let src = &pp.b_mats[i].row_mul(&cw[s..e]);
            cw[e..e + pp.b_dims[i].1].copy_from_slice(src);
        }
        Ok(cw.to_vec())
    }
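    // Layout of the returned codeword, matching the accounting in
    // `BrakedownPCParams::codeword_len`:
    //   [ message and intermediate A-stage products (lengths given by the `a_dims` row counts)
    //   | Reed-Solomon encoding of the final, short A-stage product
    //   | B-stage products, written from the right end of the buffer ]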

    fn poly_to_vec(polynomial: &P) -> Vec<F> {
        polynomial.to_evaluations()
    }

    fn point_to_vec(point: <P as Polynomial<F>>::Point) -> Vec<F> {
        point
    }

    /// For a multilinear polynomial in n+m variables it returns a tuple for k={n,m}:
    /// ((1-z_1)*(1-z_2)*...*(1-z_k), z_1*(1-z_2)*...*(1-z_k), ..., z_1*z_2*...*z_k)
    fn tensor(
        point: &<P as Polynomial<F>>::Point,
        left_len: usize,
        _right_len: usize,
    ) -> (Vec<F>, Vec<F>) {
        let point: Vec<F> = Self::point_to_vec(point.clone());

        let split = log2(left_len) as usize;
        let left = &point[..split];
        let right = &point[split..];
        (tensor_vec(left), tensor_vec(right))
    }
}

// This RS encoding is on points 1, ..., oe - s, without relying on FFTs.
fn naive_reed_solomon<F: Field>(cw: &mut [F], s: usize, ie: usize, oe: usize) {
    let mut res = vec![F::zero(); oe - s];
    let mut x = F::one();
    for r in res.iter_mut() {
        for j in (s..ie).rev() {
            *r *= x;
            *r += cw[j];
        }
        x += F::one();
    }
    cw[s..oe].copy_from_slice(&res);
}
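`naive_reed_solomon` evaluates the polynomial whose coefficients are `cw[s..ie]` at the points 1, 2, ..., oe - s (Horner's rule in the inner loop) and writes the evaluations back into `cw[s..oe]`. A minimal sanity-check sketch, assuming it were placed inside this module (the function is private) and using `ark_bls12_377::Fr`, which the tests below already pull in:

#[cfg(test)]
#[test]
fn naive_reed_solomon_matches_horner() {
    use ark_bls12_377::Fr;
    // p(X) = 1 + 2X + 3X^2, evaluated at X = 1, 2, 3, 4: p(1) = 6, p(2) = 17, p(3) = 34, p(4) = 57.
    let mut cw = vec![Fr::from(1u64), Fr::from(2u64), Fr::from(3u64), Fr::from(0u64)];
    naive_reed_solomon(&mut cw, 0, 3, 4);
    let expected: Vec<Fr> = [6u64, 17, 34, 57].iter().map(|&x| Fr::from(x)).collect();
    assert_eq!(cw, expected);
}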
poly-commit/src/linear_codes/multilinear_brakedown/tests.rs (263 additions, 0 deletions)
@@ -0,0 +1,263 @@
#[cfg(test)]
mod tests {

    use crate::linear_codes::LinearCodePCS;
    use crate::utils::test_sponge;
    use crate::{
        linear_codes::{utils::*, BrakedownPCParams, MultilinearBrakedown, PolynomialCommitment},
        LabeledPolynomial,
    };
    use ark_bls12_377::Fr;
    use ark_bls12_381::Fr as Fr381;
    use ark_crypto_primitives::{
        crh::{sha256::Sha256, CRHScheme, TwoToOneCRHScheme},
        merkle_tree::{ByteDigestConverter, Config},
    };
    use ark_ff::{Field, PrimeField};
    use ark_poly::evaluations::multivariate::{MultilinearExtension, SparseMultilinearExtension};
    use ark_std::test_rng;
    use blake2::Blake2s256;
    use rand_chacha::{rand_core::SeedableRng, ChaCha20Rng};

    type LeafH = LeafIdentityHasher;
    type CompressH = Sha256;
    type ColHasher<F, D> = FieldToBytesColHasher<F, D>;

    struct MerkleTreeParams;

    impl Config for MerkleTreeParams {
        type Leaf = Vec<u8>;

        type LeafDigest = <LeafH as CRHScheme>::Output;
        type LeafInnerDigestConverter = ByteDigestConverter<Self::LeafDigest>;
        type InnerDigest = <CompressH as TwoToOneCRHScheme>::Output;

        type LeafHash = LeafH;
        type TwoToOneHash = CompressH;
    }

    type MTConfig = MerkleTreeParams;

    type BrakedownPCS<F> = LinearCodePCS<
        MultilinearBrakedown<F, MTConfig, SparseMultilinearExtension<F>, ColHasher<F, Blake2s256>>,
        F,
        SparseMultilinearExtension<F>,
        MTConfig,
        ColHasher<F, Blake2s256>,
    >;

    fn rand_poly<Fr: PrimeField>(
        _: usize,
        num_vars: Option<usize>,
        rng: &mut ChaCha20Rng,
    ) -> SparseMultilinearExtension<Fr> {
        match num_vars {
            Some(n) => SparseMultilinearExtension::rand(n, rng),
            None => unimplemented!(), // should not happen in ML case!
        }
    }

    fn constant_poly<Fr: PrimeField>(
        _: usize,
        num_vars: Option<usize>,
        rng: &mut ChaCha20Rng,
    ) -> SparseMultilinearExtension<Fr> {
        match num_vars {
            Some(n) => {
                let points = vec![(1, Fr::rand(rng))];
                SparseMultilinearExtension::from_evaluations(n, &points)
            }
            None => unimplemented!(), // should not happen in ML case!
        }
    }

    #[test]
    fn test_construction() {
        let mut rng = &mut test_rng();
        let num_vars = 11;
        // just to make sure we have the right degree given the FFT domain for our field
        let leaf_hash_param = <LeafH as CRHScheme>::setup(&mut rng).unwrap();
        let two_to_one_hash_param = <CompressH as TwoToOneCRHScheme>::setup(&mut rng)
            .unwrap()
            .clone();
        let col_hash_params = <ColHasher<Fr, Blake2s256> as CRHScheme>::setup(&mut rng).unwrap();
        let check_well_formedness = true;

        let pp: BrakedownPCParams<Fr, MTConfig, ColHasher<Fr, Blake2s256>> =
            BrakedownPCParams::default(
                rng,
                1 << num_vars,
                check_well_formedness,
                leaf_hash_param,
                two_to_one_hash_param,
                col_hash_params,
            );

        let (ck, vk) = BrakedownPCS::<Fr>::trim(&pp, 0, 0, None).unwrap();

        let rand_chacha = &mut ChaCha20Rng::from_rng(test_rng()).unwrap();
        let labeled_poly = LabeledPolynomial::new(
            "test".to_string(),
            rand_poly(1, Some(num_vars), rand_chacha),
            Some(num_vars),
            Some(num_vars),
        );

        let mut test_sponge = test_sponge::<Fr>();
        let (c, states) = BrakedownPCS::<Fr>::commit(&ck, &[labeled_poly.clone()], None).unwrap();

        let point = rand_point(Some(num_vars), rand_chacha);

        let value = labeled_poly.evaluate(&point);

        let proof = BrakedownPCS::<Fr>::open(
            &ck,
            &[labeled_poly],
            &c,
            &point,
            &mut (test_sponge.clone()),
            &states,
            None,
        )
        .unwrap();
        assert!(BrakedownPCS::<Fr>::check(
            &vk,
            &c,
            &point,
            [value],
            &proof,
            &mut test_sponge,
            None
        )
        .unwrap());
    }

    fn rand_point<F: Field>(num_vars: Option<usize>, rng: &mut ChaCha20Rng) -> Vec<F> {
        match num_vars {
            Some(n) => (0..n).map(|_| F::rand(rng)).collect(),
            None => unimplemented!(), // should not happen!
        }
    }

    #[test]
    fn single_poly_test() {
        use crate::tests::*;
        single_poly_test::<_, _, BrakedownPCS<Fr>, _>(
            Some(5),
            rand_poly::<Fr>,
            rand_point::<Fr>,
            poseidon_sponge_for_test::<Fr>,
        )
        .expect("test failed for bls12-377");
        single_poly_test::<_, _, BrakedownPCS<Fr381>, _>(
            Some(10),
            rand_poly::<Fr381>,
            rand_point::<Fr381>,
            poseidon_sponge_for_test::<Fr381>,
        )
        .expect("test failed for bls12-381");
    }

    #[test]
    fn constant_poly_test() {
        use crate::tests::*;
        single_poly_test::<_, _, BrakedownPCS<Fr>, _>(
            Some(10),
            constant_poly::<Fr>,
            rand_point::<Fr>,
            poseidon_sponge_for_test::<Fr>,
        )
        .expect("test failed for bls12-377");
        single_poly_test::<_, _, BrakedownPCS<Fr381>, _>(
            Some(5),
            constant_poly::<Fr381>,
            rand_point::<Fr381>,
            poseidon_sponge_for_test::<Fr381>,
        )
        .expect("test failed for bls12-381");
    }

    #[test]
    fn full_end_to_end_test() {
        use crate::tests::*;
        full_end_to_end_test::<_, _, BrakedownPCS<Fr>, _>(
            Some(8),
            rand_poly::<Fr>,
            rand_point::<Fr>,
            poseidon_sponge_for_test::<Fr>,
        )
        .expect("test failed for bls12-377");
        println!("Finished bls12-377");
        full_end_to_end_test::<_, _, BrakedownPCS<Fr381>, _>(
            Some(9),
            rand_poly::<Fr381>,
            rand_point::<Fr381>,
            poseidon_sponge_for_test::<Fr381>,
        )
        .expect("test failed for bls12-381");
        println!("Finished bls12-381");
    }

    #[test]
    fn single_equation_test() {
        use crate::tests::*;
        single_equation_test::<_, _, BrakedownPCS<Fr>, _>(
            Some(10),
            rand_poly::<Fr>,
            rand_point::<Fr>,
            poseidon_sponge_for_test::<Fr>,
        )
        .expect("test failed for bls12-377");
        println!("Finished bls12-377");
        single_equation_test::<_, _, BrakedownPCS<Fr381>, _>(
            Some(5),
            rand_poly::<Fr381>,
            rand_point::<Fr381>,
            poseidon_sponge_for_test::<Fr381>,
        )
        .expect("test failed for bls12-381");
        println!("Finished bls12-381");
    }

    #[test]
    fn two_equation_test() {
        use crate::tests::*;
        two_equation_test::<_, _, BrakedownPCS<Fr>, _>(
            Some(5),
            rand_poly::<Fr>,
            rand_point::<Fr>,
            poseidon_sponge_for_test::<Fr>,
        )
        .expect("test failed for bls12-377");
        println!("Finished bls12-377");
        two_equation_test::<_, _, BrakedownPCS<Fr381>, _>(
            Some(10),
            rand_poly::<Fr381>,
            rand_point::<Fr381>,
            poseidon_sponge_for_test::<Fr381>,
        )
        .expect("test failed for bls12-381");
        println!("Finished bls12-381");
    }

    #[test]
    fn full_end_to_end_equation_test() {
        use crate::tests::*;
        full_end_to_end_equation_test::<_, _, BrakedownPCS<Fr>, _>(
            Some(5),
            rand_poly::<Fr>,
            rand_point::<Fr>,
            poseidon_sponge_for_test::<Fr>,
        )
        .expect("test failed for bls12-377");
        println!("Finished bls12-377");
        full_end_to_end_equation_test::<_, _, BrakedownPCS<Fr381>, _>(
            Some(8),
            rand_poly::<Fr381>,
            rand_point::<Fr381>,
            poseidon_sponge_for_test::<Fr381>,
        )
        .expect("test failed for bls12-381");
        println!("Finished bls12-381");
    }
}