Check current signers still validators #1097
Conversation
…k-if-current-signers-are-still-validators
⭐ this is great.
Is it possible to further improve this by having the parties who are leaving the signing set also participate in the reshare, but not receive new shares? That way we wouldn't have to keep the t parties the same. I can't remember whether synedrion lets you do this.
@@ -169,7 +170,8 @@ pub async fn new_reshare(
        .await?;

    let (new_key_share, aux_info) =
-        execute_reshare(session_id.clone(), channels, signer.signer(), inputs, None).await?;
+        execute_reshare(session_id.clone(), channels, signer.signer(), inputs, &new_holders, None)
Maybe I've misunderstood something, but it looks like new_holders is the same as inputs.new_holders, so why add the extra argument to the execute_reshare function?
explained lower down
@@ -273,8 +275,15 @@ pub async fn validate_new_reshare(
        .await?
        .ok_or_else(|| ValidatorErr::ChainFetch("Not Currently in a reshare"))?;

-    if reshare_data.new_signer != chain_data.new_signer
-        || chain_data.block_number != reshare_data.block_number
+    let mut hasher_chain_data = Blake2s256::new();
Do we need to hash this? Can we not compare two Vec<Vec<u8>>s for equality?
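A minimal sketch of the direct comparison the question suggests, assuming both sides end up as Vec<Vec<u8>> lists of stash addresses (the variable names and values here are illustrative, not the real fields):

fn main() {
    let chain_new_signers: Vec<Vec<u8>> = vec![vec![1, 2, 3], vec![4, 5, 6]];
    let ocw_new_signers: Vec<Vec<u8>> = vec![vec![1, 2, 3], vec![4, 5, 6]];
    // Vec<Vec<u8>> implements PartialEq, so the two lists can be compared
    // directly without hashing; note the comparison is order-sensitive.
    assert_eq!(chain_new_signers, ocw_new_signers);
}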
Yes, you are right, a holdover of a change I reverted from #1114 but will change back.
        .collect()
    Ok(if !new_signers.is_empty() {
        let mut filtered_validators_info = vec![];
        for new_signer in new_signers {
This works fine, but since we are about to convert this to a BTreeSet anyway, we can use .difference().
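A minimal sketch of the .difference() approach, assuming the signer lists are collected into BTreeSets of IDs (the u8 element type and the set names are illustrative):

use std::collections::BTreeSet;

fn main() {
    let next_signers: BTreeSet<u8> = BTreeSet::from([1, 2, 3]);
    let new_signers: BTreeSet<u8> = BTreeSet::from([3]);

    // .difference() yields the members of next_signers that are not in
    // new_signers, replacing the manual filtering loop.
    let old_holders: BTreeSet<u8> = next_signers.difference(&new_signers).copied().collect();
    assert_eq!(old_holders, BTreeSet::from([1, 2]));
}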
pallets/staking/src/lib.rs
Outdated
            remove_index_len = remove_indexs.len();
            let remove_indexs_reversed: Vec<_> = remove_indexs.iter().rev().collect();
            // TODO: Only remove up to threshold https://github.com/entropyxyz/entropy-core/issues/1114
            let truncated = if remove_index_len > signers_info.threshold as usize {
Maybe I'm getting confused, but should this not be signers_info.total_signers - signers_info.threshold? As in, the number of redundant signers (n - t) rather than t. So with 2 of 3, we can afford to lose at most 1 signer, not 2.
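A tiny worked example of that bound, using the 2-of-3 figures from the comment (the values are illustrative):

fn main() {
    let total_signers: u32 = 3; // n
    let threshold: u32 = 2;     // t, as in a 2-of-3 scheme

    // The redundant signers are n - t: with 2 of 3 we can afford to lose
    // at most one signer and still reach the threshold.
    let max_removable = total_signers - threshold;
    assert_eq!(max_removable, 1);
}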
yes, writing a better test for regression now
pallets/staking/src/lib.rs
Outdated
        let mut randomness = Self::get_randomness();
        // grab a current signer to initiate value
-       let mut next_signer_up = &current_signers[0].clone();
+       let mut next_signer_up = &validators[0].clone();
Should this not be a random validator rather than the first in the list?
Just a placeholder so Rust doesn't complain (not-initialized compile error); it gets changed lower down.
It looks to me like it only gets changed lower down if validators[0] is currently a member of the signer set.
Right, this should be grabbed from current_signers to initialize it.
@@ -334,6 +334,7 @@ pub async fn execute_reshare(
    chans: Channels,
    threshold_pair: &sr25519::Pair,
    inputs: KeyResharingInputs<KeyParams, PartyId>,
+   verifiers: &BTreeSet<PartyId>,
I'm not sure we need this argument, as it looks like whenever we call this function, verifiers is the same as inputs.new_holders.
Yes, originally I changed it to allow the old holders to be a larger set not contained in the new holders, but I ran into the #1114 issue. I didn't change it back because this is still more correct IMO and adds no overhead, and it will hopefully be needed when the issue is resolved.
Not currently; opened an issue, it's a limitation with synedrion, see #1114.
Co-authored-by: peg <[email protected]>
…k-if-current-signers-are-still-validators
pallets/staking/src/benchmarking.rs
Outdated
@@ -427,13 +429,22 @@ benchmarks! {
    new_session {
        let c in 1 .. MAX_SIGNERS as u32 - 1;
        let l in 0 .. MAX_SIGNERS as u32;
+       let v in 0 .. MAX_SIGNERS as u32;
Shouldn't this value be higher than MAX_SIGNERS?
pallets/staking/src/lib.rs
Outdated
    if current_signers_length >= signers_info.total_signers as usize {
        let mut remove_indexs = vec![];
        for (i, current_signer) in current_signers.clone().into_iter().enumerate() {
            if !validators.contains(&current_signer) {
                remove_indexs.push(i);
            }
        }
        if remove_indexs.is_empty() {
            current_signers.remove(0);
        } else {
            remove_index_len = remove_indexs.len();
            let remove_indexs_reversed: Vec<_> = remove_indexs.iter().rev().collect();
            let truncated = remove_indexs_reversed
                [..(signers_info.total_signers as usize - signers_info.threshold as usize)]
                .to_vec();
            for remove_index in truncated {
                current_signers.remove(*remove_index);
            }
        }
Trying to follow the transformations here is pretty confusing. Maybe you could make use of Vec's .retain and .truncate methods here to clean stuff up instead of things like .rev and slices.
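A rough sketch of what a retain/truncate version could look like, assuming current_signers and validators are plain vectors of account IDs and that at most total_signers - threshold signers may be dropped; the names and the u32 ID type are illustrative, not the pallet's actual ones:

fn prune_departed_signers(
    current_signers: &mut Vec<u32>,
    validators: &[u32],
    total_signers: usize,
    threshold: usize,
) {
    // Indices of current signers that are no longer validators.
    let mut to_remove: Vec<usize> = current_signers
        .iter()
        .enumerate()
        .filter_map(|(i, signer)| (!validators.contains(signer)).then_some(i))
        .collect();

    if to_remove.is_empty() {
        // Nobody left the validator set; rotate out the oldest signer instead.
        current_signers.remove(0);
        return;
    }

    // Never drop more than the redundant signers (n - t).
    to_remove.truncate(total_signers - threshold);

    // Keep only the signers whose position is not marked for removal,
    // avoiding the reverse-and-slice index bookkeeping.
    let mut position = 0;
    current_signers.retain(|_| {
        let keep = !to_remove.contains(&position);
        position += 1;
        keep
    });
}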
Commented it up more; for what it's worth, the truncation should eventually be removed.
-        OcwMessageReshare { new_signer: new_signer.0.to_vec(), block_number };
+    let onchain_reshare_request = OcwMessageReshare {
+        new_signers: vec![new_signer.0.to_vec()],
+        block_number: block_number - 1,
Why change this instead of keeping the +1 in run_to_block?
because this needs to match the onchain info or it will not pass the validate check
Co-authored-by: Hernando Castano <[email protected]>
-    // Stash address of new signer
-    pub new_signer: Vec<u8>,
+    // Stash addresses of new signers
+    pub new_signers: Vec<Vec<u8>>,
A few lines above, in ValidatorInfo, the Vecs are provided with the full path, codec::alloc::vec::Vec<u8>. Should we be consistent? The short form here seems better.
    // old holders -> next_signers - new_signers (will be at least t)
    let old_holders =
        &prune_old_holders(&api, &rpc, data.new_signers, validators_info.clone()).await?;
I can't find where the (will be at least t) invariant is checked; it doesn't seem to happen in prune_old_holders (also, unrelated to this PR, that method seems to be cloning wildly, but I'm not sure that's needed).
Yeah, this is a little hard to follow; the flow comes from the chain itself, here: https://github.com/entropyxyz/entropy-core/pull/1097/files#diff-9be73ee9078236e99d5b726b1a2d9acd38446cedbe770d2f23d39ef51b1763ceR720
-        OcwMessageReshare { new_signer: new_signer.0.to_vec(), block_number };
+    let onchain_reshare_request = OcwMessageReshare {
+        new_signers: reshare_data.new_signers.into_iter().map(|s| s.to_vec()).collect(),
+        block_number: block_number - 1,
Does this blow up on block 0? Maybe checked_sub?
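A minimal illustration of the checked_sub suggestion (the values are illustrative):

fn main() {
    let block_number: u32 = 0;

    // checked_sub returns None on underflow instead of panicking (debug) or
    // wrapping (release), so block 0 can be handled explicitly rather than
    // blowing up on block_number - 1.
    let previous_block = block_number.checked_sub(1).unwrap_or(0);
    assert_eq!(previous_block, 0);
}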
the block number is hard coded one line above to a number way over 0
Related #1114