Why
Following the PoC for the Light Wallet use case implementation #1313, we want to make the signature of the Cardano Transactions type of data signed by the Mithril network production ready and deliver it on the mainnet.
What
The MVP should implement a production-ready type of data signed for the Cardano transactions set that can handle its magnitude of 100 million transactions:
Fast signature process on signer and aggregator
Fast proof of membership generation on the aggregator
Incremental signature process
Pruning of transaction data implemented for the signer to minimize its impact on the SPO machines
Certify transactions as close as possible to the block tip (handling rollbacks) and with a fast refresh (i.e. signature) rate
"Block range commitment Merkle Tree" new signature scheme
We want to implement a mechanism that allows fast proof generation for the Cardano transactions and can reach the mainnet magnitude of records to sign. In order to do that, we must avoid loading the full Cardano transactions set in memory when signing and generating proofs. We want to evolve the way the protocol message and proofs are computed for the Cardano transactions, which will allow us to reach this level of performance. The representation of the Cardano transactions that is computed by the signer (to sign) and the aggregator (to sign and prove) should evolve to a block range commitment Merkle tree.
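To make the two layers concrete, here is a minimal sketch of how the protocol message could be derived: transactions are grouped per block range, a Merkle sub-tree is built for each range, and the sub-tree roots become the leaves of a top tree whose root is signed. The types, helpers and the `BLOCK_RANGE_LENGTH` value are assumptions for illustration, not the actual Mithril implementation (which relies on a Merkle Mountain Range), and the sketch assumes the `sha2` crate:

```rust
// Sketch of a block range commitment Merkle tree (hypothetical types and
// helpers; not the actual Mithril implementation).
use sha2::{Digest, Sha256};
use std::collections::BTreeMap;

type Hash = Vec<u8>;

/// A Cardano transaction reference: the block it belongs to and its hash.
struct Transaction {
    block_number: u64,
    hash: Hash,
}

/// Hash two child nodes into their parent node.
fn combine(left: &Hash, right: &Hash) -> Hash {
    Sha256::new().chain_update(left).chain_update(right).finalize().to_vec()
}

/// Naive Merkle root over a list of leaves (the last leaf is duplicated on odd
/// levels; the real scheme uses a Merkle Mountain Range instead).
fn merkle_root(mut leaves: Vec<Hash>) -> Hash {
    assert!(!leaves.is_empty());
    while leaves.len() > 1 {
        let next: Vec<Hash> = leaves
            .chunks(2)
            .map(|pair| combine(&pair[0], pair.get(1).unwrap_or(&pair[0])))
            .collect();
        leaves = next;
    }
    leaves.pop().unwrap()
}

// Candidate block range length, discussed in the open topics below.
const BLOCK_RANGE_LENGTH: u64 = 15;

/// Protocol message: one sub-tree per block range, then a top tree whose
/// leaves are the sub-tree roots (the "block range commitments").
fn block_range_commitment_root(transactions: &[Transaction]) -> Hash {
    let mut by_range: BTreeMap<u64, Vec<Hash>> = BTreeMap::new();
    for tx in transactions {
        by_range
            .entry(tx.block_number / BLOCK_RANGE_LENGTH)
            .or_default()
            .push(tx.hash.clone());
    }
    let range_roots: Vec<Hash> = by_range.into_values().map(merkle_root).collect();
    merkle_root(range_roots)
}
```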
Naive implementation (current)
Signature/proof diagram for signer and aggregator
Cons
Does not scale for signing
Does not scale for proving
High footprint on the signer SPO infrastructure
Block range commitments implementation (next)
Signature diagram for aggregator
Signature diagram for signer
Proof diagram for aggregator
Pros
Scales for signing
Scales for proving
Low footprint on the signer SPO infrastructure
Implementation roadmap
Implement a block range commitment Merkle tree for Cardano Transactions
Implement a cache to compute protocol message and membership proofs
Implement pruning of Cardano transactions data on the signer
Migrate beacon type to a new block-range based beacon
Retrieve Cardano transactions with chainsync mini protocol (closer to the tip, need to handle rollbacks)
Evolve stress test tooling to support Cardano transactions
Performance expectations
Figures TBC
Certify 100M transactions.
Refresh certification every 5min.
Wait at most 10min before having a transaction certified.
Serve 100-1K proofs per second with <500ms latency.
First evaluation
100M transactions on mainnet
10-100 transactions per block on mainnet
1 block every ~20s
Block range length of 15 blocks or ~5min
150-1.5K transactions per block range
67K-670K block ranges (number of leaves of the top tree; see the arithmetic check after this list)
At most 2 Merkle proof generations per transaction
At most N+1 Merkle proof generations per N transactions (at most one sub-tree proof per transaction plus one multi-proof in the top tree), and these could be computed in parallel
Merkle proof generation takes <10ms for <100K leaves and <150ms for 1M leaves
Best case scenario at 100M transactions: <300ms per batch proof
<100ms for data retrieval from store (locks)
<10ms for Merkle proof generation of block range trees
<150ms for Merkle proof generation of the block range commitments tree
<50ms for message generation
If caching exists for the top tree, we could probably get <150ms per batch proof
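A quick arithmetic check of the derived figures above, using only the stated assumptions (a sketch; exact mainnet values may differ):

```rust
// Back-of-the-envelope check of the first evaluation figures.
fn main() {
    let total_txs: u64 = 100_000_000;          // ~100M transactions on mainnet
    let txs_per_block: (u64, u64) = (10, 100); // 10-100 transactions per block
    let block_range_length: u64 = 15;          // ~5min at 1 block every ~20s

    // 15 blocks * 10-100 txs/block = 150-1,500 transactions per block range
    let txs_per_range = (
        block_range_length * txs_per_block.0,
        block_range_length * txs_per_block.1,
    );
    // 100M / 1,500 ≈ 67K and 100M / 150 ≈ 670K block ranges (top tree leaves)
    let block_ranges = (total_txs / txs_per_range.1, total_txs / txs_per_range.0);

    println!("transactions per block range: {txs_per_range:?}");
    println!("block ranges (top tree leaves): {block_ranges:?}");
}
```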
Open topics
What is the best value for the block range length (as it will be difficult to change once released)?
A length between 10 and 20 blocks seems to be a good value
Do we need a recursive / multi-layer structure (a tree of commitments for ranges of block ranges)?
How to avoid DoS attacks on the aggregator?
Enforce a max budget for proof generation (e.g. based on the number of underlying Merkle proof generations)?
Throttling of the HTTP server?
What is the best caching strategy to adopt?
Would recording some stats be useful?
Caching tree of commitments?
Caching most recent block ranges sub-trees?
Can we parallelize some computations?
Compute block range sub-tree proofs in parallel (see the sketch after this list)?
Compute all proofs in parallel (map-reduce)?
Evolution scenario (does this require a new era?):
What if we need to update the block range length?
What if we change the underlying Merkle tree implementation (we rely on Merkle Mountain Range which does not require complete binary trees)?
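On the parallelization question, the per-block-range proofs of a batch are independent of each other, so a parallel map (e.g. with the rayon crate) is a natural fit; the top tree then only needs a single multi-proof over the affected block ranges. A rough sketch with hypothetical tree and proof types (not the actual Mithril code):

```rust
// Sketch: compute the per-block-range proofs of a batch in parallel, then a
// single multi-proof in the top tree. All types here are hypothetical stubs.
use rayon::prelude::*;

struct SubTreeProof;  // membership proof in a block range sub-tree
struct TopTreeProof;  // multi-proof of the range roots in the top tree
struct BatchProof {
    sub_proofs: Vec<SubTreeProof>,
    top_proof: TopTreeProof,
}

struct BlockRangeTree; // sub-tree of one block range
impl BlockRangeTree {
    fn prove(&self, _tx_hash: &[u8]) -> SubTreeProof { SubTreeProof }
}

struct TopTree; // tree of block range commitments
impl TopTree {
    fn prove_ranges(&self, _ranges: &[u64]) -> TopTreeProof { TopTreeProof }
}

/// `batch`: for each transaction, its block range, the range's sub-tree and the tx hash.
fn batch_proof(batch: &[(u64, BlockRangeTree, Vec<u8>)], top_tree: &TopTree) -> BatchProof {
    // Independent sub-tree proofs: a map-reduce style parallel iterator applies.
    let sub_proofs: Vec<SubTreeProof> = batch
        .par_iter()
        .map(|(_range, tree, tx_hash)| tree.prove(tx_hash))
        .collect();
    let ranges: Vec<u64> = batch.iter().map(|(range, _, _)| *range).collect();
    BatchProof { sub_proofs, top_proof: top_tree.prove_ranges(&ranges) }
}
```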
Notes and ideas
Optimize the message to sign from a global Merkle tree of transactions to a Merkle tree of Merkle trees of transactions (one for each immutable file number or block range)
Support CardanoTransactions signed entity type in explorer
Document the merkleized type of data that is signed
Prune data from the Cardano transactions store in the signer (possible with the Merkle tree of Merkle trees; see the sketch at the end of this list)
Gracefully handle possible rollbacks from the Cardano chain (when we retrieve transactions closer to the tip of the chain) to guarantee consistency in Merkle tree creation (sorting of transactions)
Use pallas-hardano to gather immutable files (need to get access to the blocks of a single immutable file for that + the pallas-hardano crate released)
Or use pallas node-to-client communication to gather the blocks directly
Add cache mechanism for immutable parser
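Regarding pruning on the signer: once a block range is complete and its sub-tree root has been stored, only that root is needed to recompute the protocol message, so the individual transactions of the range can be deleted. A minimal sketch with a hypothetical store API (not the actual signer code):

```rust
// Sketch of signer-side pruning once block range roots are persisted.
// `Store` and its methods are hypothetical, for illustration only.
type Hash = Vec<u8>;

struct Store;
impl Store {
    /// Persist the sub-tree root of a completed block range.
    fn save_block_range_root(&self, _range: u64, _root: Hash) { /* ... */ }
    /// Delete the individual transactions belonging to a block range.
    fn delete_transactions_of_range(&self, _range: u64) { /* ... */ }
}

/// The stored roots are enough to rebuild the top tree, so the transactions
/// themselves can be pruned to minimize the footprint on the SPO machine.
fn prune_committed_ranges(store: &Store, committed: Vec<(u64, Hash)>) {
    for (range, root) in committed {
        store.save_block_range_root(range, root);
        store.delete_transactions_of_range(range);
    }
}
```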