feat: Add Chain Data tables #63

Merged (26 commits, Dec 29, 2023)

Changes from 9 commits
21 changes: 21 additions & 0 deletions src/client/chain_data.rs
@@ -0,0 +1,21 @@
use super::Client;

use crate::errors::ClientError;
use objects::BlockHeader;

impl Client {
    pub fn get_block_headers(
        &self,
        start: u32,
        finish: u32,
    ) -> Result<Vec<BlockHeader>, ClientError> {
        let mut headers = Vec::new();
        for block_number in start..=finish {
            if let Ok(block_header) = self.store.get_block_header_by_num(block_number) {
                headers.push(block_header)
            }
        }

        Ok(headers)
    }
}
15 changes: 13 additions & 2 deletions src/client/mod.rs
@@ -11,14 +11,15 @@ use objects::{accounts::AccountId, Digest};

use crate::{
    config::ClientConfig,
-   errors::{ClientError, RpcApiError},
+   errors::{ClientError, RpcApiError, StoreError},
    store::{mock_executor_data_store::MockDataStore, Store},
};

#[cfg(any(test, feature = "testing"))]
use crate::mock::MockRpcApi;

pub mod accounts;
pub mod chain_data;
pub mod notes;
pub mod transactions;

@@ -124,8 +125,18 @@ impl Client {
            })
            .collect::<Vec<_>>();

        let new_block_header = match response.block_header {
            Some(block_header) => match objects::BlockHeader::try_from(block_header) {
                Ok(block_header) => Some(block_header),
                Err(err) => {
                    return Err(ClientError::StoreError(StoreError::ConvertionFailure(err)));
                }
            },
            None => None,
        };

        self.store
-           .apply_state_sync(new_block_num, new_nullifiers)
+           .apply_state_sync(new_block_num, new_nullifiers, new_block_header)
            .map_err(ClientError::StoreError)?;
Contributor:
A general comment about the sync_state function: my original intent was for it to work slightly differently. Specifically:

The sync_state request to the node gives us the next block containing requested data. It also gives us chain_tip, which is the latest block number in the chain. So, unless response.block_header.block_num == response.chain_tip, we haven't synced to the tip of the chain yet.

The idea was that we'd make these requests in a loop until response.block_header.block_num == response.chain_tip, at which point we know that we've fully synchronized with the chain.

Each response also brings us info about newly created notes, nullifiers, etc. It also returns a chain MMR delta that we can use to update the state of the chain MMR. This includes both chain MMR peaks and chain MMR nodes.

A naive way to update the chain MMR is to load the full PartialMmr at the beginning of this method and then call apply on it for every response. There is a better way to do it, though the details require more thought.

Contributor Author @juan518munoz (Dec 20, 2023):
We can add the naive implementation to leave it in a working state, and switch to a better one if we come up with it.
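A minimal sketch of the naive loop described above, with hypothetical helper names (request_sync_state, apply_response, and load_partial_mmr are placeholders rather than this crate's API, and error conversions are elided):

async fn sync_to_chain_tip(client: &mut Client) -> Result<u32, ClientError> {
    // Naive approach: load the full PartialMmr once, up front.
    let mut partial_mmr = client.load_partial_mmr()?; // hypothetical helper

    loop {
        // One sync_state round trip: returns the next relevant block header,
        // new notes/nullifiers, the chain tip, and an MMR delta.
        let response = client.request_sync_state().await?; // hypothetical helper

        // Persist the notes, nullifiers, and block header from this response.
        client.apply_response(&response)?; // hypothetical helper

        // Apply the MMR delta so the PartialMmr tracks the new peaks and nodes.
        if let Some(delta) = response.mmr_delta {
            partial_mmr.apply(delta)?;
        }

        // Fully synchronized once the returned header is the chain tip.
        if response.block_header.block_num() == response.chain_tip {
            return Ok(response.chain_tip);
        }
    }
}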


        Ok(new_block_num)
17 changes: 13 additions & 4 deletions src/errors.rs
@@ -1,8 +1,7 @@
use core::fmt;
-use crypto::{
-    dsa::rpo_falcon512::FalconError,
-    utils::{DeserializationError, HexParseError},
-};
+use crypto::utils::DeserializationError;
+use crypto::{dsa::rpo_falcon512::FalconError, utils::HexParseError};
use miden_node_proto::error::ParseError;
use miden_tx::{TransactionExecutorError, TransactionProverError};
use objects::{accounts::AccountId, AccountError, Digest, NoteError, TransactionScriptError};
use tonic::{transport::Error as TransportError, Status as TonicStatus};
@@ -68,6 +67,9 @@ pub enum StoreError {
    NoteTagAlreadyTracked(u64),
    QueryError(rusqlite::Error),
    TransactionError(rusqlite::Error),
    BlockHeaderNotFound(u32),
    ChainMmrNodeNotFound(u64),
    ConvertionFailure(ParseError),
    TransactionScriptError(TransactionScriptError),
    VaultDataNotFound(Digest),
}
@@ -114,6 +116,13 @@ impl fmt::Display for StoreError {
                write!(f, "error instantiating transaction script: {err}")
            }
            VaultDataNotFound(root) => write!(f, "account vault data for root {} not found", root),
            BlockHeaderNotFound(block_number) => {
                write!(f, "block header for block {} not found", block_number)
            }
            ChainMmrNodeNotFound(node_index) => {
                write!(f, "chain mmr node at index {} not found", node_index)
            }
            ConvertionFailure(err) => write!(f, "failed to convert data: {err}"),
        }
    }
}
6 changes: 6 additions & 0 deletions src/lib.rs
@@ -203,6 +203,9 @@ mod tests {
            0
        );

        // assert that we don't have any block headers prior to syncing state
        assert_eq!(client.get_block_headers(10, 10).unwrap().len(), 0);

        // sync state
        let block_num = client.sync_state().await.unwrap();
@@ -238,6 +241,9 @@
                .1
                .chain_tip
        );

        // verify that the database now holds the latest block header
        assert_eq!(client.get_block_headers(10, 10).unwrap().len(), 1);
    }

    #[tokio::test]
14 changes: 11 additions & 3 deletions src/mock.rs
@@ -3,18 +3,20 @@ use crate::client::{Client, FILTER_ID_SHIFT};
use crate::store::mock_executor_data_store::MockDataStore;
use crate::store::AuthInfo;
use crypto::dsa::rpo_falcon512::KeyPair;
use miden_node_proto::block_header::BlockHeader as NodeBlockHeader;
use miden_node_proto::requests::SubmitProvenTransactionRequest;
use miden_node_proto::responses::SubmitProvenTransactionResponse;
use miden_node_proto::{
    account_id::AccountId as ProtoAccountId,
    requests::SyncStateRequest,
    responses::{NullifierUpdate, SyncStateResponse},
};
use mock::mock::block;
use objects::{utils::collections::BTreeMap, StarkField};

use miden_tx::TransactionExecutor;
use objects::accounts::AccountType;
use objects::assets::FungibleAsset;
use objects::{utils::collections::BTreeMap, StarkField};

/// Mock RPC API
///
@@ -93,12 +95,18 @@ fn generate_sync_state_mock_requests() -> BTreeMap<SyncStateRequest, SyncStateResponse>
        nullifiers,
    };

    let chain_tip = 10;

    // create a block header for the response
    let block_header: objects::BlockHeader =
        block::mock_block_header(chain_tip.into(), None, None, &[]);

    // create a state sync response
    let response = SyncStateResponse {
-       chain_tip: 10,
+       chain_tip,
        mmr_delta: None,
        block_path: None,
-       block_header: None,
+       block_header: Some(NodeBlockHeader::from(block_header)),
        accounts: vec![],
        notes: vec![],
        nullifiers: vec![NullifierUpdate {
194 changes: 194 additions & 0 deletions src/store/chain_data.rs
@@ -0,0 +1,194 @@
use super::Store;

use crate::errors::StoreError;

use clap::error::Result;

use objects::{BlockHeader, ChainMmr};
use rusqlite::params;

type SerializedBlockHeaderData = (i64, String, String, String, String);
type SerializedBlockHeaderParts = (i64, String, String, String, String);

type SerializedChainMmrNodeData = String;
type SerializedChainMmrNodeParts = (i64, String);

impl Store {
    // CHAIN DATA
    // --------------------------------------------------------------------------------------------
    pub fn insert_block_header(&mut self, block_header: BlockHeader) -> Result<(), StoreError> {
Contributor:
As mentioned in the previous comments, this will need to take an additional parameter for chain_mmr_peaks (this would be Vec<Digest>).
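For illustration, the signature might then look something like this (a sketch, not part of this diff):

pub fn insert_block_header(
    &mut self,
    block_header: BlockHeader,
    chain_mmr_peaks: Vec<Digest>,
) -> Result<(), StoreError>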

Contributor @bobbinth (Dec 20, 2023):
Now, thinking about this, we might also need to pass the forest (u64) as well, because our goal is to be able to reconstruct the PartialMmr that the client keeps track of. This requires:

  • forest: u64 - this would be in the block_headers table.
  • peaks: Vec<RpoDigest> - this would also be in the block_headers table.
  • nodes: BTreeMap<InOrderIndex, RpoDigest> - this would come from the chain_mmr_nodes table.
  • track_latest: bool - not sure where this should come from yet.

Collaborator:

Is this still needed? forest can be derived from peaks, and the PartialMmr constructor takes only the peaks.
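For reference, a sketch of how the pieces listed above could be grouped when loading them back from the store (illustrative only; the struct is not part of this PR, and the field types follow the list above):

struct StoredPartialMmrParts {
    forest: u64,                              // block_headers table (possibly derivable from peaks)
    peaks: Vec<RpoDigest>,                    // block_headers table
    nodes: BTreeMap<InOrderIndex, RpoDigest>, // chain_mmr_nodes table
    track_latest: bool,                       // source still undecided
}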

        let (block_num, header, notes_root, sub_hash, chain_mmr) =
            serialize_block_header(block_header)?;

        const QUERY: &str = "\
            INSERT INTO block_headers
                (block_num, header, notes_root, sub_hash, chain_mmr)
             VALUES (?, ?, ?, ?, ?)";

        self.db
            .execute(
                QUERY,
                params![block_num, header, notes_root, sub_hash, chain_mmr],
            )
            .map_err(StoreError::QueryError)
            .map(|_| ())
    }

    pub fn get_block_header_by_num(&self, block_number: u32) -> Result<BlockHeader, StoreError> {
        const QUERY: &str = "SELECT block_num, header, notes_root, sub_hash, chain_mmr FROM block_headers WHERE block_num = ?";
        self.db
            .prepare(QUERY)
            .map_err(StoreError::QueryError)?
            .query_map(params![block_number as i64], parse_block_headers_columns)
            .map_err(StoreError::QueryError)?
            .map(|result| {
                result
                    .map_err(StoreError::ColumnParsingError)
                    .and_then(parse_block_header)
            })
            .next()
            .ok_or(StoreError::BlockHeaderNotFound(block_number))?
    }

    pub fn insert_chain_mmr_node(&mut self, chain_mmr: ChainMmr) -> Result<(), StoreError> {
        let node = serialize_chain_mmr(chain_mmr)?;

        const QUERY: &str = "INSERT INTO chain_mmr_nodes (node) VALUES (?)";

        self.db
            .execute(QUERY, params![node])
            .map_err(StoreError::QueryError)
            .map(|_| ())
    }

    pub fn get_chain_mmr_hash_by_id(&self, id: u64) -> Result<ChainMmr, StoreError> {
        const QUERY: &str = "SELECT id, node FROM chain_mmr_nodes WHERE id = ?";
        self.db
            .prepare(QUERY)
            .map_err(StoreError::QueryError)?
            .query_map(params![id as i64], parse_chain_mmr_nodes_columns)
            .map_err(StoreError::QueryError)?
            .map(|result| {
                result
                    .map_err(StoreError::ColumnParsingError)
                    .and_then(parse_chain_mmr_nodes)
            })
            .next()
            .ok_or(StoreError::ChainMmrNodeNotFound(id))?
    }
Contributor:
Not sure these methods are correct. The chain_mmr_nodes table is meant to store nodes from the PartialMmr struct. Basically, every row in this table would be a single entry in the nodes map of PartialMmr.

We probably don't want to insert the whole partial MMR every time, but rather only insert the nodes resulting from each update.

So, the methods here should probably be a bit lower-level. Something like:

pub fn insert_chain_mmr_nodes(&mut self, nodes: Vec<(InOrderIndex, Digest)>) -> Result<(), StoreError> {

}

/// Returns all nodes in the table.
pub fn get_chain_mmr_nodes(&mut self) -> Result<BTreeMap<InOrderIndex, Digest>, StoreError> {

}

/// Gets a list of nodes required to reconstruct authentication paths for the specified blocks.
///
/// This will be used for `get_transaction_data()` method of `DataStore`.
pub fn get_chain_mmr_paths(
  &mut self,
  block_numbers: &[u32]
) -> Result<Vec<(InOrderIndex, Digest)>, StoreError> {

}

Contributor Author:

So, to check that I understand correctly:

insert_chain_mmr_nodes should receive only the nodes derived from each update, iterate over them, and insert each pair (InOrderIndex, Digest) into the table.

get_chain_mmr_nodes retrieves all rows from the table and puts them inside a BTreeMap.

get_chain_mmr_paths only retrieves the rows in the table whose InOrderIndex matches any of the elements of block_numbers.

Contributor:

> insert_chain_mmr_nodes should receive only the nodes derived from each update, iterate over them, and insert each pair (InOrderIndex, Digest) into the table.

This is correct. In the naive implementation this should be pretty simple, as we can just take everything that was added to the nodes map after the last inserted node (new nodes would always have a bigger index).

> get_chain_mmr_nodes retrieves all rows from the table and puts them inside a BTreeMap.

Correct.

> get_chain_mmr_paths only retrieves the rows in the table whose InOrderIndex matches any of the elements of block_numbers.

It is a bit more involved, as for each block number we'll need to return all nodes in the path from the block to the root of the corresponding peak. Let's leave this for another PR.

Contributor Author @juan518munoz (Dec 20, 2023):

Looking at the implementation of InOrderIndex, it seems like there's no way to serialize this type, nor to access its inner usize. Am I missing something?

Contributor:

Yes, currently missing but I'm adding it in 0xPolygonMiden/crypto#238.
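For illustration, a minimal sketch of the two simpler methods proposed above, following the rusqlite patterns already used in this file. It assumes InOrderIndex and Digest can be converted to and from column values (e.g. via the serialization being added in 0xPolygonMiden/crypto#238); the index_to_u64/u64_to_index helpers are hypothetical:

pub fn insert_chain_mmr_nodes(
    &mut self,
    nodes: Vec<(InOrderIndex, Digest)>,
) -> Result<(), StoreError> {
    const QUERY: &str = "INSERT INTO chain_mmr_nodes (id, node) VALUES (?, ?)";
    for (index, node) in nodes {
        let id = index_to_u64(index) as i64; // hypothetical conversion
        let node =
            serde_json::to_string(&node).map_err(StoreError::InputSerializationError)?;
        self.db
            .execute(QUERY, params![id, node])
            .map_err(StoreError::QueryError)?;
    }
    Ok(())
}

/// Returns all nodes in the table as a map from in-order index to digest.
pub fn get_chain_mmr_nodes(&self) -> Result<BTreeMap<InOrderIndex, Digest>, StoreError> {
    const QUERY: &str = "SELECT id, node FROM chain_mmr_nodes";
    self.db
        .prepare(QUERY)
        .map_err(StoreError::QueryError)?
        .query_map([], |row| Ok((row.get::<_, i64>(0)?, row.get::<_, String>(1)?)))
        .map_err(StoreError::QueryError)?
        .map(|result| {
            let (id, node) = result.map_err(StoreError::ColumnParsingError)?;
            let digest: Digest =
                serde_json::from_str(&node).map_err(StoreError::JsonDataDeserializationError)?;
            Ok((u64_to_index(id as u64), digest)) // hypothetical conversion
        })
        .collect()
}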

}

// HELPERS
// ================================================================================================

fn serialize_block_header(
block_header: BlockHeader,
) -> Result<SerializedBlockHeaderData, StoreError> {
    let block_num: u64 = block_header.block_num().into();
    let header =
        serde_json::to_string(&block_header).map_err(StoreError::InputSerializationError)?;
    let notes_root = serde_json::to_string(&block_header.note_root())
        .map_err(StoreError::InputSerializationError)?;
    let sub_hash = serde_json::to_string(&block_header.sub_hash())
        .map_err(StoreError::InputSerializationError)?;
    let chain_mmr = serde_json::to_string(&block_header.chain_root())
        .map_err(StoreError::InputSerializationError)?;

    Ok((block_num as i64, header, notes_root, sub_hash, chain_mmr))
}

fn parse_block_headers_columns(
    row: &rusqlite::Row<'_>,
) -> Result<SerializedBlockHeaderParts, rusqlite::Error> {
    let block_num: i64 = row.get(0)?;
    let header: String = row.get(1)?;
    let notes_root: String = row.get(2)?;
    let sub_hash: String = row.get(3)?;
    let chain_mmr: String = row.get(4)?;
    Ok((block_num, header, notes_root, sub_hash, chain_mmr))
}

fn parse_block_header(
    serialized_block_header_parts: SerializedBlockHeaderParts,
) -> Result<BlockHeader, StoreError> {
    let (_, header, _, _, _) = serialized_block_header_parts;

    serde_json::from_str(&header).map_err(StoreError::JsonDataDeserializationError)
}

fn serialize_chain_mmr(chain_mmr: ChainMmr) -> Result<SerializedChainMmrNodeData, StoreError> {
    serde_json::to_string(&chain_mmr).map_err(StoreError::InputSerializationError)
}

fn parse_chain_mmr_nodes_columns(
    row: &rusqlite::Row<'_>,
) -> Result<SerializedChainMmrNodeParts, rusqlite::Error> {
    let id = row.get(0)?;
    let node = row.get(1)?;
    Ok((id, node))
}

fn parse_chain_mmr_nodes(
    serialized_chain_mmr_node_parts: SerializedChainMmrNodeParts,
) -> Result<ChainMmr, StoreError> {
    let (_, node) = serialized_chain_mmr_node_parts;

    serde_json::from_str(&node).map_err(StoreError::JsonDataDeserializationError)
}

// TESTS
// ================================================================================================
#[cfg(test)]
pub mod tests {
    use mock::mock::block;
    use objects::ChainMmr;

    use crate::store::tests::create_test_store;

    #[test]
    fn test_block_header_insertion() {
        let mut store = create_test_store();
        let block_header = block::mock_block_header(0u8.into(), None, None, &[]);

        assert!(store.insert_block_header(block_header).is_ok());
    }

    #[test]
    fn test_block_header_by_number() {
        let mut store = create_test_store();
        let block_header = block::mock_block_header(0u8.into(), None, None, &[]);
        store.insert_block_header(block_header).unwrap();

        // Retrieving an existing block header should succeed
        match store.get_block_header_by_num(0) {
            Ok(block_header_from_db) => assert_eq!(block_header_from_db, block_header),
            Err(e) => {
                panic!("{:?}", e);
            }
        }

        // Retrieving a non existing block header should fail
        assert!(store.get_block_header_by_num(1).is_err());
    }

    #[test]
    fn test_chain_mmr_node_insertion() {
        let mut store = create_test_store();
        let chain_mmr = ChainMmr::default();

        assert!(store.insert_chain_mmr_node(chain_mmr).is_ok());
    }

    #[test]
    fn test_chain_mmr_node_by_id() {
        let mut store = create_test_store();
        let chain_mmr = ChainMmr::default();
        store.insert_chain_mmr_node(chain_mmr).unwrap();

        // Retrieving an existing chain mmr node should succeed
        assert!(store.get_chain_mmr_hash_by_id(1).is_ok());

        // Retrieving a non existing chain mmr node should fail
        assert!(store.get_chain_mmr_hash_by_id(2).is_err());
    }
}