
Add tracers to zk server #253

Merged
Merged 30 commits into era_vm_zk_server_v2 on Sep 4, 2024

Conversation

@gianbelinche gianbelinche commented Sep 2, 2024

What ❔

This PR adds tracers for the zk server. It is essentially a merge of the `era_vm_zk_server_v2` and `implement_tracers` branches, with a few changes to make everything work together.

All tests under `zk test i server` pass.

Note: a couple of tracers, `StorageInvocations` and `ValidationTracer`, were not needed to make the tests pass, so they were not implemented for era_vm. TODO comments were left for them in case we need them in the future.

The timeout for tests was increased in order to make them pass. This is because the era_vm is currently too slow; the change can be reverted once the VM is better optimized.

Related to:
lambdaclass/era-compiler-tester#24
lambdaclass/era_vm#221

Why ❔

Checklist

  • PR title corresponds to the body of PR (we generate changelog entries from PRs).
  • Tests for the changes have been added / updated.
  • Documentation comments have been added / updated.
  • Code has been formatted via zk fmt and zk lint.

@gianbelinche gianbelinche marked this pull request as ready for review September 2, 2024 20:47
core/lib/multivm/src/tracers/call_tracer/era_vm/mod.rs (outdated review threads, resolved)
Comment on lines 130 to 146
```rust
let output = if fat_data_pointer.is_pointer {
    let fat_data_pointer = FatPointer::decode(fat_data_pointer.value);
    if fat_data_pointer.len == 0 && fat_data_pointer.offset == 0 {
        Some(
            execution
                .heaps
                .get(fat_data_pointer.page)
                .unwrap()
                .read_unaligned_from_pointer(&fat_data_pointer)
                .unwrap(),
        )
    } else {
        None
    }
} else {
    None
};
```


Consider this change; I think it's clearer:

Suggested change
```rust
// Before:
let output = if fat_data_pointer.is_pointer {
    let fat_data_pointer = FatPointer::decode(fat_data_pointer.value);
    if fat_data_pointer.len == 0 && fat_data_pointer.offset == 0 {
        Some(
            execution
                .heaps
                .get(fat_data_pointer.page)
                .unwrap()
                .read_unaligned_from_pointer(&fat_data_pointer)
                .unwrap(),
        )
    } else {
        None
    }
} else {
    None
};
```

```rust
// After:
let output = match fat_data_pointer.is_pointer {
    true => {
        let fat_data_pointer = FatPointer::decode(fat_data_pointer.value);
        match (fat_data_pointer.len, fat_data_pointer.offset) {
            (0, 0) => execution
                .heaps
                .get(fat_data_pointer.page)
                .and_then(|ptr| ptr.read_unaligned_from_pointer(&fat_data_pointer).ok()),
            _ => None,
        }
    }
    _ => None,
};
```

```rust
),
enumeration_index: vm
    .storage
    .borrow_mut()
```


I don't like borrowing refcells outside of the `impl` block for this struct. The reason is that this will panic at runtime if there is another living reference to it, which could plausibly happen given that this field is public. Let's change this; do we need to change something else in our VM?

Comment on lines +79 to +98
```rust
let call_type = if let CallType::Call(far_call) = current_call.r#type {
    if matches!(far_call, FarCallOpcode::Mimic) {
        let previous_caller = execution
            .running_contexts
            .first()
            .map(|call| call.caller)
            // Actually it's safe to just unwrap here, because we have at least one call in the stack
            // But i want to be sure that we will not have any problems in the future
            .unwrap_or(current.caller);
        if previous_caller == CONTRACT_DEPLOYER_ADDRESS {
            CallType::Create
        } else {
            CallType::Call(far_call)
        }
    } else {
        CallType::Call(far_call)
    }
} else {
    return;
};
```
@Oppen Oppen commented Sep 3, 2024
You can reduce nesting a bit by using let <destructuring pattern> else here:

Suggested change
```rust
// Before:
let call_type = if let CallType::Call(far_call) = current_call.r#type {
    if matches!(far_call, FarCallOpcode::Mimic) {
        let previous_caller = execution
            .running_contexts
            .first()
            .map(|call| call.caller)
            // Actually it's safe to just unwrap here, because we have at least one call in the stack
            // But i want to be sure that we will not have any problems in the future
            .unwrap_or(current.caller);
        if previous_caller == CONTRACT_DEPLOYER_ADDRESS {
            CallType::Create
        } else {
            CallType::Call(far_call)
        }
    } else {
        CallType::Call(far_call)
    }
} else {
    return;
};
```

```rust
// After:
let CallType::Call(far_call) = current_call.r#type else {
    return;
};
let call_type = if matches!(far_call, FarCallOpcode::Mimic) {
    let previous_caller = execution
        .running_contexts
        .first()
        .map(|call| call.caller)
        // Actually it's safe to just unwrap here, because we have at least one call in the stack
        // But i want to be sure that we will not have any problems in the future
        .unwrap_or(current.caller);
    if previous_caller == CONTRACT_DEPLOYER_ADDRESS {
        CallType::Create
    } else {
        CallType::Call(far_call)
    }
} else {
    CallType::Call(far_call)
};
```
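For reference, `let … else` (stable since Rust 1.65) destructures or diverges: the `else` branch must `return`, `break`, or otherwise never fall through, which is what flattens the nesting in the suggestion above. A self-contained sketch with a hypothetical enum standing in for the real `CallType`:

```rust
// Hypothetical simplified enum, not the real multivm CallType.
enum CallType {
    Call(u8),
    Create,
}

fn classify(call: CallType) -> Option<&'static str> {
    // Destructure or bail out early: the `else` branch must diverge,
    // so the happy path below stays at the top indentation level.
    let CallType::Call(opcode) = call else {
        return None;
    };
    Some(if opcode == 0 { "mimic" } else { "far call" })
}

fn main() {
    assert_eq!(classify(CallType::Call(0)), Some("mimic"));
    assert_eq!(classify(CallType::Call(7)), Some("far call"));
    assert!(classify(CallType::Create).is_none());
}
```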

gianbelinche (Author) replied:

The thing is that this code is exactly as-is for the other VMs, so I think it is better to leave it unchanged. The only change was replacing the `unreachable`, so we don't stop execution due to a tracer (even though the other VMs still have it).

@gianbelinche gianbelinche merged commit ec01627 into era_vm_zk_server_v2 Sep 4, 2024
9 of 44 checks passed
@gianbelinche gianbelinche deleted the era_vm_zk_server_and_tracers branch September 4, 2024 16:43
4 participants