Deploy two hosts for benchmarking nimbus-eth1 import #194
Comments
How urgently do we need the 2 hosts? Alternatively, I could get dedicated hosts, which would be comparatively more expensive. My assumption is that hosts from the Auction might take longer to get compared to dedicated ones.
After discussing with @jakubgs, I finally went ahead with the following: 2 x Dedicated Server AX42. Order details and possible wait times:
These 2 AX42 hosts have been activated by Hetzner and currently boot into the rescue system.
This commit adds 2 hetzner AX42 hosts for eth1 benchmarking to our network. related issue: #194
I will use … Next steps are as follows:
Sounds correct. Remember that the timer will have to do several things:
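For context, here is a minimal sketch of the steps such a timer-driven service might perform, assuming hypothetical paths, a results repository checkout, and that the import process writes its per-block statistics to a CSV file; the actual role/playbook may well differ:

```bash
#!/usr/bin/env bash
# Sketch only: paths, repo locations and the build target are assumptions.
set -euo pipefail

BENCH_DIR=/data/nimbus-eth1               # assumed nimbus-eth1 checkout
RESULTS_DIR=/data/nimbus-eth1-benchmarks  # assumed results repo checkout
DATA_DIR=/data/bench-db                   # assumed import database dir
TEMPLATE_DB=/data/template-db             # assumed pre-seeded template DB

# 1. Update and build the latest nimbus-eth1 commit.
git -C "$BENCH_DIR" pull
make -C "$BENCH_DIR" -j"$(nproc)"         # build target left generic on purpose

# 2. Reset the database from the template so every run starts from the same state.
rm -rf "$DATA_DIR"
cp -a "$TEMPLATE_DB" "$DATA_DIR"

# 3. Run the import (exact flags omitted; see nimbus-eth1 docs), which exports a CSV.

# 4. Push the exported CSV to the results repository.
COMMIT=$(git -C "$BENCH_DIR" rev-parse --short HEAD)
cp "$DATA_DIR"/*.csv "$RESULTS_DIR/"
git -C "$RESULTS_DIR" add -A
git -C "$RESULTS_DIR" commit -m "Short benchmark results for ${COMMIT}"
git -C "$RESULTS_DIR" push
```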
As per @arnetheduck:
The short benchmark was run and completed in ~12 hours.
However, this was run on a RAID 0 setup across 3 drives, so further benchmarks will be run on devices without RAID 0.
I made this GitHub repo to hold the exported CSV results of the short benchmark test. The format of the reports is still a work in progress; for now the systemd service just pushes the CSV exported by the import process to this GitHub repo. I also made this role for building nimbus-eth1, cleaning up the bench-01 host, and restarting the short benchmark from a clean state.
Thinking further about the expected report, the following items should be covered:
The folder structure could be:
and similar for …
I wonder if the folder structure shouldn't be … since, if we go for a folder per day, the root of the repo will become quite a big list very quickly.
Hmm, indeed. Also, thinking from a search standpoint, I believe the nimbus team would be more interested in looking up the performance of the short benchmark for a particular commit, so a folder per commit would also not be a bad idea; within that folder we could then have various files with a timestamp identifier attached.
I think using dates in the folder structure will make for a nicer format. Using commits for folders is not great because when you call …
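As a purely hypothetical illustration of the date-based option (with the commit kept in the file name, as suggested above), the result path could be built like this; the layout actually used in nimbus-eth1-benchmarks may differ:

```bash
# Hypothetical: date-based folders, commit embedded in the file name.
DAY=$(date --utc +%Y-%m-%d)
COMMIT=$(git -C /data/nimbus-eth1 rev-parse --short HEAD)  # assumed checkout path
DEST="short/${DAY}/${COMMIT}-import-stats.csv"
mkdir -p "$(dirname "$DEST")"
cp /tmp/import-stats.csv "$DEST"                           # assumed CSV source
# e.g. short/2024-10-28/abc1234-import-stats.csv
```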
Additional feature: when a benchmark runs, it should be compared against the previous commit using https://github.com/status-im/nimbus-eth1/blob/master/scripts/block-import-stats.py. This script compares two CSV files and outputs a comparison table, as can be seen in this comment: status-im/nimbus-eth1#2413 (comment). See also: https://github.com/status-im/nimbus-eth1/tree/master/scripts#block-import-statspy. Its output could be saved to a text file together with the other outputs.
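A rough sketch of how that comparison step could be hooked into the run, assuming block-import-stats.py is invoked with the previous and current CSV as arguments (per the linked scripts README) and assuming the hypothetical date-based layout sketched earlier:

```bash
# Compare the current run against the previous one and archive the table.
RESULTS_DIR=/data/nimbus-eth1-benchmarks                    # assumed results repo
CURR_CSV=$(ls -t "$RESULTS_DIR"/short/*/*.csv | sed -n 1p)  # newest CSV
PREV_CSV=$(ls -t "$RESULTS_DIR"/short/*/*.csv | sed -n 2p)  # previous CSV
python3 /data/nimbus-eth1/scripts/block-import-stats.py \
    "$PREV_CSV" "$CURR_CSV" > "${CURR_CSV%.csv}-vs-previous.txt"
```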
Example of comparing the current run with the previous run for the short benchmark: … I will clean up all old / incomplete benchmarks from the repository now.
Short benchmarking reports have been stable for a while. I consider this task done, unless there are any more changes or bugs in the reports.
Another requirement is redirecting the output of the block-import python script to:
A main repo …
Generated …
What kind of monstrosity is this? You know you can template files in bash using …
Yes, just copy-pasting what Jacek had mentioned in chat, to keep track.
"whatever", as long as it puts an overview table in the "top-level" README.
The main README is generated here: https://github.com/status-im/nimbus-eth1-benchmarks/blob/master/README.md. It uses this template: https://github.com/status-im/nimbus-eth1-benchmarks/blob/master/README-TEMPLATE.md and is regenerated on each benchmark with …
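For reference, one common way to do this kind of templating in bash is envsubst; this is only an illustration with a made-up ${OVERVIEW_TABLE} placeholder, not necessarily the mechanism the repo actually uses:

```bash
# Render README.md from README-TEMPLATE.md, substituting shell variables.
# ${OVERVIEW_TABLE} is a hypothetical placeholder in the template.
export OVERVIEW_TABLE
OVERVIEW_TABLE=$(cat overview-table.md)   # assumed pre-generated markdown table
envsubst < README-TEMPLATE.md > README.md
```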
I consider this completed.
The development of nimbus-eth1 is ramping up, and for that reason we will need to benchmark the process of importing the network state data with validation from ERA files. Currently this process is not optimized, as it's in its early stages of development, which means a full import of mainnet would probably take more than a week. Despite that, we need to start measuring results in order to track progress in import process optimization.

This benchmarking will require two kinds of tests on two hosts:

Both of those will not finish, so they will have to be aborted, but the number of blocks they are able to sync will be the measure of performance. These performance reports will have to be archived in some way; the simplest way would be to commit them to a dedicated repository. In addition to the reports gained this way, the import process will expose a /metrics endpoint which we can scrape with Prometheus.

The two hosts can be purchased from Hetzner, as the hosts will not be using external connections. The storage required will need to be at least 2x the size of the Mainnet ERA and ERA1 files, which is currently ~1 TB, so a 2 TB additional NVMe would suffice. Aside from that, more than 16 GB of RAM and 4 cores is enough.
Update as of 28 Oct:
The short test must begin with a template DB containing blocks from 20M, since measuring the import process from these blocks is what matters to the nimbus team. Jacek to provide this template DB.
The long test will begin with no template DB and is also an import-only test; it usually takes around a week.
The goal is to measure the time taken to complete the import in both cases.