Nimbus - could not find process_resident_memory_bytes in metrics #70

Open
joebiker opened this issue Nov 22, 2020 · 5 comments


joebiker commented Nov 22, 2020

Running Nimbus client 0.6.6
Running eth2stats-client version v0.0.16+d729a1d

eth2stats-client logs a warning because it is looking for process_resident_memory_bytes and cannot find it in the metrics.

The closest stats that Nimbus reports are:
nim_gc_mem_bytes 4100096.0
nim_gc_mem_occupied_bytes 2733144.0
sqlite3_memory_used_bytes 2614336.0

@protolambda (Collaborator)

Have you tried asking in the Discord? I think it should work if you use the correct metrics flags (see the readme of this repo for help). Nimbus might also have to be built with the "insecure" flag set to true (I think it was NIMFLAGS="-d:insecure") to enable extra HTTP functionality like metrics (this might have changed; ask the Nimbus team).
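
For reference, a rough sketch of the build and run commands this refers to (the make target and binary path are assumptions and may differ between Nimbus versions; check the Nimbus docs for the exact invocation):

    # rebuild Nimbus with the extra HTTP/metrics functionality enabled
    make NIMFLAGS="-d:insecure" beacon_node

    # run it with the metrics server enabled, listening on all interfaces on port 8080
    ./build/beacon_node --metrics --metrics-port=8080 --metrics-address=0.0.0.0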

@joebiker (Author)

@protolambda Yes, I've done that. I compiled using the suggested NIMFLAGS.
I would revert to Nimbus 0.6.5 to show the difference in stats, but I'm struggling to get eth2stats to run as a daemon under macOS launchctl (it exits with code 1 every time).

@joebiker (Author)

I see this in the README.md
The process_resident_memory_bytes gauge is extracted from the Prometheus metrics endpoint.

However, I'm not sure how that applies here. I am running Prometheus, but there doesn't seem to be any command-line option for pointing eth2stats-client at Prometheus?

@protolambda (Collaborator)

That's poor wording in the readme; it just grabs the gauge from the metrics endpoint of the eth2 client itself. Prometheus is the more common consumer of that endpoint, but it's not relevant here.

Are you using all the Nimbus metrics flags? --metrics --metrics-port=8080 --metrics-address=0.0.0.0
You could try curling the /metrics endpoint to check whether it can be reached and has the expected contents, as in the sketch below.
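
For example, something along these lines (adjust the host and port to match your --metrics-address and --metrics-port settings) should show whether the gauge is exposed:

    # fetch the metrics page and look for the gauge eth2stats expects
    curl -s http://127.0.0.1:8080/metrics | grep process_resident_memory_bytes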

@joebiker (Author)

Yes, I checked the /metrics endpoint on Nimbus 0.6.6; the metrics I listed above are the closest I could find to a memory-usage statistic.
