Revise Metrics #460
Conversation
Branch updated: 1b32994 → e380b6f
Codecov Report: Attention.

@@            Coverage Diff             @@
##             main     #460      +/-   ##
==========================================
+ Coverage   92.03%   92.15%   +0.11%
==========================================
  Files         132      130       -2
  Lines        9508     9316     -192
==========================================
- Hits         8751     8585     -166
+ Misses        757      731      -26

☔ View full report in Codecov by Sentry.
Branch updated: 00e051d → c2a86de
Co-authored-by: dtrai2 <[email protected]>
also improves test coverage by excluding type-checking-only imports from coverage measurement
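As an aside, a common pattern for such type-checking-only imports (an assumption about how it is done here, not taken from the Logprep code) is to guard them with `typing.TYPE_CHECKING` and mark the branch as excluded from coverage:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:  # pragma: no cover
    # Imports used only by static type checkers; this branch never
    # runs at runtime, so coverage should not count its lines.
    from collections.abc import Iterator


def count_items(items: "Iterator[int]") -> int:
    # The annotation is a string, so Iterator need not exist at runtime.
    return sum(1 for _ in items)
```

With the `# pragma: no cover` marker, coverage tools skip the guarded import block instead of reporting it as untested.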
Those changes are amazing! I have only one small remark: the pipeline times are never added to events, even if APPEND_TO_EVENT is set. Having those times would still be great, if that's possible without a massive hassle. Otherwise I would approve it now. What do you say?
There is one more thing: the directory PROMETHEUS_MULTIPROC_DIR is now deleted after Logprep is shut down.
This means that the directory needs to be re-created each time Logprep restarts. Is this the desired behaviour?
hello @ppcad, the PROMETHEUS_MULTIPROC_DIR should only be cleared, not deleted. But you have to provide it: in a Kubernetes environment you could add an emptyDir volume to the deployment and mount it, so the volume is there when Logprep starts.
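The clear-but-keep behaviour described above could look roughly like this (the function name and details are a hypothetical sketch, not the actual Logprep implementation):

```python
import shutil
from pathlib import Path


def clear_directory(path: Path) -> None:
    """Remove a directory's contents but keep the directory itself.

    Hypothetical helper: keeping the directory means an externally
    provided mount (e.g. a Kubernetes emptyDir volume) survives a
    Logprep restart, while stale multiprocess metric files are removed.
    """
    for entry in path.iterdir():
        if entry.is_dir() and not entry.is_symlink():
            shutil.rmtree(entry)
        else:
            entry.unlink()
```

On shutdown this would be called on PROMETHEUS_MULTIPROC_DIR instead of deleting the directory outright.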
Okay, in that case it needs to be fixed.
just empty its contents; restored a code state from an earlier version
I checked out the pipeline processing times and how we could add them to the events. The problem is that with the current implementation it is not possible, because we use two different decorators for this: if the processing times should be appended to the events we use one decorator, and if not we use a different one.

The problem is that the decorators are applied at module load time. This means that if we load a module before the configuration is parsed, we might decorate a function with the wrong decorator. The processor Python files are luckily loaded after the configuration is parsed, but the pipeline seems to be loaded before. So even if I extend the appending functionality, the pipeline processing times still won't be appended, as the configuration is set too late. I see the following options:
If we want to add this again, I personally would pick option 3. It is not consistent with our current configuration pattern, but it has the smallest impact on code changes. Considering that this feature might be dropped in the future, I think it is a good enough solution.
I agree. If it has the least impact on the code, then I would pick option 3, since appending to events will eventually be removed completely.
option 3 please 👍
- needs to be configured via an environment variable
and log the exit code of failed pipelines
Please add my changes to the dashboards; without them, garbage collections are not shown.
Review comments on quickstart/exampledata/config/grafana/dashboards/logprep-dashboard.json (outdated, resolved)
…oard.json Co-authored-by: Jörg Zimmermann <[email protected]>
I marked two places where the environment variable was not replaced with the new name.
Other than that, it works great, and I will approve it as soon as this is changed.
- fixed a wrong env variable, which led to the wrong decorator being used
- added another assertion to ensure that the correct decorator was used
ah, thanks for that find! It was actually also a broken test: with that wrong environment variable the test should have failed. I fixed the env and added an assertion.
Thank you!
- could possibly break configurations if the same rule is used in both rule trees; add an id to each rule or delete the possibly redundant rule
- changes in quickstart/exampledata/config/grafana/dashboards
- adds an id for all rules to identify rules in metrics and logs
- if no id is given, the id will be generated in a stable way
- checks id uniqueness on processor level over both rule trees to ensure metrics are counted correctly on rule level
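Generating an id "in a stable way" could be sketched as hashing a canonical serialization of the rule definition, so the same rule always yields the same id across restarts (this is an illustrative assumption, not the actual Logprep implementation):

```python
import hashlib
import json


def generate_stable_id(rule_definition: dict) -> str:
    """Derive a deterministic id from a rule definition.

    Hypothetical sketch: serializing with sorted keys makes the hash
    independent of key order, so an identical rule definition always
    produces the same id, no matter how the YAML/JSON was written.
    """
    canonical = json.dumps(rule_definition, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]
```

A stable id like this lets metrics and logs reference the same rule consistently between runs, while still detecting duplicate rules across both rule trees by comparing ids.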