feat: Prometheus metrics export and import #1335
Conversation
14112fa to c73a5c6
# Function to display usage
usage() {
    echo "Usage: $0 [-a AWS_ACCESS_KEY_ID] [-s AWS_SECRET_ACCESS_KEY] [-t AWS_SESSION_TOKEN] -f FILENAME -b BUCKET_NAME -n KUBERNETES_NAMESPACE [--local]"
This would have to be modified if default values were provided for every option.
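For illustration only, a minimal sketch of how the usage message could surface such defaults; the default values shown (bucket name, namespace) are hypothetical and are not taken from this PR:

```bash
# Hypothetical defaults for illustration only; the real script takes these
# values from its command-line flags.
BUCKET_NAME="${BUCKET_NAME:-armonik-metrics}"            # assumed default bucket
KUBERNETES_NAMESPACE="${KUBERNETES_NAMESPACE:-armonik}"  # assumed default namespace

usage() {
  echo "Usage: $0 [-a AWS_ACCESS_KEY_ID] [-s AWS_SECRET_ACCESS_KEY] [-t AWS_SESSION_TOKEN]" \
       "[-f FILENAME] [-b BUCKET_NAME (default: ${BUCKET_NAME})]" \
       "[-n KUBERNETES_NAMESPACE (default: ${KUBERNETES_NAMESPACE})] [--local]"
}
```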
echo "Error: Failed to copy file from pod."; | ||
exit 1; | ||
} | ||
echo "File copied from pod successfully." |
Isn't it a whole directory that is actually copied?
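For context, a directory-level `kubectl cp` copies the whole tree, so the messages could reflect that; a minimal sketch under assumed paths (the `/prometheus` data path and the variable values are assumptions, not necessarily this PR's script):

```bash
# Minimal sketch, assuming the Prometheus data lives under /prometheus in the pod;
# the namespace and pod names are placeholders, not the PR's actual values.
KUBERNETES_NAMESPACE="monitoring"
PROMETHEUS_POD="prometheus-0"

# kubectl cp on a directory path copies the whole directory tree, not a single file.
kubectl cp "${KUBERNETES_NAMESPACE}/${PROMETHEUS_POD}:/prometheus" ./prometheus-data || {
  echo "Error: Failed to copy data directory from pod."
  exit 1
}
echo "Prometheus data directory copied from pod successfully."
```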
c73a5c6 to 650eee6
Motivation
Prometheus metrics are usually lost once the ArmoniK infrastructure is destroyed. This PR adds scripts to save this data so it can be examined and analyzed later.
Description
This PR introduces a script that can be run to export Prometheus data into an S3 bucket (at the end of GitHub workflows, for example), along with a minimal monitoring deployment that currently includes only Grafana and Prometheus, and a script that quickly imports previously saved Prometheus data into that deployment so it can be studied and analyzed.
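A minimal sketch of what such an export flow typically looks like, not the PR's exact script; it assumes the Prometheus admin API (`--web.enable-admin-api`) is enabled, that `wget` is available in the container, and that the namespace, pod, and bucket names are placeholders:

```bash
# Hypothetical names for illustration only.
NAMESPACE="monitoring"
POD="prometheus-0"
BUCKET="s3://my-metrics-bucket"

# 1. Ask Prometheus to write a consistent TSDB snapshot.
kubectl exec -n "$NAMESPACE" "$POD" -- \
  wget -qO- --post-data='' http://localhost:9090/api/v1/admin/tsdb/snapshot

# 2. Copy the snapshot directory out of the pod.
kubectl cp "$NAMESPACE/$POD:/prometheus/snapshots" ./snapshots

# 3. Archive the snapshot and upload it to S3 for later analysis.
tar czf prometheus-snapshot.tar.gz ./snapshots
aws s3 cp prometheus-snapshot.tar.gz "$BUCKET/prometheus-snapshot.tar.gz"
```

The import side would essentially reverse these steps: download the archive from S3, extract it, and place the data where the minimal Grafana/Prometheus deployment expects its storage before starting Prometheus.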
Testing
Scripts were tested on both local and AWS deployments.
Impact
Not relevant.
Additional Information
Check README.md for additional information.
Checklist