
Include default requests/limits in all loki + promtail + grafana-agent deployments #358

Closed
ubergesundheit opened this issue Jun 22, 2021 · 12 comments

Comments

@ubergesundheit (Member)

The loki-app + promtail-app should include reasonable default limits/requests
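
For illustration, a minimal sketch of what such defaults could look like in each chart's values.yaml (key names and numbers here are assumptions, not the actual chart schemas):

# hypothetical values.yaml defaults; real chart keys may differ
loki:
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      memory: 512Mi
promtail:
  resources:
    requests:
      cpu: 25m
      memory: 128Mi
    limits:
      memory: 256Mi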

@hervenicol commented Sep 29, 2022

When we review requests/limits on Loki, it would be good to keep this issue in mind as well: https://github.com/giantswarm/giantswarm/issues/21562

@TheoBrigitte (Member)

As we recently did a lot of tuning on Loki, do we still need this @hervenicol?

@hervenicol

For Loki we should be good, but I didn't do anything on Promtail.

@QuentinBisson QuentinBisson changed the title Include default requests/limits in all loki + promtail deployments Include default requests/limits in all loki + promtail + grafana-agent deployments Mar 5, 2024
@Rotfuks (Contributor) commented Jul 9, 2024

Let's check if this is done yet, or if we still have some todos here to set requests/limits.

@QuentinBisson

Loki is fine:

> k get sts -n loki loki-backend -oyaml | yq '.spec.template.spec.containers.[].resources'
limits:
  cpu: 100m
  memory: 100Mi
requests:
  cpu: 50m
  memory: 50Mi
limits:
  memory: 3Gi
requests:
  cpu: 200m
  memory: 1Gi
> k get sts -n loki loki-write -oyaml | yq '.spec.template.spec.containers.[].resources'
limits:
  memory: 8Gi
requests:
  cpu: "1"
  memory: 4Gi
> k get deploy -n loki loki-read -oyaml | yq '.spec.template.spec.containers.[].resources'
limits:
  memory: 3Gi
requests:
  cpu: 200m
  memory: 1Gi

Promtail is okay:

> k get ds -n kube-system promtail -oyaml | yq '.spec.template.spec.containers.[].resources'
limits:
  cpu: "1"
  memory: 256Mi
requests:
  cpu: 25m
  memory: 128Mi

Grafana-agent/Alloy are not:

> k get deploy -n kube-system grafana-agent -oyaml | yq '.spec.template.spec.containers.[].resources'
{}
requests:
  cpu: 1m
  memory: 5Mi
> k get deploy -n monitoring alloy-rules -oyaml | yq '.spec.template.spec.containers.[].resources'
{}
requests:
  cpu: 1m
  memory: 5Mi
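
For reference, a hedged one-off way to set requests/limits on the grafana-agent container (numbers are placeholders; in practice this belongs in the chart values so the change survives redeploys):

> kubectl -n kube-system patch deployment grafana-agent --type=json \
    -p='[{"op": "add", "path": "/spec/template/spec/containers/0/resources", "value": {"requests": {"cpu": "100m", "memory": "256Mi"}, "limits": {"memory": "512Mi"}}}]'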

@QuentinBisson QuentinBisson self-assigned this Jul 16, 2024
@QuentinBisson

@giantswarm/team-atlas I would really like some thoughts on how to proceed here. Should we set some arbitrary resource requests/limits, use VPA, or use HPA?

The tricky part is clustering: we don't know yet whether we will want to play with it, but it does support HPAs.

@hervenicol

Why is it important to make the right decision regarding VPA or HPA right now?

I guess it's because it requires a new olly-bundle release, whereas we can change the deployment type for grafana-agent (or alloy-logs or logging agent) directly from additional values on the MC via logging-operator.
Right?

Otherwise, if both configs (xPA and deployment type) can be set up from the same place, we should start with VPA and move to HPA later when the need arises.

@QuentinBisson

I'm not trying to make the right decision for the future, but knowing where we're headed changes what I have to do (adding VPA support upstream is different from setting resources :) )

@hervenicol

Oh, there's no VPA upstream! In that case we can only add VPA to our own chart.
Contributing VPA to the upstream chart could be nice as well, but given the delay for PRs I think we should not wait for that before we do something on our side.
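
A minimal sketch of what that could look like in our own chart: a VPA template gated behind a values flag (the helper name, values key, and Auto update mode are assumptions, not the actual alloy-app layout):

{{- if .Values.verticalPodAutoscaler.enabled }}
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: {{ include "alloy.fullname" . }}
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "alloy.fullname" . }}
  updatePolicy:
    updateMode: Auto
{{- end }}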

@QuentinBisson

Upstream PR: grafana/alloy#1305.
I'll let this wait a bit and see how it goes by the end of the week :)

@QuentinBisson

Initial VPA PR: giantswarm/alloy-app#44
Followed by a fix: giantswarm/alloy-app#46
Configured in prometheus-rules for now: giantswarm/prometheus-rules#1339

@QuentinBisson

Alloy now has proper limits, and we can enable VPA on memory if needed.
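
For reference, limiting the VPA to memory only would be a containerPolicies entry along these lines (container name is an assumption):

  resourcePolicy:
    containerPolicies:
      - containerName: alloy
        controlledResources: ["memory"]
        controlledValues: RequestsAndLimits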
