Gatekeeper Constraint did not show violation report #3031

Closed
lnvn opened this issue Oct 4, 2023 · 2 comments
Labels
bug Something isn't working

Comments

lnvn commented Oct 4, 2023

What steps did you take and what happened:
Hi all,

I have deployed the Gatekeeper Helm chart into a Kubernetes cluster with the following values file:

replicas: 1
constraintViolationsLimit: 120
auditInterval: 360
controllerManager:
  resources:
    requests:
      cpu: 150m
      memory: 150Mi
audit:
  resources:
    requests:
      cpu: 150m
      memory: 150Mi
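
(For context, the chart was installed roughly as below; the Helm repo alias, release name, and values file name are illustrative, and the chart version is assumed to match the image tag shown later in the pod events:)

$~ helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
$~ helm install gatekeeper gatekeeper/gatekeeper \
     --namespace gatekeeper-system --create-namespace \
     --version 3.12.0 -f values.yaml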

I used some of the Gatekeeper templates from this repository: https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy

I then deployed the corresponding constraints.
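
One of the deployed constraints looks roughly like this (a sketch based on the library samples, not the exact manifest from this cluster):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: psp-privileged-container
spec:
  enforcementAction: deny
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]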

But after deploying and waiting for more than 10 minutes, the constraints did not show any violation report, and gatekeeper-audit kept restarting
(OOMKilled -> CrashLoopBackOff -> pod restarted).

$~ kubectl get constraints
NAME                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/psp-host-namespace   deny

NAME                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/psp-host-network-ports   deny

NAME                                                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/psp-allow-privilege-escalation-container   deny

NAME                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/psp-drop-capabilities   deny

NAME                                                                        ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/psp-pods-allowed-user-ranges   deny

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-privileged-container   deny
$~ kubectl get pods -n gatekeeper-system
NAME                                             READY   STATUS    RESTARTS        AGE
gatekeeper-audit-f6874cd77-rrjb5                 1/1     Running   4 (5m14s ago)   32m
gatekeeper-controller-manager-744d7f67bf-kl2pp   1/1     Running   0               32m

Describing the gatekeeper-audit pod (while it was in CrashLoopBackOff) shows the following events:
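
(Collected with something along these lines; the exact command is not in the original report, and the pod name is taken from the kubectl get pods output above:)

$~ kubectl describe pod gatekeeper-audit-f6874cd77-rrjb5 -n gatekeeper-system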

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  43m                   default-scheduler  Successfully assigned gatekeeper-system/gatekeeper-audit-f6874cd77-rrjb5 to ip-192-168-60-246.eu-central-1.compute.internal
  Warning  Unhealthy  36m (x2 over 43m)     kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
  Normal   Pulled     22m (x4 over 43m)     kubelet            Container image "openpolicyagent/gatekeeper:v3.12.0" already present on machine
  Normal   Created    22m (x4 over 43m)     kubelet            Created container manager
  Normal   Started    22m (x4 over 43m)     kubelet            Started container manager
  Warning  Unhealthy  22m (x8 over 43m)     kubelet            Readiness probe failed: Get "http://192.168.57.119:9090/readyz": dial tcp 192.168.57.119:9090: connect: connection refused
  Warning  Unhealthy  8m38s                 kubelet            Liveness probe failed: Get "http://192.168.57.119:9090/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  BackOff    8m17s (x15 over 29m)  kubelet            Back-off restarting failed container
  Warning  Unhealthy  28s                   kubelet            Liveness probe failed: Get "http://192.168.57.119:9090/healthz": read tcp 192.168.60.246:55458->192.168.57.119:9090: read: connection reset by peer

The gatekeeper-controller-manager logs contain entries like:

{"level":"info","ts":1696390514.0155146,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"psp-privileged-container"}}
{"level":"info","ts":1696390514.0283725,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPPrivilegedContainer","name":"psp-privileged-container"}}
{"level":"info","ts":1696390514.0401416,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPHostNetworkingPorts","name":"psp-host-network-ports"}}
{"level":"info","ts":1696390514.0617616,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPAllowPrivilegeEscalationContainer","name":"psp-allow-privilege-escalation-container"}}
{"level":"info","ts":1696390514.0803077,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPHostNetworkingPorts","name":"psp-host-network-ports"}}
{"level":"info","ts":1696390514.096445,"logger":"controller","msg":"handling constraint update","process":"constraint_controller","instance":{"apiVersion":"constraints.gatekeeper.sh/v1beta1","kind":"K8sPSPAllowPrivilegeEscalationContainer","name":"psp-allow-privilege-escalation-container"}}
2023/10/04 03:35:15 http: TLS handshake error from 192.168.1.91:38330: EOF
2023/10/04 03:35:15 http: TLS handshake error from 192.168.1.91:38356: EOF
2023/10/04 03:35:17 http: TLS handshake error from 192.168.92.127:49294: EOF
2023/10/04 03:35:17 http: TLS handshake error from 192.168.92.127:49324: EOF
2023/10/04 03:35:17 http: TLS handshake error from 192.168.92.127:49308: EOF
2023/10/04 03:40:25 http: TLS handshake error from 192.168.1.91:34574: EOF
2023/10/04 03:42:56 http: TLS handshake error from 192.168.92.127:47484: EOF
2023/10/04 03:43:25 http: TLS handshake error from 192.168.1.91:45532: EOF
2023/10/04 03:44:25 http: TLS handshake error from 192.168.1.91:32960: EOF
2023/10/04 03:44:25 http: TLS handshake error from 192.168.1.91:32966: EOF
2023/10/04 03:46:56 http: TLS handshake error from 192.168.92.127:60844: EOF
2023/10/04 03:46:56 http: TLS handshake error from 192.168.92.127:60836: EOF
2023/10/04 03:46:56 http: TLS handshake error from 192.168.92.127:60848: EOF

What did you expect to happen:

  • The gatekeeper-audit pod running normally
  • The constraints showing violation reports

Anything else you would like to add:

Do you have any suggestions for this case?
Thanks!

Environment:

  • Gatekeeper version: v3.12
  • Kubernetes version: v1.25
lnvn added the bug label Oct 4, 2023
davis-haba (Contributor) commented

Hi,

It appears your Audit Pod is OOMing. Try increasing its memory; 150Mi is relatively low.
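
One way to confirm the OOM kill (a sketch; the pod name is taken from your kubectl get pods output above) is to check the last termination reason of the audit container:

$~ kubectl get pod gatekeeper-audit-f6874cd77-rrjb5 -n gatekeeper-system \
     -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

This should print OOMKilled if the container was killed for exceeding its memory limit.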

lnvn commented Oct 11, 2023

Hi @davis-haba,

Thank you a lot! 💯
I solved the issue by increasing the audit memory limit:

audit:
  resources:
    limits:
      memory: 2000Mi
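
(The change was applied roughly like this; the release name and values file name are illustrative:)

$~ helm upgrade gatekeeper gatekeeper/gatekeeper -n gatekeeper-system -f values.yaml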

lnvn closed this as completed Oct 11, 2023