What steps did you take and what happened:
Hi all,
I deployed the Gatekeeper Helm chart into a k8s cluster with a custom values file, roughly as sketched below.
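The install followed the standard chart instructions from the Gatekeeper docs; `values.yaml` here stands in for my actual values file, whose contents are not included:

```console
$ helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
$ helm install gatekeeper gatekeeper/gatekeeper \
    --namespace gatekeeper-system --create-namespace \
    --values values.yaml   # actual values file contents omitted
```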
I used some of the Gatekeeper templates from this repository: https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy
Then I deployed the corresponding constraints; a sketch of one follows.
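A minimal sketch of one of the constraints, based on the library's `K8sPSPHostNamespace` sample (the exact specs I applied may differ slightly):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPHostNamespace
metadata:
  name: psp-host-namespace   # matches the constraint listed below
spec:
  # enforcementAction defaults to "deny", as shown in the kubectl output
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```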
But after deploying and waiting more than 10 minutes, the constraints did not show any violation reports, and gatekeeper-audit kept restarting (OOMKilled -> CrashLoopBackOff -> pod restarted). Note that TOTAL-VIOLATIONS stays empty for every constraint:

```console
$ kubectl get constraints
NAME                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/psp-host-namespace   deny

NAME                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/psp-host-network-ports   deny

NAME                                                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/psp-allow-privilege-escalation-container   deny

NAME                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/psp-drop-capabilities   deny

NAME                                                                        ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/psp-pods-allowed-user-ranges   deny

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-privileged-container   deny
```

```console
$ kubectl get pods -n gatekeeper-system
NAME                                             READY   STATUS    RESTARTS        AGE
gatekeeper-audit-f6874cd77-rrjb5                 1/1     Running   4 (5m14s ago)   32m
gatekeeper-controller-manager-744d7f67bf-kl2pp   1/1     Running   0               32m
```
Events from `kubectl describe pod` on gatekeeper-audit while it is in CrashLoopBackOff:
```console
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  43m                   default-scheduler  Successfully assigned gatekeeper-system/gatekeeper-audit-f6874cd77-rrjb5 to ip-192-168-60-246.eu-central-1.compute.internal
  Warning  Unhealthy  36m (x2 over 43m)     kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
  Normal   Pulled     22m (x4 over 43m)     kubelet            Container image "openpolicyagent/gatekeeper:v3.12.0" already present on machine
  Normal   Created    22m (x4 over 43m)     kubelet            Created container manager
  Normal   Started    22m (x4 over 43m)     kubelet            Started container manager
  Warning  Unhealthy  22m (x8 over 43m)     kubelet            Readiness probe failed: Get "http://192.168.57.119:9090/readyz": dial tcp 192.168.57.119:9090: connect: connection refused
  Warning  Unhealthy  8m38s                 kubelet            Liveness probe failed: Get "http://192.168.57.119:9090/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  BackOff    8m17s (x15 over 29m)  kubelet            Back-off restarting failed container
  Warning  Unhealthy  28s                   kubelet            Liveness probe failed: Get "http://192.168.57.119:9090/healthz": read tcp 192.168.60.246:55458->192.168.57.119:9090: read: connection reset by peer
```
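The OOMKilled reason can be read back from the container's last termination state (plain kubectl, pod name from the listing above):

```console
$ kubectl get pod gatekeeper-audit-f6874cd77-rrjb5 -n gatekeeper-system \
    -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
OOMKilled
```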
There is also some log output in gatekeeper-controller-manager.
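Those logs can be pulled with the standard command (deployment name as created by the chart):

```console
$ kubectl logs -n gatekeeper-system deploy/gatekeeper-controller-manager
```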
What did you expect to happen:
The gatekeeper-audit pod should stay up (no OOMKilled restarts) and the constraints should report a TOTAL-VIOLATIONS count.
Anything else you would like to add:
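For reference, this is the kind of chart values override that might give gatekeeper-audit more memory headroom; a sketch only, assuming the chart's standard `audit.resources` and `auditChunkSize` settings, and not verified to resolve the crash:

```yaml
# Sketch of a values override for the gatekeeper chart (numbers are guesses,
# not tested): raise the audit container's memory limit and keep audit list
# operations chunked so the in-memory cache stays bounded.
audit:
  resources:
    limits:
      memory: 1Gi
    requests:
      cpu: 100m
      memory: 512Mi
auditChunkSize: 500
```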
Do you have any suggestions for this case?
Thanks!
Environment:
- Gatekeeper version: v3.12.0 (image openpolicyagent/gatekeeper:v3.12.0)
- Kubernetes version (use `kubectl version`): v1.25