manifests: Cleanup controller Deployment manifest #6
Conversation
/cc @AlonaKaplan
Force-pushed from d89d4ca to a1751dd
```yaml
        # - key: kubernetes.io/os
        #   operator: In
        #   values:
        #   - linux
        securityContext:
          runAsNonRoot: true
```
IIRC, `runAsUser` was required as well.
Can you please verify the Deployment can be deployed on a restricted NS?

```shell
kubectl label --dry-run=server --overwrite ns ns-name pod-security.kubernetes.io/enforce=restricted
```
Also, I cannot add a comment on the relevant line so I'm writing here: the name "system" for the namespace is problematic. Please use the name suggested in the design doc, or another more specific name.
> IIRC, `runAsUser` was required as well. Can you please verify the Deployment can be deployed on a restricted NS?
> `kubectl label --dry-run=server --overwrite ns ns-name pod-security.kubernetes.io/enforce=restricted`

Yes, it works; no warning/error raised.
EDIT: From what I know, specifying `runAsNonRoot` ensures the container will not run as the root user (UID 0).
You can achieve that by specifying `runAsUser` with a UID that is not root (not zero).
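A minimal sketch of a securityContext that sets both fields together (the UID 1000 is an arbitrary non-root value chosen for illustration, not taken from this repo):

```yaml
# Sketch: combining both settings at the pod level.
# runAsNonRoot makes the kubelet refuse to start any container that would
# run as UID 0; runAsUser pins the exact (non-root) UID the container uses.
securityContext:
  runAsNonRoot: true
  runAsUser: 1000    # arbitrary non-root UID, for illustration only
```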
> Also, I cannot add the comment on the relevant line so writing here - the name "system" for the namespace is problematic. Please use the name suggested in the design doc, or another more specific name.
Regarding the namespace, it is eventually changed by Kustomize to `selfserviceoverlay-system`:
https://github.com/AlonaKaplan/selfserviceoverlay/blob/main/config/default/kustomization.yaml#L2

Running `make deploy` creates the mentioned namespace and all related objects inside it.
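For reference, the relevant part of `config/default/kustomization.yaml` looks roughly like this (a sketch based on the linked file; the `namePrefix` value is an assumption from the standard kubebuilder layout, not quoted from the repo):

```yaml
# config/default/kustomization.yaml (sketch)
# Kustomize rewrites the namespace of every rendered object to this value,
# so the placeholder "system" namespace in the base manifests never reaches
# the cluster.
namespace: selfserviceoverlay-system

# Standard kubebuilder scaffolding also prefixes all resource names.
namePrefix: selfserviceoverlay-   # assumed from the default kubebuilder layout
```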
> You can achieve that by specifying `runAsUser` with a UID that is not root (not zero)

Yes, that's what kubevirt and kubesecondarydns do. I wonder how you created the Deployment in a restricted ns without it. I remember that in the past I've got a warning that `runAsUser` is missing.
It's possible because `runAsNonRoot` is specified.
But you have to specify the non-root user id.
`runAsNonRoot` makes sure the container won't run with UID 0; Kubernetes validates that the container image is configured to run as a non-root user at runtime.
`runAsUser` is more granular in the sense that it enables specifying the exact UID.
According to the restricted pod security policy doc, k8s validates that `runAsNonRoot` is set to true (mandatory), and/or that `runAsUser` is not zero (seems optional).
See "Running as Non-root" and "Running as Non-root user" in the table:
https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
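For context, the fields the restricted profile actually checks can be sketched as a container-level securityContext (field names taken from the linked Pod Security Standards page; the UID value is illustrative):

```yaml
# Sketch of a container securityContext satisfying the "restricted" profile.
securityContext:
  runAsNonRoot: true                # "Running as Non-root": must be true (mandatory)
  runAsUser: 1000                   # "Running as Non-root user": if set, must be non-zero
  allowPrivilegeEscalation: false   # required by the restricted profile
  seccompProfile:
    type: RuntimeDefault            # must be RuntimeDefault or Localhost
  capabilities:
    drop: ["ALL"]                   # restricted profile requires dropping ALL capabilities
```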
```yaml
      # seccompProfile:
      #   type: RuntimeDefault
      seccompProfile:
        type: RuntimeDefault
      containers:
      - command:
        - /manager
```
Please update the image to a quay repository we have access to.
And also update the Makefile to push the image to that destination.
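A sketch of the affected Deployment fragment; the quay path below is a placeholder invented for illustration, not the repo's actual image location:

```yaml
# Sketch: the container spec whose image field the review asks to update.
containers:
- name: manager
  command:
  - /manager
  # Placeholder destination; replace with a quay repository the team can push to.
  image: quay.io/example-org/selfserviceoverlay-controller:latest
```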
For development I think we'd better push images directly to cluster nodes instead of quay; it will be more efficient and enable quick iterations.
I posted a PR that introduces a cluster-sync target for that purpose; it builds and pushes images directly to cluster nodes: #8
General comments:
Remove node-affinity comments as there is no need to manage multi-arch nodes at the moment. Set seccompProfile to RuntimeDefault, as there is no intention to support older versions of k8s/OCP (< 1.19/4.11) at the moment. Signed-off-by: Or Mergi <[email protected]>
Done
Force-pushed from a1751dd to b44584b
Apart from that, they are mainly used for better managing/filtering of these objects; it can come in handy in a production cluster where there are lots of workloads of different kinds (system, app, etc.). Let me know if you prefer removing them anyway.
@AlonaKaplan, I posted a PR to remove the 'apps.kubernetes.io' labels: #9