manifests: Cleanup controller Deployment manifest #6
@@ -36,35 +36,10 @@ spec:
       labels:
         control-plane: controller-manager
     spec:
-      # TODO(user): Uncomment the following code to configure the nodeAffinity expression
-      # according to the platforms which are supported by your solution.
-      # It is considered best practice to support multiple architectures. You can
-      # build your manager image using the makefile target docker-buildx.
-      # affinity:
-      #   nodeAffinity:
-      #     requiredDuringSchedulingIgnoredDuringExecution:
-      #       nodeSelectorTerms:
-      #         - matchExpressions:
-      #           - key: kubernetes.io/arch
-      #             operator: In
-      #             values:
-      #               - amd64
-      #               - arm64
-      #               - ppc64le
-      #               - s390x
-      #           - key: kubernetes.io/os
-      #             operator: In
-      #             values:
-      #               - linux
       securityContext:
         runAsNonRoot: true
-        # TODO(user): For common cases that do not require escalating privileges
-        # it is recommended to ensure that all your Pods/Containers are restrictive.
-        # More info: https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
-        # Please uncomment the following code if your project does NOT have to work on old Kubernetes
-        # versions < 1.19 or on vendors versions which do NOT support this field by default (i.e. Openshift < 4.11 ).
-        # seccompProfile:
-        #   type: RuntimeDefault
+        seccompProfile:
+          type: RuntimeDefault
       containers:
       - command:
         - /manager
Please update the image to a quay we have access to.

For development I think we are better off pushing images directly to the cluster nodes instead of quay; it is more efficient and enables quick iterations. I posted a PR that introduces a cluster-sync target for that purpose; it builds and pushes images directly to the cluster nodes: #8
@@ -75,8 +50,6 @@ spec:
           capabilities:
             drop:
               - "ALL"
-        # TODO(user): Configure the resources accordingly based on the project requirements.
-        # More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
         resources:
           limits:
             cpu: 500m
@@ -85,4 +58,4 @@ spec:
             cpu: 10m
             memory: 64Mi
       serviceAccountName: controller-manager
-      terminationGracePeriodSeconds: 10
+      terminationGracePeriodSeconds: 1
IIRC, runAsUser was required as well. Can you please verify the Deployment can be deployed in a restricted namespace?

kubectl label --dry-run=server --overwrite ns ns-name pod-security.kubernetes.io/enforce=restricted
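Another way to exercise this locally is to create a test namespace that already enforces the restricted profile and deploy into it. A minimal sketch of such a namespace manifest; the name "restricted-test" is only an illustrative placeholder:

apiVersion: v1
kind: Namespace
metadata:
  # "restricted-test" is a hypothetical name used only for this check
  name: restricted-test
  labels:
    # enforce the restricted Pod Security Standard on everything in this namespace
    pod-security.kubernetes.io/enforce: restricted

Applying the controller Deployment into that namespace should surface the same violations the --dry-run=server label check would report.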
Also, I cannot add the comment on the relevant line, so writing it here: the name "system" for the namespace is problematic. Please use the name suggested in the design doc, or another more specific name.
Yes, it works, no warning/error raised.

EDIT: From what I know, specifying runAsNonRoot ensures the container will not run as the root user (UID 0). You can achieve the same by specifying runAsUser with a UID that is not root (not zero).

Regarding the namespace, it is eventually changed by Kustomize to selfserviceoverlay-system:
https://github.com/AlonaKaplan/selfserviceoverlay/blob/main/config/default/kustomization.yaml#L2

The result of make deploy is creating the mentioned namespace and all related objects inside it.
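For context, that namespace override is the standard Kustomize mechanism; a sketch of the relevant part of config/default/kustomization.yaml, where only the namespace value is taken from the linked line and the resources entry is purely illustrative:

# Kustomize applies this namespace to all resources it renders,
# which is what replaces the scaffolded "system" namespace on make deploy.
namespace: selfserviceoverlay-system
resources:
- ../manager   # illustrative entry, not copied from the repo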
Yes, that's what kubevirt and kubesecondarydns do. I wonder how you could create the Deployment in a restricted ns without it. I remember that in the past I've got a warning that runAsUser is missing.
It's possible because runAsNonRoot is specified.
But you have to specify the non-root user id.
runAsNonRoot makes sure the container won't run with UID 0; Kubernetes will validate that the container image is configured to run as a non-root user at runtime.
runAsUser is more granular in the sense that it enables specifying the UID.
According to the restricted Pod Security Standards doc, k8s validates that runAsNonRoot is set and true (mandatory), and/or that runAsUser is not zero (seems optional). See "Running as Non-root" and "Running as Non-root user" in the table:
https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
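To illustrate, a securityContext shaped like the following should pass the restricted checks; the runAsUser value is only an example UID and is not something this PR sets:

# Sketch only: fields required by the restricted profile, plus an optional
# explicit non-root UID (1000 is an arbitrary example, not from the manifest).
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  seccompProfile:
    type: RuntimeDefault
containers:
- name: manager
  securityContext:
    allowPrivilegeEscalation: false
    capabilities:
      drop:
        - "ALL"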