
manifests: Cleanup controller Deployment manifest #6

Merged (1 commit, Mar 5, 2024)

Conversation

ormergi (Contributor) commented Mar 3, 2024

Remove node-affinity comments as there is no need to manage multi-arch nodes at the moment.

Set seccompProfile to RuntimeDefault, as there is no intention to support older versions of k8s/OCP (< 1.19/4.11) at the moment.

ormergi (Contributor, Author) commented Mar 3, 2024

/cc @AlonaKaplan

@ormergi ormergi force-pushed the deploy-manager-cleanup branch 2 times, most recently from d89d4ca to a1751dd Compare March 4, 2024 09:08
      #   - key: kubernetes.io/os
      #     operator: In
      #     values:
      #     - linux
      securityContext:
        runAsNonRoot: true
AlonaKaplan (Owner) commented:

IIRC, runAsUser was required as well.
Can you please verify the Deployment can be deployed on a restricted NS?

kubectl label --dry-run=server --overwrite ns ns-name pod-security.kubernetes.io/enforce=restricted

AlonaKaplan (Owner) commented:

Also, I cannot add the comment on the relevant line so I am writing it here: the name "system" for the namespace is problematic. Please use the name suggested in the design doc, or another more specific name.

ormergi (Contributor, Author) commented Mar 4, 2024:

IIRC, runAsUser was required as well. Can you please verify the Deployment can be deployed on a restricted NS?

kubectl label --dry-run=server --overwrite ns ns-name pod-security.kubernetes.io/enforce=restricted

Yes, it works, no warning/error raised.
EDIT: From what I know, specifying runAsNonRoot ensures the container will not run as the root user (UID 0).
You can achieve the same by specifying runAsUser with a UID that is not root (non-zero).

Also, I cannot add the comment on the relevant line so I am writing it here: the name "system" for the namespace is problematic. Please use the name suggested in the design doc, or another more specific name.

Regarding the namespace, it is eventually changed by Kustomize to selfserviceoverlay-system:
https://github.com/AlonaKaplan/selfserviceoverlay/blob/main/config/default/kustomization.yaml#L2
Running make deploy creates the mentioned namespace and all related objects inside it.
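For context, the linked kustomization is what renames the namespace. A minimal sketch of such a file, assuming the standard kubebuilder layout (the real file may contain more fields, and the resource paths here are illustrative):

```yaml
# config/default/kustomization.yaml (sketch, not the actual file contents)
namespace: selfserviceoverlay-system   # all namespaced objects are moved here

resources:
- ../manager   # assumed path; kubebuilder projects typically aggregate bases here
```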

AlonaKaplan (Owner) commented:

You can achieve that by specifying runAsUser with a UID that is not root (not zero)

Yes, that's what kubevirt and kubesecondarydns do. I wonder how you created the Deployment in a restricted ns without it. I remember that in the past I've gotten a warning that runAsUser is missing.

ormergi (Contributor, Author) commented Mar 4, 2024:

It's possible because runAsNonRoot is specified.

AlonaKaplan (Owner) commented:

But you have to specify the non-root user id.

ormergi (Contributor, Author) commented:

runAsNonRoot makes sure the container won't run with UID 0; Kubernetes will validate at runtime that the container image is configured to run as a non-root user.
runAsUser is more granular in the sense that it enables specifying the exact UID.

According to the restricted pod security policy doc, k8s validates that runAsNonRoot is set to true (mandatory), and/or that runAsUser is not zero (seems optional).
See "Running as Non-root" and "Running as Non-root user" in the table:
https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
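As a sketch, a pod spec that satisfies the "restricted" profile without pinning a UID could look like the following. The field requirements follow the Pod Security Standards page linked above; the container name is illustrative:

```yaml
# Sketch: securityContext satisfying the "restricted" Pod Security Standard
# without setting runAsUser (assumes the image itself declares a non-root USER).
securityContext:
  runAsNonRoot: true            # mandatory under "restricted"
  seccompProfile:
    type: RuntimeDefault        # mandatory under "restricted"
containers:
- name: manager                 # illustrative name
  securityContext:
    allowPrivilegeEscalation: false   # also required by "restricted"
    capabilities:
      drop: ["ALL"]                   # also required by "restricted"
```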

        # seccompProfile:
        #   type: RuntimeDefault
        seccompProfile:
          type: RuntimeDefault
      containers:
      - command:
        - /manager
AlonaKaplan (Owner) commented:

Please update the image to a quay we have access to.
And also update the Makefile to push the image to this destination.

ormergi (Contributor, Author) commented:

For development I think we'd better push images directly to the cluster nodes instead of quay; it will be more efficient and enable quick iterations.

I posted a PR that introduces a cluster-sync target for that purpose; it builds and pushes images directly to the cluster nodes: #8

AlonaKaplan (Owner) commented:

General comments:

  1. terminationGracePeriodSeconds can be 1 sec. We don't have a pre-stop hook and are not waiting for anything to complete on termination.
  2. The pod and the container have a big list of labels. Can you please explain why each one of them is needed and remove the ones that aren't.

Remove node-affinity comments as there is no need to manage multi-arch nodes
at the moment.

Set seccompProfile to RuntimeDefault, as there is no intention to support older
versions of k8s/OCP (< 1.19/4.11) at the moment.

Signed-off-by: Or Mergi <[email protected]>
ormergi (Contributor, Author) commented Mar 4, 2024:

terminationGracePeriodSeconds can be 1 sec. We don't have a pre-stop hook and are not waiting for anything to complete on termination.

Done
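For reference, the change amounts to a single field in the Deployment's pod template; a sketch of its placement, assuming the usual Deployment layout:

```yaml
# Sketch: shorten the grace period (no pre-stop hook, nothing to wait for)
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 1
```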

@ormergi ormergi force-pushed the deploy-manager-cleanup branch from a1751dd to b44584b Compare March 4, 2024 16:34
ormergi (Contributor, Author) commented Mar 4, 2024:

The pod and the container have a big list of labels. Can you please explain why each one of them is needed and remove the ones that aren't.

Apart from control-plane: controller-manager, which is used to match the Deployment to its pod spec, none of these labels are mandatory.

They are mainly used for managing and filtering these objects; they can come in handy in a production cluster where there are lots of workloads of different kinds (system, app, etc.).
At this point I don't think we need them, but there is no harm in having them.
Recommended labels docs: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/

Let me know if you prefer removing them anyway.
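To illustrate the distinction, here is a sketch of the label set under discussion. Only control-plane: controller-manager is load-bearing (matched by the Deployment's selector); the app.kubernetes.io/* entries are the "recommended labels" from the linked doc, and the values shown are hypothetical for this project:

```yaml
# Sketch: required vs. purely informational labels (values are illustrative)
metadata:
  labels:
    control-plane: controller-manager           # matched by the Deployment selector
    app.kubernetes.io/name: selfserviceoverlay  # recommended, not mandatory
    app.kubernetes.io/component: manager        # recommended, not mandatory
    app.kubernetes.io/managed-by: kustomize     # recommended, not mandatory
```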

@AlonaKaplan AlonaKaplan merged commit ac4dec4 into AlonaKaplan:main Mar 5, 2024
ormergi (Contributor, Author) commented Mar 5, 2024:

@AlonaKaplan, I posted a PR to remove the 'apps.kubernetes.io' labels: #9
