A Kubernetes CRD + Controller to set resource quotas across multiple namespaces.
Kubernetes has a built-in resource quota mechanism to limit specific resources (CPU, memory, storage, etc.) per namespace. This project extends that mechanism so that a single set of resource quotas can span multiple namespaces, grouped together as a project.
The design concepts are:
- Create a CRD, `projectresourcequotas.jenting.io`, to define the per-project resource quotas.
- The user creates `projectresourcequotas.jenting.io` CRs with namespaces + resource quota limits (a sketch of such a CR is shown after this list).
- Have a controller that calculates the current resource usage and updates it in the `projectresourcequotas.jenting.io` CR status.
- Have admission webhooks that reject Kubernetes resource creation/modification if `current resource usage + requested resource > project resource quota limit`. The supported Kubernetes resources are:
  - ConfigMap
  - PersistentVolumeClaim
  - Pod
  - ReplicationController
  - ResourceQuota
  - Secret
  - Service
- Have an admission webhook that rejects:
  - ProjectResourceQuota CR creation if a namespace is already configured in another ProjectResourceQuota CR.
  - ProjectResourceQuota CR modification if `current resource usage > updated project resource quota limit`.
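For illustration, here is a sketch of what such a CR might look like. The field names (`spec.namespaces`, `spec.hard`, `status.used`) are inferred from the rest of this README, and the `apiVersion` is an assumption; see `config/samples/` for the actual schema:

```yaml
# Sketch of a ProjectResourceQuota CR (field names inferred from this
# README; apiVersion jenting.io/v1 is an assumption).
apiVersion: jenting.io/v1
kind: ProjectResourceQuota
metadata:
  name: projectresourcequota-sample
spec:
  # Namespaces that belong to this project.
  namespaces:
    - default
    - staging
  # Per-project limits, using the resource names from the table below.
  hard:
    pods: "10"
    configmaps: "20"
    requests.cpu: "1"
    limits.memory: 2Gi
status:
  # Maintained by the controller: current usage summed across all
  # namespaces in the project.
  used:
    pods: "3"
    configmaps: "5"
    requests.cpu: 500m
    limits.memory: 512Mi
```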
The `projectresourcequotas.jenting.io` CR supports the following resource quotas:
| Resource Name | Description |
|---|---|
| configmaps | The total number of ConfigMaps within the project cannot exceed this value. |
| persistentvolumeclaims | The total number of PersistentVolumeClaims within the project cannot exceed this value. |
| pods | Across all pods in a non-terminal state (`.status.phase` not in (`Failed`, `Succeeded`)) within the project, the total number of Pods cannot exceed this value. |
| requests.cpu | Across all pods in a non-terminal state within the project, the sum of CPU requests cannot exceed this value. Note that it requires every incoming container to make an explicit `requests.cpu`. |
| requests.memory | Across all pods in a non-terminal state within the project, the sum of memory requests cannot exceed this value. It requires every incoming container to make an explicit `requests.memory`. |
| requests.storage | Across all PersistentVolumeClaims within the project, the sum of storage requests cannot exceed this value. |
| requests.ephemeral-storage | Across all pods in the project, the sum of local ephemeral storage requests cannot exceed this value. |
| cpu | Same as `requests.cpu`. |
| memory | Same as `requests.memory`. |
| storage | Same as `requests.storage`. |
| ephemeral-storage | Same as `requests.ephemeral-storage`. |
| limits.cpu | Across all pods in a non-terminal state within the project, the sum of CPU limits cannot exceed this value. It requires every incoming container to make an explicit `limits.cpu`. |
| limits.memory | Across all pods in a non-terminal state within the project, the sum of memory limits cannot exceed this value. It requires every incoming container to make an explicit `limits.memory`. |
| limits.ephemeral-storage | Across all pods in the project, the sum of local ephemeral storage limits cannot exceed this value. |
| replicationcontrollers | The total number of ReplicationControllers within the project cannot exceed this value. |
| resourcequotas | The total number of ResourceQuotas within the project cannot exceed this value. |
| secrets | The total number of Secrets within the project cannot exceed this value. |
| services | The total number of Services within the project cannot exceed this value. |
| services.loadbalancers | The total number of Services of type LoadBalancer within the project cannot exceed this value. |
| services.nodeports | The total number of Services of type NodePort within the project cannot exceed this value. |
Note: the supported resource quotas mirror the built-in per-namespace ResourceQuota resource names, but usage is counted across all namespaces in the project.
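To make the admission rule concrete, here is a hypothetical check for `requests.cpu` against a hard limit of 1 CPU (the numbers are made up for illustration):

```yaml
# Project state (excerpt of a ProjectResourceQuota CR):
spec:
  hard:
    requests.cpu: "1"     # 1 CPU = 1000m
status:
  used:
    requests.cpu: 800m
# Webhook decision for an incoming pod:
#   requests.cpu: 300m  ->  800m + 300m = 1100m > 1000m   ->  rejected
#   requests.cpu: 200m  ->  800m + 200m = 1000m <= 1000m  ->  admitted
```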
You'll need a Kubernetes cluster to run against. You can use kind to get a local cluster for testing, or run against a remote cluster.
Note: your controller will automatically use the current context in your kubeconfig file (i.e., whatever cluster `kubectl cluster-info` shows).
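For example, with kind you can bring up a minimal single-node cluster; the config below is a hypothetical `kind.yaml` (a plain `kind create cluster` with no config also works):

```yaml
# Minimal kind cluster config (hypothetical kind.yaml).
# Create the cluster with: kind create cluster --config kind.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
```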
- Install cert-manager:

  ```sh
  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml
  ```
- Install the CRDs into the cluster:

  ```sh
  make install
  ```
- Build and push your image to the location specified by `IMG`:

  ```sh
  make docker-build docker-push IMG=<some-registry>/resourcequota:tag
  ```
- Deploy the controller to the cluster with the image specified by `IMG`:

  ```sh
  make deploy IMG=<some-registry>/resourcequota:tag
  ```
- View the `projectresourcequotas.jenting.io` CRs:

  ```sh
  kubectl get prq
  ```
- Install the CRs:

  ```sh
  kubectl apply -f config/samples/_v1_projectresourcequota.yaml
  ```
- Install another CR. Verify that the creation fails because the namespace is already configured in another CR:

  ```sh
  kubectl apply -f config/samples/_v2_projectresourcequota.yaml
  ```
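The rejection comes from the validating webhook described above: the second CR claims a namespace that another CR already owns. A hypothetical conflicting CR (not necessarily the contents of the `_v2_` sample) might look like:

```yaml
# Hypothetical CR that the webhook rejects: "default" is already
# configured in projectresourcequota-sample.
apiVersion: jenting.io/v1
kind: ProjectResourceQuota
metadata:
  name: another-projectresourcequota
spec:
  namespaces:
    - default   # conflict: claimed by another ProjectResourceQuota CR
  hard:
    pods: "5"
```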
- Install the Pods. Verify that the second Pod creation fails because `resource request + used > hard limit`:

  ```sh
  kubectl apply -f config/samples/pod-nginx.yaml
  kubectl apply -f config/samples/pod-busybox.yaml
  ```
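Since `requests.*`/`limits.*` quotas require every incoming container to declare explicit requests and limits, the sample pods must set them; a hypothetical pod in that shape (illustrative, not the exact repo sample) looks like:

```yaml
# Hypothetical pod with explicit requests/limits, as required once
# cpu/memory quotas are set on the project (not the exact repo sample).
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
spec:
  containers:
    - name: nginx
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 200m
          memory: 256Mi
```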
- Remove the `default` namespace from the `projectresourcequotas.jenting.io` CR. Verify that the CR's `status.used` reflects the current usage:

  ```sh
  # get the current projectresourcequota-sample
  kubectl get prq projectresourcequota-sample
  # remove the default namespace from spec.namespaces
  kubectl edit prq projectresourcequota-sample
  # check that projectresourcequota-sample is updated
  kubectl get prq projectresourcequota-sample
  ```
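After the edit, `status.used` should only count resources in the namespaces that remain in the project; a hypothetical excerpt (actual values depend on what is running):

```yaml
# Hypothetical status.used after removing "default" from spec.namespaces:
# resources in the default namespace no longer count toward the project.
status:
  used:
    pods: "0"        # was "2" while default was still part of the project
    configmaps: "1"  # only ConfigMaps in the remaining namespaces
```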
Note: we don't support calculating the usage of Kubernetes resources that already exist before the ProjectResourceQuota CR is configured. This means pre-existing Kubernetes resources are not counted against, or limited by, a new ProjectResourceQuota CR.
- Undeploy the resources from the cluster:

  ```sh
  make undeploy
  ```
- Uninstall the CRDs from the cluster:

  ```sh
  make uninstall
  ```
This project aims to follow the Kubernetes Operator pattern.
It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.