diff --git a/content/en/docs/additional-concepts/resourcequotas-and-limitranges/_index.md b/content/en/docs/additional-concepts/resourcequotas-and-limitranges/_index.md
index 4de9cae8..4c797322 100644
--- a/content/en/docs/additional-concepts/resourcequotas-and-limitranges/_index.md
+++ b/content/en/docs/additional-concepts/resourcequotas-and-limitranges/_index.md
@@ -5,9 +5,17 @@ weight: 95
 
 In this lab, we are going to look at ResourceQuotas and LimitRanges. As {{% param distroName %}} users, we are most certainly going to encounter the limiting effects that ResourceQuotas and LimitRanges impose.
 
+{{% onlyWhenNot baloise %}}
 {{% alert title="Warning" color="warning" %}}
 For this lab to work it is vital that you use the namespace `-quota`!
 {{% /alert %}}
+{{% /onlyWhenNot %}}
+
+{{% onlyWhen baloise %}}
+{{% alert title="Warning" color="warning" %}}
+For this lab to work it is vital that you use the namespace `-quota-test`!
+{{% /alert %}}
+{{% /onlyWhen %}}
 
 ## ResourceQuotas
 
@@ -23,13 +31,13 @@ Defining ResourceQuotas makes sense when the cluster administrators want to have
 In order to check for defined quotas in your Namespace, simply see if there are any of type ResourceQuota:
 
 ```bash
-{{% param cliToolName %}} get resourcequota --namespace -quota
+{{% param cliToolName %}} get resourcequota --namespace
 ```
 
 To show in detail what kinds of limits the quota imposes:
 
 ```bash
-{{% param cliToolName %}} describe resourcequota --namespace -quota
+{{% param cliToolName %}} describe resourcequota --namespace
 ```
 
 {{% onlyWhenNot openshift %}}
@@ -125,14 +133,22 @@ The possibility of enforcing minimum and maximum resources and defining Resource
 
 ### {{% task %}} Namespace
 
+{{% onlyWhenNot baloise %}}
 {{% alert title="Warning" color="warning" %}}
 Remember to use the namespace `-quota`, otherwise this lab will not work!
 {{% /alert %}}
+{{% /onlyWhenNot %}}
+
+{{% onlyWhen baloise %}}
+{{% alert title="Warning" color="warning" %}}
+Remember to use the namespace `-quota-test`, otherwise this lab will not work!
+{{% /alert %}}
+{{% /onlyWhen %}}
 
 Analyse the LimitRange in your Namespace (there has to be one, if not you are using the wrong Namespace):
 
 ```bash
-{{% param cliToolName %}} describe limitrange --namespace -quota
+{{% param cliToolName %}} describe limitrange --namespace
 ```
 
 The command above should output this (name and Namespace will vary):
@@ -149,7 +165,7 @@ Container cpu - - 10m 100m -
 Check for the ResourceQuota in your Namespace (there has to be one, if not you are using the wrong Namespace):
 
 ```bash
-{{% param cliToolName %}} describe quota --namespace -quota
+{{% param cliToolName %}} describe quota --namespace
 ```
 
 The command above will produce an output similar to the following (name and namespace may vary)
@@ -191,7 +207,7 @@ spec:
 Apply this resource with:
 
 ```bash
-{{% param cliToolName %}} apply -f pod_stress2much.yaml --namespace -quota
+{{% param cliToolName %}} apply -f pod_stress2much.yaml --namespace
 ```
 
 {{% alert title="Note" color="info" %}}
@@ -201,7 +217,7 @@ You have to actively terminate the following command pressing `CTRL+c` on your k
 Watch the Pod's creation with:
 
 ```bash
-{{% param cliToolName %}} get pods --watch --namespace -quota
+{{% param cliToolName %}} get pods --watch --namespace
 ```
 
 You should see something like the following:
@@ -219,7 +235,7 @@ stress2much 0/1 CrashLoopBackOff 1 20s
 The `stress2much` Pod was OOM (out of memory) killed. We can see this in the `STATUS` field. Another way to find out why a Pod was killed is by checking its status. Output the Pod's YAML definition:
 
 ```bash
-{{% param cliToolName %}} get pod stress2much --output yaml --namespace -quota
+{{% param cliToolName %}} get pod stress2much --output yaml --namespace
 ```
 
 Near the end of the output you can find the relevant status part:
@@ -238,7 +254,7 @@ Near the end of the output you can find the relevant status part:
 So let's look at the numbers to verify the container really had too little memory. We started the `stress` command using the parameter `--vm-bytes 85M` which means the process wants to allocate 85 megabytes of memory. Again looking at the Pod's YAML definition with:
 
 ```bash
-{{% param cliToolName %}} get pod stress2much --output yaml --namespace -quota
+{{% param cliToolName %}} get pod stress2much --output yaml --namespace
 ```
 
 reveals the following values:
@@ -262,7 +278,7 @@ Let's fix this by recreating the Pod and explicitly setting the memory request t
 First, delete the `stress2much` pod with:
 
 ```bash
-{{% param cliToolName %}} delete pod stress2much --namespace -quota
+{{% param cliToolName %}} delete pod stress2much --namespace
 ```
 
 Then create a new Pod where the requests and limits are set:
@@ -297,7 +313,7 @@ spec:
 And apply this again with:
 
 ```bash
-{{% param cliToolName %}} apply -f pod_stress.yaml --namespace -quota
+{{% param cliToolName %}} apply -f pod_stress.yaml --namespace
 ```
 
 {{% alert title="Note" color="info" %}}
@@ -339,7 +355,7 @@ spec:
 ```
 
 ```bash
-{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace -quota
+{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace
 ```
 
 We are immediately confronted with an error message:
@@ -353,7 +369,7 @@ The default request value of 16 MiB of memory that was automatically set on the
 Let's have a closer look at the quota with:
 
 ```bash
-{{% param cliToolName %}} get quota --output yaml --namespace -quota
+{{% param cliToolName %}} get quota --output yaml --namespace
 ```
 
 which should output the following YAML definition:
@@ -404,7 +420,7 @@ spec:
 And apply with:
 
 ```bash
-{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace -quota
+{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace
 ```
 
 Even though the limits of both Pods combined overstretch the quota, the requests do not and so the Pods are allowed to run.
diff --git a/content/en/docs/first-steps/_index.md b/content/en/docs/first-steps/_index.md
index f44a5e00..abbbbb1f 100644
--- a/content/en/docs/first-steps/_index.md
+++ b/content/en/docs/first-steps/_index.md
@@ -90,7 +90,7 @@ Authorized users inside a Project are able to manage those resources. Project na
 {{% onlyWhen baloise %}}
 You would usually create your first Project here using `oc new-project`.
 This is, however, not possible on the provided cluster.
-Instead, a Project named `-training` has been pre-created for you.
+Instead, a Project named `-training-test` has been pre-created for you.
 Use this Project for all labs in this training except for {{}}.
 
 {{% alert title="Note" color="info" %}}
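As a point of reference, below is a minimal sketch of the kind of LimitRange and ResourceQuota objects the lab above describes. The object names and the quota figures are illustrative assumptions, not the objects deployed on the training cluster; only the CPU defaults (request `10m`, limit `100m`) and the `16Mi` default memory request correspond to values quoted in the lab text.

```yaml
# Sketch only: names and quota figures are assumptions, not the training cluster's actual objects.
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limitrange
spec:
  limits:
    - type: Container
      defaultRequest:   # injected into containers that define no requests
        cpu: 10m        # matches the 10m CPU default request shown in the lab output
        memory: 16Mi    # matches the 16Mi default memory request mentioned in the lab text
      default:          # injected into containers that define no limits
        cpu: 100m       # matches the 100m CPU default limit shown in the lab output
        memory: 32Mi    # assumed value
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota
spec:
  hard:                 # namespace-wide ceilings; the figures below are assumed
    requests.cpu: 100m
    requests.memory: 100Mi
    limits.cpu: "1"
    limits.memory: 200Mi
```

Applied to a namespace, the LimitRange injects default requests and limits into containers that do not set their own, while the ResourceQuota caps the summed requests and limits of all Pods in that namespace, which is the admission behaviour the `stress2much` and `overbooked` Pods run into in the lab.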