Merge pull request #560 from acend/namespace-update
Update namespace names
bliemli authored Oct 10, 2023
2 parents e7434c3 + 4e409dd commit b06fcff
Showing 2 changed files with 30 additions and 14 deletions.
@@ -5,9 +5,17 @@ weight: 95

In this lab, we are going to look at ResourceQuotas and LimitRanges. As {{% param distroName %}} users, we are most certainly going to encounter the limiting effects that ResourceQuotas and LimitRanges impose.

{{% onlyWhenNot baloise %}}
{{% alert title="Warning" color="warning" %}}
For this lab to work, it is vital that you use the namespace `<username>-quota`!
{{% /alert %}}
{{% /onlyWhenNot %}}

{{% onlyWhen baloise %}}
{{% alert title="Warning" color="warning" %}}
For this lab to work, it is vital that you use the namespace `<username>-quota-test`!
{{% /alert %}}
{{% /onlyWhen %}}


## ResourceQuotas
@@ -23,13 +31,13 @@ Defining ResourceQuotas makes sense when the cluster administrators want to have
To check for defined quotas in your Namespace, see whether there are any resources of type ResourceQuota:

```bash
{{% param cliToolName %}} get resourcequota --namespace <namespace>-quota
{{% param cliToolName %}} get resourcequota --namespace <namespace>
```

To show in detail what kinds of limits the quota imposes:

```bash
{{% param cliToolName %}} describe resourcequota <quota-name> --namespace <namespace>-quota
{{% param cliToolName %}} describe resourcequota <quota-name> --namespace <namespace>
```

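The commands above only have their target Namespace changed by this diff. For context, a ResourceQuota is itself just another API object; a minimal sketch (name and values are illustrative assumptions, not the quota actually deployed in this lab) could look like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: lab-quota               # hypothetical name
spec:
  hard:
    requests.cpu: "1"           # sum of all CPU requests in the Namespace must stay below 1 core
    requests.memory: 1Gi        # sum of all memory requests must stay below 1 GiB
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "10"                  # at most 10 Pods may exist in the Namespace
```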
{{% onlyWhenNot openshift %}}
@@ -125,14 +133,22 @@ The possibility of enforcing minimum and maximum resources and defining Resource

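For context, a LimitRange that injects such defaults could look like the following sketch. The CPU values match the `describe limitrange` output quoted in the task below; the memory values are assumptions:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: lab-limitrange          # hypothetical name
spec:
  limits:
    - type: Container
      defaultRequest:           # applied when a container defines no requests
        cpu: 10m
        memory: 16Mi
      default:                  # applied when a container defines no limits
        cpu: 100m
        memory: 32Mi            # assumed value
```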
### {{% task %}} Namespace

{{% onlyWhenNot baloise %}}
{{% alert title="Warning" color="warning" %}}
Remember to use the namespace `<username>-quota`; otherwise, this lab will not work!
{{% /alert %}}
{{% /onlyWhenNot %}}

{{% onlyWhen baloise %}}
{{% alert title="Warning" color="warning" %}}
Remember to use the namespace `<username>-quota-test`; otherwise, this lab will not work!
{{% /alert %}}
{{% /onlyWhen %}}

Analyse the LimitRange in your Namespace (there has to be one; if not, you are using the wrong Namespace):

```bash
{{% param cliToolName %}} describe limitrange --namespace <namespace>-quota
{{% param cliToolName %}} describe limitrange --namespace <namespace>
```

The command above should output this (name and Namespace will vary):
@@ -149,7 +165,7 @@ Container cpu - - 10m 100m -
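The full output is collapsed in this diff view; only the fragment above survived. Reconstructed around that fragment, the table plausibly reads as follows (the memory row is an assumption):

```
Name:       <limitrange-name>
Namespace:  <namespace>
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    10m              100m           -
Container   memory    -    -    16Mi             32Mi           -
```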
Check for the ResourceQuota in your Namespace (there has to be one; if not, you are using the wrong Namespace):

```bash
{{% param cliToolName %}} describe quota --namespace <namespace>-quota
{{% param cliToolName %}} describe quota --namespace <namespace>
```

The command above will produce an output similar to the following (name and Namespace may vary):
@@ -191,7 +207,7 @@ spec:
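The Pod definition itself is collapsed in this diff view. A minimal sketch of what `pod_stress2much.yaml` plausibly contains; the image and exact arguments are assumptions (the `--vm-bytes 85M` parameter is referenced later in the lab), and the deliberately missing `resources` section is the point of the exercise:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stress2much
spec:
  containers:
    - name: stress
      image: polinux/stress     # assumed image; the lab may use another
      command: ["stress"]
      args: ["--vm", "1", "--vm-bytes", "85M", "--vm-hang", "1"]
      # no resources section: the LimitRange's default requests and limits apply
```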
Apply this resource with:
```bash
{{% param cliToolName %}} apply -f pod_stress2much.yaml --namespace <namespace>-quota
{{% param cliToolName %}} apply -f pod_stress2much.yaml --namespace <namespace>
```

{{% alert title="Note" color="info" %}}
@@ -201,7 +217,7 @@ You have to actively terminate the following command by pressing `CTRL+c` on your keyboard.
Watch the Pod's creation with:

```bash
{{% param cliToolName %}} get pods --watch --namespace <namespace>-quota
{{% param cliToolName %}} get pods --watch --namespace <namespace>
```

You should see something like the following:
@@ -219,7 +235,7 @@ stress2much 0/1 CrashLoopBackOff 1 20s
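The watch output is collapsed in this diff view apart from the fragment above. A plausible reconstruction, consistent with the OOM kill described next (exact timings will differ):

```
NAME          READY   STATUS             RESTARTS   AGE
stress2much   0/1     Running            0          5s
stress2much   0/1     OOMKilled          0          9s
stress2much   0/1     CrashLoopBackOff   1          20s
```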
The `stress2much` Pod was OOM (out of memory) killed. We can see this in the `STATUS` field. Another way to find out why a Pod was killed is to check its status. Output the Pod's YAML definition:

```bash
{{% param cliToolName %}} get pod stress2much --output yaml --namespace <namespace>-quota
{{% param cliToolName %}} get pod stress2much --output yaml --namespace <namespace>
```

Near the end of the output you can find the relevant status part:
@@ -238,7 +254,7 @@ Near the end of the output you can find the relevant status part:
So let's look at the numbers to verify the container really had too little memory. We started the `stress` command using the parameter `--vm-bytes 85M`, which means the process wants to allocate 85 megabytes of memory. Looking again at the Pod's YAML definition with:

```bash
{{% param cliToolName %}} get pod stress2much --output yaml --namespace <namespace>-quota
{{% param cliToolName %}} get pod stress2much --output yaml --namespace <namespace>
```

reveals the following values:
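The values themselves are collapsed in this diff view. The relevant part is the container's effective `resources` stanza; based on the defaults discussed in this lab, it plausibly looks like this (the memory limit is an assumption; it merely has to be below the 85 megabytes `stress` tries to allocate):

```yaml
resources:
  limits:
    cpu: 100m
    memory: 32Mi                # assumed; anything below 85M triggers the OOM kill
  requests:
    cpu: 10m
    memory: 16Mi                # the default request applied by the LimitRange
```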
@@ -262,7 +278,7 @@ Let's fix this by recreating the Pod and explicitly setting the memory request t
First, delete the `stress2much` Pod with:

```bash
{{% param cliToolName %}} delete pod stress2much --namespace <namespace>-quota
{{% param cliToolName %}} delete pod stress2much --namespace <namespace>
```

Then create a new Pod where the requests and limits are set:
@@ -297,7 +313,7 @@ spec:
And apply this again with:
```bash
{{% param cliToolName %}} apply -f pod_stress.yaml --namespace <namespace>-quota
{{% param cliToolName %}} apply -f pod_stress.yaml --namespace <namespace>
```
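`pod_stress.yaml` is likewise collapsed in this diff view. A sketch of a working definition, with request and limit values that are assumptions chosen to comfortably fit the 85 megabytes allocated by `stress`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: stress
spec:
  containers:
    - name: stress
      image: polinux/stress     # assumed image, as above
      command: ["stress"]
      args: ["--vm", "1", "--vm-bytes", "85M", "--vm-hang", "1"]
      resources:
        requests:
          memory: 100Mi         # explicitly request more than the 85M the process allocates
        limits:
          memory: 100Mi
```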

{{% alert title="Note" color="info" %}}
@@ -339,7 +355,7 @@ spec:
```
```bash
{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace <namespace>-quota
{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace <namespace>
```

We are immediately confronted with an error message:
Expand All @@ -353,7 +369,7 @@ The default request value of 16 MiB of memory that was automatically set on the
Let's have a closer look at the quota with:

```bash
{{% param cliToolName %}} get quota --output yaml --namespace <namespace>-quota
{{% param cliToolName %}} get quota --output yaml --namespace <namespace>
```

which should output the following YAML definition:
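The full output is collapsed in this diff view. Schematically, the interesting part is the `status` stanza, which contrasts the configured quota (`hard`) with current consumption (`used`); all values below are placeholders:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: lab-quota               # hypothetical name
spec:
  hard:
    requests.memory: 128Mi      # placeholder values
    limits.memory: 256Mi
status:
  hard:
    requests.memory: 128Mi
    limits.memory: 256Mi
  used:
    requests.memory: 100Mi      # consumed by the already running stress Pod
    limits.memory: 100Mi
```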
@@ -404,7 +420,7 @@ spec:
And apply with:
```bash
{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace <namespace>-quota
{{% param cliToolName %}} apply -f pod_overbooked.yaml --namespace <namespace>
```

Even though the limits of both Pods combined overstretch the quota, the requests do not, and so the Pods are allowed to run.
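You can verify this by re-checking the quota once both Pods are running; the used values should show the summed requests while staying within the hard limits:

```bash
{{% param cliToolName %}} describe quota --namespace <namespace>
```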
content/en/docs/first-steps/_index.md
2 changes: 1 addition & 1 deletion
@@ -90,7 +90,7 @@ Authorized users inside a Project are able to manage those resources. Project na
{{% onlyWhen baloise %}}
You would usually create your first Project here using `oc new-project`.
This is, however, not possible on the provided cluster.
Instead, a Project named `<username>-training` has been pre-created for you.
Instead, a Project named `<username>-training-test` has been pre-created for you.
Use this Project for all labs in this training except for {{<link "resourcequotas-and-limitranges">}}.
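To make sure subsequent commands run against the pre-created Project, you can switch to it explicitly (a standard `oc` command, not specific to this training):

```bash
oc project <username>-training-test
```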

{{% alert title="Note" color="info" %}}