Update chaos testing guide
Browse files Browse the repository at this point in the history
This commit adds additional content to be in sync with the version
maintained in Krkn, in addition to replacing the OpenShift term with
Kubernetes.

Signed-off-by: Naga Ravi Chaitanya Elluri <[email protected]>
chaitanyaenr committed Jan 8, 2024
1 parent 02aab21 commit f685d6b
Showing 1 changed file (docs/index.md) with 60 additions and 21 deletions.
* [Cluster recovery checks, metrics evaluation and pass/fail criteria](#cluster-recovery-checks-metrics-evaluation-and-passfail-criteria)
* [Scenarios](#scenarios)
* [Test Environment Recommendations - how and where to run chaos tests](#test-environment-recommendations---how-and-where-to-run-chaos-tests)
* [Chaos testing in Practice](#chaos-testing-in-practice)
  * [OpenShift organization](#openshift-organization)
  * [startx-lab](#startx-lab)


### Introduction


### Best Practices
Now that we understand the test methodology, let us take a look at the best practices for a Kubernetes cluster. On that platform, there are user applications and cluster workloads that need to be designed for stability and to provide the best possible user experience:

- Alerts with appropriate severity should get fired.
- Alerts are key to identifying when a component starts degrading, and can help focus the investigation effort on the affected system components.
- The controller watching the component should recognize a failure as soon as possible. The component needs minimal initialization time to avoid extended downtime or overloading the replicas in a highly available configuration. Failures can be caused by issues with the underlying infrastructure, application faults, or failures in services it depends on.

- High Availability deployment strategy.
- There should be multiple replicas (both Kubernetes and application control planes) running, preferably in different availability zones, to survive outages while still serving user/system requests. Avoid single points of failure.
- Backed by persistent storage
- It is important to have the system/application backed by persistent storage. This is especially important when the application is a database or otherwise stateful, given that a node, pod, or container failure would otherwise wipe out the data.

- There should be fallback routes to the backend when using a CDN, for example, Akamai in the case of console.redhat.com - a managed service deployed on top of OpenShift Dedicated:
- Content delivery networks (CDNs) are commonly used to host resources such as images, JavaScript files, and CSS. The average web page is nearly 2 MB in size, and offloading heavy resources to third-parties is extremely effective for reducing backend server traffic and latency. However, this makes each CDN an additional point of failure for every site that relies on it. If the CDN fails, its customers could also fail.
- To test how the application reacts to failures, drop all network traffic between the system and the CDN (a rough sketch follows below). The application should still serve the content to the user irrespective of the failure.
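
For illustration, one crude way to emulate such a CDN outage is to drop egress traffic to the CDN endpoint from a node or a privileged container; the `CDN_IP` placeholder and the use of iptables here are assumptions for the sketch, not part of Kraken itself:

```bash
# Assumption: run as root on a node (or privileged pod) that routes the
# application's CDN traffic; CDN_IP is a placeholder endpoint.
CDN_IP="203.0.113.10"

# Drop all egress traffic to the CDN to simulate the outage.
iptables -A OUTPUT -d "$CDN_IP" -j DROP

# ...verify the application still serves content via its fallback route...

# Revert to the original state.
iptables -D OUTPUT -d "$CDN_IP" -j DROP
```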



### Tooling
Now that we have looked at the best practices, this section goes through how [Kraken](https://github.com/redhat-chaos/krkn) - a chaos testing framework - can help test the resilience of Kubernetes and make sure the applications and services follow the best practices.

#### Workflow
Let us start by understanding the workflow of Kraken: the user starts by pointing Kraken at a specific Kubernetes cluster using a kubeconfig, so that it can talk to the platform on top of which the cluster is hosted, via either the oc/kubectl API or the cloud API. Based on its configuration, Kraken injects specific chaos scenarios as shown below, talks to [Cerberus](https://github.com/redhat-chaos/cerberus) to get the go/no-go signal representing the overall health of the cluster (optional - can be turned off), scrapes metrics from the in-cluster Prometheus given a metrics profile with PromQL queries and stores them long term in the configured Elasticsearch (optional - can be turned off), evaluates the PromQL expressions specified in the alerts profile (optional - can be turned off), and aggregates everything to set the pass/fail result, i.e. exits 0 or 1. More about the metrics collection, Cerberus, and metrics evaluation can be found in the next section.

![Kraken workflow](../media/kraken-workflow.png)
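
As a rough illustration of that flow, a direct run looks like the following sketch; the clone URL matches the repository linked above, but the entry point, config path, and exit-code semantics should be verified against the Kraken README rather than taken as authoritative:

```bash
# A minimal sketch of a direct Kraken run (verify paths/flags against the
# krkn repository before relying on them).
git clone https://github.com/redhat-chaos/krkn.git
cd krkn
pip3 install -r requirements.txt

# Point config/config.yaml at your cluster's kubeconfig, then run.
python3 run_kraken.py --config config/config.yaml

# Kraken aggregates the scenario results, the Cerberus signal and the alert
# evaluation into a single pass/fail exit status.
echo "exit code: $? (0 = pass, non-zero = fail)"
```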


### Scenarios

Let us take a look at how to run the chaos scenarios on your Kubernetes clusters using Kraken-hub - a lightweight wrapper around Kraken that eases the runs by letting you launch container images with podman, with parameters set as environment variables. This eliminates the need to carry around and edit configuration files, and makes integration with any CI framework easy. Here are the scenarios supported (a sample invocation sketch follows the list):

- Pod Scenarios ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/pod-scenarios.md))
- Disrupts Kubernetes and applications deployed as pods:
- Helps understand the availability of the application, the initialization timing and recovery status.
- [Demo](https://asciinema.org/a/452351?speed=3&theme=solarized-dark)

- Container Scenarios ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/container-scenarios.md))
- Disrupts Kubernetes and applications deployed as containers running as part of a pod(s), using a specified kill signal to mimic failures:
- Helps understand the impact and recovery timing when the programs/processes running in the containers are disrupted - hung, paused, killed etc. - using various kill signals, i.e. SIGHUP, SIGTERM, SIGKILL etc.
- [Demo](https://asciinema.org/a/BXqs9JSGDSEKcydTIJ5LpPZBM?speed=3&theme=solarized-dark)

- [Demo](https://asciinema.org/a/ANZY7HhPdWTNaWt4xMFanF6Q5)

- Zone Outages ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/zone-outages.md))
- Creates an outage of availability zone(s) in a targeted region of the public cloud where the Kubernetes cluster is running by tweaking the network ACL of the zone to simulate the failure; this in turn stops both ingress and egress traffic from all nodes in that zone for the specified duration, and then reverts back to the previous state.
- Helps understand the impact on both the Kubernetes control plane and the applications and services running on the worker nodes in that zone.
- Currently only set up for the AWS cloud platform: 1 VPC and multiple subnets within the VPC can be specified.
- [Demo](https://asciinema.org/a/452672?speed=3&theme=solarized-dark)

- Helps understand whether the application/system components have reserved resources, so that they do not get disrupted or performance-throttled by rogue applications.
- CPU Hog ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/node-cpu-hog.md), [Demo](https://asciinema.org/a/452762))
- Memory Hog ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/node-memory-hog.md), [Demo](https://asciinema.org/a/452742?speed=3&theme=solarized-dark))

- Time Skewing ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/time-scenarios.md))
- Manipulates the system time and/or date of specific pods/nodes.
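
As referenced above, a Kraken-hub run of one of these scenarios looks roughly like the sketch below; the image path and tag, the kubeconfig mount point, and the `NAMESPACE` variable are assumptions based on the krkn-hub documentation linked above, so double-check them there:

```bash
# A sketch of a krkn-hub pod-scenarios run; parameters are passed as
# environment variables instead of configuration files.
# NAMESPACE is a hypothetical target namespace for the pod disruption.
podman run --net=host --env-host=true \
  -v "$HOME/.kube/config:/home/krkn/.kube/config:Z" \
  -e NAMESPACE="acme-payments" \
  quay.io/redhat-chaos/krkn-hub:pod-scenarios

# The container's exit code is the pass/fail signal for CI integration.
echo "exit code: $?"
```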
### Test Environment Recommendations - how and where to run chaos tests

Let us take a look at a few recommendations on how and where to run the chaos tests:
- Enable Observability:
- Chaos Engineering Without Observability ... Is Just Chaos.
- Make sure logging and monitoring are installed on the cluster to help understand the behaviour and why it is happening. When running the tests in CI, where it is not humanly possible to monitor the cluster all the time, it is recommended to leverage Cerberus to capture the state during the runs and Kraken's metrics collection to store the metrics long term, even after the cluster is gone.
- Kraken ships with dashboards that help understand API, etcd, and Kubernetes cluster-level stats and performance metrics.
- Pay attention to Prometheus alerts. Check if they are firing as expected (see the sketch after this list).

- Run multiple chaos tests at once to mimic the production outages:
- For example, hogging both IO and Network at the same time instead of running them separately to observe the impact.
- You might have existing test cases, whether related to performance, scalability, or QE. Run the chaos in the background during those test runs to observe the impact. The signaling feature in Kraken can help coordinate the chaos runs, i.e., start, stop, or pause the scenarios based on the state of the other test jobs.
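
One lightweight way to check which alerts are firing during a run is to query the Prometheus HTTP API directly; the namespace and service names below are assumptions that vary per monitoring stack:

```bash
# Forward the in-cluster Prometheus port (namespace and service names are
# assumptions - adjust them for your monitoring stack).
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090 &
sleep 2

# List the alerts that are currently firing.
curl -s http://localhost:9090/api/v1/alerts \
  | jq -r '.data.alerts[] | select(.state=="firing") | .labels.alertname'
```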


#### Chaos testing in Practice

##### OpenShift organization
Within the OpenShift organization we use kraken to perform chaos testing throughout a release before the code is available to customers.

1. We execute kraken during our regression test suite.


3. We are starting to add test cases that perform chaos testing during an upgrade (not many iterations of this have been completed).


##### startx-lab

**NOTE**: Requests for enhancements and any issues need to be filed at the links mentioned below, given that these integrations are not natively supported in Kraken.

The following content covers the implementation details around how Startx is leveraging Kraken:

* Using kraken as part of a tekton pipeline

You can find on [artifacthub.io](https://artifacthub.io/packages/search?kind=7&ts_query_web=kraken) the
[kraken-scenario](https://artifacthub.io/packages/tekton-task/startx-tekton-catalog/kraken-scenario) `tekton-task`
Expand Down Expand Up @@ -258,14 +264,47 @@ to reflect your cluster configuration. Refer to the [kraken configuration](https
and [configuration examples](https://github.com/startxfr/tekton-catalog/blob/stable/task/kraken-scenario/0.1/samples/)
for details on how to configure these resources.

* Start as a single taskrun

```bash
oc apply -f https://github.com/startxfr/tekton-catalog/raw/stable/task/kraken-scenario/0.1/samples/taskrun.yaml
```

* Start as a pipelinerun

```bash
oc apply -f https://github.com/startxfr/tekton-catalog/raw/stable/task/kraken-scenario/0.1/samples/pipelinerun.yaml
```
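
After either of the commands above, the run can be followed like this, assuming the Tekton CLI (`tkn`) is installed:

```bash
# List the chaos runs created above.
oc get taskrun,pipelinerun

# Follow the logs of the most recent taskrun (tkn CLI assumed installed).
tkn taskrun logs --last -f
```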

* Deploying kraken using a helm-chart

You can find on [artifacthub.io](https://artifacthub.io/packages/search?kind=0&ts_query_web=kraken) the
[chaos-kraken](https://artifacthub.io/packages/helm/startx/chaos-kraken) `helm-chart`
which can be used to deploy kraken chaos scenarios.

The default configuration creates the following resources:

- 1 project named **chaos-kraken**
- 1 scc with privileged context for kraken deployment
- 1 configmap with 21 generic kraken scenarios, various scripts and configuration
- 1 configmap with kubeconfig of the targeted cluster
- 1 job named kraken-test-xxx
- 1 service to the kraken pods
- 1 route to the kraken service

```bash
# Install the startx helm repository
helm repo add startx https://startxfr.github.io/helm-repository/packages/
# Install the kraken project
helm install --set project.enabled=true chaos-kraken-project startx/chaos-kraken
# Deploy the kraken instance
helm install \
--set kraken.enabled=true \
--set kraken.aws.credentials.region="eu-west-3" \
--set kraken.aws.credentials.key_id="AKIAXXXXXXXXXXXXXXXX" \
--set kraken.aws.credentials.secret="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
--set kraken.kubeconfig.token.server="https://api.mycluster:6443" \
--set kraken.kubeconfig.token.token="sha256~XXXXXXXXXX_PUT_YOUR_TOKEN_HERE_XXXXXXXXXXXX" \
-n chaos-kraken \
chaos-kraken-instance startx/chaos-kraken
```
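
Once deployed, a quick way to confirm that the chaos job is progressing is to inspect the resources listed above; the job name suffix is generated, so `kraken-test-xxx` below is a placeholder to replace with the actual name:

```bash
# Check the resources created by the chart in the chaos-kraken project.
oc -n chaos-kraken get jobs,pods,svc,route

# Follow the chaos run logs (replace with the actual job name from above).
oc -n chaos-kraken logs -f job/kraken-test-xxx
```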
