From baa0043209ac476fcd944f028610eae68343a90b Mon Sep 17 00:00:00 2001
From: Bill Maxwell
Date: Tue, 9 Aug 2022 08:08:28 -0700
Subject: [PATCH 1/2] Updated READMEs to fix grammar and syntax issues.

Signed-off-by: Bill Maxwell
---
mariadb/README.md | 37 ++++++++++++++++++-------------------
nginx/README.md | 12 ++++++++----
redis/README.md | 43 ++++++++++++++++++++++---------------------
registry/README.md | 2 +-
4 files changed, 49 insertions(+), 45 deletions(-)

diff --git a/mariadb/README.md b/mariadb/README.md
index b45f968..2a3eda6 100644
--- a/mariadb/README.md
+++ b/mariadb/README.md
@@ -12,21 +12,21 @@ This Acorn provides a multi-node galera cluster.

`acorn run [MARIADB_GALERA_IMAGE]`

-This will create a three node cluster with a default database acorn.
+This will create a three-node cluster with a default database acorn.

You can get the username and root password if needed from the generated secrets.

## Production considerations

-By default this will start a single instance of MariaDB with 1 replica on a 10GB volume from the default storage class. In a production setting you will want to customize this, along with the size and storage class of the backup volumes.
+By default, this will start a single instance of MariaDB with 1 replica on a 10GB volume from the default storage class. In a production setting, you will want to customize this, along with the size and storage class of the backup volumes.

#### TODOs

Document how to mount volumes and custom types.

-* Add a way to reset root password
+* Add a way to reset the root password
* Add a way to pass in custom backup scripts
-* Add clean up of older backups.. also limit the number kept.
+* Add clean-up of older backups. Also limit the number kept.

## Available options

@@ -51,19 +51,19 @@ Ports: mariadb-0:3306/tcp

### Accessing mariadb

-By default the Acorn creates a single replica which can be accessed via the `mariadb-0` service.
+By default, the Acorn creates a single replica that can be accessed via the `mariadb-0` service. If you are going to run in an active-active state with multiple r/w replicas you will want to expose the `mariadb` service and access that through a load balancer. ### Adding replicas -By default the MariaDB chart starts a single r/w replica. In production settings you would typically want more then one replica running. Users have two options with this chart. One method is to add additional passive followers to the primary server. When one of these passive replicas fail or experience an outage nothing happens to the running primary server. If the primary r/w replica fails then service will be down until it is restored. +By default, the MariaDB chart starts a single r/w replica. In production settings, you would typically want more than one replica running. Users have two options with this chart. One method is to add additional passive followers to the primary server. When one of these passive replicas fails or experiences an outage nothing happens to the running primary server. If the primary r/w replica fails then service will be down until it is restored. Alternatively, the Acorn can configure the replicas to run in an active-active state with multiple replicas able to perform r/w operations. #### Active-Passive replication -If you would like to run active-passive then you will need to create a custom yaml file like so: +If you would like to run active-passive then you will need to create a custom YAML file like so: config.yaml @@ -79,13 +79,12 @@ replicas: Then update your deployment: `acorn update [APP-NAME] --custom-mariadb-config @config.yaml --replicas 2` -This will startup a second replica that can be used for backups, and read-only access. +This will start up a second replica that can be used for backups, and read-only access. #### Active-Active replication -Galera clusters have a quorem algorithm to prevent split brain scenarios. 
Ideally clusters run with an odd number of replicas.
-
-By default there are three replicas running and there shouldn't be less. This allows for 1 replica to fail and still serve data. Additional replicas can be added by updating the application to the total number of replicas desired in the end state.
+Galera clusters have a quorum algorithm to prevent split-brain scenarios, so clusters should run with an odd number of replicas.
+Three replicas are running by default and there shouldn't be fewer. This allows for 1 replica to fail and still serve data. Additional replicas can be added by updating the application to the total number of replicas desired in the end state.

`acorn update [APP-NAME] --replicas 5`

@@ -155,7 +154,7 @@ Here is an example of how you could do daily backups:

If you would like to add backups to an already running cluster, you can do:
`acorn update [APP-NAME] --backup-schedule "0 0 * * *"`

-Backups are run from pod that will mount both the data volume from the `mariadb-0` replica and a separate backup volume. The job uses `mariabackup` to perform the backup of the database cluster.
+Backups are run from a pod that will mount both the data volume from the `mariadb-0` replica and a separate `mariadb-backup-vol` volume. The job uses the `mariabackup` user to perform the backup of the database cluster.

#### Listing available backups

@@ -205,14 +204,14 @@ config_block:
  key: "value"
```

-So to pass or update a setting in the `mysqld` configuration block create a config.yaml with the content:
+So to pass or update a setting in the `mysqld` configuration block, create a config.yaml with the content:

```yaml
mysqld:
  max_connections: 1024
```

-You can set per-replica configurations if needed by placing the configurations under the `replica` top level key. Each node, specified in `mariadb-\(i)` where `i` is the replica number, can have custom configuration per config block.
+You can set per-replica configurations if needed by placing the configurations under the `replica` top-level key. Each node, specified in `mariadb-\(i)` where `i` is the replica number, can have a custom configuration per config block.

```yaml
mysqld:
@@ -225,9 +224,9 @@ replicas:

Then run/update the app like so:

-`acorn run [MARIADB_GALERA_IMAGE] --custom-mariadb-confg @config.yaml`
+`acorn run [MARIADB_GALERA_IMAGE] --custom-mariadb-config @config.yaml`

-This will get merged with the configuration defined in the Acorn. the defaul config block can be found [here](https://github.com/acorn-io/acorn-library/blob/main/mariadb-galera/Acornfile#L207).
+This will get merged with the configuration defined in the Acorn. The default config block can be found [here](https://github.com/acorn-io/acorn-library/blob/main/mariadb-galera/Acornfile#L207).

Some of the configuration values can not be changed.

@@ -247,7 +246,7 @@ The clusters will come up as expected after this.

### Active - Active recovery from shutdown/quorem loss

-When a cluster is completely shutdown, or has lost a majority of the nodes you need to follow a series of manual steps to recover.
+When a cluster is completely shut down or has lost a majority of the nodes, you need to follow a series of manual steps to recover.

1.) Update the deployment with the `acorn update [APP-NAME] --recovery` flag.

@@ -260,8 +259,8 @@
mariadb-1-7d977b8fb8-f8lwx/mariadb-1: 2022-06-17 23:57:17 0 [Note] WSREP: Recovered position: 8d5f1139-ee97-11ec-b8ef-7359029eaa77:3
mariadb-2-7f49689648-6h7kf/mariadb-2: 2022-06-17 23:57:18 0 [Note] WSREP: Recovered position: 8d5f1139-ee97-11ec-b8ef-7359029eaa77:3
```

-3.) Find the node with the highest position value. In this case we can use `mariadb-1` or `mariadb-2` since they are both at 3.
+3.) Find the node with the highest position value. In this case, we can use `mariadb-1` or `mariadb-2` since they are both at 3.

-4.) Update the app so that `acorn update [APP-NAME] --recovery --force-recover --boot-strap-index 2`.
We are using `2` because it is the most advanced. If the containers have come up and you do not see "failed to update grastate.data" then the app is ready to update. +4.) Update the app so that `acorn update [APP-NAME] --recovery --force-recover --boot-strap-index 2`. We are using `2` because it is the most advanced. If the containers have come up and you do not see `failed to update grastate.data` then the app is ready to update. 5.) `acorn update [APP-NAME] --recovery=false --force-recover=false`. This will cause the containers to restart and the new boot-strap-index node will start the cluster. diff --git a/nginx/README.md b/nginx/README.md index 2190dc5..958a9cc 100644 --- a/nginx/README.md +++ b/nginx/README.md @@ -15,7 +15,7 @@ This will clone content from this site into the HTML root directory and serve it To expose this service via ingress: -`acorn run -d my-app.example.com:nginx [IMAGE] --git-repo ...` +`acorn run -p my-app.example.com:nginx [IMAGE] --git-repo ...` ### Available options @@ -35,7 +35,7 @@ Ports: nginx:80/http ### Configure custom server blocks Create a custom secret with the keys equal to the name of the file to place in `/etc/nginx/conf.d/` -The content should be a base64 encoded nginx server block. +The content should be a base64 encoded Nginx server block. When running the acorn: @@ -43,7 +43,7 @@ When running the acorn: ### Configure base configuration -Create a custom secret with that has a data key `template` with the full content of the nginx.conf file to be used. +Create a custom secret that has a data key `template` with the full content of the nginx.conf file to be used. When running the acorn pass in the secret name: @@ -53,6 +53,10 @@ When running the acorn pass in the secret name: Create a secret with the ssh keys to use. The keys must already be trusted by the remote repository. 
You can create the secret like:

-`kubectl create secret -n acorn-redis generic my-ssh-keys --from-file=/Users/me/.ssh/id_rsa`
+`acorn secret create my-ssh-keys --file=/Users/me/.ssh/id_rsa`. When you run the Acorn, bind in the secret:
+
+```shell
+acorn run -s my-ssh-keys:git-clone-ssh-keys [IMAGE]
+```

diff --git a/redis/README.md b/redis/README.md
index 261cf70..f9c39cb 100644
--- a/redis/README.md
+++ b/redis/README.md
@@ -1,15 +1,17 @@
# Redis Acorn
---
-This Acorn deploys Redis in a single leader with multiple followers or in Redis Cluster configuration.
+This Acorn deploys Redis in a single leader with multiple followers.
+
+NOTE: Redis Cluster and Sentinel support is still a work in progress.

## Quick start

-To quickly deploy a replicated Redis setup simply run the acorn:
+To quickly deploy a single Redis instance, simply run the acorn:

`acorn run `

-This will create a single Redis server and a single read only replica.
+This will create a single Redis server.

Auth will be setup, and you can obtain the password under the token via:

`acorn secret expose redis-auth-`

@@ -17,7 +19,7 @@
If you set the value in the env var REDISCLI_AUTH the `redis-cli` will automatically pick it up.
`export REDISCLI_AUTH=`

-You can connect to the Redis instance via the `redis-cli -h ` if the env var above is set you will automatically be logged in, otherwise you need to `AUTH `
+You can connect to the Redis instance via `redis-cli -h `. If the env var above is set, you will automatically be logged in; otherwise, you need to `AUTH `

### Available options

@@ -27,22 +29,21 @@
Secrets: redis-auth, redis-leader-config, redis-user-data, redis-follower-conf
Container: redis-0, redis-follower-0
Ports: redis-0:6379/tcp, redis-follower-0:6379/tcp

- --redis-follower-config string User provided configuration for leader and cluster servers
- --redis-leader-config string User provided configuration for leader and cluster servers
- --redis-leader-count int Redis leader replica count. Setting this value 3 and above will configure Redis cluster.
- --redis-password string Sets the requirepass value otherwise one is randomly generated
- --redis-replica-count int Redis replicas per leader. To run in stand alone set to 0
+ --follower-config string User provided configuration for leader and cluster servers
+ --leader-config string User provided configuration for leader and cluster servers
+ --leader int Redis leader replica count. Setting this value 3 and above will configure Redis cluster.
+ --replica int Redis replicas per leader. To run in stand alone set to 0
```

## Advanced Usage

### Stand alone/Dev mode

-You can run in stand alone mode with only a single read-write instance by setting the `--redis-replica-count` to `0`.
+You can run in stand-alone mode with only a single read-write instance by setting the `--replica-count` to `0`.

### Custom configuration

-Custom configuration can be provided for leaders and follower node types. The passed in configuration will be merged with the Acorn values. The configuration data can be passed in via `yaml` or `cue` file. It should be in the form of `key: value` pairs.
+Custom configuration can be provided for leaders and follower node types. The passed-in configuration will be merged with the Acorn values. The configuration data can be passed in via `yaml` or `cue` file. It should be in the form of `key: value` pairs. For example redis-test.yaml @@ -53,9 +54,9 @@ save: "1800 1 150 50 60 10000" ``` Can be passed like: -`acorn run --redis-leader-config @redis-test.yaml --redis-replica-count 0` +`acorn run --leader-config @redis-test.yaml --replica-count 0` -This will merge with the predefined redis config. There are some values that can not be overriden: +This will merge with the predefined Redis config. Some values can not be overridden: #### All Server Roles @@ -83,25 +84,25 @@ appendonly ### Adding additional replicas -When running in leader/follower mode you can add additional read-only replicas if needed. Update the app with `--redis-replica-count ` +When running in leader/follower mode you can add additional read-only replicas if needed. Update the app with `--replica-count ` ### Running in cluster mode -To run in cluster mode, you will need to determine how many primary and how many replicas you would like to run. You will need a minimum of 3 leader nodes to setup the cluster. Then you can specify how many replicas to back up each leader. A simple cluster with redundancy can be deployed as follows: +To run in cluster mode, you will need to determine how many primary and how many replicas you would like to run. You will need a minimum of 3 leader nodes to set up the cluster. Then you can specify how many replicas to back up each leader. A simple cluster with redundancy can be deployed as follows: -`acorn run --redis-leader-count 3 --redis-replica-count 1` +`acorn run --leader-count 3 --replica-count 1` This will create a cluster with three nodes each backed up by a single replica. This will deploy 6 containers in total. Every time you scale up a leader you will also scale up a replica. 
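As background on how a Redis Cluster spreads keys across the leaders: keys map to one of 16384 hash slots via `HASH_SLOT = CRC16(key) mod 16384`, per the Redis Cluster specification. This is generic Redis behavior, not something this Acorn configures; a minimal Python sketch of the rule, including the hash-tag exception that keeps `{tag}`-prefixed keys on one leader:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM) as used by Redis Cluster (poly 0x1021, init 0)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Return the cluster hash slot (0-16383) a key is assigned to.

    If the key contains a non-empty {hash tag}, only the tag is hashed,
    which forces related keys onto the same node.
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot, hence on the same leader.
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))
```

This matters when scaling: only whole slots move between leaders during a rebalance, and multi-key operations are only allowed on keys in the same slot.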
#### Adding additional nodes -To add additional nodes, simply change the scale of the `--redis-leader-count` to a higher number. -`acorn update --image [REDIS] [APP_NAME] --redis-leader-count 4 --redis-replica-count 1` +To add additional nodes, simply change the scale of the `--leader-count` to a higher number. +`acorn update --image [REDIS] [APP_NAME] --leader-count 4 --replica-count 1` -This will add an additional leader and replica (assuming there were 3 leaders previously). These new pods will be added to the cluster one as a leader and the other a replica of that new leader. The cluster will automatically be rebalanced once the new leader has been added. +This will add an additional leader and replica (assuming there were 3 leaders previously). These new pods will be added to the cluster one as a leader and the other as a replica of that new leader. The cluster will automatically be rebalanced once the new leader has been added. #### Removing nodes -Before removing nodes from the redis cluster you must first empty them. Nodes will be removed in descending order. Nodes are named redis-[LEADER]-[FOLLOWER] so the highest leader and all followers will be removed on the scale down operation. **Note** During normal redis cluster operations leaders and followers might switch roles. This process requires manual intervention to detach the replicas and empty any leaders. Once this is done, you can scale down the cluster. +Before removing nodes from the Redis cluster you must first empty them. Nodes will be removed in descending order. Nodes are named redis-[LEADER]-[FOLLOWER] so the highest leader and all followers will be removed on the scale-down operation. **Note** During normal Redis cluster operations leaders and followers might switch roles. This process requires manual intervention to detach the replicas and empty any leaders. Once this is done, you can scale down the cluster. -Follow REDIS docs: to learn how to empty, reshard and remove nodes. 
+Follow REDIS docs: to learn how to empty, re-shard, and remove nodes. diff --git a/registry/README.md b/registry/README.md index edc00ef..ce8296e 100644 --- a/registry/README.md +++ b/registry/README.md @@ -98,7 +98,7 @@ s3: This config blob is using data from the secret `user-secret-data`. This should be populated ahead of time: -`kubectl create secret generic my-data --type opaque --from-literal=s3accesskey=myaccesskey --from-literal=s3secretkey=mysecretkey` +`acorn secret create my-data --data s3accesskey=myaccesskey --data s3secretkey=mysecretkey` To consume this as part of the deployment run: From cebd532980a817a35c2f3904d964a272483bbc1a Mon Sep 17 00:00:00 2001 From: Bill Maxwell Date: Tue, 9 Aug 2022 08:09:45 -0700 Subject: [PATCH 2/2] Pinned versions of NGINX and Git containers. Signed-off-by: Bill Maxwell --- nginx/Acornfile | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/nginx/Acornfile b/nginx/Acornfile index 6a228ce..cf68342 100644 --- a/nginx/Acornfile +++ b/nginx/Acornfile @@ -10,7 +10,7 @@ args: { } containers: nginx: { - image: "nginx" + image: "nginx:1.23-alpine" scale: args.replicas ports: expose: "80:80/http" @@ -25,7 +25,7 @@ containers: nginx: { if args.gitRepo != "" { sidecars: { git: { - image: "alpine/git:v2.34.2" + image: "alpine/git:v2.36.2" init: true dirs: { "/var/www/html": "volume://site-data"
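The Galera recovery procedure in the first patch boils down to parsing each node's `WSREP: Recovered position: <uuid>:<seqno>` log line and bootstrapping from the node with the highest sequence number. A small Python sketch of that selection step, using the sample log lines from the README (the helper itself is illustrative, not part of the Acorn):

```python
import re

# Matches "<pod>/mariadb-<i>: ... WSREP: Recovered position: <uuid>:<seqno>"
PATTERN = re.compile(r"/mariadb-(\d+):.*Recovered position: [0-9a-f-]+:(-?\d+)")

def pick_bootstrap_index(log_lines):
    """Return (replica_index, seqno) of the most advanced Galera node.

    On a tie any of the tied nodes is safe to bootstrap from; like the
    README example, the highest replica index wins here.
    """
    best = (None, -1)
    for line in log_lines:
        match = PATTERN.search(line)
        if match:
            index, seqno = int(match.group(1)), int(match.group(2))
            if seqno >= best[1]:
                best = (index, seqno)
    return best

logs = [
    "mariadb-1-7d977b8fb8-f8lwx/mariadb-1: 2022-06-17 23:57:17 0 [Note] WSREP: Recovered position: 8d5f1139-ee97-11ec-b8ef-7359029eaa77:3",
    "mariadb-2-7f49689648-6h7kf/mariadb-2: 2022-06-17 23:57:18 0 [Note] WSREP: Recovered position: 8d5f1139-ee97-11ec-b8ef-7359029eaa77:3",
]
index, seqno = pick_bootstrap_index(logs)
# index is 2 here, so the corresponding update would be:
# acorn update [APP-NAME] --recovery --force-recover --boot-strap-index 2
```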