From f163f90c877f88697eeda0ecc972ca8d36e4845e Mon Sep 17 00:00:00 2001 From: Ilia Dmitriev Date: Sun, 3 Nov 2024 22:46:12 +0300 Subject: [PATCH] formatting --- GPG.md | 33 +- Kubernetes/README.md | 7 +- Kubernetes/kubectl.md | 790 +++++++++++++++++++++++++---------------- Kubernetes/minikube.md | 46 ++- s3/S3.md | 6 + sa/all-db.md | 13 +- sa/asyncpg.md | 9 +- zsh/zsh_install.md | 7 +- 8 files changed, 563 insertions(+), 348 deletions(-) diff --git a/GPG.md b/GPG.md index a3cbd32..e70f0b6 100644 --- a/GPG.md +++ b/GPG.md @@ -5,34 +5,43 @@ this manual for macOS (tested on Big Sur) ## installation 1. install Gnu Version of GPG for macOS + ```shell brew install gnupg brew install pinentry-mac ``` 1.1. Find pinentry-mac installation path + ```shell -% which pinentry-mac +% which pinentry-mac /opt/homebrew/bin/pinentry-mac ``` + 1.2. Setup pinentry + ```shell cat >> ~/.gnupg/gpg-agent.conf << EOF_ pinentry-program /opt/homebrew/bin/pinentry-mac EOF_ ``` + 1.3. restart `gpg-agent` + ```shell killall -1 gpg-agent ``` 2. Generate private and public part of key + ```shell gpg --gen-key ``` + Enter pass phrase twice (should leave it empty in order PyCharm work) 3. Print out keys list + ```shell gpg --list-secret-keys --keyid-format LONG ``` @@ -47,13 +56,15 @@ ssb rsa3072/318B84706760C3D7 2021-03-04 [E] [ годен до: 2023-03-04] ``` 4. add this key to git config -`git config --global user.signingkey ` + `git config --global user.signingkey ` + ```shell git config --global user.signingkey 5B34A94A1139F75E ``` 5. add variable GPG_TTY to your shell -this variable points to your tty + this variable points to your tty + ```shell echo 'export GPG_TTY=$(tty)' >> ~/.zshrc source ~/.zshrc @@ -61,29 +72,33 @@ env | grep GPG_TTY ``` 6. print out public key part + ```shell gpg --armor --export your.email.address@gmail.com ``` 7. 
add public key part to github setting
-* https://github.com/settings/keys
-* press `New GPG key` button
-* paste your public key and press `Add GPG key`
+- https://github.com/settings/keys
+- press `New GPG key` button
+- paste your public key and press `Add GPG key`

## git command line usage

1. use `-S` option to sign your commit explicitly
+
```shell
git commit -S -m 'Signed commit'
```

2. show signature in commit log
+
```shell
git log --show-signature
```

3. show signature of a given commit
+
```shell
git show --show-signature
```
@@ -91,6 +106,7 @@ git show --show-signature

## PyCharm/IntelliJ IDEA

1. disable tty and activate agent daemon
+
```shell
cat >> ~/.gnupg/pgp.conf << _EOF
no-tty
@@ -99,24 +115,29 @@

2. to cache key pass phrase run once
+
```shell
gpg --status-fd=2 -bsau << _EOF
Test message
_EOF
```
+
Enter pass phrase

3. make key trustable
+
```shell
(echo 5; echo y; echo save) | gpg --command-fd 0 --no-tty --no-greeting -q --edit-key trust
```

4. remove password from key
+
```shell
(echo save) | gpg --command-fd 0 --no-tty --no-greeting -q --edit-key passwd
```

5. enable all commit signing
+
```shell
git config --global commit.gpgsign true
```
diff --git a/Kubernetes/README.md b/Kubernetes/README.md
index 0a34284..5a518a6 100644
--- a/Kubernetes/README.md
+++ b/Kubernetes/README.md
@@ -1,8 +1,9 @@
# Kubernetes how-to

-1. [Minikube](minikube.md)
-2. [Kubernetes](kubectl.md)
+1. [Minikube](minikube.md#Contents)
+2. [Kubernetes](kubectl.md#kubectl)
3. [Helm](helm.md)
4. [Deploy RabbitMQ to Kubernetes](RabbitMQ_Kubernetes.md)
5. [Deploy Redis to Kubernetes](Redis_Kubernetes.md)
-6. [Deploy Postgres to Kubernetes](postgres_kubernetes.md)
\ No newline at end of file
+6. 
[Deploy Postgres to Kubernetes](postgres_kubernetes.md) + diff --git a/Kubernetes/kubectl.md b/Kubernetes/kubectl.md index 51f877d..c9dea66 100644 --- a/Kubernetes/kubectl.md +++ b/Kubernetes/kubectl.md @@ -2,105 +2,109 @@ - [Namespaces](#namespaces) - [Workloads](#workloads) - * [Deployments](#deployments) - + [Get all deployments](#get-all-deployments) - + [Create a new deployment](#create-a-new-deployment) - + [Edit deployment](#edit-deployment) - + [Delete deployment](#delete-deployment) - * [Pods](#pods) - + [Get pods](#get-pods) - + [Execute shell command inside a pod](#execute-shell-command-inside-a-pod) - + [Watch logs of a pod](#watch-logs-of-a-pod) - + [Create interactive temporary pod with shell](#create-interactive-temporary-pod-with-shell) - * [StatefulSet](#statefulset) - + [Create StatefulSet](#create-statefulset) - + [Network Identities of Stateful Set](#network-identities-of-stateful-set) - + [Examples of Stateful Set](#examples-of-stateful-set) + - [Deployments](#deployments) + - [Get all deployments](#get-all-deployments) + - [Create a new deployment](#create-a-new-deployment) + - [Edit deployment](#edit-deployment) + - [Delete deployment](#delete-deployment) + - [Pods](#pods) + - [Get pods](#get-pods) + - [Execute shell command inside a pod](#execute-shell-command-inside-a-pod) + - [Watch logs of a pod](#watch-logs-of-a-pod) + - [Create interactive temporary pod with shell](#create-interactive-temporary-pod-with-shell) + - [StatefulSet](#statefulset) + - [Create StatefulSet](#create-statefulset) + - [Network Identities of Stateful Set](#network-identities-of-stateful-set) + - [Examples of Stateful Set](#examples-of-stateful-set) - [Control commands](#control-commands) - * [Apply](#apply) - * [Delete](#delete) - * [Edit](#edit) - * [Patch](#patch) + - [Apply](#apply) + - [Delete](#delete) + - [Edit](#edit) + - [Patch](#patch) - [ConfigMaps](#configmaps) - * [Get ConfigMap](#get-configmap) - * [Create ConfigMap](#create-configmap) - * [Edit 
ConfigMaps](#edit-configmaps) - * [Delete ConfigMaps](#delete-configmaps) - * [Using ConfigMaps](#using-configmaps) - + [Environment variables from ConfigMap with envFrom](#environment-variables-from-configmap-with-envfrom) - + [Single environment variable from ConfigMap with valueFrom](#single-environment-variable-from-configmap-with-valuefrom) - + [Mount ConfigMap as a volume directory](#mount-configmap-as-a-volume-directory) - + [Mount single file from ConfigMap](#mount-single-file-from-configmap) + - [Get ConfigMap](#get-configmap) + - [Create ConfigMap](#create-configmap) + - [Edit ConfigMaps](#edit-configmaps) + - [Delete ConfigMaps](#delete-configmaps) + - [Using ConfigMaps](#using-configmaps) + - [Environment variables from ConfigMap with envFrom](#environment-variables-from-configmap-with-envfrom) + - [Single environment variable from ConfigMap with valueFrom](#single-environment-variable-from-configmap-with-valuefrom) + - [Mount ConfigMap as a volume directory](#mount-configmap-as-a-volume-directory) + - [Mount single file from ConfigMap](#mount-single-file-from-configmap) - [Secrets](#secrets) - * [Types of Secret](#types-of-secret) - * [Opaque secrets](#opaque-secrets) - * [Docker registry auth data](#docker-registry-auth-data) - * [TLS secret](#tls-secret) - * [Using secrets](#using-secrets) - + [As a mount from Pod](#as-a-mount-from-pod) - + [As a mount from Pod to a specific key path](#as-a-mount-from-pod-to-a-specific-key-path) - + [Using Secrets as environment variables](#using-secrets-as-environment-variables) - + [Environment variables from a Secret's specific key](#environment-variables-from-a-secret-s-specific-key) + - [Types of Secret](#types-of-secret) + - [Opaque secrets](#opaque-secrets) + - [Docker registry auth data](#docker-registry-auth-data) + - [TLS secret](#tls-secret) + - [Using secrets](#using-secrets) + - [As a mount from Pod](#as-a-mount-from-pod) + - [As a mount from Pod to a specific key 
path](#as-a-mount-from-pod-to-a-specific-key-path) + - [Using Secrets as environment variables](#using-secrets-as-environment-variables) + - [Environment variables from a Secret's specific key](#environment-variables-from-a-secret-s-specific-key) - [Labels](#labels) - * [Get resource labels](#get-resource-labels) - * [Use labels for filtering resources](#use-labels-for-filtering-resources) - + [Filtering condition operations](#filtering-condition-operations) - * [Add labels to pod](#add-labels-to-pod) - * [Remove labels from pod](#remove-labels-from-pod) - * [Using labels as a resource selectors](#using-labels-as-a-resource-selectors) - + [Matching condition types](#matching-condition-types) + - [Get resource labels](#get-resource-labels) + - [Use labels for filtering resources](#use-labels-for-filtering-resources) + - [Filtering condition operations](#filtering-condition-operations) + - [Add labels to pod](#add-labels-to-pod) + - [Remove labels from pod](#remove-labels-from-pod) + - [Using labels as a resource selectors](#using-labels-as-a-resource-selectors) + - [Matching condition types](#matching-condition-types) - [Service Accounts](#service-accounts) - * [Managing ServiceAccounts](#managing-serviceaccounts) - + [Get Service Accounts](#get-service-accounts) - + [Create Service Accounts](#create-service-accounts) - * [Service Account Usage](#service-account-usage) - - [Role and RoleBinding](#role-and-rolebinding) - * [Kubernetes APIs](#kubernetes-apis) - + [Patch resource using API](#patch-resource-using-api) + - [Managing ServiceAccounts](#managing-serviceaccounts) + - [Get Service Accounts](#get-service-accounts) + - [Create Service Accounts](#create-service-accounts) + - [Service Account Usage](#service-account-usage) + - [Role and RoleBinding](#role-and-rolebinding) + - [Kubernetes APIs](#kubernetes-apis) + - [Patch resource using API](#patch-resource-using-api) - [Nodes](#nodes) - * [Cordon and uncordon node](#cordon-and-uncordon-node) - * [Safely Drain a 
Node](#safely-drain-a-node) - * [Pod Affinity and anti-affinity](#pod-affinity-and-anti-affinity) + - [Cordon and uncordon node](#cordon-and-uncordon-node) + - [Safely Drain a Node](#safely-drain-a-node) + - [Pod Affinity and anti-affinity](#pod-affinity-and-anti-affinity) - [Deploying](#deploying) - * [Rollout](#rollout) - + [Rollout restart](#rollout-restart) - + [Partial deployment](#partial-deployment) - + [Rollback to previous revision](#rollback-to-previous-revision) - * [Scaling](#scaling) - + [Manual scaling](#manual-scaling) - + [Autoscale](#autoscale) + - [Rollout](#rollout) + - [Rollout restart](#rollout-restart) + - [Partial deployment](#partial-deployment) + - [Rollback to previous revision](#rollback-to-previous-revision) + - [Scaling](#scaling) + - [Manual scaling](#manual-scaling) + - [Autoscale](#autoscale) - [Units](#units) - [Horizontal pod autoscaler](#horizontal-pod-autoscaler) - [Network](#network) - * [Services](#services) - + [Get services](#get-services) - + [Create ClusterIP](#create-clusterip) - + [Create nodePort](#create-nodeport) - + [Create LoadBalancer](#create-loadbalancer) - * [Ingress](#ingress) - + [Create ingress](#create-ingress) + - [Services](#services) + - [Get services](#get-services) + - [Create ClusterIP](#create-clusterip) + - [Create nodePort](#create-nodeport) + - [Create LoadBalancer](#create-loadbalancer) + - [Ingress](#ingress) + - [Create ingress](#create-ingress) - [Resources](#resources) # Namespaces List namespaces + ```shell kubectl get namespaces -# or +# or kubectl get ns ``` Create a new namespace named `namespace-name` + ```shell kubectl create ns namespace-name ``` Delete namespace named `namespace-name` + ```shell kubectl delete ns namespace-name ``` Set current namespace used in context + ```shell kubectl config set-context --current --namespace= # check @@ -111,14 +115,15 @@ kubectl config view --minify | grep namespace: Kubernetes provides several built-in workload resources: -* `Deployment` a 
higher-level concept of `ReplicaSet` -* `StatefulSet` -* `DaemonSet` -* `Job` and `CronJob` +- `Deployment` a higher-level concept of `ReplicaSet` +- `StatefulSet` +- `DaemonSet` +- `Job` and `CronJob` ## Deployments ### Get all deployments + ```shell # get all deployments in current namespace kubectl get deployments @@ -136,6 +141,7 @@ kubectl get deployment nginx Let's create deployment of nginx:1.20-alpine image of 3 replicas with selector `app=nginx` with port 80 inside container `nginx-deployment.yaml` file: + ```yaml apiVersion: apps/v1 kind: Deployment @@ -154,10 +160,10 @@ spec: app: nginx spec: containers: - - name: nginx - image: nginx:1.20-alpine - ports: - - containerPort: 80 + - name: nginx + image: nginx:1.20-alpine + ports: + - containerPort: 80 ``` ```shell @@ -165,31 +171,37 @@ kubectl apply -f nginx-deployment.yaml ``` An alternative way to create this deployment with command line + ```shell kubectl create deployment deployment-nginx \ --image=nginx:1.20-alpine --replicas=3 --port 80 ``` + but in this case selector label will be created by the name of deployment `deployment-nginx`. 
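As a sketch (assuming recent kubectl defaults; field order and omitted defaults may differ by version), adding `--dry-run=client -o yaml` to that `create` command prints the manifest it would submit instead of applying it:

```yaml
# printed by (nothing is created in the cluster):
# kubectl create deployment deployment-nginx \
#   --image=nginx:1.20-alpine --replicas=3 --port 80 \
#   --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-nginx
  name: deployment-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deployment-nginx
  template:
    metadata:
      labels:
        app: deployment-nginx
    spec:
      containers:
        - image: nginx:1.20-alpine
          name: nginx
          ports:
            - containerPort: 80
```

This is a convenient way to bootstrap a yaml file you then edit and `kubectl apply`.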
Also you can add `--dry-run=client` and `-o yaml` to a `create` command to output the yaml template

Check deployment is created
+
```shell
-# get deployment by name 
+# get deployment by name
kubectl get deploy nginx-deployment
# get associated replicasets by selector app=nginx
-kubectl get rs -l app=nginx 
-# get pods by selector 
+kubectl get rs -l app=nginx
+# get pods by selector
kubectl get po -l app=nginx
```

### Edit deployment

-Edit deployment using standard editor 
+Edit deployment using the standard editor
+
```shell
kubectl edit deploy nginx-deployment
```
+
using nano
+
```shell
KUBE_EDITOR="nano" kubectl edit deploy nginx-deployment
```
@@ -209,7 +221,7 @@ kubectl delete deploy nginx-deployment
kubectl get pods
# get all pods in all namespaces
kubectl get po --all-namespaces
-# get all pods in designated namespace 
+# get all pods in designated namespace
kubectl get po -n namespace
# get all pods by selector label
kubectl get po -l app=nginx
@@ -217,7 +229,8 @@ kubectl get po -l app=nginx
kubectl get po pod-name
```

-Show labels of pods 
+Show labels of pods
+
```shell
kubectl get po --show-labels
```
@@ -225,6 +238,7 @@ kubectl get po --show-labels
### Execute shell command inside a pod

Interactively run shell `sh` inside the `pod-name` pod
+
```shell
kubectl exec -ti pod-name -- sh
```
@@ -232,14 +246,17 @@ kubectl exec -ti pod-name -- sh
### Watch logs of a pod

watch logs of the `pod-name` pod
+
```shell
kubectl logs -f pod-name
```

you can specify a deployment
+
```shell
kubectl logs -f deployment/nginx-deployment
```
+
then kubernetes will choose one of the pods available for this deployment

### Create interactive temporary pod with shell
@@ -248,17 +265,19 @@ then kubernetes will choose one of pods available for this deployment
kubectl run test-pod --image=nginx:1.20-alpine --rm -it \
  --restart=Never -- sh
```
-* test-pod - name of temporary pod
-* nginx:1.20-alpine - image
+
+- test-pod - name of the temporary pod
+- nginx:1.20-alpine - image

## StatefulSet

StatefulSet is the 
workload API object used to manage stateful applications, that: -* provide guarantees about ordering and uniqueness of pods -* provide unique network identifiers for pods -* have persistent storage -* have graceful deployment and scaling -* provide ordered, automated rolling updates + +- provide guarantees about ordering and uniqueness of pods +- provide unique network identifiers for pods +- have persistent storage +- have graceful deployment and scaling +- provide ordered, automated rolling updates Components: Service, StatefulSet, Persistent Volume, Persistent Volume Claim, Pods @@ -277,8 +296,8 @@ metadata: app: web-nginx spec: ports: - - port: 80 - name: web-service + - port: 80 + name: web-service clusterIP: None selector: app: web-nginx @@ -300,32 +319,34 @@ spec: spec: terminationGracePeriodSeconds: 10 containers: - - name: web-nginx-pod - image: nginx:1.20-alpine - ports: - - containerPort: 80 - name: web - volumeMounts: - - name: web-pvc - mountPath: /usr/share/nginx/html + - name: web-nginx-pod + image: nginx:1.20-alpine + ports: + - containerPort: 80 + name: web + volumeMounts: + - name: web-pvc + mountPath: /usr/share/nginx/html volumeClaimTemplates: - - metadata: - name: web-pvc - spec: - accessModes: [ "ReadWriteOnce" ] - resources: - requests: - storage: 1Gi + - metadata: + name: web-pvc + spec: + accessModes: ["ReadWriteOnce"] + resources: + requests: + storage: 1Gi ``` -* A Headless Service, named `web-nginx-svc`, is used to control the network domain. -* The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods. -* The volumeClaimTemplates, named `web-pvc`, will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner. + +- A Headless Service, named `web-nginx-svc`, is used to control the network domain. +- The StatefulSet, named web, has a Spec that indicates that 3 replicas of the nginx container will be launched in unique Pods. 
+- The volumeClaimTemplates, named `web-pvc`, will provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner. ```shell kubectl apply -f nginx-stateful-set.yaml ``` Watch pods created one by one + ```shell kubectl get po -w -l app=web-nginx ``` @@ -333,41 +354,49 @@ kubectl get po -w -l app=web-nginx ### Network Identities of Stateful Set Hostname consists of + ```shell ${StatefulSet.metadata.name}-${order}.${Service.metadata.name}.${namespace} ``` -* ${StatefulSet.metadata.name} - `web-nginx-stateful` -* ${order} - number 0..N-1 replicas count -* ${Service.metadata.name} - `web-nginx-svc` -* ${namespace} - namespace stateful created in + +- ${StatefulSet.metadata.name} - `web-nginx-stateful` +- ${order} - number 0..N-1 replicas count +- ${Service.metadata.name} - `web-nginx-svc` +- ${namespace} - namespace stateful created in for example full: + ```shell web-nginx-stateful-1.web-nginx-svc.test-sandbox.svc.cluster.local ``` + and short: + ```shell web-nginx-stateful-1.web-nginx-svc ``` - Get hostname of each pod + ```shell for i in $(kubectl get po -o name -l app=web-nginx); do kubectl exec "$i" -- sh -c 'hostname -f'; done ``` + Will output + ``` web-nginx-stateful-0.web-nginx-svc.test-sandbox.svc.cluster.local web-nginx-stateful-1.web-nginx-svc.test-sandbox.svc.cluster.local web-nginx-stateful-2.web-nginx-svc.test-sandbox.svc.cluster.local ``` -Create an empty container +Create an empty container + ```shell -kubectl run test-pod --image=alpine --rm -it \ +kubectl run test-pod --image=alpine --rm -it \ --restart=Never -- sh ``` @@ -376,7 +405,7 @@ nslookup web-nginx-svc Server: 10.96.0.10 Address: 10.96.0.10:53 - + Name: web-nginx-svc.test-sandbox.svc.cluster.local Address: 172.17.0.11 Name: web-nginx-svc.test-sandbox.svc.cluster.local @@ -394,7 +423,7 @@ nslookup 172.17.0.11 ``` ### Examples of Stateful Set - + 1. [etcd cluster](https://github.com/iliadmitriev/etcd-cluster) 2. 
[postgresql](https://github.com/iliadmitriev/postgres-cluster) @@ -405,11 +434,13 @@ nslookup 172.17.0.11 Applies configuration to resources or create new resources. from file + ```shell kubectl apply -f nginx-deployment.yaml -# or +# or cat nginx-deployment.yaml | kubectl apply -f- ``` + or from stdin ```shell @@ -445,7 +476,7 @@ Delete resources ```shell # delete resources specified in yaml file kubectl delete -f nginx-deployment.yaml -# delete pod +# delete pod kubectl delete po nginx-web-1 # delete deployment with all it's pods kubectl delete deployment nginx-web @@ -493,12 +524,13 @@ kubectl get configmap -A # get configmaps from test-sandbox namespaces kubectl get configmap -n test-sandbox # get configmap postgres-config contents in yaml -kubectl get configmap postgres-config -o yaml +kubectl get configmap postgres-config -o yaml ``` ## Create ConfigMap Create ConfigMap with command line + ```shell kubectl create cm test-config-map \ --from-literal KEY1=VALUE1 \ @@ -506,17 +538,20 @@ kubectl create cm test-config-map \ ``` Create ConfigMap from environment file `test.env` containing key=value pairs + ```shell KEY1=VALUE1 KEY2=VALUE2 KEY3=LONG VALUE WITH SPACES ``` + ```shell kubectl create cm test-config-map \ --from-env-file=test.env ``` Create ConfigMap from defined configuration file `test-config-map.yaml` + ```yaml apiVersion: v1 kind: ConfigMap @@ -527,13 +562,15 @@ data: KEY2: VALUE2 ``` -Apply ConfigMap configuration +Apply ConfigMap configuration + ```shell kubectl apply -f test-config-map.yaml ``` Create ConfigMap from file `public-cert.pem` All file data will be value of PUBLIC_CERT variable + ```shell kubectl create cm test-config-map \ --from-file PUBLIC_CERT=public-cert.pem @@ -542,9 +579,11 @@ kubectl create cm test-config-map \ ## Edit ConfigMaps Edit ConfigMap named test-config-map from current namespace + ```shell kubectl edit configmap test-config-map ``` + Current pods using this ConfigMap will not be affected ## Delete ConfigMaps @@ 
-557,12 +596,12 @@ kubectl delete configmap test-config-map Current pods using this ConfigMap will not be affected - ## Using ConfigMaps ### Environment variables from ConfigMap with envFrom Let's create a test pod `config-test-pod.yaml` + ```yaml apiVersion: v1 kind: Pod @@ -573,19 +612,25 @@ spec: - name: test-container image: nginx:1.20-alpine envFrom: - - configMapRef: - name: test-config-map + - configMapRef: + name: test-config-map restartPolicy: Never ``` + Run pod + ```shell kubectl create -f config-test-pod.yaml ``` + Check the environment variables + ```shell kubectl exec -it config-test-pod -- env | grep KEY ``` + Should output + ``` KEY1=VALUE1 KEY2=VALUE2 @@ -594,6 +639,7 @@ KEY2=VALUE2 ### Single environment variable from ConfigMap with valueFrom Let's create a test pod `config-test-pod.yaml` + ```yaml apiVersion: v1 kind: Pod @@ -611,14 +657,19 @@ spec: key: KEY1 restartPolicy: Never ``` + ```shell kubectl create -f config-test-pod.yaml ``` + Check the environment variable KEY + ```shell kubectl exec -it config-test-pod -- env | grep KEY ``` + Should output + ``` KEY1=VALUE1 ``` @@ -626,6 +677,7 @@ KEY1=VALUE1 ### Mount ConfigMap as a volume directory Let's create a test pod `config-test-pod.yaml` + ```yaml apiVersion: v1 kind: Pod @@ -644,17 +696,19 @@ spec: name: test-config-map restartPolicy: Never ``` + ```shell kubectl create -f config-test-pod.yaml ``` Check files + ```shell kubectl exec -it config-test-pod -- sh ls /volume/ KEY1 KEY2 - + cat /volume/KEY1 VALUE1 @@ -683,16 +737,19 @@ spec: name: test-config-map restartPolicy: Never ``` + ```shell kubectl create -f config-test-pod.yaml ``` + Check files + ```shell kubectl exec -it config-test-pod -- sh ls /volume/ KEY1 - + cat /volume/KEY1 VALUE1 ``` @@ -702,25 +759,29 @@ cat /volume/KEY1 A Secret is a namespaced resource that contains a small amount of secretive data such as a password, a token, or a key, which you don't want to include in your application code. 
A secret can be used with pods in three ways:
-* As file(s) with volume mounted to pod
-* As container environment variable(s)
+
+- As file(s) with volume mounted to pod
+- As container environment variable(s)
+- By the kubelet when pulling images for the pod (imagePullSecrets)

## Types of Secret

-* Opaque (generic, default) - arbitrary user-defined data
-* docker-registry (kubernetes.io/dockerconfigjson) serialized ~/.docker/config.json file
-* tls (kubernetes.io/tls) - data for a TLS client or server, x509 certificate and a private key
-* service account token (kubernetes.io/service-account-token) service account token data to authorize at kubernetes api
-* kubernetes.io/basic-auth - credentials for basic authentication (http)
-* kubernetes.io/ssh-auth - credentials for ssh authentication (ssh, git)
+- Opaque (generic, default) - arbitrary user-defined data
+- docker-registry (kubernetes.io/dockerconfigjson) serialized ~/.docker/config.json file
+- tls (kubernetes.io/tls) - data for a TLS client or server, x509 certificate and a private key
+- service account token (kubernetes.io/service-account-token) service account token data to authorize at kubernetes api
+- kubernetes.io/basic-auth - credentials for basic authentication (http)
+- kubernetes.io/ssh-auth - credentials for ssh authentication (ssh, git)

## Opaque secrets

Opaque is the default Secret type if omitted.
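The values placed in an Opaque Secret's `data:` section must be base64-encoded. A quick sketch of producing and checking such values (the sample strings match the yaml example that follows in the document):

```shell
# encode values for the data: section (note -n: no trailing newline)
echo -n 'supersecret' | base64   # c3VwZXJzZWNyZXQ=
echo -n 'topsecret' | base64     # dG9wc2VjcmV0

# decode a value read back from a secret (-D on older macOS base64)
echo -n 'c3VwZXJzZWNyZXQ=' | base64 -d   # supersecret
```

Forgetting `-n` is a common pitfall: the encoded value would then contain a trailing newline, which ends up in the mounted file or environment variable.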
Creation of opaque secret: + ```shell kubectl create secret generic mypasswords --from-literal=key1=supersecret --from-literal=key2=topsecret ``` + or with yaml file + ```yaml apiVersion: v1 kind: Secret @@ -730,6 +791,7 @@ data: key1: c3VwZXJzZWNyZXQ= key2: dG9wc2VjcmV0 ``` + key1 and key2 values go as base64 encoded strings ## Docker registry auth data @@ -741,7 +803,9 @@ kubectl create secret docker-registry dockerhub \ --docker-password=secret \ --docker-email=myemail@gmail.com ``` -or + +or + ```yaml apiVersion: v1 kind: Secret @@ -751,7 +815,9 @@ metadata: data: .dockerconfigjson: eyJhdXRocyI6eyJodWIuZG9ja2VyLmNvbSI6eyJ1c2VybmFtZSI6ImRvY2tlcnVzZXJuYW1lIiwicGFzc3dvcmQiOiJzZWNyZXQiLCJlbWFpbCI6Im15ZW1haWxAZ21haWwuY29tIiwiYXV0aCI6IlpHOWphMlZ5ZFhObGNtNWhiV1U2YzJWamNtVjAifX19 ``` + where `.data.dockerconfigjson` is base64 encoded json with docker login data + ```json { "auths": { @@ -770,9 +836,10 @@ where `.data.dockerconfigjson` is base64 encoded json with docker login data TLS Secret contains `tls.key` and `tls.crt` keys, a pair of x509 certificate and a private key stored in pem format. 
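If you have no certificate at hand, a self-signed pair in the expected pem format can be generated with openssl (assuming the openssl CLI is installed; `my-cert.pem` / `my-cert.key` are example file names):

```shell
# generate a self-signed x509 certificate and an unencrypted private key
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout my-cert.key -out my-cert.pem \
  -subj "/CN=hello-world.local"

# sanity check before creating the secret:
# the two moduli must be identical if the key and certificate match
openssl x509 -noout -modulus -in my-cert.pem
openssl rsa -noout -modulus -in my-cert.key
```

The modulus comparison mirrors the check kubectl itself performs when creating a tls secret.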
When creating tls secret with kubectl, it checks provided data on: -* private key structure -* x509 certificate structure -* key and certificate match each other + +- private key structure +- x509 certificate structure +- key and certificate match each other ```shell # create a new certificate from files @@ -785,7 +852,7 @@ kubectl create secret tls my-cert \ # update existing sertificate from files kubectl create secret tls hello-world-tls \ --cert hw.pem --key hw.key \ - --dry-run=client -o yaml + --dry-run=client -o yaml --save-config | kubectl apply -f - ``` @@ -800,19 +867,20 @@ metadata: name: mypod spec: containers: - - name: mypod - image: nginx:alpine - volumeMounts: - - name: foo - mountPath: "/mypasswords" - readOnly: true + - name: mypod + image: nginx:alpine + volumeMounts: + - name: foo + mountPath: "/mypasswords" + readOnly: true volumes: - - name: foo - secret: - secretName: mypasswords + - name: foo + secret: + secretName: mypasswords ``` -Check +Check + ```shell kubectl exec -ti mypod -- sh @@ -832,6 +900,7 @@ cat /mypasswords/key2 Set mode 0600 default and 0610 to a file mykey1 + ```yaml apiVersion: v1 kind: Pod @@ -839,23 +908,25 @@ metadata: name: mypod spec: containers: - - name: mypod - image: nginx:alpine - volumeMounts: - - name: foo - mountPath: "/mypasswords" - readOnly: true + - name: mypod + image: nginx:alpine + volumeMounts: + - name: foo + mountPath: "/mypasswords" + readOnly: true volumes: - - name: foo - secret: - secretName: mypasswords - defaultMode: 0600 - items: - - key: key1 - path: mykey1 - mode: 0610 + - name: foo + secret: + secretName: mypasswords + defaultMode: 0600 + items: + - key: key1 + path: mykey1 + mode: 0610 ``` + Check + ```shell kubectl exec -ti mypod -- sh @@ -863,10 +934,10 @@ ls -l /mypasswords/ # total 0 # lrwxrwxrwx 1 root root 13 Aug 4 11:57 mykey1 -> ..data/mykey1 -cat /mypasswords/mykey1 +cat /mypasswords/mykey1 # supersecret -ls -l /mypasswords/..data/mykey1 +ls -l /mypasswords/..data/mykey1 # 
-rw---x--- 1 root root 11 Aug 4 12:04 /mypasswords/..data/mykey1
```
@@ -881,13 +952,15 @@ metadata:
  name: mypod
spec:
  containers:
-  - name: mypod
-    image: nginx:alpine
-    envFrom:
-    - secretRef:
-        name: mypasswords
+    - name: mypod
+      image: nginx:alpine
+      envFrom:
+        - secretRef:
+            name: mypasswords
```
+
Check
+
```shell
kubectl exec -ti mypod -- sh

export | grep key
# export key2='topsecret'
```

-### Environment variables from a Secret's specific key 
+### Environment variables from a Secret's specific key

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
-  - name: mypod
-    image: nginx:alpine
-    env:
-    - name: ADMIN_PASSWORD
-      valueFrom:
-        secretKeyRef:
-          name: mypasswords
-          key: key1
+    - name: mypod
+      image: nginx:alpine
+      env:
+        - name: ADMIN_PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: mypasswords
+              key: key1
```
+
Check
+
```shell
kubectl exec -ti mypod -- sh

export | grep ADMIN
@@ -927,10 +1002,13 @@
## Get resource labels

Use the `--show-labels` key to show labels
+
```shell
kubectl get po --show-labels
```
+
Output
+
```
NAME           READY   STATUS    RESTARTS   AGE     LABELS
pg-etcd-0      1/1     Running   1          3m53s   app=pg-etcd,controller-revision-hash=pg-etcd-78dbf7b6d5,statefulset.kubernetes.io/pod-name=pg-etcd-0
@@ -943,47 +1021,52 @@
terminal-pod   1/1     Running   3          41h     run=terminal-pod
```

Get labels for other types of resources
+
```shell
# services
kubectl get svc --show-labels
# persistent volume claim
-kubectl get pvc --show-labels 
+kubectl get pvc --show-labels
```

## Use labels for filtering resources

Use key `-l` or `--selector` for filtering only matching labels
+
```shell
kubectl get po -l app=pg-postgres
```
+
For multiple conditions use comma-separated format
+
```shell
-kubectl get po -l app=pg-postgres,role=master 
+kubectl get po -l app=pg-postgres,role=master
```
+
### Filtering condition operations

-* equality =
-* inequality !=
-* inclusion (in)
-* exclusion (notin)
+- equality =
+- inequality !=
+- inclusion (in)
+- 
exclusion (notin)

```shell
# equality: label app is equal to pg-postgres
# and label role is equal to master
-kubectl get po -l role=master 
+kubectl get po -l role=master

# inequality
kubectl get po -l role!=slave
- 
+
# inclusion (use single quotes)
# get all resources whose
# key app has value pg-postgres or pg-etcd
kubectl get po -l 'app in (pg-postgres,pg-etcd)'

# exclusion
-# get all resources which 
-# key env value is not equal to both dev or test 
+# get all resources whose
+# key env value is equal to neither dev nor test
kubectl get po -l 'env notin (dev, test)'

# multiple conditions
@@ -993,18 +1076,23 @@ kubectl get po -l 'app in (pg-postgres,pg-etcd),role!=slave,env notin (dev, test

## Add labels to pod

Put label `role` equal to `master` on pod `pg-postgres-2`
+
```shell
kubectl label po pg-postgres-2 role=master
```
+
Pod `pg-postgres-2` should not have this label already, or you will get an error
+
```
error: 'role' already has a value (master), and --overwrite is false
```
+
To overwrite the value of label `role`, use the `--overwrite` key

## Remove labels from pod

Remove label `unhealthy` from the `pg-postgres-0` pod (use a minus sign)
+
```shell
kubectl label po pg-postgres-0 unhealthy-
```
@@ -1012,17 +1100,19 @@ kubectl label po pg-postgres-0 unhealthy-
## Using labels as a resource selectors

Labels are used as selectors for these resource types:
-* Service
-* Deployment
-* StatefulSet
-* ReplicationController
-* Job
-* ReplicaSet
-* DaemonSet
+
+- Service
+- Deployment
+- StatefulSet
+- ReplicationController
+- Job
+- ReplicaSet
+- DaemonSet

### Matching condition types

The `matchLabels` condition is used for selectors as key=value pairs combined with an AND operation
+
```yaml
selector:
  matchLabels:
    app: pg-postgres
    healthy: true
    role: master
```

@@ -1032,16 +1122,17 @@
`matchExpressions` is used for selectors with operations `In`, `NotIn`, `Exists` and `DoesNotExist`
-````yaml
+
+```yaml
selector:
  matchLabels:
    app: pg-postgres
    healthy: true
    role: master
  matchExpressions:
- - {key: env, operator: In, 
values: [production]} - - {key: env, operator: NotIn, values: [dev,test]} -```` + - { key: env, operator: In, values: [production] } + - { key: env, operator: NotIn, values: [dev, test] } +``` # Service Accounts @@ -1050,13 +1141,15 @@ Service Account (sa) - is a special type of account intended to represent a non- ## Managing ServiceAccounts ### Get Service Accounts + ```shell kubectl get sa ``` -### Create Service Accounts +### Create Service Accounts Create a service account named `terminal-sa` + ```shell kubectl create sa terminal-sa ``` @@ -1067,6 +1160,7 @@ Create a Service account from previous example Create pod which uses service account. `spec.serviceAccountName` and `spec.automountServiceAccountToken` + ```shell apiVersion: v1 kind: Pod @@ -1088,7 +1182,7 @@ Service accounts secrets will be mounted to pod container in directory ```shell kubectl exec -ti terminal-pod -- sh -# list contents of +# list contents of ls -la /var/run/secrets/kubernetes.io/serviceaccount/ @@ -1103,9 +1197,9 @@ lrwxrwxrwx 1 root root 12 Jul 30 16:43 token -> ..data/token ``` -* `ca.crt` - CA certificates bundle -* `namespace` - namespace name -* `token` - api auth token +- `ca.crt` - CA certificates bundle +- `namespace` - namespace name +- `token` - api auth token #### Role and RoleBinding @@ -1114,35 +1208,40 @@ An RBAC Role contains rules that represent a set of permissions. Permissions are A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in. 
Create a role that allows nothing except getting the pod named `terminal-pod`
+
```shell
kubectl create role terminal-role \
  --verb=get --resource=pods \
  --resource-name=terminal-pod
```
+
or
+
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: terminal-role
rules:
-- apiGroups: [""]
-  resourceNames: ["terminal-pod"]
-  resources: ["pods"]
-  verbs: ["get"]
+  - apiGroups: [""]
+    resourceNames: ["terminal-pod"]
+    resources: ["pods"]
+    verbs: ["get"]
```

Where the verbs are the permission set:
-* get - read a resource
-* list - resources
-* watch - watch resources
-* patch - could be patched runtime
-* update - could be updated
-* create - could be created new resources of the type
-* list - can get a list of resources
-* delete - can delete resource of that type
+
+- get - read a single resource
+- list - get a list of resources of the type
+- watch - watch resources for changes
+- patch - patch a resource at runtime
+- update - update a resource
+- create - create new resources of the type
+- delete - delete resources of the type

Bind this Role to the Service Account by creating a RoleBinding
+
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
@@ -1153,8 +1252,8 @@ roleRef:
  kind: Role
  name: terminal-role
subjects:
-- kind: ServiceAccount
-  name: terminal-sa
+  - kind: ServiceAccount
+    name: terminal-sa
```

## Kubernetes APIs

@@ -1162,10 +1261,13 @@ subjects:
Execute a shell in the [terminal-pod created earlier](#service-account-usage)

API default path:
+
```
https://kubernetes.default.svc.cluster.local
```
+
or get the API IP from environment variables
+
```dotenv
export KUBERNETES_PORT='tcp://10.96.0.1:443'
export KUBERNETES_PORT_443_TCP='tcp://10.96.0.1:443'
@@ -1178,6 +1280,7 @@ export KUBERNETES_SERVICE_PORT_HTTPS='443'
```

Export the CA certificate bundle, token and namespace
+
```shell
export NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
export CA_BUNDLE=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
@@ -1185,6
+1288,7 @@ export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
```

Get pod information
+
```shell
curl --cacert ${CA_BUNDLE} \
  -H "Authorization: Bearer ${TOKEN}" \
@@ -1199,6 +1303,7 @@ And add permission to `patch` resource named `terminal-pod`
```shell
kubectl edit roles terminal-role
```
+
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
@@ -1206,20 +1311,22 @@ metadata:
  name: terminal-role
  namespace: test-sandbox
rules:
-- apiGroups:
-  - ""
-  resourceNames:
-  - terminal-pod
-  resources:
-  - pods
-  verbs:
-  - get
-  - patch
+  - apiGroups:
+      - ""
+    resourceNames:
+      - terminal-pod
+    resources:
+      - pods
+    verbs:
+      - get
+      - patch
```
+
Add the `patch` verb to the list

Get back to a shell in our pod `terminal-pod`

and add the label `main` with value `yes` to the pod, using the API from inside the container
+
```shell
curl --cacert ${CA_BUNDLE} \
  -H "Authorization: Bearer ${TOKEN}" \
@@ -1228,7 +1335,8 @@ curl --cacert ${CA_BUNDLE} \
  -d '{"metadata":{"labels":{"main":"yes"}}}'
```

-Lets check labels 
+Let's check the labels

```shell
kubectl get po terminal-pod --show-labels
@@ -1239,12 +1347,14 @@ terminal-pod   1/1     Running   0          51m   main=yes

# Nodes

Get node list
+
```shell
kubectl get nodes
kubectl get nodes -o wide
```

Get node info and statistics
+
```shell
kubectl describe nodes minikube
```
@@ -1252,14 +1362,19 @@ kubectl describe nodes minikube

## Cordon and uncordon node

Cordon - mark a node as unschedulable. New pods will not be assigned to this node.
+
```shell
kubectl cordon minikube
```
+
Check that the node is unschedulable
+
```shell
kubectl describe nodes minikube
```
+
Output
+
```
....
Unschedulable:      true
@@ -1267,6 +1382,7 @@ Unschedulable:      true
```

Make the node schedulable again
+
```shell
kubectl uncordon minikube
```
@@ -1281,8 +1397,9 @@ kubectl drain

Pod affinity and anti-affinity describe how pods are attracted to or repelled from nodes.
There are two types of affinity and anti-affinity:
-* `preferredDuringSchedulingIgnoredDuringExecution` is a set of soft rules
-* `requiredDuringSchedulingIgnoredDuringExecution` is hard requirements
+
+- `preferredDuringSchedulingIgnoredDuringExecution` is a set of soft rules
+- `requiredDuringSchedulingIgnoredDuringExecution` is a set of hard requirements

Let's assume we have 3 worker nodes in our kubernetes cluster (for example we will use the [minikube setup](minikube.md))
@@ -1296,6 +1413,7 @@ kubectl get nodes
```

Create deployment
+
```yaml
apiVersion: apps/v1
kind: Deployment
@@ -1324,14 +1442,16 @@ spec:
                  values:
                    - nginx
      containers:
-      - name: nginx
-        image: nginx:1.20-alpine
-        ports:
-        - containerPort: 80
+        - name: nginx
+          image: nginx:1.20-alpine
+          ports:
+            - containerPort: 80
```
+
This deployment means that during scheduling its pods can't be placed on the same node (a node that already has pods matching the selector `app In [nginx]`)

Make sure all the pods are running, and that they are all running on different nodes
+
```shell
kubectl get po -o wide
# NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
@@ -1341,32 +1461,36 @@ kubectl get po -o wide
```

Let's scale the pods to 4 replicas
+
```shell
kubectl scale deployment nginx-deployment --replicas 4
# deployment.apps/nginx-deployment scaled
```

-And check pods 
+And check the pods

```shell
-kubectl get po -o wide 
+kubectl get po -o wide
# NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
# nginx-deployment-5dfdbd8546-5n2dg   1/1     Running   0          36s   10.244.2.2   minikube-m03
# nginx-deployment-5dfdbd8546-lmht4   0/1     Pending   0          2s
# nginx-deployment-5dfdbd8546-rv8mg   1/1     Running   0          36s   10.244.0.3   minikube
# nginx-deployment-5dfdbd8546-whks7   1/1     Running   0          36s   10.244.1.2   minikube-m02
```
+
The 4th pod has been created but is not running; it has status `Pending` because of the "hard" anti-affinity rule.
There is no node that doesn't already have pods with the label app=nginx

Let's create another worker node and add it to the kubernetes cluster
+
```shell
-minikube node add 
+minikube node add
# ...
# 🔎  Verifying Kubernetes components...
# 🏄  Successfully added m04 to minikube!

-kubectl get node 
+kubectl get node
# NAME           STATUS   ROLES                  AGE   VERSION
# minikube       Ready    control-plane,master   24m   v1.21.2
# minikube-m02   Ready                           23m   v1.21.2
@@ -1375,6 +1499,7 @@ kubectl get node
```

And check the 4th pod's status
+
```shell
kubectl get po -o wide
# NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
@@ -1401,30 +1526,34 @@ kubectl describe po nginx-deployment-5dfdbd8546-lmht4

Now that we have added a new node to the cluster, one which didn't have pods with the label app=nginx, the kubernetes scheduler successfully deployed the pod and its status changed to `Running`

-
# Deploying

## Rollout

Rollout can be applied to:
-* [deployments](#deployments)
-* [stateful sets](#statefulset)
-* daemon stets
+
+- [deployments](#deployments)
+- [stateful sets](#statefulset)
+- daemon sets

Get the status of the current rollout
+
```shell
kubectl rollout status statefulset web-nginx-stateful
```
+
```
partitioned roll out complete: 1 new pods have been updated...
```

Get the history of previous deployments
+
```shell
kubectl rollout history statefulset web-nginx-stateful
```
+
```
-statefulset.apps/web-nginx-stateful 
+statefulset.apps/web-nginx-stateful
REVISION  CHANGE-CAUSE
1
2
@@ -1433,7 +1562,7 @@ REVISION  CHANGE-CAUSE

### Rollout restart

```shell
-kubectl rollout restart statefulset web-nginx-stateful 
+kubectl rollout restart statefulset web-nginx-stateful
```

### Partial deployment

The partition designates an ordinal: pods with an ordinal greater than or equal to the partition value will be updated

In other words, if you have 3 pods numbered from 0 to 2, and the partition is set to 3, no pods will be updated. To have all pods updated you have to set partition to 0.
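The ordinal rule can be sketched in plain shell (an illustration only, not a kubectl command): with a partition of 2 and ordinals 0 to 2, only the last pod rolls to the new template.

```shell
# Illustration of the partition rule for a 3-replica StatefulSet:
# pods with ordinal >= partition get the new pod template,
# pods with ordinal < partition keep the old one.
partition=2
for ordinal in 0 1 2; do
  if [ "$ordinal" -ge "$partition" ]; then
    echo "web-nginx-stateful-$ordinal: updated"
  else
    echo "web-nginx-stateful-$ordinal: unchanged"
  fi
done
# -> web-nginx-stateful-0: unchanged
# -> web-nginx-stateful-1: unchanged
# -> web-nginx-stateful-2: updated
```

Lowering the partition step by step, as in the walkthrough below, widens the set of updated pods one ordinal at a time.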
+ ```shell kubectl get statefulsets.apps web-nginx-stateful \ -o json | jq '.spec.updateStrategy' ``` + ```json { "spec": { @@ -1462,24 +1593,28 @@ kubectl get statefulsets.apps web-nginx-stateful \ ```shell kubectl describe statefulsets.apps web-nginx-stateful ``` + ``` Update Strategy: RollingUpdate Partition: 0 ``` -1) Set partition size to 3 +1. Set partition size to 3 + ```shell kubectl patch statefulset web-nginx-stateful \ -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}' ``` -2) Patch pod template with a new image of nginx +2. Patch pod template with a new image of nginx + ```shell kubectl patch statefulset web-nginx-stateful \ --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"nginx:1.21-alpine"}]' ``` Let's check that rolling update does not start + ```shell kubectl rollout status statefulset web-nginx-stateful # partitioned roll out complete: 0 new pods have been updated... @@ -1488,62 +1623,74 @@ kubectl rollout status statefulset web-nginx-stateful ```shell kubectl get pods -l app=web-nginx \ -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' - -# web-nginx-stateful-0: nginx:1.20-alpine, -# web-nginx-stateful-1: nginx:1.20-alpine, + +# web-nginx-stateful-0: nginx:1.20-alpine, +# web-nginx-stateful-1: nginx:1.20-alpine, # web-nginx-stateful-2: nginx:1.20-alpine, ``` -3) Set partition size to 2 +3. Set partition size to 2 + ```shell kubectl patch statefulset web-nginx-stateful \ -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}' ``` + Let's check that rolling update goes only on one last pod + ```shell kubectl rollout status statefulset web-nginx-stateful # partitioned roll out complete: 1 new pods have been updated... 
``` + Check updated pod + ```shell kubectl get pods -l app=web-nginx \ -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' -# web-nginx-stateful-0: nginx:1.20-alpine, -# web-nginx-stateful-1: nginx:1.20-alpine, -# web-nginx-stateful-2: nginx:1.21-alpine, +# web-nginx-stateful-0: nginx:1.20-alpine, +# web-nginx-stateful-1: nginx:1.20-alpine, +# web-nginx-stateful-2: nginx:1.21-alpine, ``` -4) Set partition size to 0 (finish deploy operation) +4. Set partition size to 0 (finish deploy operation) + ```shell kubectl patch statefulset web-nginx-stateful \ -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}' ``` + Check rollout status + ```shell kubectl rollout status statefulset web-nginx-stateful # Waiting for 1 pods to be ready... # Waiting for 1 pods to be ready... # partitioned roll out complete: 3 new pods have been updated... ``` + Check container images: + ```shell kubectl get pods -l app=web-nginx \ -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' -# web-nginx-stateful-0: nginx:1.21-alpine, -# web-nginx-stateful-1: nginx:1.21-alpine, +# web-nginx-stateful-0: nginx:1.21-alpine, +# web-nginx-stateful-1: nginx:1.21-alpine, # web-nginx-stateful-2: nginx:1.21-alpine, ``` ### Rollback to previous revision Get history of deployments revision + ```shell kubectl rollout history statefulset web-nginx-stateful ``` + ``` -statefulset.apps/web-nginx-stateful +statefulset.apps/web-nginx-stateful REVISION CHANGE-CAUSE 1 2 @@ -1551,6 +1698,7 @@ REVISION CHANGE-CAUSE ``` Rollback to revision 2 + ```shell kubectl rollout undo statefulset web-nginx-stateful --to-revision 2 ``` @@ -1578,6 +1726,7 @@ Before you begin [Metrics server](https://github.com/kubernetes-sigs/metrics-server) monitoring needs to be deployed in the cluster to provide metrics through the Metrics API. 
Addon [metrics-server](minikube.md#add-nodes-to-minikube-cluster) should be enabled for the [minikube](minikube.md) setup

Check that the metrics server works properly
+
```shell
# check metrics from nodes
kubectl top nodes
@@ -1601,8 +1750,8 @@ spec:
  selector:
    app: sqrt
  ports:
-  - port: 8080
-    name: http
+    - port: 8080
+      name: http
---
apiVersion: apps/v1
kind: Deployment
@@ -1621,44 +1770,50 @@ spec:
        app: sqrt
    spec:
      containers:
-      - name: sqrt-pod
-        image: iliadmitriev/workload:latest
-        imagePullPolicy: Always
-        ports:
-        - containerPort: 8080
-        # it's mandatory to specify
-        # resources limits
-        resources:
-          limits:
-            cpu: 500m
-          requests:
-            cpu: 250m
+        - name: sqrt-pod
+          image: iliadmitriev/workload:latest
+          imagePullPolicy: Always
+          ports:
+            - containerPort: 8080
+          # it's mandatory to specify
+          # resource limits
+          resources:
+            limits:
+              cpu: 500m
+            requests:
+              cpu: 250m
```

#### Units

CPU
-* 1 - means vCPU unit
-* 500m (milli cpu) - 0,5 of vCPU unit
+
+- 1 - means one vCPU unit
+- 500m (milli cpu) - 0.5 of a vCPU unit

Memory
-* suffixes: E, P, T, G, M, k
-* suffixes: Ei, Pi, Ti, Gi, Mi, Ki
-* 128Mi - 128 megabytes
+
+- suffixes: E, P, T, G, M, k
+- suffixes: Ei, Pi, Ti, Gi, Mi, Ki
+- 128Mi - 128 mebibytes

#### Horizontal pod autoscaler

Create an autoscaler for deployment `web-sqrt`
+
```shell
kubectl autoscale deployment web-sqrt \
  --max 10 --min 2 --cpu-percent=50
```

Get autoscaler state
+
```shell
kubectl describe hpa web-sqrt
```
+
Output
+
```
Name:                                                  web-sqrt
Namespace:                                             default
@@ -1679,11 +1834,15 @@ Conditions:
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:
```
-Get current workloads 
+
+Get current workloads
+
```shell
kubectl get pod -l app=sqrt -o wide -w
```
+
Output
+
```
NAME                       READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
web-sqrt-9949c44f7-nhdfv   1/1     Running   0          2m36s   10.244.2.2   minikube-m03
web-sqrt-9949c44f7-p259n   1/1     Running   0          2m36s   10.244.1.2   min
```

Simulate a huge load
+
```shell
# create a new empty container
kubectl run terminal -ti --image=alpine --command -- sh
@@ -1701,7 +1861,9 @@ apk add apache2-utils

# make 10,000 requests in 40 threads to the service
ab -c 40 -n 10000 http://web-sqrt-svc:8080/
```
+
New pods are created
+
```
NAME                       READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
web-sqrt-9949c44f7-26ngb   1/1     Running   0          2m15s   10.244.1.6   minikube-m02
@@ -1748,6 +1910,7 @@ Events:
  Normal  SuccessfulRescale  2m53s               horizontal-pod-autoscaler  New size: 4; reason: All metrics below target
  Normal  SuccessfulRescale  2m38s (x2 over 12m) horizontal-pod-autoscaler  New size: 2; reason: All metrics below target
```
+
# Network

## Services

@@ -1783,6 +1946,7 @@ kubectl create svc clusterip nginx --tcp=8080:80
```

Create `nginx-deployment-svc-clusterip.yaml`
+
```yaml
apiVersion: v1
kind: Service
@@ -1796,26 +1960,30 @@ spec:
    # should match with labels of deployment and pods
    app: nginx
  ports:
-  - port: 8080
-    protocol: TCP
-    targetPort: 80
+    - port: 8080
+      protocol: TCP
+      targetPort: 80
  type: ClusterIP
```
+
Apply
+
```shell
kubectl apply -f nginx-deployment-svc-clusterip.yaml
```

Get the IP address
+
```shell
kubectl describe svc nginx-deployment-svc-clusterip | grep IP:
- 
+
IP:                10.108.153.142
```

Check
+
```shell
-kubectl run -it --rm --image=alpine -- sh 
+kubectl run terminal -it --rm --image=alpine -- sh
apk add curl
curl http://10.108.153.142:8080/
```
@@ -1838,12 +2006,13 @@ kubectl expose deployment nginx-deployment \
  --type=LoadBalancer --port=8080 --target-port=80
```

-* target-port - port of container
-* port - external port
+- target-port - the container's port
+- port - the external service port

## Ingress

Before we start, we need to create a deployment with 2 replicas and expose it with a service of type NodePort
+
```shell
kubectl create deployment web --replicas=2 \
  --image=gcr.io/google-samples/hello-app:1.0
@@ -1853,6 +2022,7 @@ kubectl expose deployment web \
```

If you are running minikube, enable the ingress addon and start a tunnel
+
```shell
minikube addons enable
ingress minikube tunnel @@ -1869,12 +2039,15 @@ kubectl create secret tls hello-world-tls \ ``` Create ingress + ```shell kubectl create ingress hello-world-ingress \ --default-backend=web:8080 \ --rule="hello-world.info/*=web:8080,tls=hello-world-tls" ``` + or + ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress @@ -1889,28 +2062,30 @@ spec: port: number: 8080 rules: - - host: hello-world.info - http: - paths: - - backend: - service: - name: web - port: - number: 8080 - path: / - pathType: Prefix + - host: hello-world.info + http: + paths: + - backend: + service: + name: web + port: + number: 8080 + path: / + pathType: Prefix tls: - - hosts: - - hello-world.info - secretName: hello-world-tls + - hosts: + - hello-world.info + secretName: hello-world-tls ``` add string `127.0.0.1 hello-world.info` to your `/etc/hosts` file + ```shell sudo sed -i '' -e '$a\'$'\n''127.0.0.1 hello-world.info'$'\n' /etc/hosts ``` Check ingress is working + ```shell curl https://hello-world.info @@ -1922,12 +2097,13 @@ Hostname: web-79d88c97d6-vdqdp # Resources - [Medium.com](https://medium.com) - * [Kubernetes in three diagrams](https://tsuyoshiushio.medium.com/kubernetes-in-three-diagrams-6aba8432541c) - * [Deploying PostgreSQL as a StatefulSet in Kubernetes](https://www.bmc.com/blogs/kubernetes-postgresql/) + - [Kubernetes in three diagrams](https://tsuyoshiushio.medium.com/kubernetes-in-three-diagrams-6aba8432541c) + - [Deploying PostgreSQL as a StatefulSet in Kubernetes](https://www.bmc.com/blogs/kubernetes-postgresql/) - [Kubernetes.io](https://kubernetes.io/) - * [Kubernetes concepts](https://kubernetes.io/docs/concepts/_print/) - * [kubectl cli Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) - * [Managing resources](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/) - * [Workloads](https://kubernetes.io/docs/concepts/workloads/) - * [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) - * 
[Service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) \ No newline at end of file + - [Kubernetes concepts](https://kubernetes.io/docs/concepts/_print/) + - [kubectl cli Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) + - [Managing resources](https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/) + - [Workloads](https://kubernetes.io/docs/concepts/workloads/) + - [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) + - [Service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) + diff --git a/Kubernetes/minikube.md b/Kubernetes/minikube.md index f24f007..6ba0cc2 100644 --- a/Kubernetes/minikube.md +++ b/Kubernetes/minikube.md @@ -2,21 +2,21 @@ - [Install minikube](#install-minikube) - [Configuration](#configuration) - * [Get list of keys](#get-list-of-keys) - * [View current config keys and values](#view-current-config-keys-and-values) - * [Get config key value](#get-config-key-value) - * [Set config key a new value](#set-config-key-a-new-value) + - [Get list of keys](#get-list-of-keys) + - [View current config keys and values](#view-current-config-keys-and-values) + - [Get config key value](#get-config-key-value) + - [Set config key a new value](#set-config-key-a-new-value) - [Usage](#usage) - * [Create cluster and start](#create-cluster-and-start) - * [Addons](#addons) - + [List available addons](#list-available-addons) - + [Install addon](#install-addon) - * [Dashboard addon](#dashboard-addon) - + [Create and run dashboard](#create-and-run-dashboard) - * [Network and connection](#network-and-connection) - + [Connect via service](#connect-via-service) - + [Connect using tunnel](#connect-using-tunnel) - * [Add nodes to minikube cluster](#add-nodes-to-minikube-cluster) + - [Create cluster and start](#create-cluster-and-start) + - [Addons](#addons) + - [List available addons](#list-available-addons) + - [Install 
addon](#install-addon)
+  - [Dashboard addon](#dashboard-addon)
+    - [Create and run dashboard](#create-and-run-dashboard)
+  - [Network and connection](#network-and-connection)
+    - [Connect via service](#connect-via-service)
+    - [Connect using tunnel](#connect-using-tunnel)
+  - [Add nodes to minikube cluster](#add-nodes-to-minikube-cluster)
- [Cleanup](#cleanup)

# Install minikube
@@ -44,7 +44,6 @@ minikube config view
minikube config get driver
```

-
## Set config key a new value

```shell
@@ -57,6 +56,7 @@ minikube config set driver docker

To create and start the previously created cluster

Run
+
```shell
minikube start --driver=docker --nodes=1
```
@@ -64,11 +64,13 @@ minikube start --driver=docker --nodes=1

The cluster will be automatically added to your kubectl config as the default context

Get status of the cluster
+
```shell
minikube status
```

Stop cluster
+
```shell
minikube stop
```
@@ -89,7 +91,8 @@ minikube addons enable metrics-server

## Dashboard addon

-### Create and run dashboard 
+### Create and run dashboard
+
```shell
minikube dashboard
```
@@ -99,11 +102,13 @@ minikube dashboard

### Connect via service

List available services in namespace `default`
+
```shell
minikube service list -n default
```

Connect service `service-name` to a localhost port
+
```shell
minikube service -n default service-name
```
@@ -114,11 +119,13 @@ You can create tunnel to a minikube service.
The service must be of type `LoadBalancer`

List available services in namespace `default`
+
```shell
minikube tunnel list -n default
```

Connect
+
```shell
minikube tunnel -n default
```
@@ -126,8 +133,9 @@ minikube tunnel -n default

## Add nodes to minikube cluster

Before adding nodes, be aware of:
-* https://github.com/kubernetes/minikube/issues/8055
-* https://github.com/kubernetes/minikube/issues/10382
+
+- https://github.com/kubernetes/minikube/issues/8055
+- https://github.com/kubernetes/minikube/issues/10382

```shell
minikube node add
@@ -139,7 +147,7 @@ minikube node add

# Stop cluster nodes
minikube stop

-# Delete nodes resources 
+# Delete node resources
minikube delete
# or
docker rm minikube
diff --git a/s3/S3.md b/s3/S3.md
index c54aa7f..36d0938 100644
--- a/s3/S3.md
+++ b/s3/S3.md
@@ -1,12 +1,15 @@
# S3
+
## aws cli

#### install
+
```bash
python3 -m pip install awscli
```

### set env variables
+
```bash
cat > .env <<_EOF_
AWS_CA_BUNDLE=/path/to/root/ca_root.pem
@@ -18,6 +21,7 @@ export $(cat .env | xargs)
```

### configure
+
```bash
aws configure set aws_access_key_id
aws configure set aws_secret_access_key
@@ -26,6 +30,7 @@ cat ~/.aws/credentials
```

### list files
+
```bash
aws s3 ls s3://bucket-name/ \
  --endpoint-url=https://s3.url-address.ru \
@@ -34,6 +39,7 @@ aws s3 ls s3://bucket-name/ \
```

### copy files
+
```bash
aws s3 cp s3://bucket-name/ ~/local/ \
  --recursive \
diff --git a/sa/all-db.md b/sa/all-db.md
index 5b22942..c945262 100644
--- a/sa/all-db.md
+++ b/sa/all-db.md
@@ -1,5 +1,6 @@
+# SQLAlchemy + Async
-# install
+## install

Install SQLAlchemy, greenlet and databases async drivers

@@ -7,7 +8,7 @@ Install SQLAlchemy, greenlet and databases async drivers
pip3 install sqlalchemy greenlet asyncpg asyncmy pymysql aiosqlite
```

-# create databases
+## create databases

create docker instances of the databases

@@ -20,8 +21,8 @@ _EOF
docker run -d --name user-postgres --hostname user-postgres \
  --env-file .env_postgres -p 5432:5432
postgres:13.4-alpine3.14
- 
-cat > .env_mysql << _EOF 
+
+cat > .env_mysql << _EOF
MYSQL_ROOT_PASSWORD=rootsecret
MYSQL_DATABASE=user
MYSQL_USER=user
@@ -32,7 +33,7 @@ docker run -d --name user-mariadb --hostname user-mariadb \
  --env-file .env_mysql -p 3306:3306 mariadb
```

-# Run test scrypt
+## Run test script

uncomment the DATABASE_URL you wish to test

@@ -94,7 +95,7 @@ async def async_main():
        await session.commit()
        # refresh user from db
        await session.refresh(user_db)
- 
+
        print(user_db.id)

# run coroutine
diff --git a/sa/asyncpg.md b/sa/asyncpg.md
index 4a0ac93..a6bd5ad 100644
--- a/sa/asyncpg.md
+++ b/sa/asyncpg.md
@@ -1,10 +1,13 @@
-# Install
+# asyncpg
+
+## Install

```shell
pip install sqlalchemy\[asyncio\]
```

Create a postgresql instance with docker
+
```shell
# env file with variables
cat > .env << _EOF_
@@ -20,7 +23,7 @@ docker run -d --name auth-postgres --hostname auth-postgres \
  -p 5432:5432 --env-file .env postgres:13.4-alpine3.14
```

-# Prepare and run
+## Prepare and run

```python
import asyncio
@@ -74,7 +77,7 @@ async def async_main():
        await session.commit()
        # refresh user from db
        await session.refresh(user_db)
- 
+
        print(user_db.id)

# run coroutine
diff --git a/zsh/zsh_install.md b/zsh/zsh_install.md
index ab71c79..ce78b38 100644
--- a/zsh/zsh_install.md
+++ b/zsh/zsh_install.md
@@ -12,7 +12,7 @@ tags: []
sudo apt install zsh zsh-antigen zsh-doc zsh-syntax-highlighting zsh-autosuggestions
```

-# set zsh as default shell
+## set zsh as default shell

```bash
chsh -s /usr/bin/zsh
@@ -23,7 +23,6 @@ and don't forget to restart your gnome session to apply changes

on first start press 2 to populate `~/.zshrc` with default recommended configuration

-
## install oh-my-zsh dependencies

```bash
@@ -46,7 +45,6 @@ omz theme use theme
omz theme set theme
```

-
edit file `~/.zshrc` to set the default theme

```bash
@@ -60,6 +58,7 @@ set to desirable theme: `michelebologna`

more themes here: https://github.com/ohmyzsh/ohmyzsh/wiki/Themes

add to the end of file:
+
```
source /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh
```
@@ -76,7 +75,7 @@ _EOF_

## install nerd fonts

-### using `font-manager` 
+### using `font-manager`

```bash
# install font manager