This chapter explains how an application deployed in a Kubernetes cluster can be updated using a Deployment. It also explains how to perform a canary deployment of an application.
A Deployment creates a ReplicaSet to manage pods. The number of replicas in the ReplicaSet can be scaled up and down to meet the demands of your application. Updating an application deployed using a Deployment requires updating the Deployment's configuration. This modification creates a new ReplicaSet, which is scaled up while the previous ReplicaSet is scaled down, so your application is updated without downtime.
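For example, while an update is in progress you can watch the two ReplicaSets that the Deployment manages being scaled in opposite directions. This is a minimal sketch; the label value matches the pod labels used by the configurations later in this chapter:

$ # Watch the ReplicaSets managed by the Deployment while an update runs.
$ kubectl get rs -l name=app-recreate -w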
Note
There is a kubectl rolling-update command, but it applies only to Replication Controllers. The update from that command is driven on the client side. It is strongly recommended to do rolling updates using a Deployment, as those updates are driven on the server side.
For our use case, the application initially uses the image arungupta/app-upgrade:v1. Then the Deployment is updated to use the image arungupta/app-upgrade:v2. The v1 image prints “Hello World!” and the v2 image prints “Howdy World!”. The source code for these images is in the images directory.
In order to perform the exercises in this chapter, you’ll need to deploy configurations to a Kubernetes cluster. To create an EKS-based Kubernetes cluster, use the AWS CLI (recommended). If you wish to create a Kubernetes cluster without EKS, you can instead use kops.
All configuration files for this chapter are in the app-update directory. Make sure you change to that directory before running any of the commands in this chapter.
Updating an application requires replacing all the existing pods with new pods that use a different version of the image. .spec.strategy in the Deployment configuration defines the strategy used to replace the old pods with the new ones. This key can take two values:
- Recreate
- RollingUpdate (default)
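Either value goes under .spec.strategy in the Deployment spec, as in this minimal fragment (the complete configuration files are shown below):

spec:
  strategy:
    type: Recreate        # or RollingUpdate (the default)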
Let’s look at these two deployment strategies.
With the Recreate strategy, all existing pods are killed before the new ones are created. The configuration file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-recreate
spec:
  replicas: 5
  selector:
    matchLabels:
      name: app-recreate
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: app-recreate
    spec:
      containers:
      - name: app-recreate
        image: arungupta/app-upgrade:v1
        ports:
        - containerPort: 8080
- Create deployment:
  $ kubectl create -f templates/app-recreate.yaml --record
  deployment "app-recreate" created
  --record ensures that the command initiating this deployment is recorded. This is useful when the application has gone through a few updates and you need to correlate a version with the command.
- Get the history of deployments:
  $ kubectl rollout history deployment/app-recreate
  deployments "app-recreate"
  REVISION        CHANGE-CAUSE
  1               kubectl create --filename=templates/app-recreate.yaml --record=true
- Expose service:
  $ kubectl expose deployment/app-recreate --port=80 --target-port=8080 --name=app-recreate --type=LoadBalancer
  service "app-recreate" exposed
- Get service detail:
  $ kubectl describe svc/app-recreate
  Name:                     app-recreate
  Namespace:                default
  Labels:                   name=app-recreate
  Annotations:              <none>
  Selector:                 name=app-recreate
  Type:                     LoadBalancer
  IP:                       100.65.43.233
  LoadBalancer Ingress:     af2dc1f99bda211e791f402037f18a54-1616925381.eu-central-1.elb.amazonaws.com
  Port:                     <unset>  80/TCP
  TargetPort:               80/TCP
  NodePort:                 <unset>  30158/TCP
  Endpoints:                100.96.1.14:80,100.96.2.13:80,100.96.3.13:80 + 2 more...
  Session Affinity:         None
  External Traffic Policy:  Cluster
  Events:
    Type    Reason                Age   From                Message
    ----    ------                ----  ----                -------
    Normal  CreatingLoadBalancer  24s   service-controller  Creating load balancer
    Normal  CreatedLoadBalancer   23s   service-controller  Created load balancer
- Access service:
  $ curl http://af2dc1f99bda211e791f402037f18a54-1616925381.eu-central-1.elb.amazonaws.com
  Hello World!
  The output shows that the v1 version of the image is being used.
- In a different terminal, watch the status of the pods:
  $ kubectl get -w pods
  app-v1-recreate-486400321-4rwzb   1/1       Running   0          9m
  app-v1-recreate-486400321-fqh5l   1/1       Running   0          9m
  app-v1-recreate-486400321-jm02h   1/1       Running   0          9m
  app-v1-recreate-486400321-rl79n   1/1       Running   0          9m
  app-v1-recreate-486400321-z89nm   1/1       Running   0          9m
- Update image of the deployment:
  $ kubectl set image deployment/app-recreate app-recreate=arungupta/app-upgrade:v2
  deployment "app-recreate" image updated
- Status of the pods is updated. It shows that all the pods are terminated first, and then the new ones are created:
  $ kubectl get -w pods
  NAME                               READY     STATUS              RESTARTS   AGE
  app-v1-recreate-486400321-4rwzb    1/1       Running             0          9m
  app-v1-recreate-486400321-fqh5l    1/1       Running             0          9m
  app-v1-recreate-486400321-jm02h    1/1       Running             0          9m
  app-v1-recreate-486400321-rl79n    1/1       Running             0          9m
  app-v1-recreate-486400321-z89nm    1/1       Running             0          9m
  app-v1-recreate-486400321-rl79n    1/1       Terminating         0          10m
  app-v1-recreate-486400321-jm02h    1/1       Terminating         0          10m
  app-v1-recreate-486400321-fqh5l    1/1       Terminating         0          10m
  app-v1-recreate-486400321-z89nm    1/1       Terminating         0          10m
  app-v1-recreate-486400321-4rwzb    1/1       Terminating         0          10m
  app-v1-recreate-486400321-rl79n    0/1       Terminating         0          10m
  app-v1-recreate-486400321-4rwzb    0/1       Terminating         0          10m
  app-v1-recreate-486400321-fqh5l    0/1       Terminating         0          10m
  app-v1-recreate-486400321-z89nm    0/1       Terminating         0          10m
  app-v1-recreate-486400321-jm02h    0/1       Terminating         0          10m
  app-v1-recreate-486400321-fqh5l    0/1       Terminating         0          10m
  app-v1-recreate-486400321-fqh5l    0/1       Terminating         0          10m
  app-v1-recreate-486400321-z89nm    0/1       Terminating         0          10m
  app-v1-recreate-486400321-z89nm    0/1       Terminating         0          10m
  app-v1-recreate-486400321-rl79n    0/1       Terminating         0          10m
  app-v1-recreate-486400321-rl79n    0/1       Terminating         0          10m
  app-v1-recreate-486400321-jm02h    0/1       Terminating         0          10m
  app-v1-recreate-486400321-jm02h    0/1       Terminating         0          10m
  app-v1-recreate-486400321-4rwzb    0/1       Terminating         0          10m
  app-v1-recreate-486400321-4rwzb    0/1       Terminating         0          10m
  app-v1-recreate-2362379170-fp3j2   0/1       Pending             0          0s
  app-v1-recreate-2362379170-xxqqw   0/1       Pending             0          0s
  app-v1-recreate-2362379170-hkpt7   0/1       Pending             0          0s
  app-v1-recreate-2362379170-jzh5d   0/1       Pending             0          0s
  app-v1-recreate-2362379170-k26sf   0/1       Pending             0          0s
  app-v1-recreate-2362379170-xxqqw   0/1       Pending             0          0s
  app-v1-recreate-2362379170-fp3j2   0/1       Pending             0          0s
  app-v1-recreate-2362379170-hkpt7   0/1       Pending             0          0s
  app-v1-recreate-2362379170-jzh5d   0/1       Pending             0          0s
  app-v1-recreate-2362379170-k26sf   0/1       Pending             0          0s
  app-v1-recreate-2362379170-xxqqw   0/1       ContainerCreating   0          0s
  app-v1-recreate-2362379170-fp3j2   0/1       ContainerCreating   0          1s
  app-v1-recreate-2362379170-hkpt7   0/1       ContainerCreating   0          1s
  app-v1-recreate-2362379170-jzh5d   0/1       ContainerCreating   0          1s
  app-v1-recreate-2362379170-k26sf   0/1       ContainerCreating   0          1s
  app-v1-recreate-2362379170-fp3j2   1/1       Running             0          3s
  app-v1-recreate-2362379170-k26sf   1/1       Running             0          3s
  app-v1-recreate-2362379170-xxqqw   1/1       Running             0          3s
  app-v1-recreate-2362379170-hkpt7   1/1       Running             0          4s
  app-v1-recreate-2362379170-jzh5d   1/1       Running             0          4s
  The output shows that all the pods are terminated first and then the new ones are created. To wait for the rollout to finish instead of watching individual pods, see the rollout status sketch after this list.
- Get the history of deployments:
  $ kubectl rollout history deployment/app-recreate
  deployments "app-recreate"
  REVISION        CHANGE-CAUSE
  1               kubectl create --filename=templates/app-recreate.yaml --record=true
  2               kubectl set image deployment/app-recreate app-recreate=arungupta/app-upgrade:v2
- Access the application again:
  $ curl http://af2dc1f99bda211e791f402037f18a54-1616925381.eu-central-1.elb.amazonaws.com
  Howdy World!
  The output now shows that the v2 version of the image is being used.
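Instead of watching individual pods, you can also wait for the rollout to complete. This is a minimal sketch using the standard rollout subcommand:

$ # Blocks until the new pods are rolled out, or reports a failure.
$ kubectl rollout status deployment/app-recreate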
With the RollingUpdate strategy (the default), pods are updated in a rolling fashion.
Two optional properties define how the rolling update is performed:

- .spec.strategy.rollingUpdate.maxSurge specifies the maximum number of pods that can be created over the desired number of pods. The value can be an absolute number or a percentage. The default value is 25%.
- .spec.strategy.rollingUpdate.maxUnavailable specifies the maximum number of pods that can be unavailable during the update process. The value can be an absolute number or a percentage. The default value is 25%.
The configuration file looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-rolling
spec:
  replicas: 5
  selector:
    matchLabels:
      name: app-rolling
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        name: app-rolling
    spec:
      containers:
      - name: app-rolling
        image: arungupta/app-upgrade:v1
        ports:
        - containerPort: 8080
In this case, at most 1 pod can be created over the desired number of pods, and at most 1 pod can be unavailable during the update process.
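To make the limits concrete, the arithmetic for this configuration (replicas: 5, maxSurge: 1, maxUnavailable: 1) works out as follows: at most 5 + 1 = 6 pods can exist at any point during the update, and at least 5 - 1 = 4 pods must remain available. The rollout therefore replaces pods roughly one at a time, never exceeding 6 pods or dropping below 4 available pods.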
- Create deployment:
  $ kubectl create -f templates/app-rolling.yaml --record
  deployment "app-rolling" created
  Once again, --record ensures that the command initiating this deployment is recorded. This is useful when the application has gone through a few updates and you need to correlate a version with the command.
- Get the history of deployments:
  $ kubectl rollout history deployment/app-rolling
  deployments "app-rolling"
  REVISION        CHANGE-CAUSE
  1               kubectl create --filename=templates/app-rolling.yaml --record=true
- Expose service:
  $ kubectl expose deployment/app-rolling --port=80 --target-port=8080 --name=app-rolling --type=LoadBalancer
  service "app-rolling" exposed
- Get service detail:
  $ kubectl describe svc/app-rolling
  Name:                     app-rolling
  Namespace:                default
  Labels:                   name=app-rolling
  Annotations:              <none>
  Selector:                 name=app-rolling
  Type:                     LoadBalancer
  IP:                       100.71.164.130
  LoadBalancer Ingress:     abe27b4c7bdaa11e791f402037f18a54-647142678.eu-central-1.elb.amazonaws.com
  Port:                     <unset>  80/TCP
  TargetPort:               80/TCP
  NodePort:                 <unset>  31521/TCP
  Endpoints:                100.96.1.16:80,100.96.2.15:80,100.96.3.15:80 + 2 more...
  Session Affinity:         None
  External Traffic Policy:  Cluster
  Events:
    Type    Reason                Age   From                Message
    ----    ------                ----  ----                -------
    Normal  CreatingLoadBalancer  1m    service-controller  Creating load balancer
    Normal  CreatedLoadBalancer   1m    service-controller  Created load balancer
- Access service:
  $ curl http://abe27b4c7bdaa11e791f402037f18a54-647142678.eu-central-1.elb.amazonaws.com
  Hello World!
  The output shows that the v1 version of the image is being used.
- In a different terminal, watch the status of the pods:
  $ kubectl get -w pods
  NAME                           READY     STATUS    RESTARTS   AGE
  app-rolling-1683885671-d7vpf   1/1       Running   0          2m
  app-rolling-1683885671-dt31h   1/1       Running   0          2m
  app-rolling-1683885671-k8xn9   1/1       Running   0          2m
  app-rolling-1683885671-sdjk3   1/1       Running   0          2m
  app-rolling-1683885671-x1npp   1/1       Running   0          2m
- Update image of the deployment:
  $ kubectl set image deployment/app-rolling app-rolling=arungupta/app-upgrade:v2
  deployment "app-rolling" image updated
- Status of the pods is updated:
  $ kubectl get -w pods
  NAME                           READY     STATUS              RESTARTS   AGE
  app-rolling-1683885671-d7vpf   1/1       Running             0          2m
  app-rolling-1683885671-dt31h   1/1       Running             0          2m
  app-rolling-1683885671-k8xn9   1/1       Running             0          2m
  app-rolling-1683885671-sdjk3   1/1       Running             0          2m
  app-rolling-1683885671-x1npp   1/1       Running             0          2m
  app-rolling-4154020364-ddn16   0/1       Pending             0          0s
  app-rolling-4154020364-ddn16   0/1       Pending             0          1s
  app-rolling-4154020364-ddn16   0/1       ContainerCreating   0          1s
  app-rolling-1683885671-sdjk3   1/1       Terminating         0          5m
  app-rolling-4154020364-j0nnk   0/1       Pending             0          1s
  app-rolling-4154020364-j0nnk   0/1       Pending             0          1s
  app-rolling-4154020364-j0nnk   0/1       ContainerCreating   0          1s
  app-rolling-1683885671-sdjk3   0/1       Terminating         0          5m
  app-rolling-4154020364-ddn16   1/1       Running             0          2s
  app-rolling-1683885671-dt31h   1/1       Terminating         0          5m
  app-rolling-4154020364-j0nnk   1/1       Running             0          3s
  app-rolling-4154020364-wlvfz   0/1       Pending             0          1s
  app-rolling-4154020364-wlvfz   0/1       Pending             0          1s
  app-rolling-1683885671-x1npp   1/1       Terminating         0          5m
  app-rolling-4154020364-wlvfz   0/1       ContainerCreating   0          1s
  app-rolling-4154020364-qr1lz   0/1       Pending             0          1s
  app-rolling-4154020364-qr1lz   0/1       Pending             0          1s
  app-rolling-1683885671-dt31h   0/1       Terminating         0          5m
  app-rolling-4154020364-qr1lz   0/1       ContainerCreating   0          1s
  app-rolling-1683885671-x1npp   0/1       Terminating         0          5m
  app-rolling-4154020364-wlvfz   1/1       Running             0          2s
  app-rolling-1683885671-d7vpf   1/1       Terminating         0          5m
  app-rolling-4154020364-vlb4b   0/1       Pending             0          2s
  app-rolling-4154020364-vlb4b   0/1       Pending             0          2s
  app-rolling-4154020364-vlb4b   0/1       ContainerCreating   0          2s
  app-rolling-1683885671-d7vpf   0/1       Terminating         0          5m
  app-rolling-1683885671-x1npp   0/1       Terminating         0          5m
  app-rolling-1683885671-x1npp   0/1       Terminating         0          5m
  app-rolling-4154020364-qr1lz   1/1       Running             0          3s
  app-rolling-1683885671-k8xn9   1/1       Terminating         0          5m
  app-rolling-1683885671-k8xn9   0/1       Terminating         0          5m
  app-rolling-4154020364-vlb4b   1/1       Running             0          2s
  The output shows that a new pod is created, then an old one is terminated, then another new pod is created, and so on. If you need to halt a rollout that is misbehaving, see the pause/resume sketch after this list.
- Get the history of deployments:
  $ kubectl rollout history deployment/app-rolling
  deployments "app-rolling"
  REVISION        CHANGE-CAUSE
  1               kubectl create --filename=templates/app-rolling.yaml --record=true
  2               kubectl set image deployment/app-rolling app-rolling=arungupta/app-upgrade:v2
- Access the application again:
  $ curl http://abe27b4c7bdaa11e791f402037f18a54-647142678.eu-central-1.elb.amazonaws.com
  Howdy World!
  The output now shows that the v2 version of the image is being used.
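If the new version misbehaves while a rollout is in progress, the rollout can be paused and later resumed. This is a minimal sketch using the standard rollout subcommands:

$ # Pause the rollout; already-updated pods keep running, no further pods are replaced.
$ kubectl rollout pause deployment/app-rolling
$ # Resume the rollout once the issue is resolved.
$ kubectl rollout resume deployment/app-rolling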
As discussed above, details about how a Deployment was rolled out can be obtained using the kubectl rollout history command. In order to roll back, let’s get the complete history of the Deployment:
$ kubectl rollout history deployment/app-rolling
deployments "app-rolling"
REVISION        CHANGE-CAUSE
1               kubectl create --filename=templates/app-rolling.yaml --record=true
2               kubectl set image deployment/app-rolling app-rolling=arungupta/app-upgrade:v2
Roll back to a previous version using the command:
$ kubectl rollout undo deployment/app-rolling --to-revision=1
deployment "app-rolling" rolled back
Now access the service again:
$ curl http://abe27b4c7bdaa11e791f402037f18a54-647142678.eu-central-1.elb.amazonaws.com
Hello World!
The output shows that the v1 version of the image is now being used.
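Two related commands are worth knowing. kubectl rollout history can show the details recorded for a single revision, and kubectl rollout undo without --to-revision rolls back to the immediately previous revision. A minimal sketch:

$ # Show the pod template recorded for revision 2.
$ kubectl rollout history deployment/app-rolling --revision=2
$ # Roll back to the previous revision.
$ kubectl rollout undo deployment/app-rolling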
A canary deployment allows you to deploy a new version of the application in production by slowly rolling out the change to a small subset of users before rolling it out to everybody.
There are multiple ways to achieve this in Kubernetes:
- Using Service, Deployment, and Labels
- Using an Ingress Controller
- Using a DNS Controller
At this time, only one means of canary deployment is explained. Details on the other methods will be added later.
Two Deployments, each using a different version of the image, are used together. The pods in both Deployments share common labels but differ in at least one label. The common pod labels are used as the selector for the Service. The differing label is used to scale the number of replicas of each version independently. One replica of the new version is released alongside the old version. If no errors are detected for some time, the number of replicas of the new version is scaled up and the number of replicas of the old version is scaled down. Eventually, the old version is deleted.
Let’s look at the v1 version of the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-v1
spec:
  replicas: 2
  selector:
    matchLabels:
      name: app
      version: v1
  template:
    metadata:
      labels:
        name: app
        version: v1
    spec:
      containers:
      - name: app
        image: arungupta/app-upgrade:v1
        ports:
        - containerPort: 8080
It uses the arungupta/app-upgrade:v1 image. It has two labels: name: app and version: v1.
Let’s look at the v2 version of the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      name: app
      version: v2
  template:
    metadata:
      labels:
        name: app
        version: v2
    spec:
      containers:
      - name: app
        image: arungupta/app-upgrade:v2
        ports:
        - containerPort: 8080
It uses a different image, arungupta/app-upgrade:v2. It has one label, name: app, that matches the v1 version of the Deployment. Its other label uses the same key as in the v1 Deployment but with a different value, version: v2. This label allows this Deployment to be scaled independently, without affecting the v1 version of the Deployment.
Finally, let’s look at the service definition that uses these Deployments:
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    name: app
  ports:
  - name: app
    port: 80
  type: LoadBalancer
The Service selector uses the label that is common to both versions of the application. This allows the pods from both Deployments to be part of the Service.
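Once the Deployments and the Service below are created, one way to confirm this is to list the Service endpoints; the IP of every pod from both Deployments should appear. A minimal sketch:

$ # Lists the pod IPs currently backing the Service, regardless of their version label.
$ kubectl get endpoints app-service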
Let’s verify.
- Deploy v1 version of Deployment:
  $ kubectl apply -f templates/app-v1.yaml
  deployment "app-v1" created
- Deploy v2 version of Deployment:
  $ kubectl apply -f templates/app-v2.yaml
  deployment "app-v2" created
- Deploy Service:
  $ kubectl apply -f templates/app-service.yaml
  service "app-service" created
- Check the list of pods for this service:
  $ kubectl get pods -l name=app
  NAME                      READY     STATUS    RESTARTS   AGE
  app-v1-3101668686-4mhcj   1/1       Running   0          2m
  app-v1-3101668686-ncbfv   1/1       Running   0          2m
  app-v2-2627414310-89j1v   1/1       Running   0          2m
  app-v2-2627414310-bgg1t   1/1       Running   0          2m
  Note that we are explicitly specifying the label name=app in the query to pick only the pods that are selected by the service definition in templates/app-service.yaml. There are two pods from the v1 version and two pods from the v2 version. Accessing this service will result in 50% of the responses coming from the v1 version and the other 50% from the v2 version.
The number of pods from the v1 version and the v2 version can now be independently scaled using the two Deployments.
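To see which pods belong to which version, include the labels in the listing. A minimal sketch; the version label is the one that differs between the two Deployments:

$ # The LABELS column shows name=app plus version=v1 or version=v2 for each pod.
$ kubectl get pods -l name=app --show-labels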
- Increase the number of replicas for v2 Deployment:
  $ kubectl scale deploy/app-v2 --replicas=4
  deployment "app-v2" scaled
- Check the pods that are part of the Service:
  $ kubectl get pods -l name=app
  NAME                      READY     STATUS    RESTARTS   AGE
  app-v1-3101668686-4mhcj   1/1       Running   0          6m
  app-v1-3101668686-ncbfv   1/1       Running   0          6m
  app-v2-2627414310-89j1v   1/1       Running   0          6m
  app-v2-2627414310-8jpzd   1/1       Running   0          7s
  app-v2-2627414310-b17v8   1/1       Running   0          7s
  app-v2-2627414310-bgg1t   1/1       Running   0          6m
  You can see that 4 pods are now coming from the v2 version of the application and 2 pods are coming from the v1 version. So, two-thirds of the user traffic will now be served by the new version of the application.
Reduce the number of replicas for
v1
version to 0:$ kubectl scale deploy/app-v1 --replicas=0 deployment "app-v1" scaled
- Check the pods that are part of the Service:
  $ kubectl get pods -l name=app
  NAME                      READY     STATUS    RESTARTS   AGE
  app-v2-2627414310-89j1v   1/1       Running   0          8m
  app-v2-2627414310-8jpzd   1/1       Running   0          1m
  app-v2-2627414310-b17v8   1/1       Running   0          1m
  app-v2-2627414310-bgg1t   1/1       Running   0          8m
  Now all pods are serving the v2 version of the Deployment.
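As mentioned earlier, once the new version has proven itself, the old version is eventually deleted. A minimal sketch of that cleanup:

$ # Remove the old Deployment; the Service selector still matches the v2 pods.
$ kubectl delete deployment/app-v1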
Achieving the right percentage of traffic using Deployments and Services requires spinning up as many pods as necessary. For example, suppose the v1 version has 4 replicas of a pod. In order to direct 5% of the traffic to a single replica of the v2 version, the v1 version would need 15 more replicas, so that 1 of the 20 total pods serves v2. This is not an optimal use of resources. Weighted traffic switching with a Kubernetes Ingress can be used to solve this problem.
Kubernetes Ingress Controller for AWS, by Zalando, is one ingress controller that can be used for this. Deploy it using the provided template:
$ kubectl apply -f templates/ingress-controller.yaml
deployment "app-ingress-controller" created
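As a sketch of what weighted traffic switching can look like, the Ingress below assumes that the two versions are exposed as separate Services (app-v1 and app-v2, which are not part of the templates in this chapter) and that the routing layer honours a backend-weights annotation such as Skipper’s zalando.org/backend-weights; check the documentation of the ingress controller you deploy for the exact annotation and supported values:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    # Hypothetical weights: 95% of requests to app-v1, 5% to app-v2.
    zalando.org/backend-weights: |
      {"app-v1": 95, "app-v2": 5}
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app-v1
          servicePort: 80
      - path: /
        backend:
          serviceName: app-v2
          servicePort: 80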
You are now ready to continue with the workshop!