This is the documentation - and executable code! - for the Service Mesh Academy workshop about what's coming in Linkerd 2.13. The easiest way to use this file is to execute it with demosh.
Things in Markdown comments are safe to ignore when reading this later. When executing this with demosh, things after the horizontal rule below (which is just before a commented `@SHOW` directive) will get displayed.
For this workshop, you'll need a running Kubernetes cluster set up with Linkerd, Emissary, and the Faces app. You can use `create-cluster.sh` to create an appropriate k3d cluster, and `setup-demo.sh` to initialize it.
```bash
BAT_STYLE="grid,numbers"
```
You'll also want two web browsers running, pointing to the Faces app at https://localhost/faces/ (assuming you used `create-cluster.sh` to set up your cluster): one normal one, and one using ModHeader or the like to set `X-Faces-User: testuser` for the canarying test.
Two significant new features in Linkerd 2.13 are dynamic request routing and circuit breaking.
Dynamic request routing permits HTTP routing based on headers, HTTP method, etc. We'll use this to demonstrate progressive delivery and A/B testing deep in the call graph of our application.
Circuit breaking is a resilience feature that allows Linkerd to stop sending requests to endpoints that fail too much. We'll use this to protect our application from a failing workload.
As running right now, the Faces GUI should be showing all grinning faces on green backgrounds. We can see that with a web browser.
The green background comes from the `color` workload. If we have a new version of that workload (`color2`) which returns blue instead of green, we can slowly shift traffic to `color2` using a new `HTTPRoute` resource.
Here's the resource we'll apply:
```bash
#@immed
bat k8s/02-canary/color-canary-10.yaml
```
When we apply that, we should immediately see 10% of the backgrounds shift to blue.
```bash
kubectl apply -f k8s/02-canary/color-canary-10.yaml
```
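If you're reading this without the repo's files handy, the weighted route in `color-canary-10.yaml` looks roughly like this sketch. The `apiVersion`, names, and ports here are assumptions rather than copied from the file; run the `bat` command above to see the real thing.

```yaml
# Sketch of a 90/10 canary HTTPRoute. Details (apiVersion, names,
# ports) are assumptions; see k8s/02-canary/color-canary-10.yaml.
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: color-canary
  namespace: faces
spec:
  parentRefs:
    - name: color        # attach this route to the color Service
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: color    # 90% of requests stay on the green version
          port: 80
          weight: 90
        - name: color2   # 10% shift to the blue version
          port: 80
          weight: 10
```

The weights are relative, not percentages: Linkerd splits traffic in proportion to the total, so 90/10 and 9/1 behave the same.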
Let's take a quick look at the demo architecture again to see what's going on here.
We can change the amount of traffic by changing the weights. For example, now that we see 10% blue working, we can shift to a 50/50 split:
```bash
diff -u99 --color k8s/02-canary/color-canary-{10,50}.yaml
```
Applying that, we'll immediately see the amount of blue backgrounds increase.
```bash
kubectl apply -f k8s/02-canary/color-canary-50.yaml
```
Finally, when we're happy with the 50/50 split, we can shift to routing all our traffic to the blue `color2` service. We could do this by deleting one of our `backendRefs`, but here we'll show using a `weight` of 0 instead:
```bash
diff -u99 --color k8s/02-canary/color-canary-{50,100}.yaml
```
Applying that, we should see no green backgrounds at all.
```bash
kubectl apply -f k8s/02-canary/color-canary-100.yaml
```
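For reference, the "weight 0" approach amounts to a rule fragment something like this (a sketch: names and ports are assumptions, not copied from the file):

```yaml
# Fragment of a fully-shifted rule: color stays listed, but weight 0
# means it receives no traffic. See color-canary-100.yaml for the
# real resource.
rules:
  - backendRefs:
      - name: color    # kept in place for easy rollback
        port: 80
        weight: 0
      - name: color2   # all traffic goes to the blue version
        port: 80
        weight: 100
```

Leaving the zero-weight backend in place makes rolling back a one-line change to the weights, rather than re-adding a deleted `backendRef`.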
Another thing we can do with dynamic request routing is A/B testing deep in the call graph, where the ingress controller can't reach. Here we'll use two browsers to show an A/B test where all requests with the header `X-Faces-User: testuser` get routed to a different `smiley` workload (named `smiley2`), which returns a heart-eyes smiley instead of the normal grinning smiley.
One browser is "normal" and will not send the `X-Faces-User` header. It will show `User: unknown`.
The other browser is set up using the ModHeader extension to send `X-Faces-User: testuser`. It will show `User: testuser`.
Here we have both browsers visible: the top browser sends no `X-Faces-User`; the bottom browser is using ModHeader to send `X-Faces-User: testuser`.
Here's the HTTPRoute we'll add to do the A/B test:
```bash
#@immed
bat k8s/03-abtest/smiley-ab.yaml
```
Applying that, we should immediately see the browser that sets the `X-Faces-User: testuser` header start getting heart-eyes smilies.
```bash
kubectl apply -f k8s/03-abtest/smiley-ab.yaml
```
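The shape of a header-matching route like this is sketched below. Again, the `apiVersion`, names, and ports are assumptions; the `bat` command above shows the actual resource.

```yaml
# Sketch of a header-based A/B HTTPRoute. Details are assumptions;
# see k8s/03-abtest/smiley-ab.yaml for the real resource.
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: smiley-ab
  namespace: faces
spec:
  parentRefs:
    - name: smiley       # attach this route to the smiley Service
      kind: Service
      group: core
      port: 80
  rules:
    - matches:
        - headers:
            - name: x-faces-user   # HTTP header names match case-insensitively
              value: testuser
      backendRefs:
        - name: smiley2  # the B side: heart-eyes smilies
          port: 80
    - backendRefs:
        - name: smiley   # everyone else: grinning smilies
          port: 80
```

The second rule, with no `matches` clause, is the catch-all for requests that don't carry the header.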
Once we're happy that the B side of our A/B test results in happy users (maybe they really like the heart-eyes smiley?), we can delete the `matches` clause entirely, and change the `backendRefs` of the route to unconditionally route all traffic to our `smiley2` workload.
Here's the HTTPRoute:
```bash
#@immed
bat k8s/03-abtest/smiley2-unconditional.yaml
```
Applying it will result in both browsers getting heart eyes smilies:
```bash
kubectl apply -f k8s/03-abtest/smiley2-unconditional.yaml
```
On to circuit breaking!
Circuit breaking stops routing traffic to failing endpoints. To demonstrate this, we'll first switch the Faces GUI to show us which Pods we're getting responses from – this will make it easier to see when the breakers open and close.
At this point, we're getting responses from two `face` Pods. These Pods are part of the `face` Deployment, and the `face` Service uses labels to select those Pods for traffic.
Let's add two more Pods behind the `face` Service. These will be from the `face2` Deployment, but they have labels set so that they will also be matched by the label selector on the `face` Service.
```bash
kubectl apply -f k8s/04-circuit-breaking/face2.yaml
```
The big new thing in the `face2` Deployment is that its Pods will rapidly get stuck in an error state, which will appear as a "meh" face on a pink background, and stay there until they get no traffic for a while.
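The label trick that puts the `face2` Pods behind the `face` Service can be sketched like this. The exact label names are assumptions (based on the `service=face` selector used later in this workshop), and the container spec is omitted:

```yaml
# Sketch: a second Deployment whose Pods carry the label the face
# Service selects on. Label names are assumptions; see
# k8s/04-circuit-breaking/face2.yaml for the real manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: face2
  namespace: faces
spec:
  selector:
    matchLabels:
      app: face2         # this Deployment manages only its own Pods
  template:
    metadata:
      labels:
        app: face2       # distinguishes face2 Pods from face Pods
        service: face    # matches the face Service's selector
    spec:
      # container spec omitted in this sketch
      containers: []
```

Because Services select Pods purely by label, nothing about the `face` Service itself needs to change: its endpoints simply grow to include the new Pods.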
We'll start by waiting until our browser shows the `face2` Pods returning failures.
Now that we see failures, we'll enable circuit breaking. At the moment, this is configured by annotations. We'll set it to break the circuit after 30 consecutive failures (which happens quickly in this demo), and to stay failed for at least 10 seconds.
```bash
kubectl annotate -n faces svc/face \
    balancer.linkerd.io/failure-accrual=consecutive \
    balancer.linkerd.io/failure-accrual-consecutive-max-failures=30 \
    balancer.linkerd.io/failure-accrual-consecutive-min-penalty=10s
```
At this point, we should be back to seeing blue backgrounds and grinning faces, except every so often when a request is allowed through to see if the `face2` Pods have recovered. Eventually they will recover, after which they'll continue getting traffic until the next time they fail and the circuit breaker opens.
There you have it, a quick tour of dynamic request routing and circuit breaking in Linkerd 2.13! This area will be evolving quickly; keep an eye out for future workshops!
There's a new `linkerd diagnostics policy` command which can help a lot with dynamic request routing:
```bash
linkerd diagnostics policy -n faces svc/smiley 80 | bat -l yaml
```
It is extremely verbose, but it can tell you a great deal about exactly what's going on in complex routing situations. Let's restore one of the `color` routes:
```bash
kubectl apply -f k8s/02-canary/color-canary-50.yaml
```
...and then check `linkerd diagnostics policy` for `color`:
```bash
linkerd diagnostics policy -n faces svc/color 80 | bat -l yaml
```
Unfortunately, there's not a cool diagnostic thing that will show you circuit breakers directly. We can, though, use `linkerd viz stat` to infer things: specifically, `linkerd viz stat pods` is pretty powerful here.
For example, if we look at traffic to the `face` Deployment, we just see traffic as usual.
```bash
linkerd viz stat deployment -n faces face
```
However, if we look at all the Pods with the `service=face` label that the `face` Service will select, we can see which Pods are taking traffic... and which are not.
```bash
linkerd viz stat pods -n faces -l "service=face"
```
One note: it takes a while for the `linkerd viz stat` numbers to catch up to changing situations, so be prepared to give it a minute or two to really get the measure of things after you change anything.
Thanks for watching!!