From 8ba9c2e41b6ede78153d94907759b1d0398c9cda Mon Sep 17 00:00:00 2001 From: Jerome Petazzoni Date: Sun, 9 Feb 2020 15:12:55 -0600 Subject: [PATCH] Reorg admin content Split network interconnect in separate section --- slides/k8s/cni.md | 208 ++++++++++------------------------------ slides/k8s/interco.md | 157 ++++++++++++++++++++++++++++++ slides/kadm-fullday.yml | 32 ++++--- slides/kadm-twodays.yml | 1 + 4 files changed, 225 insertions(+), 173 deletions(-) create mode 100644 slides/k8s/interco.md diff --git a/slides/k8s/cni.md b/slides/k8s/cni.md index 26a736b9f..7f0141e23 100644 --- a/slides/k8s/cni.md +++ b/slides/k8s/cni.md @@ -162,6 +162,8 @@ class: extra-details --- +class: extra-details + ## What's BGP? - BGP (Border Gateway Protocol) is the protocol used between internet routers @@ -220,6 +222,22 @@ class: extra-details --- +class: extra-details + +## Checking the CNI configuration + +- By default, kubelet gets the CNI configuration from `/etc/cni/net.d` + +.exercise[ + +- Check the content of `/etc/cni/net.d` + +] + +(On most machines, at this point, `/etc/cni/net.d` doesn't even exist.) + +--- + ## Our control plane - We will use a Compose file to start the control plane @@ -358,6 +376,26 @@ Note: the DaemonSet won't create any pods (yet) since there are no nodes (yet). --- +class: extra-details + +## Checking the CNI configuration + +- At this point, kube-router should have installed its CNI configuration + + (in `/etc/cni/net.d`) + +.exercise[ + +- Check the content of `/etc/cni/net.d` + +] + +- There should be a file created by kube-router + +- The file should contain the node's podCIDR + +--- + ## Setting up a test - Let's create a Deployment and expose it with a Service @@ -405,6 +443,8 @@ This shows that we are using IPVS (vs. iptables, which picked random endpoints). --- +class: extra-details + ## Troubleshooting - What if we need to check that everything is working properly?
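Since the slides added above have students inspect `/etc/cni/net.d` twice (before and after kube-router starts), a hedged sketch of what that check can look like may help. Everything below is made up for illustration: the file name `10-kuberouter.conflist`, the network name `mynet`, and the subnet are fabricated and written to a temp directory, not the exact file a real kube-router install generates.

```bash
# Illustrative only: fabricate a conflist of the general shape a CNI plugin
# drops in /etc/cni/net.d (names and subnet are made up for this demo).
mkdir -p /tmp/cni-demo
cat > /tmp/cni-demo/10-kuberouter.conflist <<'EOF'
{
  "cniVersion": "0.3.0",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "kube-bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.0.0.0/24"
      }
    }
  ]
}
EOF

# The check from the exercise, pointed at the demo directory; on a real node,
# list /etc/cni/net.d instead, and compare the subnet with the node's podCIDR
# (shown by: kubectl get node NODE -o jsonpath={.spec.podCIDR}).
ls /tmp/cni-demo
grep '"subnet"' /tmp/cni-demo/10-kuberouter.conflist
```

On a real node the directory may contain more than one file; kubelet uses the configuration files in lexicographic order, which is why CNI plugins prefix their files with a number.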
@@ -428,6 +468,8 @@ We should see the local pod CIDR connected to `kube-bridge`, and the other nodes --- +class: extra-details + ## More troubleshooting - We can also look at the output of the kube-router pods @@ -444,6 +486,8 @@ We should see the local pod CIDR connected to `kube-bridge`, and the other nodes --- +class: extra-details + ## Trying `kubectl logs` / `kubectl exec` .exercise[ @@ -469,6 +513,8 @@ What does that mean? --- +class: extra-details + ## Internal name resolution - To execute these commands, the API server needs to connect to kubelet @@ -485,6 +531,8 @@ What does that mean? --- +class: extra-details + ## Another way to check the logs - We can also ask the logs directly to the container engine @@ -526,163 +574,3 @@ done - This could be useful for embedded platforms with very limited resources (or lab environments for learning purposes) - ---- - -# Interconnecting clusters - -- We assigned different Cluster CIDRs to each cluster - -- This allows us to connect our clusters together - -- We will leverage kube-router BGP abilities for that - -- We will *peer* each kube-router instance with a *route reflector* - -- As a result, we will be able to ping each other's pods - ---- - -## Disclaimers - -- There are many methods to interconnect clusters - -- Depending on your network implementation, you will use different methods - -- The method shown here only works for nodes with direct layer 2 connection - -- We will often need to use tunnels or other network techniques - ---- - -## The plan - -- Someone will start the *route reflector* - - (typically, that will be the person presenting these slides!) - -- We will update our kube-router configuration - -- We will add a *peering* with the route reflector - - (instructing kube-router to connect to it and exchange route information) - -- We should see the routes to other clusters on our nodes - - (in the output of e.g. 
`route -n` or `ip route show`) - -- We should be able to ping pods of other nodes - ---- - -## Starting the route reflector - -- Only do this slide if you are doing this on your own - -- There is a Compose file in the `compose/frr-route-reflector` directory - -- Before continuing, make sure that you have the IP address of the route reflector - ---- - -## Configuring kube-router - -- This can be done in two ways: - - - with command-line flags to the `kube-router` process - - - with annotations to Node objects - -- We will use the command-line flags - - (because it will automatically propagate to all nodes) - -.footnote[Note: with Calico, this is achieved by creating a BGPPeer CRD.] - ---- - -## Updating kube-router configuration - -- We need to pass two command-line flags to the kube-router process - -.exercise[ - -- Edit the `kuberouter.yaml` file - -- Add the following flags to the kube-router arguments: - ``` - - "--peer-router-ips=`X.X.X.X`" - - "--peer-router-asns=64512" - ``` - (Replace `X.X.X.X` with the route reflector address) - -- Update the DaemonSet definition: - ```bash - kubectl apply -f kuberouter.yaml - ``` - -] - ---- - -## Restarting kube-router - -- The DaemonSet will not update the pods automatically - - (it is using the default `updateStrategy`, which is `OnDelete`) - -- We will therefore delete the pods - - (they will be recreated with the updated definition) - -.exercise[ - -- Delete all the kube-router pods: - ```bash - kubectl delete pods -n kube-system -l k8s-app=kube-router - ``` - -] - -Note: the other `updateStrategy` for a DaemonSet is RollingUpdate. -
-For critical services, we might want to precisely control the update process. - ---- - -## Checking peering status - -- We can see informative messages in the output of kube-router: - ``` - time="2019-04-07T15:53:56Z" level=info msg="Peer Up" - Key=X.X.X.X State=BGP_FSM_OPENCONFIRM Topic=Peer - ``` - -- We should see the routes of the other clusters show up - -- For debugging purposes, the reflector also exports a route to 1.0.0.2/32 - -- That route will show up like this: - ``` - 1.0.0.2 172.31.X.Y 255.255.255.255 UGH 0 0 0 eth0 - ``` - -- We should be able to ping the pods of other clusters! - ---- - -## If we wanted to do more ... - -- kube-router can also export ClusterIP addresses - - (by adding the flag `--advertise-cluster-ip`) - -- They are exported individually (as /32) - -- This would allow us to easily access other clusters' services - - (without having to resolve the individual addresses of pods) - -- Even better if it's combined with DNS integration - - (to facilitate name → ClusterIP resolution) diff --git a/slides/k8s/interco.md b/slides/k8s/interco.md new file mode 100644 index 000000000..5907b3e7c --- /dev/null +++ b/slides/k8s/interco.md @@ -0,0 +1,157 @@ +# Interconnecting clusters + +- We assigned different Cluster CIDRs to each cluster + +- This allows us to connect our clusters together + +- We will leverage kube-router BGP abilities for that + +- We will *peer* each kube-router instance with a *route reflector* + +- As a result, we will be able to ping each other's pods + +--- + +## Disclaimers + +- There are many methods to interconnect clusters + +- Depending on your network implementation, you will use different methods + +- The method shown here only works for nodes with direct layer 2 connection + +- We will often need to use tunnels or other network techniques + +--- + +## The plan + +- Someone will start the *route reflector* + + (typically, that will be the person presenting these slides!) 
+ +- We will update our kube-router configuration + +- We will add a *peering* with the route reflector + + (instructing kube-router to connect to it and exchange route information) + +- We should see the routes to other clusters on our nodes + + (in the output of e.g. `route -n` or `ip route show`) + +- We should be able to ping pods of other clusters + +--- + +## Starting the route reflector + +- Only do this slide if you are doing this on your own + +- There is a Compose file in the `compose/frr-route-reflector` directory + +- Before continuing, make sure that you have the IP address of the route reflector + +--- + +## Configuring kube-router + +- This can be done in two ways: + + - with command-line flags to the `kube-router` process + + - with annotations to Node objects + +- We will use the command-line flags + + (because they will automatically propagate to all nodes) + +.footnote[Note: with Calico, this is achieved by creating a BGPPeer CRD.] + +--- + +## Updating kube-router configuration + +- We need to pass two command-line flags to the kube-router process + +.exercise[ + +- Edit the `kuberouter.yaml` file + +- Add the following flags to the kube-router arguments: + ``` + - "--peer-router-ips=`X.X.X.X`" + - "--peer-router-asns=64512" + ``` + (Replace `X.X.X.X` with the route reflector address) + +- Update the DaemonSet definition: + ```bash + kubectl apply -f kuberouter.yaml + ``` + +] + +--- + +## Restarting kube-router + +- The DaemonSet will not update the pods automatically + + (it is using the default `updateStrategy`, which is `OnDelete`) + +- We will therefore delete the pods + + (they will be recreated with the updated definition) + +.exercise[ + +- Delete all the kube-router pods: + ```bash + kubectl delete pods -n kube-system -l k8s-app=kube-router + ``` + +] + +Note: the other `updateStrategy` for a DaemonSet is RollingUpdate. +
+For critical services, we might want to precisely control the update process. + +--- + +## Checking peering status + +- We can see informative messages in the output of kube-router: + ``` + time="2019-04-07T15:53:56Z" level=info msg="Peer Up" + Key=X.X.X.X State=BGP_FSM_OPENCONFIRM Topic=Peer + ``` + +- We should see the routes of the other clusters show up + +- For debugging purposes, the reflector also exports a route to 1.0.0.2/32 + +- That route will show up like this: + ``` + 1.0.0.2 172.31.X.Y 255.255.255.255 UGH 0 0 0 eth0 + ``` + +- We should be able to ping the pods of other clusters! + +--- + +## If we wanted to do more ... + +- kube-router can also export ClusterIP addresses + + (by adding the flag `--advertise-cluster-ip`) + +- They are exported individually (as /32) + +- This would allow us to easily access other clusters' services + + (without having to resolve the individual addresses of pods) + +- Even better if it's combined with DNS integration + + (to facilitate name → ClusterIP resolution) diff --git a/slides/kadm-fullday.yml b/slides/kadm-fullday.yml index 9e63d8eb8..e2b0aa9b9 100644 --- a/slides/kadm-fullday.yml +++ b/slides/kadm-fullday.yml @@ -22,24 +22,30 @@ chapters: - k8s/intro.md - shared/about-slides.md - shared/toc.md -- - k8s/prereqs-admin.md +- + - k8s/prereqs-admin.md - k8s/architecture.md + - k8s/deploymentslideshow.md - k8s/dmuc.md -- - k8s/multinode.md +- + - k8s/multinode.md - k8s/cni.md + - k8s/interco.md +- - k8s/apilb.md - - k8s/control-plane-auth.md -- - k8s/setup-managed.md - - k8s/setup-selfhosted.md + #- k8s/setup-managed.md + #- k8s/setup-selfhosted.md - k8s/cluster-upgrade.md - - k8s/staticpods.md - k8s/cluster-backup.md - - k8s/cloud-controller-manager.md - - k8s/bootstrap.md -- - k8s/resource-limits.md - - k8s/metrics-server.md - - k8s/cluster-sizing.md - - k8s/horizontal-pod-autoscaler.md -- - k8s/lastwords-admin.md + - k8s/staticpods.md +- + #- k8s/cloud-controller-manager.md + #- k8s/bootstrap.md + - 
k8s/control-plane-auth.md + - k8s/podsecuritypolicy.md + - k8s/csr-api.md + - k8s/openid-connect.md +- + #- k8s/lastwords-admin.md - k8s/links.md - shared/thankyou.md diff --git a/slides/kadm-twodays.yml b/slides/kadm-twodays.yml index ac78d49f6..23911f577 100644 --- a/slides/kadm-twodays.yml +++ b/slides/kadm-twodays.yml @@ -29,6 +29,7 @@ chapters: - k8s/dmuc.md - - k8s/multinode.md - k8s/cni.md + - k8s/interco.md - - k8s/apilb.md - k8s/setup-managed.md - k8s/setup-selfhosted.md
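The "Checking peering status" slide in the new `interco.md` has students look for the reflector's debug route in the node routing table. A minimal sketch of that check follows; the `route -n`-style table below is entirely fabricated (all addresses and interface names are illustrative), so the real exercise runs `route -n` or `ip route show` on a node instead.

```bash
# Illustrative only: a fake routing table of the general shape route -n prints,
# with the reflector's 1.0.0.2/32 debug route present (addresses made up).
cat > /tmp/routes.txt <<'EOF'
10.0.0.0        0.0.0.0         255.255.255.0   U     0 0 0 kube-bridge
10.1.0.0        172.31.0.11     255.255.255.0   UG    0 0 0 eth0
1.0.0.2         172.31.0.100    255.255.255.255 UGH   0 0 0 eth0
EOF

# If the debug route shows up, routes are being received over BGP;
# routes to the other clusters' pod CIDRs should appear the same way.
grep '^1\.0\.0\.2' /tmp/routes.txt
```

Seeing the /32 debug route is a quick sanity check that the peering works at all, independently of whether the other clusters have advertised their pod CIDRs yet.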