Reorg admin content
Split network interconnect in separate section
jpetazzo committed Feb 9, 2020
1 parent 36d1199 commit 8ba9c2e
Showing 4 changed files with 225 additions and 173 deletions.
208 changes: 48 additions & 160 deletions slides/k8s/cni.md
@@ -162,6 +162,8 @@ class: extra-details

---

class: extra-details

## What's BGP?

- BGP (Border Gateway Protocol) is the protocol used between internet routers
@@ -220,6 +222,22 @@ class: extra-details

---

class: extra-details

## Checking the CNI configuration

- By default, kubelet gets the CNI configuration from `/etc/cni/net.d`

.exercise[

- Check the content of `/etc/cni/net.d`

]

(On most machines, at this point, `/etc/cni/net.d` doesn't even exist.)
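To make the expected layout concrete, here is a hedged sketch: we fabricate a minimal CNI configuration file in a temporary directory (the directory, file name, network name, and subnet below are all made up for illustration) and check that it is valid JSON, which is the kind of file kubelet will expect to find in `/etc/cni/net.d` later:

```bash
# Sketch: what a minimal CNI configuration file could look like.
# Everything here (file name, network name, subnet) is invented;
# this is NOT the file kube-router will write.
demo_dir=$(mktemp -d)
cat > "$demo_dir/10-demo.conf" <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "demo-network",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24"
  }
}
EOF
# A CNI config must at least be valid JSON; json.tool fails loudly otherwise.
python3 -m json.tool "$demo_dir/10-demo.conf"
```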

---

## Our control plane

- We will use a Compose file to start the control plane
@@ -358,6 +376,26 @@ Note: the DaemonSet won't create any pods (yet) since there are no nodes (yet).

---

class: extra-details

## Checking the CNI configuration

- At this point, kube-router should have installed its CNI configuration

(in `/etc/cni/net.d`)

.exercise[

- Check the content of `/etc/cni/net.d`

]

- There should be a file created by kube-router

- The file should contain the node's podCIDR
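To check that mechanically, one can pull the network name and pod subnet out of each config file. This sketch runs on a canned file (the name `mynet` and the subnet are invented) so that it works standalone; the same loop can be pointed at `/etc/cni/net.d` on a real node:

```bash
# Sketch: extract the network name and pod subnet from CNI config files.
# The canned file below is invented; on a node, replace "$dir"
# with /etc/cni/net.d.
dir=$(mktemp -d)
printf '%s' '{"name":"mynet","type":"bridge","ipam":{"type":"host-local","subnet":"10.244.2.0/24"}}' \
  > "$dir/10-kuberouter.conf"
for conf in "$dir"/*.conf; do
  python3 -c '
import json, sys
c = json.load(open(sys.argv[1]))
print(c["name"], c.get("ipam", {}).get("subnet"))
' "$conf"
done
# → mynet 10.244.2.0/24
```

On a real node, the subnet printed should match that node's `podCIDR` (visible with `kubectl get node <name> -o jsonpath='{.spec.podCIDR}'`).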

---

## Setting up a test

- Let's create a Deployment and expose it with a Service
@@ -405,6 +443,8 @@ This shows that we are using IPVS (vs. iptables, which picked random endpoints).

---

class: extra-details

## Troubleshooting

- What if we need to check that everything is working properly?
@@ -428,6 +468,8 @@ We should see the local pod CIDR connected to `kube-bridge`, and the other nodes

---

class: extra-details

## More troubleshooting

- We can also look at the output of the kube-router pods
@@ -444,6 +486,8 @@ We should see the local pod CIDR connected to `kube-bridge`, and the other nodes

---

class: extra-details

## Trying `kubectl logs` / `kubectl exec`

.exercise[
@@ -469,6 +513,8 @@ What does that mean?

---

class: extra-details

## Internal name resolution

- To execute these commands, the API server needs to connect to kubelet
@@ -485,6 +531,8 @@ What does that mean?

---

class: extra-details

## Another way to check the logs

- We can also ask the container engine directly for the logs
@@ -526,163 +574,3 @@ done
- This could be useful for embedded platforms with very limited resources

(or lab environments for learning purposes)

---

# Interconnecting clusters

- We assigned different Cluster CIDRs to each cluster

- This allows us to connect our clusters together

- We will leverage kube-router BGP abilities for that

- We will *peer* each kube-router instance with a *route reflector*

- As a result, we will be able to ping each other's pods
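Since this only works if the Cluster CIDRs are disjoint, a quick sanity check is worthwhile before peering. This sketch uses two invented CIDRs; substitute the actual Cluster CIDRs of the clusters being connected:

```bash
# Sketch: verify that two Cluster CIDRs do not overlap.
# The CIDRs below are invented; use your clusters' real Cluster CIDRs.
CIDR_A=10.1.0.0/16
CIDR_B=10.2.0.0/16
python3 -c '
import ipaddress, sys
a = ipaddress.ip_network(sys.argv[1])
b = ipaddress.ip_network(sys.argv[2])
print("OVERLAP" if a.overlaps(b) else "OK: disjoint")
' "$CIDR_A" "$CIDR_B"
# → OK: disjoint
```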

---

## Disclaimers

- There are many methods to interconnect clusters

- Depending on your network implementation, you will use different methods

- The method shown here only works for nodes with direct layer 2 connection

- We will often need to use tunnels or other network techniques

---

## The plan

- Someone will start the *route reflector*

(typically, that will be the person presenting these slides!)

- We will update our kube-router configuration

- We will add a *peering* with the route reflector

(instructing kube-router to connect to it and exchange route information)

- We should see the routes to other clusters on our nodes

(in the output of e.g. `route -n` or `ip route show`)

- We should be able to ping pods of other nodes

---

## Starting the route reflector

- Only do this slide if you are doing this on your own

- There is a Compose file in the `compose/frr-route-reflector` directory

- Before continuing, make sure that you have the IP address of the route reflector

---

## Configuring kube-router

- This can be done in two ways:

- with command-line flags to the `kube-router` process

- with annotations to Node objects

- We will use the command-line flags

(because it will automatically propagate to all nodes)

.footnote[Note: with Calico, this is achieved by creating a BGPPeer CRD.]

---

## Updating kube-router configuration

- We need to pass two command-line flags to the kube-router process

.exercise[

- Edit the `kuberouter.yaml` file

- Add the following flags to the kube-router arguments:
```
- "--peer-router-ips=`X.X.X.X`"
- "--peer-router-asns=64512"
```
(Replace `X.X.X.X` with the route reflector address)

- Update the DaemonSet definition:
```bash
kubectl apply -f kuberouter.yaml
```

]
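A quick way to double-check the edit before applying it is to grep the manifest for both flags. The snippet below is canned so that it runs standalone; in the lab, run the same grep against `kuberouter.yaml` itself:

```bash
# Sketch: count the peering flags in a manifest before applying it.
# The canned snippet stands in for the args list of kuberouter.yaml.
cat > /tmp/args-snippet.yaml <<'EOF'
        - "--peer-router-ips=X.X.X.X"
        - "--peer-router-asns=64512"
EOF
grep -c 'peer-router' /tmp/args-snippet.yaml
# → 2
```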

---

## Restarting kube-router

- The DaemonSet will not update the pods automatically

(it is using the default `updateStrategy`, which is `OnDelete`)

- We will therefore delete the pods

(they will be recreated with the updated definition)

.exercise[

- Delete all the kube-router pods:
```bash
kubectl delete pods -n kube-system -l k8s-app=kube-router
```

]

Note: the other `updateStrategy` for a DaemonSet is `RollingUpdate`.
<br/>
For critical services, we might want to precisely control the update process.
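The relevant field lives at `spec.updateStrategy.type`. Here is a sketch that reads it from a canned spec fragment; on a live cluster, and assuming the DaemonSet is named `kube-router`, `kubectl get ds kube-router -n kube-system -o jsonpath='{.spec.updateStrategy.type}'` would report the same thing:

```bash
# Sketch: where the update strategy lives in a DaemonSet manifest.
# The spec fragment below is canned so this runs standalone.
printf '%s' '{"spec":{"updateStrategy":{"type":"OnDelete"}}}' |
python3 -c 'import json, sys; print(json.load(sys.stdin)["spec"]["updateStrategy"]["type"])'
# → OnDelete
```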

---

## Checking peering status

- We can see informative messages in the output of kube-router:
```
time="2019-04-07T15:53:56Z" level=info msg="Peer Up"
Key=X.X.X.X State=BGP_FSM_OPENCONFIRM Topic=Peer
```

- We should see the routes of the other clusters show up

- For debugging purposes, the reflector also exports a route to 1.0.0.2/32

- That route will show up like this:
```
1.0.0.2 172.31.X.Y 255.255.255.255 UGH 0 0 0 eth0
```

- We should be able to ping the pods of other clusters!
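The debug route above can be picked out of the routing table with a one-liner. This sketch feeds `awk` a canned line (with an invented gateway address); on a real node, you would pipe the output of `route -n` instead:

```bash
# Sketch: extract the gateway of the 1.0.0.2 debug route.
# The input line is canned (the gateway is invented); on a node, use:
#   route -n | awk '$1 == "1.0.0.2" { print "reflector route via", $2 }'
echo "1.0.0.2         172.31.12.34    255.255.255.255 UGH   0      0        0 eth0" |
awk '$1 == "1.0.0.2" { print "reflector route via", $2 }'
# → reflector route via 172.31.12.34
```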

---

## If we wanted to do more ...

- kube-router can also export ClusterIP addresses

(by adding the flag `--advertise-cluster-ip`)

- They are exported individually (as /32)

- This would allow us to easily access other clusters' services

(without having to resolve the individual addresses of pods)

- Even better if it's combined with DNS integration

(to facilitate name → ClusterIP resolution)