- Node Networking Concepts
- Pod Networking Concepts
- Service Networking Concepts
- Container Network Interface
- Configure and Deploy Load Balancers
- Configure Ingress Rules
- Configure Cluster DNS
-- Manage and display the state of all network interfaces
$ ip link
$ ip link set em1 down
$ ip link set em1 mtu 9000
$ ip link add veth-red type veth peer name veth-bridge
-- Display IP Addresses and property information
$ ip addr
$ ip addr add 192.168.1.1/24 dev em1
-- Display and alter the routing table
$ ip route
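-- add a static route through a gateway (example addresses, not from a real setup)
$ ip route add 192.168.2.0/24 via 192.168.1.1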
-- view the ARP table (IP-to-MAC mappings of hosts on the network)
$ ip neigh
-- Manage network namespace
$ ip netns
$ ip netns add blue
-- run a command inside the red namespace
$ ip netns exec red ip link
Create network namespaces
- Create Network Namespace
-- create two network namespaces, red and blue
$ ip netns add red
$ ip netns add blue
- Create Bridge Network/Interface
-- create bridge network
$ ip link add v-net-0 type bridge
- Create VETH Pairs (Pipe/Virtual Cable)
-- create a virtual cable (veth pair) for each namespace
$ ip link add veth-red type veth peer name veth-red-br
$ ip link add veth-blue type veth peer name veth-blue-br
- Attach vEth to Namespace
-- attach one end of each cable to its namespace
$ ip link set veth-red netns red
$ ip link set veth-blue netns blue
- Attach Other vEth to Bridge
-- attach the other end of each cable to the bridge
$ ip link set veth-red-br master v-net-0
$ ip link set veth-blue-br master v-net-0
- Assign IP Address
-- assign IP addresses inside the red and blue namespaces
$ ip -n red addr add 192.168.15.1/24 dev veth-red
$ ip -n blue addr add 192.168.15.2/24 dev veth-blue
- Bring the Interface UP
-- bring the namespace interfaces and the bridge up
$ ip -n red link set veth-red up
$ ip -n blue link set veth-blue up
$ ip link set dev v-net-0 up
- Enable NAT-IP Masquerade
-- enable NAT (IP masquerade) for traffic leaving the 192.168.15.0/24 network
$ iptables -t nat -A POSTROUTING -s 192.168.15.0/24 -j MASQUERADE
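For the MASQUERADE rule to have traffic to translate, the bridge usually also needs an IP on the host and each namespace a default route through it; a minimal sketch (the bridge IP 192.168.15.5 is an assumption, not from the steps above):
-- give the bridge an IP and point the namespaces' default route at it
$ ip addr add 192.168.15.5/24 dev v-net-0
$ ip netns exec red ip route add default via 192.168.15.5
$ ip netns exec blue ip route add default via 192.168.15.5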
For node networking:
- Each node must have an interface
- Each node must have a MAC address
Name | Ports |
---|---|
kube-apiserver | 6443 |
kube-scheduler | 10251 |
kube-controller-manager | 10252 |
etcd | 2379,2380 |
kubelet | 10250 |
worker nodes (NodePort services) | 30000-32767 |
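A quick way to verify these ports on a node (a sketch, assuming netstat from net-tools is installed; ss works similarly):
-- list listening TCP ports of kubernetes components
$ netstat -nplt | grep kube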
Pod Networking Model
- Every POD should have an IP address
- Every POD should be able to communicate with every other POD in the same node.
- Every POD should be able to communicate with every other POD on other nodes without NAT.
To make a POD accessible, create a service for that POD. A service effectively creates a forwarding rule on each node:
IP Address | Forward To |
---|---|
192.168.13.178 | 10.244.1.2 |
- ClusterIP:
To make a POD accessible to all PODs within the cluster, create a service of type ClusterIP. Create a ClusterIP from a file cluster-ip.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
-- create a service from a yaml definition file
$ kubectl create -f cluster-ip.yaml
-- create a ClusterIP service for a pod nginx
$ kubectl expose pod nginx --name nginx-service --type=ClusterIP --port=80 --target-port=80 --protocol=TCP
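-- verify the service and the endpoint it forwards to
$ kubectl get service nginx-service
$ kubectl describe service nginx-service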
- NodePort:
To make a POD accessible from outside the cluster, create a service of type NodePort.
Create a NodePort from a file nodeport-ip.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 32212
-- create a service from a yaml definition file
$ kubectl create -f nodeport-ip.yaml
-- create a NodePort service for a pod nginx
$ kubectl expose pod nginx --name nginx-service --type=NodePort --port=80 --target-port=80 --protocol=TCP
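-- find the assigned nodePort, then reach the pod from outside the cluster (placeholders below, substitute real values)
$ kubectl get service nginx-service
$ curl http://[node_ip]:[node_port]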
- LoadBalancer:
To expose a POD outside the cluster through a cloud provider's load balancer, create a service of type LoadBalancer; it builds on top of NodePort.
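-- create a LoadBalancer service for a pod nginx (an external IP is only provisioned on supported cloud providers; elsewhere it stays pending)
$ kubectl expose pod nginx --name nginx-service --type=LoadBalancer --port=80 --target-port=80 --protocol=TCP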
- Kube-proxy runs in one of the following proxy modes:
- iptables
- ipvs
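On kubeadm-built clusters the proxy mode is usually set in the kube-proxy ConfigMap (an assumption about the setup; an empty value falls back to the iptables default):
-- check the configured kube-proxy mode
$ kubectl describe configmap kube-proxy -n kube-system | grep mode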
-- view iptable rules
$ sudo iptables-save | grep KUBE | grep nginx
-- get iptables rules created by a service named db-service
$ iptables -L -t nat | grep db-service
-- get details of kube-proxy in logs
$ cat /var/log/kube-proxy.log
- For each service that defines a selector, an Endpoints object is created automatically.
- If a service does not define a selector, the Endpoints object must be created manually (see the sketch after the links below).
- When a service is created, kube-proxy updates the iptables rules on each node.
- https://kubernetes.io/docs/concepts/services-networking/service/
- https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
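A minimal sketch of a selector-less service with a manually created Endpoints object (the name external-db and the IP are illustrative, not from this chapter):
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db   # must match the Service name
subsets:
  - addresses:
      - ip: 192.168.5.10
    ports:
      - port: 3306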
A CNI plugin runs on top of the existing node network and builds an overlay (tunnel) between nodes so PODs on different nodes can reach each other.
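-- CNI plugin binaries and the config kubelet uses to pick a plugin (the usual default paths; configurable via kubelet flags)
$ ls /opt/cni/bin
$ ls /etc/cni/net.d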
Ingress is a layer 7 load balancer built into Kubernetes that can also handle SSL termination. An Ingress still needs a service (NodePort or LoadBalancer) to be reachable from the outside world.
Two major components make up Ingress:
1. Ingress Controller
2. Ingress Resources
An Ingress Controller is reverse-proxy software such as nginx, haproxy, or traefik. Kubernetes does not ship an Ingress Controller by default; it has to be deployed and configured manually.
Supported ingress controller software packages include:
- GCP HTTP(S) Load Balancer (GCE)
- NGINX
- Create ingress controller deployment
Create a Kubernetes Deployment using the nginx-ingress-controller image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
- Create a ConfigMap object for nginx configuration values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
- Create a service to expose the ingress controller and receive traffic.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
- Create a ServiceAccount with the proper auth permissions and role bindings.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
- Create the auth objects (Roles/ClusterRoles and the corresponding bindings); a sketch follows.
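A minimal sketch of a Role and RoleBinding for the ServiceAccount above (the role name, rules, and namespace are illustrative; the real nginx ingress controller needs a broader rule set, see its documentation):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
rules:
  - apiGroups: [""]
    resources: ["configmaps", "pods", "secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: default   # namespace where the ServiceAccount was created (assumption)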
An Ingress resource is a set of rules that direct incoming traffic to the appropriate backend service.
Ingress resources are created with a Kubernetes definition file, e.g. ingress-wear.yaml.
Traffic is routed based on the following criteria:
1. Path (URL route)
2. Domain name (host)
- Create ingress resources: create a resource of kind Ingress.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: online.com
      http:
        paths:
          - path: /watch
            pathType: Prefix
            backend:
              serviceName: test
              servicePort: 80
          - path: /wear
            pathType: Prefix
            backend:
              serviceName: test
              servicePort: 80
-- list all ingress resources
$ kubectl get ingress
-- details of ingress resources
$ kubectl describe ingress ingress-resources
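On newer kubectl versions (v1.19+) an Ingress can also be created imperatively; the host and service names below mirror the example above:
-- create ingress rules imperatively
$ kubectl create ingress test-ingress --rule="online.com/watch=test:80" --rule="online.com/wear=test:80"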
Fully qualified domain names (FQDN) inside the cluster are built from the hostname, namespace, type, and root:
Hostname | Namespace | Type | Root | IP Address |
---|---|---|---|---|
web-service | apps | svc | cluster.local | 10.10.56.20 |
10-12-30-20 | apps | pod | cluster.local | 10.12.30.20 |
-- view CoreDNS pods
$ kubectl get pods -n kube-system
-- view CoreDNS deployments
$ kubectl get deploy -n kube-system
-- view CoreDNS services
$ kubectl get services -n kube-system
-- view pod's resolv.conf file
$ kubectl exec -it nginx -- cat /etc/resolv.conf
-- look up the kubernetes serviceDNS
$ kubectl exec -it nginx -- nslookup kubernetes
-- look up the kubernetes pod DNS
$ kubectl exec -it busybox -- nslookup 10-244-1-2.default.pod.cluster.local
-- get the logs for CoreDNS errors
$ kubectl logs [core_dns_pods]
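-- view the CoreDNS configuration (Corefile), kept in a ConfigMap named coredns in kube-system on standard installs
$ kubectl describe configmap coredns -n kube-system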
- https://kubernetes.io/blog/2018/07/10/coredns-ga-for-kubernetes-cluster-dns/
- https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/