The Services and Networking topic in the CKAD (Certified Kubernetes Application Developer) exam focuses on understanding and managing connectivity within Kubernetes clusters. This includes creating and managing network policies, exposing applications using services, and configuring ingress rules for external access.
- Demonstrate basic understanding of NetworkPolicies
- Provide and troubleshoot access to applications via services
- Use Ingress rules to expose applications
Scenario:
- Create a NetworkPolicy named deny-all that denies all ingress and egress traffic for Pods in the default namespace.
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
- Save the YAML file and apply it:
kubectl apply -f deny-all.yaml
- Verify the policy:
kubectl describe networkpolicy deny-all
Scenario:
- Create a NetworkPolicy named allow-frontend to allow traffic to Pods labeled app=backend only from Pods labeled role=frontend.
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
- Save the YAML file and apply it:
kubectl apply -f allow-frontend.yaml
- Test connectivity from a frontend Pod to a backend Pod.
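The connectivity test can be sketched with throwaway Pods; the backend Pod IP is a placeholder you would look up with `kubectl get pods -o wide`, and the test assumes the backend Pod actually serves HTTP:

```shell
# A Pod carrying role=frontend should reach the backend (allowed by the policy).
kubectl run test-frontend --rm -it --image=busybox --labels=role=frontend \
  -- wget -qO- -T 2 http://<backend-pod-ip>

# A Pod without the role=frontend label should time out (denied).
kubectl run test-other --rm -it --image=busybox \
  -- wget -qO- -T 2 http://<backend-pod-ip>
```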
Scenario:
- Create a NetworkPolicy named restrict-egress to restrict Pods labeled app=web to communicating only with the IP range 192.168.1.0/24.
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.1.0/24
Note: policyTypes is set explicitly to Egress. Without it, Ingress is always included by default, and since the policy has no ingress rules, all ingress traffic to the selected Pods would also be denied as a side effect.
- Save the YAML file and apply it:
kubectl apply -f restrict-egress.yaml
- Verify egress rules:
kubectl describe networkpolicy restrict-egress
Scenario:
- Create a NetworkPolicy named allow-port to allow traffic to Pods labeled app=database only on port 3306 (MySQL).
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: database
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 3306
- Save the YAML file and apply it:
kubectl apply -f allow-port.yaml
- Test connectivity to the database Pod on port 3306.
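A sketch of the port test using a scratch Pod; the database Pod IP is a placeholder:

```shell
# Look up the database Pod IP first.
kubectl get pods -l app=database -o wide

# Port 3306 should be reachable; other ports on the same Pod should not be.
kubectl run test-client --rm -it --image=busybox \
  -- nc -zv -w 2 <database-pod-ip> 3306
```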
Scenario:
- Create a NetworkPolicy named combined-policy for Pods labeled app=backend that allows ingress traffic from role=frontend Pods and egress traffic to 192.168.2.0/24.
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: combined-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.2.0/24
- Save the YAML file and apply it:
kubectl apply -f combined-policy.yaml
- Test ingress and egress connectivity for the backend Pods.
Scenario:
- Deploy Pods in a namespace without any NetworkPolicy.
- Verify that all traffic is allowed by default.
Details
- Deploy two Pods:
kubectl run pod1 --image=busybox --command -- sleep 3600
kubectl run pod2 --image=busybox --command -- sleep 3600
- Test connectivity (Pod names are not DNS-resolvable, so look up the Pod IP and ping that):
kubectl get pod pod2 -o wide
kubectl exec pod1 -- ping -c 3 <pod2-ip>
Scenario:
- Create a NetworkPolicy named deny-egress that blocks all egress traffic for Pods in the prod namespace.
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress: []
- Save the YAML file and apply it:
kubectl apply -f deny-egress.yaml
- Test egress connectivity from any Pod in the prod namespace.
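A quick check; example.com stands in for any external endpoint:

```shell
# With deny-egress in place, outbound requests from prod Pods should time out.
kubectl run egress-test -n prod --rm -it --image=busybox \
  -- wget -qO- -T 2 http://example.com
```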
Scenario:
- Create a NetworkPolicy named namespace-egress that allows frontend Pods in the dev namespace to communicate with Pods in the prod namespace.
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-egress
  namespace: dev
spec:
  podSelector:
    matchLabels:
      app: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          environment: prod
- Save the YAML file and apply it:
kubectl apply -f namespace-egress.yaml
- Test connectivity between the dev and prod namespaces.
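One detail is easy to miss: namespaceSelector matches labels on the Namespace object itself, so prod must actually carry the environment=prod label. A sketch, with placeholder Pod names and IPs:

```shell
# Label the prod namespace so the namespaceSelector can match it.
kubectl label namespace prod environment=prod

# Then test from a frontend Pod in dev against a Pod IP in prod.
kubectl exec -n dev <frontend-pod> -- wget -qO- -T 2 http://<prod-pod-ip>
```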
Scenario:
- Apply a default deny-all NetworkPolicy to isolate all Pods in the staging namespace.
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
- Save the YAML file and apply it:
kubectl apply -f isolate-namespace.yaml
- Verify that traffic is denied.
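A quick verification, assuming a target Pod IP from another namespace as a placeholder:

```shell
# Traffic to or from staging Pods should now time out.
kubectl run isolation-test -n staging --rm -it --image=busybox \
  -- wget -qO- -T 2 http://<any-pod-ip>
```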
Scenario:
- Allow Pods labeled app=web to access DNS servers on port 53.
Details
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 8.8.8.8/32
    ports:
    - protocol: UDP
      port: 53
- Save the YAML file and apply it:
kubectl apply -f allow-dns.yaml
- Verify DNS access for the web Pods.
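A sketch of the verification (Pod name is a placeholder). Note that this policy only permits egress to 8.8.8.8, so lookups through the in-cluster DNS service (kube-dns/CoreDNS) will still fail unless that egress is allowed as well:

```shell
# Query the allowed external resolver directly - this should succeed.
kubectl exec <web-pod> -- nslookup kubernetes.io 8.8.8.8

# A lookup via the default in-cluster resolver may time out, because the
# policy does not allow egress to the kube-dns Service.
kubectl exec <web-pod> -- nslookup kubernetes.io
```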
Scenario:
- Create a Deployment named my-app with 3 replicas.
- Expose it internally using a ClusterIP Service named my-app-service.
Details
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: busybox
        # busybox httpd listens on 8080 so the Service targetPort has a listener
        command: ["/bin/sh", "-c", "echo hello > index.html && httpd -f -p 8080"]
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
- Save the YAML file and apply it:
kubectl apply -f my-app-service.yaml
- Test the Service (busybox provides wget, not curl):
kubectl exec -it <pod-name> -- wget -qO- http://my-app-service
Scenario:
- Create a Deployment named external-app.
- Expose it externally using a NodePort Service on port 30007.
Details
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-app
  labels:
    app: external-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: external-app
  template:
    metadata:
      labels:
        app: external-app
    spec:
      containers:
      - name: app
        image: busybox
        # busybox httpd listens on 8080 so the Service targetPort has a listener
        command: ["/bin/sh", "-c", "echo hello > index.html && httpd -f -p 8080"]
---
apiVersion: v1
kind: Service
metadata:
  name: external-app-service
spec:
  selector:
    app: external-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30007
  type: NodePort
- Save the YAML file and apply it:
kubectl apply -f external-app-service.yaml
- Test the Service externally:
curl <node-ip>:30007
Scenario:
- Deploy an application named loadbalanced-app.
- Expose it using a LoadBalancer Service.
Details
apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadbalanced-app
  labels:
    app: loadbalanced-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: loadbalanced-app
  template:
    metadata:
      labels:
        app: loadbalanced-app
    spec:
      containers:
      - name: app
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalanced-service
spec:
  selector:
    app: loadbalanced-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
- Save the YAML file and apply it:
kubectl apply -f loadbalanced-service.yaml
- Verify the external IP of the LoadBalancer:
kubectl get svc loadbalanced-service
- Test access to the application using the external IP.
Scenario:
- A Service named troubleshoot-service is not forwarding traffic to the backend Pods.
- Investigate and resolve the issue.
Details
- Verify Service configuration:
kubectl describe service troubleshoot-service
- Check the endpoint mappings:
kubectl get endpoints troubleshoot-service
- Ensure the backend Pods are running and labeled correctly:
kubectl get pods -l app=<label>
- Correct any misconfigurations and test again.
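The most common cause is a selector/label mismatch; a sketch of checking and fixing it (names and label values are placeholders):

```shell
# Compare the Service selector with the Pod labels.
kubectl get svc troubleshoot-service -o jsonpath='{.spec.selector}'
kubectl get pods --show-labels

# If they differ, relabel the Pods (or edit the Service selector),
# then confirm that endpoints appear.
kubectl label pod <pod-name> app=<expected-value> --overwrite
kubectl get endpoints troubleshoot-service
```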
Scenario:
- Create an ExternalName Service named external-service to alias example.com.
Details
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: example.com
- Save the YAML file and apply it:
kubectl apply -f external-service.yaml
- Verify the alias resolves to example.com (ExternalName is implemented as a DNS CNAME record):
kubectl exec -it <pod-name> -- nslookup external-service
Scenario:
- Create a StatefulSet named stateful-app.
- Use a headless Service to provide direct access to each Pod.
Details
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: stateful-app
spec:
  serviceName: "stateful-service"
  replicas: 3
  selector:
    matchLabels:
      app: stateful
  template:
    metadata:
      labels:
        app: stateful
    spec:
      containers:
      - name: app
        image: busybox
        # busybox httpd listens on 8080 so the Service targetPort has a listener
        command: ["/bin/sh", "-c", "echo hello > index.html && httpd -f -p 8080"]
---
apiVersion: v1
kind: Service
metadata:
  name: stateful-service
spec:
  clusterIP: None
  selector:
    app: stateful
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
- Save the YAML file and apply it:
kubectl apply -f stateful-service.yaml
- Verify the DNS entries for the headless Service:
kubectl exec -it <pod-name> -- nslookup stateful-service
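Through the headless Service, each StatefulSet replica also gets a stable per-Pod DNS record of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local:

```shell
# Resolve individual replicas by their stable names.
kubectl exec -it stateful-app-0 -- nslookup stateful-app-0.stateful-service
kubectl exec -it stateful-app-0 -- nslookup stateful-app-1.stateful-service
```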
Scenario:
- Create a Service named session-service with session affinity enabled.
Details
apiVersion: v1
kind: Service
metadata:
  name: session-service
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
- Save the YAML file and apply it:
kubectl apply -f session-service.yaml
- Test session affinity:
kubectl exec -it <pod-name> -- curl session-service
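A single request cannot demonstrate affinity; repeated requests from the same client should keep hitting the same backend. This sketch assumes the client Pod image provides wget and that each backend returns something identifying itself:

```shell
# With sessionAffinity: ClientIP, all five responses should come from
# the same backend Pod.
for i in 1 2 3 4 5; do
  kubectl exec <pod-name> -- wget -qO- http://session-service
done
```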
Scenario:
- Create a Deployment named health-app.
- Expose it using a Service; a readinessProbe on the Pods acts as the health check that controls which Pods receive traffic.
Details
apiVersion: apps/v1
kind: Deployment
metadata:
  name: health-app
  labels:
    app: health-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: health-app
  template:
    metadata:
      labels:
        app: health-app
    spec:
      containers:
      - name: app
        image: nginx
        readinessProbe:
          httpGet:
            path: /
            port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: health-service
spec:
  selector:
    app: health-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
- Save the YAML file and apply it:
kubectl apply -f health-service.yaml
- Verify Pod readiness:
kubectl get pods -l app=health-app
Scenario:
- Deploy an application named simple-app.
- Expose it using an Ingress resource with the hostname simple-app.local.
Details
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-app
  labels:
    app: simple-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simple-app
  template:
    metadata:
      labels:
        app: simple-app
    spec:
      containers:
      - name: app
        image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: simple-app-service
spec:
  selector:
    app: simple-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-app-ingress
spec:
  rules:
  - host: simple-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: simple-app-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f simple-app-ingress.yaml
- Test the Ingress:
curl -H "Host: simple-app.local" <ingress-controller-ip>
Scenario:
- Secure an application exposed through Ingress with TLS.
- Use a Secret named tls-secret for the certificate and key.
Details
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-cert>
  tls.key: <base64-encoded-key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-app-ingress
spec:
  tls:
  - hosts:
    - tls-app.local
    secretName: tls-secret
  rules:
  - host: tls-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tls-app-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f tls-app-ingress.yaml
- Test HTTPS access (curl's --resolve takes host:port:ip):
curl -k https://tls-app.local --resolve tls-app.local:443:<ingress-controller-ip>
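Rather than base64-encoding the certificate by hand in the manifest, the Secret can be created imperatively; a sketch using a self-signed certificate for tls-app.local:

```shell
# Create a self-signed cert/key pair for the test hostname.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=tls-app.local"

# Create the TLS Secret from the files; kubectl handles the encoding.
kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key
```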
Scenario:
- Expose two applications (app1 and app2) through a single Ingress.
- Use host-based rules to route traffic.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress
spec:
  rules:
  - host: app1.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
  - host: app2.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f multi-host-ingress.yaml
- Test each application:
curl -H "Host: app1.local" <ingress-controller-ip>
curl -H "Host: app2.local" <ingress-controller-ip>
Scenario:
- Route /app1 to app1-service and /app2 to app2-service using an Ingress resource.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-routing-ingress
spec:
  rules:
  - host: path-app.local
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f path-routing-ingress.yaml
- Test each path:
curl -H "Host: path-app.local" <ingress-controller-ip>/app1
curl -H "Host: path-app.local" <ingress-controller-ip>/app2
Scenario:
- An Ingress resource named debug-ingress is not routing traffic to the backend.
- Debug and resolve the issue.
Details
- Verify Ingress configuration:
kubectl describe ingress debug-ingress
- Check Service endpoints:
kubectl get endpoints <service-name>
- Ensure backend Pods are running and labeled correctly:
kubectl get pods -l <label-selector>
- Correct any misconfigurations and test again.
Scenario:
- Configure an Ingress resource with a default backend to handle unmatched requests.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend-ingress
spec:
  defaultBackend:
    service:
      name: default-service
      port:
        number: 80
  rules:
  - host: example.local
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f default-backend-ingress.yaml
- Test default backend routing:
curl -H "Host: example.local" <ingress-controller-ip>/unmatched-path
Scenario:
- Use annotations to rewrite /old-path to /new-path for an application.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /new-path
spec:
  rules:
  - host: rewrite-app.local
    http:
      paths:
      - path: /old-path
        pathType: Prefix
        backend:
          service:
            name: rewrite-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f rewrite-ingress.yaml
- Test the rewrite rule:
curl -H "Host: rewrite-app.local" <ingress-controller-ip>/old-path
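Note that with a plain rewrite-target, every request under /old-path is rewritten to exactly /new-path, dropping any suffix. If the suffix should be preserved, ingress-nginx supports regex capture groups; a sketch of the relevant parts, assuming a reasonably recent ingress-nginx version:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 carries whatever followed /old-path in the request URI
    nginx.ingress.kubernetes.io/rewrite-target: /new-path/$2
spec:
  rules:
  - host: rewrite-app.local
    http:
      paths:
      - path: /old-path(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: rewrite-service
            port:
              number: 80
```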
Scenario:
- Add CORS headers to an Ingress resource to allow cross-origin requests.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cors-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization"
spec:
  rules:
  - host: cors-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cors-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f cors-ingress.yaml
- Test CORS configuration:
curl -H "Origin: http://example.com" -H "Access-Control-Request-Method: GET" -i http://cors-app.local
Scenario:
- Host two applications (tenant1-app and tenant2-app) on the same Ingress with different subdomains: tenant1.example.com and tenant2.example.com.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-tenant-ingress
spec:
  rules:
  - host: tenant1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tenant1-service
            port:
              number: 80
  - host: tenant2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tenant2-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f multi-tenant-ingress.yaml
- Test each subdomain:
curl -H "Host: tenant1.example.com" <ingress-controller-ip>
curl -H "Host: tenant2.example.com" <ingress-controller-ip>
Scenario:
- Configure rate limiting for an application using Ingress annotations.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rate-limited-ingress
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "5"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "2"
spec:
  rules:
  - host: rate-limited-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rate-limited-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f rate-limited-ingress.yaml
- Test rate limiting:
ab -n 100 -c 10 -H "Host: rate-limited-app.local" http://<ingress-controller-ip>/
Scenario:
- Split traffic between two versions (v1 and v2) of an application using Ingress annotations.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stable-ingress
spec:
  rules:
  - host: canary-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stable-service
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
  rules:
  - host: canary-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary-service
            port:
              number: 80
Note: only the canary Ingress carries the canary annotations; ingress-nginx pairs a single canary Ingress with a non-canary (stable) Ingress for the same host and path.
- Save and apply the YAML configuration:
kubectl apply -f canary-ingress.yaml
- Test traffic distribution:
curl -H "Host: canary-app.local" <ingress-controller-ip>
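A single curl proves little; sending a batch and counting distinct responses shows the split. This assumes the stable and canary backends return distinguishable bodies:

```shell
# Roughly 20% of responses should come from the canary backend.
for i in $(seq 1 50); do
  curl -s -H "Host: canary-app.local" http://<ingress-controller-ip>/
done | sort | uniq -c
```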
Scenario:
- Use Ingress annotations to enforce HTTPS by redirecting HTTP traffic.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-redirect-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - https-app.local
    secretName: tls-secret
  rules:
  - host: https-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: https-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f https-redirect-ingress.yaml
- Test HTTPS redirection (curl's --resolve takes host:port:ip; expect a redirect to https://):
curl -i http://https-app.local --resolve https-app.local:80:<ingress-controller-ip>
Scenario:
- Configure a single Ingress to handle requests for multiple subdomains using a wildcard hostname.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-ingress
spec:
  rules:
  - host: "*.example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wildcard-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f wildcard-ingress.yaml
- Test with subdomains:
curl -H "Host: app1.example.com" <ingress-controller-ip>
curl -H "Host: app2.example.com" <ingress-controller-ip>
Scenario:
- Set timeout for backend responses in an Ingress resource.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: timeout-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
spec:
  rules:
  - host: timeout-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: timeout-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f timeout-ingress.yaml
- Test timeout behavior:
curl -H "Host: timeout-app.local" <ingress-controller-ip>
Scenario:
- Use Ingress annotations to display custom error pages for HTTP 404 errors.
Details
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: error-page-ingress
  annotations:
    nginx.ingress.kubernetes.io/custom-http-errors: "404"
    nginx.ingress.kubernetes.io/server-snippet: |
      error_page 404 /custom_404.html;
      location = /custom_404.html {
        internal;
        root /usr/share/nginx/html;
      }
spec:
  rules:
  - host: error-app.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: error-service
            port:
              number: 80
- Save and apply the YAML configuration:
kubectl apply -f error-page-ingress.yaml
- Test the custom error page:
curl -H "Host: error-app.local" <ingress-controller-ip>/nonexistent-path
- NetworkPolicies:
  - NetworkPolicies control pod-level network communication and are namespace-scoped resources.
  - Understand the default behavior (all traffic is allowed when no policy selects a Pod) and how to create rules that allow or deny ingress and egress traffic.
- Services:
  - Understand the Kubernetes Service types (ClusterIP, NodePort, LoadBalancer, and ExternalName) and their use cases.
  - Be familiar with kubectl commands for troubleshooting Services, such as checking endpoints with kubectl get endpoints and inspecting configuration with kubectl describe service.
- Ingress:
  - Ingress rules provide HTTP and HTTPS routes to Services.
  - Know how to configure Ingress objects with different paths, hostnames, and TLS configurations.