Hi there! I have tried to deploy the pod gateway and admission controller using your Helm chart with the following values:
```yaml
image:
  repository: ghcr.io/angelnu/pod-gateway
  # I am using dev version for testing - others should be using latest
  tag: v1.8.1

DNSPolicy: ClusterFirst

webhook:
  image:
    repository: ghcr.io/angelnu/gateway-admision-controller
    # Use dev version
    pullPolicy: Always
    tag: v3.9.0
  namespaceSelector:
    type: label
    label: routed-gateway
  gatewayDefault: true
  gatewayLabel: setGateway

addons:
  vpn:
    enabled: true
    type: gluetun
    gluetun:
      image:
        repository: docker.io/qmcgaw/gluetun
        tag: latest
    env:
      - name: VPN_SERVICE_PROVIDER
        value: custom
      - name: VPN_TYPE
        value: wireguard
      - name: VPN_INTERFACE
        value: wg0
      - name: FIREWALL
        value: "off"
      - name: DOT
        value: "off"
      - name: DNS_KEEP_NAMESERVER
        value: "on"
    envFrom:
      - secretRef:
          name: wireguard-config
    livenessProbe:
      exec:
        command:
          - sh
          - -c
          - if [ $(wget -q -O- https://ipinfo.io/country) == 'NL' ]; then exit 0; else exit $?; fi
      initialDelaySeconds: 30
      periodSeconds: 60
      failureThreshold: 3
    networkPolicy:
      enabled: true
      egress:
        - to:
            - ipBlock:
                cidr: 0.0.0.0/0
          ports:
            # VPN traffic
            - port: 51820
              protocol: UDP
        - to:
            - ipBlock:
                cidr: 10.0.0.0/8

settings:
  # -- If using a VPN, interface name created by it
  VPN_INTERFACE: wg0
  # -- Prevent non VPN traffic to leave the gateway
  VPN_BLOCK_OTHER_TRAFFIC: true
  # -- If VPN_BLOCK_OTHER_TRAFFIC is true, allow VPN traffic over this port
  VPN_TRAFFIC_PORT: 51820
  # -- Traffic to these IPs will be send through the K8S gateway
  VPN_LOCAL_CIDRS: "10.0.0.0/8 192.168.0.0/16"

# -- settings to expose ports, usually through a VPN provider.
# NOTE: if you change it you will need to manually restart the gateway POD
publicPorts:
  - hostname: transmission-client.media-center
    IP: 10
    ports:
      - type: udp
        port: 51413
      - type: tcp
        port: 51413
```
So far so good: the pod gateway and admission controller are up and running in my vpn-gateway namespace, with the WireGuard VPN client working on the pod gateway. Now I'm trying to actually route a pod in my routed media-center namespace.
These are the logs of the gateway-init container of my transmission-client pod:
```
❯ kubectl logs transmission-client-7bbc685b44-xkk6h -n media-center -c gateway-init
...
++ dig +short vpn-gateway-pod-gateway.vpn-gateway.svc.cluster.local @10.43.0.10
+ GATEWAY_IP=';; connection timed out; no servers could be reached'
```
It looks like the init container is not able to resolve vpn-gateway-pod-gateway.vpn-gateway.svc.cluster.local, even though cluster-local DNS works fine elsewhere: I tried running the same pod in a non-routed namespace, exec'd into it, and nslookup resolved the name without problems.
Found the issue: the Service IP range of my cluster is 10.43.0.0/16 and my Pod IP range is 10.42.0.0/16, but only the latter is explicitly mentioned in my pod's routing table:
```
$ ip route
default via 10.42.1.1 dev eth0
10.42.0.0/16 via 10.42.1.1 dev eth0
10.42.1.0/24 dev eth0 proto kernel scope link src 10.42.1.24
```
It means that when the client_init.sh script deletes the existing default gateway, the pod is no longer able to reach the DNS server at 10.43.0.10.
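To make the failure mode concrete, here is a small sketch (pure shell; CIDRs taken from the route table above, the helper name is made up) that mimics the kernel's prefix match for the two /16 ranges involved:

```shell
#!/bin/sh
# Sketch: does an IP fall inside a /16? For a /16 this reduces to comparing
# the first two octets (CIDRs are the ones from my cluster above).
in_slash16() {  # usage: in_slash16 IP NETWORK
  [ "$(echo "$1" | cut -d. -f1-2)" = "$(echo "$2" | cut -d. -f1-2)" ]
}

# The Pod CIDR route (10.42.0.0/16) covers the pod's own address...
in_slash16 10.42.1.24 10.42.0.0 && echo "10.42.1.24 matches 10.42.0.0/16"
# ...but the cluster DNS Service IP was only reachable via the default route.
in_slash16 10.43.0.10 10.42.0.0 || echo "10.43.0.10 matches no remaining route"
```

So once the default route is gone, every packet to 10.43.0.10 has nowhere to go, which is exactly the dig timeout above.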
To work around that, I manually configured my routed deployment as follows, with a gateway-preinit initContainer that adds a route for 10.43.0.0/16.
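The actual manifest didn't survive the copy here, but a minimal sketch of such a preinit container could look like this (the container name, image, and the hard-coded 10.43.0.0/16 Service CIDR are assumptions for my K3s setup; the container needs NET_ADMIN to edit routes):

```yaml
initContainers:
  - name: gateway-preinit
    image: ghcr.io/angelnu/pod-gateway  # any image shipping iproute2 works
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
    command:
      - sh
      - -c
      # Add the Service CIDR via the pod's current default gateway,
      # before client_init.sh deletes that default route.
      - ip route add 10.43.0.0/16 via $(ip route | awk '/^default/ {print $3}') dev eth0
```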
That way, after the default gateway is deleted, the pod is still able to reach the Service IP range and therefore the K8s internal DNS server.
@angelnu I'm sure there is a clean fix by slightly updating the client_init.sh script, e.g. by adding that route (ip route add ${K8S_DNS_IP}/16 via ${K8S_DEFAULT_GW} dev eth0) before removing the default GW; happy to discuss/contribute.
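A dry-run sketch of what that change could look like (variable names assumed from the suggestion above, values hard-coded from my cluster; one caveat: the kernel may reject a route destination with host bits set, such as 10.43.0.10/16, so the /16 network address is derived first; the /16 mask itself is an assumption that happens to hold for default K3s):

```shell
#!/bin/sh
# Sketch of the proposed client_init.sh change; echoed as a dry run.
K8S_DNS_IP=10.43.0.10                      # cluster DNS Service IP
SERVICE_CIDR="${K8S_DNS_IP%.*.*}.0.0/16"   # network address: 10.43.0.0/16
K8S_DEFAULT_GW=10.42.1.1                   # taken from 'ip route' above
echo "ip route add $SERVICE_CIDR via $K8S_DEFAULT_GW dev eth0"
echo "ip route del default"                # only now drop the default route
```

A chart-level setting (e.g. an explicit Service CIDR value) would be more robust than deriving the range from the DNS IP, since not every cluster uses a /16 Service range.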
I'm not sure if it's due to my K8s topology, which is pretty standard: K3s running K8s v1.26 with the default Flannel CNI.
Any idea what can cause this behavior?
What did you expect to happen:
I was expecting the routed pod gateway to be updated successfully with the pod starting up.
Anything else you would like to add:
Any help appreciated, there is probably something I'm missing here, happy to provide more information to debug this, thanks :)