
natOutgoing: true seems to not work with private: true #3408

Closed
abravalheri opened this issue Nov 13, 2023 · 12 comments

Comments

@abravalheri

Bug Report

The configuration natOutgoing: true does not seem to work together with private: true for Subnets.

If I create a subnet with both options set, the pods in that subnet do not seem to be able to reach external addresses (via NAT outgoing).

Expected Behavior

I would like to be able to combine natOutgoing: true with private: true, so that the subnets are isolated from each other internally while the pods can still reach addresses on the internet (e.g. for downloading datasets) via NAT, and external internet addresses cannot initiate any connection with a pod inside the cluster.

Actual Behavior

natOutgoing: true seems to be ignored when private: true is set.

Steps to Reproduce the Problem

  1. Create two isolated, non-overlapping subnets with natOutgoing and private set:
# nw1.yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: nw1
spec:
  protocol: IPv4
  default: false
  private: true
  natOutgoing: true
  cidrBlock: 10.18.1.0/28
$ kubectl apply -f nw1.yaml
# nw2.yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: nw2
spec:
  protocol: IPv4
  default: false
  private: true
  natOutgoing: true
  cidrBlock: 10.18.1.16/28
$ kubectl apply -f nw2.yaml
  2. Create two pods, one in each subnet:
# nw1-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nw1-pod
  annotations:
    ovn.kubernetes.io/logical_switch: nw1
spec:
  containers:
    - name: nw1-container
      image: "alpine:3.17"
      command: ["sleep"]
      args: ["infinity"]
$ kubectl apply -f nw1-pod.yaml
# nw2-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nw2-pod
  annotations:
    ovn.kubernetes.io/logical_switch: nw2
spec:
  containers:
    - name: nw2-container
      image: "alpine:3.17"
      command: ["sleep"]
      args: ["infinity"]
$ kubectl apply -f nw2-pod.yaml
  3. Now, exec into the pods and try to ping an address on the internet as well as the other pod:
$ kubectl exec -it nw1-pod -- ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
939: eth0@if940: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1400 qdisc noqueue state UP
    link/ether 00:00:00:5f:39:c4 brd ff:ff:ff:ff:ff:ff
    inet 10.18.1.2/28 brd 10.18.1.15 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::200:ff:fe5f:39c4/64 scope link
       valid_lft forever preferred_lft forever

$ kubectl exec -it nw1-pod -- ping -w 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1

$ kubectl exec -it nw2-pod -- ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
941: eth0@if942: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1400 qdisc noqueue state UP
    link/ether 00:00:00:f2:d5:6e brd ff:ff:ff:ff:ff:ff
    inet 10.18.1.18/28 brd 10.18.1.31 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::200:ff:fef2:d56e/64 scope link
       valid_lft forever preferred_lft forever

$ kubectl exec -it nw1-pod -- ping -w 3 10.18.1.18
PING 10.18.1.18 (10.18.1.18): 56 data bytes

--- 10.18.1.18 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
command terminated with exit code 1

We can see that nw1-pod can ping neither 8.8.8.8 nor nw2-pod.

Additional Info

  • Kubernetes version:

    Output of kubectl version:

    $ kubectl version
    Client Version: v1.28.2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.28.3

    Installed via microk8s (sudo snap install microk8s --classic --channel=1.28)

  • kube-ovn version:

    kubeovn/kube-ovn:v1.10.0   # image of kube-ovn-controller-...

    Installed via microk8s (microk8s enable kube-ovn)

  • operation-system/kernel version:

    Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release:
    Output of uname -r:

    $ awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release
    "Ubuntu 22.04.3 LTS"

/cc @Chrisys93 @JuanMaParraU

@oilbeater
Collaborator

@abravalheri, this is expected behavior. When private is set to true, only traffic whose source and destination both belong to the subnet CIDR is allowed. You need to set allowSubnets to allow specific traffic.
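
For illustration, a minimal sketch of what that could look like on the nw1 subnet from the reproduction above, assuming the destinations to allow are known in advance; the CIDRs listed under allowSubnets are placeholder values, not tested recommendations:

# nw1-allow.yaml (hypothetical variant of nw1.yaml above)
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: nw1
spec:
  protocol: IPv4
  default: false
  private: true
  natOutgoing: true
  cidrBlock: 10.18.1.0/28
  allowSubnets:
    - 8.8.8.8/32      # placeholder: a specific external destination to exempt from the private restriction
    - 10.18.1.16/28   # placeholder: optionally, the sibling subnet nw2
$ kubectl apply -f nw1-allow.yaml

This keeps the subnet private but exempts the listed CIDRs; it does not help with the "general internet" case, which is exactly the limitation discussed below.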

@abravalheri
Author

abravalheri commented Nov 13, 2023

Hi @oilbeater, thank you very much for the response. So is it correct to assume that it is not possible to let the pods download things from the "general internet", and that we instead need to know beforehand which CIDRs they need to access?

I noticed that the docs mention the acls parameter can be used for finer control than private: true. Do you know what the equivalent ACLs would be to get behaviour similar to private: true? Maybe by playing with the ACLs I can find a way of achieving this...

@oilbeater
Collaborator

ACLs can achieve this by manipulating the rules: use a higher-priority drop rule to block traffic from other subnets, followed by a low-priority rule that allows all other traffic.
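
For anyone landing here later, a rough sketch of how that might translate into the Subnet's acls parameter for nw1 from the reproduction above; the field names follow the acls parameter from the docs, but the directions, priorities, and OVN match expressions here are untested assumptions:

# nw1-acls.yaml (hypothetical: private: true replaced by explicit ACLs)
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: nw1
spec:
  protocol: IPv4
  default: false
  natOutgoing: true
  cidrBlock: 10.18.1.0/28
  acls:
    # higher-priority rule: drop traffic coming from the sibling subnet nw2
    - direction: to-lport
      priority: 1011
      match: "ip4.src == 10.18.1.16/28"
      action: drop
    # lower-priority rule: allow everything else, including replies to
    # outbound connections that leave the cluster via natOutgoing
    - direction: to-lport
      priority: 1001
      match: "ip4"
      action: allow-related
$ kubectl apply -f nw1-acls.yaml

In OVN, the numerically higher priority is evaluated first, so the drop rule for nw2 takes precedence over the catch-all allow.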

Contributor

Issues go stale after 60d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jan 21, 2024
@Chrisys93

I started working on a local update at the end of last week. We need to sort this out. Furthermore, our Kube-OVN continues to have stability problems.

Contributor

Issues go stale after 60d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

@abravalheri
Author

Would it be possible to keep this issue open as a "Documentation" request?

It would be nice if the docs contained an example of how to achieve this by manipulating the ACLs... I never managed to figure it out myself.

Contributor

Issues go stale after 60d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

@abravalheri
Author

abravalheri commented May 23, 2024

Would it be possible to keep this issue open as a "Documentation" request?

It would be nice if the docs contained an example of how to achieve this by manipulating the ACLs... I never managed to figure it out myself.

^^^

@bobz965
Collaborator

bobz965 commented May 24, 2024

@abravalheri could you please help by posting a PR to kubeovn/docs about this issue? Thanks!

@abravalheri
Author

abravalheri commented May 24, 2024

I can open an issue, but unfortunately I can't open a PR, because I don't know how to do it 😅.

kubeovn/docs#167.

Contributor

Issues go stale after 60d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jul 31, 2024