Help making it work with colima #16

Open
TiagoJacobs opened this issue Aug 24, 2022 · 8 comments

@TiagoJacobs

Hello!

I am using colima to run x86_64 Docker containers on an M1 Mac (this is required for some components of our project).

As an example, I ran these commands on my M1:

brew install docker
brew install colima
colima start --arch x86_64 --cpu 4 --memory 6 --disk 20 --network-address --layer=true

After this is done, I can create Docker containers using the regular docker commands.
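
For instance, a quick sanity check (a sketch, assuming the x86_64 VM started above):

docker run --rm alpine uname -m
# should print x86_64, confirming containers run under the emulated architecture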

To log in to the colima VM, we can use:

colima ssh

I've checked, and the colima kernel has WireGuard support:

root@colima:~# ip link add dev wg0 type wireguard
root@colima:~# ip addr | grep wg0
16: wg0: <POINTOPOINT,NOARP> mtu 1420 qdisc noop state DOWN group default qlen 1000

From this point, what would be the best way to plug your tool into this environment?

@gregnr
Member

gregnr commented Apr 14, 2023

Hey @TiagoJacobs! Sorry for the delay. Out of curiosity, any chance docker-mac-net-connect just worked as-is with the above setup? Assuming you were able to run regular docker commands, that would usually mean you're connecting over the standard /var/run/docker.sock unix socket, which docker-mac-net-connect also uses. Otherwise, if colima sets this up somewhere else, we will just need docker-mac-net-connect to connect via that endpoint instead.

Beyond that, I can't think of any reason docker-mac-net-connect wouldn't just work as-is. In fact, this has been successfully tested with Rancher Desktop which exposes Docker via the standard /var/run/docker.sock socket (when Docker is enabled).
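
One quick way to confirm which endpoint your docker CLI is actually talking to (a sketch; colima typically registers its own docker context):

docker context ls
docker context inspect --format '{{.Endpoints.docker.Host}}'
# if this prints unix:///var/run/docker.sock, docker-mac-net-connect should reach the same daemon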

Let me know!

@night0wl

I have been trying to get this to work with both Colima and Rancher Desktop in order to replace Docker Desktop, but I cannot get it to work with either. For Colima I had to manually symlink /var/run/docker.sock to be able to run Docker commands; Rancher Desktop does this automatically.
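
For reference, the symlink looked roughly like this (the socket path is from my setup and may vary by colima version and profile):

sudo ln -sf ~/.colima/default/docker.sock /var/run/docker.sock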

In both cases, when I run docker-mac-net-connect I see the following in the log:

DEBUG: (utun3) 2023/04/21 10:50:07 Interface utun3 created
DEBUG: (utun3) 2023/04/21 10:50:07 Wireguard server listening
DEBUG: (utun3) 2023/04/21 10:50:07 Setting up Wireguard on Docker Desktop VM
Creating WireGuard interface chip0
Assigning IP to WireGuard interface
Configuring WireGuard device
Adding iptables NAT rule for host WireGuard IP
Setup container complete
Adding route for 172.17.0.0/16 -> utun3 (bridge)
DEBUG: (utun3) 2023/04/21 10:50:09 Watching Docker events

This suggests that all is well and that it has created both ends of the tunnel. However, if I run an nginx container (as per the example), I cannot curl -I 172.17.0.2 from the host.

I previously had this working with Docker Desktop. Any suggestions as to what else might be causing the issue?
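
For reference, the failing check (assuming the README's nginx example):

docker run --rm -d --name nginx nginx
curl -I 172.17.0.2
# hangs from the macOS host instead of returning nginx headers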

@gregnr
Member

gregnr commented Apr 21, 2023

Hi @night0wl, thanks for the report. I will try to replicate this setup on Colima when I get a minute.

In the meantime, can you confirm:

  • You do not have another interface bound to 172.17.0.0/16 in your routing tables (see below)?
  • You can run netstat -r to help debug this (see the sketch after this list) - please share the results if you are willing.
  • What is the result of docker network inspect bridge (after starting your test container, e.g. nginx)?
  • What is the result of ifconfig utun3 (replace utun3 with the virtual interface shown in the logs)?
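
A minimal sketch of that route check on the macOS host:

# list IPv4 routes and see which interface owns 172.17.0.0/16
netstat -rn -f inet | grep 172.17
# or ask the routing table directly for the container IP
route -n get 172.17.0.2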

@night0wl

Thanks for the reply @gregnr, I appreciate the help. See below:

The only interface binding for 172.17 is to utun3

Result of docker network inspect bridge with nginx container running

[
    {
        "Name": "bridge",
        "Id": "eb8ffd3b267e0627f4305d43b6ed48d1da33d334676d290c9c2f5f544ffe0600",
        "Created": "2023-04-21T10:08:56.055517587Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0b423b8e5f58578570270136b4a72e47c64c47d7fc8101b3dac78dfb5dd90eac": {
                "Name": "nginx",
                "EndpointID": "27b08684228460837883a545a839cef6bf2cb437b7e9d3c3c1bb1e4559a5da3e",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Result of ifconfig utun3

utun3: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1420
	inet 10.33.33.1 --> 10.33.33.2 netmask 0xffffffff

@gregnr
Member

gregnr commented Apr 21, 2023

Thanks for sharing. Everything looks normal 🤔

A few more things to try (from the VM side):

  • Test if host.docker.internal resolves from the VM:
    docker run --rm --net host wbitt/network-multitool nslookup host.docker.internal
  • Test if host.docker.internal is reachable from the VM:
    docker run --rm --net host wbitt/network-multitool ping -c 4 host.docker.internal
  • Test if 10.33.33.1 Wireguard peer is reachable from the VM:
    docker run --rm --net host wbitt/network-multitool ping -c 4 10.33.33.1
  • Test if test nginx container at 172.17.0.2 is reachable from the VM:
    docker run --rm --net host wbitt/network-multitool curl -I 172.17.0.2
  • Check if chip0 interface is set up correctly on VM:
    docker run --rm --net host wbitt/network-multitool ip address show chip0
  • Check if docker0 bridge network interface is set up correctly on VM:
    docker run --rm --net host wbitt/network-multitool ip address show docker0
  • Check if eth0 network interface to the host is set up correctly on VM:
    docker run --rm --net host wbitt/network-multitool ip address show eth0

@night0wl

night0wl commented Apr 24, 2023

I have run the above tests using Colima with colima start --network-address; see below:

  • Test if host.docker.internal resolves from the VM ✅

  • Test if host.docker.internal is reachable from the VM ✅

  • Test if 10.33.33.1 Wireguard peer is reachable from the VM ✅

  • Test if test nginx container at 172.17.0.2 is reachable from the VM ✅

  • Check if chip0 interface is set up correctly on VM ❓
    I suspect this could be the issue here, as the state is UNKNOWN; however, it is UP in the bracketed flags

12: chip0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.33.33.2 peer 10.33.33.1/32 scope global chip0
       valid_lft forever preferred_lft forever

I think the other network interfaces are fine, but I'm not 100% sure, so I'm including the output for them too:

  • Check if docker0 bridge network interface is set up correctly on VM
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:8c:93:af:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:8cff:fe93:af99/64 scope link
       valid_lft forever preferred_lft forever
  • Check if eth0 network interface to the host is set up correctly on VM
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:55:55:fe:12:1f brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.15/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fec0::5055:55ff:fefe:121f/64 scope site deprecated dynamic mngtmpaddr
       valid_lft 68741sec preferred_lft 0sec
    inet6 fe80::5055:55ff:fefe:121f/64 scope link
       valid_lft forever preferred_lft forever

@night0wl

night0wl commented Apr 25, 2023

I figured this out!

With a little inspiration from this blog post:
https://baptistout.net/posts/kubernetes-clusters-on-macos-with-loadbalancer-without-docker-desktop/

Colima does a lot of the initial setup described there, including, it seems, some of what socket_vmnet was being used for (quite possibly using it under the hood), such as creating a bridge100 interface.

So what I had to do was find the inet address of bridge100 on the Mac host:

SRC_IP=$(ifconfig bridge100 | grep "inet " | cut -d' ' -f2)

and then SSH into the VM, set up an iptables rule, and find the inet address of the col0 interface on the VM:

colima ssh
sudo iptables -t filter -A FORWARD -4 -p tcp -s <SRC_IP from host> -d 172.17.0.0/16 -j ACCEPT -i col0 -o docker0
COL_IP=$(ifconfig col0 | grep "inet addr" | cut -d':' -f2 | cut -d' ' -f1)

Then set up the route for 172.17.0.0/16 via COL_IP on the Mac host:

sudo route -nv add -net 172.17 <COL_IP from VM>

This is essentially what docker-mac-net-connect was doing, but using an existing bridge interface instead of creating a utun interface. I assume that the existence of these interfaces and routes was getting in the way. For now, I can use this manual solution, but it would be awesome for docker-mac-net-connect to support this somehow.

EDIT: I have partially figured this out... it works for the docker0 network but not for the Kind network
EDIT 2: I was creating an equivalent iptables rule for 172.18.0.0/16 but forgot to change the output to -o br-11aa22bb33cc - everything works now!
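
Putting it together for the Kind network, the working combination looks roughly like this (a sketch; the 172.18.0.0/16 subnet and br-11aa22bb33cc bridge name are from my setup - find yours via docker network inspect kind):

# inside the VM (colima ssh): forward host traffic into the Kind bridge
sudo iptables -t filter -A FORWARD -4 -p tcp -s <SRC_IP from host> -d 172.18.0.0/16 -j ACCEPT -i col0 -o br-11aa22bb33cc

# on the Mac host: route the Kind subnet to the VM
sudo route -nv add -net 172.18 <COL_IP from VM>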

@gregnr
Member

gregnr commented Apr 25, 2023

Nice one @night0wl! Looks like colima uses lima under the hood for Linux VM management (I should have gathered that from the name 😄), just like Rancher Desktop. When I experimented with Rancher Desktop I noticed there were extra bridges created that Docker Desktop didn't have. Very brief discussion here:
#8 (comment)

When I get some time I'll dig into this further to see if there's a way we can automate your above logic in lima environments.
