Cannot connect to container after setup #31

Open · normtown opened this issue Jul 18, 2019 · 6 comments

@normtown

I really don't get why this is not working. I'm simply running a netcat listener in my container:

% docker run --rm --privileged alpine nc -v -l -p 54321
listening on [::]:54321 ...

...and trying to connect from my laptop's shell, which fails:

% nc -v 172.17.0.1 54321
nc: connectx to 172.17.0.1 port 54321 (tcp) failed: Operation timed out

Before doing this test, I had set up a route on my laptop that uses the 10.0.75.2 gateway:

% sudo route -v add -net 172.17.0.1 -netmask 255.255.255.0 10.0.75.2

...which we can see here:

% netstat -nr
Routing tables

Internet:
Destination        Gateway            Flags        Refs      Use   Netif Expire
default            10.219.16.1        UGSc           90        0     en4
default            10.219.5.1         UGScI           3        0     en0
10.0.75/30         link#14            UC              4        0    tap1
10.0.75.2          link#14            UHLWI           0        0    tap1
10.0.75.3          ff:ff:ff:ff:ff:ff  UHLWbI          0       88    tap1
10.219.5/24        link#6             UCS             2        0     en0
10.219.5.1/32      link#6             UCS             1        0     en0
10.219.5.1         0:0:c:7:ac:8       UHLWIir         3       10     en0   1138
10.219.5.3         0:5d:73:dc:ae:7f   UHLWI           0        0     en0    309
10.219.5.45/32     link#6             UCS             1        0     en0
10.219.5.255       ff:ff:ff:ff:ff:ff  UHLWbI          0       88     en0
10.219.16/22       link#5             UCS            12        0     en4
10.219.16.1/32     link#5             UCS             1        0     en4
10.219.16.1        0:0:5e:0:1:a       UHLWIir        18        0     en4   1195
10.219.16.2        a8:2b:b5:58:5:47   UHLWI           0        0     en4   1167
10.219.16.3        a8:2b:b5:57:da:bd  UHLWI           0        0     en4   1199
10.219.16.8        0:26:73:f7:6:57    UHLWI           0        0     en4    899
10.219.16.9        0:26:73:f7:6:3b    UHLWI           0        0     en4    897
10.219.16.10       0:26:73:f7:5:7     UHLWI           0        0     en4    898
10.219.16.49       98:5a:eb:d2:ee:2a  UHLWI           0        0     en4    924
10.219.16.50       38:c9:86:14:1e:22  UHLWI           0        0     en4    762
10.219.16.55       50:65:f3:2d:8c:6c  UHLWI           0        0     en4    795
10.219.18.26       link#5             UHLWI           0        0     en4
10.219.18.33       3c:2c:30:f8:14:4a  UHLWIi          3     1899     en4    436
10.219.18.54/32    link#5             UCS             0        0     en4
10.219.18.56       54:bf:64:12:37:af  UHLWIi          2     4285     en4    768
10.219.19.255      ff:ff:ff:ff:ff:ff  UHLWbI          0       88     en4
127                127.0.0.1          UCS             1   275814     lo0
127.0.0.1          127.0.0.1          UH             15  1571399     lo0
127.255.255.255    127.0.0.1          UHW3I           0   275807     lo0      3
169.254            link#5             UCS             0        0     en4
169.254            link#6             UCSI            0        0     en0
172.17/24          10.0.75.2          UGSc            0        0    tap1
224.0.0/4          link#5             UmCS            2        0     en4
224.0.0/4          link#6             UmCSI           2        0     en0
224.0.0.251        1:0:5e:0:0:fb      UHmLWI          0        0     en4
224.0.0.251        1:0:5e:0:0:fb      UHmLWI          0        0     en0
239.255.255.250    1:0:5e:7f:ff:fa    UHmLWI          0      128     en4
239.255.255.250    1:0:5e:7f:ff:fa    UHmLWI          0      128     en0
255.255.255.255/32 link#5             UCS             0        0     en4
255.255.255.255/32 link#6             UCSI            0        0     en0
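
For reference, route get gives a quicker answer to the same question, namely which interface the kernel will pick for the container address (just a sketch; output omitted):

% route -n get 172.17.0.1

That should name tap1 as the interface and 10.0.75.2 as the gateway, consistent with the 172.17/24 entry above.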

We can see the tap1 virtual device is present on the laptop:

% ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
	options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
	inet 127.0.0.1 netmask 0xff000000
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
	nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
XHC20: flags=0<> mtu 0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether 18:65:90:d2:88:b5
	inet 10.219.5.45 netmask 0xffffff00 broadcast 10.219.5.255
	media: autoselect
	status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
	ether 0a:65:90:d2:88:b5
	media: autoselect
	status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
	ether 5a:db:2b:c4:ca:c7
	inet6 fe80::58db:2bff:fec4:cac7%awdl0 prefixlen 64 scopeid 0x8
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=60<TSO4,TSO6>
	ether 4a:00:08:48:04:20
	media: autoselect <full-duplex>
	status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=60<TSO4,TSO6>
	ether 4a:00:08:48:04:21
	media: autoselect <full-duplex>
	status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
	ether 4a:00:08:48:04:20
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x2
	member: en1 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 9 priority 0 path cost 0
	member: en2 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 10 priority 0 path cost 0
	media: <unknown type>
	status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
	inet6 fe80::9655:8cdf:cde:2188%utun0 prefixlen 64 scopeid 0xc
	nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
	inet6 fe80::1035:48b6:7222:41ca%utun1 prefixlen 64 scopeid 0xd
	nd6 options=201<PERFORMNUD,DAD>
tap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	ether b2:b2:d7:83:2d:ac
	inet 10.0.75.1 netmask 0xfffffffc broadcast 10.0.75.3
	media: autoselect
	status: active
	open (pid 79033)
en4: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=10b<RXCSUM,TXCSUM,VLAN_HWTAGGING,AV>
	ether ac:87:a3:14:a3:a5
	inet 10.219.18.54 netmask 0xfffffc00 broadcast 10.219.19.255
	media: autoselect (1000baseT <full-duplex>)
	status: active

...and, since --net=host shares the Docker VM's network namespace, we can see the VM's network devices from inside a container here:

% docker run --rm --net=host --privileged alpine ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:CC:10:A0:EF
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:ccff:fe10:a0ef/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:828 (828.0 B)

eth0      Link encap:Ethernet  HWaddr 02:50:00:00:00:01
          inet addr:192.168.65.3  Bcast:192.168.65.255  Mask:255.255.255.0
          inet6 addr: fe80::50:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:978 errors:0 dropped:0 overruns:0 frame:0
          TX packets:986 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:100738 (98.3 KiB)  TX bytes:80543 (78.6 KiB)

eth1      Link encap:Ethernet  HWaddr 00:A0:98:BC:F5:D7
          inet addr:10.0.75.2  Bcast:10.0.75.3  Mask:255.255.255.252
          inet6 addr: fe80::2a0:98ff:febc:f5d7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4616 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1419346 (1.3 MiB)  TX bytes:1138 (1.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:140 (140.0 B)  TX bytes:140 (140.0 B)

Hyperkit appears to be running with the tap1 device passed to it:

% ps axj | grep hyperkit
ntownsen         79027 78985 78985      0    1 S      ??    0:02.52 com.docker.vpnkit --ethernet fd:3 --port vpnkit.port.sock --port hyperkit://:62373/./vms/0 --diagnostics fd:4 --pcap fd:5 --vsock-path vms/0/connect --host-names host.docker.internal,docker.for.mac.host.internal,docker.for.mac.localhost --gateway-names gateway.docker.internal,docker.for.mac.gateway.internal,docker.for.mac.http.internal --vm-names docker-for-desktop --listen-backlog 32 --mtu 1500 --allowed-bind-addresses 0.0.0.0 --http /Users/ntownsen/Library/Group Containers/group.com.docker/http_proxy.json --dhcp /Users/ntownsen/Library/Group Containers/group.com.docker/dhcp.json --port-max-idle-time 300 --max-connections 2000 --gateway-ip 192.168.65.1 --host-ip 192.168.65.2 --lowest-ip 192.168.65.3 --highest-ip 192.168.65.254 --log-destination asl --udpv4-forwards 123:127.0.0.1:51032 --gc-compact-interval 1800
ntownsen         79033 79028 78985      0    1 S      ??    3:01.51 /Applications/Docker.app/Contents/Resources/bin/com.docker.hyperkit.original -A -u -F vms/0/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-vpnkit,path=vpnkit.eth.sock,uuid=45abaeb6-e762-4c5c-a923-08cf31152639 -U 0c889a65-63ac-4032-8e26-4d8e4985393a -s 2:0,ahci-hd,/Users/ntownsen/Library/Containers/com.docker.docker/Data/vms/0/Docker.raw -s 2:1,virtio-tap,tap1 -s 3,virtio-sock,guest_cid=3,path=vms/0,guest_forwards=2376;1525 -s 4,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker-for-mac.iso -s 5,ahci-cd,vms/0/config.iso -s 6,ahci-cd,/Applications/Docker.app/Contents/Resources/linuxkit/docker.iso -s 7,virtio-rnd -l com1,autopty=vms/0/tty,asl -f bootrom,/Applications/Docker.app/Contents/Resources/uefi/UEFI.fd,,

One thing that seems a little odd to me is that slot 2 has both a "hard disk" on it (ahci-hd) and the tap device that was injected by the shim script (virtio-tap). The script seems to assume that anything on slot 2 is a network device. I'm curious why that is the case.
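
For what it's worth, this is a quick way to isolate just the slot-2 devices from that ps output (a sketch; the grep pattern is only meant as a convenient filter):

% ps axj | grep hyperkit | grep -oE -- '-s 2:[0-9]+,[a-z-]+'

Against the output above, that yields -s 2:0,ahci-hd and -s 2:1,virtio-tap, i.e. the disk and the injected tap device sharing slot 2.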

Interestingly, the laptop cannot connect to the host VM either. I can get a shell with screen:

% screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

Here's ifconfig from the host VM:

linuxkit-025000000001:~# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:CC:10:A0:EF
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:ccff:fe10:a0ef/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:828 (828.0 B)

eth0      Link encap:Ethernet  HWaddr 02:50:00:00:00:01
          inet addr:192.168.65.3  Bcast:192.168.65.255  Mask:255.255.255.0
          inet6 addr: fe80::50:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1069 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1077 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:108356 (105.8 KiB)  TX bytes:87745 (85.6 KiB)

eth1      Link encap:Ethernet  HWaddr 00:A0:98:BC:F5:D7
          inet addr:10.0.75.2  Bcast:10.0.75.3  Mask:255.255.255.252
          inet6 addr: fe80::2a0:98ff:febc:f5d7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5377 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1651451 (1.5 MiB)  TX bytes:1138 (1.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:140 (140.0 B)  TX bytes:140 (140.0 B)

And when I run a netcat listener in the host VM:

linuxkit-025000000001:~# nc -v -l -p 54321
listening on [::]:54321 ...

...with the same result when I try to connect from the Mac shell:

% nc -v 10.0.75.2 54321
nc: connectx to 10.0.75.2 port 54321 (tcp) failed: Operation timed out
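
To see whether those SYN packets are even leaving the Mac on the tap interface, something like this should help (tcpdump ships with macOS; the filter is just a sketch):

% sudo tcpdump -n -i tap1 'tcp port 54321'

If nothing shows up there while nc is retrying, the packets are not making it onto tap1 at all; if they do appear but go unanswered, the problem is on the VM side.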

For completeness, here's the rest of the debug info I can think to provide.

The route table on the host VM:

linuxkit-025000000001:~# netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.65.1    0.0.0.0         UG        0 0          0 eth0
10.0.75.0       0.0.0.0         255.255.255.252 U         0 0          0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 lo
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.65.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0
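
One more check that seems relevant, since the Mac-to-container path has to be forwarded from eth1 to docker0 inside the VM: whether IP forwarding is enabled at all (a value of 1 here means it is):

linuxkit-025000000001:~# cat /proc/sys/net/ipv4/ip_forward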

And the tuntap devices installed on the Mac:

% ls -l /dev/tap*
crw-rw----  1 root      wheel   36,   0 Jul 15 17:38 /dev/tap0
crw-rw----  1 ntownsen  wheel   36,   1 Jul 18 17:24 /dev/tap1
crw-rw----  1 root      wheel   36,  10 Jul 15 17:38 /dev/tap10
crw-rw----  1 root      wheel   36,  11 Jul 15 17:38 /dev/tap11
crw-rw----  1 root      wheel   36,  12 Jul 15 17:38 /dev/tap12
crw-rw----  1 root      wheel   36,  13 Jul 15 17:38 /dev/tap13
crw-rw----  1 root      wheel   36,  14 Jul 15 17:38 /dev/tap14
crw-rw----  1 root      wheel   36,  15 Jul 15 17:38 /dev/tap15
crw-rw----  1 root      wheel   36,   2 Jul 15 17:38 /dev/tap2
crw-rw----  1 root      wheel   36,   3 Jul 15 17:38 /dev/tap3
crw-rw----  1 root      wheel   36,   4 Jul 15 17:38 /dev/tap4
crw-rw----  1 root      wheel   36,   5 Jul 15 17:38 /dev/tap5
crw-rw----  1 root      wheel   36,   6 Jul 15 17:38 /dev/tap6
crw-rw----  1 root      wheel   36,   7 Jul 15 17:38 /dev/tap7
crw-rw----  1 root      wheel   36,   8 Jul 15 17:38 /dev/tap8
crw-rw----  1 root      wheel   36,   9 Jul 15 17:38 /dev/tap9

% ls -l /dev/tun*
crw-rw----  1 root  wheel   37,   0 Jul 15 17:38 /dev/tun0
crw-rw----  1 root  wheel   37,   1 Jul 15 17:38 /dev/tun1
crw-rw----  1 root  wheel   37,  10 Jul 15 17:38 /dev/tun10
crw-rw----  1 root  wheel   37,  11 Jul 15 17:38 /dev/tun11
crw-rw----  1 root  wheel   37,  12 Jul 15 17:38 /dev/tun12
crw-rw----  1 root  wheel   37,  13 Jul 15 17:38 /dev/tun13
crw-rw----  1 root  wheel   37,  14 Jul 15 17:38 /dev/tun14
crw-rw----  1 root  wheel   37,  15 Jul 15 17:38 /dev/tun15
crw-rw----  1 root  wheel   37,   2 Jul 15 17:38 /dev/tun2
crw-rw----  1 root  wheel   37,   3 Jul 15 17:38 /dev/tun3
crw-rw----  1 root  wheel   37,   4 Jul 15 17:38 /dev/tun4
crw-rw----  1 root  wheel   37,   5 Jul 15 17:38 /dev/tun5
crw-rw----  1 root  wheel   37,   6 Jul 15 17:38 /dev/tun6
crw-rw----  1 root  wheel   37,   7 Jul 15 17:38 /dev/tun7
crw-rw----  1 root  wheel   37,   8 Jul 15 17:38 /dev/tun8
crw-rw----  1 root  wheel   37,   9 Jul 15 17:38 /dev/tun9
@normtown (Author)

Docker Engine: 18.09.2
macOS: 10.13.6

@normtown (Author)

% docker run --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i iptables-save
nsenter: failed to execute iptables-save: No such file or directory

@AlmirKadric (Collaborator) commented Jul 19, 2019

@normtown If you check the README, there is an additional note under the command you're using: "Depending on the docker-for-mac version the command may change."

Can you try this command instead?

$ docker run --rm --privileged --pid=host docker4w/nsenter-dockerd /bin/sh -c 'iptables -A FORWARD -i eth1 -j ACCEPT'

Then attempt to ping the container. If the ping works, the routing has been set up properly.
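
If you want to confirm the rule actually landed, the same image can dump the chain (just a sketch; iptables -S is standard):

$ docker run --rm --privileged --pid=host docker4w/nsenter-dockerd /bin/sh -c 'iptables -S FORWARD'

You should see -A FORWARD -i eth1 -j ACCEPT in that output after running the command above.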

I keep all the latest wrappings for this system in the Node.js Docker helpers package I created. You can always look there to see how I tie it all together:
https://github.com/AlmirKadric-Published/helpers-docker-nodejs

@normtown (Author)

Thanks for that clarification. I ran the command. There were no error messages, but still no change in behavior: my laptop shell still cannot connect to nc running in a Docker container, and ping also times out.

On a side note, it wasn't clear to me that I needed to run that command because the README says:

Note: Although not required for docker-for-mac versions greater than 17.12.0, the above command can be replaced with the following if ever needed...

I read that as neither command (above or below) being necessary when running a version greater than 17.12.0. In my case, I'm running 18.09.2.

@AlmirKadric (Collaborator) commented Jul 26, 2019

So, to clarify, you followed these steps (a consolidated sketch follows the list):

  • ran the install script ./sbin/docker_tap_install.sh
  • waited for Docker to restart (this should be part of the above script)
  • ran the iface binding script docker_tap_up.sh
  • ran the routing command route add -net <IP RANGE> -netmask <IP MASK> 10.0.75.2
    (where IP RANGE is the range of your Docker container network, usually defined in the docker-compose file)
  • tried to ping a container: ping <CONTAINER IP STARTING WITH IP RANGE>
  • ran the iptables fix if the ping didn't work the first time:
    docker run --rm --privileged --pid=host docker4w/nsenter-dockerd /bin/sh -c 'iptables -A FORWARD -i eth1 -j ACCEPT'
  • tried to ping again
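
Put together, and assuming the stock 172.17.0.0/16 bridge network from your ifconfig output (and that both scripts live under ./sbin), the whole sequence is roughly:

$ ./sbin/docker_tap_install.sh     # installs the hyperkit shim and restarts Docker
$ ./sbin/docker_tap_up.sh          # brings up the tap1 interface binding
$ sudo route add -net 172.17.0.0 -netmask 255.255.0.0 10.0.75.2
$ ping <CONTAINER IP>              # any container IP in that range
$ docker run --rm --privileged --pid=host docker4w/nsenter-dockerd /bin/sh -c 'iptables -A FORWARD -i eth1 -j ACCEPT'   # only if the ping failed
$ ping <CONTAINER IP>              # try again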

Let me know if the above helps in any way.

P.S. Also check #11 for a list of information you can provide to help me debug this.

Example docker-compose file:

version: '2'

services:
    percona:
        container_name: ${COMPOSE_PROJECT_NAME}-percona
        image: percona:5.7.21

        environment:
            - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}

        restart: always

        networks:
            app_net:
                ipv4_address: ${IP_RANGE}.3

    redis:
        container_name: ${COMPOSE_PROJECT_NAME}-redis
        image: redis:4.0.10

        restart: always

        networks:
            app_net:
                ipv4_address: ${IP_RANGE}.4

networks:
    app_net:
        driver: bridge

        ipam:
            driver: default
            config:
            - subnet: ${IP_RANGE}.0/24
              gateway: ${IP_RANGE}.1
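
For that compose file, the variables would come from an .env file alongside it; purely as a hypothetical example (none of these values are taken from your setup):

COMPOSE_PROJECT_NAME=myapp
MYSQL_ROOT_PASSWORD=changeme
IP_RANGE=172.30.10

With those values, the matching route on the Mac would be sudo route add -net 172.30.10.0 -netmask 255.255.255.0 10.0.75.2.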

@AlmirKadric (Collaborator)

@normtown any update on this?
Did you manage to fix your issue?
