
Fails on kube-master-01 #4

Open
segmond opened this issue Apr 2, 2018 · 3 comments

Comments


segmond commented Apr 2, 2018

I tried to install, and it first failed while verifying etcd. The task's stdout was [], but if I went into the Vagrant VM and ran etcdctl ls, I saw the expected output. Since etcd was clearly working, I tried increasing the delay, but had no luck, so I set ignore_errors: yes and the playbook continued.

But it now fails with:

TASK [configure/kube-components : Add add-on files] **********************************************
ok: [kube-master-01 -> 127.0.0.1] => (item=dns-addon)
ok: [kube-master-01 -> 127.0.0.1] => (item=kube-dashboard-rc)
ok: [kube-master-01 -> 127.0.0.1] => (item=kube-dashboard-svc)

TASK [configure/kube-components : Verify if kube-system resources already exist] *****************
fatal: [kube-master-01 -> 127.0.0.1]: FAILED! => {"changed": false, "cmd": ["kubectl", "get", "pods", "--namespace=kube-system"], "delta": "0:00:00.073467", "end": "2018-04-01 23:25:43.422224", "msg": "non-zero return code", "rc": 1, "start": "2018-04-01 23:25:43.348757", "stderr": "The connection to the server localhost:8080 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server localhost:8080 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}

I logged in to the master and found that kubectl was only available inside the containers, not in the VM itself.

~/kubernetes-coreos$ vagrant ssh kube-master-01
Last login: Mon Apr 2 12:26:12 UTC 2018 from 10.0.2.2 on pts/0
Container Linux by CoreOS stable (1632.3.0)
core@kube-master-01 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
247bebe4498d quay.io/calico/leader-elector:v0.1.0 "/run.sh --electio..." 13 hours ago Up 9 hours k8s_leader-elector.89250a02_calico-policy-controller-10.0.0.111_calico-system_57776fd3fc1ccde1f510665cb3600f3a_beddb16e
ed82c9226edc quay.io/coreos/hyperkube:v1.4.3_coreos.0 "/hyperkube apiser..." 13 hours ago Up 9 hours k8s_kube-apiserver.cee357c9_kube-apiserver-10.0.0.111_kube-system_23224f421b9ec012c8ee55d8bb1a898b_96b3e6fe
735f04159975 quay.io/coreos/hyperkube:v1.4.3_coreos.0 "/hyperkube proxy ..." 13 hours ago Up 9 hours k8s_kube-proxy.117567ac_kube-proxy-10.0.0.111_kube-system_deca3459fe418db4856c979bbdc2fe90_88a9e395
51c2c19fcf9d quay.io/coreos/hyperkube:v1.4.3_coreos.0 "/hyperkube contro..." 13 hours ago Up 9 hours k8s_kube-controller-manager.cad716ee_kube-controller-manager-10.0.0.111_kube-system_10187fd76486b908be61d347f9b64570_384a32d9
82415e8c790a quay.io/coreos/hyperkube:v1.4.3_coreos.0 "/hyperkube schedu..." 13 hours ago Up 9 hours k8s_kube-scheduler.eb9698df_kube-scheduler-10.0.0.111_kube-system_85527cbed8cc57c3e7194c0c4f48fc5e_dc648cad
480fe67f8263 calico/kube-policy-controller:v0.2.0 "/dist/controller" 13 hours ago Up 9 hours k8s_k8s-policy-controller.a60b4aaa_calico-policy-controller-10.0.0.111_calico-system_57776fd3fc1ccde1f510665cb3600f3a_e887ab0b
9e06b8eb7929 gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_kube-apiserver-10.0.0.111_kube-system_23224f421b9ec012c8ee55d8bb1a898b_233eebc2
8e011f0e434c gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_kube-proxy-10.0.0.111_kube-system_deca3459fe418db4856c979bbdc2fe90_5a2e31a3
7d8153cdf4be gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_kube-controller-manager-10.0.0.111_kube-system_10187fd76486b908be61d347f9b64570_1033266b
68aeb46be376 gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_kube-scheduler-10.0.0.111_kube-system_85527cbed8cc57c3e7194c0c4f48fc5e_5b60154c
1ff8d0d36730 gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_calico-policy-controller-10.0.0.111_calico-system_57776fd3fc1ccde1f510665cb3600f3a_45dd53f2

core@kube-master-01 ~ $ docker exec -it ed8 bash
root@kube-master-01:/# ./kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kube-apiserver-10.0.0.111 1/1 Running 0 13h
kube-controller-manager-10.0.0.111 1/1 Running 0 13h
kube-proxy-10.0.0.111 1/1 Running 0 13h
kube-proxy-10.0.0.121 1/1 Running 0 13h
kube-proxy-10.0.0.122 1/1 Running 0 13h
kube-proxy-10.0.0.123 1/1 Running 0 9h
kube-proxy-10.0.0.124 1/1 Running 0 9h
kube-scheduler-10.0.0.111 1/1 Running 0 13h
root@kube-master-01:/#


segmond commented Apr 3, 2018

BTW, I was able to complete the deployment manually: I copied the YAML files into the apiserver container and applied them, then ran the hello-world and guestbook deployments, SSH port-forwarded the dashboard, and can connect to it. Although the cluster is up and running, if I run "make up" again, it still fails.
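A rough sketch of that manual workaround, for anyone hitting the same wall (the container name, the manifest name, and the kubectl path inside the container are assumptions based on the logs above, not taken from the repo):

```shell
# Rough sketch of the manual workaround: copy each manifest into the
# apiserver container and apply it with the kubectl bundled there.
# DOCKER and CONTAINER are overridable so the commands can be stubbed.
DOCKER=${DOCKER:-docker}
CONTAINER=${CONTAINER:-k8s_kube-apiserver}   # hypothetical container name

apply_manifest() {
  manifest=$1
  "$DOCKER" cp "$manifest" "$CONTAINER:/$manifest" &&
    "$DOCKER" exec "$CONTAINER" ./kubectl create -f "/$manifest"
}

# Run on kube-master-01, inside the VM:
# apply_manifest dns-addon.yml
```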

sebiwi (Owner) commented Apr 10, 2018

For the etcd error, did you pull the latest version of the code? I had some people tell me that they had a similar error, and I pushed some corrective measures in 2a4f1a1.

If this doesn't solve your problem, would you share the output of the Ansible execution concerning the etcd failure?

The kubectl error makes me think that the API server is not ready at the time the kubectl requests are made. This might be caused by a slow internet connection. The kubectl requests are made on your computer, not on the VMs or in the containers (see https://github.com/sebiwi/kubernetes-coreos/blob/master/kubernetes.yml#L45).

After it fails, log into the kube-master-01 instance (vagrant ssh kube-master-01) and see what the kubelet is actually doing (sudo journalctl -u kubelet). Please share the output here for debugging purposes.
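If the API server really is just slow to come up, one thing to experiment with is retrying the kubectl call instead of failing on the first refused connection. A minimal sketch; the attempt count and sleep interval are arbitrary, and wrapping kubectl this way is a suggestion, not something the playbook currently does:

```shell
# Minimal retry helper: re-run a command until it succeeds or the attempt
# budget is exhausted. Attempt count and sleep interval are arbitrary.
retry() {
  attempts=$1
  shift
  while [ "$attempts" -gt 0 ]; do
    if "$@"; then
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 1
  done
  return 1
}

# Example (hypothetical): wait for the apiserver before the verify task runs.
# retry 30 kubectl get pods --namespace=kube-system
```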


pinterl commented Jul 22, 2019

I cloned the commit "Update etcd2 to etcd3 \o/" (committed by @sebiwi on Jan 13).

When I first executed make up on my Macintosh, I got:

--- learning-kubernetes/kubernetes-coreos ‹master› » make up                                                2 ↵
Bringing machine 'etcd-01' up with 'virtualbox' provider...
Bringing machine 'kube-master-01' up with 'virtualbox' provider...
Bringing machine 'kube-worker-01' up with 'virtualbox' provider...
Bringing machine 'kube-worker-02' up with 'virtualbox' provider...
==> etcd-01: Importing base box 'coreos-stable'...
==> etcd-01: Matching MAC address for NAT networking...
==> etcd-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> etcd-01: Setting the name of the VM: kubernetes-coreos_etcd-01_1563756045576_4877
==> etcd-01: Clearing any previously set network interfaces...
==> etcd-01: Preparing network interfaces based on configuration...
    etcd-01: Adapter 1: nat
    etcd-01: Adapter 2: hostonly
==> etcd-01: Forwarding ports...
    etcd-01: 22 (guest) => 2222 (host) (adapter 1)
==> etcd-01: Running 'pre-boot' VM customizations...
==> etcd-01: Booting VM...
==> etcd-01: Waiting for machine to boot. This may take a few minutes...
    etcd-01: SSH address: 127.0.0.1:2222
    etcd-01: SSH username: core
    etcd-01: SSH auth method: private key
==> etcd-01: Machine booted and ready!
==> etcd-01: Setting hostname...
==> etcd-01: Configuring and enabling network interfaces...
==> etcd-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-master-01: Importing base box 'coreos-stable'...
==> kube-master-01: Matching MAC address for NAT networking...
==> kube-master-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-master-01: Setting the name of the VM: kubernetes-coreos_kube-master-01_1563756066427_9289
==> kube-master-01: Fixed port collision for 22 => 2222. Now on port 2200.
==> kube-master-01: Clearing any previously set network interfaces...
==> kube-master-01: Preparing network interfaces based on configuration...
    kube-master-01: Adapter 1: nat
    kube-master-01: Adapter 2: hostonly
==> kube-master-01: Forwarding ports...
    kube-master-01: 22 (guest) => 2200 (host) (adapter 1)
==> kube-master-01: Running 'pre-boot' VM customizations...
==> kube-master-01: Booting VM...
==> kube-master-01: Waiting for machine to boot. This may take a few minutes...
    kube-master-01: SSH address: 127.0.0.1:2200
    kube-master-01: SSH username: core
    kube-master-01: SSH auth method: private key
==> kube-master-01: Machine booted and ready!
==> kube-master-01: Setting hostname...
==> kube-master-01: Configuring and enabling network interfaces...
==> kube-master-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-worker-01: Importing base box 'coreos-stable'...
==> kube-worker-01: Matching MAC address for NAT networking...
==> kube-worker-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-worker-01: Setting the name of the VM: kubernetes-coreos_kube-worker-01_1563756088619_35683
==> kube-worker-01: Fixed port collision for 22 => 2222. Now on port 2201.
==> kube-worker-01: Clearing any previously set network interfaces...
==> kube-worker-01: Preparing network interfaces based on configuration...
    kube-worker-01: Adapter 1: nat
    kube-worker-01: Adapter 2: hostonly
==> kube-worker-01: Forwarding ports...
    kube-worker-01: 22 (guest) => 2201 (host) (adapter 1)
==> kube-worker-01: Running 'pre-boot' VM customizations...
==> kube-worker-01: Booting VM...
==> kube-worker-01: Waiting for machine to boot. This may take a few minutes...
    kube-worker-01: SSH address: 127.0.0.1:2201
    kube-worker-01: SSH username: core
    kube-worker-01: SSH auth method: private key
==> kube-worker-01: Machine booted and ready!
==> kube-worker-01: Setting hostname...
==> kube-worker-01: Configuring and enabling network interfaces...
==> kube-worker-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-worker-02: Importing base box 'coreos-stable'...
==> kube-worker-02: Matching MAC address for NAT networking...
==> kube-worker-02: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-worker-02: Setting the name of the VM: kubernetes-coreos_kube-worker-02_1563756110176_45299
==> kube-worker-02: Fixed port collision for 22 => 2222. Now on port 2202.
==> kube-worker-02: Clearing any previously set network interfaces...
==> kube-worker-02: Preparing network interfaces based on configuration...
    kube-worker-02: Adapter 1: nat
    kube-worker-02: Adapter 2: hostonly
==> kube-worker-02: Forwarding ports...
    kube-worker-02: 22 (guest) => 2202 (host) (adapter 1)
==> kube-worker-02: Running 'pre-boot' VM customizations...
==> kube-worker-02: Booting VM...
==> kube-worker-02: Waiting for machine to boot. This may take a few minutes...
    kube-worker-02: SSH address: 127.0.0.1:2202
    kube-worker-02: SSH username: core
    kube-worker-02: SSH auth method: private key
==> kube-worker-02: Machine booted and ready!
==> kube-worker-02: Setting hostname...
==> kube-worker-02: Configuring and enabling network interfaces...
==> kube-worker-02: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names
 by default, this will change, but still be user configurable on deprecation. This feature will be removed in
version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
 [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details


PLAY [Bootstrap coreos hosts] **********************************************************************************

TASK [bootstrap/ansible-bootstrap : Check if Python is installed] **********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if install tar file exists] ******************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if pypy directory exists] ********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if libtinfo is simlinked] ********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Download PyPy] *************************************************************
changed: [kube-worker-02]
changed: [kube-worker-01]
changed: [etcd-01]
changed: [kube-master-01]

TASK [bootstrap/ansible-bootstrap : Extract PyPy] **************************************************************
changed: [kube-worker-02]
changed: [kube-master-01]
changed: [etcd-01]
changed: [kube-worker-01]

TASK [bootstrap/ansible-bootstrap : Install PyPy] **************************************************************
changed: [kube-master-01]
changed: [kube-worker-01]
changed: [etcd-01]
changed: [kube-worker-02]

TASK [bootstrap/ansible-bootstrap : Test if Python is installed] ***********************************************
ok: [kube-master-01]
ok: [kube-worker-01]
ok: [kube-worker-02]
ok: [etcd-01]

TASK [bootstrap/ansible-bootstrap : Gather ansible facts] ******************************************************
ok: [kube-worker-01]
ok: [kube-master-01]
ok: [kube-worker-02]
ok: [etcd-01]

PLAY [Create certificates for Kubernetes componentes] **********************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [kube-master-01]
ok: [kube-worker-01]
ok: [etcd-01]
ok: [kube-worker-02]

TASK [configure/ca : Create ca directory] **********************************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]

TASK [configure/ca : Create CA root key] ***********************************************************************
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]

TASK [configure/ca : Create CA root certificate] ***************************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Add openssl configuration for Kuberentes API server] **************************************
changed: [kube-master-01 -> 127.0.0.1]
changed: [etcd-01 -> 127.0.0.1]
changed: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server key] *********************************************************
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server csr] *********************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server certificate] *************************************************
ok: [etcd-01 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Add openssl configuration for Kuberentes workers] *****************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes worker server key] ******************************************************
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)
changed: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create Kubernetes worker server csr] ******************************************************
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create Kubernetes worker certificate] *****************************************************
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)
changed: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create cluster administrator key] *********************************************************
ok: [etcd-01 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create cluster administrator csr] *********************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create cluster administrator certificate] *************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

PLAY [Configure etcd cluster] **********************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [etcd-01]

TASK [configure/etcd : create etcd service directory] **********************************************************
changed: [etcd-01]

TASK [configure/etcd : create etcd configuration] **************************************************************
changed: [etcd-01]

TASK [configure/etcd : add etcd unit file] *********************************************************************
changed: [etcd-01]

TASK [configure/etcd : start etcd service] *********************************************************************
changed: [etcd-01]

TASK [configure/etcd : Wait for port 2379 to listen] ***********************************************************
ok: [etcd-01]

RUNNING HANDLER [configure/etcd : restart etcd] ****************************************************************
changed: [etcd-01]

PLAY [Configure Kubernetes master node] ************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes master ssl directory] ******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Kubernetes master SSL resources] *********************************************
changed: [kube-master-01] => (item=ca.pem)
changed: [kube-master-01] => (item=apiserver.pem)
changed: [kube-master-01] => (item=apiserver-key.pem)

TASK [configure/kube-master : Create flannel configuration directory] ******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add flannel local configuration] *************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create flannel systemd configuration directory] **********************************
changed: [kube-master-01]

TASK [configure/kube-master : Add flannel systemd drop-in] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Docker systemd configuration directory] ***********************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Docker systemd drop-in (require flannel before starting)] ********************
changed: [kube-master-01]

TASK [configure/kube-master : Add kubelet service configuration] ***********************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes manifests directory] *******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-apiserver manifest] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-proxy manifest] *********************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-controller-manager manifest] ********************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-scheduler manifest] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico service configuration] ************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico policy controller pod] ************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes CNI directory] *************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico's CNI configuration] **************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Check if flannel pod network range configuration exists in etcd] *****************
ok: [kube-master-01]

TASK [configure/kube-master : Add flannel pod network range configuration to etcd] *****************************
ok: [kube-master-01]

TASK [configure/kube-master : Start flannel] *******************************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Start Kubelet] *******************************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Start Calico] ********************************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Test apiserver] ******************************************************************
ok: [kube-master-01]

TASK [configure/kube-master : Verify calico-system namespace existance] ****************************************
ok: [kube-master-01]

TASK [configure/kube-master : Create Calico namespace] *********************************************************
[DEPRECATION WARNING]: Supplying headers via HEADER_* is deprecated. Please use `headers` to supply headers for
 the request. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting
deprecation_warnings=False in ansible.cfg.
ok: [kube-master-01]

PLAY [Configure Kubernetes worker node] ************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [kube-worker-01]
ok: [kube-worker-02]

TASK [configure/kube-worker : Create Kubernetes master ssl directory] ******************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add Kubernetes worker SSL resources] *********************************************
changed: [kube-worker-01] => (item=ca.pem)
changed: [kube-worker-02] => (item=ca.pem)
changed: [kube-worker-01] => (item=kube-worker-01-worker.pem)
changed: [kube-worker-02] => (item=kube-worker-02-worker.pem)
changed: [kube-worker-01] => (item=kube-worker-01-worker-key.pem)
changed: [kube-worker-02] => (item=kube-worker-02-worker-key.pem)

TASK [configure/kube-worker : Create symlinks to each worker-specific certificate] *****************************
changed: [kube-worker-01] => (item=worker.pem)
changed: [kube-worker-02] => (item=worker.pem)
changed: [kube-worker-01] => (item=worker-key.pem)
changed: [kube-worker-02] => (item=worker-key.pem)

TASK [configure/kube-worker : Create flannel configuration directory] ******************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add flannel local configuration] *************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Create flannel systemd configuration directory] **********************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add flannel systemd drop-in] *****************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Create Docker systemd configuration directory] ***********************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add Docker systemd drop-in] ******************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add kubelet service configuration] ***********************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Create Kubernetes cni/net.d directory] *******************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add the CNI configuration] *******************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Create Kubernetes manifests directory] *******************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add kube-proxy manifest] *********************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add worker kubeconfig] ***********************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Add calico-node service configuration] *******************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Start flannel] *******************************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Start Kubelet] *******************************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

TASK [configure/kube-worker : Start Calico] ********************************************************************
changed: [kube-worker-01]
changed: [kube-worker-02]

PLAY [Configure kubectl] ***************************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [etcd-01]
ok: [kube-worker-01]
ok: [kube-worker-02]
ok: [kube-master-01]

TASK [configure/kubectl : Verify if kubectl is already configured] *********************************************
ok: [kube-master-01 -> 127.0.0.1]

TASK [configure/kubectl : Set default cluster] *****************************************************************
skipping: [kube-master-01]

TASK [configure/kubectl : Set credentials] *********************************************************************
skipping: [kube-master-01]

TASK [configure/kubectl : Set context] *************************************************************************
skipping: [kube-master-01]

TASK [configure/kubectl : Use context] *************************************************************************
skipping: [kube-master-01]

PLAY [Add extra Kubernetes components] *************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [etcd-01]
ok: [kube-worker-02]
ok: [kube-master-01]
ok: [kube-worker-01]

TASK [configure/kube-components : Create add-on directory] *****************************************************
changed: [kube-master-01 -> 127.0.0.1]

TASK [configure/kube-components : Add add-on files] ************************************************************
changed: [kube-master-01 -> 127.0.0.1] => (item=dns-addon)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-dashboard-rc)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-dashboard-svc)

TASK [configure/kube-components : Verify if kube-system resources already exist] *******************************
fatal: [kube-master-01 -> 127.0.0.1]: FAILED! => {"changed": false, "cmd": ["kubectl", "get", "pods", "--namespace=kube-system"], "delta": "0:00:00.070078", "end": "2019-07-21 17:45:34.969216", "msg": "non-zero return code", "rc": 1, "start": "2019-07-21 17:45:34.899138", "stderr": "The connection to the server localhost:8080 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server localhost:8080 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}

NO MORE HOSTS LEFT *********************************************************************************************

PLAY RECAP *****************************************************************************************************
etcd-01                    : ok=33   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=4
kube-master-01             : ok=56   changed=40   unreachable=0    failed=1    skipped=4    rescued=0    ignored=4
kube-worker-01             : ok=46   changed=23   unreachable=0    failed=0    skipped=0    rescued=0    ignored=4
kube-worker-02             : ok=46   changed=22   unreachable=0    failed=0    skipped=0    rescued=0    ignored=4

make: *** [playbook] Error 2
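The "connection to the server localhost:8080 was refused" error means the kubectl that Ansible runs on the control host has no configured context, so it falls back to the insecure default endpoint localhost:8080 (note the "Configure kubectl" play above skipped all of the set-cluster/set-credentials/set-context tasks). A minimal bash sketch to confirm nothing is listening where kubectl is connecting (the `probe` helper and the port are illustrative, not part of this repo):

```shell
#!/usr/bin/env bash
# Minimal TCP reachability probe using bash's /dev/tcp builtin.
# "closed" on 127.0.0.1:8080 reproduces exactly what kubectl hit:
# with no context configured it tried the insecure local default.
probe() {
  local host=$1 port=$2
  if timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}
probe 127.0.0.1 8080
```

If the port is closed, the fix is on the control host, not the master: a kubectl context has to exist (via `kubectl config set-cluster` / `set-credentials` / `set-context` / `use-context`) before the kube-components play runs.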

After this I ran the clean script, switched to sudo, and installed the vagrant-hostmanager plugin:

make clean
sudo -i
vagrant plugin install vagrant-hostmanager

and got this output:

USWHBML00208263:kubernetes-coreos root# make up
Bringing machine 'etcd-01' up with 'virtualbox' provider...
Bringing machine 'kube-master-01' up with 'virtualbox' provider...
Bringing machine 'kube-worker-01' up with 'virtualbox' provider...
Bringing machine 'kube-worker-02' up with 'virtualbox' provider...
==> etcd-01: Importing base box 'coreos-stable'...
==> etcd-01: Matching MAC address for NAT networking...
==> etcd-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> etcd-01: Setting the name of the VM: kubernetes-coreos_etcd-01_1563756757049_23327
==> etcd-01: Clearing any previously set network interfaces...
==> etcd-01: Preparing network interfaces based on configuration...
    etcd-01: Adapter 1: nat
    etcd-01: Adapter 2: hostonly
==> etcd-01: Forwarding ports...
    etcd-01: 22 (guest) => 2222 (host) (adapter 1)
==> etcd-01: Running 'pre-boot' VM customizations...
==> etcd-01: Booting VM...
==> etcd-01: Waiting for machine to boot. This may take a few minutes...
    etcd-01: SSH address: 127.0.0.1:2222
    etcd-01: SSH username: core
    etcd-01: SSH auth method: private key
==> etcd-01: Machine booted and ready!
==> etcd-01: Setting hostname...
==> etcd-01: Configuring and enabling network interfaces...
==> etcd-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-master-01: Importing base box 'coreos-stable'...
==> kube-master-01: Matching MAC address for NAT networking...
==> kube-master-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-master-01: Setting the name of the VM: kubernetes-coreos_kube-master-01_1563756778085_89641
==> kube-master-01: Fixed port collision for 22 => 2222. Now on port 2200.
==> kube-master-01: Clearing any previously set network interfaces...
==> kube-master-01: Preparing network interfaces based on configuration...
    kube-master-01: Adapter 1: nat
    kube-master-01: Adapter 2: hostonly
==> kube-master-01: Forwarding ports...
    kube-master-01: 22 (guest) => 2200 (host) (adapter 1)
==> kube-master-01: Running 'pre-boot' VM customizations...
==> kube-master-01: Booting VM...
==> kube-master-01: Waiting for machine to boot. This may take a few minutes...
    kube-master-01: SSH address: 127.0.0.1:2200
    kube-master-01: SSH username: core
    kube-master-01: SSH auth method: private key
==> kube-master-01: Machine booted and ready!
==> kube-master-01: Setting hostname...
==> kube-master-01: Configuring and enabling network interfaces...
==> kube-master-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-worker-01: Importing base box 'coreos-stable'...
==> kube-worker-01: Matching MAC address for NAT networking...
==> kube-worker-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-worker-01: Setting the name of the VM: kubernetes-coreos_kube-worker-01_1563756799379_61081
==> kube-worker-01: Fixed port collision for 22 => 2222. Now on port 2201.
==> kube-worker-01: Clearing any previously set network interfaces...
==> kube-worker-01: Preparing network interfaces based on configuration...
    kube-worker-01: Adapter 1: nat
    kube-worker-01: Adapter 2: hostonly
==> kube-worker-01: Forwarding ports...
    kube-worker-01: 22 (guest) => 2201 (host) (adapter 1)
==> kube-worker-01: Running 'pre-boot' VM customizations...
==> kube-worker-01: Booting VM...
==> kube-worker-01: Waiting for machine to boot. This may take a few minutes...
    kube-worker-01: SSH address: 127.0.0.1:2201
    kube-worker-01: SSH username: core
    kube-worker-01: SSH auth method: private key
==> kube-worker-01: Machine booted and ready!
==> kube-worker-01: Setting hostname...
==> kube-worker-01: Configuring and enabling network interfaces...
==> kube-worker-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-worker-02: Importing base box 'coreos-stable'...
==> kube-worker-02: Matching MAC address for NAT networking...
==> kube-worker-02: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-worker-02: Setting the name of the VM: kubernetes-coreos_kube-worker-02_1563756820615_25820
==> kube-worker-02: Fixed port collision for 22 => 2222. Now on port 2202.
==> kube-worker-02: Clearing any previously set network interfaces...
==> kube-worker-02: Preparing network interfaces based on configuration...
    kube-worker-02: Adapter 1: nat
    kube-worker-02: Adapter 2: hostonly
==> kube-worker-02: Forwarding ports...
    kube-worker-02: 22 (guest) => 2202 (host) (adapter 1)
==> kube-worker-02: Running 'pre-boot' VM customizations...
==> kube-worker-02: Booting VM...
==> kube-worker-02: Waiting for machine to boot. This may take a few minutes...
    kube-worker-02: SSH address: 127.0.0.1:2202
    kube-worker-02: SSH username: core
    kube-worker-02: SSH auth method: private key
==> kube-worker-02: Machine booted and ready!
==> kube-worker-02: Setting hostname...
==> kube-worker-02: Configuring and enabling network interfaces...
==> kube-worker-02: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names
 by default, this will change, but still be user configurable on deprecation. This feature will be removed in
version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
 [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details


PLAY [Bootstrap coreos hosts] **********************************************************************************

TASK [bootstrap/ansible-bootstrap : Check if Python is installed] **********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if install tar file exists] ******************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if pypy directory exists] ********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if libtinfo is simlinked] ********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Download PyPy] *************************************************************
changed: [kube-worker-02]
changed: [etcd-01]
changed: [kube-worker-01]
changed: [kube-master-01]

TASK [bootstrap/ansible-bootstrap : Extract PyPy] **************************************************************
changed: [kube-master-01]
changed: [kube-worker-02]
changed: [etcd-01]
changed: [kube-worker-01]

TASK [bootstrap/ansible-bootstrap : Install PyPy] **************************************************************
changed: [kube-worker-01]
changed: [etcd-01]
changed: [kube-master-01]
changed: [kube-worker-02]

TASK [bootstrap/ansible-bootstrap : Test if Python is installed] ***********************************************
ok: [kube-worker-01]
ok: [kube-worker-02]
ok: [kube-master-01]
ok: [etcd-01]

TASK [bootstrap/ansible-bootstrap : Gather ansible facts] ******************************************************
ok: [kube-worker-01]
ok: [kube-worker-02]
ok: [etcd-01]
ok: [kube-master-01]

PLAY [Create certificates for Kubernetes componentes] **********************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [kube-master-01]
ok: [kube-worker-01]
ok: [etcd-01]
ok: [kube-worker-02]

TASK [configure/ca : Create ca directory] **********************************************************************
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]
changed: [etcd-01 -> 127.0.0.1]
ok: [kube-master-01 -> 127.0.0.1]

TASK [configure/ca : Create CA root key] ***********************************************************************
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]

TASK [configure/ca : Create CA root certificate] ***************************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Add openssl configuration for Kuberentes API server] **************************************
changed: [kube-worker-02 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server key] *********************************************************
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server csr] *********************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server certificate] *************************************************
ok: [etcd-01 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Add openssl configuration for Kuberentes workers] *****************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes worker server key] ******************************************************
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)
changed: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create Kubernetes worker server csr] ******************************************************
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create Kubernetes worker certificate] *****************************************************
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
changed: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)
changed: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create cluster administrator key] *********************************************************
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]

TASK [configure/ca : Create cluster administrator csr] *********************************************************
changed: [etcd-01 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create cluster administrator certificate] *************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

PLAY [Configure etcd cluster] **********************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [etcd-01]

TASK [configure/etcd : create etcd service directory] **********************************************************
changed: [etcd-01]

TASK [configure/etcd : create etcd configuration] **************************************************************
changed: [etcd-01]

TASK [configure/etcd : add etcd unit file] *********************************************************************
changed: [etcd-01]

TASK [configure/etcd : start etcd service] *********************************************************************
changed: [etcd-01]

TASK [configure/etcd : Wait for port 2379 to listen] ***********************************************************
ok: [etcd-01]

RUNNING HANDLER [configure/etcd : restart etcd] ****************************************************************
changed: [etcd-01]

PLAY [Configure Kubernetes master node] ************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes master ssl directory] ******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Kubernetes master SSL resources] *********************************************
changed: [kube-master-01] => (item=ca.pem)
changed: [kube-master-01] => (item=apiserver.pem)
changed: [kube-master-01] => (item=apiserver-key.pem)

TASK [configure/kube-master : Create flannel configuration directory] ******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add flannel local configuration] *************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create flannel systemd configuration directory] **********************************
changed: [kube-master-01]

TASK [configure/kube-master : Add flannel systemd drop-in] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Docker systemd configuration directory] ***********************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Docker systemd drop-in (require flannel before starting)] ********************
changed: [kube-master-01]

TASK [configure/kube-master : Add kubelet service configuration] ***********************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes manifests directory] *******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-apiserver manifest] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-proxy manifest] *********************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-controller-manager manifest] ********************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-scheduler manifest] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico service configuration] ************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico policy controller pod] ************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes CNI directory] *************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico's CNI configuration] **************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Check if flannel pod network range configuration exists in etcd] *****************
ok: [kube-master-01]

TASK [configure/kube-master : Add flannel pod network range configuration to etcd] *****************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "content": "", "elapsed": 2, "msg": "Status code was -1 and not [201]: Request failed: <urlopen error [Errno 113] No route to host>", "redirected": false, "status": -1, "url": "http://10.0.0.101:2379/v2/keys/coreos.com/network/config"}

PLAY RECAP *****************************************************************************************************
etcd-01                    : ok=31   changed=12   unreachable=0    failed=0    skipped=0    rescued=0    ignored=4
kube-master-01             : ok=44   changed=33   unreachable=0    failed=1    skipped=0    rescued=0    ignored=4
kube-worker-01             : ok=24   changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=4
kube-worker-02             : ok=24   changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=4

make: *** [playbook] Error 2
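Here the failure is different: "No route to host" to 10.0.0.101:2379 means the control host never reached etcd's host-only address at all, so the flannel-config write could not happen. A sketch of the two checks worth running (the IP is taken from the log; the helper names and the example Network CIDR are placeholders, not values from this repo):

```shell
#!/usr/bin/env bash
# Two diagnostics for the "Add flannel pod network range configuration" failure.
etcd_version() {             # is etcd answering at all on the host-only IP?
  curl -s --max-time 2 "$1/version"
}
etcd_put_flannel_config() {  # the same etcd v2 write the Ansible uri task performs
  # placeholder CIDR -- use the Network value from your own inventory
  curl -s --max-time 2 -X PUT "$1/v2/keys/coreos.com/network/config" \
       --data-urlencode 'value={"Network":"10.2.0.0/16"}'
}
etcd_version "http://10.0.0.101:2379" || echo "etcd unreachable from this host"
```

If the version check fails, the problem is the host-only network itself (e.g. VirtualBox host-only interfaces created under sudo are not visible to the non-root session), not etcd, which matches running `make up` as root here.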

I ran the clean script and exited sudo mode:

make clean
exit    # exited sudo mode

then tried again in non-sudo mode and got a different error message:

--- learning-kubernetes/kubernetes-coreos ‹master› » make up
Bringing machine 'etcd-01' up with 'virtualbox' provider...
Bringing machine 'kube-master-01' up with 'virtualbox' provider...
Bringing machine 'kube-worker-01' up with 'virtualbox' provider...
Bringing machine 'kube-worker-02' up with 'virtualbox' provider...
==> etcd-01: Importing base box 'coreos-stable'...
==> etcd-01: Matching MAC address for NAT networking...
==> etcd-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> etcd-01: Setting the name of the VM: kubernetes-coreos_etcd-01_1563765285411_14758
==> etcd-01: Clearing any previously set network interfaces...
==> etcd-01: Preparing network interfaces based on configuration...
    etcd-01: Adapter 1: nat
    etcd-01: Adapter 2: hostonly
==> etcd-01: Forwarding ports...
    etcd-01: 22 (guest) => 2222 (host) (adapter 1)
==> etcd-01: Running 'pre-boot' VM customizations...
==> etcd-01: Booting VM...
==> etcd-01: Waiting for machine to boot. This may take a few minutes...
    etcd-01: SSH address: 127.0.0.1:2222
    etcd-01: SSH username: core
    etcd-01: SSH auth method: private key
==> etcd-01: Machine booted and ready!
==> etcd-01: Setting hostname...
==> etcd-01: Configuring and enabling network interfaces...
==> etcd-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-master-01: Importing base box 'coreos-stable'...
==> kube-master-01: Matching MAC address for NAT networking...
==> kube-master-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-master-01: Setting the name of the VM: kubernetes-coreos_kube-master-01_1563765306431_55659
==> kube-master-01: Fixed port collision for 22 => 2222. Now on port 2200.
==> kube-master-01: Clearing any previously set network interfaces...
==> kube-master-01: Preparing network interfaces based on configuration...
    kube-master-01: Adapter 1: nat
    kube-master-01: Adapter 2: hostonly
==> kube-master-01: Forwarding ports...
    kube-master-01: 22 (guest) => 2200 (host) (adapter 1)
==> kube-master-01: Running 'pre-boot' VM customizations...
==> kube-master-01: Booting VM...
==> kube-master-01: Waiting for machine to boot. This may take a few minutes...
    kube-master-01: SSH address: 127.0.0.1:2200
    kube-master-01: SSH username: core
    kube-master-01: SSH auth method: private key
==> kube-master-01: Machine booted and ready!
==> kube-master-01: Setting hostname...
==> kube-master-01: Configuring and enabling network interfaces...
==> kube-master-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-worker-01: Importing base box 'coreos-stable'...
==> kube-worker-01: Matching MAC address for NAT networking...
==> kube-worker-01: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-worker-01: Setting the name of the VM: kubernetes-coreos_kube-worker-01_1563765327560_94085
==> kube-worker-01: Fixed port collision for 22 => 2222. Now on port 2201.
==> kube-worker-01: Clearing any previously set network interfaces...
==> kube-worker-01: Preparing network interfaces based on configuration...
    kube-worker-01: Adapter 1: nat
    kube-worker-01: Adapter 2: hostonly
==> kube-worker-01: Forwarding ports...
    kube-worker-01: 22 (guest) => 2201 (host) (adapter 1)
==> kube-worker-01: Running 'pre-boot' VM customizations...
==> kube-worker-01: Booting VM...
==> kube-worker-01: Waiting for machine to boot. This may take a few minutes...
    kube-worker-01: SSH address: 127.0.0.1:2201
    kube-worker-01: SSH username: core
    kube-worker-01: SSH auth method: private key
==> kube-worker-01: Machine booted and ready!
==> kube-worker-01: Setting hostname...
==> kube-worker-01: Configuring and enabling network interfaces...
==> kube-worker-01: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
==> kube-worker-02: Importing base box 'coreos-stable'...
==> kube-worker-02: Matching MAC address for NAT networking...
==> kube-worker-02: Checking if box 'coreos-stable' version '2135.5.0' is up to date...
==> kube-worker-02: Setting the name of the VM: kubernetes-coreos_kube-worker-02_1563765348893_26543
==> kube-worker-02: Fixed port collision for 22 => 2222. Now on port 2202.
==> kube-worker-02: Clearing any previously set network interfaces...
==> kube-worker-02: Preparing network interfaces based on configuration...
    kube-worker-02: Adapter 1: nat
    kube-worker-02: Adapter 2: hostonly
==> kube-worker-02: Forwarding ports...
    kube-worker-02: 22 (guest) => 2202 (host) (adapter 1)
==> kube-worker-02: Running 'pre-boot' VM customizations...
==> kube-worker-02: Booting VM...
==> kube-worker-02: Waiting for machine to boot. This may take a few minutes...
    kube-worker-02: SSH address: 127.0.0.1:2202
    kube-worker-02: SSH username: core
    kube-worker-02: SSH auth method: private key
==> kube-worker-02: Machine booted and ready!
==> kube-worker-02: Setting hostname...
==> kube-worker-02: Configuring and enabling network interfaces...
==> kube-worker-02: [vagrant-hostmanager:guests] Updating hosts file on active guest virtual machines...
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names
 by default, this will change, but still be user configurable on deprecation. This feature will be removed in
version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
 [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details


PLAY [Bootstrap coreos hosts] **********************************************************************************

TASK [bootstrap/ansible-bootstrap : Check if Python is installed] **********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 127, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "bash: /home/core/bin/python: No such file or directory\r\n", "stdout_lines": ["bash: /home/core/bin/python: No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if install tar file exists] ******************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/tmp/pypy-6.0.0.tar.bz2': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if pypy directory exists] ********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Check if libtinfo is simlinked] ********************************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [etcd-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [kube-worker-01]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring
fatal: [kube-worker-02]: FAILED! => {"changed": false, "msg": "non-zero return code", "rc": 1, "stderr": "Connection to 127.0.0.1 closed.\r\n", "stderr_lines": ["Connection to 127.0.0.1 closed."], "stdout": "stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory\r\n", "stdout_lines": ["stat: cannot stat '/home/core/pypy/lib/libtinfo.so.5': No such file or directory"]}
...ignoring

TASK [bootstrap/ansible-bootstrap : Download PyPy] *************************************************************
changed: [kube-worker-02]
changed: [kube-master-01]
changed: [kube-worker-01]
changed: [etcd-01]

TASK [bootstrap/ansible-bootstrap : Extract PyPy] **************************************************************
changed: [kube-worker-02]
changed: [kube-master-01]
changed: [kube-worker-01]
changed: [etcd-01]

TASK [bootstrap/ansible-bootstrap : Install PyPy] **************************************************************
changed: [kube-worker-01]
changed: [kube-master-01]
changed: [etcd-01]
changed: [kube-worker-02]

TASK [bootstrap/ansible-bootstrap : Test if Python is installed] ***********************************************
ok: [kube-worker-02]
ok: [kube-master-01]
ok: [etcd-01]
ok: [kube-worker-01]

TASK [bootstrap/ansible-bootstrap : Gather ansible facts] ******************************************************
ok: [kube-worker-01]
ok: [kube-master-01]
ok: [etcd-01]
ok: [kube-worker-02]

PLAY [Create certificates for Kubernetes componentes] **********************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [kube-worker-01]
ok: [kube-master-01]
ok: [etcd-01]
ok: [kube-worker-02]

TASK [configure/ca : Create ca directory] **********************************************************************
changed: [kube-worker-01 -> 127.0.0.1]
ok: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create CA root key] ***********************************************************************
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create CA root certificate] ***************************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Add openssl configuration for Kuberentes API server] **************************************
changed: [kube-master-01 -> 127.0.0.1]
changed: [kube-worker-02 -> 127.0.0.1]
changed: [kube-worker-01 -> 127.0.0.1]
changed: [etcd-01 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server key] *********************************************************
ok: [kube-worker-01 -> 127.0.0.1]
changed: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server csr] *********************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes API server certificate] *************************************************
ok: [etcd-01 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Add openssl configuration for Kuberentes workers] *****************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create Kubernetes worker server key] ******************************************************
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)
changed: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create Kubernetes worker server csr] ******************************************************
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create Kubernetes worker certificate] *****************************************************
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [etcd-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-01)
ok: [kube-worker-01 -> 127.0.0.1] => (item=kube-worker-02)
changed: [etcd-01 -> 127.0.0.1] => (item=kube-worker-02)
changed: [kube-master-01 -> 127.0.0.1] => (item=kube-worker-02)
ok: [kube-worker-02 -> 127.0.0.1] => (item=kube-worker-02)

TASK [configure/ca : Create cluster administrator key] *********************************************************
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
changed: [kube-master-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create cluster administrator csr] *********************************************************
changed: [kube-master-01 -> 127.0.0.1]
ok: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

TASK [configure/ca : Create cluster administrator certificate] *************************************************
changed: [kube-master-01 -> 127.0.0.1]
changed: [etcd-01 -> 127.0.0.1]
ok: [kube-worker-01 -> 127.0.0.1]
ok: [kube-worker-02 -> 127.0.0.1]

PLAY [Configure etcd cluster] **********************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [etcd-01]

TASK [configure/etcd : create etcd service directory] **********************************************************
changed: [etcd-01]

TASK [configure/etcd : create etcd configuration] **************************************************************
changed: [etcd-01]

TASK [configure/etcd : add etcd unit file] *********************************************************************
changed: [etcd-01]

TASK [configure/etcd : start etcd service] *********************************************************************
changed: [etcd-01]

TASK [configure/etcd : Wait for port 2379 to listen] ***********************************************************
ok: [etcd-01]

RUNNING HANDLER [configure/etcd : restart etcd] ****************************************************************
changed: [etcd-01]

PLAY [Configure Kubernetes master node] ************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes master ssl directory] ******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Kubernetes master SSL resources] *********************************************
changed: [kube-master-01] => (item=ca.pem)
changed: [kube-master-01] => (item=apiserver.pem)
changed: [kube-master-01] => (item=apiserver-key.pem)

TASK [configure/kube-master : Create flannel configuration directory] ******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add flannel local configuration] *************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create flannel systemd configuration directory] **********************************
changed: [kube-master-01]

TASK [configure/kube-master : Add flannel systemd drop-in] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Docker systemd configuration directory] ***********************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Docker systemd drop-in (require flannel before starting)] ********************
changed: [kube-master-01]

TASK [configure/kube-master : Add kubelet service configuration] ***********************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes manifests directory] *******************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-apiserver manifest] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-proxy manifest] *********************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-controller-manager manifest] ********************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add kube-scheduler manifest] *****************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico service configuration] ************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico policy controller pod] ************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Create Kubernetes CNI directory] *************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Add Calico's CNI configuration] **************************************************
changed: [kube-master-01]

TASK [configure/kube-master : Check if flannel pod network range configuration exists in etcd] *****************
ok: [kube-master-01]

TASK [configure/kube-master : Add flannel pod network range configuration to etcd] *****************************
fatal: [kube-master-01]: FAILED! => {"changed": false, "content": "", "elapsed": 2, "msg": "Status code was -1 and not [201]: Request failed: <urlopen error [Errno 113] No route to host>", "redirected": false, "status": -1, "url": "http://10.0.0.101:2379/v2/keys/coreos.com/network/config"}

PLAY RECAP *****************************************************************************************************
etcd-01                    : ok=31   changed=13   unreachable=0    failed=0    skipped=0    rescued=0    ignored=4
kube-master-01             : ok=44   changed=34   unreachable=0    failed=1    skipped=0    rescued=0    ignored=4
kube-worker-01             : ok=24   changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=4
kube-worker-02             : ok=24   changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=4

make: *** [playbook] Error 2
