Fails on kube-master-01 #4
BTW, I was able to complete the deployment manually. I copied the YAMLs into the apiserver container and applied them, did a hello-world and the guestbook deployments, then SSH port-forwarded the dashboard and can connect to it. Although the cluster is up and running, if I run "make up" again, it still fails.
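Roughly, the manual workaround looks like this (the container ID, the manifest path, and the dashboard port are placeholders/assumptions, not exact values from the repository):

# On the host: open a shell on the master VM.
vagrant ssh kube-master-01

# On the VM: copy the manifests into the apiserver container and apply them
# from inside it (container ID and /addons path are placeholders).
docker cp addons/. <apiserver-container-id>:/addons/
docker exec -it <apiserver-container-id> ./kubectl apply -f /addons/

# Back on the host: forward a local port to the dashboard over SSH
# (9090 is an assumption about how the dashboard is exposed on the node).
vagrant ssh kube-master-01 -- -L 9090:127.0.0.1:9090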
For the etcd error, did you pull the latest version of the code? I had some people tell me that they had a similar error, and I pushed some corrective measures in 2a4f1a1. If this doesn't solve your problem, would you share the output of the Ansible execution concerning the etcd failure? The kubectl error makes me think that the API Server is not ready at the time the kubectl requests are made. This might be caused by a slow internet connection. The kubectl requests are made on your computer, not on the VMs or in the containers (see https://github.com/sebiwi/kubernetes-coreos/blob/master/kubernetes.yml#L45). After it fails, log into the kube-master-01 instance (
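To check that hypothesis from the host (not the VM), something along these lines should tell you whether the API Server is answering yet before re-running make up; the master IP 10.0.0.111 and the insecure port 8080 below are assumptions on my side, not confirmed values from the playbook:

# Poll the API server health endpoint until it responds, then re-run make up.
until curl -sf http://10.0.0.111:8080/healthz; do
  echo "API server not ready yet, retrying..."
  sleep 5
done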
I cloned the commit "Update etcd2 to etcd3 \o/" (committed by @sebiwi on Jan 13). When I first executed make up on my Mac I got an error.
After this I executed the clean script, switched to sudo, and installed the vagrant-hostmanager plugin, and got this output.
Then I ran the clean script, exited sudo mode, tried again in non-sudo mode, and got a different error message.
I tried to install; it first failed on verifying etcd. The output in stdout was [], but if I went into the Vagrant VM and ran etcdctl ls I saw the output. Seeing that etcd worked, I tried increasing the delay but had no luck, so I set ignore_errors: yes and the script continued.
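For reference, the comparison between the two sides looks roughly like this (the VM name used for the etcd check and the etcd client endpoint are assumptions, not the exact values from the playbook):

# Inside the VM: etcd answers, as described above.
vagrant ssh kube-master-01 -c "etcdctl ls /"

# From the host: query etcd's health endpoint directly. If this is refused
# while the in-VM check works, the verification task is probably hitting the
# wrong endpoint (or hitting it too early) rather than etcd being down.
curl -s http://10.0.0.111:2379/health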
But it now fails with
TASK [configure/kube-components : Add add-on files] **********************************************
ok: [kube-master-01 -> 127.0.0.1] => (item=dns-addon)
ok: [kube-master-01 -> 127.0.0.1] => (item=kube-dashboard-rc)
ok: [kube-master-01 -> 127.0.0.1] => (item=kube-dashboard-svc)
TASK [configure/kube-components : Verify if kube-system resources already exist] *****************
fatal: [kube-master-01 -> 127.0.0.1]: FAILED! => {"changed": false, "cmd": ["kubectl", "get", "pods", "--namespace=kube-system"], "delta": "0:00:00.073467", "end": "2018-04-01 23:25:43.422224", "msg": "non-zero return code", "rc": 1, "start": "2018-04-01 23:25:43.348757", "stderr": "The connection to the server localhost:8080 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server localhost:8080 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}
I logged in to the master and figured out that kubectl is available inside the containers, not in the VM itself.
~/kubernetes-coreos$ vagrant ssh kube-master-01
Last login: Mon Apr 2 12:26:12 UTC 2018 from 10.0.2.2 on pts/0
Container Linux by CoreOS stable (1632.3.0)
core@kube-master-01 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
247bebe4498d quay.io/calico/leader-elector:v0.1.0 "/run.sh --electio..." 13 hours ago Up 9 hours k8s_leader-elector.89250a02_calico-policy-controller-10.0.0.111_calico-system_57776fd3fc1ccde1f510665cb3600f3a_beddb16e
ed82c9226edc quay.io/coreos/hyperkube:v1.4.3_coreos.0 "/hyperkube apiser..." 13 hours ago Up 9 hours k8s_kube-apiserver.cee357c9_kube-apiserver-10.0.0.111_kube-system_23224f421b9ec012c8ee55d8bb1a898b_96b3e6fe
735f04159975 quay.io/coreos/hyperkube:v1.4.3_coreos.0 "/hyperkube proxy ..." 13 hours ago Up 9 hours k8s_kube-proxy.117567ac_kube-proxy-10.0.0.111_kube-system_deca3459fe418db4856c979bbdc2fe90_88a9e395
51c2c19fcf9d quay.io/coreos/hyperkube:v1.4.3_coreos.0 "/hyperkube contro..." 13 hours ago Up 9 hours k8s_kube-controller-manager.cad716ee_kube-controller-manager-10.0.0.111_kube-system_10187fd76486b908be61d347f9b64570_384a32d9
82415e8c790a quay.io/coreos/hyperkube:v1.4.3_coreos.0 "/hyperkube schedu..." 13 hours ago Up 9 hours k8s_kube-scheduler.eb9698df_kube-scheduler-10.0.0.111_kube-system_85527cbed8cc57c3e7194c0c4f48fc5e_dc648cad
480fe67f8263 calico/kube-policy-controller:v0.2.0 "/dist/controller" 13 hours ago Up 9 hours k8s_k8s-policy-controller.a60b4aaa_calico-policy-controller-10.0.0.111_calico-system_57776fd3fc1ccde1f510665cb3600f3a_e887ab0b
9e06b8eb7929 gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_kube-apiserver-10.0.0.111_kube-system_23224f421b9ec012c8ee55d8bb1a898b_233eebc2
8e011f0e434c gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_kube-proxy-10.0.0.111_kube-system_deca3459fe418db4856c979bbdc2fe90_5a2e31a3
7d8153cdf4be gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_kube-controller-manager-10.0.0.111_kube-system_10187fd76486b908be61d347f9b64570_1033266b
68aeb46be376 gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_kube-scheduler-10.0.0.111_kube-system_85527cbed8cc57c3e7194c0c4f48fc5e_5b60154c
1ff8d0d36730 gcr.io/google_containers/pause-amd64:3.0 "/pause" 13 hours ago Up 13 hours k8s_POD.d8dbe16c_calico-policy-controller-10.0.0.111_calico-system_57776fd3fc1ccde1f510665cb3600f3a_45dd53f2
core@kube-master-01 ~ $ docker exec -it ed8 bash
root@kube-master-01:/# ./kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
kube-apiserver-10.0.0.111 1/1 Running 0 13h
kube-controller-manager-10.0.0.111 1/1 Running 0 13h
kube-proxy-10.0.0.111 1/1 Running 0 13h
kube-proxy-10.0.0.121 1/1 Running 0 13h
kube-proxy-10.0.0.122 1/1 Running 0 13h
kube-proxy-10.0.0.123 1/1 Running 0 9h
kube-proxy-10.0.0.124 1/1 Running 0 9h
kube-scheduler-10.0.0.111 1/1 Running 0 13h
root@kube-master-01:/#