After reboot second node appears #802
Replies: 8 comments
-
After typing "sudo k3os config" my master node gets Ready and the new one gets the status NotReady. I deleted the new one and now all pods come up. Rancher is available again. But can someone tell me what I'm doing wrong? Why do I get a second node after a reboot, and what's going on? After every reboot the second node appears again with status NotReady. I have to delete it every time.
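For reference, the clean-up described above looks roughly like this (a minimal sketch; the node name is a placeholder, use the NotReady one from the listing):

```sh
# List all registered nodes and their status; the duplicate shows up as NotReady
kubectl get nodes -o wide

# Remove the stale duplicate once the correct node is Ready again
# (placeholder name; use the NotReady node from the listing above)
kubectl delete node <stale-node-name>
```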
-
Does no one have an idea or a tip for me?
-
If I understand you correctly, it seems like the hostname of your node changed after that snapshot, causing the issue?
-
This is what it looks like to me. It is incongruous, however, that you have specified the …
-
That problem occurs after every restart of the VPS. The hostname is configured in the config.yaml and does not change after a reboot or snapshotting. But it seems like a problem with the config. After typing "sudo k3os config" the new node goes offline, the "original" upspot-cluster node comes up, and all is fine.
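One way to check whether the registered node name drifts from the configured one is to compare the live hostname with the value in the k3os config (a rough sketch; the config path below is the usual location on k3os installs and may differ on your setup):

```sh
# Hostname the node is currently registering with
hostname

# Hostname set in the k3os configuration (path may vary per install)
sudo grep hostname /var/lib/rancher/k3os/config.yaml

# Re-apply the configuration, as described above
sudo k3os config
```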
-
I was never able to get "exactly 2" master nodes to work in a cluster |
-
Hi @patrik-upspot, I happen to also host some k3os nodes on netcup and was seeing the same behavior when running with a minimal cloud-init configuration specifying only the hostname and SSH keys. I was able to pin the origin of this down to the default dynamic network configuration. Please find my redacted config below, showing parts of the cloud config file I use.
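A minimal sketch of such a cloud config with a static Connman network configuration (the hostname, SSH key, interface name, and addresses are placeholders rather than the original redacted values):

```yaml
hostname: my-k3os-node
ssh_authorized_keys:
- ssh-rsa AAAA... user@example
write_files:
- path: /var/lib/connman/default.config
  content: |-
    [service_eth0]
    Type=ethernet
    IPv4=203.0.113.10/255.255.255.0/203.0.113.1
    IPv6=off
    Nameservers=1.1.1.1 8.8.8.8
```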
I hope this helps.
-
Hey @t1murl,
-
Version (k3OS / kernel)
k3os version v0.21.1-k3s1r0
5.4.0-73-generic #82 SMP Thu Jun 3 02:29:43 UTC 2021
Architecture
x86_64
Describe the bug
Hello,
I have a big issue with k3OS. Yesterday I configured the cluster for the third time. Today I wanted to create a VM snapshot at my hoster netcup; to do this you have to stop the VPS, take a snapshot, and restart the VPS. After the restart my Rancher didn't come back.
After the reboot I have a second node and my correct node is NotReady. What can I do to start my "old" node and delete the new one? I'm very new to Kubernetes, so I'm not very familiar with all the commands.
Additional context
My k3os Config
Thanks for your help!