Error during kubeadm init #7
Hi Darlyn,
Please use the private IP here, at all costs; advertising the public IP widens the cluster's attack surface.
A few more things to check (a consolidated sketch follows this list):
- Confirm the container runtime (containerd) is running; if it is not, restart it, otherwise check the container status with crictl ps.
- Check cat /etc/containerd/config.toml | grep SystemdCgroup; if the systemd cgroup driver is not enabled, set SystemdCgroup = true and restart the containerd service.
- In the meantime, also verify that all the static pod manifest .yaml files are present under /etc/kubernetes/manifests.
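To make those checks concrete, here is a minimal shell sketch, assuming containerd as the runtime with its default config at /etc/containerd/config.toml; <PRIVATE_IP> is a placeholder for the master node's private address, so adjust it for your environment:

```bash
# Check that the container runtime (containerd) is running; restart it if it is not.
systemctl is-active containerd || sudo systemctl restart containerd

# List all Kubernetes containers and their status via crictl.
sudo crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a

# Check whether containerd is configured for the systemd cgroup driver.
grep SystemdCgroup /etc/containerd/config.toml

# If it shows "SystemdCgroup = false" (or nothing), enable it and restart containerd.
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd

# Confirm the control-plane static pod manifests exist.
ls /etc/kubernetes/manifests

# Clear the partial first attempt, then re-run init advertising the private IP.
sudo kubeadm reset -f
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=<PRIVATE_IP> --node-name master
```

If a control-plane container keeps crashing after this, the crictl logs command quoted in the error output below will show why.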
…On Mon, Nov 25, 2024 at 3:07 AM Darlyn ***@***.***> wrote:
I am following the Day 27 steps for setting up a Kubernetes cluster; however, when I try to run:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<my public IP of the master node> --node-name master
I receive the following error:
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I am unable to locate the reason for this error; no step before this resulted in an error.
Thanks for the help!
--
Thanks & Regards
Jeyendran Sundaram