svclb-whoareyou-service shows pending state #1

Open
nvoskuilen opened this issue Dec 23, 2019 · 12 comments
@nvoskuilen

I followed your great tutorial on Linux, but all of the svclb-whoareyou-service-* pods show a 0/1 Pending state.

root@debian10:~# kubectl get pods --all-namespaces
NAMESPACE        NAME                                      READY   STATUS      RESTARTS   AGE
default          svclb-whoareyou-service-jkt2z             0/1     Pending     0          121m
default          svclb-whoareyou-service-2vrvk             0/1     Pending     0          121m
default          svclb-whoareyou-service-8dl44             0/1     Pending     0          121m
...

Here is the description of one of the pending pods

root@debian10:~# kubectl describe pod svclb-whoareyou-service-jkt2z
Name:           svclb-whoareyou-service-jkt2z
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=svclb-whoareyou-service
                controller-revision-hash=5f6bdb4cbb
                pod-template-generation=1
                svccontroller.k3s.cattle.io/svcname=whoareyou-service
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  DaemonSet/svclb-whoareyou-service
Containers:
  lb-port-80:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       80/TCP
    Host Port:  80/TCP
    Environment:
      SRC_PORT:    80
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IP:     10.43.134.66
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sgsqz (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-sgsqz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sgsqz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/4 nodes are available: 3 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match node selector.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/4 nodes are available: 3 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match node selector.
  Warning  FailedScheduling  <unknown>  default-scheduler  0/4 nodes are available: 3 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match node selector.

Anyway, curling the Traefik IP multiple times shows that different containers respond, so I assume things are working as they should, but leaving pods in a Pending state is a bit messy, right?
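The FailedScheduling events above point at a hostPort conflict: each svclb pod (klipper-lb) asks for hostPort 80 on its node, and the scheduler reports no node has it free. A rough way to see which pods already claim host ports, assuming a working kubeconfig, is a jsonpath query like this sketch:

```shell
# List every pod that declares a hostPort, tab-separated as
# namespace / pod name / host port(s). Pods claiming 80 or 443
# (e.g. Traefik or the svclb pods themselves) will show up here.
kubectl get pods --all-namespaces \
  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' \
  | awk -F'\t' '$3 != "" { print }'
```

This is only a diagnostic sketch; it does not fix the scheduling, it just shows who holds the ports.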

@arashkaffamanesh
Owner

@nvoskuilen thanks for testing on Linux.
Yeah, that's a bit messy. I don't know yet why it happens or what this warning really means:

Warning  FailedScheduling  <unknown>  default-scheduler  0/4 nodes are available: 3 node(s) didn't have free ports for the requested pod ports, 3 node(s) didn't match node selector.

But it works and that's the good thing for now :-)
I'll see how to get around this and let you know.

@nvoskuilen
Author

Did you spot the pending pods as well on your mac deploy or is it on my end only?

I did search a bit further which led me to this issue in k3d repo: k3d-io/k3d#104
It might have something to do with Traefik keeping ports 80/443 occupied.

@nvoskuilen
Author

One must disable servicelb. From the k3s docs:
To disable the embedded load balancer, run the server with the --no-deploy servicelb option. This is necessary if you wish to run a different load balancer, such as MetalLB.

No more pending pods with MetalLB
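Putting the quoted docs into a command, a minimal sketch of installing k3s with the embedded load balancer disabled (using the flag named above; on newer k3s releases the equivalent spelling is --disable servicelb) would look like:

```shell
# Install k3s without the bundled klipper-lb service load balancer,
# so MetalLB (or another LB) can handle LoadBalancer services instead.
curl -sfL https://get.k3s.io | sh -s - --no-deploy servicelb
```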

@edwint88
Contributor

edwint88 commented May 16, 2020

Hey guys, great project. I am on macOS too. I followed the steps until I got stuck at kubectl get svc | grep whoareyou-service | awk '{ print $4 }', which comes back as pending.

I use

k3d version                                            
k3d version v1.7.0
k3s version latest

(and soon they will release 3.0.0), where there is no --no-deploy servicelb option. Is there another way to disable the load balancer?

@edwint88
Contributor

OK, so my solution was to delete the Traefik service from the cluster.
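A sketch of that workaround: k3s deploys Traefik into the kube-system namespace, so deleting its service releases the LoadBalancer ports. Note the bundled Helm chart may recreate it later; disabling the component at cluster creation is the cleaner fix.

```shell
# Delete the traefik service that k3s deploys into kube-system.
# This frees ports 80/443, but the k3s Helm controller may redeploy it.
kubectl --namespace kube-system delete service traefik
```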

@arashkaffamanesh
Owner

I'll have a look; things might have changed.
To be honest, k3d is nice, but I'd highly recommend going with this multipass k3s implementation:
https://github.com/arashkaffamanesh/multipass-k3s-rancher
A related blog post is here:
https://blog.kubernauts.io/k3s-with-metallb-on-multipass-vms-ac2b37298589

@edwint88
Contributor

I know, but for that I would need Linux or a VM, and for the moment I'm trying to avoid that.

@arashkaffamanesh
Owner

@edwint88 are you on Windows?
k3d has performance problems due to Docker; you won't get lucky with it in the long term :-)

@edwint88
Contributor

edwint88 commented May 16, 2020

@edwint88 are you on Windows?
k3d has performance problems due to Docker; you won't get lucky with it in the long term :-)

macOS, and for local testing (2-3 local clusters) k3d should do for now.

@wpatton

wpatton commented Oct 21, 2020

I also wanted to figure out how to keep the lb pods from hanging in a Pending state. I am running k3d on openSUSE/Docker. I started a k3d cluster with this command line:

k3d cluster create test-cluster -v /data:/data --port 8082:80@loadbalancer -s 1 -a 2 --k3s-server-arg "--disable=servicelb"

The key was disabling the servicelb, NOT disabling Traefik. Now it all works as expected.
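To confirm the fix took effect after creating the cluster with --disable=servicelb, one can check that no svclb daemonset pods exist and nothing is stuck Pending. A small verification sketch:

```shell
# Any pod still stuck Pending? (empty output means none)
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# The svclb-* daemonsets should be gone entirely.
kubectl get daemonsets --all-namespaces | grep svclb || echo "no svclb daemonsets"
```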

@arashkaffamanesh
Owner

@wpatton very nice, thanks for the great hint.
Running k3s in a namespace in k3s is fun as well:
https://github.com/kubernauts/bonsai/tree/master/addons/k3s-in-k3s

If anyone would like to join us for the talk, we'd love to have you:
https://www.meetup.com/kubernauts/events/273234449/

Greetings!
