Current namespace not used when creating daskcluster in k8s #921
Comments
It looks like you're trying to create the cluster without setting a namespace. Could you try setting it explicitly when creating the spec?

```python
spec = make_cluster_spec(
    name="test-cluster",
    namespace="myns",
)
```
@jacobtomlinson The operator is actually in the same namespace (`myns`). My use case is that I don't know the namespace name in advance, but I would like to create a cluster without having to specify it:

```python
spec = make_cluster_spec(
    name="test-cluster",
)
cluster = KubeCluster(
    custom_cluster_spec=spec,
)
# Will use `myns` in 2024.5.0
# Will use `default` in 2024.8.0
```
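Not from the thread, but a possible workaround sketch under the same assumption that the code runs inside a Pod: Kubernetes mounts the Pod's namespace at a well-known path, so it can be read and passed to `make_cluster_spec` explicitly. The helper name `current_namespace` is hypothetical.

```python
from pathlib import Path

from dask_kubernetes.operator import KubeCluster, make_cluster_spec

# Kubernetes mounts the Pod's Service Account details at this path by default.
SA_NAMESPACE_FILE = Path("/var/run/secrets/kubernetes.io/serviceaccount/namespace")


def current_namespace(fallback: str = "default") -> str:
    """Return the namespace this Pod runs in, or `fallback` outside a cluster."""
    if SA_NAMESPACE_FILE.exists():
        return SA_NAMESPACE_FILE.read_text().strip()
    return fallback


spec = make_cluster_spec(
    name="test-cluster",
    namespace=current_namespace(),  # pass the detected namespace explicitly
)
cluster = KubeCluster(custom_cluster_spec=spec)
```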
If you don't set a namespace explicitly then the current namespace from your Kubernetes config should be used, which follows the same behaviour as `kubectl`. But I see that the expectation here is that if the operator is installed in a single namespace then all clusters should be created in that namespace. You said you are running your code inside a Pod. Which namespace is that Pod running in?
All the pods (the pod from where I run this code to contact the operator + the operator itself) are in the same namespace (`myns`). But good point regarding the update to comply with kubectl behaviour 🤔 I was just curious why this worked in the prior version, but it seems it was not intentional. In any case, thanks for your support!
It's likely this was just some unintended behaviour that changed at some point, see Hyrum's Law. That being said, I don't think it's unreasonable to dig into this further, as I am a little surprised by this behaviour. My next question would be: how are you authenticating with the Kubernetes API from within your Pod? Are you using a Service Account, or are you storing credentials in a kubeconfig file?
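For illustration only (this is not what dask-kubernetes itself calls internally), the two authentication modes being asked about look roughly like this with the official `kubernetes` Python client:

```python
from kubernetes import config

try:
    # Inside a Pod: uses the mounted Service Account token and CA bundle.
    config.load_incluster_config()
    print("Authenticated via the Pod's Service Account")
except config.ConfigException:
    # Outside a cluster (or with no token mounted): fall back to a kubeconfig,
    # e.g. credentials stored in ~/.kube/config.
    config.load_kube_config()
    print("Authenticated via a kubeconfig file")
```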
Ok I just confirmed that `kubectl` picks up the Pod's namespace when running in-cluster with a Service Account.

Reproducer

Create a new namespace

```console
$ kubectl create namespace foo
namespace "foo" created
```

Run an interactive Pod in that namespace

```console
$ kubectl run --image ubuntu --namespace foo --rm -it -- bash
```

Install kubectl

```console
# apt update && apt install curl -y && curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" && chmod +x kubectl && mv kubectl /usr/local/bin/
...
```

Try to list Pods

```console
# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:foo:default" cannot list resource "pods" in API group "" in the namespace "foo"
```

We can see that when we try to list the Pods it gives us a permission error, but it's defaulting to the `foo` namespace that the Pod is running in, which is the behaviour we want.
It looks like this is a bug in `kr8s`, which dask-kubernetes uses under the hood.

Reproducer

Create a new namespace

```console
$ kubectl create namespace foo
namespace "foo" created
```

Run an interactive Pod in that namespace

```console
$ kubectl run python --image python --namespace foo --rm -it -- bash
```

Install kr8s

```console
# pip install kr8s
...
```

Try to list Pods

```console
# python -c 'import kr8s; kr8s.get("pods")'
...
kr8s._exceptions.ServerError: pods is forbidden: User "system:serviceaccount:foo:default" cannot list resource "pods" in API group "" in the namespace "default"
```

Here the request goes to the `default` namespace rather than `foo`, so kr8s isn't picking up the namespace from the Pod's Service Account.
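A quick way to see the mismatch from inside the Pod; this sketch assumes the kr8s `Api` object exposes its resolved default namespace as an `api.namespace` attribute:

```python
from pathlib import Path

import kr8s

# The namespace Kubernetes says this Pod is running in.
sa_namespace = Path(
    "/var/run/secrets/kubernetes.io/serviceaccount/namespace"
).read_text().strip()

# The namespace kr8s has resolved as its default.
api = kr8s.api()
print(f"Service Account namespace: {sa_namespace}")
print(f"kr8s default namespace:    {api.namespace}")
# Per the reproducer above, these would show 'foo' and 'default' respectively.
```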
xref kr8s-org/kr8s#532
@jacobtomlinson Thanks for the investigation! I'll follow your bug report on kr8s.

I used a GKE cluster for my tests, with a service account that has the proper permissions within the `myns` namespace.
Describe the issue:
When using a Dask operator deployment in k8s with the role/rolebinding defined at the namespace level (`rbac.cluster: false`), the creation of a `daskclusters.kubernetes.dask.org` resource by a service account (`dask` in the example) inside a pod within a namespace (`myns` in the example) leads to the following error:

Short Error Message:
Full Stacktrace:
Minimal Complete Verifiable Example:
Running this inside a pod:
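The original snippet didn't survive this copy of the issue; based on the code shared later in the thread, it was presumably along these lines (a reconstruction, not the verbatim original):

```python
from dask_kubernetes.operator import KubeCluster, make_cluster_spec

# No namespace is passed: the expectation is that the Pod's current
# namespace (`myns`) is used, as it was in 2024.5.0.
spec = make_cluster_spec(name="test-cluster")
cluster = KubeCluster(custom_cluster_spec=spec)
```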
Anything else we need to know?:
When running the exact same test with the `2024.5.0` version it works fine, so I think this is due to a change made in the `2024.8.0` release, since it stops working from that version onwards.

To make this work with `2024.8.0` or later, I need to set the `namespace` option when instantiating the `KubeCluster` (but I don't know the namespace in advance in my use case):

Environment: