It is brittle to assume that I want my application to have the same SA/kubeconfig as the one I am using to interact with the cluster. I want the ability to use a specific service account for my pods/controller.
We can let users specify their SA in config.yaml and use that instead of the kubeconfig to authenticate.
Update - the recommended way of doing this on a k8s cluster is to attach a service account to a Kubernetes pod at creation and use config.load_incluster_config.
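For reference, a minimal sketch of the in-cluster path with the Python kubernetes client (assuming the pod was created with the SA attached):

```python
# Minimal sketch: in-cluster auth via the Python kubernetes client.
# Assumes the pod was created with a service account attached, so the
# token and CA cert are mounted at the standard in-pod paths.
from kubernetes import client, config

config.load_incluster_config()  # reads the mounted SA token and CA cert
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name)
```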
However, when the controller is running outside the cluster (e.g., on a GCP VM), we would need to use a static kubeconfig file (generated with a script similar to the one in the serve_k8s_playground branch) to authenticate.
Note that if the user wants to use their own SA, they need to provide the SA name, the token and the CA certificate in our config.yaml.
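For illustration, the config.yaml entries could look like the snippet below; the field names are placeholders, not a committed schema:

```yaml
# Hypothetical config.yaml fields for a user-provided SA
# (names are illustrative, not final):
kubernetes:
  service_account_name: my-controller-sa
  service_account_token: <base64-encoded SA token>
  ca_certificate: <base64-encoded CA cert>
```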
To keep things simple, I'm leaning towards always generating a static kubeconfig where necessary (e.g., if using exec-based auth). If the SA name, token, and CA certificate are specified in config.yaml, we use those; otherwise, we generate new ones.
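A minimal sketch of what generating such a static kubeconfig could look like; the function name and parameters are illustrative, and the actual script in the serve_k8s_playground branch may differ:

```python
# Sketch: build a static kubeconfig from SA credentials (field names
# are assumed, not the project's final schema).
import yaml


def build_static_kubeconfig(cluster_url: str, ca_cert_b64: str,
                            sa_name: str, token: str) -> str:
    """Return a kubeconfig YAML string that authenticates with a SA token."""
    kubeconfig = {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": "cluster",
            "cluster": {
                "server": cluster_url,
                "certificate-authority-data": ca_cert_b64,
            },
        }],
        "users": [{
            "name": sa_name,
            # Static bearer token, so no exec-based auth plugin is needed.
            "user": {"token": token},
        }],
        "contexts": [{
            "name": "context",
            "context": {"cluster": "cluster", "user": sa_name},
        }],
        "current-context": "context",
    }
    return yaml.safe_dump(kubeconfig)
```

The resulting file can be written to disk and passed anywhere a kubeconfig path is accepted, which avoids any dependency on the user's local exec-based credentials.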