Make Kubernetes specific env variables available when joining a cluster via SSH #1
Conversation
Thanks for adding this feature @hemildesai! There have been some updates in the `ssh-port-forward-beta1` branch. Would you mind merging and making the necessary changes?
@hemildesai Are we having the env var issue only with the port-forward approach, and the nodeport approach doesn't have the issue? I'm asking since it seems like the PR only resolves it for the port-forward approach by running `kubectl exec $POD_NAME ...` in `kubernetes-port-forward-proxy-command.yaml.j2`, which is only run with the port-forward approach.
The issue exists for both SSH approaches. Let me see a way to use it in the nodeport approach as well.
@landscapepainter Added support for nodeport in da940ff
@hemildesai Thanks for adding support for NodePort mode as well! The k8s specific env vars are now visible for both modes when accessing the instances through the jump pod. Left some comments.

Let's merge the `ssh-port-forward-beta1` branch into this PR. Please note that some refactoring was done on `k8s_cloud_beta1`, and the change is reflected in `ssh-port-forward-beta1`.
```shell
POD_NAME=$(kubectl get pods --selector=skypilot-cluster={{ cluster_name }},ray-node-type=head -o jsonpath='{.items[0].metadata.name}' 2> /dev/null)
if ! [ -z "$POD_NAME" ]; then
    kubectl exec $POD_NAME -- bash -c "printenv > /home/sky/.k8s_env" 2> /dev/null
```
This approach, with the symlink we create between `.k8s_env` and `/etc/environment`, currently overwrites what's stored in `/etc/environment` each time the user SSHes into the k8s instance. The user may have set a system-wide env var for their own use in `/etc/environment`, and that value will get wiped when running `ssh <cluster-name>` with the current implementation. Is there a way to preserve the env vars users set in `/etc/environment`?
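One possible direction (a hedged sketch, not the PR's code: the helper name and file paths are invented for illustration) is to append only the pod env vars whose names are not already present in the target file, leaving user-set entries untouched:

```shell
# Hypothetical helper: merge KEY=VALUE lines from an env dump into a target
# file, skipping any variable name the target already defines. This keeps
# user-set entries in /etc/environment intact while adding the k8s vars.
merge_env() {
    # $1: env dump (e.g. /home/sky/.k8s_env), $2: target (e.g. /etc/environment)
    while IFS='=' read -r name value; do
        grep -q "^${name}=" "$2" || echo "${name}=${value}" >> "$2"
    done < "$1"
}
```

Values containing newlines would need more care; a null-delimited dump (`printenv --null`) read in a loop is one option.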
Can we use `/etc/profile.d/skypilot_k8s_envvars.sh` to store these environment variables? Reference
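As a sketch of what that could look like (the function name is an assumption; the paths follow the suggestion above), the `printenv` dump could be rendered into `export` statements that login shells pick up:

```shell
# Hypothetical helper: rewrite printenv-style KEY=VALUE lines as
# single-quoted export statements suitable for a /etc/profile.d script.
# Values containing single quotes are not handled in this sketch.
render_k8s_env_profile() {
    # $1: env dump file, $2: output profile.d script
    sed -e "s/^/export /" -e "s/=/='/" -e "s/\$/'/" "$1" > "$2"
}

# e.g. render_k8s_env_profile /home/sky/.k8s_env /etc/profile.d/skypilot_k8s_envvars.sh
```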
We should also check if `_update_envs_for_k8s` is still required after this (since we'd populate env vars on every SSH connection).
```python
if ssh_setup_mode == 'nodeport':
    proxy_command = 'ssh -tt -i {privkey} -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -p {ingress} -W %h:%p sky@{ipaddress}'.format(
        privkey=PRIVATE_SSH_KEY_PATH, ingress=ingress, ipaddress=ipaddress)
proxy_command = 'ssh -tt -i {privkey} -o PermitLocalCommand=yes -o LocalCommand=\'{port_forward_proxy_cmd_path}\' -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -p {ingress} -W %h:%p sky@{ipaddress}'.format(
```
Can we factor the script for injecting k8s specific env vars out of `port_forward_proxy_cmd`? We can write a separate script for handling env vars and use that script as `LocalCommand` for both `NodePort` and `port-forward` modes.
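A rough sketch of such a shared script (the function name and arguments are assumptions, not the PR's actual identifiers; the selector mirrors the template snippet earlier in this thread):

```shell
# Hypothetical standalone env-injection script usable as LocalCommand in
# both NodePort and port-forward modes: find the head pod and dump its
# environment to a file the SSH session can read.
inject_k8s_env() {
    # $1: cluster name, $2: destination path for the env dump
    local pod
    pod=$(kubectl get pods --selector="skypilot-cluster=$1,ray-node-type=head" \
        -o jsonpath='{.items[0].metadata.name}' 2> /dev/null)
    if [ -n "$pod" ]; then
        kubectl exec "$pod" -- bash -c "printenv > $2" 2> /dev/null
    fi
}
```

Keeping this in one place would let both proxy-command branches pass the same script path to `-o LocalCommand=...`.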
Thanks @hemildesai! After you have merged the latest changes from upstream (which include GPU support), it would be great to test and see if this PR resolves skypilot-org#2453.
This enables K8s specific env vars to be available in an SSH session. It does this by:

- Running `kubectl exec {pod_name} -- env` as part of the SSH proxy command and saving the env vars to the `.k8s_env` file

Tested (run the relevant ones):

- `bash format.sh`
- `pytest tests/test_smoke.py`
- `pytest tests/test_smoke.py::test_fill_in_the_name`
- `bash tests/backward_comaptibility_tests.sh`