In this document you will find useful information for debugging.
If you would like to SSH to a host and your cluster is running on AWS, where the nodes are not exposed to the public network, you can SSH to the hosts through an SSH bastion pod. See this repository for more details.
- Make sure you are logged in to OCP or have exported KUBECONFIG.
- Optional step (openshift-ssh-bastion is used by default):
  export SSH_BASTION_NAMESPACE=openshift-ssh-bastion
- Run the following command to deploy the bastion pod:
  curl https://raw.githubusercontent.com/eparis/ssh-bastion/master/deploy/deploy.sh | bash
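  As a quick sanity check (assuming you kept the default namespace; adjust -n if you overrode SSH_BASTION_NAMESPACE), you can verify that the bastion pod is up before continuing:
  oc get pods -n openshift-ssh-bastion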
- The bastion address can be found by running:
  oc get service -n openshift-ssh-bastion ssh-bastion -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
  The address is also printed out by the previous command.
- Find the IP of the node you would like to connect to by running:
  oc get node -o wide
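  If you only need the internal IP of a single node, a jsonpath query such as the following should also work (the node name is a placeholder):
  oc get node <node-name> -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'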
- Connect to the node with:
  ssh -i ~/.ssh/openshift-dev.pem -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -i ~/.ssh/openshift-dev.pem -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@$(oc get service --all-namespaces -l run=ssh-bastion -o jsonpath="{.items[0].status.loadBalancer.ingress[0].hostname}")' core@<node-ip> "sudo -i"
  If you stored the openshift-dev.pem key in a different location, or used a different key during OCP deployment, change the path accordingly.
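  To make that one-liner easier to read and reuse, here is a small sketch using shell variables (the key path is assumed to be the same; substitute your own node IP):
  BASTION=$(oc get service --all-namespaces -l run=ssh-bastion -o jsonpath="{.items[0].status.loadBalancer.ingress[0].hostname}")
  NODE_IP=<node-ip>   # from "oc get node -o wide"
  ssh -i ~/.ssh/openshift-dev.pem -t \
    -o StrictHostKeyChecking=no \
    -o ProxyCommand="ssh -A -i ~/.ssh/openshift-dev.pem -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@${BASTION}" \
    core@${NODE_IP} "sudo -i"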
If you don't need to connect via SSH, you can use the oc debug command:
oc debug node/NODE_NAME
This will create a temporary debug pod on the specified node. Once you are inside the pod, run the following command:
chroot /host
Now you can start running commands on the node.
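For example (assuming a Red Hat Enterprise Linux CoreOS node, where these tools are available by default), you might inspect the kubelet and the running containers:
systemctl status kubelet
journalctl -u kubelet --since "10 minutes ago"
crictl ps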