After installing 1.6.0 I am unable to access any logs #3212
Comments
I could view the pods and logs for a sample app deployment. Here are the steps I followed: once the pods came up, I opened the Dashboard and could see the pods and view their logs.
I see similar behavior on Windows 10, starting with Rancher Desktop 1.6.0, and also on 1.6.1 and 1.6.2. I'm using containerd instead of Moby, and I've seen this on Kubernetes versions 1.25.2 and 1.25.3.

I can consistently reproduce this if I have the dashboard open and then import YAML or run a kubectl apply (typically I'm importing a deployment). When I then select Execute Shell on the deployment page, or use Execute Shell or Show Logs from the pods page for the newly imported deployment or pods, I get a red "Disconnected" status message instead of a green "Connected" one: a shell never connects, and a log view stays empty. This only affects newly imported configurations; any deployments and pods that were already running when I opened the dashboard can still be shelled into, and their logs viewed.

While verifying this I also learned that if I scale a service I can view logs for / shell into down to 0 and then back up to 1, I get the same disconnected status on the new pod. Closing the dashboard and then reopening it through the systray icon restores the capability, and I can shell in and view logs again.
I've installed Rancher Desktop 1.7.0 and I am still seeing the same behavior when trying to view logs or execute a shell after importing a deployment or redeploying an existing one. After closing the dashboard and reopening it, this is what I get now when I try to view logs: instead of a status of "Disconnected", the status now shows "Connected".
Can confirm this happens to me too, since 1.6.0.
I could reproduce the issue with the steps below.

It's the same for me.
Still the same. Is anyone able to fix it?
Still seeing this with 1.9.0-tech-preview on Windows and Mac.
FYI, I see this issue with Rancher Desktop 1.10.0, on Windows 11 Home edition. If you need more details, please let me know.
Same thing on Fedora 38. Rancher Desktop 1.10.0 (AppImage). |
We need to get the Rancher shell integration in to resolve this, so we're pushing this out of the milestone since we're blocked. #2822 is the epic for that work.
This is still an issue on Windows 11, Rancher Desktop 1.16.0 using containerd. Unable to view logs for new pods without reopening the dashboard. Very damaging to the user experience. |
This issue is blocked until we get help from the Rancher Manager team to upgrade the Cluster Dashboard to the latest upstream version. |
What help is needed? 1.16.0 shipped not long ago, and this seems like such an important issue that it is hard to believe ten minor versions have passed without it being actively worked on. It's a shame, because Rancher is such a useful tool and I recommend it to everyone who wants an introduction to K8s, but Rancher Desktop is feature whiplash compared to the cluster version.
It needs somebody from the Rancher Manager team (what you call the cluster version, I think) to migrate the fork of the Cluster Dashboard to the latest upstream version. For various reasons this is not a trivial task, and the people working on Manager are too busy to help out on the Desktop project. The best way to get the latest dashboard running alongside Desktop is to install the full Rancher Manager via Docker or a Helm chart. Untested, but running it as a container should be something like
Should also work with … Adjust ports to whatever works for you. You will need to run …
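The command elided above was not recovered; for reference, a sketch along the lines of Rancher's documented single-node Docker install looks like the following. The ports, container name, and image tag here are illustrative assumptions, per the "adjust ports" advice above, and this is untested against the commenter's setup:

```shell
# Sketch of Rancher Manager's documented single-node Docker install.
# The host ports (8080/8443), container name, and :latest tag are
# assumptions; adjust them to whatever works in your environment.
docker run -d --name rancher \
  --restart=unless-stopped \
  --privileged \
  -p 8080:80 -p 8443:443 \
  rancher/rancher:latest

# The initial admin bootstrap password is printed in the container logs:
docker logs rancher 2>&1 | grep "Bootstrap Password:"
```

Once the container is up, the full Cluster Dashboard (latest upstream version) should be reachable on the mapped HTTPS port, e.g. https://localhost:8443 with the mapping above.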
Thanks @jandubois. I did play around with that ahead of your message, but at that rate I might as well run Rancher Manager on minikube or plain K8s and skip Rancher Desktop altogether. I've started looking around the repositories to see how hard it would be to give it a crack myself and offer a merge request.

In the meantime, I found that by going to "More Resources > Core > Pods", logs and shell were immediately available without having to relaunch the Cluster Dashboard, which feels a lot more usable. It's odd that this doesn't work through "Workload > Pods" but does from "More Resources > ...". This is what leads me to believe there could be a trivial fix for this bug while waiting on the Rancher Manager team to do a full upgrade to a later release.
This would be much appreciated! |
Unfortunately, the suggested workaround does not work for me. In addition to Rancher Desktop, I also have a Rancher RKE2 cluster, and I am experiencing the same issue there. However, there is a temporary solution on the RKE2 cluster: refreshing the page allows me to see logs and connect to the shell. For Rancher Desktop, the situation is different. I have to completely close and reopen the dashboard to access the logs or connect to the shell, which is less convenient than just refreshing. |
Actual Behavior
When I check the logs of a container inside a running pod, I do not see any logs. After reverting to 1.5.1, I can see the logs again.
Steps to Reproduce
Install or upgrade to Rancher Desktop 1.6.0 and check the logs of any running pod.
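Based on the more detailed reports in the comments, the reproduction can be driven from the command line; a minimal sketch, assuming kubectl points at the Rancher Desktop cluster and using a hypothetical deployment name and image not taken from the original report:

```shell
# Hypothetical minimal reproduction; the name "logdemo" and the nginx
# image are illustrative assumptions.
# 1. Open the Rancher Desktop Cluster Dashboard and leave it open.
# 2. With the dashboard open, import a new workload:
kubectl create deployment logdemo --image=nginx
kubectl rollout status deployment/logdemo
# 3. In the dashboard, go to Workload > Pods, select the new logdemo pod,
#    and choose "View Logs" or "Execute Shell". The reported behavior is
#    a red "Disconnected" status and an empty log pane until the dashboard
#    window is closed and reopened from the systray icon.
```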
Result
No errors are shown and the logs window is just blank.
Expected Behavior
Expected the logs window to have logs inside.
Additional Information
No response
Rancher Desktop Version
1.6.0
Rancher Desktop K8s Version
1.24.6
Which container engine are you using?
moby (docker cli)
What operating system are you using?
Windows
Operating System / Build Version
Windows 10 Pro Version 10.0.19044 Build 19044
What CPU architecture are you using?
x64
Linux only: what package format did you use to install Rancher Desktop?
No response
Windows User Only
N/A