Cockpit fails to launch in pods #20829
Note, see this comment showing how cockpit still doesn't work in pods, but at least it doesn't thrash the system anymore.
cockpit will then run various services to administer this server. I don't think we want this running on our main server, and not as a local custom service either.
I wonder if it's mostly because we never merged ManageIQ/manageiq-pods#97.
To be more accurate, we don't have apache in the pods either, so that makes sense.
So I dug into this with @jrafanie, and this is how it works, more or less:

The cockpit integration is more or less a remote console where users can proxy cockpit traffic to some other machine (in our case a Vm, Host, or a ContainerNode) through the appliance that has the cockpit role. When someone turns on the cockpit role, a thread is started (it used to be a full-blown worker, but now it's just a thread). The thread eventually checks that Apache is available [1], but that's mostly not important anymore because we have the apache config baked into our appliance [2]. In the past that configuration was actually dynamically generated, but not any longer. The apache check is what is currently failing in pods, since there is no apache there. Eventually the thread will try to start the cockpit web service (cockpit-ws).

In the ManageIQ UI, a button is tied to the cockpit of a Vm, Node, or ContainerNode by presenting a URL that routes through the appliance's cockpit proxy.

So, overall, the cockpit integration is, IMO, a glorified remote console; instead of binary console traffic, it's cockpit https traffic being proxied through the appliance that has the cockpit role. The rationale for it makes sense and was described in #12506.
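To make that flow concrete, here's a minimal Ruby sketch of the shape of it. The class name, role name, and exact commands are my assumptions for illustration, not the actual MiqCockpit code:

```ruby
require "open3"

# Hypothetical condensation of the flow described above -- not the real
# MiqCockpit implementation.
class CockpitWsThread
  def self.start(server)
    # Only runs on the appliance that has the cockpit role (role name assumed).
    return unless server.has_active_role?("cockpit_ws")

    Thread.new { new.run }
  end

  def run
    # The mostly-vestigial apache check -- this is the step that blows up in
    # pods, where apachectl doesn't exist.
    _out, _err, status = Open3.capture3("apachectl", "configtest")
    raise "apache is unavailable" unless status.success?

    # Start the cockpit web service; the baked-in apache config then proxies
    # the UI's cockpit URL through this appliance to the target machine.
    system("cockpit-ws", "--no-tls", "--port", "9090")
  end
end
```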
I sort of lump this together conceptually with other remote consoles. So, the question is: do we keep it or remove it? And if we keep it, how do we do this in podified? In my opinion, since I see it as just another remote console, whatever decision we make probably has similar rationales as keeping or removing the other remote consoles. If we keep it, I think we should either bake this into the remote console worker instead of the manageiq-orchestrator, where it currently lives as a thread, or expose it as a separate worker that the httpd container can route to (a rough sketch of that option follows below). Either way, that would allow us to keep some parity between podified and appliances. We will probably also need to investigate if…
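For the "separate worker" option, something like the following hypothetical sketch; the class name and container hook are assumptions, not existing core APIs:

```ruby
# Hypothetical worker wrapping cockpit-ws, so the orchestrator can manage it
# like any other worker and the httpd container/pod can route traffic to it.
class MiqCockpitProxyWorker < MiqWorker
  self.required_roles = ["cockpit_ws"] # role name assumed

  def self.supports_container?
    true # would still need a Service/route on the pods side to expose it
  end
end
```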
Just to clarify, …
This issue has been automatically marked as stale because it has not been updated for at least 3 months. If you can still reproduce this issue on the current release or on master, please reply with any information you have about it in order to keep the issue open. Thank you for all your contributions! More information about the ManageIQ triage process can be found in the triage process documentation.
I'm not sure if this works in appliances, but it fails in pods after #20827 and #20823. I'm opening this here since the problematic code is in core.
I think it's failing in code that's expecting access to apachectl, or apache in general: lib/miq_cockpit.rb, line 46 (at commit 90c2323).
The monitor thread launches, but the runner seems to fail. Thankfully, it doesn't look like it constantly restarts.
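As a guess at the failure mode (this is an illustrative stand-in, not the actual contents of lib/miq_cockpit.rb):

```ruby
# In Ruby, `system` returns nil when the binary can't be exec'd at all,
# which is what happens in the pods, where apachectl simply isn't installed.
def apache_available?
  !!system("apachectl", "configtest", out: File::NULL, err: File::NULL)
end
```

If the surrounding runner treats that false result as fatal, you'd see exactly this behavior: the monitor thread launches, then the runner dies without restarting in a loop.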
Related to ManageIQ/manageiq-pods#531 and ManageIQ/manageiq-pods#595.