
Add the hypervisors to the inventory #2620

Open · wants to merge 1 commit into base: main

Conversation

pablintino
Collaborator

For use cases like ShiftOnStack, the deployment may need to tweak the hypervisor.
We did not have a way to tell the deployment how to reach the hypervisor, so this commit exposes the hypervisor's Ansible host to each host and creates a hypervisors group in the generated inventory.
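
For reference, a minimal sketch of what the generated inventory could look like after this change; only the `hypervisors` group and the `cifmw_hypervisor_host` hostvar are taken from this PR and its CI logs, while host names, groups and addresses are made up:

```yaml
# Hypothetical example inventory; everything except the `hypervisors` group
# and the `cifmw_hypervisor_host` hostvar is illustrative.
all:
  children:
    hypervisors:
      hosts:
        hypervisor-0:
          ansible_host: 203.0.113.5
          ansible_user: zuul
    controllers:
      hosts:
        controller-0:
          ansible_host: 192.168.122.10
          # each generated host now knows which hypervisor it runs on
          cifmw_hypervisor_host: hypervisor-0
```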

@pablintino pablintino requested a review from a team December 20, 2024 12:02
Contributor

openshift-ci bot commented Dec 20, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from pablintino. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

github-actions bot

Thanks for the PR! ❤️
I'm marking it as a draft. Once you're happy with it merging and the PR is passing CI, click the "Ready for review" button below.

@github-actions github-actions bot marked this pull request as draft December 20, 2024 12:02
@pablintino pablintino force-pushed the add-hypervisors-to-inventory branch from d3aa624 to 8462e7d on December 20, 2024 12:02
@pablintino pablintino marked this pull request as ready for review December 20, 2024 12:02
@hjensas
Contributor

hjensas commented Dec 20, 2024

So shift-stack needs to re-wire the datacenter networking, add/install more RAM or servers, and/or modify the routers or the DNS configuration? This seems like a separation-of-concerns problem? My worry here is that it will be harder to move this automation to a different platform (cloud or hardware) in the future.

Would it not be possible to pre-seed whatever is required in the hypervisor with the "module" that creates the infrastructure? (libvirt-manager)


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/1e87b2c68bc74710b55c0fdd58fe8ed2

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 29m 21s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 17m 11s
❌ cifmw-crc-podified-edpm-baremetal FAILURE in 40m 23s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 39s
✔️ cifmw-pod-pre-commit SUCCESS in 7m 57s
✔️ build-push-container-cifmw-client SUCCESS in 36m 06s
❌ cifmw-molecule-libvirt_manager FAILURE in 11m 33s
❌ cifmw-molecule-reproducer FAILURE in 8m 34s

@pablintino
Collaborator Author

pablintino commented Dec 20, 2024

> So shift-stack needs to re-wire the datacenter networking, add/install more RAM or servers, and/or modify the routers or the DNS configuration? This seems like a separation-of-concerns problem? My worry here is that it will be harder to move this automation to a different platform (cloud or hardware) in the future.
>
> Would it not be possible to pre-seed whatever is required in the hypervisor with the "module" that creates the infrastructure? (libvirt-manager)

@hjensas no, shift-stack needs to add a DNS entry on the hypervisor that can only be added after the payload OCP cluster is deployed, that is, after they deploy OpenShift on top of RHOSO.
Does it imply a separation-of-concerns problem? Sure, but I don't think there's a quick way to accomplish this in the framework other than this one.
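
For illustration, a minimal sketch of the kind of post-deployment step this change enables; only the `hypervisors` group comes from this PR, while the dnsmasq drop-in path and the `payload_*` variables are hypothetical:

```yaml
# Hypothetical playbook sketch; assumes the `hypervisors` inventory group
# added by this PR and made-up payload_* variables for the OCP cluster.
- name: Add the payload cluster DNS entry on the hypervisor
  hosts: hypervisors
  become: true
  tasks:
    - name: Point the payload cluster apps wildcard at its Ingress FIP
      ansible.builtin.lineinfile:
        path: /etc/dnsmasq.d/payload-ocp.conf
        line: "address=/.apps.{{ payload_cluster_name }}.{{ payload_base_domain }}/{{ payload_ingress_fip }}"
        create: true
        mode: "0644"
      notify: Restart dnsmasq

  handlers:
    - name: Restart dnsmasq
      ansible.builtin.service:
        name: dnsmasq
        state: restarted
```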

@pablintino pablintino closed this Dec 20, 2024
@pablintino pablintino reopened this Dec 20, 2024
@github-actions github-actions bot marked this pull request as draft December 20, 2024 14:20
github-actions bot

Thanks for the PR! ❤️
I'm marking it as a draft. Once you're happy with it merging and the PR is passing CI, click the "Ready for review" button below.

@hjensas
Contributor

hjensas commented Dec 20, 2024

> > So shift-stack needs to re-wire the datacenter networking, add/install more RAM or servers, and/or modify the routers or the DNS configuration? This seems like a separation-of-concerns problem? My worry here is that it will be harder to move this automation to a different platform (cloud or hardware) in the future.
> > Would it not be possible to pre-seed whatever is required in the hypervisor with the "module" that creates the infrastructure? (libvirt-manager)
>
> @hjensas no, shift-stack needs to add a DNS entry on the hypervisor that can only be added after the payload OCP cluster is deployed, that is, after they deploy OpenShift on top of RHOSO. Does it imply a separation-of-concerns problem? Sure, but I don't think there's a quick way to accomplish this in the framework other than this one.

ack, I am guessing, but this must be for the API and Ingress addresses on a provider network. It would be possible to provide the addresses in install-config, but they may have a valid reason to explicitly test without setting them statically.

/lgtm


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/02acabc3ab3741b6bdc3c938a1612ebe

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 32m 48s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 19m 02s
❌ cifmw-crc-podified-edpm-baremetal RETRY_LIMIT in 21m 05s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 43s
✔️ cifmw-pod-pre-commit SUCCESS in 7m 19s
✔️ build-push-container-cifmw-client SUCCESS in 38m 16s
❌ cifmw-molecule-libvirt_manager FAILURE in 12m 01s
❌ cifmw-molecule-reproducer FAILURE in 8m 33s

@eurijon
Contributor

eurijon commented Dec 24, 2024

It's been tested in #2596 and the hostname is correctly retrieved:

2024-12-24 05:31:23,460 p=25595 u=zuul n=ansible | TASK [shiftstack : Debug hypervisor ansible_host msg=hypervisor ansible_host: {{ hostvars[hostvars['controller-0']['cifmw_hypervisor_host']]['ansible_host'] }} ] ***
2024-12-24 05:31:23,460 p=25595 u=zuul n=ansible | Tuesday 24 December 2024  05:31:23 -0500 (0:00:00.376)       1:04:03.311 ****** 
2024-12-24 05:31:23,527 p=25595 u=zuul n=ansible | ok: [localhost] => 
  msg: 'hypervisor ansible_host: titan19.lab.eng.tlv2.redhat.com '
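
The task producing that output is presumably something along these lines (reconstructed from the log above, not copied from #2596):

```yaml
# Sketch reconstructed from the CI log; the actual task in the shiftstack
# role may differ in name and formatting.
- name: Debug hypervisor ansible_host
  ansible.builtin.debug:
    msg: >-
      hypervisor ansible_host:
      {{ hostvars[hostvars['controller-0']['cifmw_hypervisor_host']]['ansible_host'] }}
```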

For context, we set the API and Ingress FIPs in the install-config.yaml, but those FIPs are created before cluster provisioning (outside ci-framework). The FIPs could be reserved, but that requires additional logic in ci-framework for FIP management, and significant changes in the shift-on-stack automation too. On the other hand, the Ingress hostnames depend on the cluster name, which can be different for each cluster. That's why adding the DNS entry on the hypervisor after the shift-on-stack cluster deployment seems the most reasonable approach.
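
For readers unfamiliar with that part of the flow, the API and Ingress FIPs mentioned above are the ones set under the OpenStack platform section of install-config.yaml, roughly as below; values are placeholders and the exact fields may vary by OpenShift version:

```yaml
# Placeholder values; apiFloatingIP/ingressFloatingIP are the install-config
# fields used to attach pre-created FIPs to the API and *.apps endpoints.
platform:
  openstack:
    cloud: shiftstack
    externalNetwork: public
    apiFloatingIP: 203.0.113.10
    ingressFloatingIP: 203.0.113.11
```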

@pablintino pablintino force-pushed the add-hypervisors-to-inventory branch from 8462e7d to 17673dd on December 26, 2024 15:46
Contributor

openshift-ci bot commented Dec 26, 2024

New changes are detected. LGTM label has been removed.

@openshift-ci openshift-ci bot removed the lgtm label Dec 26, 2024