OSD-28131: Deploying COO in place of OBO on SC clusters #667

Draft · wants to merge 1 commit into main
36 changes: 35 additions & 1 deletion deploy/olm/syncselector-template.yaml
@@ -50,7 +50,6 @@ objects:
operator: In
values:
- management-cluster
- service-cluster
Collaborator:
I believe this will have to be a step-by-step approach.

First remove it here, then clean up the CSVs, then install COO.

Author:

Nope, COO can be installed in parallel, or even before OBO is uninstalled.
Of course, while OBO is installed the COO install will fail... but it will succeed as soon as OBO is uninstalled, that is to say once the OBO Subscription and CSV have been removed.

Collaborator (@apahim, Feb 4, 2025):

Yeah, ok, if that’s not noise for SRE, then it’s fine.

Another topic… does it still make sense to control the installation from here? I believe OSDFM is the right choice for installing components in SCs/MCs, no?

Author:

Yeah, it won't be noise for us. We don't check whether COO or OBO is running correctly; maybe we should check that, but we don't for now. Note that the Prometheus pods created from the MonitoringStack object will be replaced when the new operator is finally installed, but the outage should be minimal (a few seconds).

> Another topic… does it still make sense to control the installation from here? I believe OSDFM is the right choice for installing components in SCs/MCs, no?

That's a good point: while the uninstallation of OBO must be done through this repository, it is probably better to have the COO deployment controlled through OSDFM.

Author:

Ok, I will add the subscription in this file:
https://gitlab.cee.redhat.com/service/osd-fleet-manager/-/blob/main/config/resources/service-cluster.yaml
Speaking of that: shouldn't we get rid of this template file once ALL clusters use COO in place of OBO? I have always found it strange to have a file in the operator codebase dictating how the operator should be deployed on the infrastructure. To me, these concerns should be decoupled; note that this operator is not the only operator suffering from this lack of boundaries.
WDYT?
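
For illustration, a minimal sketch of what such a Subscription entry could look like in OSDFM's service-cluster.yaml, reusing the spec already added to this template below; the surrounding file structure, and whether the resource requests/limits would also be carried over, are assumptions:

```yaml
# Hypothetical addition to osd-fleet-manager config/resources/service-cluster.yaml,
# mirroring the Subscription defined in this SelectorSyncSet template.
- apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: cluster-observability-operator
    namespace: openshift-operators
  spec:
    channel: development            # only channel available for now
    name: cluster-observability-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace
```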

Collaborator:

Agreed. The attention point is the Namespace in the first SSS. If we delete it, even for a few seconds, the MonitoringStack CR in the SCs/MCs will be removed, as well as the Prometheus and Alertmanager instances, and monitoring will be down for ROSA HCP :)

We will have to change the SSS to "Upsert" first, then remove it, making sure it's also deleted from Hive (delete: true in the saas-file target).

Then we can control the Namespace from somewhere else, like the OSDFM.
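
A rough sketch of those two steps, assuming the usual app-interface saas-file layout; the SSS name, saas-file paths, and target reference are placeholders:

```yaml
# Step 1: switch the SSS owning the Namespace to Upsert so that removing
# resources from the template later does not garbage-collect them in-cluster.
- apiVersion: hive.openshift.io/v1
  kind: SelectorSyncSet
  metadata:
    name: observability-operator        # placeholder: the first SSS from this template
  spec:
    resourceApplyMode: Upsert           # was: Sync
---
# Step 2 (saas-file target, placeholder paths): once the SSS is dropped from the
# template, mark the target as deleted so it is also removed from Hive.
resourceTemplates:
- name: observability-operator
  targets:
  - namespace:
      $ref: /services/osd-operators/namespaces/hive-production.yml   # placeholder
    delete: true
```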

resourceApplyMode: Sync
resources:
- apiVersion: operators.coreos.com/v1alpha1
@@ -94,6 +93,41 @@ objects:
requests:
cpu: ${RESOURCE_REQUEST_CPU}
memory: ${RESOURCE_REQUEST_MEMORY}
- apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
name: cluster-observability-operator-hypershift
spec:
clusterDeploymentSelector:
matchLabels:
api.openshift.com/managed: 'true'
matchExpressions:
- key: ext-hypershift.openshift.io/cluster-type
operator: In
values:
- service-cluster
resourceApplyMode: Sync
resources:
- apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
labels:
operators.coreos.com/cluster-observability-operator.openshift-operators: ""
name: cluster-observability-operator
namespace: openshift-operators
spec:
channel: development # This is the only channel available for now - To be replaced with ${CHANNEL} when possible
name: cluster-observability-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
config:
resources:
limits:
cpu: ${RESOURCE_LIMIT_CPU}
memory: ${RESOURCE_LIMIT_MEMORY}
requests:
cpu: ${RESOURCE_REQUEST_CPU}
memory: ${RESOURCE_REQUEST_MEMORY}
- apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata: