datasciencepipelines component refactor #1340
base: feature-operator-refactor
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Codecov Report. Additional details and impacted files:

```
@@           Coverage Diff            @@
##   feature-operator-refactor  #1340  +/- ##
=============================================
  Coverage          ?          25.86%
=============================================
  Files             ?              55
  Lines             ?            4384
  Branches          ?               0
=============================================
  Hits              ?            1134
  Misses            ?            3111
  Partials          ?             139
```

☔ View full report in Codecov by Sentry.
You may also need to re-generate some additional metadata:
bundle/manifests/opendatahub-operator.clusterserviceversion.yaml
```go
func devFlags(ctx context.Context, rr *odhtypes.ReconciliationRequest) error {
	dsp, ok := rr.Instance.(*componentsv1.DataSciencePipelines)
	if !ok {
		return fmt.Errorf("resource instance %v is not a componentsv1.DataSciencePipelines", rr.Instance)
	}

	if dsp.Spec.DevFlags == nil {
		return nil
	}

	// Implement devflags support logic.
	// If dev flags are set, update the default manifests path.
	if len(dsp.Spec.DevFlags.Manifests) != 0 {
		manifestConfig := dsp.Spec.DevFlags.Manifests[0]
		if err := odhdeploy.DownloadManifests(ctx, componentsv1.DataSciencePipelinesComponentName, manifestConfig); err != nil {
			return err
		}
		if manifestConfig.SourcePath != "" {
			rr.Manifests[0].Path = odhdeploy.DefaultManifestPath
			rr.Manifests[0].ContextDir = componentsv1.DataSciencePipelinesComponentName
			rr.Manifests[0].SourcePath = manifestConfig.SourcePath
		}
	}

	return nil
}
```
Do we know if there is anything additional needed here? Looking at the old component implementation, I don't think so, but I am not sure.
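For reference, the override behaviour of `devFlags` can be distilled into a pure function. This is a stdlib-only sketch with simplified stand-in types (hypothetical; the real structs live in the operator's `odhtypes` and `componentsv1` packages), not the operator's actual implementation:

```go
package main

import "fmt"

// ManifestConfig and DevFlags are simplified stand-ins for the operator types.
type ManifestConfig struct {
	Path       string
	ContextDir string
	SourcePath string
}

type DevFlags struct {
	Manifests []ManifestConfig
}

// applyDevFlags mirrors the control flow of devFlags: when dev flags supply a
// manifest with a non-empty SourcePath, the default manifest entry is
// redirected to the downloaded copy; otherwise the defaults pass through.
func applyDevFlags(defaults []ManifestConfig, flags *DevFlags, componentName, defaultPath string) []ManifestConfig {
	if flags == nil || len(flags.Manifests) == 0 {
		return defaults
	}
	mc := flags.Manifests[0]
	if mc.SourcePath != "" {
		defaults[0].Path = defaultPath
		defaults[0].ContextDir = componentName
		defaults[0].SourcePath = mc.SourcePath
	}
	return defaults
}

func main() {
	defaults := []ManifestConfig{{Path: "/opt/manifests", ContextDir: "data-science-pipelines-operator", SourcePath: "base"}}
	flags := &DevFlags{Manifests: []ManifestConfig{{SourcePath: "overlays/dev"}}}
	out := applyDevFlags(defaults, flags, "data-science-pipelines-operator", "/opt/manifests")
	fmt.Println(out[0].SourcePath) // the dev overlay wins over the default
}
```

Note that only the first entry of `Manifests` is consulted, matching the `Manifests[0]` access in the action above.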
It seems this part https://github.com/opendatahub-io/opendatahub-operator/blob/incubation/components/datasciencepipelines/datasciencepipelines.go#L147-L168 has not been implemented. Am I wrong?
```go
func UnmanagedArgoWorkFlowExists(ctx context.Context, cli client.Client) error {
	workflowCRD := &apiextensionsv1.CustomResourceDefinition{}
	if err := cli.Get(ctx, client.ObjectKey{Name: ArgoWorkflowCRD}, workflowCRD); err != nil {
		if k8serr.IsNotFound(err) {
			return nil
		}
		return fmt.Errorf("failed to get existing Workflow CRD: %w", err)
	}
	// Verify if the existing workflow is deployed by ODH with label
	odhLabelValue, odhLabelExists := workflowCRD.Labels[labels.ODH.Component(componentsv1.DataSciencePipelinesComponentName)]
	if odhLabelExists && odhLabelValue == "true" {
		return nil
	}

	return fmt.Errorf("%s CRD already exists but not deployed by this operator. "+
		"Remove existing Argo workflows or set `spec.components.datasciencepipelines.managementState` to Removed to proceed", ArgoWorkflowCRD)
}

func SetExistingArgoCondition(conditions *[]conditionsv1.Condition, reason, message string) {
	status.SetCondition(conditions, string(status.CapabilityDSPv2Argo), reason, message, corev1.ConditionFalse)
	status.SetComponentCondition(conditions, componentsv1.DataSciencePipelinesComponentName, status.ReconcileFailed, message, corev1.ConditionFalse)
}
```
@lburgazzoli I added that here
But is that then used in the DSC controller? I'm a little confused about whether this is a DSP concern or a general DSC concern. In the first case, the DSP controller should set its own conditions, and the DSC controller can eventually copy them.
@VaishnaviHire do you remember what the logic is here?
@jackdelahunt the UnmanagedArgoWorkFlowExists is also called here: https://github.com/opendatahub-io/opendatahub-operator/blob/incubation/components/datasciencepipelines/datasciencepipelines.go#L108-L110
I am kind of confused, but here is what I think you were saying:
- UnmanagedArgoWorkFlowExists can be a new action in the DSP controller
- We can also set the first condition in the DSP controller
- And if that condition is there, we can set the component condition in the DSC controller
correct
New commit has those changes, with this logic translated into an action in the DSP controller, but I have some questions:

1. I have this at the top of the new action. In terms of upgrading, does this work the same way, or do things act a bit differently now that it is an action?

```go
// Check preconditions if this is an upgrade
if rr.Instance.GetStatus().Phase != status.PhaseReady {
	return nil
}
```

2. I set the CapabilityDSPv2Argo condition for the DSC in this action, which is what I think you were looking for. Is updating the DSC from a component okay? Just wondering what the rules are around component/DSC interactions.

3. The second part of SetExistingArgoCondition:

```go
status.SetComponentCondition(conditions, componentsv1.DataSciencePipelinesComponentName, status.ReconcileFailed, message, corev1.ConditionFalse)
```

I am not sure where to put this in the DSC. I think failing the new UnmanagedArgoWorkFlowExists action already causes the component's reconcile to be in a failed state, because it is erroring. So is this already done for us when we process the componentErrors? Or maybe things are acting differently, I am not sure.

This is my first PR on the operator, so my understanding is limited but hopefully growing 😃
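The ownership check that this precondition action hinges on can be distilled into a pure function. Below is a stdlib-only sketch; the signature, the label key, and the error text are simplified assumptions for illustration, not the operator's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// checkArgoOwnership distills the CRD ownership precondition: proceed when
// the Argo Workflow CRD is absent, or when it carries the operator's
// component label; otherwise return an error so the reconcile fails.
func checkArgoOwnership(crdFound bool, crdLabels map[string]string, odhLabelKey string) error {
	if !crdFound {
		return nil // no pre-existing Workflow CRD: safe to deploy DSP
	}
	if crdLabels[odhLabelKey] == "true" {
		return nil // the CRD was stamped by this operator: safe to proceed
	}
	return errors.New("workflows.argoproj.io CRD already exists but was not deployed by this operator")
}

func main() {
	// A foreign CRD without the ODH label (hypothetical key) fails the check.
	err := checkArgoOwnership(true, map[string]string{}, "component.opendatahub.io/data-science-pipelines-operator")
	fmt.Println(err != nil)
}
```

Running this as an action means the error surfaces through the component's reconcile status, which is why the component condition may indeed already be set for you when componentErrors are processed.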
@lburgazzoli rebasing and re-testing now
/retest
/retest
LGTM. We would probably need to add a test for the preconditions step, but that can be done as a follow-up. I'll let @zdtsw do the final review.
```go
DataSciencePipelinesComponentName = "data-science-pipelines-operator"
// value should match what's set in the XValidation below
DataSciencePipelinesInstanceName = "default-datasciencepipelines"
DataSciencePipelinesKind         = "DataSciencePipelines"
```
do we want to use plural?
```go
// @@ -71,3 +76,11 @@ type DataSciencePipelinesList struct {
func init() {
	SchemeBuilder.Register(&DataSciencePipelines{}, &DataSciencePipelinesList{})
}

// DSCDataSciencePipelines contains all the configuration exposed in DSC instance for DataSciencePipelines component
type DSCDataSciencePipelines struct {
```
Same here.
```go
}

if err = tc.testCtx.wait(func(ctx context.Context) (bool, error) {
	// Verify ray CR is deleted
```
:D
```go
})

if err != nil {
	return fmt.Errorf("unable to find Ray CR instance: %w", err)
```
:D
```go
// @@ -115,6 +115,11 @@ package datasciencecluster
// +kubebuilder:rbac:groups="user.openshift.io",resources=users,verbs=list;watch;patch;delete;get
// +kubebuilder:rbac:groups="console.openshift.io",resources=consolelinks,verbs=create;get;patch;delete

// DataSciencePipelines
```
line 218
```diff
@@ -662,10 +656,10 @@ func (r *DataScienceClusterReconciler) getRequestName(ctx context.Context) (stri
 // argoWorkflowCRDPredicates filters the delete events to trigger reconcile when Argo Workflow CRD is deleted.
 var argoWorkflowCRDPredicates = predicate.Funcs{
 	DeleteFunc: func(e event.DeleteEvent) bool {
-		if e.Object.GetName() == datasciencepipelines.ArgoWorkflowCRD {
+		if e.Object.GetName() == datasciencepipelinesctrl.ArgoWorkflowCRD {
```
I am thinking: shouldn't this argoWorkflowCRDPredicates be moved into the component, instead of living in the DSC controller?
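The predicate's filtering logic itself is tiny and self-contained, so moving it would mostly be a matter of constructing it next to the DSP controller's watch setup. A stdlib-only sketch with stand-in types (hypothetical; the real shapes are controller-runtime's event.DeleteEvent and predicate.Funcs):

```go
package main

import "fmt"

const ArgoWorkflowCRD = "workflows.argoproj.io"

// DeleteEvent and Funcs are simplified stand-ins for controller-runtime's
// event and predicate types.
type DeleteEvent struct{ Name string }

type Funcs struct {
	DeleteFunc func(DeleteEvent) bool
}

// argoWorkflowCRDPredicate triggers a reconcile only when the Argo Workflow
// CRD itself is deleted; deletions of any other CRD are filtered out.
var argoWorkflowCRDPredicate = Funcs{
	DeleteFunc: func(e DeleteEvent) bool {
		return e.Name == ArgoWorkflowCRD
	},
}

func main() {
	fmt.Println(argoWorkflowCRDPredicate.DeleteFunc(DeleteEvent{Name: ArgoWorkflowCRD}))
	fmt.Println(argoWorkflowCRDPredicate.DeleteFunc(DeleteEvent{Name: "routes.route.openshift.io"}))
}
```

If it were owned by the DSP component, the component's builder would pass this predicate to its own Watches() registration rather than the DSC controller doing so.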
Description
How Has This Been Tested?
Screenshot or short clip
Merge criteria