Is your feature request related to a problem? Please describe.
Right now we are not keeping the entire history of the CR status in the Nexus object; what we have is:
```go
// NexusStatus defines the observed state of Nexus
// +k8s:openapi-gen=true
type NexusStatus struct {
	// Condition status for the Nexus deployment
	// +operator-sdk:gen-csv:customresourcedefinitions.statusDescriptors=true
	// +operator-sdk:gen-csv:customresourcedefinitions.statusDescriptors.displayName="appsv1.DeploymentStatus"
	DeploymentStatus v1.DeploymentStatus `json:"deploymentStatus,omitempty"`

	// Will be "OK" when this Nexus instance is up
	// +operator-sdk:gen-csv:customresourcedefinitions.statusDescriptors=true
	NexusStatus NexusStatusType `json:"nexusStatus,omitempty"`

	// Gives more information about a failure status
	// +operator-sdk:gen-csv:customresourcedefinitions.statusDescriptors=true
	Reason string `json:"reason,omitempty"`

	// Route for external service access
	// +operator-sdk:gen-csv:customresourcedefinitions.statusDescriptors=true
	NexusRoute string `json:"nexusRoute,omitempty"`

	// Conditions reached during an update
	// +listType=atomic
	// +operator-sdk:gen-csv:customresourcedefinitions.statusDescriptors=true
	// +operator-sdk:gen-csv:customresourcedefinitions.statusDescriptors.displayName="Update Conditions"
	UpdateConditions []string `json:"updateConditions,omitempty"`

	// ServerOperationsStatus describes the general status for the operations performed in the Nexus server instance
	ServerOperationsStatus OperationsStatus `json:"serverOperationsStatus,omitempty"`
}
```
We should keep up to n previous conditions available; appending without end here will just clutter up the output of `kubectl describe` and other tools. We should remove older entries when adding new ones once this capacity is reached. This seems to be the behavior of native resources, at least.
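A minimal sketch of that capping behavior, assuming we keep a plain string slice like the current `UpdateConditions` (the helper and the `maxConditions` parameter are hypothetical, nothing like this exists in the operator yet):

```go
// appendWithCap is a hypothetical helper: it appends a new condition
// entry and drops the oldest ones once the history grows past
// maxConditions, so the slice never grows without bound.
func appendWithCap(history []string, condition string, maxConditions int) []string {
	history = append(history, condition)
	if len(history) > maxConditions {
		history = history[len(history)-maxConditions:]
	}
	return history
}
```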
This is a bit subjective, but IMO it makes sense to simply overwrite the timestamp of the most recent condition if we're about to report the exact same condition again. The rationale is that if all entries in the conditions slice are identical (imagine a failure loop), we can extract little information from it and its purpose is defeated.
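A rough sketch of that overwrite behavior, assuming the entries carry a timestamp the way the upstream `metav1.Condition` type does (the `appendOrRefresh` helper is made up for illustration):

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// appendOrRefresh (hypothetical): if the incoming condition is identical
// to the most recent entry apart from its timestamp, refresh that entry's
// timestamp instead of appending a duplicate; otherwise append normally.
func appendOrRefresh(history []metav1.Condition, incoming metav1.Condition) []metav1.Condition {
	if n := len(history); n > 0 {
		last := &history[n-1]
		if last.Type == incoming.Type &&
			last.Status == incoming.Status &&
			last.Reason == incoming.Reason &&
			last.Message == incoming.Message {
			last.LastTransitionTime = metav1.Now()
			return history
		}
	}
	return append(history, incoming)
}
```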
> this is a bit subjective, but IMO it makes sense to simply overwrite the timestamp of the most recent condition if we're about to report the same exact condition again
Yup. The idea is to have each condition in the array with its latest status, with the status set to True for the states we are currently in. It's a state-diagram-like implementation.
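For reference, the apimachinery condition helpers already implement roughly that model: `meta.SetStatusCondition` keeps one entry per condition type and only bumps `LastTransitionTime` when the status value actually changes. A sketch (the `Deployed`/`Updating` condition types here are made up for illustration, not taken from the operator):

```go
import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setNexusConditions shows the one-entry-per-type model: each call
// updates the matching entry in place rather than appending history.
func setNexusConditions(conditions *[]metav1.Condition) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:    "Deployed", // hypothetical condition type
		Status:  metav1.ConditionTrue,
		Reason:  "DeploymentAvailable",
		Message: "Nexus deployment is available",
	})
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:    "Updating", // hypothetical condition type
		Status:  metav1.ConditionFalse,
		Reason:  "NoUpdateInProgress",
		Message: "no update in progress",
	})
}
```

The trade-off versus a capped history is that this model records no past transitions at all, only the latest status and timestamp per condition type.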
Although it is interesting to reflect the internal `Deployment` status, ideally we would carry the conditions array ourselves. See an example: https://medium.com/swlh/advanced-kubernetes-operators-development-988edad5f58a (Set Status Conditions section)

Describe the solution you'd like

To add the `Status.Conditions[]` field to the Nexus CR.
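A minimal sketch of what that could look like, assuming we adopt the upstream `metav1.Condition` type from `k8s.io/apimachinery/pkg/apis/meta/v1` (the field placement and markers are illustrative):

```go
type NexusStatus struct {
	// ...existing fields...

	// Conditions describe the state transitions of this Nexus instance
	// +listType=map
	// +listMapKey=type
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}
```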
Describe alternatives you've considered

Right now we have only the latest "condition" described in our CR.
Additional context
The article linked above gives a glimpse of this implementation, but we can also look at Knative CRs for other references.