Is the Repo per app scenario possible with gitops-connector? #73
Comments
You need to have multiple instances of gitops-connector on your clusters, one instance per application/repo.
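For illustration, a minimal sketch of what that looks like in practice, assuming one helm release per application, each with its own values file (all repo/org names below are placeholders, using the same keys the chart exposes):

```yaml
# values-app1.yaml -- placeholder values; one file (and one helm release) per application
gitRepositoryType: AZDO
ciCdOrchestratorType: AZDO
gitOpsOperatorType: FLUX
azdoGitOpsRepoName: app1-manifests
azdoOrgUrl: https://dev.azure.com/my-org/my-project
azdoPrRepoName: app1
gitOpsAppURL: https://dev.azure.com/my-org/my-project/_git/app1
orchestratorPAT: <PAT>
```

A second application would then get its own release with a values-app2.yaml pointing at its own repos.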
@eedorenko would a k8s operator approach be feasible here? To clarify, I mean one installation of gitops-connector (operator + custom Connection CRD) which then handles multiple subscribe-to-notification/publish-to-specific-repo Connection resources.
@eedorenko I've added support for multiple configs in the one instance via a CRD, if you are interested. It still supports the original env config via the helm values style of setup, albeit it is a breaking change due to the values restructuring. The switch between the modes of operation is determined by a helm values setting. Whilst it works for ArgoCD notifications as-is, I haven't tested with Flux as my environment isn't set up for it. It's not a great deal of work and I can explain if this goes further. My fork is here; let me know if you want a PR opened.
@markphillips100 Very interesting, I will try your approach by doing a test with Flux. I will get back to you as soon as possible.
@cyberjpb1 For the FluxV2 use case, I imagine we would need to make use of the Alert's eventMetadata.
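Roughly along these lines, assuming a generic Provider pointing at the connector and an Alert per application; the eventMetadata key and the webhook path are guesses on my part, not taken from the fork:

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: gitops-connector
  namespace: flux-system
spec:
  type: generic
  address: http://gitops-connector:8080/gitopsphase  # endpoint path assumed from the upstream setup
---
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: app1-gitops-connector
  namespace: flux-system
spec:
  providerRef:
    name: gitops-connector
  eventSources:
    - kind: Kustomization
      name: app1
  eventMetadata:
    # hypothetical key used to tell the connector which repo the event belongs to
    gitRepositoryUrl: https://dev.azure.com/my-org/my-project/_git/app1-manifests
```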
@cyberjpb1 I checked a previous PR you opened for insight into the eventMetadata, and that gave me enough info to put a configuration together. So in theory, the following should suffice:
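For reference, a rough sketch of the resource I have in mind; the API version and the spec field names are my guesses (they simply mirror the existing helm/env values), not necessarily what the fork ends up using:

```yaml
apiVersion: example.com/v1        # group from the original chart; the version is a guess
kind: GitOpsConfig
metadata:
  name: app1
  namespace: gitops-connector
spec:
  # field names mirror the existing env-style helm values and are assumptions, not verified
  gitRepositoryType: AZDO
  ciCdOrchestratorType: AZDO
  gitOpsOperatorType: FLUX
  azdoGitOpsRepoName: app1-manifests
  azdoOrgUrl: https://dev.azure.com/my-org/my-project
  azdoPrRepoName: app1
  gitOpsAppURL: https://dev.azure.com/my-org/my-project/_git/app1
  orchestratorPAT: <PAT or a reference to a secret>
```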
NOTE: The helm chart creates a service account, role and role binding to support the connector watching and updating the gitopsconfig resource. The operator also automatically patches a finalizer into the resource (hence the updating) to ensure that, when it is deleted, a proper cleanup occurs before the manifest is removed from the cluster.
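For completeness, a minimal sketch of the kind of RBAC involved; the resource plural, group, and object names here are assumptions for illustration, not copied from the chart:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitops-connector-operator      # name is illustrative only
rules:
  - apiGroups: ["example.com"]         # group used by the GitOpsConfig CRD
    resources: ["gitopsconfigs"]       # plural assumed from the Kind
    verbs: ["get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitops-connector-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitops-connector-operator
subjects:
  - kind: ServiceAccount
    name: gitops-connector             # service account name is illustrative only
```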
@markphillips100 Hello, sorry for the delay in my response, but my schedule did not allow me to do a test before today. I'm using Red Hat CodeReady Containers (crc) for my tests. Here is the log when the pod starts:
[2024-11-24 00:21:37 +0000] [7] [INFO] Starting gunicorn 20.0.4
The last warning in your logs would point at some role/role-binding issue for the service account the operator is configured to run with. Maybe take a look there to ensure the watch permission is available on the gitopsconfig custom resource.
The helm chart should be setting that up for you here.
Yes, I use your Helm chart; the only thing I changed is replacing "example.com" with "apps-crc.testing".
Yeah, I was noticing that but wouldn't have thought it would be a problem... prior to the edit :-)
Did you by any chance install the helm chart prior to making the api group changes? Just wondering if there are now 2 CRDs for the same Kind. The 3 event handlers here aren't currently checking for apiVersion, so it would be just chance which one is used if more than one CRD group exists. These handlers would need to change (and probably should anyway) to include the group name.
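In other words, checking whether the cluster now has two CRDs declaring the same Kind under different groups, something like this (trimmed, illustrative only):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gitopsconfigs.example.com       # from the chart as originally installed
spec:
  group: example.com
  names:
    kind: GitOpsConfig
    plural: gitopsconfigs
  scope: Namespaced
  # versions omitted for brevity
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gitopsconfigs.apps-crc.testing  # after editing the group
spec:
  group: apps-crc.testing
  names:
    kind: GitOpsConfig
    plural: gitopsconfigs
  scope: Namespaced
  # versions omitted for brevity
```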
It's as if the callback was never sent.
Can you delete the operator pod (without uninstalling/reinstalling the helm chart) and confirm whether the logs still show this warning, please? It's possibly a race condition, with permissions not set by the time the pod comes up, so a restart of the pod should negate that.
OK, I deleted the pod and here is the log of the new pod:
[2024-11-24 02:36:45 +0000] [7] [INFO] Starting gunicorn 20.0.4
So still a permission issue by the looks of it, preventing the operator from watching the gitopsconfig. I would expect to see some DEBUG lines pertaining to the gitopsconfig resource. The other messages in the previous log are just the raw event data coming from Flux via the Alert. Any chance you can revert to a clean install of the original example.com CRD, just so we can rule that out completely?
OK, I'll try that tomorrow.
No problem.
You were right, it was an old version of the GitOpsConfig that was stuck. Now I have this, which keeps coming back in a loop in the log (I have hidden the sensitive values):
DEBUG:root:_should_update_abandoned_pr. should_update: False
This is the abandoned PR status reconciliation, which is the same as the original gitops-connector code; at least I don't recall making changes here other than adding more logging.
This statement indicates the PR was closed (abandoned) longer ago than the 72-hour constant defined in the code, so further processing of the PR's status is skipped. Unfortunately, there is no way in Azure DevOps to delete abandoned PRs.
Is the Repo per app scenario possible with gitops-connector?
Currently I have a repo for each application which also contains the manifests.
If the answer to my question is yes, how do I configure it?
gitRepositoryType: AZDO
ciCdOrchestratorType: AZDO
gitOpsOperatorType: FLUX
azdoGitOpsRepoName:
azdoOrgUrl:
azdoPrRepoName:
gitOpsAppURL:
orchestratorPAT: