Is the Repo per app scenario possible with gitops-connector? #73
Comments
You need to have multiple instances of gitops-connector on your clusters. One instance per application/repo.
@eedorenko would a k8s operator approach be feasible here? To clarify, I mean one installation of gitops-connector (operator + custom Connection CRD) which then handles multiple subscribe-to-notification/publish-to-specific-repo Connection resources.
@eedorenko I've added support for multiple configs in the one instance via a CRD, if you are interested? It still supports the original env config via helm values, albeit as a breaking change due to the values restructuring. The switch between the two modes of operation is determined by a values setting. Whilst it works for ArgoCD notifications as-is, I haven't tested with Flux as my environment isn't set up for it. It's not a great deal of work and I can explain more if this goes further. My fork is here; let me know if you want a PR opened.
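For readers skimming the thread, a per-application config under that CRD approach would look something like the sketch below. The group, kind, and spec field names are assumptions pieced together from the chart defaults and the values listed in this issue, not the fork's actual schema:

```yaml
# Hypothetical sketch only - group/kind/field names are assumptions, not taken from the fork.
apiVersion: example.com/v1
kind: GitOpsConfig
metadata:
  name: my-app
  namespace: gitops-connector
spec:
  gitRepositoryType: AZDO
  ciCdOrchestratorType: AZDO
  gitOpsOperatorType: FLUX
  azdoGitOpsRepoName: my-app-manifests
  azdoOrgUrl: https://dev.azure.com/my-org
  azdoPrRepoName: my-app
  gitOpsAppURL: https://dev.azure.com/my-org/my-project/_git/my-app
  orchestratorPAT: <token-or-secret-reference>
```

One such resource per application repo would replace the single global env-style config.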
@markphillips100 Very interesting, I will try your approach by doing a test with Flux. I will get back to you as soon as possible.
@cyberjpb1 For the FluxV2 use case, I imagine we would need to make use of the Alert's eventMetadata.
@cyberjpb1 I checked a previous PR you opened for insight into the eventMetadata, and that gave me enough info to put something together. So in theory, the following should suffice:
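For anyone following along, the general shape being discussed is a notification-controller Provider pointing at the connector plus an Alert that attaches extra metadata to each event. A rough sketch, assuming the original connector's /gitopsphase endpoint and an illustrative metadata key (both unconfirmed for the fork):

```yaml
# Sketch only - the address path and the eventMetadata key are assumptions.
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: gitops-connector
  namespace: flux-system
spec:
  type: generic
  address: http://gitops-connector:8080/gitopsphase
---
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: my-app
  namespace: flux-system
spec:
  providerRef:
    name: gitops-connector
  eventSources:
    - kind: Kustomization
      name: my-app
  eventMetadata:
    gitopsConfigName: my-app   # assumed key used to match events to a GitOpsConfig resource
```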
NOTE: The helm chart creates a service account, role and role binding to support the connector watching and updating the gitopsconfig resource. The operator also automatically patches a finalizer into the resource (hence the updating) to ensure that, when it is deleted, proper cleanup occurs before the manifest is removed from the cluster.
@markphillips100 Hello, sorry for the delay in my response but my schedule did not allow me to do a test before today. I'm using Red Hat CodeReady Containers (crc) for my tests. Here is the log when the pod starts:
[2024-11-24 00:21:37 +0000] [7] [INFO] Starting gunicorn 20.0.4
The last warning in your logs would point at some role/role-binding issue for the service account the operator is configured to run with. Maybe take a look there to ensure the watch permission is available on the gitopsconfig resource.
The helm chart should be setting that up for you here |
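For reference, the permissions in question boil down to a rule of roughly this shape (group and resource names are assumed from the chart defaults; patch/update are included because of the finalizer handling mentioned earlier):

```yaml
# Sketch of the RBAC the operator needs on the custom resource; names are assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gitops-connector
rules:
  - apiGroups: ["example.com"]
    resources: ["gitopsconfigs"]
    verbs: ["get", "list", "watch", "patch", "update"]
```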
Yes, I use your Helm Chart; the only thing I changed is replacing "example.com" with "apps-crc.testing".
Yeah, I was noticing that but wouldn't think it would be a problem... prior to the edit :-)
Did you by any chance install the helm chart prior to making the api group changes? Just wondering if there are now 2 CRDs for the same Kind. The 3 event handlers here aren't currently checking for apiVersion, so it would be chance which one is used if more than one CRD group exists. These handlers would need to change (and probably should anyway) to include the group name.
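To illustrate that last point, kopf lets handlers be registered against an explicit group/version/plural, so a CRD from another group with the same Kind can never be picked up by accident. A minimal sketch (the handler bodies and the hard-coded example.com group are illustrative only):

```python
# Sketch only: pinning the kopf handlers to an explicit API group so two CRDs with
# the same Kind but different groups cannot be confused.
import kopf

GROUP = "example.com"      # ideally read from configuration rather than hard-coded
VERSION = "v1"
PLURAL = "gitopsconfigs"

@kopf.on.create(GROUP, VERSION, PLURAL)
def on_create(spec, name, logger, **_):
    logger.info("GitOpsConfig %s created", name)

@kopf.on.update(GROUP, VERSION, PLURAL)
def on_update(spec, name, logger, **_):
    logger.info("GitOpsConfig %s updated", name)

@kopf.on.delete(GROUP, VERSION, PLURAL)
def on_delete(name, logger, **_):
    logger.info("GitOpsConfig %s deleted", name)
```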
It's as if the callback was never sent.
Can you delete the operator pod (without uninstalling/reinstalling the helm chart) and confirm whether the logs still show this warning please? It's possibly a race condition, with permissions not set by the time the pod comes up, so a restart of the pod should negate that.
OK, I deleted the pod and here is the log of the new pod:
[2024-11-24 02:36:45 +0000] [7] [INFO] Starting gunicorn 20.0.4
So still a permission issue by the looks of it, preventing the operator from watching for the gitopsconfig; otherwise I would expect to see some DEBUG lines pertaining to that watch. The other messages in the previous log are just the raw event data coming from Flux. Any chance you can revert to a clean install of the original example.com CRD, just so we can rule that out completely?
OK, I'll try that tomorrow.
No problem.
You were right, it was an old version of the GitOpsConfig that was stuck. Now I have this, which keeps coming back in a loop in the log (I have hidden the sensitive values):
DEBUG:root:_should_update_abandoned_pr. should_update: False
This is the abandoned PR status reconciliation, which is the same as the original gitops-connector code. At least I don't recall making changes here other than more logging.
That statement indicates the PR was closed (abandoned) longer ago than the constant 72 hours defined in the code, so the connector just ignores further processing of the PR's status. Unfortunately there is no way in Azure DevOps to delete abandoned PRs.
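Roughly, the check being described amounts to something like the following (the function name appears in the log above; the constant and parameter names are illustrative, not copied from the code):

```python
# Illustrative sketch of the 72-hour abandoned-PR cutoff described above.
from datetime import datetime, timedelta, timezone

ABANDONED_PR_CUTOFF = timedelta(hours=72)

def _should_update_abandoned_pr(closed_date: datetime) -> bool:
    # Only keep reconciling status for PRs abandoned within the last 72 hours.
    return datetime.now(timezone.utc) - closed_date < ABANDONED_PR_CUTOFF
```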
Hello again @markphillips100, I'm still trying to get gitops-connector working. I don't get the warning message anymore.
In the log it seems to work fine, but in Azure DevOps it looks like the task in the pipeline is not receiving the callback. Here is how the task is defined; is it OK?
It's possible the service account I used was supplied those extra permissions through a completely separate role binding. I'm not in a position to confirm that for a while. If you no longer get the error then I'd say you have solved the permission issue at least. As for the task, the urlSuffix path looks different. Here's mine - I don't know if the api version difference (6.0 vs 7.0) will also matter:
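The exact snippet from this comment didn't survive formatting, but for anyone comparing, the agentless callback task generally takes this shape. The serviceConnection and urlSuffix values below are placeholders; the actual path differs between the original connector and the fork, so check the respective docs:

```yaml
# Sketch of an agentless Azure DevOps callback task; serviceConnection and urlSuffix are placeholders.
- task: InvokeRESTAPI@1
  displayName: Wait for GitOps sync callback
  inputs:
    connectionType: connectedServiceName
    serviceConnection: gitops-connector        # Generic service connection pointing at the connector
    method: POST
    headers: |
      {
        "PlanUrl": "$(System.CollectionUri)",
        "ProjectId": "$(System.TeamProjectId)",
        "HubName": "$(System.HostType)",
        "PlanId": "$(System.PlanId)",
        "JobId": "$(System.JobId)",
        "TimelineId": "$(System.TimelineId)",
        "TaskInstanceId": "$(System.TaskInstanceId)",
        "AuthToken": "$(System.AccessToken)"
      }
    urlSuffix: <connector-callback-path>       # placeholder; not the actual path from the comment
    waitForCompletion: 'true'
```

Note that this style of task has to run in an agentless (server) job so that waitForCompletion can hold the stage until the connector posts the completion event back.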
@markphillips100 Just to be sure, is it the "dev" branch I should use or is it "flux-multi-config-support"?
For you (using Flux and this testing), use flux-multi-config-support. It only differs from dev in terms of checking for metadata on the Flux operator side of things: markphillips100/gitops-connector@dev...markphillips100:gitops-connector:flux-multi-config-support
@markphillips100 Hello, I didn't realize that gitops-connector had to be running before creating/applying the gitopsconfig.yaml manifest, because that's when the kopf create event is triggered and the configuration is initialized. There could be a problem with that: if for some reason I have to delete the gitops-connector pod, I will have to recreate or update all the gitopsconfig.yaml manifests of all application repos. Could the solution perhaps be that, when gitops-connector starts, it gets the list of all GitOpsConfig resources and performs a parse_config on each one? By the way, nice work. Are you going to make an official version? Happy Holidays!
@markphillips100 For loading configurations on startup, I tried the following code and it seems to work. I added this just before the @kopf.on.create in the gitops_event_handler.py file (it would still be necessary to create a parameter for the group name).
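The code from this comment didn't come through either, but the idea described reads roughly like the sketch below. The group/version/plural constants and the way parse_config is called are assumptions about the fork's internals:

```python
# Sketch of loading pre-existing GitOpsConfig resources at operator startup so that
# resources created before the pod came up are still initialised. Names are assumptions.
import kopf
import kubernetes

GROUP = "example.com"      # would need to be a parameter, as noted in the comment above
VERSION = "v1"
PLURAL = "gitopsconfigs"

@kopf.on.startup()
def load_existing_configs(logger, **_):
    kubernetes.config.load_incluster_config()
    api = kubernetes.client.CustomObjectsApi()
    existing = api.list_cluster_custom_object(GROUP, VERSION, PLURAL)
    for item in existing.get("items", []):
        logger.info("Initialising from existing GitOpsConfig %s", item["metadata"]["name"])
        parse_config(item)   # assumed to be the existing helper in gitops_event_handler.py
```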
@cyberjpb1 good find re the create event and the behaviour you were seeing. It definitely isn't desirable to have an order of initialisation between resource and operator. I haven't been in a position to focus on it lately, but if memory serves I'm pretty sure my configs existed prior to the connector running, without problem. I can't be certain though, so I'd need to test more in the new year (hopefully in Jan) to confirm the behaviour. Will most likely use your code addition is my guess. Just glad you have it working for now :-) As for an official version, I guess that's up to the owner @eedorenko
Hello @eedorenko, do you think you will accept the code proposal from @markphillips100?
Yes, I will. Let's PR it |
I'm unavailable for a couple of weeks but happy for someone else to create the PR if anyone should so wish. Also happy to do it myself in March if that's the preferred action.
I think it would be fair for the main author of the changes to make the PR, so I'll wait for @markphillips100 to do it in March.
Is the Repo per app scenario possible with gitops-connector?
Currently I have a repo for each application which also contains the manifests.
If the answer to my question is yes, how do I configure it?
gitRepositoryType: AZDO
ciCdOrchestratorType: AZDO
gitOpsOperatorType: FLUX
azdoGitOpsRepoName:
azdoOrgUrl:
azdoPrRepoName:
gitOpsAppURL:
orchestratorPAT: