feat(14638): Set managed-by label on installed resources (Alpha) #14945

Open
wants to merge 403 commits into master

Conversation


@jmilic1 jmilic1 commented Aug 7, 2023

Note on DCO:

If the DCO action in the integration test fails, one or more of your commits are not signed off. Please click on the Details link next to the DCO action for instructions on how to resolve this.

Checklist:

  • Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this does not need to be in the release notes.
  • The title of the PR states what changed and the related issue number (used for the release note).
  • The title of the PR conforms to the Toolchain Guide
  • I've included "Closes [ISSUE #]" or "Fixes [ISSUE #]" in the description to automatically close the associated issue.
  • I've updated both the CLI and UI to expose my feature, or I plan to submit a second PR with them.
  • Does this PR require documentation updates?
  • I've updated documentation as required by this PR.
  • Optional. My organization is added to USERS.md.
  • I have signed off all my commits as required by DCO
  • I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
  • My build is green (troubleshooting builds).
  • My new feature complies with the feature status guidelines.
  • I have added a brief description of why this PR is necessary and/or what this PR solves.

Please see Contribution FAQs if you have questions about your pull-request.

@jmilic1 changed the title from "Add managed by label#14638" to "feat: Add managed by label#14638" on Aug 7, 2023
@jmilic1 changed the title from "feat: Add managed by label#14638" to "feat(14638): Add managed by label" on Aug 7, 2023

codecov bot commented Aug 7, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 49.52%. Comparing base (7fe1263) to head (9ca4aca).
Report is 29 commits behind head on master.

❗ Current head 9ca4aca differs from pull request most recent head 6a3795d. Consider uploading reports for the commit 6a3795d to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #14945      +/-   ##
==========================================
- Coverage   49.57%   49.52%   -0.05%     
==========================================
  Files         273      269       -4     
  Lines       48314    47025    -1289     
==========================================
- Hits        23951    23291     -660     
+ Misses      22000    21449     -551     
+ Partials     2363     2285      -78     


@jmilic1 changed the title from "feat(14638): Add managed by label" to "feat(14638): Add managed by label (Alpha)" on Aug 8, 2023
Contributor

lindhe commented Aug 9, 2023

If the new resource does not have the label set, it should now default to argocd

I think that if the user has opted in to this feature, the label should always be set. If the label already exists with another value, overwrite it!

  1. What does the label indicate? It indicates who manages the resource. And who is managing the resource? Well, Argo is, for sure (or else this code does not matter). So why not make sure it's set correctly?
  2. It is very common for Helm charts to set that label, which is usually a good thing, but in our case it would be misleading to leave it unchanged.

@@ -61,6 +61,8 @@ spec:
namespace: guestbook
```

By default, all resources are generated with label `app.kubernetes.io/managed-by: argocd`. Current Status: Alpha (Since v2.8.0)
Contributor

I think it would be a good idea to have this be opt-in rather than changing the default. That will make it a lot easier to merge, since it wouldn't be a breaking change.

reposerver/repository/testdata/managed-by/managed-by.yaml (outdated review thread, resolved)
if labels == nil {
	labels = make(map[string]string, 1)
}
if labels["app.kubernetes.io/managed-by"] == "" {
Member

@blakepettersson blakepettersson Aug 9, 2023

In the case of Helm applications, app.kubernetes.io/managed-by will be Helm, even though de facto it is Argo CD that is doing the management. I'm not sure what the right thing to do is here, tbh. Either Argo CD should overwrite it and set argocd, or leave it as-is. 🤔

Author

I like what @lindhe suggested with making this opt-in. Besides making this PR a non-breaking change, anybody who uses this feature in the future will know what's going on because they will explicitly choose to use it.

With opt-in, I feel like maintainers who want to filter their k8s resources down to the ones managed by Argo will want managed-by: helm to be overwritten with argocd.
Thoughts?

Contributor

Yes, exactly why I think we should have this as an opt-in! If we can offload this design choice to the users, it's all for the better.
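
A minimal sketch of the opt-in overwrite behaviour being discussed, for illustration only (this is not the PR's actual code, and the function and parameter names are assumptions):

// Sketch: when the operator has opted in, always stamp the label, even over a
// value a Helm chart may already have set (e.g. "Helm").
func setManagedByLabel(labels map[string]string, optedIn bool) map[string]string {
	if !optedIn {
		return labels // feature disabled: leave manifests untouched
	}
	if labels == nil {
		labels = make(map[string]string, 1)
	}
	labels["app.kubernetes.io/managed-by"] = "argocd" // overwrite rather than skip
	return labels
}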

@blakepettersson
Member

Apart from the issue above, it generally LGTM 👍 It'd also be good to add an E2E test that checks that this label is set for all manifests managed by an application.

Contributor

lindhe commented Aug 9, 2023

How hard would it be to make this enforced on a project level (i.e. "set this label for all resources deployed by applications in project x")?

Author

jmilic1 commented Aug 9, 2023

How hard would it be to make this enforced on a project level (i.e. "set this label for all resources deployed by applications in project x")?

I am assuming we are adding our new option to argocd-repo-server?

If so, I believe your suggestion is more than doable. I would go the extra mile and let users specify multiple projects, e.g. argocd-repo-server --managed-by-argo stringArray, but then it seems like a lot of work to name all of my projects if I want all of them to opt in.

Do you think this is too tedious? If so, is it a legitimate practice in these situations to interpret an empty array as "Write this label on all of my projects"?
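
For illustration, a hedged sketch of what such a per-project flag could look like with cobra (hypothetical; this variant was never part of the PR, and the variable name is made up):

// Hypothetical per-project variant: an empty list would mean "label resources in all projects".
var managedByArgoProjects []string
command.Flags().StringArrayVar(&managedByArgoProjects, "managed-by-argo", nil,
	"Projects whose generated resources get the managed-by label (empty = all projects)")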

Contributor

lindhe commented Aug 10, 2023

If so, I believe your suggestion is more than doable.

Great to hear! It is my opinion that we should try to get the basic functionality merged first (as an opt-in 😉) and then create a follow-up PR to make it possible on a per-project basis. That keeps things agile and minimal.

@@ -20,6 +20,7 @@ argocd-repo-server [flags]
-h, --help help for argocd-repo-server
--logformat string Set the logging format. One of: text|json (default "text")
--loglevel string Set the logging level. One of: debug|info|warn|error (default "info")
--managed-by-argo Set managed-by label for every k8s resource generated by argo Current Status: Alpha (Since v2.8.0)
Contributor

🎉

Member

blakepettersson commented Aug 11, 2023

and then create a follow-up PR to make it possible on a per-project basis

IMO this should be all-or-nothing; if we are to have any flags at all, then I think Jura's approach is the way to go. At some point in the future this should be the default for all manifests managed by Argo CD.

Author

jmilic1 commented Aug 13, 2023

I need help figuring out what happened with this test

It'd also be good to add an E2E test that checks that this label is set for all manifests managed by an application.

Can you point me to the bit of code which creates a new Argo CD repo server in the e2e test setup, or where the new flag can be set for the already-existing repo server which these tests use?
I'm a bit lost on these e2e tests in general because I've never actually seen any yet in my career 😬, but if it won't take too much time away from the PR, with some guidance I'd like to write some for experience's sake.

@blakepettersson
Member

I need help figuring out what happened with this test

I think you'll need to run make codegen for the corresponding documentation to be generated for the repo-server.

@blakepettersson
Member

Can you point me to the bit of code which creates a new Argo CD repo server in the e2e test setup, or where the new flag can be set for the already-existing repo server which these tests use?

Yeah this one is a bit tricky 😄

You'd need to do something like the following:

  1. In manifests/base/repo-server/argocd-repo-server-deployment.yaml you'll need to add ARGOCD_REPO_SERVER_MANAGED_BY_ARGO in the env params, so it can be managed from argocd-cmd-params-cm
  2. In e2e/fixture/fixture.go add a toggle, so that managed-by can be toggled from the e2e tests.
  3. The same file has a RestartRepoServer() method which you can use in the e2e tests
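
A rough sketch of what the toggle in step 2 might amount to (illustrative only; the env-var helper below is made up for this sketch, and only RestartRepoServer() comes from the thread):

// Hypothetical toggle for e2e/fixture/fixture.go (sketch, not actual fixture code).
func EnableManagedByLabel() {
	// Assumes some helper exists to set the env var from step 1 on the repo-server;
	// the name below is invented for illustration.
	setRepoServerEnv("ARGOCD_REPO_SERVER_MANAGED_BY_ARGO", "true")
	// Restart so the repo-server picks up the new setting (step 3).
	RestartRepoServer()
}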

@@ -212,6 +214,7 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&allowOutOfBoundsSymlinks, "allow-oob-symlinks", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_ALLOW_OUT_OF_BOUNDS_SYMLINKS", false), "Allow out-of-bounds symlinks in repositories (not recommended)")
command.Flags().StringVar(&streamedManifestMaxTarSize, "streamed-manifest-max-tar-size", env.StringFromEnv("ARGOCD_REPO_SERVER_STREAMED_MANIFEST_MAX_TAR_SIZE", "100M"), "Maximum size of streamed manifest archives")
command.Flags().StringVar(&streamedManifestMaxExtractedSize, "streamed-manifest-max-extracted-size", env.StringFromEnv("ARGOCD_REPO_SERVER_STREAMED_MANIFEST_MAX_EXTRACTED_SIZE", "1G"), "Maximum size of streamed manifest archives when extracted")
command.Flags().BoolVar(&managedByArgo, "managed-by-argo", env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_MANAGED_BY_ARGO", false), "Set managed-by label for every k8s resource generated by argo")
Member

If you want Current Status: Alpha (Since v2.8.0) you'd have to add it here and run make codegen, so that the docs match what is in the command.

You could also attempt to add the exact same string in docs/operator-manual/server-commands/argocd-repo-server.md without running make codegen, but if they differ you'll bump into the same issue as before.
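
Concretely, the suggestion amounts to something like the sketch below, mirroring the existing flag registration above (followed by running make codegen so the generated docs match):

// Sketch of the suggested change (not the PR's final code): append the status text
// to the flag's usage string so the generated CLI docs carry it too.
command.Flags().BoolVar(&managedByArgo, "managed-by-argo",
	env.ParseBoolFromEnv("ARGOCD_REPO_SERVER_MANAGED_BY_ARGO", false),
	"Set managed-by label for every k8s resource generated by argo Current Status: Alpha (Since v2.8.0)")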

Author

jmilic1 commented Aug 14, 2023

I'm having trouble writing the e2e test and/or setting up test infrastructure to support the new env variable. Can someone take a look?

@blakepettersson
Member

Seems like you're doing everything right @jmilic1. I took a look at RestartRepoServer() and it seems like the whole thing is wrapped in if IsRemote() {, which I don't think is necessary.. can you try removing the if block and see how it goes?

Author

jmilic1 commented Aug 14, 2023

Seems like you're doing everything right @jmilic1. I took a look at RestartRepoServer() and it seems like the whole thing is wrapped in if IsRemote() {, which I don't think is necessary.. can you try removing the if block and see how it goes?

Looking at all the uses of RestartRepoServer() in code, I'm noticing it is always used like:

func doSomething() {
	if fixture.IsLocal() {
		// ...
	} else {
		fixture.RestartRepoServer()
	}
}

It seems to me like fixture.RestartRepoServer() might only work if someone creates a test environment by following test/remote/README.md and sets up a remote environment instead of a local one.

This way, tests complete whether you are running the environment locally or remotely.
The pipelines probably create the environment locally, which means they cannot use RestartRepoServer(), so they fail with the error: Error from server (NotFound): deployments.apps "argocd-repo-server" not found

@blakepettersson
Member

@jmilic1 I think the kubectl commands in RestartRepoServer are not running in the correct namespace. If you look at how the kubectl commands elsewhere in fixture.go are run, they all run in TestNamespace() (the namespace where Argo CD is running in the E2E tests), so we can probably keep the old behaviour for RestartRepoServer and add an else branch where it otherwise executes in the TestNamespace() context.
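
A rough sketch of that suggestion (illustrative only; the fixture's real helpers and error handling differ, and the thread later finds that the local e2e repo-server runs under Goreman, so the kubectl path would not apply locally anyway):

// Sketch: branch on IsRemote(), otherwise run kubectl against the e2e TestNamespace().
// Requires "os/exec"; IsRemote and TestNamespace are the fixture helpers named above.
func restartRepoServerSketch() error {
	args := []string{"rollout", "restart", "deployment", "argocd-repo-server"}
	if !IsRemote() {
		args = append([]string{"-n", TestNamespace()}, args...)
	}
	return exec.Command("kubectl", args...).Run()
}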

Author

jmilic1 commented Aug 15, 2023

@blakepettersson I think I did what you mentioned, but it's still failing. Can you take a look?

Author

jmilic1 commented Aug 19, 2023

Bumping this. Not sure what to do to fix the e2e tests.

@blakepettersson
Member

@jmilic1 sorry about the delay, I'm on paternity leave so I haven't had much focus re: Argo CD. I took a look anyway, and the E2E tests are indeed run with Goreman instead of k8s.

A workaround for this is to instead set the variable through the Makefile and not to attempt a restart of the repo-server.

When I did that locally I got this:

                Error Trace:    /go/src/github.com/argoproj/argo-cd/test/e2e/app_management_test.go:2833
                                                        /go/src/github.com/argoproj/argo-cd/test/e2e/fixture/app/consequences.go:47
                                                        /go/src/github.com/argoproj/argo-cd/test/e2e/app_management_test.go:2824
                Error:          Not equal:
                                expected: "argo-cd"
                                actual  : "argocd"

                                Diff:
                                --- Expected
                                +++ Actual
                                @@ -1 +1 @@

test/e2e/app_management_test.go (3 outdated review threads, resolved)
test/e2e/fixture/fixture.go (3 outdated review threads, resolved)
@blakepettersson
Member

Add this line in the Makefile in the start-e2e-local task and you should be good to go

	ARGOCD_REPO_SERVER_MANAGED_BY_ARGO=true \

djerfy and others added 15 commits November 13, 2023 21:08
* chore: update gitops engine version

Signed-off-by: pashakostohrys <[email protected]>

* chore: update gitops engine version

Signed-off-by: pashakostohrys <[email protected]>

---------

Signed-off-by: pashakostohrys <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
…link to `../faq.md#why-is-my-app-out-of-sync-even-after-syncing` (argoproj#16236)

The broken link is visible in https://argo-cd.readthedocs.io/en/stable/user-guide/annotations-and-labels/#labels

Signed-off-by: Alexey Vazhnov <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
…rgoproj#16222)

* fix(16207): Fix name conflict in appset controller

Signed-off-by: gmuselli <[email protected]>

* fix(16207): e2e

Signed-off-by: gmuselli <[email protected]>

* Update test/e2e/fixture/applicationsets/utils/fixture.go

Signed-off-by: Ishita Sequeira <[email protected]>

---------

Signed-off-by: gmuselli <[email protected]>
Signed-off-by: Ishita Sequeira <[email protected]>
Co-authored-by: Ishita Sequeira <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
* feat: support components in kustomize

Signed-off-by: pashakostohrys <[email protected]>

* feat: support components in kustomize

Signed-off-by: pashakostohrys <[email protected]>

* feat: support components in kustomize

Signed-off-by: pashakostohrys <[email protected]>

* feat: support components in kustomize

Signed-off-by: pashakostohrys <[email protected]>

* fix: typo in kustomization word

Signed-off-by: pashakostohrys <[email protected]>

* fix: typo in kustomization word

Signed-off-by: pashakostohrys <[email protected]>

* fix: typo in kustomization word

Signed-off-by: pashakostohrys <[email protected]>

* fix: typo in kustomization word

Signed-off-by: pashakostohrys <[email protected]>

---------

Signed-off-by: pashakostohrys <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
Add Fortra to Official USers

Signed-off-by: Alex Jones <[email protected]>
Co-authored-by: Ishita Sequeira <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
* fix: check for double definition

As found in argoproj#13965 (and as a follow-up to argoproj#13999), we also need to
define what happens if _both_ managedNamespaceMetadata _and_ an
Application manifest are both defined for the same namespace.

The idea here is that if that happens, we emit an
`ApplicationConditionRepeatedResourceWarning`, and set the sync status
to `Unknown`, since it's unclear what is supposed to happen.

The user will then have the option of removing one of the two
definitions.

Signed-off-by: Blake Pettersson <[email protected]>

* fix: check for double definition

A simpler fix - don't add a managed namespace to the targetObjs list if
a namespace already exists in the application source.

Signed-off-by: Blake Pettersson <[email protected]>

* test: add test cases

This adds a test case showing that an ns manifest will override
`managedNamespaceMetadata` without failing horribly (as it currently
does on `HEAD`), as well as a "standard" mNMd diff vs live.

Signed-off-by: Blake Pettersson <[email protected]>

---------

Signed-off-by: Blake Pettersson <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
I found this being used in an application I was working on but had a
hard time discovering its meaning since it wasn't in the
specification[1] though it was in the `sync-options` docs[2] (see also
baa0f2e)

[1] https://argo-cd.readthedocs.io/en/stable/user-guide/application-specification/
[2] https://argo-cd.readthedocs.io/en/stable/user-guide/sync-options/#selective-sync

Signed-off-by: Matt Hughes <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
…j#12172) (argoproj#15903)

* fix(applicationset): prevent app deletion according to appset policy

Signed-off-by: mikutas <[email protected]>

* test: add unit test

Signed-off-by: mikutas <[email protected]>

* fix: unit test

Signed-off-by: mikutas <[email protected]>

* fix: remove TODO

Signed-off-by: mikutas <[email protected]>

---------

Signed-off-by: mikutas <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
Signed-off-by: Noam Gal <[email protected]>
Co-authored-by: Ishita Sequeira <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
Signed-off-by: Rafal Pelczar <[email protected]>
Signed-off-by: jmilic1 <[email protected]>
@jmilic1 force-pushed the add-managed-by-label#14638 branch from 23ac4b5 to a3804df on November 13, 2023 20:08
@jmilic1 requested a review from a team as a code owner on November 13, 2023 20:08
# Conflicts:
#	cmd/argocd-repo-server/commands/argocd_repo_server.go
#	docs/operator-manual/argocd-cmd-params-cm.yaml
#	manifests/base/repo-server/argocd-repo-server-deployment.yaml
#	manifests/core-install.yaml
#	manifests/ha/install.yaml
#	manifests/ha/namespace-install.yaml
#	manifests/install.yaml
#	manifests/namespace-install.yaml
#	reposerver/repository/repository.go
Signed-off-by: jmilic1 <[email protected]>
Contributor

lindhe commented Nov 18, 2023

My dudes, I just realized something!

In Helm, there are a handful of Built-in Objects. One of them is Release.Service, described as "The service that is rendering the present template". Using that object is in fact the recommended way to set the app.kubernetes.io/managed-by label, according to Helm.

I do not think we should set .Release.Service instead of putting the label there ourselves, considering Argo CD renders manifests from other sources than Helm charts too. But I wanted to bring it up here, because perhaps it would make sense to set that object in Argo CD so that "Helm knows who is rendering the chart". While that work would probably belong to another PR, maybe that is something we can take into consideration when implementing this PR.

It could possibly impact this feature, if for example it becomes an internal struggle between Argo CD trying to apply a Helm chart that sets managed-by=Helm and this feature trying to set managed-by=argocd. And if we can inform Helm that Release.Service is indeed argocd, then the two worlds would align.

I'm not sure this makes any sense; it was just a thought that struck me that I wanted to bring up.

Contributor

lindhe commented Feb 8, 2024

@jmilic1 and @blakepettersson: I notice this has stalled. Can I help somehow to power through the last bit so we can pass the finish line here? 🙂

@lets-call-n-walk
Contributor

I am also interested in seeing this through to completion for our company. I would be happy to help in any way that I can.

@lets-call-n-walk
Contributor

@blakepettersson @lindhe @jmilic1 I fixed the checks in this PR. Just trying to get this past the 99% mark. Let me know if this is not proper etiquette or something.

Author

jmilic1 commented Mar 13, 2024

@lets-call-n-walk Nope, you're good. I failed to see where my code was failing and got overwhelmed by dealing with tests I had to wait on for half an hour. Still, it's my bad for not responding.

If you can get your tests working, be my guest and finish this.

@ChristianCiach
Contributor

Maybe it would be nice to make the value argocd configurable, for use cases like this: #19531

Successfully merging this pull request may close these issues.

Optionally add app.kubernetes.io/managed-by label to all resources