feature: support per-secret files in S3 #60
Comments
Hey @jamestelfer - this sounds really interesting. We'll chat about it as a group and come back to you with our thoughts.
Hi @jamestelfer 👋🏼 Your proposal is quite a nice idea, and if you're happy to assemble a PR we'd love to work through it with you. As for your questions, I'll answer them inline below.
I think the path you've proposed is fine. There's a mix of
It's definitely a good thing to match the pattern that is covered by the agent. Validating that the rest of the name is consistent is also a good thing and would make a lot of sense. I don't think this would need to be configurable, at least until there's a particular case that requires it becoming known.
I think your alternate idea of using an ENV with an agent hook to process it would be a sound way of handling this case. It's similar to how the agent handles ignored environment variables (see https://github.com/buildkite/agent/blob/a7662406a86189316675bc4b109bb13c1ff209a8/internal/job/executor.go#L758-L771 for an example of what I mean). Please reach out here or via support if you have further questions or would like to discuss anything in more depth!
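As a rough illustration of that ignored-variables pattern (a hedged sketch, not the agent's actual code), a hook-side filter could drop any variables the agent refuses to accept before they are exported; the function and variable names below are hypothetical:

```python
def strip_protected_vars(env: dict, protected: set) -> dict:
    """Remove entries a hook is not allowed to set.

    Similar in spirit to how the Buildkite agent ignores certain
    environment variables set by hooks (see the linked executor.go);
    names and structure here are illustrative only.
    """
    return {name: value for name, value in env.items() if name not in protected}
```

The real agent performs this filtering on its own side, so a hook cannot bypass it; the sketch only shows the shape of the check.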
Awesome! I have some time over the next couple of weeks that I can put towards this. I'll look to implement a relatively naïve first-cut approach (that is, downloading the items one by one serially), and see where we need to go from there. One call out: the current code only uses
At the moment, secrets can be added to a per-pipeline environment file stored in S3. This works quite well; however, in our case we give the engineers who own a pipeline write-only access to the secrets bucket. This allows them to update secrets as necessary, but denies them the capacity to read existing secrets (for obvious reasons).

However, when all secrets live in a single file, it is easy to accidentally overwrite one of them. For this reason, we implemented support in the agent `environment` hook for per-pipeline, per-secret files. Files are downloaded from `${BUILDKITE_SECRETS_BUCKET}/${BUILDKITE_PIPELINE_SLUG}/secret-files`, and each file name must match the pattern `_(SECRET|SECRET_KEY|PASSWORD|TOKEN|ACCESS_KEY)$` (correlating to the default obfuscation rules for the agent).

For example, consider the following structure in S3:

- `environment` will be loaded first
- `NEWRELIC_TOKEN` and `AMAZING_AI_TOOL_SECRET` will be created as environment variables, where the respective file contents become the variable values
- `another-file-name` would show an error in the pipeline log and write an annotation informing the user that the secret file is being ignored

Overall this has worked well for us. Engineers can upload their secrets without fear of unwanted side effects, and less support intervention is required.
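The naming rule above can be sketched as a simple check (an illustrative Python sketch, not the hook's actual implementation; the function name is hypothetical, but the regular expression is the one described above):

```python
import re

# Pattern from the proposal: a per-secret file is only loaded when its
# name ends with one of the suffixes covered by the agent's default
# obfuscation rules.
SECRET_NAME_RE = re.compile(r"_(SECRET|SECRET_KEY|PASSWORD|TOKEN|ACCESS_KEY)$")

def is_loadable_secret_file(name: str) -> bool:
    """Return True if the S3 object name qualifies as a per-secret file."""
    return SECRET_NAME_RE.search(name) is not None
```

Under this rule, `NEWRELIC_TOKEN` and `AMAZING_AI_TOOL_SECRET` qualify, while `another-file-name` and the legacy `environment` file do not, which matches the behaviour described above.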
I propose that the secrets hook supports this strategy (or a variation thereof), taking advantage of parallel downloads to make it faster and providing the capability to other customers.
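The parallel-download part of the proposal could be sketched like this (a minimal illustration: `fetch` is a hypothetical callable standing in for an S3 `GetObject` call, and all names are assumptions, not the hook's real API):

```python
from concurrent.futures import ThreadPoolExecutor

def download_secrets(names, fetch, max_workers=8):
    """Download each secret file concurrently.

    `fetch` maps an object name to its contents; a real implementation
    would call S3 here. Returns a dict of name -> file contents.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so names and results zip cleanly.
        return dict(zip(names, pool.map(fetch, names)))
```

Since each secret is small and independent, fan-out like this mostly hides S3 request latency; a serial first cut (as discussed above) would simply loop over `names` instead.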
Questions:
I'd love to get some feedback. If the proposal is accepted, we'd be happy to implement it.