jenkins: preparations to upload and serve artifacts #161
Conversation
Disabling the setup wizard is done in #160; it's unrelated to this PR.
Drafting until #160 is merged.
Adds per-environment storage account and storage container for storing Jenkins artifacts. Signed-off-by: Florian Klink <[email protected]>
This adds two rclone services, one for uploads and one for browsing. Both listen on unix sockets. The first is used by (and limited to) Jenkins to upload artifacts; the second is exposed over HTTPS through caddy, serving artifacts at the /artifacts/* subpath. Signed-off-by: Florian Klink <[email protected]>
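For readers less familiar with the setup, the rough NixOS shape of this is sketched below. This is a sketch, not the actual code in this PR: service names, socket paths, the container name and the virtual host are placeholders, `pkgs.rclone` stands in for the patched package, and it assumes an rclone whose serve commands can bind to a unix socket.

```nix
{ config, lib, pkgs, ... }:
{
  # Upload endpoint: rclone serving WebDAV on a unix socket.
  # Socket/group permissions restricting access to the jenkins user are
  # omitted here for brevity.
  systemd.services.rclone-jenkins-upload = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      RuntimeDirectory = "rclone-jenkins-upload";
      ExecStart = lib.concatStringsSep " " [
        "${pkgs.rclone}/bin/rclone serve webdav"
        # On-the-fly azureblob remote; credentials come from the environment
        # (e.g. a managed identity). The container name is a placeholder.
        ":azureblob:jenkins-artifacts"
        "--azureblob-env-auth"
        "--addr unix:///run/rclone-jenkins-upload/socket"
      ];
    };
  };

  # Browse endpoint: a second rclone HTTP server on its own socket.
  systemd.services.rclone-jenkins-browse = {
    wantedBy = [ "multi-user.target" ];
    serviceConfig = {
      RuntimeDirectory = "rclone-jenkins-browse";
      ExecStart = lib.concatStringsSep " " [
        "${pkgs.rclone}/bin/rclone serve http"
        ":azureblob:jenkins-artifacts"
        "--azureblob-env-auth"
        "--addr unix:///run/rclone-jenkins-browse/socket"
      ];
    };
  };

  # Caddy terminates HTTPS and exposes only the browse socket under
  # /artifacts/*. handle_path strips the prefix, assuming rclone serves
  # the container at its root.
  services.caddy.virtualHosts."jenkins.example.org".extraConfig = ''
    handle_path /artifacts/* {
      reverse_proxy unix//run/rclone-jenkins-browse/socket
    }
  '';
}
```

Keeping upload and browse as two separate processes means the writable endpoint never needs to be reachable from the network at all; only the read-only socket sits behind caddy.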
rclone also doesn't support connecting to a WebDAV endpoint that's exposed over a unix socket, so apply that patch too. The feature has also been sent upstream and we link to the PR, but we vendor a patch, as the upstream change doesn't apply cleanly to our rclone version. Signed-off-by: Florian Klink <[email protected]>
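On the packaging side, vendoring such a patch is essentially an `overrideAttrs` on top of the nixpkgs rclone. A minimal sketch, assuming it's done via an overlay and with a placeholder patch file name:

```nix
{ ... }:
{
  # Sketch: apply the vendored patch on top of the nixpkgs rclone.
  # Whether this PR uses an overlay or a standalone package derivation is an
  # implementation detail; the patch file name below is a placeholder.
  nixpkgs.overlays = [
    (final: prev: {
      rclone = prev.rclone.overrideAttrs (oldAttrs: {
        patches = (oldAttrs.patches or [ ]) ++ [
          # webdav backend: allow connecting over a unix socket.
          # Sent upstream, but doesn't apply cleanly to this rclone version,
          # hence the vendored copy.
          ./rclone-webdav-unix-socket.patch
        ];
      });
    })
  ];
}
```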
We'll shell out to our patched rclone from the pipeline definition. Signed-off-by: Florian Klink <[email protected]>
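How the pipeline finds the patched binary isn't spelled out here. One plausible wiring on the NixOS side (an assumption for illustration, not necessarily what this PR does) is to put it on the Jenkins service's $PATH, so `sh 'rclone ...'` pipeline steps work as-is:

```nix
{ pkgs, ... }:
{
  # Assumption for illustration: with the overlay from the previous sketch in
  # place, pkgs.rclone is already the patched build, and adding it to the
  # jenkins unit's PATH lets pipeline shell steps call `rclone ...` directly.
  systemd.services.jenkins.path = [ pkgs.rclone ];
}
```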
With #160 merged, I rebased this.
Corresponding pipeline PR: tiiuae/ghaf-jenkins-pipeline#25
We don't want to cache directory listings for 5 minutes (the default), as that would mean newly uploaded artifacts stay invisible for far too long. Set it to 5 secs, which still gives us some request deduplication, but not too much staleness. We cannot apply this to the binary cache configs yet, as rclone seems to always want to fetch a full listing there, which is very slow. Signed-off-by: Florian Klink <[email protected]>
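In flag terms this is rclone's standard `--dir-cache-time` VFS option on the serving command. Reusing the placeholder names from the sketch further up, only the browse service's ExecStart changes:

```nix
{ lib, pkgs, ... }:
{
  # 5s of directory-listing cache still deduplicates bursts of requests,
  # but doesn't hide freshly uploaded artifacts for minutes.
  systemd.services.rclone-jenkins-browse.serviceConfig.ExecStart =
    lib.concatStringsSep " " [
      "${pkgs.rclone}/bin/rclone serve http"
      ":azureblob:jenkins-artifacts"
      "--azureblob-env-auth"
      "--dir-cache-time 5s"
      "--addr unix:///run/rclone-jenkins-browse/socket"
    ];
}
```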
Reviewed and verified working by deploying to a private environment. I guess once this is merged, and all the Jenkins pipelines are changed to no longer use the …
Yeah, that's blocked on everything being switched over, and we might need to schedule some time to properly sync some existing state (if we want to keep the disk contents).
Depends on #160, will be rebased once merged (so ignore the first 2 commits).
This adds the necessary services running on the jenkins-controller machine to upload and serve artifacts from Azure storage.