Persist and Serve TaskRun Logs #198
Comments
Credit to @CathalOConnorRH who did a lot of research on our end with fluentd and Loki that led to this feature request. cc @wlynch - this follows up the "Logs with Tekton Results" item discussed during the Tekton Community Summit. |
The ELK stack is optimized for storing logs long-term. We are using it in OpenShift Logging. At present, we can view PipelineRun and TaskRun logs in OpenShift Logging, but there are some issues with this UX.
The problem with storing logs in Postgres/MySQL is that they aren't built for this. I will start working on this problem next week, in two phases. We had a discussion on this in Slack but unfortunately it got lost. First phase: a Kubernetes REST API service/proxy which gives us data from Tekton Results. |
@khrm we discussed this a bit offline as well. There are two things that could be done as part of
|
Yes, that's what I am planning to do after adding a proxy service. |
@khrm I have added a REST proxy for the existing gRPC server, as part of some changes required to work with KCP. This branch has the proxy changes without the KCP changes. If you are thinking of implementing something like this, then I can create a PR next week to merge this. |
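For readers unfamiliar with how such a proxy works: it accepts REST-style HTTP requests and translates them into calls on the existing gRPC service (in Go projects this is commonly generated with grpc-gateway). The sketch below illustrates only the routing idea in Python; the path shapes and method names are invented for illustration and are not taken from the Tekton Results API.

```python
import re

# Hypothetical route table mapping REST-style paths onto gRPC method names.
# A real proxy (e.g. one generated by grpc-gateway) derives these routes
# from protobuf annotations; here they are hand-written for illustration.
ROUTES = [
    (re.compile(r"^/parents/(?P<parent>[^/]+)/results/(?P<result>[^/]+)$"),
     "GetResult"),
    (re.compile(r"^/parents/(?P<parent>[^/]+)/results$"),
     "ListResults"),
]


def route(path: str):
    """Resolve a REST path to a (gRPC method name, request fields) pair."""
    for pattern, method in ROUTES:
        match = pattern.match(path)
        if match:
            # The captured path segments become fields of the gRPC request.
            return method, match.groupdict()
    raise LookupError(f"no route for {path}")
```

With a table like this, the HTTP handler only has to call `route(request.path)`, build the corresponding protobuf request from the returned fields, and invoke the gRPC stub, so the gRPC server remains the single source of truth for behavior.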
One note: the dashboard team has a minio walkthrough for log persistence. I only bring this up as it relates to this change in the context of #82 |
I have submitted #203 as an initial proof of concept. There is a lot here - @khrm @vdemeester do you think this warrants a TEP? |
At Allianz Direct we have created a solution to fetch logs from S3 and show them in the Tekton Dashboard, providing long-term access to logs even when you need to delete a task in your cluster. |
@afrittoli also pointed out that Tekton's dogfooding CI manually forwards logs to GCS with Tekton Tasks. It looks like we have minimally the following use cases:
|
Update: this feature was captured in TEP-0117, which was approved as a provisional proposal. |
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing. |
/remove-lifecycle stale |
/area roadmap |
@tektoncd/results-maintainers I think we can call this "done" and mark TEP-0117 as implemented. Thoughts? |
/close This was implemented in v0.5.0 |
@adambkaplan: Closing this issue. |
Feature request
Enhance Results to do the following:
1. Persist TaskRun step logs.
2. Provide an API endpoint from which the logs for a TaskRun step can be served.

Use case
CI/CD users expect to view the full logs of any given step in a build/pipeline process.
This is primarily driven by two use cases:
This is done in Tekton today by serving the underlying container logs from a TaskRun pod. These logs are stored on the host node and can be lost due to TaskRun pruning, cluster maintenance, or other mechanisms that delete the underlying pod. For auditing purposes, build logs may need to be retained for as long as a particular version of the software is supported.
The most common means of persisting Kubernetes logs today is with log forwarding tools like fluentd and analysis engines like ElasticSearch, Amazon CloudWatch, and Grafana Loki.
These stacks are optimized to stream logs across systems for analysis in real time (this is a good thing!).
They are not built to retain and serve individual log files.
This feature request proposes that the Results watcher and apiserver be extended to store logs for TaskRun steps. These logs can then be fetched from the apiserver via an API endpoint.
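The division of labor described above can be sketched as a tiny in-memory store: the watcher appends log chunks as step containers emit them, and the apiserver reads the accumulated log back on request. All names and the storage layout here are invented for illustration; a real backend would persist chunks to object storage (S3, GCS) or a filesystem rather than a dict.

```python
from collections import defaultdict


class StepLogStore:
    """Toy in-memory store for TaskRun step logs.

    Illustrates the watcher/apiserver split proposed above:
    the watcher calls append() while the pod still exists, and
    the apiserver calls read() long after the pod is gone.
    """

    def __init__(self):
        # (taskrun, step) -> ordered list of log chunks
        self._logs = defaultdict(list)

    def append(self, taskrun: str, step: str, chunk: str) -> None:
        """Called by the watcher as container log data arrives."""
        self._logs[(taskrun, step)].append(chunk)

    def read(self, taskrun: str, step: str) -> str:
        """Called by the apiserver to serve the persisted log."""
        chunks = self._logs.get((taskrun, step))
        if chunks is None:
            raise KeyError(f"no logs for {taskrun}/{step}")
        return "".join(chunks)
```

Because logs are keyed by (TaskRun, step) rather than by pod, they survive TaskRun pruning and pod deletion, which is the core property the feature request asks for.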