Add Lifecycle hooks configuration to Tenant #1835
Conversation
Hi @jiuker, I was wondering if you had any thoughts on this change?
@mctoohey would you mind telling us how you are planning to use the lifecycle hooks? How can they be useful?
Created a PR to your branch to add the CRD docs update @mctoohey mctoohey#1
@pjuarezd thank you for your response. Sure, I can provide some more context.
That case should be MinIO's issue. @mctoohey
The other use case is a …
I might be able to set …
Nice!
Having a … The default MinIO shutdown timeout is 5 seconds; extending the pod's … Now, when a pod enters …, I don't think additional measures to ensure MinIO can finish requests are really needed; MinIO is already prepared to gracefully exit on SIGTERM in a short time, but the …
My thought is that the preStop hook adds a period of time during which the MinIO process will not receive any new connections and is still processing existing ones, so that when it does receive the SIGTERM it can shut down very quickly, as there is nothing left to do.
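For reference, this is the standard Kubernetes `preStop` stanza being discussed; a minimal sketch, where the 15-second sleep is an arbitrary illustrative drain window, not a recommended value:

```yaml
# Sketch of a preStop drain window on the MinIO container.
# The sleep duration is illustrative only.
lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "sleep 15"]
```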
In my experience, when restarting one MinIO pod in a multi-node setup, some client requests are always disrupted (not a huge deal since clients can retry) and some event notifications are dropped. My understanding from previous reading was that MinIO does not guarantee that bucket notifications are delivered on shutdown. For example, if I make a PUT request and a SIGTERM is sent to MinIO, I get a 200 response back but the notification may not have been sent (or added to the queue, if a queue is being used).
That's right, there is no guarantee that all event notifications will be delivered before the process shuts down, but you should not lose events because of that. The right approach is to configure the notification target to persist undelivered events so that they are replayed once the process comes back online, e.g. for the Elasticsearch target set MINIO_NOTIFY_ELASTICSEARCH_QUEUE_DIR.
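As a sketch of that suggestion, assuming the Tenant spec's `env` field is used to pass the variable and using an illustrative queue directory path:

```yaml
# Hedged sketch: persist undelivered Elasticsearch notification events so they
# are replayed when the process comes back online. The path is illustrative
# and should point at persistent storage.
spec:
  env:
    - name: MINIO_NOTIFY_ELASTICSEARCH_QUEUE_DIR
      value: "/export/.notify-events"
```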
I should clarify that in my experience, even when using a queue_dir, events can sometimes be lost. I just did a test with a Kafka event notification target with queue_dir set: I had a loop writing 1000 objects to MinIO (test_object1 to test_object1000), and restarted MinIO. After the restart I checked MinIO and saw that 456 objects had been written, but only 455 PUT object events had been received by Kafka, and there were no events remaining in the queue directory. I will continue to look into some of your suggestions, but this is getting a bit off topic for this change. This change is really about whether we should be able to configure …
This should be treated as a bug, not worked around. Are you saying that you did 1000 PUTs with 1000 successes and restarted the server after that?
@harshavardhana I've opened minio/minio#18404 with more detail on what I have observed.
Could you rebase, @mctoohey?
Signed-off-by: pjuarezd <[email protected]>
I still believe this is good to have; empowering the end user with standard features is a good mantra.
Would I be able to get another review on this? I think allowing end users to configure standard Kubernetes features on the MinIO pods is a reasonable thing to do.
Hi team, is anyone still reviewing this pull request? I find this change very useful and it would be great if it could be merged. Thanks,
@mctoohey or @fangzhouxie please help us resolve conflicts |
LGTM, but it would be nice to have an integration test for this.
Adds Lifecycle hooks configuration to the Tenant. This is passed down to the MinIO container of each pod in the pool.
The tenant already allows liveness, readiness, and startup probes to be configured. Similar to those, the use of lifecycle hooks is entirely optional. Adding configuration for lifecycle hooks (https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/) allows custom postStart/preStop hooks to be added to the MinIO container if desired, as sketched below.
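A hypothetical Tenant excerpt showing how this might look; the exact field name and placement (here, a per-pool `lifecycle` block mirroring the existing probe fields) are assumptions for illustration, not confirmed by this description:

```yaml
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: example-tenant
spec:
  pools:
    - name: pool-0
      servers: 4
      volumesPerServer: 4
      # Assumed per-pool field added by this change; name and placement are illustrative.
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 15"]
```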