data is lost after petset update #11
I think that until the data gravity problem is resolved on Kubernetes (
I think you're right. There is no solution with hostPath, bar mechanisms to force the pod or PV to a specific node. I'd try to avoid such requirements anyway, to keep nodes disposable. I too have been evaluating NFS and Gluster, but for now I'm sticking with the big cloud providers' volume types, which do follow pods around. OK if I close this? I don't think it's a flaw with StatefulSet or the clustering in this repo.
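For reference, the cloud-provider volume types mentioned here are declared as PersistentVolumes that Kubernetes attaches to whichever node the consuming pod lands on, so the data follows the pod across reschedules. A minimal sketch using a GCE persistent disk; the disk name `kafka-0-data` is an assumption and would have to be created beforehand:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-0
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  # The disk is attached to whichever node the consuming pod is scheduled on.
  gcePersistentDisk:
    pdName: kafka-0-data   # hypothetical, pre-created GCE persistent disk
    fsType: ext4
```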
I hope I'm not hijacking this conversation, but the issue description seems relevant to what I'm seeing when testing your solution. I'm also using big cloud provider volume types, but it looks like the way the Kafka configuration is set up, none of the log data will be persisted on the volumes specified in the PetSet: the configured Kafka log directory doesn't point at the volume mount. I've modified this setting to point at the mounted path instead. Let me know if this is at all accurate and if you want me to create a new issue/PR.
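A minimal sketch of what this looks like on the container side, assuming the PetSet volume claim is named `datadir` and the mount path `/var/lib/kafka/data`; the image name and the `--override` flag passed to `kafka-server-start.sh` are assumptions, not this repo's exact setup:

```yaml
# Pod template fragment (hypothetical names and paths)
containers:
  - name: broker
    image: example/kafka   # placeholder image name
    command:
      - ./bin/kafka-server-start.sh
      - config/server.properties
      # Point Kafka's log.dirs at the persistent volume instead of the
      # default directory inside the container filesystem.
      - --override
      - log.dirs=/var/lib/kafka/data
    volumeMounts:
      - name: datadir                  # matches the volumeClaimTemplates entry
        mountPath: /var/lib/kafka/data
```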
@allenj
also, PetSet doesn't enforce anti-affinity between pods and nodes, so adding an anti-affinity rule (see the sketch below) on pod templates should be useful too. @solsson thank you,
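As a hedged sketch of the kind of rule meant here, assuming the broker pods carry an `app: kafka` label (on Kubernetes 1.4/1.5 this had to be expressed through the alpha scheduler annotation; the field shown below is the form later releases adopted):

```yaml
# Pod template fragment: keep two Kafka brokers off the same node.
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values: ["kafka"]
          topologyKey: kubernetes.io/hostname
```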
Thanks @allenj. I did the image change you suggested, but avoided the

@fvigotti I like your suggestions. Will you create two PRs, or should I? I've postponed work on the PetSet Kafka setup because in production we had to use https://github.com/Yolean/kubernetes-kafka/tree/nopetset-kafka due to PetSet being alpha. I will most likely revisit and re-document master as StatefulSet after the release of Kubernetes 1.5.
I would love to give you a PR, but I'm struggling with too much work :( and with no time to do it my PR code would not be very elegant. If I find some spare time in the next days I'll try to do something, but anyway feel free to do it yourself or leave it as it is now :) We will get a better solution anyway once the data-gravity issue gets resolved by k8s :)
Why can't I see the PV path on the machine?

```
/opt/deployment/zk-kafka-petset# zk/pv.sh
persistentvolume "datadir-zoo-0" created
```

It claims that the data path will be "/opt/deployment/zk-kafka-petset/data", and I can see this path taking effect on the Kubernetes dashboard. But why can't I see the path on my local machine?

```
/opt/deployment/zk-kafka-petset# cd /opt/deployment/zk-kafka-petset/data
```
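For context: a hostPath volume exists on the filesystem of whichever node the consuming pod is scheduled to, so the directory appears on that node once a pod has mounted and written to it, not necessarily on the machine where `kubectl` (or `pv.sh`) was run. A hypothetical reconstruction of the kind of PV involved; the exact manifest created by `zk/pv.sh` may differ:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-zoo-0
spec:
  capacity:
    storage: 1Gi            # assumed size
  accessModes:
    - ReadWriteOnce
  hostPath:
    # This path lives on the node that runs the pod, not on the machine
    # where the PV object was created.
    path: /opt/deployment/zk-kafka-petset/data
```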
It might be worth mentioning that if you're using an AWS EBS volume formatted as ext4, there will be a lost+found directory at the root level. If you mount the Kafka log directory at that root, the lost+found directory will confuse Kafka.
Yes, I discovered the same with Google's persistent volumes. |
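One common way around this (a hedged sketch, not necessarily what this repo does) is to mount a subdirectory of the volume via `subPath`, so Kafka never sees the volume root and its `lost+found`; the names `datadir` and `/var/lib/kafka/data` are carried over from the earlier sketch:

```yaml
# Container fragment: keep ext4's lost+found at the volume root out of
# Kafka's log directory by mounting only a subdirectory of the volume.
volumeMounts:
  - name: datadir
    mountPath: /var/lib/kafka/data
    subPath: kafka           # only this subdirectory of the volume is mounted
```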
Hi, I'm playing with your solution and everything works fine, except when you want to update something in the PetSet other than the replica count.

When you want to update something (e.g. a node-affinity annotation), the PetSet must be deleted and recreated. This forces cluster downtime, which is not good, but also isn't the biggest issue.

The backing storage in my cluster is hostPath (I have no other options; Cinder/Gluster are too slow for my cluster), and I've used your same PV and PVC setup. But a PV and PVC cannot be bound to a specific node, and in the current setup the distinct PetSet pods cannot be bound to specific nodes either.

So during a PetSet update all containers get deleted, and when they are recreated, if container $n gets placed on the same node as in the previous run, you get your data back (with datadir-$n). If container $n lands on a different node than before, you end up with two directories on that node, datadir-$n and the datadir-$n of the previous run, and the data for the whole cluster is reinitialized under this new PVC-to-node assignment.

A solution could be something like node-binding the PV/PVC (I don't think that's possible yet on kube 1.4.x; see the sketch below for the kind of node pinning I mean), or a way to update the PetSet without deleting and recreating all its pods, which I also don't think is possible yet.

Am I missing something?
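For illustration of the node binding mentioned above, the only built-in pinning mechanism at the time was per pod spec, e.g. a `nodeSelector` (a minimal sketch, assuming nodes have been labelled by hand; the label name is hypothetical):

```yaml
# Pod template fragment: restrict scheduling to nodes carrying a given label.
# A PetSet has a single pod template shared by all replicas, so this can only
# constrain the whole set to a labelled pool of nodes, not bind pod $n to node $n.
spec:
  nodeSelector:
    kafka-node: "true"       # hypothetical label, applied with `kubectl label node`
```

Which is why the per-pod, per-node binding asked for here was not really expressible on Kubernetes 1.4.x.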