-
To share storage amongst multiple pods, the solution from DigitalOcean seems to be to run your own Ceph implementation on a Kubernetes node, which pods then claim their volumes against, bypassing the limitation imposed by DigitalOcean's CSI driver.
-
I wonder if the attraction of a single place to be able to delete packages (which happens maybe once a year) is distracting us from keeping things simple and resilient (we don't want to be able to delete stuff easily!). We currently have all CPAN data on all physical servers and use […]. I'm trying to get my head around pods: if we have one or more webserver containers and several cron jobs, we could run all those containers together as a pod. But actually the pod must be […]. We could run 3 […].
We want a SINGLE container/cron that actually creates jobs in response to RRRWatcher (we call this […]). Does that make sense?
-
Now that I understand what you're looking for in storage a little more, we'll definitely want to implement the doks-rook-ceph example from above. Pods combine multiple containers, but you can't have things like a cronjob in the same pod; a cronjob creates its own pod. We could have a separate container in a pod running an application that sleeps for a set amount of time, but we lose some of the features. Using the doks-rook-ceph implementation is definitely cleaner, and with it we can allow multiple pods to write to the same filesystem (as well as only one to write and many others to read, which can be desirable when updating things with cron).
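For concreteness, here's a minimal sketch of the kind of claim that becomes possible once the doks-rook-ceph setup is in place. The storage class name `rook-cephfs` and the claim name/size are assumptions, not taken from the example itself:

```yaml
# Hypothetical PVC sketch: a shared-filesystem claim against a Rook/Ceph
# storage class. The class name "rook-cephfs" is an assumption and will
# depend on how the filesystem is named when Rook is installed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cpan-shared
spec:
  accessModes:
    - ReadWriteMany        # many pods can mount read/write at once
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 100Gi
```

Pods that should only read could mount this claim with `readOnly: true` in their volumeMounts, which gives the one-writer/many-readers pattern mentioned above.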
-
**Ways of managing storage in k8s**

**Rook/Ceph Object Storage Daemon**

Rook/Ceph is set up with access to a volume (local disk, or mounted) on each node (VM or physical server) in the cluster. It then automatically replicates files between every node in the cluster and allows multiple containers to access the file system. Something like this:

```mermaid
graph TD
subgraph Node 3
B[Rook/Ceph Object Storage Daemon]
C[Client Container 1]
D[Client Container 2]
B --> G[Rook/Ceph MON]
G --> B
B --> F[File Share]
F <--> C
F <--> D
end
subgraph Node 2
I[Rook/Ceph Object Storage Daemon]
J[Client Container 3]
K[Client Container 4]
I --> M[Rook/Ceph MON]
M --> I
I --> N[File Share]
N <--> J
N <--> K
end
subgraph Node 1
P[Rook/Ceph Object Storage Daemon]
Q[Client Container 5]
R[Client Container 6]
P --> T[Rook/Ceph MON]
T --> P
P --> U[File Share]
Q <--> U
R <--> U
end
F -- AUTOMATED DATA SYNC --> N[File Share]
N -- AUTOMATED DATA SYNC --> U[File Share]
U -- AUTOMATED DATA SYNC --> F[File Share]
```

**Share volume(s)**

A volume can be set up on a node and then mounted (volumeMount), e.g. as on the grep node, into each container that needs access. This is how something like ElasticSearch (which does its own replication between nodes in the cluster) would be set up:

```mermaid
graph TD;
Cluster --> Node1;
Cluster --> Node2;
Cluster --> Node3;
Node1 --> Volume1;
Node2 --> Volume2;
Node3 --> Volume3;
Volume1 -->|Mount| Container1A;
Volume1 -->|Mount| Container1B;
Volume2 -->|Mount| Container2A;
Volume2 -->|Mount| Container2B;
Volume3 -->|Mount| Container3A;
Volume3 -->|Mount| Container3B;
```

Next I will look at various options for actual volumes we could add to the nodes, which can then be shared with the cluster.
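As a rough illustration of the volumeMount approach in the second diagram (not taken from our manifests; names and paths are hypothetical), one node-local volume can be mounted into several containers of the same pod:

```yaml
# Illustrative only: one node-local volume (here a hostPath, but it could be
# a mounted Hetzner volume) shared by two containers in the same pod.
apiVersion: v1
kind: Pod
metadata:
  name: node-volume-demo
spec:
  containers:
    - name: container-a
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: node-data
          mountPath: /data
    - name: container-b
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: node-data
          mountPath: /data
          readOnly: true          # second container only reads
  volumes:
    - name: node-data
      hostPath:
        path: /mnt/volume1        # hypothetical mount point of the attached volume
        type: Directory
```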
-
Volume options
Hetzner VM pricing: we are currently on CX31s + 3 x 35G volumes (for grep.mc). Note: you can go beyond the table below by selecting the […]
-
Current servers' mounted disk usage (this is bm-mc-01)
We may need several types of storage...
Suggestion:
- Short term: […]
- Longer term (once we are ready to move […])
-
- Each instance in the Hetzner cluster should have a volume mounted of the same size
- Another volume on each node to hold ElasticSearch (should we not move to hosted ElasticSearch)
- The […]
-
Grep is using storageClassName: local-path in its PersistentVolumeClaim. Looking at https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner, it seems this is what we will always use. So if I have this right, the pvc.yaml is saying "this service should get some disk space". But what I can't quite get my head around is that each container wanting disk space has to specify the mountPath, which seems like duplication all over the place (cronjob deployment.yaml, deployment.yaml a second time). Is this just how it's done, or have I missed something?
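For reference, a claim using the local-path storage class would look roughly like the sketch below; the name and size are placeholders rather than grep's actual values:

```yaml
# Rough sketch of a PersistentVolumeClaim using the local-path provisioner,
# similar to what the grep pvc.yaml appears to describe.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grep-data
spec:
  accessModes:
    - ReadWriteOnce          # local-path volumes are single-node
  storageClassName: local-path
  resources:
    requests:
      storage: 35Gi
```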
-
Yes, each container has to specify the mountPath, as that's where the container wants the volume mounted within the container. For example, […]. You can think of the storage class as the driver to use when accessing the disk. The current grep container is using local disk, which mounts a volume from the Kubernetes node into the container. There are others that mount volumes in different ways. When Rook is installed, it will allow for a CephFS storage class, which will allow a container to mount a filesystem regardless of which node it is on. So no, going forward we will only use […]
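A sketch of the other half, showing why the mountPath reappears in every workload: the pod template declares the claim once under `volumes`, and each container then says where it wants it mounted. All names here are hypothetical:

```yaml
# The pod lists the claim once under "volumes"; each container that wants it
# states its own mountPath. A CronJob's pod template repeats the same two
# stanzas, which is the duplication noted in the question.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: grep-data          # must match a volume name below
              mountPath: /var/lib/grep # where THIS container sees the data
      volumes:
        - name: grep-data
          persistentVolumeClaim:
            claimName: grep-data       # the PVC defined in pvc.yaml
```

So the PVC says "this much disk with this storage class exists", and each pod template then says where to put it, which is why the mountPath shows up again in every deployment and cronjob that mounts it.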
-
DO: accessModes must be set to ReadWriteOnce. The other parameters, ReadOnlyMany and ReadWriteMany, are not supported by DigitalOcean volumes. See the Kubernetes documentation for more about accessModes.
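In practice that means a DigitalOcean block-storage claim can only be a single-writer volume. Assuming the usual `do-block-storage` storage class name from the DO CSI driver, it would look something like:

```yaml
# DigitalOcean block-storage claim: only ReadWriteOnce is accepted, so the
# volume can be mounted read/write in one place at a time. The class name
# "do-block-storage" is the commonly used default and is assumed here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 10Gi
```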
-
DigitalOcean offers both Object (think S3) and Block (think an attached disk) storage.
Block storage can contain multiple volumes; each volume can only be mounted to one pod at a time via a Kubernetes [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) declaration. This allows the volume to follow the pod no matter which Kubernetes node the pod happens to be scheduled to.
For temporary storage, the ephemeral `emptyDir` volume declaration can be used on the pod. These volumes start empty, so they need to be populated on pod creation.
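A minimal `emptyDir` sketch (hypothetical names), showing scratch space that lives and dies with the pod and therefore has to be repopulated each time the pod starts:

```yaml
# Scratch space shared by the pod's containers; contents are lost when the
# pod is removed, so anything needed there must be regenerated on start-up.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "date > /scratch/started && sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      emptyDir: {}
```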