- Edit config to match your GKE project/zone
- Source helpers
- Generate k8s (run `gke_glusterfs_heketi_generate_k8s`)
- Build docker image (run `gke_glusterfs_heketi_build_image`)
- Push docker image (run `gke_glusterfs_heketi_push_image`)
- Create a cluster (run `gke_glusterfs_heketi_create_cluster` if you want)
- Configure cluster permissions (RBAC) (run `gke_glusterfs_heketi_configure_rbac`)
- Deploy the `Job` within the cluster (run `gke_glusterfs_heketi_deploy_glusterfs_heketi`)
- Wait for it to finish (tail the logs if you want with `gke_glusterfs_heketi_tail_job_logs`)

NOTE: This takes forever. The script deploys glusterfs by running `gk-deploy -g` inside a job, and it currently has to wait for that job to time out before proceeding to create the necessary firewall rules and storage class. A sketch of the full sequence follows.
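Concretely, a run looks something like the sketch below. The helper function names are the ones documented above; the `helpers.sh` filename is an assumption, so adjust it to whatever file in this repo actually defines them.

```sh
# Sketch of a full run; the helpers filename is assumed.
source ./helpers.sh

gke_glusterfs_heketi_generate_k8s             # generate the k8s manifests
gke_glusterfs_heketi_build_image              # build the docker image
gke_glusterfs_heketi_push_image               # push it to your registry
gke_glusterfs_heketi_create_cluster           # optional: create the GKE cluster
gke_glusterfs_heketi_configure_rbac           # set up RBAC for the job
gke_glusterfs_heketi_deploy_glusterfs_heketi  # deploy the Job

# In another terminal, watch it run:
gke_glusterfs_heketi_tail_job_logs
```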
You can deploy the example k8s (mariadb statefulset) to test that everything works:

```sh
kubectl apply -f k8s-example
```

If the mariadb pod gets stuck in "pending", you may need to recreate the storage class (I'm not sure why this happens). This can be taken care of with `gke_glusterfs_heketi_if_storage_class_not_found_during_k8s_example_run_me`.
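To check whether you've hit that case, something like the following should surface it. The pod name `mariadb-0` is an assumption based on statefulset naming; adjust it to the example's actual manifests.

```sh
# Is the pod stuck in Pending?
kubectl get pods                 # look for mariadb-* stuck in Pending
kubectl describe pod mariadb-0   # check the Events section for storage class errors

# If the events mention a missing storage class, recreate it:
gke_glusterfs_heketi_if_storage_class_not_found_during_k8s_example_run_me
```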
- To tear everything down, run `gke_glusterfs_heketi_delete_cluster_and_disks`
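After teardown you can double-check that nothing billable is left behind; both are standard gcloud commands:

```sh
gcloud container clusters list   # the deleted cluster should be gone
gcloud compute disks list        # the GlusterFS persistent disks should be gone
```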
NOTE: All of the following is automated by the deploy job; it's included here purely for documentation purposes.

- Create a cluster with at least 3 nodes
- Create persistent disks and attach them to the nodes
- Load the necessary kernel modules for GlusterFS and install `glusterfs-client` on the host machines
- Generate the storage network topology
- Create the necessary firewall rules
- Run `gk-deploy -g` to deploy the glusterfs daemonset and heketi
- Change the heketi service from `ClusterIP` to `NodePort` (persistent volume claims will otherwise fail with i/o timeouts)
- Update the firewall rules to allow the new heketi node port
- Deploy the heketi/glusterfs storage class using `<any node ip>:<heketi nodeport>` (a sketch of these last steps follows this list)
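To make those last three steps concrete, here's a minimal sketch. It assumes the heketi service is named `heketi` in the current namespace and names the storage class `glusterfs-heketi`; those names, and the use of the first node's external IP, are assumptions rather than what the deploy job necessarily does.

```sh
# 1. Switch the heketi service to NodePort (assumed service name "heketi"):
kubectl patch svc heketi -p '{"spec": {"type": "NodePort"}}'

# 2. Pick any node IP and read back the assigned node port:
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
HEKETI_PORT=$(kubectl get svc heketi -o jsonpath='{.spec.ports[0].nodePort}')

# 3. Create the storage class pointing at <any node ip>:<heketi nodeport>:
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-heketi   # assumed name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://${NODE_IP}:${HEKETI_PORT}"
EOF
```

Routing provisioning requests through a node port is what makes the `resturl` reachable from outside the cluster network, which is why the service can't stay `ClusterIP`.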