Running on a native Google (GCP) Kubernetes Cluster
To run on a native GCP Kubernetes cluster:
- Create a k8s cluster from the Google console or through the CLI.
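For example, from the CLI (a sketch: cluster name, zone, and node count are placeholders to adapt):

```sh
gcloud container clusters create my-cluster \
  --zone europe-west1-b \
  --num-nodes 3
```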
- Go to the Connect section of the GCP console; it provides a gcloud CLI command to set up kubectl to connect to the cluster.
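The command it hands you looks roughly like this (cluster, zone, and project names are placeholders):

```sh
# Fetches credentials and sets kubectl's current context to the new cluster.
gcloud container clusters get-credentials my-cluster \
  --zone europe-west1-b --project my-project
```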
- Create the following service account and cluster role binding for Helm (`kubectl create -f <fileWithFollowingContent>`):
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
```
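After the `kubectl create -f` above, you can quickly check that both objects exist (a verification sketch, not part of the original steps):

```sh
kubectl get serviceaccount tiller --namespace kube-system
kubectl get clusterrolebinding tiller-clusterrolebinding
```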
- Run `helm init` on a machine where you have previously added the `galaxy-helm-repo` Helm repository.
- You might need:

```sh
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
```
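If you prefer to avoid the patch, Helm v2 also lets you attach the service account at initialization time (an equivalent alternative, not from the original page):

```sh
# Equivalent to the patch above: start Tiller under the tiller service account.
helm init --service-account tiller
```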
- Create a single-node file server on GCP: https://console.cloud.google.com/launcher/details/click-to-deploy-images/singlefs?project=phenomenal-gcp-testing&folder&organizationId, checking that NFS serving is activated.
- More documentation here: https://cloud.google.com/launcher/docs/single-node-fileserver?hl=en_GB&_ga=2.9020275.-756831591.1515154233
- Once deployed, add the `no_root_squash` option to `/etc/exports` (you will need to SSH into the machine through the Google console or CLI, and `sudo su`).
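As an illustration, the edit could look like this (the `/data` export path matches the PV manifest below, but the exact line on the fileserver image may differ; if `/data` is already exported, edit that existing line instead of appending):

```sh
# Append an export with no_root_squash (adapt path/options to your VM):
echo '/data *(rw,no_root_squash)' | sudo tee -a /etc/exports
# Re-export the shares so the change takes effect:
sudo exportfs -ra
```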
- Create a PersistentVolume (PV) pointing to the NFS server just created:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  storageClassName: standard
  capacity:
    storage: 20Gi
  # volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data
    server: singlefs-1-vm
```
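Save the manifest (the file name `pv-nfs.yaml` here is just an assumption) and create the PV:

```sh
kubectl create -f pv-nfs.yaml
# The PV should report status "Available" until a claim binds to it:
kubectl get pv pv-nfs
```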
- Run our Helm install process.
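As a minimal sketch (Helm v2 syntax; the chart name `galaxy` and the release name are hypothetical, so use the actual chart published in `galaxy-helm-repo`):

```sh
helm repo update
# Chart name is an assumption, not from the original page:
helm install --name galaxy galaxy-helm-repo/galaxy
```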
Funded by the EC Horizon 2020 programme, grant agreement number 654241