This guide will describe how to get set up for local development of the Flink Operator. This is most likely useful for people actually developing the operator, but may also be useful for developers looking to develop their applications locally.
Install Minikube
You will want to start minikube with a Kubernetes version greater than 1.16 and at most 1.24, for example:
minikube start --kubernetes-version=v1.24.17
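Once minikube is up, a quick sanity check (assuming `kubectl` is pointed at the minikube context) confirms the cluster is reachable and running the expected version:

```shell
# Confirm minikube started cleanly.
minikube status

# The node should report the Kubernetes version you requested (e.g. v1.24.17).
kubectl get nodes
```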
Optionally, install the Kubernetes Dashboard. It can be a handy complement to the CLI, especially for new users:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl proxy &
$ open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview
$ export GOPATH=~/src/go
(should probably go into your shell's profile)
$ mkdir -p $GOPATH/src/github.com/lyft
$ cd $GOPATH/src/github.com/lyft
$ git clone [email protected]:lyft/flinkk8soperator.git
$ cd flinkk8soperator
$ kubectl create -f deploy/crd.yaml
$ kubectl create -f deploy/role.yaml
$ kubectl create -f deploy/role-binding.yaml
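After applying the manifests, you can verify that the custom resource definition and RBAC objects were registered. The exact resource names come from the manifests, so the commands below grep rather than assume a specific name:

```shell
# Check that the Flink CRD was registered with the API server.
kubectl get crd | grep -i flink

# Confirm the role and role binding exist (whether these are namespaced
# or cluster-scoped depends on the contents of the deploy/ manifests).
kubectl get role,rolebinding,clusterrole,clusterrolebinding --all-namespaces | grep -i flink
```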
In this mode, we run the operator locally (e.g., on a Mac) or inside the IDE and configure it to talk to the local Kubernetes cluster (docker-for-mac or minikube). This is very convenient for development, as we can iterate quickly, use a debugger, etc.
$ go mod download
$ KUBERNETES_CONFIG="$HOME/.kube/config" go run ./cmd/flinkk8soperator/main.go --config=local_config.yaml
This mode more realistically emulates how the operator will run in production; however, the turn-around time for changes is much longer.
First we need to build the docker container for the operator:
$ docker build -t flinkk8soperator .
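If you are running against minikube, the image must be visible to the cluster's container runtime. One common approach (for minikube's Docker-based runtime) is to build directly inside minikube's Docker daemon so no registry push is needed:

```shell
# Point the local docker CLI at minikube's Docker daemon,
# then build so the image is immediately available to the cluster.
eval $(minikube docker-env)
docker build -t flinkk8soperator .

# To point the docker CLI back at the host daemon afterwards:
eval $(minikube docker-env --unset)
```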
Then create the operator cluster resources:
$ kubectl create -f deploy/flinkk8soperator_local.yaml
Then deploy the example wordcount application:

$ kubectl create -f examples/wordcount/flink-operator-custom-resource.yaml
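You can watch the operator reconcile the application by polling the custom resource (the resource name below assumes the CRD's plural is `flinkapplications`; check `kubectl get crd` if it differs in your version):

```shell
# Watch the FlinkApplication resource as the operator drives it through its phases.
kubectl get flinkapplications -w

# Inspect status details and events for a specific application
# (replace {APP_NAME} with the name from the custom resource).
kubectl describe flinkapplication {APP_NAME}
```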
Now you should be able to see two pods (one for the jobmanager and one for the taskmanager) starting:
$ kubectl get pods
You should also be able to access the jobmanager UI at:
http://localhost:8001/api/v1/namespaces/default/services/{APP_NAME}-jm:8081/proxy/#/overview
(note that you will need to be running kubectl proxy for this to work)
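As an alternative to kubectl proxy, you can port-forward directly to the jobmanager service and open the UI at http://localhost:8081 (replace {APP_NAME} as above):

```shell
# Forward local port 8081 to port 8081 of the jobmanager service.
# Leave this running while you use the UI.
kubectl port-forward service/{APP_NAME}-jm 8081:8081
```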
You can tail the logs for the jobmanager (which may be useful for debugging failures) via:
$ kubectl logs -f service/{APP_NAME}-jm
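Assuming the taskmanager pods follow the same naming convention as the jobmanager pods (a `-tm-` infix mirroring `-jm-`), their logs can be tailed the same way:

```shell
# Tail the logs of the first taskmanager pod. The "-tm-" name matching
# is an assumption based on the jobmanager pod naming convention.
kubectl logs -f $(kubectl get pods -o=custom-columns=NAME:.metadata.name | grep "\-tm\-" | head -n 1)
```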
You can open a shell inside the jobmanager container (via kubectl exec, not SSH) by running:
$ kubectl exec -it $(kubectl get pods -o=custom-columns=NAME:.metadata.name | grep "\-jm\-") -- /bin/bash