Volcano-global is deployed on top of Karmada. After deploying Karmada, you need to deploy Volcano on the worker clusters and deploy the three Volcano-global components in the Karmada control plane.

This installation document provides examples based on deploying Karmada with ./hack/local-up-karmada.sh. You can adapt the deployment steps to your own environment.
1. Deploy Karmada

Suggested Karmada version: master

Follow the Karmada getting started guide to deploy Karmada.
# Clone the karmada repo
git clone https://github.com/karmada-io/karmada.git
cd karmada
# Deploy the karmada environment
./hack/local-up-karmada.sh
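If you want to confirm that the member clusters were registered successfully, you can list them from the Karmada control plane (local-up-karmada.sh registers member1, member2 and member3 by default):

# Optional: confirm the member clusters are registered with the Karmada control plane.
export KUBECONFIG=$HOME/.kube/karmada.config
kubectl --context karmada-apiserver get clusters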
2. Deploy Volcano

Suggested Volcano version: 1.10.0

Follow the Volcano installation guide to deploy Volcano to the member clusters. You can install Volcano on all member clusters like this:
# Switch to the member clusters' kubeconfig; Volcano must be installed on every member cluster.
export KUBECONFIG=$HOME/.kube/members.config
# Deploy Volcano to the member clusters.
kubectl --context member1 apply -f https://raw.githubusercontent.com/volcano-sh/volcano/release-1.10/installer/volcano-development.yaml
kubectl --context member2 apply -f https://raw.githubusercontent.com/volcano-sh/volcano/release-1.10/installer/volcano-development.yaml
kubectl --context member3 apply -f https://raw.githubusercontent.com/volcano-sh/volcano/release-1.10/installer/volcano-development.yaml
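To verify the installation, you can check that the Volcano components are running on each member cluster (the release manifest installs them into the volcano-system namespace):

# Optional: verify the Volcano components are running on every member cluster.
kubectl --context member1 get pods -n volcano-system
kubectl --context member2 get pods -n volcano-system
kubectl --context member3 get pods -n volcano-system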
3. Deploy the Kubernetes Reflector to share Karmada's kubeconfig secret with the volcano-global namespace

The Karmada control plane is a standalone apiserver, so we need the kubeconfig secret in the karmada-system namespace to access it. However, since secrets are namespace-scoped resources, we need a plugin to share the target secret with the volcano-global namespace.
# Switch to Karmada host kubeconfig.
export KUBECONFIG=$HOME/.kube/karmada.config
# Deploy the Kubernetes Reflector and share the karmada-webhook-config secret from the karmada-system namespace; it contains the kubeconfig of the Karmada control plane.
kubectl --context karmada-host -n kube-system apply -f https://github.com/emberstack/kubernetes-reflector/releases/download/v7.1.262/reflector.yaml
kubectl --context karmada-host annotate secret karmada-webhook-config \
reflector.v1.k8s.emberstack.com/reflection-allowed="true" \
reflector.v1.k8s.emberstack.com/reflection-auto-namespaces="volcano-global" \
reflector.v1.k8s.emberstack.com/reflection-auto-enabled="true" \
--namespace=karmada-system
4. Build and deploy the volcano-global components

You need to build the images from the root directory of the project.
git clone https://github.com/volcano-sh/volcano-global.git
cd volcano-global
# Build the components.
TAG=1.0 make images
# Load the image to karmada host cluster.
kind load docker-image --name karmada-host volcanosh/volcano-global-controller-manager:1.0
kind load docker-image --name karmada-host volcanosh/volcano-global-webhook-manager:1.0
# Switch to Karmada host kubeconfig.
export KUBECONFIG=$HOME/.kube/karmada.config
# Create the volcano-global namespace in the Karmada apiserver first; it is used for leader election.
kubectl --context karmada-apiserver apply -f docs/deploy/volcano-global-namespace.yaml
# Apply the component deployment yaml.
kubectl --context karmada-host apply -f docs/deploy/volcano-global-namespace.yaml
kubectl --context karmada-host apply -f docs/deploy/volcano-global-controller-manager.yaml
kubectl --context karmada-host apply -f docs/deploy/volcano-global-webhook-manager.yaml
# Apply the webhook configuration.
kubectl --context karmada-apiserver apply -f docs/deploy/volcano-global-webhooks.yaml
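As a quick sanity check, you can confirm that the volcano-global components are running in the host cluster and that the Reflector has copied the kubeconfig secret into the volcano-global namespace created above:

# Optional: verify the volcano-global deployments and the reflected kubeconfig secret.
kubectl --context karmada-host -n volcano-global get pods
kubectl --context karmada-host -n volcano-global get secret karmada-webhook-config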
5. Apply the required Volcano CRDs to the Karmada control plane

In addition to the Karmada CRDs, volcano-global also requires some Volcano CRDs to enable the queue capability of the volcano-global dispatcher.

Required Volcano CRD list:
- batch.volcano.sh_jobs
- scheduling.volcano.sh_queues
- bus.volcano.sh_commands
# Switch to Karmada host kubeconfig.
export KUBECONFIG=$HOME/.kube/karmada.config
# Apply the required CRDs to the Karmada control plane.
kubectl --context karmada-apiserver apply -f https://github.com/volcano-sh/volcano/raw/release-1.10/installer/helm/chart/volcano/crd/bases/batch.volcano.sh_jobs.yaml
kubectl --context karmada-apiserver apply -f https://github.com/volcano-sh/volcano/raw/release-1.10/installer/helm/chart/volcano/crd/bases/scheduling.volcano.sh_queues.yaml
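The list above also names bus.volcano.sh_commands. Assuming it follows the same release-1.10 manifest layout as the other CRDs, it can be applied the same way:

kubectl --context karmada-apiserver apply -f https://github.com/volcano-sh/volcano/raw/release-1.10/installer/helm/chart/volcano/crd/bases/bus.volcano.sh_commands.yaml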
6. Apply the resource interpreter customizations

We need to add a custom resource interpreter for the Volcano Job so that its status is synchronized back to the Karmada control plane.
# Switch to Karmada host kubeconfig.
export KUBECONFIG=$HOME/.kube/karmada.config
# Apply the Volcano Job and Queue resource interpreter customization configurations.
kubectl --context karmada-apiserver apply -f docs/deploy/vcjob-resource-interpreter-customization.yaml
kubectl --context karmada-apiserver apply -f docs/deploy/queue-resource-interpreter-customization.yaml
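For orientation, a resource interpreter customization is a Karmada ResourceInterpreterCustomization object that tells the control plane how to handle a custom resource, for example how to reflect its status from the member clusters. The following is only an illustrative sketch (the name and Lua script are assumptions); the actual configuration shipped in docs/deploy/vcjob-resource-interpreter-customization.yaml is more complete:

# Illustrative sketch only; the real manifest is docs/deploy/vcjob-resource-interpreter-customization.yaml.
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: vcjob-status-reflection-example   # hypothetical name
spec:
  target:
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
  customizations:
    statusReflection:
      luaScript: |
        function ReflectStatus(observedObj)
          -- Reflect the whole observed status back to the Karmada control plane.
          return observedObj.status
        end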
7. Apply the default Queue ClusterPropagationPolicy

By default, we distribute all Queues from the control plane to every worker cluster to prevent tasks from being dispatched to a worker cluster that has no corresponding Queue. You can modify this ClusterPropagationPolicy according to your own needs.

Note that this ClusterPropagationPolicy is protected with a label to prevent unintended consequences from accidental deletion.
# Switch to Karmada host kubeconfig.
export KUBECONFIG=$HOME/.kube/karmada.config
# Apply the default Queue ClusterPropagationPolicy.
kubectl --context karmada-apiserver apply -f docs/deploy/volcano-global-all-queue-propagation.yaml
# Protect the ClusterPropagationPolicy.
kubectl --context karmada-apiserver label clusterpropagationpolicy volcano-global-all-queue-propagation resourcetemplate.karmada.io/deletion-protected=Always
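For reference, the policy applied above is a ClusterPropagationPolicy that selects all Queue objects and places them on the member clusters. A rough sketch could look like the following (the member names are assumptions taken from the local-up-karmada example environment; the real manifest is docs/deploy/volcano-global-all-queue-propagation.yaml):

# Illustrative sketch only; see docs/deploy/volcano-global-all-queue-propagation.yaml for the real policy.
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: volcano-global-all-queue-propagation
spec:
  resourceSelectors:
    - apiVersion: scheduling.volcano.sh/v1beta1
      kind: Queue
  placement:
    clusterAffinity:
      clusterNames:   # assumed member names from the local-up-karmada example
        - member1
        - member2
        - member3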
8. Deploy an example job

# Switch to Karmada host kubeconfig.
export KUBECONFIG=$HOME/.kube/karmada.config
# Apply the example job, then watch the status in the member clusters.
kubectl --context karmada-apiserver apply -f docs/deploy/exmaple/.

You will see output like:
➜ deploy git:(main) ✗ kubectl --context karmada-apiserver get vcjob
NAME STATUS MINAVAILABLE RUNNINGS AGE
mindspore-cpu Running 1 6 4m4s
➜ deploy git:(main) ✗ kubectl --context member1 get pods
NAME READY STATUS RESTARTS AGE
mindspore-cpu-pod-0 1/1 Running 0 2m24s
mindspore-cpu-pod-1 1/1 Running 0 2m24s
mindspore-cpu-pod-2 1/1 Running 0 2m24s
mindspore-cpu-pod-3 1/1 Running 0 2m24s
mindspore-cpu-pod-4 1/1 Running 0 2m24s
mindspore-cpu-pod-5 1/1 Running 0 2m24s
mindspore-cpu-pod-6 1/1 Running 0 2m24s
mindspore-cpu-pod-7 1/1 Running 0 2m24s
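You can also confirm that the default Queue was propagated to the member clusters by the ClusterPropagationPolicy, for example:

# Optional: check that the Queues exist on a member cluster.
export KUBECONFIG=$HOME/.kube/members.config
kubectl --context member1 get queues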