- Download CDI
- Lint, Test, Build
- Submit PRs
- Releases
- Vendoring Dependencies
- S3-compatible client setup:
To download the source directly, simply:

```
$ go get -u kubevirt.io/containerized-data-importer
```
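If you prefer to work from a plain git clone (as in the functional-test setup below), the following is an equivalent, commonly used layout; the GOPATH location is an assumption, adjust to your own workspace:

```
# Clone into the conventional GOPATH import path (adjust to your own workspace)
$ mkdir -p $GOPATH/src/kubevirt.io
$ cd $GOPATH/src/kubevirt.io
$ git clone https://github.com/kubevirt/containerized-data-importer.git
$ cd containerized-data-importer
```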
GNU Make is used to drive a set of scripts that handle linting, testing, compiling, and containerizing. Executing the scripts directly is not supported at present.
NOTE: Standard builds require a running Docker daemon!
The standard workflow is performed inside a helper container to normalize the build and test environment for all devs. Building in the host environment is supported by the Makefile, but is not recommended.
Docker builds may be disabled by setting DOCKER=0; e.g.

```
$ make all DOCKER=0
```

Running

```
$ make all
```

executes the full workflow. For granular control of the workflow, several Make targets are defined:
- `all`: cleans up previous build artifacts, compiles all CDI packages, and builds containers
- `apidocs`: generate client-go code (same as `make generate`) and swagger docs
- `build`: compile all CDI binary artifacts and generate controller and operator manifests
- `clean`: clean up previous build artifacts
- `cluster-up`: start a default Kubernetes or OpenShift cluster. Set the KUBEVIRT_PROVIDER environment variable to either 'k8s-1.18' or 'os-3.11.0-crio' to select the type of cluster. Set KUBEVIRT_NUM_NODES to something higher than 1 to have more than one node.
- `cluster-down`: stop the cluster; running `make cluster-down && make cluster-up` basically restarts the cluster into an empty, fresh state
- `cluster-down-purge`: `cluster-down` plus cleanup of all cached images from the docker registry. Accepts the make variable DOCKER_PREFIX and removes all images of the specified repository; if not specified, removes the localhost repository of the current cluster instance.
- `cluster-sync`: `make cluster-sync-cdi` followed by `make cluster-sync-test-infra`
- `cluster-sync-cdi`: builds the controller/importer/cloner and pushes them into a running cluster. The cluster must be up before running a cluster sync. Also generates a manifest and applies it to the running cluster after pushing the images to it.
- `cluster-sync-test-infra`: pushes the test-infra pods into a running cluster
- `cluster-clean-cdi`: cleans all cdi resources, but not test-infra (cdi.kubevirt.io/testing labeled), the cdi namespace, or the operator manifest
- `cluster-clean-test-infra`: cleans all test-infra resources (cdi.kubevirt.io/testing labeled)
- `deps-update`: runs `go mod tidy` and `go mod vendor`
- `format`: executes `shfmt`, `goimports`, and `go vet` on all CDI packages; writes back to the source files
- `generate`: generate client-go deepcopy functions, clientset, listers, and informers
- `generate-verify`: generate client-go deepcopy functions, clientset, listers, and informers, and validate codegen
- `goveralls`: run the code coverage tracking system
- `manifests`: generate cdi-controller and operator manifests in `_out/manifests/`. Accepts the make variables DOCKER_TAG, DOCKER_PREFIX, VERBOSITY, PULL_POLICY, CSV_VERSION, QUAY_REPOSITORY, and QUAY_NAMESPACE
- `publish`: CI ONLY - this recipe is not intended for use by developers
- `push`: compiles, builds, and pushes to the repo passed in `DOCKER_PREFIX=<my repo>`
- `release-description`: generate a release announcement detailing changes between 2 commits (typically tags). Expects `RELREF` and `PREREF` to be set - e.g. `$ make release-description RELREF=v1.1.1 PREREF=v1.1.1-alpha.1`
- `test`: execute all tests (NOTE: `WHAT` is expected to match the go cli pattern for paths, e.g. `./pkg/...`. This differs slightly from the rest of the `make` targets)
- `test-unit`: execute all tests under `./pkg/...`
- `test-functional`: execute functional tests under `./tests/...`. Additional test flags can be passed to the test binary via the TEST_ARGS variable; see below for an example and restrictions.
- `test-lint`: runs `gofmt` and `golint` tests against src files
- `vet`: lint all CDI packages
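As an example of chaining these targets, a typical granular iteration might look like the following; the package path is only an illustration:

```
# Format and vet the tree, run the tests for one package, then rebuild binaries
$ make format
$ make test WHAT=pkg/importer
$ make build
```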
Several variables are provided to alter the targets of the above Makefile recipes. These may be passed to a target as `$ make VARIABLE=value target`.
- `WHAT`: The path from the repository root to a target directory (e.g. `make test WHAT=pkg/importer`)
- `DOCKER_PREFIX`: (default: kubevirt) Set the repo globally for image and manifest creation
- `DOCKER_TAG`: (default: latest) Set global version tags for image and manifest creation
- `VERBOSITY`: (default: 1) Set global log level verbosity
- `PULL_POLICY`: (default: IfNotPresent) Set the global CDI pull policy
- `TEST_ARGS`: A list of additional ginkgo flags to be passed to functional tests. The string "--test-args=" must prefix the variable value. For example: `make TEST_ARGS="--test-args=-ginkgo.noColor=true" test-functional >& foo`.
  Note: the following extra flags are not supported in TEST_ARGS: -kubeurl, -cdi-namespace, -kubeconfig, -kubectl-path, since these flags are overridden by the hack/build/run-functional-tests.sh script. To change the default settings for these values, the KUBE_URL, CDI_NAMESPACE, KUBECONFIG, and KUBECTL variables, respectively, must be set.
- `RELREF`: Required by `release-description`. Must be a commit or tag. Should be more recent than `PREREF`.
- `PREREF`: Required by `release-description`. Must also be a commit or tag. Should be older than `RELREF`.
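As a combined, purely illustrative example, the repository and tag below are placeholders for your own registry:

```
# Build and push images to a personal registry with a custom tag,
# then generate manifests that reference the same repo and tag
$ make push DOCKER_PREFIX=quay.io/<your-user> DOCKER_TAG=devel
$ make manifests DOCKER_PREFIX=quay.io/<your-user> DOCKER_TAG=devel
```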
If using a standard bare-metal/local laptop RHEL/KVM environment where nested virtualization is supported, then the standard kubevirtci framework can be used.
Environment Variables and Supported Values
| Env Variable | Default | Additional Values |
|---|---|---|
| KUBEVIRT_PROVIDER | k8s-1.18 | k8s-1.17, os-3.11.0-crio |
| KUBEVIRT_STORAGE | none | ceph, hpp, nfs |
| KUBEVIRT_PROVIDER_EXTRA_ARGS | | |
| NUM_NODES | 1 | 2-5 |
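For example, to bring up a two-node cluster on the OpenShift CRI-O provider (values taken from the table and the `cluster-up` target description above):

```
$ export KUBEVIRT_PROVIDER=os-3.11.0-crio
$ export KUBEVIRT_NUM_NODES=2
$ make cluster-up
```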
To Run Standard cluster-up/kubevirtci Tests
```
# make cluster-up
# make cluster-sync
# make test-functional
```
To run specific functional tests, you can leverage ginkgo command line options as follows:

```
# make TEST_ARGS="--test-args=-ginkgo.focus=<test_suite_name>" test-functional
```

E.g. to run the tests in transport_test.go:

```
# make TEST_ARGS="--test-args=-ginkgo.focus=Transport" test-functional
```
Clean Up

```
# make cluster-down
```

Clean Up with docker container cache cleanup: this removes all container images from the local registry and frees a considerable amount of disk space. Note the caveat that a subsequent cluster-sync will take longer, since it will have to fetch all the data again.

```
# make cluster-down-purge
```
If running in a non-standard environment such as Mac or Cloud where the kubevirtci framework is not supported, then you can use the following example to run Functional Tests.
- Stand up a Kubernetes cluster (local-up-cluster.sh/kubeadm/minikube/etc...)

- Clone or get the kubevirt/containerized-data-importer repo

- Run the CDI controller manifests

  - To generate the latest manifests:

    ```
    # make manifests
    ```

    To customize environment variables, see the make targets above.

  - Run the generated manifests. There are two options: deploy CDI directly via cdi-controller.yaml, or deploy it via the operator, as shown below.
```
# kubectl create -f ./_out/manifests/cdi-controller.yaml
namespace/cdi created
customresourcedefinition.apiextensions.k8s.io/datavolumes.cdi.kubevirt.io created
customresourcedefinition.apiextensions.k8s.io/cdiconfigs.cdi.kubevirt.io created
clusterrole.rbac.authorization.k8s.io/cdi created
clusterrolebinding.rbac.authorization.k8s.io/cdi-sa created
clusterrole.rbac.authorization.k8s.io/cdi-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/cdi-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/cdi-apiserver-auth-delegator created
serviceaccount/cdi-sa created
deployment.apps/cdi-deployment created
configmap/cdi-insecure-registries created
serviceaccount/cdi-apiserver created
rolebinding.rbac.authorization.k8s.io/cdi-apiserver created
role.rbac.authorization.k8s.io/cdi-apiserver created
rolebinding.rbac.authorization.k8s.io/cdi-extension-apiserver-authentication created
role.rbac.authorization.k8s.io/cdi-extension-apiserver-authentication created
service/cdi-api created
deployment.apps/cdi-apiserver created
service/cdi-uploadproxy created
deployment.apps/cdi-uploadproxy created
```
```
# ./cluster-up/kubectl.sh apply -f "./_out/manifests/release/cdi-operator.yaml"
namespace/cdi created
customresourcedefinition.apiextensions.k8s.io/cdis.cdi.kubevirt.io created
configmap/cdi-operator-leader-election-helper created
clusterrole.rbac.authorization.k8s.io/cdi.kubevirt.io:operator created
serviceaccount/cdi-operator created
clusterrole.rbac.authorization.k8s.io/cdi-operator-cluster-permissions created
clusterrolebinding.rbac.authorization.k8s.io/cdi-operator created
deployment.apps/cdi-operator created

# ./cluster-up/kubectl.sh apply -f "./_out/manifests/release/cdi-cr.yaml"
cdi.cdi.kubevirt.io/cdi created
```
- Build and run the functional test servers. In order to run functional tests, the servers below have to be running:

  - host-file-server is required by the functional tests and provides an endpoint server for image files and s3 buckets
  - registry-server is required by the functional tests and provides an endpoint server for container images.
    Note: for this server to run, the following setting is required on each cluster node:

    ```
    sysctl -w user.max_user_namespaces=1024
    ```
  Build and push to the registry:

  ```
  # DOCKER_PREFIX=<repo> DOCKER_TAG=<tag> make docker-functest-images
  ```

  Generate manifests:

  ```
  # DOCKER_PREFIX=<repo> DOCKER_TAG=<docker tag> PULL_POLICY=<pull policy> VERBOSITY=<verbosity> make manifests
  ```

  Run the servers:

  ```
  # ./cluster-up/kubectl.sh apply -f ./_out/manifests/bad-webserver.yaml
  # ./cluster-up/kubectl.sh apply -f ./_out/manifests/test-proxy.yaml
  # ./cluster-up/kubectl.sh apply -f ./_out/manifests/file-host.yaml
  # ./cluster-up/kubectl.sh apply -f ./_out/manifests/registry-host.yaml
  # ./cluster-up/kubectl.sh apply -f ./_out/manifests/imageio.yaml
  ```
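  Before moving on, you can check that the test server pods came up; the cdi namespace below is an assumption based on the default CDI_NAMESPACE, adjust it if you changed that value:

  ```
  # ./cluster-up/kubectl.sh get pods -n cdi
  ```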
- Run the tests

  ```
  # make test-functional
  ```

- If you encounter test errors and are following the above steps, try:

  ```
  # make clean && make docker
  ```

  then redeploy the manifests above and re-run the tests.
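When running against a non-kubevirtci cluster you will typically also need to point the test runner at your own cluster. Per the TEST_ARGS note above, this is done through the KUBE_URL, CDI_NAMESPACE, KUBECONFIG, and KUBECTL variables rather than test flags; the values below are placeholders for your environment:

```
# Point the functional test runner at an existing cluster
# (paths and URL are placeholders - adjust to your setup)
$ KUBECONFIG=$HOME/.kube/config \
  KUBECTL=$(which kubectl) \
  KUBE_URL=https://<your-apiserver>:6443 \
  make test-functional
```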
All PRs should originate from forks of kubevirt.io/containerized-data-importer. Work should not be done directly in the upstream repository. Open new working branches from main/HEAD of your forked repository and push them to your remote repo. Then submit PRs of the working branch against the upstream main branch.
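A typical sequence looks something like the following sketch; the remote names and branch name are conventions rather than requirements:

```
# Clone your fork, track upstream, branch from upstream main, and push the branch to your fork
$ git clone https://github.com/<your-user>/containerized-data-importer.git
$ cd containerized-data-importer
$ git remote add upstream https://github.com/kubevirt/containerized-data-importer.git
$ git fetch upstream
$ git checkout -b my-feature upstream/main
# ...commit your changes...
$ git push origin my-feature
```

Then open a PR from my-feature against the upstream main branch.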
Release practices are described in the release doc.
This project uses go modules as its dependency manager. At present, all project dependencies are vendored, so using `go mod` directly is unnecessary in the normal workflow. Go modules automatically scan and vendor dependencies during the build process; you can also trigger this manually by running `make deps-update`.
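If you do need to update a dependency, a minimal sketch (the module path and version are placeholders) is:

```
# Pull in the new dependency version, then re-tidy and re-vendor via the make target
$ go get <module-path>@<version>
$ make deps-update
```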
S3-compatible client setup:

`$HOME/.aws/credentials`:

```
[default]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret>
```
`$HOME/.mc/config.json`:

```json
{
  "version": "8",
  "hosts": {
    "s3": {
      "url": "https://s3.amazonaws.com",
      "accessKey": "<your-access-key>",
      "secretKey": "<your-secret>",
      "api": "S3v4"
    }
  }
}
```
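Once the alias is configured, you can sanity-check access with the MinIO client; the bucket name below is a placeholder:

```
# List what is reachable through the "s3" alias, then upload a test object
$ mc ls s3
$ mc cp ./some-disk.img s3/<your-bucket>/
```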