
Tracking issue for e2e testing framework and implementation #1732

Closed · 11 of 23 tasks

chuckha opened this issue Nov 7, 2019 · 7 comments

chuckha commented Nov 7, 2019

Testing framework

Based on the testing proposal, we have a lot of work to do! This issue tracks the work to be done.

Initial work to set patterns

  • Build framework
    • Collect feedback
    • Merge framework
  • Implement CAPD tests

Parallel work after patterns are in place (in order of priority)

Tests to write in the framework that can be reused

  • There is a Kubernetes cluster with one node that passes a healthcheck after creating a properly configured Cluster, InfraCluster, Machine, InfraMachine, and BootstrapConfig (https://github.com/kubernetes-sigs/cluster-api/blob/master/test/framework/one_node_cluster.go; see the sketch after this list).
  • Creating the resources necessary for three control planes will create a three node cluster. (https://github.com/kubernetes-sigs/cluster-api/blob/master/test/framework/multi_node_control_plane.go)
  • Deleting a cluster deletes all resources associated with that cluster including Machines, BootstrapConfigs, InfraMachines, InfraCluster, and generated secrets. (🏃E2e delete resources #1915)
  • Creating a cluster with one control plane and one worker node will result in a cluster with two nodes.
  • The version fields in Machines are respected within the bounds of the Kubernetes skew policy.
  • Creating a control plane machine and a MachineDeployment with two replicas will create a three node cluster with one control plane node and two worker nodes.
  • MachineDeployments do their best to keep Machines in an expected state. For example:
    • Modifying the replica count on a MachineDeployment will change the number of worker nodes and running Machines in the cluster.
    • A Machine managed by a MachineDeployment will be recreated by the MachineDeployment if it is deleted.
  • Optionally, Machines report failures when their underlying InfraMachines report failures.
  • Manage multiple workload clusters in different namespaces.
  • Workload Clusters created pass Kubernetes Conformance. Please see CAPA's implementation.
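
For concreteness, here is a minimal sketch of the resources the first test in this list exercises, assuming the CAPD provider and the v1alpha2 API. All names here are hypothetical examples, not the framework's actual fixtures; the version matches the one used elsewhere in this thread:

# A Cluster and its infrastructure counterpart, plus one control-plane
# Machine with its bootstrap config and infrastructure counterpart.
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: my-cluster
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: DockerCluster
    name: my-cluster
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: DockerCluster
metadata:
  name: my-cluster
---
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: controlplane-0
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
    # Marks this Machine as a control-plane node.
    cluster.x-k8s.io/control-plane: "true"
spec:
  version: v1.15.3
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: DockerMachine
    name: controlplane-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: DockerMachine
metadata:
  name: controlplane-0
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: controlplane-0
spec:
  # Left empty in this sketch; the bootstrap provider fills in defaults.
  clusterConfiguration: {}
  initConfiguration: {}

The test then asserts that the resulting workload cluster has a single node that passes a healthcheck.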

/milestone v0.3.0
/priority important-soon
/assign

@k8s-ci-robot k8s-ci-robot added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Nov 7, 2019
@k8s-ci-robot k8s-ci-robot added this to the v0.3.0 milestone Nov 7, 2019

chuckha commented Nov 14, 2019

Some ideas that came out of #1742:

  • reduce boilerplate
  • consistency

cc @detiber @wfernandes, thanks for the feedback


sb1975 commented Nov 15, 2019

What does this mean?
"Creating the resources necessary for three control planes will create a three node cluster."

Since MachineDeployments do not support control-plane nodes yet, does this mean that, for now, the three control-plane nodes will be created individually using just "kind: Machine" and not MachineDeployments?
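
For illustration, individually created control-plane Machines would look roughly like the sketch below, repeated for controlplane-0, controlplane-1, and controlplane-2. The names and referenced objects are hypothetical examples in the CAPD quickstart style, not confirmed by the maintainers:

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: controlplane-1
  labels:
    cluster.x-k8s.io/cluster-name: capi-quickstart
    # This label marks the Machine as a control-plane node.
    cluster.x-k8s.io/control-plane: "true"
spec:
  version: v1.15.3
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: controlplane-1
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: DockerMachine
    name: controlplane-1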


sb1975 commented Nov 15, 2019

"Creating a control plane machine and a MachineDeployment with two replicas will create a three node cluster with one control plane node and two worker nodes."
^I guess I already tried this.
In the MachineDeployment yaml file, I just changed the replicas: 2.
It resulted in the target(or workload) cluster with 3 nodes ( 1 master + 2 worker).
I also validated that I can run a nginx pod inside the cluster and it works as expected.

apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: capi-quickstart-worker
  labels:
    cluster.x-k8s.io/cluster-name: capi-quickstart
    # Labels beyond this point are for example purposes,
    # feel free to add more or change with something more meaningful.
    # Sync these values with spec.selector.matchLabels and spec.template.metadata.labels.
    nodepool: nodepool-0
spec:
  replicas: 2
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: capi-quickstart
      nodepool: nodepool-0
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: capi-quickstart
        nodepool: nodepool-0
    spec:
      version: v1.15.3
      bootstrap:
        configRef:
          name: capi-quickstart-worker
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
      infrastructureRef:
        name: capi-quickstart-worker
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: DockerMachineTemplate
root@capd:~# kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
NAME                                                      STATUS   ROLES    AGE     VERSION
capi-quickstart-capi-quickstart-controlplane-0            Ready    master   4h6m    v1.15.3
capi-quickstart-capi-quickstart-worker-85cbf8fd8c-mpr2l   Ready    <none>   3h45m   v1.15.3
capi-quickstart-capi-quickstart-worker-85cbf8fd8c-wx75g   Ready    <none>   3h42m   v1.15.3

root@capd:~# kubectl --kubeconfig=./capi-quickstart.kubeconfig get pods -n mynamespace -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE                                                      NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          21m   192.168.61.193   capi-quickstart-capi-quickstart-worker-85cbf8fd8c-mpr2l   <none>           <none>

Can I do something more to help with this one, please?


chuckha commented Nov 23, 2019

@sb1975 I updated the issue to link to the actual tests in the framework as examples to get folks started. Take a look at the existing tests and try to follow the existing patterns, or, if you find something that needs improving, improve it.

@wfernandes

I'm planning on working on this framework test: "Deleting a cluster deletes all resources associated with that cluster, including Machines, BootstrapConfigs, InfraMachines, the InfraCluster, and generated secrets." I'm hoping to pair that work with story #1775 as I complete it.


chuckha commented Jan 23, 2020

Closing this in favor of #2141

The other cases are not important right now and will be added in the future as their importance rises.

/close

@k8s-ci-robot

@chuckha: Closing this issue.

In response to this:

Closing this in favor of #2141

The other cases are not important right now and will be added in the future as their importance rises.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
