kubernetes_versions.robot
#
# Copyright The Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
*** Settings ***
Documentation     Verify Helm functionality on multiple Kubernetes versions.
...
...               A fresh kind-based cluster is created for each of the
...               Kubernetes versions being tested. An existing kind
...               cluster can be reused by specifying its name in an env
...               var named for the version, for example:
...
...               export KIND_CLUSTER_1_16_1="helm-ac-keepalive-1.16.1"
...               export KIND_CLUSTER_1_15_4="helm-ac-keepalive-1.15.4"
...               export KIND_CLUSTER_1_14_7="helm-ac-keepalive-1.14.7"
...
Library           String
Library           OperatingSystem
Library           ../lib/ClusterProvider.py
Library           ../lib/Kubectl.py
Library           ../lib/Helm.py
Library           ../lib/Sh.py
Suite Setup       Suite Setup
Suite Teardown    Suite Teardown
*** Test Cases ***
#Helm works with Kubernetes 1.16.1
#    Test Helm on Kubernetes version    1.16.1
#Helm works with Kubernetes 1.15.3
#    Test Helm on Kubernetes version    1.15.3
#
[HELM-001] Helm works with Kubernetes
    @{versions} =    Split String    %{CLUSTER_VERSIONS}    ,
    FOR    ${i}    IN    @{versions}
        Set Global Variable    ${version}    ${i}
        Test Helm on Kubernetes version    ${version}
    END
*** Keywords ***
Test Helm on Kubernetes version
    # [Arguments] must be the first setting in the keyword
    [Arguments]    ${kube_version}
    Require cluster    True
    ${helm_version} =    Get Environment Variable    ROBOT_HELM_V3    "v2"
    Pass Execution If    ${helm_version} == 'v2'    Helm v2 not supported. Skipping test.
    Create test cluster with kube version    ${kube_version}
    # Add new test cases here
    Verify --wait flag works as expected
    ClusterProvider.Delete test cluster

Create test cluster with kube version
    [Arguments]    ${kube_version}
    ClusterProvider.Create test cluster with Kubernetes version    ${kube_version}
    ClusterProvider.Wait for cluster
    Should pass    kubectl get nodes
    Should pass    kubectl get pods --namespace=kube-system

Verify --wait flag works as expected
    # Install nginx chart in a good state, using --wait flag
    Sh.Run    helm delete wait-flag-good
    Helm.Install test chart    wait-flag-good    nginx    --wait --timeout=60s
    Helm.Return code should be    0

    # Make sure everything is up-and-running
    Sh.Run    kubectl get pods --namespace=default
    Sh.Run    kubectl get services --namespace=default
    Sh.Run    kubectl get pvc --namespace=default
    Kubectl.Service has IP    default    wait-flag-good-nginx
    Kubectl.Return code should be    0
    Kubectl.Persistent volume claim is bound    default    wait-flag-good-nginx
    Kubectl.Return code should be    0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-ext-    3
    Kubectl.Return code should be    0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-fluentd-es-    1
    Kubectl.Return code should be    0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1-    3
    Kubectl.Return code should be    0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1beta1-    3
    Kubectl.Return code should be    0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-v1beta2-    3
    Kubectl.Return code should be    0
    Kubectl.Pods with prefix are running    default    wait-flag-good-nginx-web-    3
    Kubectl.Return code should be    0

    # Delete good release
    Should pass    helm delete wait-flag-good

    # Install nginx chart in a bad state, using --wait flag
    Sh.Run    helm delete wait-flag-bad
    Helm.Install test chart    wait-flag-bad    nginx    --wait --timeout=60s --set breakme=true

    # Install should return non-zero, as things fail to come up
    Helm.Return code should not be    0

    # Make sure things are NOT up-and-running
    Sh.Run    kubectl get pods --namespace=default
    Sh.Run    kubectl get services --namespace=default
    Sh.Run    kubectl get pvc --namespace=default
    Kubectl.Persistent volume claim is bound    default    wait-flag-bad-nginx
    Kubectl.Return code should not be    0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-ext-    3
    Kubectl.Return code should not be    0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-fluentd-es-    1
    Kubectl.Return code should not be    0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1-    3
    Kubectl.Return code should not be    0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1beta1-    3
    Kubectl.Return code should not be    0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-v1beta2-    3
    Kubectl.Return code should not be    0
    Kubectl.Pods with prefix are running    default    wait-flag-bad-nginx-web-    3
    Kubectl.Return code should not be    0

    # Delete bad release
    Should pass    helm delete wait-flag-bad

Suite Setup
    ClusterProvider.Cleanup all test clusters

Suite Teardown
    ClusterProvider.Cleanup all test clusters
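
The suite above is driven entirely by environment variables: `CLUSTER_VERSIONS` holds a comma-separated list of Kubernetes versions (split on `,` by the `[HELM-001]` test case), optional `KIND_CLUSTER_<version>` variables name existing kind clusters to reuse, and `ROBOT_HELM_V3` selects Helm v3 (the keyword skips itself under v2). A minimal shell sketch of an invocation — the version list and the commented-out `robot` command line are illustrative assumptions, not taken from the source:

```shell
# One kind cluster is created (or reused) per entry in CLUSTER_VERSIONS.
export CLUSTER_VERSIONS="1.16.1,1.15.4"

# Mirror of the suite's comma-split loop, to show which versions would run:
for v in $(echo "$CLUSTER_VERSIONS" | tr ',' ' '); do
    echo "would test Helm on Kubernetes $v"
done

# Optional: reuse an existing kind cluster for a version instead of creating one.
export KIND_CLUSTER_1_16_1="helm-ac-keepalive-1.16.1"

# The suite passes (skips) unless Helm v3 is selected:
export ROBOT_HELM_V3=1

# robot kubernetes_versions.robot   # requires robot, kind, helm, kubectl on PATH
```
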