
Deploy apps on both primary and secondary cluster #9123

Merged
9 commits merged on Sep 4, 2024

Conversation

Shrivaibavi (Contributor)

No description provided.

@Shrivaibavi requested a review from a team as a code owner on January 8, 2024 15:54
@pull-request-size bot added the size/L (PR that changes 100-499 lines) label on Jan 8, 2024
@Shrivaibavi added the Needs Testing (Run tests and provide logs link) label on Jan 8, 2024
Review threads (resolved): ocs_ci/ocs/dr/dr_workload.py, tests/conftest.py, ocs_ci/helpers/dr_helpers.py
@Shrivaibavi requested a review from a team as a code owner on February 19, 2024 12:27
@pull-request-size bot added the size/XL label and removed the size/L (PR that changes 100-499 lines) label on Feb 19, 2024
Review threads (resolved): ocs_ci/helpers/dr_helpers.py, ocs_ci/helpers/dr_helpers_ui.py, ocs_ci/ocs/dr/dr_workload.py, tests/conftest.py
@Shrivaibavi changed the title from "[WIP] Deploy apps on both primary and secondary cluster" to "Deploy apps on both primary and secondary cluster" on Aug 6, 2024
Review thread (resolved): tests/conftest.py
@Shrivaibavi requested a review from am-agrawa on August 20, 2024 10:59
Review threads (resolved): ocs_ci/helpers/dr_helpers.py, ocs_ci/helpers/dr_helpers_ui.py, tests/conftest.py
@Shrivaibavi (Contributor, Author)

Failed to fetch auth.yaml from ocs-ci-data
========================================================= test session starts =========================================================
platform linux -- Python 3.9.19, pytest-6.2.5, py-1.11.0, pluggy-1.5.0
rootdir: /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci, configfile: pytest.ini
plugins: flaky-3.7.0, repeat-0.9.3, progress-1.2.5, order-1.2.0, metadata-1.11.0, logger-0.5.1, jira-0.3.21, marker-bugzilla-0.9.4, html-3.1.1
collected 1 item                                                                                                                      

tests/functional/disaster-recovery/metro-dr/test_multiple_apps_failover_and_relocate.py::TestMultipleApplicationFailoverAndRelocate::test_application_failover_and_relocate[Subscription] 
----------------------------------------------------------- live log setup ------------------------------------------------------------
13:35:26 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - testrun_name: OCS4-16-Downstream-OCP4-16-VSPHERE-UPI-1AZ-RHCOS-3M-3W-tier1
13:35:26 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - testrun_name: OCS4-16-Downstream-OCP4-16-VSPHERE-UPI-1AZ-RHCOS-3M-3W-tier1
13:35:26 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:35:26 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n open-cluster-management get csv  -n open-cluster-management -o yaml
13:35:29 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n openshift get csv  -n openshift -o yaml
13:35:31 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n open-cluster-management get csv  -n open-cluster-management -o yaml
13:35:34 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:35:34 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:35:34 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n openshift-adp get csv  -n openshift-adp -o yaml
13:35:37 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n openshift get csv  -n openshift -o yaml
13:35:40 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n openshift get csv  -n openshift -o yaml
13:35:43 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n submariner-operator get csv  -n submariner-operator -o yaml
13:35:44 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n submariner-operator get csv  -n submariner-operator -o yaml
13:35:45 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:35:45 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Retrieving the authentication config dictionary
13:35:45 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n openshift-storage get lvmcluster  -n openshift-storage -o yaml
13:35:47 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n openshift-storage get cephcluster  -n openshift-storage -o yaml
13:35:50 - MainThread - tests.conftest - INFO - C[sraghave-a1] - All logs located at /tmp/ocs-ci-logs-1724918714
13:35:50 - MainThread - tests.conftest - INFO - C[sraghave-a1] - Skipping client download
13:35:50 - MainThread - tests.conftest - INFO - C[sraghave-a1] - Skipping version reporting for development mode.
13:35:50 - MainThread - tests.conftest - INFO - C[sraghave-a1] - PagerDuty service is not created because platform from ['openshiftdedicated', 'rosa', 'fusion_aas'] is not used
13:35:50 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - testrun_name: OCS4-16-Downstream-OCP4-16-VSPHERE-UPI-1AZ-RHCOS-3M-3W-tier1
13:35:50 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:35:51 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-a1] - Get URL of OCP console
13:35:51 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc -n openshift-storage get consoles.config.openshift.io cluster -ojsonpath='{.status.consoleURL}'
13:35:54 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-a1] - OCP URL: https://console-openshift-console.apps.sraghave-a1.qe.rh-ocs.com
13:35:54 - MainThread - ocs_ci.ocs.acm.acm - INFO - C[sraghave-a1] - URL: https://console-openshift-console.apps.sraghave-a1.qe.rh-ocs.com/multicloud/infrastructure/clusters/managed
13:35:54 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - Get password of OCP console
13:35:54 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - chrome browser
13:35:54 - MainThread - WDM - INFO - C[sraghave-a1] - ====== WebDriver manager ======
13:35:54 - MainThread - WDM - INFO - C[sraghave-a1] - Get LATEST chromedriver version for google-chrome
13:35:55 - MainThread - WDM - INFO - C[sraghave-a1] - Get LATEST chromedriver version for google-chrome
13:35:55 - MainThread - WDM - INFO - C[sraghave-a1] - Driver [/home/sraghave/.wdm/drivers/chromedriver/linux64/128.0.6613.86/chromedriver-linux64/chromedriver] found in cache
13:36:25 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - UI logs directory function /tmp/ui_logs_dir_1724918714
13:36:27 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - UI logs directory function /tmp/ui_logs_dir_1724918714
13:36:28 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - Copy DOM file: /tmp/ui_logs_dir_1724918714/dom/test_application_failover_and_relocate[Subscription]/2024-08-29T13-36-28.232560_login_DOM.txt
13:36:28 - MainThread - ocs_ci.ocs.ui.base_ui - WARNING - C[sraghave-a1] - Login with my_htpasswd_provider or kube:admin text not found, trying to login
13:37:16 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - Skip tour element not found. Continuing without clicking.
13:37:16 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - You are on * AcmPageNavigator Web Page *
13:37:16 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - UI logs directory class /tmp/ui_logs_dir_1724918714
13:37:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc get clusterversion -n openshift-storage -o yaml
13:37:18 - MainThread - ocs_ci.ocs.acm.acm - INFO - C[sraghave-a1] - page title: Red Hat OpenShift
13:37:18 - MainThread - tests.conftest - ERROR - C[sraghave-a1] - upgrade mark does not exist
13:37:18 - MainThread - tests.conftest - ERROR - C[sraghave-a1] - upgrade mark does not exist
13:37:18 - MainThread - tests.conftest - ERROR - C[sraghave-a1] - upgrade mark does not exist
13:37:18 - MainThread - tests.conftest - ERROR - C[sraghave-a1] - upgrade mark does not exist
13:37:18 - MainThread - tests.conftest - ERROR - C[sraghave-a1] - upgrade mark does not exist
13:37:18 - MainThread - tests.conftest - ERROR - C[sraghave-a1] - upgrade mark does not exist
13:37:18 - MainThread - tests.conftest - ERROR - C[sraghave-a1] - upgrade mark does not exist
13:37:18 - MainThread - tests.conftest - ERROR - C[sraghave-a1] - upgrade mark does not exist
13:37:18 - MainThread - tests.conftest - INFO - C[sraghave-a1] - Skipping health checks for development mode
13:37:18 - MainThread - tests.conftest - INFO - C[sraghave-a1] - Skipping alert check for development mode
13:37:18 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:37:19 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:37:19 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc2] - Switched to cluster: sraghave-oc2
13:37:20 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:37:20 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get node  -o yaml
13:37:23 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:37:23 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:37:26 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc2] - Switched to cluster: sraghave-oc2
13:37:26 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get node  -o yaml
13:37:29 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - C[sraghave-oc2] - duration reported by tests/functional/disaster-recovery/metro-dr/test_multiple_apps_failover_and_relocate.py::TestMultipleApplicationFailoverAndRelocate::test_application_failover_and_relocate[Subscription] immediately after test execution: 122.85
------------------------------------------------------------ live log call ------------------------------------------------------------
13:37:29 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc2] - You are on * AcmAddClusters Web Page *
13:37:29 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc2] - UI logs directory class /tmp/ui_logs_dir_1724918714
13:37:29 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc get clusterversion -n openshift-storage -o yaml
13:37:30 - MainThread - ocs_ci.ocs.dr.dr_workload - INFO - C[sraghave-oc2] - Repo used: https://github.com/red-hat-storage/ocs-workloads.git
13:37:30 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:37:30 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get DRPolicy  -A -o yaml
13:37:32 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:37:32 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get DRCluster  -o yaml
13:37:33 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-a1] - The DRClusters are ['sraghave-oc1', 'sraghave-oc2']
13:37:33 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:37:33 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Repository already cloned at ocs-workloads, skipping clone
13:37:33 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Fetching latest changes from repository
13:37:33 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: git fetch --all
13:37:35 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Checking out repository to specific branch: master
13:37:35 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: git checkout master
13:37:35 - MainThread - ocs_ci.utility.utils - WARNING - C[sraghave-a1] - Command stderr: Already on 'master'

13:37:35 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Reset branch: master with latest changes
13:37:35 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: git reset --hard origin/master
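
The repo refresh above follows a standard reuse-or-clone pattern: keep the existing `ocs-workloads` checkout, fetch, check out the target branch, and hard-reset it to origin. A minimal sketch of that sequence (paths, branch, and URL as they appear in the log; not the ocs-ci helper itself):

```python
import os
import subprocess

def refresh_repo(path="ocs-workloads", branch="master",
                 url="https://github.com/red-hat-storage/ocs-workloads.git"):
    """Clone the workload repo if missing, otherwise fetch and hard-reset to origin/<branch>."""
    if not os.path.isdir(path):
        subprocess.run(["git", "clone", url, path], check=True)
    subprocess.run(["git", "fetch", "--all"], cwd=path, check=True)
    subprocess.run(["git", "checkout", branch], cwd=path, check=True)
    subprocess.run(["git", "reset", "--hard", f"origin/{branch}"], cwd=path, check=True)
```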
13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - apiVersion: v1
kind: Namespace
metadata:
  name: namespace-busybox-workloads-86a6ea3d852c

13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  labels:
    app: busybox-sample
  name: placementrule-busybox-0ebfdb0e185647bca1
spec:
  clusterConditions:
  - status: 'True'
    type: ManagedClusterConditionAvailable
  clusterReplicas: 1
  schedulerName: ramen

13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  labels:
    app: busybox-sample
  name: drpc-busybox-c7bee5c2af654f3685e329c9676
spec:
  drPolicyRef:
    name: odr-policy-mdr
  placementRef:
    kind: PlacementRule
    name: placementrule-busybox-0ebfdb0e185647bca1
  preferredCluster: sraghave-oc1
  pvcSelector:
    matchLabels:
      appname: busybox_app2

13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: channel-ramen-gitops-fe5bc9d9db134ddf833
spec:
  pathname: https://github.com/red-hat-storage/ocs-workloads.git
  type: GitHub

13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - apiVersion: v1
kind: Namespace
metadata:
  name: namespace-ramen-busybox-322f5d8668af444a

13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  annotations:
    apps.open-cluster-management.io/github-branch: master
    apps.open-cluster-management.io/github-path: mdr/subscriptions/busybox-app-2/resources
  labels:
    app: busybox-sample
  name: subscription-busybox-758cb3e5b6864100835
spec:
  channel: namespace-ramen-busybox-322f5d8668af444a/channel-ramen-gitops-fe5bc9d9db134ddf833
  placement:
    placementRef:
      kind: PlacementRule
      name: placementrule-busybox-0ebfdb0e185647bca1

13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: app-busybox-a5aae4dda81e4c3e8e57a77b3671
spec:
  componentKinds:
  - group: apps.open-cluster-management.io
    kind: Subscription
  descriptor: {}
  selector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - busybox-sample

13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - namespace: namespace-busybox-workloads-86a6ea3d852c
resources:
- namespace.yaml
- placementrule.yaml
- subscription.yaml
- drpc.yaml
- app.yaml

13:37:35 - MainThread - ocs_ci.utility.templating - INFO - C[sraghave-a1] - namespace: namespace-ramen-busybox-322f5d8668af444a
resources:
- channel.yaml
- namespace.yaml

13:37:35 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:37:35 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc create -k ocs-workloads/mdr/subscriptions/busybox-app-2/subscriptions
13:37:37 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc create -k ocs-workloads/mdr/subscriptions/busybox-app-2/subscriptions/busybox
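
The templated Namespace, PlacementRule, DRPlacementControl, Channel, Subscription and Application resources above are written into two kustomize directories and applied on the hub with `oc create -k`, exactly as the two commands show. A minimal sketch of that apply step, assuming the current `oc` context is the hub cluster:

```python
import subprocess

def apply_kustomize_dir(path):
    """Apply a kustomize directory with `oc create -k`, mirroring the commands above."""
    subprocess.run(["oc", "create", "-k", path], check=True)

# The two directories applied on the hub cluster in the log:
apply_kustomize_dir("ocs-workloads/mdr/subscriptions/busybox-app-2/subscriptions")
apply_kustomize_dir("ocs-workloads/mdr/subscriptions/busybox-app-2/subscriptions/busybox")
```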
13:37:40 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:37:40 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc1] - Waiting for 2 PVCs to reach Bound state
13:37:40 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc1] - Waiting for a resource(s) of kind PersistentVolumeClaim identified by name '' using selector None at column name STATUS to reach desired condition Bound
13:37:40 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get PersistentVolumeClaim  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:37:42 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get PersistentVolumeClaim busybox-cephfs-pvc-2 -n namespace-busybox-workloads-86a6ea3d852c
13:37:43 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get PersistentVolumeClaim  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:37:45 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get PersistentVolumeClaim busybox-rbd-pvc-2 -n namespace-busybox-workloads-86a6ea3d852c
13:37:46 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc1] - 2 resources already reached condition!
13:37:46 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc1] - Waiting for 2 pods to reach Running state
13:37:46 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc1] - Waiting for a resource(s) of kind Pod identified by name '' using selector None at column name STATUS to reach desired condition Running
13:37:46 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:37:48 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod busybox-cephfs-pod-2-5888795b94-tvcng -n namespace-busybox-workloads-86a6ea3d852c
13:37:50 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:37:52 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod busybox-rbd-pod-2-6955f9c594-bgb9w -n namespace-busybox-workloads-86a6ea3d852c
13:37:53 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc1] - status of  at column STATUS - item(s) were ['ContainerCreating', 'ContainerCreating'], but we were waiting for all 2 of them to be Running
13:37:53 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 5 seconds before next iteration
13:37:58 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:00 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod busybox-cephfs-pod-2-5888795b94-tvcng -n namespace-busybox-workloads-86a6ea3d852c
13:38:01 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod busybox-rbd-pod-2-6955f9c594-bgb9w -n namespace-busybox-workloads-86a6ea3d852c
13:38:03 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc1] - 2 resources already reached condition!
13:38:03 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc1] - Waiting for VRG to be created
13:38:03 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get VolumeReplicationGroup  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:04 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc1] - Waiting for VRG to reach primary state
13:38:04 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get VolumeReplicationGroup  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:06 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc1] - VRG: drpc-busybox-c7bee5c2af654f3685e329c9676 desired state is primary, current state is Primary
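
The "Waiting for ... to reach ..." messages above come from a poll-until-condition loop over `oc get` output: 2 PVCs Bound, 2 pods Running, then the VRG in Primary state on the preferred cluster. A minimal sketch of that polling pattern, assuming `oc` and the managed-cluster kubeconfig are available (function and argument names here are illustrative, not the ocs-ci API):

```python
import json
import subprocess
import time

def wait_for_phase(kind, namespace, kubeconfig, desired, expected_count, timeout=600):
    """Poll `oc get <kind>` until `expected_count` items report the desired status.phase."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(
            ["oc", "--kubeconfig", kubeconfig, "-n", namespace, "get", kind, "-o", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        items = json.loads(out).get("items", [])
        phases = [item.get("status", {}).get("phase") for item in items]
        if phases.count(desired) >= expected_count:
            return True
        time.sleep(5)  # same 5-second back-off seen in the log
    raise TimeoutError(f"{kind} did not reach {desired} within {timeout}s")

# e.g. the checks performed after deploying the busybox app on the primary cluster:
# wait_for_phase("PersistentVolumeClaim", app_ns, c1_kubeconfig, "Bound", 2)
# wait_for_phase("Pod", app_ns, c1_kubeconfig, "Running", 2)
# The VRG check is analogous but reads status.state ("Primary") instead of status.phase.
```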
13:38:06 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:06 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:06 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:08 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl drpc-busybox-c7bee5c2af654f3685e329c9676 -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:09 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:38:09 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:38:09 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:09 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:09 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:11 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl drpc-busybox-c7bee5c2af654f3685e329c9676 -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:12 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:38:12 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:12 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:12 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:14 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl drpc-busybox-c7bee5c2af654f3685e329c9676 -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:16 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:38:16 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:16 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:17 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl drpc-busybox-c7bee5c2af654f3685e329c9676 -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:19 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPolicy odr-policy-mdr -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:38:20 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:38:20 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc1] - Edit the DRCluster resource for sraghave-oc1 cluster on the Hub cluster
13:38:20 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:20 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-a1] - Command: patch DRCluster sraghave-oc1 -n None -p '{"spec":{"clusterFence":"Fenced"}}' --type merge
13:38:20 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc patch DRCluster sraghave-oc1 -n None -p '{"spec":{"clusterFence":"Fenced"}}' --type merge
13:38:22 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-a1] - Successfully fenced DRCluster: sraghave-oc1
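
Fencing the primary managed cluster before failover is done from the hub by patching `spec.clusterFence` on its DRCluster resource, exactly as the `oc patch` command above shows (the matching "Unfenced" patch appears later in the teardown). A minimal sketch, assuming the current `oc` context is the hub cluster:

```python
import subprocess

def set_cluster_fence(drcluster_name, state):
    """Patch spec.clusterFence on a DRCluster ('Fenced' or 'Unfenced'), as in the log."""
    patch = '{"spec":{"clusterFence":"%s"}}' % state
    subprocess.run(
        ["oc", "patch", "DRCluster", drcluster_name, "-p", patch, "--type", "merge"],
        check=True,
    )

set_cluster_fence("sraghave-oc1", "Fenced")      # before triggering failover
# set_cluster_fence("sraghave-oc1", "Unfenced")  # in teardown, once nodes are verified Ready
```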
13:38:22 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:38:22 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:38:22 - MainThread - test_multiple_apps_failover_and_relocate - INFO - C[sraghave-a1] - Start the process of Failover of subscription based apps from ACM UI
13:38:22 - MainThread - test_multiple_apps_failover_and_relocate - INFO - C[sraghave-a1] - Failing over app app-busybox-a5aae4dda81e4c3e8e57a77b3671 
13:38:22 - MainThread - ocs_ci.ocs.ui.acm_ui - INFO - C[sraghave-a1] - Navigate to Disaster recovery page on ACM console
13:38:53 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Verify status of DRPolicy on ACM UI
13:38:54 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - DRPolicy status on ACM UI is Validated
13:38:54 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Verify Replication policy on ACM UI
13:38:54 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - DRPolicy and replication policy successfully validated on ACM UI
13:38:54 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Navigate back to Disaster recovery Overview page
13:38:56 - MainThread - ocs_ci.ocs.ui.acm_ui - INFO - C[sraghave-a1] - Navigate into Applications Page
13:39:10 - MainThread - ocs_ci.ocs.ui.base_ui - WARNING - C[sraghave-a1] - Locator xpath (//button[@type='button'][normalize-space()='Clear all filters'])[2] did not find text Clear all filters
13:39:10 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Apply filter for workload type Subscription
13:39:19 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Click on search bar
13:39:19 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Clear existing text from search bar if any
13:39:19 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Enter the workload to be searched
13:39:20 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Click on kebab menu option
13:39:22 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Selecting action as Failover from ACM UI
13:39:24 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Click on policy dropdown
13:39:26 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Select policy from policy dropdown
13:39:28 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Click on target cluster dropdown
13:39:31 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Select target cluster on ACM UI
13:39:33 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Check operation readiness
13:39:33 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Click on subscription dropdown
13:39:35 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Click on Initiate button to failover/relocate
13:39:37 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Failover trigerred from ACM UI
13:39:42 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - page not loaded yet: https://console-openshift-console.apps.sraghave-a1.qe.rh-ocs.com/multicloud/applications?type=subscription
13:39:44 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - page not loaded yet: https://console-openshift-console.apps.sraghave-a1.qe.rh-ocs.com/multicloud/applications?type=subscription
13:39:47 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - page not loaded yet: https://console-openshift-console.apps.sraghave-a1.qe.rh-ocs.com/multicloud/applications?type=subscription
13:39:50 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-a1] - page not loaded yet: https://console-openshift-console.apps.sraghave-a1.qe.rh-ocs.com/multicloud/applications?type=subscription
13:39:53 - MainThread - ocs_ci.ocs.ui.base_ui - ERROR - C[sraghave-a1] - Current URL did not finish loading in 10
13:39:55 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Close the action modal
13:39:57 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Action modal successfully closed for Subscription type workload
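
This test drives the failover through the ACM UI (select policy, target cluster, subscriptions, then Initiate). For context only, the same action can be expressed against the DRPlacementControl resource on the hub; the sketch below is an illustrative CLI equivalent based on the RamenDR DRPC `spec.action`/`spec.failoverCluster` fields, not something this test executes:

```python
import subprocess

def failover_drpc(drpc_name, namespace, target_cluster):
    """Illustrative CLI-equivalent of the UI failover: patch the DRPlacementControl on the hub."""
    patch = '{"spec":{"action":"Failover","failoverCluster":"%s"}}' % target_cluster
    subprocess.run(
        ["oc", "patch", "drpc", drpc_name, "-n", namespace, "-p", patch, "--type", "merge"],
        check=True,
    )

# failover_drpc("drpc-busybox-c7bee5c2af654f3685e329c9676",
#               "namespace-busybox-workloads-86a6ea3d852c", "sraghave-oc2")
```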
13:39:57 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:39:57 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:39:57 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:39:58 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get DRPlacementControl drpc-busybox-c7bee5c2af654f3685e329c9676 -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:00 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:40:00 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc2] - Switched to cluster: sraghave-oc2
13:40:00 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc2] - Waiting for 2 PVCs to reach Bound state
13:40:00 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc2] - Waiting for a resource(s) of kind PersistentVolumeClaim identified by name '' using selector None at column name STATUS to reach desired condition Bound
13:40:00 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get PersistentVolumeClaim  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:02 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get PersistentVolumeClaim busybox-cephfs-pvc-2 -n namespace-busybox-workloads-86a6ea3d852c
13:40:03 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get PersistentVolumeClaim  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:05 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get PersistentVolumeClaim busybox-rbd-pvc-2 -n namespace-busybox-workloads-86a6ea3d852c
13:40:06 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc2] - 2 resources already reached condition!
13:40:06 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc2] - Waiting for 2 pods to reach Running state
13:40:06 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc2] - Waiting for a resource(s) of kind Pod identified by name '' using selector None at column name STATUS to reach desired condition Running
13:40:06 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:07 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc2] - status of  at column STATUS - item(s) were [], but we were waiting for all 2 of them to be Running
13:40:07 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Going to sleep for 5 seconds before next iteration
13:40:12 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:15 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod busybox-cephfs-pod-2-5888795b94-d4t7p -n namespace-busybox-workloads-86a6ea3d852c
13:40:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:18 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod busybox-rbd-pod-2-6955f9c594-zr9tc -n namespace-busybox-workloads-86a6ea3d852c
13:40:19 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc2] - status of  at column STATUS - item(s) were ['Running', 'ContainerCreating'], but we were waiting for all 2 of them to be Running
13:40:19 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Going to sleep for 5 seconds before next iteration
13:40:24 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:26 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod busybox-cephfs-pod-2-5888795b94-d4t7p -n namespace-busybox-workloads-86a6ea3d852c
13:40:27 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get Pod busybox-rbd-pod-2-6955f9c594-zr9tc -n namespace-busybox-workloads-86a6ea3d852c
13:40:29 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-oc2] - 2 resources already reached condition!
13:40:29 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc2] - Waiting for VRG to be created
13:40:29 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get VolumeReplicationGroup  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:30 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc2] - Waiting for VRG to reach primary state
13:40:30 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig -n namespace-busybox-workloads-86a6ea3d852c get VolumeReplicationGroup  -n namespace-busybox-workloads-86a6ea3d852c -o yaml
13:40:32 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc2] - VRG: drpc-busybox-c7bee5c2af654f3685e329c9676 desired state is primary, current state is Primary
13:40:32 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:40:32 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Click on drpolicy hyperlink under Data policy column on Applications page
13:40:34 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Click on View more details
13:40:40 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Verifying failover/relocate status on ACM UI
13:40:41 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Failover successfully verified on ACM UI, status is FailedOver
13:40:41 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Close button found
13:40:41 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-a1] - Data policy modal page closed
13:40:41 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - C[sraghave-a1] - duration reported by tests/functional/disaster-recovery/metro-dr/test_multiple_apps_failover_and_relocate.py::TestMultipleApplicationFailoverAndRelocate::test_application_failover_and_relocate[Subscription] immediately after test execution: 192.56
PASSED
---------------------------------------------------------- live log teardown ----------------------------------------------------------
13:40:41 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:40:41 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get node  -o yaml
13:41:30 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get node  -o yaml
13:41:33 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-a1] - Waiting for nodes ['compute-0', 'compute-1', 'compute-2', 'control-plane-0', 'control-plane-1', 'control-plane-2'] to reach status Ready
13:41:33 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get node  -o yaml
13:41:35 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node compute-0
13:41:36 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node  -o yaml
13:41:39 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-a1] - Node compute-0 reached status Ready
13:41:39 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node compute-1
13:41:41 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node  -o yaml
13:41:43 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-a1] - Node compute-1 reached status Ready
13:41:43 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node compute-2
13:41:45 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node  -o yaml
13:41:47 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-a1] - Node compute-2 reached status Ready
13:41:47 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node control-plane-0
13:41:49 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node  -o yaml
13:41:56 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-a1] - Node control-plane-0 reached status Ready
13:41:56 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node control-plane-1
13:41:58 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node  -o yaml
13:42:01 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-a1] - Node control-plane-1 reached status Ready
13:42:01 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node control-plane-2
13:42:03 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get Node  -o yaml
13:42:06 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-a1] - Node control-plane-2 reached status Ready
13:42:06 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-a1] - The following nodes reached status Ready: ['compute-0', 'compute-1', 'compute-2', 'control-plane-0', 'control-plane-1', 'control-plane-2']
13:42:06 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:42:06 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:42:49 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:42:52 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-0', 'compute-1', 'compute-2', 'control-plane-0', 'control-plane-1', 'control-plane-2'] to reach status Ready
13:42:52 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:42:55 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-0
13:42:56 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:43:03 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-0 reached status Ready
13:43:03 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-1
13:43:05 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:43:07 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-1 reached status Ready
13:43:07 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-2
13:43:09 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:43:13 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-2 reached status Ready
13:43:13 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-0
13:43:15 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:43:20 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-0 reached status Ready
13:43:20 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-1
13:43:22 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:43:26 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-1 reached status Ready
13:43:26 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-2
13:43:29 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:43:32 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-2 reached status Ready
13:43:32 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['compute-0', 'compute-1', 'compute-2', 'control-plane-0', 'control-plane-1', 'control-plane-2']
13:43:32 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc2] - Switched to cluster: sraghave-oc2
13:43:32 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get node  -o yaml
13:44:15 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get node  -o yaml
13:44:18 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc2] - Waiting for nodes ['compute-0', 'compute-1', 'compute-2', 'control-plane-0', 'control-plane-1', 'control-plane-2'] to reach status Ready
13:44:18 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get node  -o yaml
13:44:21 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node compute-0
13:44:22 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node  -o yaml
13:44:25 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc2] - Node compute-0 reached status Ready
13:44:25 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node compute-1
13:44:26 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node  -o yaml
13:44:34 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc2] - Node compute-1 reached status Ready
13:44:34 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node compute-2
13:44:36 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node  -o yaml
13:44:45 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc2] - Node compute-2 reached status Ready
13:44:45 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node control-plane-0
13:44:46 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node  -o yaml
13:44:49 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc2] - Node control-plane-0 reached status Ready
13:44:49 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node control-plane-1
13:44:50 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node  -o yaml
13:44:54 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc2] - Node control-plane-1 reached status Ready
13:44:54 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node control-plane-2
13:44:55 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc2] - Executing command: oc --kubeconfig /home/sraghave/clusters/c2/auth/kubeconfig get Node  -o yaml
13:44:58 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc2] - Node control-plane-2 reached status Ready
13:44:58 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc2] - The following nodes reached status Ready: ['compute-0', 'compute-1', 'compute-2', 'control-plane-0', 'control-plane-1', 'control-plane-2']
13:44:58 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:44:58 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c3/auth/kubeconfig get DRCluster sraghave-oc1 -o yaml
13:44:59 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc2] - Switched to cluster: sraghave-oc2
13:44:59 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-oc2] - Edit the DRCluster resource for sraghave-oc1 cluster on the Hub cluster
13:44:59 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
13:44:59 - MainThread - ocs_ci.ocs.ocp - INFO - C[sraghave-a1] - Command: patch DRCluster sraghave-oc1 -n None -p '{"spec":{"clusterFence":"Unfenced"}}' --type merge
13:44:59 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-a1] - Executing command: oc patch DRCluster sraghave-oc1 -n None -p '{"spec":{"clusterFence":"Unfenced"}}' --type merge
13:45:01 - MainThread - ocs_ci.helpers.dr_helpers - INFO - C[sraghave-a1] - Successfully unfenced DRCluster: sraghave-oc1
13:45:01 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc2] - Switched to cluster: sraghave-oc2
13:45:01 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
13:45:01 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:45:04 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Unscheduling nodes compute-0
13:45:04 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm cordon compute-0
13:45:07 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-0'] to reach status Ready,SchedulingDisabled
13:45:07 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:45:12 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-0
13:45:14 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:45:17 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-0 reached status Ready,SchedulingDisabled
13:45:17 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready,SchedulingDisabled: ['compute-0']
13:45:17 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Draining nodes compute-0
13:45:17 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm drain compute-0 --force=true --ignore-daemonsets --delete-emptydir-data --disable-eviction
13:46:45 - MainThread - ocs_ci.utility.utils - WARNING - C[sraghave-oc1] - Command stderr: Warning: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/vmware-vsphere-csi-driver-node-qwtqh, openshift-cluster-node-tuning-operator/tuned-whrz2, openshift-dns/dns-default-nvdxm, openshift-dns/node-resolver-2kr4x, openshift-image-registry/node-ca-gpt6r, openshift-ingress-canary/ingress-canary-xfzdl, openshift-machine-config-operator/machine-config-daemon-gs6jt, openshift-monitoring/node-exporter-pzmh8, openshift-multus/multus-69bm6, openshift-multus/multus-additional-cni-plugins-9dhg2, openshift-multus/network-metrics-daemon-q48dx, openshift-network-diagnostics/network-check-target-sbrdk, openshift-network-operator/iptables-alerter-6gtwp, openshift-ovn-kubernetes/ovnkube-node-4dw4d, openshift-storage/csi-cephfsplugin-jlm2l, openshift-storage/csi-rbdplugin-qb77z

13:47:07 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc get events -A --field-selector involvedObject.name=compute-0,reason=Rebooted -o yaml
13:47:09 - MainThread - ocs_ci.utility.vsphere - INFO - C[sraghave-oc1] - Rebooting VMs: ['compute-0']
13:47:13 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for 30 seconds
13:47:43 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm uncordon compute-0
13:47:46 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Scheduling nodes compute-0
13:47:46 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-0'] to reach status Ready
13:47:46 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:47:49 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-0
13:47:51 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:47:54 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 3 seconds before next iteration
13:47:57 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:48:00 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-0
13:48:01 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:48:04 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 3 seconds before next iteration
13:48:07 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:48:12 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-0
13:48:13 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:48:16 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-0 reached status Ready
13:48:16 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['compute-0']
13:48:16 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-0'] to reach status Ready
13:48:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:48:23 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-0
13:48:24 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:48:28 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-0 reached status Ready
13:48:28 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['compute-0']
13:48:28 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Unscheduling nodes compute-1
13:48:28 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm cordon compute-1
13:48:30 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-1'] to reach status Ready,SchedulingDisabled
13:48:30 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:48:32 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-1
13:48:36 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:48:42 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-1 reached status Ready,SchedulingDisabled
13:48:42 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready,SchedulingDisabled: ['compute-1']
13:48:42 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Draining nodes compute-1
13:48:42 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm drain compute-1 --force=true --ignore-daemonsets --delete-emptydir-data --disable-eviction
13:50:16 - MainThread - ocs_ci.utility.utils - WARNING - C[sraghave-oc1] - Command stderr: Warning: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/vmware-vsphere-csi-driver-node-rmpcf, openshift-cluster-node-tuning-operator/tuned-zwbnp, openshift-dns/dns-default-khlrc, openshift-dns/node-resolver-lxhhc, openshift-image-registry/node-ca-nz4fm, openshift-ingress-canary/ingress-canary-6nfrd, openshift-machine-config-operator/machine-config-daemon-6jzzf, openshift-monitoring/node-exporter-x9gsn, openshift-multus/multus-additional-cni-plugins-lqn6h, openshift-multus/multus-s4tvh, openshift-multus/network-metrics-daemon-wl8lb, openshift-network-diagnostics/network-check-target-m7xd8, openshift-network-operator/iptables-alerter-ntb96, openshift-ovn-kubernetes/ovnkube-node-7gwk4, openshift-storage/csi-cephfsplugin-sp88t, openshift-storage/csi-rbdplugin-v2w2j

13:50:37 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc get events -A --field-selector involvedObject.name=compute-1,reason=Rebooted -o yaml
13:50:45 - MainThread - ocs_ci.utility.vsphere - INFO - C[sraghave-oc1] - Rebooting VMs: ['compute-1']
13:50:47 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for 30 seconds
13:51:17 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm uncordon compute-1
13:51:23 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Scheduling nodes compute-1
13:51:23 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-1'] to reach status Ready
13:51:23 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:51:27 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-1
13:51:28 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:51:31 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 3 seconds before next iteration
13:51:34 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:51:37 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-1
13:51:39 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:51:41 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 3 seconds before next iteration
13:51:44 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:51:47 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-1
13:51:49 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:51:52 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 3 seconds before next iteration
13:51:55 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:51:58 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-1
13:51:59 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:52:02 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-1 reached status Ready
13:52:02 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['compute-1']
13:52:02 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-1'] to reach status Ready
13:52:02 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:52:05 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-1
13:52:06 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:52:09 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-1 reached status Ready
13:52:09 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['compute-1']
13:52:09 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Unscheduling nodes compute-2
13:52:09 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm cordon compute-2
13:52:11 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-2'] to reach status Ready,SchedulingDisabled
13:52:11 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:52:13 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-2
13:52:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:52:19 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-2 reached status Ready,SchedulingDisabled
13:52:19 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready,SchedulingDisabled: ['compute-2']
13:52:19 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Draining nodes compute-2
13:52:19 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm drain compute-2 --force=true --ignore-daemonsets --delete-emptydir-data --disable-eviction
13:53:52 - MainThread - ocs_ci.utility.utils - WARNING - C[sraghave-oc1] - Command stderr: Warning: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/vmware-vsphere-csi-driver-node-kq4h2, openshift-cluster-node-tuning-operator/tuned-8w6dv, openshift-dns/dns-default-d8th8, openshift-dns/node-resolver-l9689, openshift-image-registry/node-ca-6w5g5, openshift-ingress-canary/ingress-canary-274jj, openshift-machine-config-operator/machine-config-daemon-6mkh5, openshift-monitoring/node-exporter-8fzmx, openshift-multus/multus-additional-cni-plugins-7wssl, openshift-multus/multus-p6l22, openshift-multus/network-metrics-daemon-k8bs4, openshift-network-diagnostics/network-check-target-9cl75, openshift-network-operator/iptables-alerter-dk95f, openshift-ovn-kubernetes/ovnkube-node-tsrk9, openshift-storage/csi-cephfsplugin-w4nzw, openshift-storage/csi-rbdplugin-x6wp8

13:54:12 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc get events -A --field-selector involvedObject.name=compute-2,reason=Rebooted -o yaml
13:54:13 - MainThread - ocs_ci.utility.vsphere - INFO - C[sraghave-oc1] - Rebooting VMs: ['compute-2']
13:54:17 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for 30 seconds
13:54:47 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm uncordon compute-2
13:54:49 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Scheduling nodes compute-2
13:54:49 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-2'] to reach status Ready
13:54:49 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:54:54 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-2
13:54:56 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:54:59 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 3 seconds before next iteration
13:55:02 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:55:05 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-2
13:55:06 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:55:09 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 3 seconds before next iteration
13:55:12 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:55:15 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-2
13:55:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:55:19 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Going to sleep for 3 seconds before next iteration
13:55:22 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:55:28 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-2
13:55:30 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:55:33 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-2 reached status Ready
13:55:33 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['compute-2']
13:55:33 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['compute-2'] to reach status Ready
13:55:33 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:55:36 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node compute-2
13:55:37 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:55:40 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node compute-2 reached status Ready
13:55:40 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['compute-2']
13:55:40 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Unscheduling nodes control-plane-0
13:55:40 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm cordon control-plane-0
13:55:42 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-0'] to reach status Ready,SchedulingDisabled
13:55:42 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:55:45 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-0
13:55:46 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:55:52 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-0 reached status Ready,SchedulingDisabled
13:55:52 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready,SchedulingDisabled: ['control-plane-0']
13:55:52 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Draining nodes control-plane-0
13:55:52 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm drain control-plane-0 --force=true --ignore-daemonsets --delete-emptydir-data --disable-eviction
13:57:16 - MainThread - ocs_ci.utility.utils - WARNING - C[sraghave-oc1] - Command stderr: Warning: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/vmware-vsphere-csi-driver-node-pgqqz, openshift-cluster-node-tuning-operator/tuned-x6t8n, openshift-dns/dns-default-zqfk7, openshift-dns/node-resolver-k85c6, openshift-image-registry/node-ca-c9xbh, openshift-machine-config-operator/machine-config-daemon-c8mt6, openshift-machine-config-operator/machine-config-server-k2tzx, openshift-monitoring/node-exporter-kj55r, openshift-multus/multus-8bvbt, openshift-multus/multus-additional-cni-plugins-vc7gf, openshift-multus/network-metrics-daemon-4q6mg, openshift-network-diagnostics/network-check-target-jdkc9, openshift-network-node-identity/network-node-identity-w55wz, openshift-network-operator/iptables-alerter-9sbgf, openshift-ovn-kubernetes/ovnkube-node-kv4n6; deleting Pods that declare no controller: openshift-etcd/etcd-guard-control-plane-0, openshift-kube-apiserver/kube-apiserver-guard-control-plane-0, openshift-kube-controller-manager/kube-controller-manager-guard-control-plane-0, openshift-kube-scheduler/openshift-kube-scheduler-guard-control-plane-0

13:57:36 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc get events -A --field-selector involvedObject.name=control-plane-0,reason=Rebooted -o yaml
13:57:38 - MainThread - ocs_ci.utility.vsphere - INFO - C[sraghave-oc1] - Rebooting VMs: ['control-plane-0']
13:57:41 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for 30 seconds
13:58:11 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm uncordon control-plane-0
13:58:13 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Scheduling nodes control-plane-0
13:58:13 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-0'] to reach status Ready
13:58:13 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:58:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-0
13:58:17 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:58:20 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-0 reached status Ready
13:58:20 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['control-plane-0']
13:58:20 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-0'] to reach status Ready
13:58:20 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:58:27 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-0
13:58:29 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:58:31 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-0 reached status Ready
13:58:31 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['control-plane-0']
13:58:31 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Unscheduling nodes control-plane-1
13:58:31 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm cordon control-plane-1
13:58:35 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-1'] to reach status Ready,SchedulingDisabled
13:58:35 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
13:58:37 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-1
13:58:39 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
13:58:47 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-1 reached status Ready,SchedulingDisabled
13:58:47 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready,SchedulingDisabled: ['control-plane-1']
13:58:47 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Draining nodes control-plane-1
13:58:47 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm drain control-plane-1 --force=true --ignore-daemonsets --delete-emptydir-data --disable-eviction
14:00:14 - MainThread - ocs_ci.utility.utils - WARNING - C[sraghave-oc1] - Command stderr: Warning: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/vmware-vsphere-csi-driver-node-vzf52, openshift-cluster-node-tuning-operator/tuned-r6z5w, openshift-dns/dns-default-tr9pb, openshift-dns/node-resolver-26wxm, openshift-image-registry/node-ca-47vlm, openshift-machine-config-operator/machine-config-daemon-sffnw, openshift-machine-config-operator/machine-config-server-dmvkr, openshift-monitoring/node-exporter-kq72j, openshift-multus/multus-additional-cni-plugins-zq4wm, openshift-multus/multus-qctlv, openshift-multus/network-metrics-daemon-2j9mj, openshift-network-diagnostics/network-check-target-vsb5v, openshift-network-node-identity/network-node-identity-bjvdf, openshift-network-operator/iptables-alerter-4dclj, openshift-ovn-kubernetes/ovnkube-node-hvxh7; deleting Pods that declare no controller: openshift-etcd/etcd-guard-control-plane-1, openshift-kube-apiserver/kube-apiserver-guard-control-plane-1, openshift-kube-controller-manager/kube-controller-manager-guard-control-plane-1, openshift-kube-scheduler/openshift-kube-scheduler-guard-control-plane-1

14:00:38 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc get events -A --field-selector involvedObject.name=control-plane-1,reason=Rebooted -o yaml
14:00:40 - MainThread - ocs_ci.utility.vsphere - INFO - C[sraghave-oc1] - Rebooting VMs: ['control-plane-1']
14:00:44 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for 30 seconds
14:01:14 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm uncordon control-plane-1
14:01:16 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Scheduling nodes control-plane-1
14:01:16 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-1'] to reach status Ready
14:01:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
14:01:22 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-1
14:01:25 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
14:01:28 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-1 reached status Ready
14:01:28 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['control-plane-1']
14:01:28 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-1'] to reach status Ready
14:01:28 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
14:01:31 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-1
14:01:32 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
14:01:35 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-1 reached status Ready
14:01:35 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['control-plane-1']
14:01:35 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Unscheduling nodes control-plane-2
14:01:35 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm cordon control-plane-2
14:01:38 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-2'] to reach status Ready,SchedulingDisabled
14:01:38 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
14:01:41 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-2
14:01:42 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
14:01:45 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-2 reached status Ready,SchedulingDisabled
14:01:45 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready,SchedulingDisabled: ['control-plane-2']
14:01:45 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Draining nodes control-plane-2
14:01:45 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm drain control-plane-2 --force=true --ignore-daemonsets --delete-emptydir-data --disable-eviction
14:03:14 - MainThread - ocs_ci.utility.utils - WARNING - C[sraghave-oc1] - Command stderr: Warning: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/vmware-vsphere-csi-driver-node-kbncl, openshift-cluster-node-tuning-operator/tuned-ltftg, openshift-dns/dns-default-ngtb9, openshift-dns/node-resolver-s4gl7, openshift-image-registry/node-ca-dhc72, openshift-machine-config-operator/machine-config-daemon-bwgs7, openshift-machine-config-operator/machine-config-server-4pr64, openshift-monitoring/node-exporter-27g5v, openshift-multus/multus-additional-cni-plugins-wfkbk, openshift-multus/multus-lhgrs, openshift-multus/network-metrics-daemon-fhnkr, openshift-network-diagnostics/network-check-target-lf92w, openshift-network-node-identity/network-node-identity-5mq7k, openshift-network-operator/iptables-alerter-nhpp2, openshift-ovn-kubernetes/ovnkube-node-rkr2l; deleting Pods that declare no controller: openshift-etcd/etcd-guard-control-plane-2, openshift-kube-apiserver/kube-apiserver-guard-control-plane-2, openshift-kube-controller-manager/kube-controller-manager-guard-control-plane-2, openshift-kube-scheduler/openshift-kube-scheduler-guard-control-plane-2

14:03:33 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc get events -A --field-selector involvedObject.name=control-plane-2,reason=Rebooted -o yaml
14:03:36 - MainThread - ocs_ci.utility.vsphere - INFO - C[sraghave-oc1] - Rebooting VMs: ['control-plane-2']
14:03:40 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for 30 seconds
14:04:10 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc adm uncordon control-plane-2
14:04:12 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Scheduling nodes control-plane-2
14:04:12 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-2'] to reach status Ready
14:04:12 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
14:04:15 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-2
14:04:16 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
14:04:20 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-2 reached status Ready
14:04:20 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['control-plane-2']
14:04:20 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Waiting for nodes ['control-plane-2'] to reach status Ready
14:04:20 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get node  -o yaml
14:04:23 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node control-plane-2
14:04:24 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc --kubeconfig /home/sraghave/clusters/c1/auth/kubeconfig get Node  -o yaml
14:04:27 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - Node control-plane-2 reached status Ready
14:04:27 - MainThread - ocs_ci.ocs.node - INFO - C[sraghave-oc1] - The following nodes reached status Ready: ['control-plane-2']
14:04:27 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc1] - You are on * AcmAddClusters Web Page *
14:04:27 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc1] - UI logs directory class /tmp/ui_logs_dir_1724918714
14:04:27 - MainThread - ocs_ci.utility.utils - INFO - C[sraghave-oc1] - Executing command: oc get clusterversion -n openshift-storage -o yaml
14:04:29 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - workload_to_delete app-busybox-a5aae4dda81e4c3e8e57a77b3671
14:04:29 - MainThread - ocs_ci.ocs.ui.acm_ui - INFO - C[sraghave-oc1] - Navigate into Applications Page
14:04:30 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Click on search bar
14:04:30 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Clear existing text from search bar if any
14:04:30 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Enter the workload to be searched
14:05:39 - MainThread - ocs_ci.ocs.ui.base_ui - WARNING - C[sraghave-oc1] - Locator xpath //*[text()='No results found'] did not find text No results found
14:05:39 - MainThread - ocs_ci.helpers.dr_helpers_ui - ERROR - C[sraghave-oc1] - One or more matches found, Incorrect workload name
14:05:39 - MainThread - ocs_ci.ocs.ui.acm_ui - INFO - C[sraghave-oc1] - Navigate into Applications Page
14:05:40 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Click on search bar
14:05:40 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Clear existing text from search bar if any
14:05:40 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Enter the workload to be searched
14:05:41 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Click on kebab menu option
14:05:48 - MainThread - ocs_ci.ocs.ui.acm_ui - INFO - C[sraghave-oc1] - Navigate into Applications Page
14:05:48 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Click on search bar
14:05:48 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Clear existing text from search bar if any
14:05:48 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Enter the workload to be searched
14:05:49 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Successfully verified on ACM UI, Application Not Found, fetch status is No results found
14:05:49 - MainThread - ocs_ci.helpers.dr_helpers_ui - INFO - C[sraghave-oc1] - Application app-busybox-a5aae4dda81e4c3e8e57a77b3671 got deleted successfully
14:05:49 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc1] - Close browser
14:05:49 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc1] - UI logs directory function /tmp/ui_logs_dir_1724918714
14:05:50 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc1] - UI logs directory function /tmp/ui_logs_dir_1724918714
14:05:51 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc1] - Copy DOM file: /tmp/ui_logs_dir_1724918714/dom/test_application_failover_and_relocate[Subscription]/2024-08-29T14-05-51.919763_close_browser_DOM.txt
14:06:03 - MainThread - urllib3.connectionpool - WARNING - C[sraghave-oc1] - Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7dea7a2ee0>: Failed to establish a new connection: [Errno 111] Connection refused')': /session/c02edecd2685465dbbad54552ccbd2ad
14:06:03 - MainThread - urllib3.connectionpool - WARNING - C[sraghave-oc1] - Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7deacc6b50>: Failed to establish a new connection: [Errno 111] Connection refused')': /session/c02edecd2685465dbbad54552ccbd2ad
14:06:03 - MainThread - urllib3.connectionpool - WARNING - C[sraghave-oc1] - Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7deb3b1df0>: Failed to establish a new connection: [Errno 111] Connection refused')': /session/c02edecd2685465dbbad54552ccbd2ad
14:06:03 - MainThread - ocs_ci.ocs.ui.base_ui - INFO - C[sraghave-oc1] - SeleniumDriver instance attr not found
14:06:03 - MainThread - tests.conftest - INFO - C[sraghave-oc1] - Switching back to the initial cluster context
14:06:03 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
14:06:03 - MainThread - ocs_ci.framework.pytest_customization.ocscilib - INFO - C[sraghave-a1] - Peak rss consumers during tests/functional/disaster-recovery/metro-dr/test_multiple_apps_failover_and_relocate.py::TestMultipleApplicationFailoverAndRelocate::test_application_failover_and_relocate[Subscription]:
+--------------+---------------------+---------------------+------------+
| name         | proc_start          | proc_end            | rss_peak   |
+==============+=====================+=====================+============+
| chrome       | 2024-08-29 13:35:56 | 2024-08-29 14:05:50 | 591.7M     |
+--------------+---------------------+---------------------+------------+
| run-ci       | 2024-08-29 13:35:29 | 2024-08-29 14:06:02 | 292.2M     |
+--------------+---------------------+---------------------+------------+
| oc           | 2024-08-29 13:35:29 | 2024-08-29 14:04:27 | 116.7M     |
+--------------+---------------------+---------------------+------------+
| chromedriver | 2024-08-29 13:35:56 | 2024-08-29 14:05:50 | 21.9M      |
+--------------+---------------------+---------------------+------------+
| jq           | 2024-08-29 13:35:50 | 2024-08-29 13:35:50 | 3.3M       |
+--------------+---------------------+---------------------+------------+
14:06:03 - MainThread - ocs_ci.framework.pytest_customization.ocscilib - INFO - C[sraghave-a1] - Peak vms consumers during tests/functional/disaster-recovery/metro-dr/test_multiple_apps_failover_and_relocate.py::TestMultipleApplicationFailoverAndRelocate::test_application_failover_and_relocate[Subscription]:
+--------------+---------------------+---------------------+------------+
| name         | proc_start          | proc_end            | vms_peak   |
+==============+=====================+=====================+============+
| chrome       | 2024-08-29 13:35:56 | 2024-08-29 14:05:50 | 1.1T       |
+--------------+---------------------+---------------------+------------+
| chromedriver | 2024-08-29 13:35:56 | 2024-08-29 14:05:50 | 32.3G      |
+--------------+---------------------+---------------------+------------+
| oc           | 2024-08-29 13:35:29 | 2024-08-29 14:04:27 | 4.3G       |
+--------------+---------------------+---------------------+------------+
| run-ci       | 2024-08-29 13:35:29 | 2024-08-29 14:06:02 | 1.2G       |
+--------------+---------------------+---------------------+------------+
| jq           | 2024-08-29 13:35:50 | 2024-08-29 13:35:50 | 224.0M     |
+--------------+---------------------+---------------------+------------+
14:06:03 - MainThread - ocs_ci.framework.pytest_customization.ocscilib - INFO - C[sraghave-a1] - Free RAM gain at the end of the test: 46.8M
14:06:03 - MainThread - ocs_ci.utility.memory - INFO - C[sraghave-a1] - Peak total ram memory consumption: 1.7G at 2024-08-29 13:57:01
14:06:03 - MainThread - ocs_ci.utility.memory - INFO - C[sraghave-a1] - Peak total virtual memory consumption: 2.5T at 2024-08-29 13:36:59

____________________________ 1 of 1 completed, 1 Pass, 0 Fail, 0 Skip, 0 XPass, 0 XFail, 0 Error, 0 ReRun _____________________________
14:06:03 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - C[sraghave-a1] - duration reported by tests/functional/disaster-recovery/metro-dr/test_multiple_apps_failover_and_relocate.py::TestMultipleApplicationFailoverAndRelocate::test_application_failover_and_relocate[Subscription] immediately after test execution: 1521.58
------------------------------------------------------- live log sessionfinish --------------------------------------------------------
14:06:03 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - C[sraghave-a1] - Test Time report saved to '/tmp/ocs-ci-logs-1724918714/session_test_time_report_file.csv'
14:06:03 - MainThread - ocs_ci.framework - INFO - C[sraghave-a1] - Switched to cluster: sraghave-a1
14:06:03 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - C[sraghave-a1] - Dump of the consolidated config is located here: /tmp/run-1724918714-cl0-config-end.yaml
14:06:03 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc1] - Switched to cluster: sraghave-oc1
14:06:03 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - C[sraghave-oc1] - Dump of the consolidated config is located here: /tmp/run-1724918714-cl1-config-end.yaml
14:06:03 - MainThread - ocs_ci.framework - INFO - C[sraghave-oc2] - Switched to cluster: sraghave-oc2
14:06:03 - MainThread - ocs_ci.framework.pytest_customization.reports - INFO - C[sraghave-oc2] - Dump of the consolidated config is located here: /tmp/run-1724918714-cl2-config-end.yaml

========================================================== warnings summary ===========================================================
new39/lib64/python3.9/site-packages/google/rpc/__init__.py:18
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/google/rpc/__init__.py:18: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
    import pkg_resources

new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.cloud')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2348
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2348
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2348
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2348: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(parent)

new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.logging')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/pkg_resources/__init__.py:2868: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('zope')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    declare_namespace(pkg)

new39/lib64/python3.9/site-packages/google/rpc/__init__.py:20
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/google/rpc/__init__.py:20: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('google.rpc')`.
  Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages
    pkg_resources.declare_namespace(__name__)

tests/functional/disaster-recovery/metro-dr/test_multiple_apps_failover_and_relocate.py: 268 warnings
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/selenium/webdriver/remote/remote_connection.py:418: DeprecationWarning: HTTPResponse.getheader() is deprecated and will be removed in urllib3 v2.1.0. Instead use HTTPResponse.headers.get(name, default).
    if resp.getheader('Content-Type') is not None:

tests/functional/disaster-recovery/metro-dr/test_multiple_apps_failover_and_relocate.py: 268 warnings
  /home/sraghave/home_dir/home/shrivaibaviraghaventhiran/automation_ocs4/new_vai/ocs-ci/new39/lib64/python3.9/site-packages/selenium/webdriver/remote/remote_connection.py:419: DeprecationWarning: HTTPResponse.getheader() is deprecated and will be removed in urllib3 v2.1.0. Instead use HTTPResponse.headers.get(name, default).
    content_type = resp.getheader('Content-Type').split(';')

-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================================ 1 passed, 552 warnings in 1837.17s (0:30:37) =============================================```
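For readers skimming the verification log above: the rolling node restart it exercises (cordon → drain → reboot → uncordon → wait for Ready, one node at a time) can be summarized with the rough sketch below. This is not the ocs-ci implementation; the `oc` wrapper and the `reboot_vm` callback are illustrative stand-ins.

```python
import subprocess
import time


def oc(*args):
    """Run an oc command and return its stdout (raises on a non-zero exit)."""
    return subprocess.run(
        ["oc", *args], check=True, capture_output=True, text=True
    ).stdout


def wait_for_ready(node, timeout=600, interval=10):
    """Poll the node until its Ready condition reports True."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = oc(
            "get", "node", node, "-o",
            'jsonpath={.status.conditions[?(@.type=="Ready")].status}',
        ).strip()
        if status == "True":
            return
        time.sleep(interval)
    raise TimeoutError(f"Node {node} did not reach Ready within {timeout}s")


def rolling_restart(nodes, reboot_vm):
    """Cordon, drain, reboot, uncordon and re-verify each node in turn."""
    for node in nodes:
        oc("adm", "cordon", node)
        oc("adm", "drain", node, "--force=true", "--ignore-daemonsets",
           "--delete-emptydir-data", "--disable-eviction")
        reboot_vm(node)  # hypothetical callback, e.g. a vSphere API call
        time.sleep(30)   # matches the fixed 30s wait seen in the log
        oc("adm", "uncordon", node)
        wait_for_ready(node)
```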

@Shrivaibavi Shrivaibavi added Verified Mark when PR was verified and log provided DR Metro and Regional DR related PRs labels Aug 29, 2024
ocs_ci/helpers/dr_helpers_ui.py Outdated Show resolved Hide resolved
ocs_ci/helpers/dr_helpers_ui.py Outdated Show resolved Hide resolved
ocs_ci/helpers/dr_helpers_ui.py Outdated Show resolved Hide resolved
ocs_ci/helpers/dr_helpers_ui.py Outdated Show resolved Hide resolved
bool: True if the application is not present, false otherwise

"""
if workload_to_check:
Contributor

Based upon the current param in the function, this condition `if workload_to_check` will always return True.

Contributor Author

Now workload_to_check is a list, hence it will return False if it's empty.

Contributor

@Shrivaibavi Aren't you returning True in line 684 if the list is empty?

Contributor Author

workload_to_check becomes a necessary argument; if it's empty, ideally it won't enter the loop. I hope this answer helps.
And sorry if my above answer is confusing.

Contributor

I am just asking: what would it return if the list is empty?

Contributor Author

Nothing... Ideally it shouldn't be an empty list. Validating whether it's an empty list or contains elements and raising an exception wouldn't make much difference, as workloads_to_check has been added as a necessary param IMO.

Contributor

No, ideally we should raise an exception or fail the test if the list is empty, is what I feel.
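A minimal sketch of the guard being debated here, assuming `workloads_to_check` is a required list argument: raising on an empty list makes the failure explicit instead of letting the loop be skipped and the check pass trivially. The function and callable names are hypothetical, not the actual ocs-ci helpers.

```python
def verify_applications_absent_in_ui(workloads_to_check, is_present_in_ui):
    """Return True only if none of the given workloads shows up in the ACM UI.

    is_present_in_ui: callable taking a workload name and returning bool
    (a stand-in for the real UI lookup).
    """
    if not workloads_to_check:
        # An empty list would otherwise skip the loop and "pass" trivially.
        raise ValueError("workloads_to_check must contain at least one workload name")
    return not any(is_present_in_ui(workload) for workload in workloads_to_check)


# Example: verify_applications_absent_in_ui(["app-busybox-1"], lambda name: False) -> True
```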

ocs_ci/helpers/dr_helpers_ui.py Outdated Show resolved Hide resolved
log.info(f"Application {workload_to_delete} got deleted successfully")
return True
else:
log.error("Application not present to delete")
Contributor

Is this the right error message?

Contributor

@Shrivaibavi Could you pls respond to this query?

Contributor Author

yes IMO.
if verify_application_present_in_ui(
    acm_obj, workloads_to_check=workloads_to_delete, timeout=timeout
):
else:
    log.error("Applications not present to delete from UI")
    return False

Please check the modified version

Contributor

What if we have multiple apps with the same name and it fails the test? That's why I suggested applying a filter for the type of workload in one of my other comments.

Contributor Author

The main intention of this PR is to deploy apps on both managed clusters and fail over multiple apps from C1 to C2; deleting the app from the UI was only brought in later as teardown.

It's a very nice suggestion, but it can only be incorporated in an upcoming PR. I believe there is no point in holding this PR for the above small correction. Please add your comment to the open issue #10365.

Contributor

ack
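Roughly what the teardown path under discussion could look like once the empty-list guard and a workload-type filter are folded in. This is a sketch only; `find_app` and `delete_app` are hypothetical stand-ins for the real UI helpers, and `workload_type` is the filter suggested above.

```python
import logging

log = logging.getLogger(__name__)


def delete_workloads_via_ui(acm_obj, workloads_to_delete, find_app, delete_app,
                            workload_type=None):
    """Delete each workload from the ACM Applications page.

    find_app / delete_app are callables standing in for the real UI helpers;
    workload_type (e.g. "Subscription") narrows the search.
    """
    if not workloads_to_delete:
        raise ValueError("workloads_to_delete must not be empty")
    for workload in workloads_to_delete:
        # Filtering by type avoids acting on a different app that shares the same name.
        if not find_app(acm_obj, workload, workload_type=workload_type):
            log.error("Applications not present to delete from UI")
            return False
        delete_app(acm_obj, workload)
        log.info("Application %s got deleted successfully", workload)
    return True
```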

ocs_ci/ocs/ui/acm_ui.py Outdated Show resolved Hide resolved
ocs_ci/ocs/ui/acm_ui.py Outdated Show resolved Hide resolved
git_kustomization_yaml_data, self.git_repo_kustomization_yaml_file
)

workload_namespaces.append(self._get_workload_namespace())
Contributor

The workload_namespaces variable is populated with values within the loop but is not used anywhere else in the code. Please check if there might be a missing use case.

Test to deploy applications on both the managed clusters and test Failover and Relocate actions on them

"""
self.workload_namespaces = []
Contributor

This variable appears to be unused. Please check if it's required or if it can be removed.

@@ -307,7 +483,7 @@ def delete_workload(self, force=False, rbd_name="rbd", switch_ctx=None):
try:
config.switch_ctx(switch_ctx) if switch_ctx else config.switch_acm_ctx()
run_cmd(
f"oc delete -k {self.workload_subscription_dir}/{self.workload_name}"
f"oc delete -k {self.workload_subscription_dir}/{self.workload_name} -n {self.workload_namespace}"
Contributor

Is this change still required, given that you are using delete_application_ui for workload deletion?

@@ -356,7 +532,7 @@ def delete_workload(self, force=False, rbd_name="rbd", switch_ctx=None):

finally:
config.switch_ctx(switch_ctx) if switch_ctx else config.switch_acm_ctx()
run_cmd(f"oc delete -k {self.workload_subscription_dir}")
run_cmd(f"oc delete -k {self.workload_subscription_dir} -n {ramen_ns}")
Contributor

Same as above, is this still required?

Comment on lines +276 to +212
clusters = [self.preferred_primary_cluster] if primary_cluster else []
if secondary_cluster:
Contributor

If I want to deploy workloads on the secondary_cluster, which is set to False by default, the function would fail because clusters will be equal to [] (an empty list) and we cannot iterate over it. We should handle this use case.

Contributor

If you set the parameters like:

primary_cluster=False
secondary_cluster=True

then it will set the clusters variable to an empty list on the first line (276) and then add the secondary cluster to that empty list in clusters, which will then be used in the following for loop.

Contributor

But where are we passing the secondary cluster to the empty list?

Contributor

on the following line:

clusters.append(self.preferred_secondary_cluster)
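Writing the cluster-selection logic out in full may make the thread easier to follow. This is a sketch under the assumption that setting both flags to False should be rejected up front rather than silently iterating over an empty list; attribute and helper names are illustrative, not the exact dr_workload code.

```python
def deploy_workload(self, primary_cluster=True, secondary_cluster=False):
    """Deploy the workload on the selected managed cluster(s)."""
    clusters = [self.preferred_primary_cluster] if primary_cluster else []
    if secondary_cluster:
        clusters.append(self.preferred_secondary_cluster)
    if not clusters:
        raise ValueError(
            "At least one of primary_cluster/secondary_cluster must be True"
        )
    for cluster in clusters:
        self._deploy_on_cluster(cluster)  # hypothetical per-cluster deploy step
```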

Signed-off-by: Shrivaibavi Raghaventhiran <[email protected]>
Signed-off-by: Shrivaibavi Raghaventhiran <[email protected]>
Signed-off-by: Shrivaibavi Raghaventhiran <[email protected]>
Signed-off-by: Shrivaibavi Raghaventhiran <[email protected]>
Signed-off-by: Shrivaibavi Raghaventhiran <[email protected]>
Signed-off-by: Shrivaibavi Raghaventhiran <[email protected]>
@Shrivaibavi
Contributor Author

openshift-ci bot commented Sep 4, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: am-agrawa, dahorak, Shrivaibavi, sidhant-agrawal

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@dahorak dahorak merged commit 3f6c5a5 into red-hat-storage:master Sep 4, 2024
5 of 6 checks passed