Adding mirror suite with sequential upgrade #4196

Merged 1 commit on Oct 30, 2024
suites/squid/cephfs/tier-0_cephfs_mirrror_upgrade_seq_6x_to_8x.yaml (new file, 301 additions, 0 deletions)
---
tests:
- test:
name: setup install pre-requisites
desc: Setup phase to deploy the required pre-requisites for running the tests.
module: install_prereq.py
abort-on-fail: true
- test:
abort-on-fail: true
clusters:
ceph1:
config:
verify_cluster_health: true
steps:
- config:
command: bootstrap
service: cephadm
args:
registry-url: registry.redhat.io
mon-ip: node1
rhcs-version: 6.1
release: rc
orphan-initial-daemons: true
skip-monitoring-stack: true
- config:
command: add_hosts
service: host
args:
attach_ip_address: true
labels: apply-all-labels
- config:
command: apply
service: mgr
args:
placement:
label: mgr
- config:
command: apply
service: mon
args:
placement:
label: mon
- config:
command: apply
service: osd
args:
all-available-devices: true
- config:
command: shell
args:
- ceph fs volume create cephfs
- config:
command: apply
service: mds
base_cmd_args:
verbose: true
pos_args:
- cephfs
args:
placement:
nodes:
- node4
- node5
- config:
command: apply
service: cephfs-mirror
args:
placement:
nodes:
- node6
ceph2:
config:
verify_cluster_health: true
steps:
- config:
command: bootstrap
service: cephadm
args:
registry-url: registry.redhat.io
mon-ip: node1
rhcs-version: 6.1
release: rc
orphan-initial-daemons: true
skip-monitoring-stack: true
- config:
command: add_hosts
service: host
args:
attach_ip_address: true
labels: apply-all-labels
- config:
command: apply
service: mgr
args:
placement:
label: mgr
- config:
command: apply
service: mon
args:
placement:
label: mon
- config:
command: apply
service: osd
args:
all-available-devices: true
- config:
command: shell
args:
- ceph fs volume create cephfs
- config:
command: apply
service: mds
base_cmd_args:
verbose: true
pos_args:
- cephfs
args:
placement:
nodes:
- node4
- node5
desc: CephFS Mirror cluster deployment using cephadm
destroy-cluster: false
module: test_cephadm.py
polarion-id: CEPH-83574114
name: deploy cephfs-mirror
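For reference, the `apply` and `shell` steps in the ceph1 block above translate, roughly, into the following CLI (a sketch based on the suite's args; the framework issues these through cephadm internally, not verbatim):

```shell
# Rough CLI equivalent of the ceph1 deployment steps above (sketch only;
# <node1-ip> stands in for the bootstrap monitor's address).
cephadm bootstrap --registry-url registry.redhat.io --mon-ip <node1-ip>
ceph orch apply mgr --placement='label:mgr'
ceph orch apply mon --placement='label:mon'
ceph orch apply osd --all-available-devices
ceph fs volume create cephfs
ceph orch apply mds cephfs --placement='node4;node5'
ceph orch apply cephfs-mirror --placement='node6'
```

The ceph2 block is identical except that it deploys no cephfs-mirror daemon; only the primary cluster runs the mirror daemon in this suite.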
- test:
abort-on-fail: true
clusters:
ceph1:
config:
command: add
copy_admin_keyring: true
id: client.1
install_packages:
- ceph-common
- ceph-fuse
node: node7
ceph2:
config:
command: add
copy_admin_keyring: true
id: client.1
install_packages:
- ceph-common
- ceph-fuse
node: node6
desc: Configure the CephFS clients on both clusters
destroy-cluster: false
module: test_client.py
name: configure client
- test:
abort-on-fail: true
desc: Configure CephFS Mirroring
clusters:
ceph1:
config:
name: Validate the Synchronisation is successful upon enabling fs mirroring
module: cephfs_mirror_upgrade.configure_cephfs_mirroring.py
name: Validate the Synchronisation is successful upon enabling fs mirroring.
polarion-id: CEPH-83574099
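The configuration module wires up snapshot mirroring between the two clusters. The underlying flow follows the standard CephFS mirroring commands (a sketch; the mirrored directory path is an assumption, and the module may add its own validation around each step):

```shell
# On both clusters: enable the mirroring mgr module.
ceph mgr module enable mirroring

# On the primary (ceph1): enable snapshot mirroring for the filesystem.
ceph fs snapshot mirror enable cephfs

# On the secondary (ceph2): create a bootstrap token for the peer.
ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site

# On the primary: import the token, then add a directory to mirror
# (the path below is illustrative only).
ceph fs snapshot mirror peer_bootstrap import cephfs <token>
ceph fs snapshot mirror add cephfs /home/user1
```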
- test:
abort-on-fail: false
desc: "Validate snapshot synchronisation perf improvements"
clusters:
ceph1:
config:
name: Validate snapshot synchronisation perf improvements
source_fs: "cephfs_snapdiff_pre"
target_fs: "cephfs_rem_snapdiff_pre"
result_file: "snapshot_sync_info_v7.csv"
module: cephfs_mirroring.snapdiff_perf_improvements_across_releases.py
name: Validate snapshot synchronisation perf improvements
polarion-id: "CEPH-83595260"
- test:
name: Upgrade along with IOs
module: test_parallel.py
clusters:
ceph1:
config:
name: Validate the Synchronisation is successful upon enabling fs mirroring
parallel:
- test:
abort-on-fail: false
config:
timeout: 30
client_upgrade: 1
client_upgrade_node: 'node8'
desc: Runs IOs in parallel with upgrade process
module: cephfs_upgrade.cephfs_io.py
name: "creation of Prerequisites for Upgrade"
polarion-id: CEPH-83575315
- test:
name: Upgrade ceph
desc: Upgrade cluster to latest version
module: cephadm.test_cephadm_upgrade.py
polarion-id: CEPH-83574638
clusters:
ceph1:
config:
command: start
service: upgrade
base_cmd_args:
verbose: true
benchmark:
type: rados
pool_per_client: true
pg_num: 128
duration: 10
verify_cluster_health: false
destroy-cluster: false
desc: Running upgrade, MDS failure and I/Os in parallel
abort-on-fail: true
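The `service: upgrade` / `command: start` step wraps the cephadm orchestrator upgrade. In CLI terms it amounts to something like the following (the image reference is a placeholder, not the tag the framework resolves):

```shell
# Kick off the staggered upgrade (image tag is illustrative).
ceph orch upgrade start --image <registry>/rhceph/rhceph-8-rhel9:latest

# Watch progress while the parallel IO test keeps writing.
ceph orch upgrade status
ceph -s
```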
- test:
abort-on-fail: false
desc: Validate the Synchronisation is successful upon upgrade of primary cluster
clusters:
ceph1:
config:
name: Validate the Synchronisation is successful upon upgrade of primary cluster
clean_up: false
module: cephfs_mirror_upgrade.post_upgrade_validate.py
name: Validate the Synchronisation is successful upon upgrade.
polarion-id: CEPH-83575336
- test:
name: Upgrade along with IOs
module: test_parallel.py
clusters:
ceph2:
config:
name: Validate the Synchronisation is successful upon enabling fs mirroring
parallel:
- test:
abort-on-fail: false
config:
timeout: 30
client_upgrade: 1
client_upgrade_node: 'node8'
desc: Runs IOs in parallel with upgrade process
module: cephfs_upgrade.cephfs_io.py
name: "creation of Prerequisites for Upgrade"
polarion-id: CEPH-83575315
- test:
name: Upgrade ceph
desc: Upgrade cluster to latest version
module: cephadm.test_cephadm_upgrade.py
polarion-id: CEPH-83574638
clusters:
ceph2:
config:
command: start
service: upgrade
base_cmd_args:
verbose: true
benchmark:
type: rados
pool_per_client: true
pg_num: 128
duration: 10
verify_cluster_health: false
destroy-cluster: false
desc: Running upgrade, MDS failure and I/Os in parallel
abort-on-fail: true
- test:
abort-on-fail: false
desc: Validate the Synchronisation is successful upon upgrade of second cluster
clusters:
ceph1:
config:
name: Validate the Synchronisation is successful upon upgrade of second cluster
clean_up: true
module: cephfs_mirror_upgrade.post_upgrade_validate.py
name: Validate the Synchronisation is successful upon upgrade.
polarion-id: CEPH-83575336
Contributor:
Hi Amar,

Can you also include the recent addition (snapdiff tests) in the mirror upgrade suite?

Contributor Author:
Updated, Hemanth.
Execution is in progress.

- test:
abort-on-fail: false
desc: "Validate snapshot synchronisation perf improvements"
clusters:
ceph1:
config:
name: Validate snapshot synchronisation perf improvements
source_fs: "cephfs_snapdiff_post"
target_fs: "cephfs_rem_snapdiff_post"
result_file: "snapshot_sync_info_v8.csv"
module: cephfs_mirroring.snapdiff_perf_improvements_across_releases.py
name: Validate snapshot synchronisation perf improvements
polarion-id: "CEPH-83595260"
- test:
abort-on-fail: false
desc: "Validate sync duration results by comparing between 2 releases"
clusters:
ceph1:
config:
name: Validate sync duration results by comparing between 2 releases
result_filev7: "snapshot_sync_info_v7.csv"
result_filev8: "snapshot_sync_info_v8.csv"
module: cephfs_mirroring.validate_snapdiff_perf_results.py
name: Validate sync duration results by comparing between 2 releases
polarion-id: "CEPH-83595260"
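The comparison module consumes the two CSVs produced by the pre- and post-upgrade snapdiff runs. As a minimal sketch of the kind of check involved (the `snapshot` and `sync_duration` column names are assumptions, not the actual layout of the `snapshot_sync_info_*.csv` files):

```python
import csv
import io

# Sketch: flag snapshots whose sync duration regressed between releases.
# Column names are assumed; the real result-file layout may differ.

def load_durations(fh):
    """Map snapshot name -> sync duration in seconds."""
    return {row["snapshot"]: float(row["sync_duration"])
            for row in csv.DictReader(fh)}

def regressions(v7, v8):
    """Snapshots that synced slower on the upgraded (v8) cluster."""
    return [snap for snap, dur in v8.items()
            if snap in v7 and dur > v7[snap]]

v7 = load_durations(io.StringIO("snapshot,sync_duration\nsnap1,12.5\nsnap2,8.0\n"))
v8 = load_durations(io.StringIO("snapshot,sync_duration\nsnap1,9.1\nsnap2,8.4\n"))
print(regressions(v7, v8))  # ['snap2']
```

A real validation would likely tolerate small jitter (for example, only flagging regressions beyond a percentage threshold) rather than any slowdown at all.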