diff --git a/README.md b/README.md
new file mode 100644
index 0000000..9a76975
--- /dev/null
+++ b/README.md
@@ -0,0 +1,1564 @@
+# OceanStor DJ Ansible Playbooks
+
+## 1 - Global variables
+
+Edit global.yml and set the global variables:
+
+```shell
+
+BASE_DIR: ~/ansible/playbook    # playbook base directory
+
+DJ:
+  host: 192.168.2.10            # DJ host name or IP address
+  port: 26335                   # DJ northbound API port, default: 26335
+  user: nbiuser                 # DJ user name, the user type must be 'Third-party user', the role 'NBI User Group' must be assigned to the user
+  pswd: xxxxx                   # DJ user password
+  lang: en_US                   # DJ language setting, options: zh_CN, en_US
+  token:                        # do not change this, user/login.yml will automatically update this after a successful login
+
+STORAGES:
+  - name: storage1              # Storage device name
+    sn: "12345678901234567890"  # Storage device SN
+    ipList:                     # Storage management IP addresses
+      - 192.168.2.11
+      - 192.168.2.12
+    port: 8088                  # Storage DeviceManager port, default: 8088
+    user: admin                 # Storage user name
+    pswd: xxxxx                 # Storage user password
+```
+
+## 2 - User Actions
+
+### 2.1 - Login DJ
+
+```shell
+# Include these tasks at the beginning of a playbook to log in to DJ
+#
+# Requires the var file ../global.yml to be loaded
+#
+# Examples:
+
+  vars_files:
+    - ../global.yml
+  tasks:
+    - import_tasks: ../user/login.yml
+
+```
+
+### 2.2 - Logout DJ
+
+```shell
+# Include these tasks at the end of a playbook if you need to log out
+#
+# Requires the var file ../global.yml to be loaded
+#
+# Examples:
+#
+  vars_files:
+    - ../global.yml
+  tasks:
+    - import_tasks: ../user/login.yml
+
+    - import_tasks: ../user/logout.yml
+```
+
+## 3 - AZ Actions
+
+### 3.1 - List AZs
+
+```shell
+# Optional Parameters:
+# pageNo: page number, default 1
+# pageSize: page size, default: 10
+# sortKey: sort key, options: name
+# sortDir: sort direction, default: asc, options: desc, asc
+# azName: availability zone name
+#
+# Examples:
+
+ansible-playbook az/list_azs.yml
+
+ansible-playbook az/list_azs.yml --extra-vars "azName='AT' sortKey='name' sortDir='desc'"
+
+```
+
+### 3.2 - Get AZ by name
+
+```shell
+# Required Parameters:
+# azName: availability zone name
+#
+# Examples:
+
+ansible-playbook az/get_az_by_name.yml --extra-vars "azName='AT'"
+
+```
+
+## 4 - Project Actions
+
+### 4.1 - List Projects
+
+```shell
+# Optional Parameters:
+# pageNo: page number, default 1
+# pageSize: page size, default: 10
+# projectName: project name
+#
+# Examples:
+
+ansible-playbook project/list_projects.yml
+
+ansible-playbook project/list_projects.yml --extra-vars "projectName='FR'"
+
+```
+
+### 4.2 - Get Project by Name
+
+```shell
+# Required Parameters:
+# projectName: project name
+#
+# Examples:
+
+ansible-playbook project/get_project_by_name.yml --extra-vars "projectName='FR'"
+
+```
+
+## 5 - Tier Actions
+
+### 5.1 - List Tiers
+
+```shell
+# Optional Parameters:
+# detail: show detail, options: true, false
+# sortKey: sort key, options: name, total_capacity, created_at
+# sortDir: sort direction, default: asc, options: desc, asc
+# tierName: service level name
+# azName: availability zone name
+# projectName: project name
+#
+# Examples:
+
+ansible-playbook tier/list_tiers.yml
+
+ansible-playbook tier/list_tiers.yml --extra-vars "tierName='Gold'"
+
+ansible-playbook tier/list_tiers.yml --extra-vars "sortKey='total_capacity' sortDir='desc'"
+
+ansible-playbook tier/list_tiers.yml --extra-vars "azName='room1' projectName='project1'"
+
+```
+
+### 5.2 - Get Tier by Name
+
+```shell
+# Required Parameters:
+# tierName: service level name
+#
+# Examples:
+
+ansible-playbook tier/get_tier_by_name.yml --extra-vars "tierName='Gold'" + +``` + +## 6 - Host Actions + + +### 6.1 - List Hosts + +```shell +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: initiator_count +# sortDir: sort direction, options: desc, asc +# hostName: host name +# ip: ip address +# osType: os type, options: LINUX, WINDOWS, SUSE, EULER, REDHAT, CENTOS, WINDOWSSERVER2012, SOLARIS, HPUX, AIX, XENSERVER, MACOS, VMWAREESX, ORACLE, OPENVMS +# displayStatus: display status, options: OFFLINE, NOT_RESPONDING, NORMAL, RED, GRAY, GREEN, YELLOW +# managedStatus: a list of managed status, options: NORMAL, TAKE_OVERING, TAKE_ERROR, TAKE_OVER_ALARM, UNKNOWN +# accessMode: access mode, options: ACCOUNT, NONE, VCENTER +# azName: availability zone name +# projectName: project name +# +# Examples: + +ansible-playbook host/list_hosts.yml + +ansible-playbook host/list_hosts.yml --extra-vars "hostName='test'" + +ansible-playbook host/list_hosts.yml --extra-vars "accessMode='NONE' displayStatus='NORMAL' managedStatus=['NORMAL']" + +ansible-playbook host/list_hosts.yml --extra-vars "azName='room1' projectName='project1'" + +ansible-playbook host/list_hosts.yml --extra-vars "sortKey='initiator_count' sortDir='desc'" + +# Generated Parameters (can be overwritten): +# azId: availability zone ID +# projectId: project ID +# +# Examples: + +ansible-playbook host/list_hosts.yml --extra-vars "azId='B2012FF2ECB03CCCA03FFAAD4BA590F1' projectId='2AC426C9F4C535A2BEEFAEE9F2EDF740'" + +``` + +### 6.2 - Get Hosts by Fuzzy Name + +```shell +# Required Parameters: +# hostName: host name +# +# Examples: + +ansible-playbook host/get_hosts_by_fuzzy_name.yml --extra-vars "hostName='test'" + +``` + +### 6.3 - Show Host + +```shell +# Required Parameters: +# hostName: host name, can be replaced with hostId +# +# Examples: + +ansible-playbook host/show_host.yml --extra-vars "hostName='ansible1'" + +# Optional Parameters: +# hostId: host ID +# showPort: show host ports, default: true +# portName: port wwn or iqn +# portType: port type, options: UNKNOWN, FC, ISCSI +# portStatus: port status, options: UNKNOWN, ONLINE, OFFLINE, UNBOUND +# +# Examples: + +ansible-playbook host/show_host.yml --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" + +ansible-playbook host/show_host.yml --extra-vars '{"hostName": "ansible1", "showPort": false}' + +ansible-playbook host/show_host.yml --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b' portName='10000090fa1b623e'" + +ansible-playbook host/show_host.yml --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b' portType='ISCSI'" + +ansible-playbook host/show_host.yml --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b' portStatus='ONLINE'" + +``` + +### 6.4 - List Host Groups + +```shell +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: host_count +# sortDir: sort direction, options: desc, asc +# hostGroupName: host group name +# managedStatus: a list of managed status, options: NORMAL, TAKE_OVERING, TAKE_ERROR, TAKE_OVER_ALARM, UNKNOWN +# azName: availability zone name +# projectName: project name +# +# Examples: + +ansible-playbook host/list_hostgroups.yml + +ansible-playbook host/list_hostgroups.yml --extra-vars "hostGroupName='test'" + +ansible-playbook host/list_hostgroups.yml --extra-vars "azName='room1' projectName='project1'" + +# Generated Parameters (can be overwritten): +# azIds: a list of availability 
zone IDs +# projectId: project ID +# +# Examples: +# --extra-vars "azIds=['B2012FF2ECB03CCCA03FFAAD4BA590F1'] projectId='2AC426C9F4C535A2BEEFAEE9F2EDF740'" + +``` + +### 6.5 - Get Host Groups by Fuzzy Name + +```shell +# Required Parameters: +# hostGroupName: host group name +# +# Examples: + +ansible-playbook host/get_hostgroups_by_fuzzy_name.yml --extra-vars "hostGroupName='test'" + +``` + +### 6.6 - Show Host Group + +```shell +# Required Parameters: +# hostGroupName: host group name, can be replaced with hostId +# +# Examples: + +ansible-playbook host/show_hostgroup.yml --extra-vars "hostGroupName='group1'" + +# Optional Parameters: +# hostGroupId: host group ID +# showHost: show hosts, default: true +# hostName: host name +# ip: ip address +# osType: os type, options: LINUX, WINDOWS, SUSE, EULER, REDHAT, CENTOS, WINDOWSSERVER2012, SOLARIS, HPUX, AIX, XENSERVER, MACOS, VMWAREESX, ORACLE, OPENVMS +# displayStatus: a list of display status, options: OFFLINE, NOT_RESPONDING, NORMAL, RED, GRAY, GREEN, YELLOW +# managedStatus: a list of managed status, options: NORMAL, TAKE_OVERING, TAKE_ERROR, TAKE_OVER_ALARM, UNKNOWN +# +# Examples: +ansible-playbook host/show_hostgroup.yml --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" +ansible-playbook host/show_hostgroup.yml --extra-vars '{"hostGroupName":"group1","showHost":false}' +ansible-playbook host/show_hostgroup.yml --extra-vars '{"hostGroupName":"group1","displayStatus":["NORMAL"],"managedStatus":["NORMAL"]}' +``` + + +## 7 - Volume Actions + +### 7.1 - Create Volume + +```shell +# Required Parameters: +# volumes: a list of volumes: [{ +# name: volume name or prefix, +# capacity: capacity in GiB, +# count: number of volumes, +# start_suffix: suffix start number, default 0 +# }] +# tierName: service level name, can be instead with tierId +# +# Examples: + +# create a batch of volumes +ansible-playbook volume/create_volume.yml \ + --extra-vars "tierName='AT_Class_B'" \ + --extra-vars '{"volumes": [{"name": "ansible0_", "capacity": 10, "count": 2}] }' + +# set suffix start number +ansible-playbook volume/create_volume.yml \ + --extra-vars "tierName='Gold'" \ + --extra-vars '{"volumes": [{"name": "ansible1_", "capacity": 10, "count": 2, "start_suffix": 2}] }' + +# create multiple batch of volumes +ansible-playbook volume/create_volume.yml \ + --extra-vars "tierName='Gold'" \ + --extra-vars '{"volumes": [{"name": "ansible2_", "capacity": 10, "count": 2}, {"name": "ansible3_", "capacity": 10, "count": 2}] }' +# +# Optional Parameters: +# projectName: project name +# azName: availability zone name +# affinity: create multiple volumes on 1 storage, default: true, options: true, false +# affinityVolume: create target volume on the same storage of this affinityVolume +# hostName: map to host +# hostGroupName: map to host group +# +# Examples: + +# set project name +ansible-playbook volume/create_volume.yml \ + --extra-vars "projectName='project1'" \ + --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible4_", "capacity": 10, "count": 2}] }' + +# set AZ name +ansible-playbook volume/create_volume.yml \ + --extra-vars "azName='room1'" \ + --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible5_", "capacity": 10, "count": 2}] }' + +# set affinity +ansible-playbook volume/create_volume.yml \ + --extra-vars "affinity='false'" \ + --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible6_", "capacity": 10, "count": 2}] }' + +# set affinity volume +ansible-playbook volume/create_volume.yml \ + --extra-vars 
"affinityVolume='ansible1_0000'" \ + --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible7_", "capacity": 10, "count": 2}] }' + +# map to host +ansible-playbook volume/create_volume.yml \ + --extra-vars "hostName='79rbazhs'" \ + --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible8_", "capacity": 10, "count": 2}] }' + +# map to host group +ansible-playbook volume/create_volume.yml \ + --extra-vars "hostGroupName='exclusive-df06cf7456dc485d'" \ + --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible9_", "capacity": 10, "count": 2}] }' + +# Generated Parameters (can be overwritten): +# tierId: service level ID +# projectId: project ID +# azId: az ID +# affinityVolumeId affinity volume ID +# hostId: host ID +# hostGroupId host group ID +# +# Examples: + +# set tier ID instead of tierName +ansible-playbook volume/create_volume.yml \ + --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleA_", "capacity": 10, "count": 2}] }' + +# set project ID instead of projectName +ansible-playbook volume/create_volume.yml \ + --extra-vars "projectId='2AC426C9F4C535A2BEEFAEE9F2EDF740'" \ + --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleB_", "capacity": 10, "count": 2}] }' + +# set AZ ID instead of azName +ansible-playbook volume/create_volume.yml \ + --extra-vars "azId='02B770926FCB3AE5A413E8A74F9A576B'" \ + --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleC_", "capacity": 10, "count": 2}] }' + +# set affinity volume ID instant of affinityVolume +ansible-playbook volume/create_volume.yml \ + --extra-vars "affinityVolumeId='cfe7eb0f-73f8-4110-bff4-07cb46121566'" \ + --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleD_", "capacity": 10, "count": 2}] }' + +# set map host ID instead of hostName +ansible-playbook volume/create_volume.yml \ + --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" \ + --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleE_", "capacity": 10, "count": 2}] }' + +# set map host group ID instead of hostGroupName +ansible-playbook volume/create_volume.yml \ + --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" \ + --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleF_", "capacity": 10, "count": 2}] }' + +``` + +### 7.2 - Attach Volumes to Host + +```shell +# Required Parameters: +# volumeName: volume fuzzy name, can be instead with volumeIds +# hostName: host name, can be instead with hostId +# +# Examples: + +ansible-playbook volume/attach_volumes_to_host.yml --extra-vars "volumeName='ansibleC_' hostName='79rbazhs'" + +# Generated Parameters (can be overwritten): +# volumeIds: a list of volume IDs +# hostId: host ID +# +# Examples: + +ansible-playbook volume/attach_volumes_to_host.yml \ + --extra-vars '{"volumeIds": ["9bff610a-6b5b-42db-87ac-dc74bc724525","507dcef9-205a-405c-a794-e791330560a1"]}' \ + --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" + +``` + + +### 7.3 - Attach Volumes to Host Group + +```shell +# Required Parameters: +# volumeName: volume fuzzy name, can be instead with volumeIds +# hostGroupName: host group name, can be instead with hostGroupId +# +# Examples: + +ansible-playbook volume/attach_volumes_to_hostgroup.yml \ + --extra-vars "volumeName='ansibleC_' hostGroupName='exclusive-df06cf7456dc485d'" + +# +# Generated Parameters (can be 
overwritten): +# volumeIds: a list of volume IDs +# hostGroupId: host group ID +# +# Examples: + +ansible-playbook volume/attach_volumes_to_hostgroup.yml \ + --extra-vars '{"volumeIds": ["9bff610a-6b5b-42db-87ac-dc74bc724525","507dcef9-205a-405c-a794-e791330560a1"]}' \ + --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" + +``` + + +### 7.4 - Detach Volumes from Host + +```shell +# Required Parameters: +# volumeName: volume fuzzy name, can be instead with volumeIds +# hostName: host name, can be instead with hostId +# +# Examples: + +ansible-playbook volume/detach_volumes_from_host.yml --extra-vars "volumeName='ansibleC_' hostName='79rbazhs'" + +# Generated Parameters (can be overwritten): +# volumeIds: a list of volume IDs +# hostId: host ID +# +# Examples: + +ansible-playbook volume/detach_volumes_from_host.yml \ + --extra-vars '{"volumeIds": ["9bff610a-6b5b-42db-87ac-dc74bc724525","507dcef9-205a-405c-a794-e791330560a1"]}' \ + --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" + +``` + +### 7.5 - Detach Volumes from Host Group + +```shell +# Required Parameters: +# volumeName: volume fuzzy name, can be instead with volumeIds +# hostGroupName: host group name, can be instead with hostGroupId +# +# Examples: + +ansible-playbook volume/detach_volumes_from_hostgroup.yml \ + --extra-vars "volumeName='ansibleC_' hostGroupName='exclusive-df06cf7456dc485d'" + +# +# Generated Parameters (can be overwritten): +# volumeIds: a list of volume IDs +# hostGroupId: host group ID +# +# Examples: + +ansible-playbook volume/detach_volumes_from_hostgroup.yml \ + --extra-vars '{"volumeIds": ["9bff610a-6b5b-42db-87ac-dc74bc724525","507dcef9-205a-405c-a794-e791330560a1"]}' \ + --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" + +``` + +### 7.6 - List Volumes + +```shell +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: size +# sortDir: sort direction, default: asc, options: desc, asc +# volumeName: volume name +# volumeWwn: volume WWN +# status: volume status, options: creating, normal, mapping, unmapping, deleting, error, expanding +# allocType: allocate type, options: thin, thick +# attached: is attached, options: true, false +# mode: service mode, options: service, non-service, all +# tierName: service level name +# projectName: project name +# hostName: host name +# hostGroupName: host group name +# deviceName: storage device name +# poolName: storage pool name +# +# Examples: + +ansible-playbook volume/list_volumes.yml +ansible-playbook volume/list_volumes.yml --extra-vars "pageNo=1 pageSize=2" +ansible-playbook volume/list_volumes.yml --extra-vars "sortKey='size' sortDir=desc" +ansible-playbook volume/list_volumes.yml --extra-vars "volumeName='ansible'" +ansible-playbook volume/list_volumes.yml --extra-vars "volumeWwn='6002c03100dffcaa01142ac40000259a'" +ansible-playbook volume/list_volumes.yml --extra-vars "status='normal'" +ansible-playbook volume/list_volumes.yml --extra-vars "allocType='thin'" +ansible-playbook volume/list_volumes.yml --extra-vars "attached='true'" +ansible-playbook volume/list_volumes.yml --extra-vars "mode='service'" +ansible-playbook volume/list_volumes.yml --extra-vars "tierName='Gold'" +ansible-playbook volume/list_volumes.yml --extra-vars "projectName='project1'" +ansible-playbook volume/list_volumes.yml --extra-vars "hostName='79rbazhs'" +ansible-playbook volume/list_volumes.yml --extra-vars "hostGroupName='exclusive-df06cf7456dc485d'" +ansible-playbook 
volume/list_volumes.yml --extra-vars "deviceName='A'" +ansible-playbook volume/list_volumes.yml --extra-vars "deviceName='A' poolName='StoragePool001'" + +# Generated Parameters (can be overwritten): +# tierId: service level ID +# projectId: project ID +# hostId: host ID +# hostGroupId: host group ID +# deviceId: storage device ID +# poolId: storage pool ID +# +# Examples: + +ansible-playbook volume/list_volumes.yml --extra-vars "tierId='bdd129e1-6fbf-4456-91d8-d1fe426bf8e0'" +ansible-playbook volume/list_volumes.yml --extra-vars "projectId='2AC426C9F4C535A2BEEFAEE9F2EDF740'" +ansible-playbook volume/list_volumes.yml --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" +ansible-playbook volume/list_volumes.yml --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" +ansible-playbook volume/list_volumes.yml --extra-vars "deviceId='9da73b78-3054-11ea-9855-00505691e086'" +ansible-playbook volume/list_volumes.yml --extra-vars "deviceId='9da73b78-3054-11ea-9855-00505691e086' poolId=0" +``` + +### 7.7 - Get Volumes by Fuzzy Name + +```shell +# Required Parameters: +# volumeName: volume name +# +# Examples: + +ansible-playbook volume/get_volumes_by_fuzzy_name.yml --extra-vars "volumeName='ansible'" + +# +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# +# Examples: + +ansible-playbook volume/get_volumes_by_fuzzy_name.yml --extra-vars "pageNo=1 pageSize=100 volumeName='ansible'" + +``` + +### 7.8 - Delete Volumes by Fuzzy Name + +```shell +# Required Parameters: +# volumeName: volume name +# +# Examples: +# --extra-vars "volumeName='ansible'" + +ansible-playbook volume/delete_volumes_by_fuzzy_name.yml --extra-vars "volumeName=ansible" + +``` + +## 8 - Task Actions + +### 8.1 - List Tasks + +```shell +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: name, status, start_time, end_time +# sortDir: sort direction, default: asc, options: desc, asc +# taskName: task name +# ownerName: owner name +# status: task status, options: 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout +# startTimeFrom: query tasks which's start time after this, epoch in seconds +# startTimeTo: query tasks which's start time before this, epoch in seconds +# endTimeFrom: query tasks which's end time after this, epoch in seconds +# endTimeTo: query tasks which's end time before this, epoch in seconds +# +# Examples: + +ansible-playbook task/list_tasks.yml +ansible-playbook task/list_tasks.yml --extra-vars "sortKey='start_time' sortDir='desc'" +ansible-playbook task/list_tasks.yml --extra-vars "taskName='Delete volume'" +ansible-playbook task/list_tasks.yml --extra-vars "status=3" +ansible-playbook task/list_tasks.yml --extra-vars "startTimeFrom=`date -d '12:00:00' +%s` startTimeTo=`date -d '16:00:00' +%s`" +ansible-playbook task/list_tasks.yml --extra-vars "endTimeFrom=`date -d '12:00:00' +%s` endTimeTo=`date -d '16:00:00' +%s`" + +``` + +### 8.2 - Get Task by ID + +```shell +# Required Parameters: +# taskId: Task ID +# +# Examples: + +ansible-playbook task/get_task_by_id.yml --extra-vars "taskId=bd5f2b70-d416-4d61-8e1a-f763e68dbbe1" +``` + + +### 8.3 - Wait Task Complete + +```shell +# Required Parameters: +# taskId: Task ID +# +# Optional Parameters: +# seconds: wait seconds, default 300 +# +# Examples: + +ansible-playbook task/wait_task_complete.yml --extra-vars "taskId=bd5f2b70-d416-4d61-8e1a-f763e68dbbe1 seconds=60" + +``` + + +## 9 - CMDB Actions + + +### 
9.1 - List Instances +```shell +# Required Parameters: +# objType: object type name, see ../global.yml to get supported object types in INVENTORY +# +# Examples: + +ansible-playbook cmdb/list_instances.yml --extra-vars "objType=volume" + +# Optional Parameters: +# params: query parameters, see Examples +# export: export file path +# sep: separator, default '|' +# +# Examples: + +ansible-playbook cmdb/list_instances.yml --extra-vars "objType=volume" \ + --extra-vars "params='pageNo=1&pageSize=10'" \ + --extra-vars "export='volumes.csv' sep='|'" + +ansible-playbook cmdb/list_instances.yml --extra-vars "objType=volume" \ + --extra-vars "params='condition={\"constraint\":[{\"simple\":{\"name\":\"dataStatus\",\"operator\":\"equal\",\"value\":\"normal\"}},{\"logOp\":\"and\",\"simple\":{\"name\":\"name\",\"operator\":\"contain\",\"value\":\"ansible\"}}]}'" + +ansible-playbook cmdb/list_instances.yml --extra-vars "objType=fcswitchport" \ + --extra-vars "params='condition={\"constraint\":[{\"simple\":{\"name\":\"dataStatus\",\"operator\":\"equal\",\"value\":\"normal\"}},{\"logOp\":\"and\",\"simple\":{\"name\":\"name\",\"operator\":\"equal\",\"value\":\"port0\"}}]}'" + +# Generated Parameters (can be overwritten): +# className: CI class Name, see ../global.yml to get supported className in INVENTORY.objType.className +# +# Examples: + +ansible-playbook cmdb/list_instances.yml --extra-vars "className=SYS_Lun" +``` + + +### 9.2 - Get Instance by ID +```shell +# Required Parameters: +# objType: object type name, see ../global.yml to get supported object types in INVENTORY +# instanceId: instance ID +# +# Examples: + +ansible-playbook cmdb/get_instance_by_id.yml --extra-vars "objType=volume instanceId=07C1C88199643614A4836E725C73F17D" + +# Generated Parameters (can be overwritten): +# className: CI class Name, see ../global.yml to get supported className in INVENTORY.objType.className +# +# Examples: + +ansible-playbook cmdb/get_instance_by_id.yml --extra-vars "className=SYS_Lun instanceId=07C1C88199643614A4836E725C73F17D" + +``` + +### 9.3 - List Relations +```shell +# Required Parameters: +# relationName: relation name, see ../global.yml to get supported relations in INVENTORY +# +# Examples: + +ansible-playbook cmdb/list_relations.yml --extra-vars "relationName=M_DjHostAttachedLun" + +# Optional Parameters: +# params: query parameters, see Examples +# export: export file path +# +# Examples: + +ansible-playbook cmdb/list_relations.yml --extra-vars "relationName=M_DjHostAttachedLun params='pageNo=1&pageSize=10'" + +ansible-playbook cmdb/list_relations.yml --extra-vars "relationName=M_DjHostAttachedLun params='condition=[{\"simple\":{\"name\":\"last_Modified\",\"operator\":\"greater%20than\",\"value\":\"1576938117968\"}}]'" + +ansible-playbook cmdb/list_relations.yml --extra-vars "relationName=M_DjHostAttachedLun export='volume-map.csv' sep='|'" +``` + +### 9.4 - Get Relation by ID +```shell +# Required Parameters: +# relationName: relation name, see ../global.yml to get supported relations in INVENTORY +# instanceId: instance ID +# +# Examples: + +ansible-playbook cmdb/get_relation_by_id.yml --extra-vars "relationName=M_DjHostAttachedLun instanceId=BF4D573E5E4C3072B679DE04F1D3742E" + +``` + +## 10 - Performance Monitor + +### 10.1 - List Object Types +```shell +ansible-playbook perf/list_obj_types.yml +``` + +### 10.2 - List Indicators +```shell +# Required Parameters: +# objType: object type name, see ../global.yml to get supported object types in INVENTORY +# +# Example: + +ansible-playbook 
perf/list_indicators.yml --extra-vars "objType=volume" + + +# Generated Parameters (can be overwritten): +# objTypeId: object type id, see ../global.yml to get supported object types in INVENTORY.objType.objTypeId +# +# Examples: + +ansible-playbook perf/list_indicators.yml --extra-vars "objTypeId='1125921381679104'" +``` + +### 10.3 - Show Indicators Detail +```shell +# Required Parameters: +# objType: object type name, see ../global.yml to get supported object types in INVENTORY +# indicators: a list of indicator names, see ../global.yml to get supported indicators in INVENTORY.objType.indicators +# +# Examples: + +ansible-playbook perf/show_indicators.yml --extra-vars "objType=volume indicators=['bandwidth','throughput','responseTime']" + +# Generated Parameters (can be overwritten): +# objTypeId: object type id, see ../global.yml to get supported object types in INVENTORY.objType.objTypeId +# indicatorIds: a list of indicator id, see ../global.yml to get supported indicators in INVENTORY.objType.indicators +# +# Examples: + +ansible-playbook perf/show_indicators.yml --extra-vars "objTypeId='1125921381679104'" \ + --extra-vars "indicatorIds=['1125921381744641','1125921381744642','1125921381744643']" +``` + + +### 10.4 - Get History Performance Data +```shell +# Required Parameters: +# objType: object type name, see ../global.yml to get supported object types in INVENTORY +# indicators: a list of indicator name, see ../global.yml to get supported indicators in INVENTORY.objType.indicators +# objName: object name (fuzzy) +# +# Examples: + +ansible-playbook perf/query_history_data.yml --extra-vars "objType=volume" \ + --extra-vars "indicators=['bandwidth','throughput','responseTime']" \ + --extra-vars "objName='DJ_AT_0000'" + +# Optional Parameters: +# endTime: epoch in seconds, default value is current time +# timeSpan: time range before endTime, default value is 1h (1 hour), supported unit: s,m,h,d,w,M,y +# +# Examples: + +ansible-playbook perf/query_history_data.yml --extra-vars "objType=volume" \ + --extra-vars "indicators=['bandwidth','throughput','responseTime']" \ + --extra-vars "objName='1113-001'" \ + --extra-vars "endTime=`date -d '2019-11-21 23:00:00' +%s` timeSpan=30m" + +# Generated Parameters (can be overwritten): +# beginTime: epoch in seconds, default value is endTime - timeSpan +# interval: sample rate enum: MINUTE/HOUR/DAY/WEEK/MONTH, default value is depend on timeSpan (<=1d: MINUTE, <=1w: HOUR, >1w: DAY) +# objTypeId: object type id, see ../global.yml to get supported object types in INVENTORY.objType.objTypeId +# objIds: a list object resId, use ../cmdb/list_instances.yml to get object resId +# indicatorIds: a list of indicator id, see ../global.yml to get supported indicators in INVENTORY.objType.indicators +# +# Examples: + +ansible-playbook perf/query_history_data.yml --extra-vars "objTypeId='1125921381679104'" \ + --extra-vars "objIds=['630EA7167C22383F965664860C5FAEEC']" \ + --extra-vars "indicatorIds=['1125921381744641','1125921381744642','1125921381744643']" \ + --extra-vars "endTime=`date -d '2019-11-21 23:00:00' +%s`" \ + --extra-vars "beginTime=`date -d '2019-11-21 22:30:00' +%s`" \ + --extra-vars "interval='MINUTE'" + +``` + + +## 11 - Dataset Actions + +### 11.1 - Flat Query Histogram Time Series Data +```shell + +# Required Parameters: +# dataSet: data set name, see ../global.yml to get supported data sets in INVENTORY.objType.dataset +# filterValues: filter values, default filter by object.name +# metrics: metrics, invoke show_data_set.yml to get the 
supported metrics +# +# Examples: + +# query last 1 hour data, filter by object.name +ansible-playbook dataset/flat_query_histogram.yml \ + --extra-vars "dataSet=perf-lun filterValues=['DJ_AT_0000','DJ_AT_0001'] metrics=['throughput','responseTime']" + + +# Optional Parameters: +# endTime: epoch in seconds, default value is current time +# timeSpan: time range before endTime, default value is 1h (1 hour), supported unit: s,m,h,d,w,M,y +# granularity: sample rate, default value is: auto, supported values: auto,1m,30m,1d +# filterDimension: filter dimension, default value is: object.name +# dimensions: a list of dimensions, default ['object.id','object.name'] +# agg: aggregate type, supported values: avg,max,min,sum +# pageNo: page NO., default 1 +# pageSize: page size, default 60 +# export: export file path +# sep: separator, default '|' +# +# Examples: + +# query last 1 hour data, multiple dimensions +ansible-playbook dataset/flat_query_histogram.yml \ + --extra-vars "dataSet=perf-lun filterValues=['1113-001','1113-1815'] metrics=['throughput','responseTime']" \ + --extra-vars "dimensions=['object.id','object.name']" + +# query specified data from timeSpan before endTime +ansible-playbook dataset/flat_query_histogram.yml \ + --extra-vars "dataSet=perf-lun endTime=`date -d '2019-11-21 23:00:00' +%s` timeSpan=30m granularity=30m" \ + --extra-vars "filterDimension=object.name filterValues=['1113-001','1113-1815']" \ + --extra-vars "dimensions=['object.id','object.name']" \ + --extra-vars "metrics=['throughput','responseTime'] agg=avg" \ + --extra-vars "pageNo=1 pageSize=120" \ + --extra-vars "export='perf-lun-last1h.csv' sep='|'" + +# Generated Parameters (can be overwritten): +# beginTime: epoch in seconds, default value is endTime - timeSpan +# +# Examples: + +# query specified data from beginTime to endTime +ansible-playbook dataset/flat_query_histogram.yml \ + --extra-vars "dataSet=perf-lun beginTime=`date -d '2019-11-21 22:30:00' +%s` endTime=`date -d '2019-11-21 23:00:00' +%s` granularity=1m" \ + --extra-vars "filterDimension=object.name filterValues=['1113-001','1113-1815']" \ + --extra-vars "dimensions=['object.id','object.name']" \ + --extra-vars "metrics=['throughput','responseTime'] agg=avg" \ + --extra-vars "pageNo=1 pageSize=120" +``` + +### 11.2 - Flat Queries +```shell + +# Required Parameters: +# dataset: data set name, see ../global.yml to get supported data sets in INVENTORY.objType.dataset +# query: query body, see examples +# +# Optional Parameters: +# pageNo: page NO., default 1 +# pageSize: page size, default 1000 +# export: export file path +# sep: separator, default '|' +# +# Examples: + +# volume performnace +ansible-playbook dataset/flat_query.yml --extra-vars @dataset/volume/volume-perf-flat.yml \ + --extra-vars "export='perf-lun-last1h.csv' sep='|'" + +# disk health +ansible-playbook dataset/flat_query.yml --extra-vars @dataset/disk/disk-health-flat.yml + +# fcswitchport performance +ansible-playbook dataset/flat_query.yml --extra-vars @dataset/fcswitchport/fcswitchport-perf-flat.yml + +# storage performance +ansible-playbook dataset/flat_query.yml --extra-vars @dataset/storage/storage-perf-flat.yml + +# storage capacity +ansible-playbook dataset/flat_query.yml --extra-vars @dataset/storage/storage-stat-flat.yml + +# pool performance +ansible-playbook dataset/flat_query.yml --extra-vars @dataset/pool/pool-perf-flat.yml + +# pool capacity +ansible-playbook dataset/flat_query.yml --extra-vars @dataset/pool/pool-stat-flat.yml + +# tier performance +ansible-playbook 
dataset/flat_query.yml --extra-vars @dataset/tier/tier-perf-flat.yml
+
+# tier statistics
+ansible-playbook dataset/flat_query.yml --extra-vars @dataset/tier/tier-stat-flat.yml
+```
+
+
+## 12 - Storage Actions
+
+### 12.1 - Sync Storage
+
+```shell
+# Required Parameters:
+# deviceName: storage device name, can be replaced with deviceId
+#
+# Examples:
+
+ansible-playbook storage/sync_storage.yml --extra-vars "deviceName='Storage-5500'"
+
+#
+# Generated Parameters (can be overwritten):
+# deviceId: storage device ID
+#
+# Examples:
+
+ansible-playbook storage/sync_storage.yml --extra-vars "deviceId='32fb302d-25cb-4e4b-83d6-03f03498a69b'"
+
+```
+
+## 13 - OceanStor Storage Actions
+
+The following playbooks are applicable to OceanStor V3, V5, Dorado V3 and Dorado V6 series storage.
+
+### 13.1 - Login Storage
+
+```shell
+# Include these login tasks before operating on the DeviceManager REST API
+#
+# Required Parameters:
+# deviceName: Storage device name defined in the ../global.yml STORAGES list, can be replaced with deviceSn
+#
+# Examples:
+
+  - import_tasks: login_storage.yml
+    vars:
+      deviceName: "Storage.11.150"
+
+# Optional Parameters:
+# deviceSn: Storage device SN defined in the ../global.yml STORAGES list
+#
+# Examples:
+
+  - import_tasks: login_storage.yml
+    vars:
+      deviceSn: "12323019876312325911"
+```
+
+### 13.2 - Check Volume Affinity
+
+```shell
+# Check volume affinity
+# Include these check tasks before local protection operations
+#
+# Required Parameters:
+# volumes: a list of volume names
+#
+# Outputs:
+# deviceSn: device SN
+# volumeIds: a list of volume IDs
+#
+# Examples:
+
+  - import_tasks: check_volume_affinity.yml
+    vars:
+      volumes: ["DJ_AT_0002", "DJ_AT_0003"]
+```
+
+### 13.3 - Check Volume Pairs
+
+```shell
+# Check data protection volume pairs
+# Include these check tasks before remote protection actions
+#
+# Required Parameters:
+# primaryVolumes: a list of primary volume names
+# secondaryVolumes: a list of secondary volume names, must contain the same number of volumes as primaryVolumes
+#
+# Outputs:
+# devicePair: a pair of device SNs: [primaryDeviceSN, secondaryDeviceSN]
+# volumePairs: a list of volume pairs: [[primaryVolumeId1, secondaryVolumeId1], [primaryVolumeId2, secondaryVolumeId2], ...]
+
+# Examples:
+
+  - import_tasks: check_volume_pairs.yml
+    vars:
+      primaryVolumes: ["DJ_AT_0002", "DJ_AT_0003"]
+      secondaryVolumes: ["DJ_BC_0002", "DJ_BC_0003"]
+```
+
+## 14 - OceanStor HyperMetro Actions
+
+The following playbooks are applicable to OceanStor V3, V5, Dorado V3 and Dorado V6 series storage.
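+
+Before running the HyperMetro playbooks below, the volume-pair check from section 13.3 can be imported to validate the pairs and resolve the device and volume IDs. A minimal sketch (the playbook location next to check_volume_pairs.yml and the ../../global.yml path are assumptions, and the volume names are the placeholders used throughout this section):
+
+```shell
+- name: HyperMetro pre-check (sketch)
+  hosts: localhost
+  gather_facts: no
+  vars_files:
+    - ../../global.yml
+  tasks:
+    - import_tasks: check_volume_pairs.yml
+      vars:
+        primaryVolumes: ["DJ_AT_0000", "DJ_AT_0001"]
+        secondaryVolumes: ["DJ_BC_0000", "DJ_BC_0001"]
+
+    # print the outputs documented in 13.3
+    - debug:
+        msg:
+          devicePair: "{{ devicePair }}"
+          volumePairs: "{{ volumePairs }}"
+```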
+
+### 14.1 - Create HyperMetro Consistency Group
+
+```shell
+# Required Parameters:
+# cgName: consistency group name
+# primaryVolumes: a list of primary volume names
+# secondaryVolumes: a list of secondary volume names, must contain the same number of volumes as primaryVolumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/create_hypermetro_cg.yml --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0000", "DJ_AT_0001"], "secondaryVolumes": ["DJ_BC_0000", "DJ_BC_0001"]}'
+
+# Optional Parameters:
+# syncSpeed: initial speed, default: 2, options: 1/low, 2/medium, 3/high, 4/highest
+
+ansible-playbook storage/oceanstor/create_hypermetro_cg.yml --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0000", "DJ_AT_0001"], "secondaryVolumes": ["DJ_BC_0000", "DJ_BC_0001"], "syncSpeed": 2}'
+
+```
+
+### 14.2 - Delete HyperMetro Consistency Group
+
+```shell
+# Required Parameters:
+# deviceName: storage device name, can be replaced with deviceSn
+# cgName: consistency group name
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/delete_hypermetro_cg.yml --extra-vars "deviceName='Storage1' cgName='cg1'"
+
+# Optional Parameters:
+# deviceSn: storage device SN
+# deletePairs: delete the pairs after removing them from the CG, default: yes, options: yes, no
+
+ansible-playbook storage/oceanstor/delete_hypermetro_cg.yml --extra-vars '{"deviceSn":"12323019876312325911", "cgName":"cg1", "deletePairs": no}'
+```
+
+### 14.3 - Add Volumes to HyperMetro Consistency Group
+
+```shell
+# Required Parameters:
+# cgName: consistency group name
+# primaryVolumes: a list of primary volume names
+# secondaryVolumes: a list of secondary volume names, must contain the same number of volumes as primaryVolumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/add_volumes_to_hypermetro_cg.yml --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"]}'
+
+```
+
+### 14.4 - Remove Volumes from HyperMetro Consistency Group
+
+```shell
+# Required Parameters:
+# cgName: consistency group name
+# primaryVolumes: a list of primary volume names
+# secondaryVolumes: a list of secondary volume names, must contain the same number of volumes as primaryVolumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/remove_volumes_from_hypermetro_cg.yml --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"]}'
+
+# Optional Parameters:
+# deletePairs: delete the pairs after removing them from the CG, default: yes, options: yes, no
+
+ansible-playbook storage/oceanstor/remove_volumes_from_hypermetro_cg.yml --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"], "deletePairs": no}'
+
+```
+
+## 15 - OceanStor Replication Actions
+
+The following playbooks are applicable to OceanStor V3, V5, Dorado V3 and Dorado V6 series storage.
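+
+A common end-to-end flow chains the playbooks below: create the replication consistency group first (15.1), then run the switchover (15.5) when a planned drill or failover is needed. A sketch reusing the placeholder values from the examples that follow:
+
+```shell
+# create an async replication consistency group, then switch it over later
+ansible-playbook storage/oceanstor/create_replication_cg.yml \
+    --extra-vars '{"cgName": "cg1", "mode": 2, "primaryVolumes": ["DJ_AT_0000"], "secondaryVolumes": ["DJ_BC_0000"]}'
+
+ansible-playbook storage/oceanstor/switchover_replication_cg.yml \
+    --extra-vars "deviceName='storage1' cgName='cg1'"
+```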
+
+### 15.1 - Create Replication Consistency Group
+
+```shell
+# Required Parameters:
+# cgName: consistency group name
+# primaryVolumes: a list of primary volume names
+# secondaryVolumes: a list of secondary volume names, must contain the same number of volumes as primaryVolumes
+# mode: replication mode, options: 1/sync, 2/async
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/create_replication_cg.yml --extra-vars '{"cgName": "cg1", "mode": 2, "primaryVolumes": ["DJ_AT_0000", "DJ_AT_0001"], "secondaryVolumes": ["DJ_BC_0000", "DJ_BC_0001"]}'
+#
+# Optional Parameters:
+# recoveryPolicy: recovery policy, default: 1, options: 1/automatic, 2/manual
+# syncSpeed: initial speed, default: 2, options: 1/low, 2/medium, 3/high, 4/highest
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/create_replication_cg.yml --extra-vars '{"cgName": "cg1", "mode": 2, "primaryVolumes": ["DJ_AT_0000", "DJ_AT_0001"], "secondaryVolumes": ["DJ_BC_0000", "DJ_BC_0001"]}' --extra-vars '{"recoverPolicy": 2, "syncSpeed": 4}'
+
+# Optional Parameters (async mode):
+# syncType: synchronization type for async replication, default: 3, options: 1/manual, 2/wait after last sync begins, 3/wait after last sync ends
+# interval: synchronization interval in seconds (when syncType is not manual), default: 600, options: 10 ~ 86400
+# compress: enable compression for async replication, default: false, options: true, false
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/create_replication_cg.yml --extra-vars '{"cgName": "cg1", "mode": 2, "primaryVolumes": ["DJ_AT_0000", "DJ_AT_0001"], "secondaryVolumes": ["DJ_BC_0000", "DJ_BC_0001"]}' --extra-vars '{"syncType": 2, "interval": 300, "compress": true}'
+
+# Optional Parameters (sync mode):
+# timeout: remote I/O timeout threshold in seconds, default: 10, options: 10~30, or set to 255 to disable timeout
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/create_replication_cg.yml --extra-vars '{"cgName": "cg1", "mode": 1, "primaryVolumes": ["DJ_AT_0000", "DJ_AT_0001"], "secondaryVolumes": ["DJ_BC_0000", "DJ_BC_0001"]}' --extra-vars '{"timeout": 30}'
+
+```
+
+### 15.2 - Delete Replication Consistency Group
+
+```shell
+# Required Parameters:
+# deviceName: storage device name, can be replaced with deviceSn
+# cgName: consistency group name
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/delete_replication_cg.yml --extra-vars "deviceName='storage1' cgName='cg1'"
+
+# Optional Parameters:
+# deviceSn: storage device SN
+# deletePairs: delete the pairs after removing them from the CG, default: yes, options: yes, no
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/delete_replication_cg.yml --extra-vars '{"deviceSn":"12323019876312325911", "cgName":"cg1", "deletePairs": no}'
+
+```
+
+### 15.3 - Add Volumes to Replication Consistency Group
+
+```shell
+# Required Parameters:
+# cgName: consistency group name
+# primaryVolumes: a list of primary volume names
+# secondaryVolumes: a list of secondary volume names, must contain the same number of volumes as primaryVolumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/add_volumes_to_replication_cg.yml --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"]}'
+
+```
+
+### 15.4 - Remove Volumes from Replication Consistency Group
+
+```shell
+# Required Parameters:
+# cgName: consistency group name
+# primaryVolumes: a list of primary volume names
+# secondaryVolumes: a list of secondary volume names, must contain the same number of volumes as primaryVolumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/remove_volumes_from_replication_cg.yml --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"]}'
+
+# Optional Parameters:
+# deletePairs: delete the pairs after removing them from the CG, default: yes, options: yes, no
+
+ansible-playbook storage/oceanstor/remove_volumes_from_replication_cg.yml --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"], "deletePairs": no}'
+
+```
+
+### 15.5 - Switchover Replication Consistency Group
+
+```shell
+# Required Parameters:
+# deviceName: storage device name, can be replaced with deviceSn
+# cgName: consistency group name
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/switchover_replication_cg.yml --extra-vars "deviceName='storage1' cgName='cg1'"
+
+# Optional Parameters:
+# deviceSn: storage device SN
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/switchover_replication_cg.yml --extra-vars "deviceSn='12323019876312325911' cgName='cg1'"
+
+```
+
+## 16 - OceanStor Dorado V6 Protection Group Actions
+
+The following playbooks are applicable to OceanStor Dorado V6 series storage.
+
+### 16.1 - Create Protection Group
+
+```shell
+# Required Parameters:
+# pgName: protection group name
+# volumes: a list of primary volumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/create_pg.yml --extra-vars '{"pgName": "pg1", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+
+```
+
+### 16.2 - Delete Protection Group
+
+```shell
+# Required Parameters:
+# deviceName: storage device name, can be replaced with deviceSn
+# pgName: protection group name
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/delete_pg.yml --extra-vars "deviceName='storage1' pgName='pg1'"
+
+# Optional Parameters:
+# deviceSn: storage device SN
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/delete_pg.yml --extra-vars "deviceSn='12323019876312325911' pgName='pg1'"
+
+```
+
+### 16.3 - Add Volumes to Protection Group
+
+```shell
+# Required Parameters:
+# pgName: protection group name
+# volumes: a list of primary volumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/add_volumes_to_pg.yml --extra-vars '{"pgName": "pg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"]}'
+
+```
+
+### 16.4 - Remove Volumes from Protection Group
+
+```shell
+# Required Parameters:
+# pgName: protection group name
+# volumes: a list of primary volumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/remove_volumes_from_pg.yml --extra-vars '{"pgName": "pg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"]}'
+```
+
+## 17 - OceanStor Dorado V6 Snapshot Consistency Group Actions
+
+The following playbooks are applicable to OceanStor Dorado V6 series storage.
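+
+The snapshot consistency group playbooks below operate on an existing protection group, so a group from section 16.1 is normally created first. A sketch combining the two steps with the placeholder values used in the examples below:
+
+```shell
+# create a protection group, then take a consistent snapshot of it
+ansible-playbook storage/oceanstor/dorado/create_pg.yml \
+    --extra-vars '{"pgName": "pg1", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+
+ansible-playbook storage/oceanstor/dorado/create_snapshot_cg.yml \
+    --extra-vars "deviceName='storage1' pgName='pg1'"
+```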
+
+### 17.1 - Create Snapshot Consistency Group
+
+```shell
+# Required Parameters:
+# deviceName: storage device name, can be replaced with deviceSn
+# pgName: protection group name
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/create_snapshot_cg.yml --extra-vars "deviceName='storage1' pgName='pg1'"
+
+# Optional Parameters:
+# deviceSn: storage device SN
+# cgName: snapshot consistency group name, default: pgName_YYYYMMDDHH24MISS
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/create_snapshot_cg.yml --extra-vars "deviceSn='12323019876312325911' pgName='pg1' cgName='pg1_20200204'"
+
+```
+
+### 17.2 - Delete Snapshot Consistency Group
+
+```shell
+# Required Parameters:
+# deviceName: storage device name, can be replaced with deviceSn
+# cgName: snapshot consistency group name
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/delete_snapshot_cg.yml --extra-vars "deviceName='storage1' cgName='pg1_20200204'"
+
+# Optional Parameters:
+# deviceSn: storage device SN
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/delete_snapshot_cg.yml --extra-vars "deviceSn='12323019876312325911' cgName='pg1_20200204'"
+
+```
+
+### 17.3 - Reactivate Snapshot Consistency Group
+
+```shell
+# Required Parameters:
+# deviceName: storage device name, can be replaced with deviceSn
+# cgName: snapshot consistency group name
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/reactivate_snapshot_cg.yml --extra-vars "deviceName='storage1' cgName='pg1_20200204'"
+
+# Optional Parameters:
+# deviceSn: storage device SN
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/dorado/reactivate_snapshot_cg.yml --extra-vars "deviceSn='12323019876312325911' cgName='pg1_20200204'"
+
+```
+
+## 18 - OceanStor Snapshot Actions
+
+The following playbooks are applicable to OceanStor V3, V5, Dorado V3 and Dorado V6 series storage.
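+
+The create/deactivate/activate/delete playbooks below are tied together by the snapshot name suffix. A sketch of a simple lifecycle, creating snapshots with a fixed suffix and later deleting them by the same suffix (values taken from the examples below):
+
+```shell
+ansible-playbook storage/oceanstor/create_snapshots.yml \
+    --extra-vars '{"suffix": "20200204", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+
+ansible-playbook storage/oceanstor/delete_snapshots.yml \
+    --extra-vars '{"suffix": "20200204", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+```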
+
+### 18.1 - Create Snapshots
+
+```shell
+# Required Parameters:
+# volumes: a list of primary volumes
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/create_snapshots.yml --extra-vars '{"volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+
+
+# Optional Parameters:
+# suffix: snapshot name suffix, default: volumeName_yyyymmddThhmiss
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/create_snapshots.yml --extra-vars '{"suffix": "20200204", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+
+```
+
+### 18.2 - Delete Snapshots
+
+```shell
+# Required Parameters:
+# volumes: a list of primary volumes, can be replaced with snapshots
+# suffix: snapshot name suffix
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/delete_snapshots.yml --extra-vars '{"suffix": "20200204", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+
+
+# Generated Parameters (can be overwritten):
+# deviceSn: storage device SN
+# snapshots: a list of snapshot names
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/delete_snapshots.yml --extra-vars '{"deviceSn": "12323019876312325911", "snapshots": ["DJ_AT_0000_20200204T232229", "DJ_AT_0001_20200204T232229"]}'
+
+```
+
+### 18.3 - Deactivate Snapshots
+
+```shell
+# Required Parameters:
+# volumes: a list of primary volumes, can be replaced with snapshots
+# suffix: snapshot name suffix
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/deactivate_snapshots.yml --extra-vars '{"suffix": "20200204", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+
+
+# Generated Parameters (can be overwritten):
+# deviceSn: storage device SN
+# snapshots: a list of snapshot names
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/deactivate_snapshots.yml --extra-vars '{"deviceSn": "12323019876312325911", "snapshots": ["DJ_AT_0000_20200204T232229", "DJ_AT_0001_20200204T232229"]}'
+
+```
+
+### 18.4 - Activate Snapshots (Consistent)
+
+```shell
+# Required Parameters:
+# volumes: a list of primary volumes, can be replaced with snapshots
+# suffix: snapshot name suffix
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/activate_snapshots.yml --extra-vars '{"suffix": "20200204", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}'
+
+
+# Generated Parameters (can be overwritten):
+# deviceSn: storage device SN
+# snapshots: a list of snapshot names
+#
+# Examples:
+
+ansible-playbook storage/oceanstor/activate_snapshots.yml --extra-vars '{"deviceSn": "12323019876312325911", "snapshots": ["DJ_AT_0000_20200204T232229", "DJ_AT_0001_20200204T232229"]}'
+
+```
+
+## 19 - OceanStor Dorado V6 Clone Consistency Group Actions
+
+The following playbooks are applicable to OceanStor Dorado V6 series storage.
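+
+Like snapshot consistency groups, the clone consistency group playbooks below start from a protection group (section 16.1). One possible flow is to create the clone group without an immediate sync and trigger the sync on demand; a sketch with the placeholder names used in the examples below:
+
+```shell
+# create the clone CG without syncing, then sync it later
+ansible-playbook storage/oceanstor/dorado/create_clone_cg.yml \
+    --extra-vars '{"deviceName": "storage1", "pgName": "pg1", "cgName": "cg1", "sync": no}'
+
+ansible-playbook storage/oceanstor/dorado/sync_clone_cg.yml \
+    --extra-vars "deviceName='storage1' cgName='cg1'"
+```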
+ +### 19.1 - Create Clone Consistency Group + +```shell +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# pgName: protection group name +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/create_clone_cg.yml --extra-vars "deviceName='storage1' pgName='pg1'" + +# Optional Parameters: +# deviceSn: storage device SN +# cgName: clone consistency group name, default: pgName_yyyymmddThhmiss +# sync: whether to sync immediately, default: yes, options: yes, no +# syncSpeed: sync speed, default: 2, options: 1:low, 2:medium, 3:high, 4:highest +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/create_clone_cg.yml --extra-vars '{"deviceSn": "21023598258765432076", "pgName": "pg1", "cgName": "cg1", "sync": yes, "syncSpeed": 4}' + +``` + +### 19.2 - Delete Clone Consistency Group + +```shell +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# cgName: clone consistency group name +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/delete_clone_cg.yml --extra-vars "deviceName='storage1' cgName='cg1'" + +# Optional Parameters: +# deviceSn: storage device SN +# deleteReplica: delete replica LUN, default: no, options: yes, no +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/delete_clone_cg.yml --extra-vars '{"deviceSn": "21023598258765432076", "cgName": "cg1", "deleteReplica": yes}' +``` + +### 19.3 - Sync Clone Consistency Group + +```shell +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# cgName: clone consistency group name +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/sync_clone_cg.yml --extra-vars "deviceName='storage1' cgName='cg1'" + +# Optional Parameters: +# deviceSn: storage device SN +# waitSync: wait until sync complete, default: no, options: yes, no +# syncSpeed: sync speed, options: 1:low, 2:medium, 3:high, 4:highest +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/sync_clone_cg.yml --extra-vars '{"deviceSn":"21023598258765432076", "cgName":"cg1", "waitSync": yes, "syncSpeed": 4}' +``` + +### 19.4 - Add Volumes to Clone Consistency Group + +```shell +# Required Parameters: +# cgName: consistency group name +# volumes: a list of volume names +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/add_volumes_to_clone_cg.yml --extra-vars '{"cgName": "cg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"]}' + +# Generated Parameters (can be overwritten) +# suffix: clone LUN name suffix, default: volumeName_yyyymmddThhmiss +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/add_volumes_to_clone_cg.yml --extra-vars '{"cgName": "cg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"], "suffix": "20200205" }' +``` + +### 19.5 - Remove Volumes from Clone Consistency Group + +```shell +# Required Parameters: +# cgName: consistency group name +# volumes: a list of volume names +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/remove_volumes_from_clone_cg.yml --extra-vars '{"cgName": "cg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"]}' + +# Optional Parameters: +# deletePairs: delete pairs after remove from CG, default: yes, options: yes, no +# deleteReplica: delete replica LUN, default: no, options: yes, no +# +# Examples: + +ansible-playbook storage/oceanstor/dorado/remove_volumes_from_clone_cg.yml --extra-vars '{"cgName": "cg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"], "deletePairs": yes, "deleteReplica": yes}' +``` \ No newline at end of file diff --git a/playbook/az/get_az_by_name.yml 
b/playbook/az/get_az_by_name.yml new file mode 100644 index 0000000..489528f --- /dev/null +++ b/playbook/az/get_az_by_name.yml @@ -0,0 +1,41 @@ +--- + +# Required Parameters: +# azName: availability zone name +# +# Examples: +# --extra-vars "azName='room1'" +# +- name: Get AZ by name + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get AZ by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.azs }}?az_name={{azName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: AZ + + - name: Check AZ + vars: + query: "[?name=='{{ azName }}']" + debug: + msg: "No matched AZ: '{{ azName }}'" + when: AZ.json.az_list | json_query(query) | length < 1 + + - name: Show AZ + vars: + query: "[?name=='{{ azName }}']" + debug: + msg: "{{ AZ.json.az_list | json_query(query) }}" + when: AZ.json.az_list | json_query(query) | length >= 1 diff --git a/playbook/az/list_azs.yml b/playbook/az/list_azs.yml new file mode 100644 index 0000000..54cdb8c --- /dev/null +++ b/playbook/az/list_azs.yml @@ -0,0 +1,64 @@ +--- + +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: name +# sortDir: sort direction, default: asc, options: desc, asc +# azName: availability zone name +# +# Examples: +# --extra-vars "azName='room' sortKey='name' sortDir='desc'" +# +- name: List AZs + hosts: localhost + vars: + pageNo: 1 + pageSize: 10 + params: "{{'limit=' + pageSize|string + '&start=' + (pageSize|int * (pageNo|int - 1) + 1) | string }}" + sortDir: asc + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Set params - sortKey & sortDir + set_fact: + params: "{{ params + '&sort_key=' + sortKey + '&sort_dir=' + sortDir }}" + when: + - sortKey is defined + + - name: Set params - azName + set_fact: + params: "{{ params + '&az_name=' + azName|urlencode }}" + when: + - azName is defined + + - name: Show Param + debug: + msg: "{{params}}" + + - name: List AZs + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.azs }}?{{params}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: AZs + + - name: Show AZs + vars: + objList: "{{ AZs.json.az_list }}" + totalNum: "{{AZs.json.total}}" + sortDesc: "{{ 'True' if sortDir == 'desc' else 'False' }}" + debug: + msg: + objList: "{{ ( objList | sort(attribute=sortKey,reverse=sortDesc) ) if sortKey is defined else ( objList | sort(reverse=sortDesc) ) }}" + totalNum: "{{ totalNum }}" + pageSize: "{{ pageSize }}" + pageNo: "{{ pageNo }}" \ No newline at end of file diff --git a/playbook/cmdb/get_instance_by_id.yml b/playbook/cmdb/get_instance_by_id.yml new file mode 100644 index 0000000..21c292f --- /dev/null +++ b/playbook/cmdb/get_instance_by_id.yml @@ -0,0 +1,39 @@ +--- +# Required Parameters: +# objType: object type name, see ../global.yml to get supported object types in INVENTORY +# instanceId: instance ID +# +# Examples: +# --extra-vars "objType=volume instanceId=07C1C88199643614A4836E725C73F17D" + +# Generated Parameters (can be overwritten): +# className: CI class Name, see ../global.yml to get supported className in INVENTORY.objType.className +# +# Examples: +# --extra-vars "className=SYS_Lun 
instanceId=07C1C88199643614A4836E725C73F17D" + +- name: GET Instance by ID + hosts: localhost + vars: + className: "{{ INVENTORY[objType].className }}" # map objType to className + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Instance + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.instances}}/{{className}}/{{instanceId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: INSTANCE + + - name: Show Instance + debug: + msg: "{{ INSTANCE.json }}" diff --git a/playbook/cmdb/get_relation_by_id.yml b/playbook/cmdb/get_relation_by_id.yml new file mode 100644 index 0000000..c4bd706 --- /dev/null +++ b/playbook/cmdb/get_relation_by_id.yml @@ -0,0 +1,25 @@ +--- + +- name: GET Relation by ID + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Relation + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.relations}}/{{relationName}}/instances/{{instanceId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: RELATION + + - name: Show Relation + debug: + msg: "{{ RELATION.json }}" diff --git a/playbook/cmdb/list_instances.yml b/playbook/cmdb/list_instances.yml new file mode 100644 index 0000000..f347185 --- /dev/null +++ b/playbook/cmdb/list_instances.yml @@ -0,0 +1,61 @@ +--- +# Required Parameters: +# objType: object type name, see ../global.yml to get supported object types in INVENTORY +# +# Examples: +# --extra-vars "objType=volume" +# +# Optional Parameters: +# params: query parameters, see Examples +# export: export file path +# sep: separator, default '|' +# +# Examples: +# --extra-vars "params='pageNo=1&pageSize=10'" +# --extra-vars "params='condition={\"constraint\":[{\"simple\":{\"name\":\"dataStatus\",\"operator\":\"equal\",\"value\":\"normal\"}},{\"logOp\":\"and\",\"simple\":{\"name\":\"name\",\"operator\":\"contain\",\"value\":\"ansible\"}}]}'" +# --extra-vars "export='volumes.csv' sep='|'" +# +# Generated Parameters (can be overwritten): +# className: CI class Name, see ../global.yml to get supported className in INVENTORY.objType.className +# +# Examples: +# --extra-vars "className=SYS_Lun" +# +- name: List Instances + hosts: localhost + vars_files: + - ../global.yml + vars: + params: 'pageNo=1&pageSize=10' + className: "{{ INVENTORY[objType].className }}" # map objType to className + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: List Instances + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.instances}}/{{className}}?{{params|replace(' ','%20')}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: INSTANCES + + - name: Show Instances + debug: + msg: "{{ INSTANCES.json }}" + when: export is not defined + + - import_tasks: ../util/json2csv.yml + vars: + data: "{{INSTANCES.json.objList}}" + keys: "{{INVENTORY[objType].attributes}}" + file: "{{export}}" + when: + - export is defined + - INSTANCES.json.objList is defined + - INSTANCES.json.objList|length > 0 + diff --git a/playbook/cmdb/list_relations.yml b/playbook/cmdb/list_relations.yml new file mode 100644 index 0000000..37f7edf --- /dev/null +++ 
b/playbook/cmdb/list_relations.yml @@ -0,0 +1,57 @@ +--- +# Required Parameters: +# relationName: relation name, see ../global.yml to get supported relations in INVENTORY +# +# Examples: +# --extra-vars "relationName=M_DjHostAttachedLun" +# +# Optional Parameters: +# params: query parameters, see Examples +# export: export file path +# sep: separator, default '|' +# +# Examples: +# --extra-vars "params='pageNo=1&pageSize=10'" +# --extra-vars "params='condition={\"constraint\":[{\"simple\":{\"name\":\"last_Modified\",\"operator\":\"greater%20than\",\"value\":\"1576938117968\"}}]}'" +# --extra-vars "export='volume-map.csv' sep='|'" + +- name: List Relations + hosts: localhost + vars: + params: 'pageNo=1&pageSize=10' + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: List Relations + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.relations}}/{{relationName}}/instances?{{params}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: RELATIONS + + - name: Show Relations + debug: + msg: "{{ RELATIONS.json }}" + when: export is not defined + + - import_tasks: ../util/json2csv.yml + vars: + data: "{{RELATIONS.json.objList}}" + keys: + - id + - last_Modified + - source_Instance_Id + - target_Instance_Id + file: "{{export}}" + when: + - export is defined + - RELATIONS.json.objList is defined + - RELATIONS.json.objList|length > 0 diff --git a/playbook/dataset/disk/disk-health-flat.yml b/playbook/dataset/disk/disk-health-flat.yml new file mode 100644 index 0000000..20e6c0d --- /dev/null +++ b/playbook/dataset/disk/disk-health-flat.yml @@ -0,0 +1,23 @@ +--- + +dummy: + +dataset: "stat-storage-disk" +query: + timeRange: + beginTime: 1574179200000 + endTime: 1574665590000 + granularity: "auto" + filters: + dimensions: + - field: "dimensions.object.name" + values: + - "DAE001.3" + dimensions: + - field: "dimensions.object.name" + index: 1 + - field: "timestamp" + index: 2 + metrics: + - field: "metrics.healthScore" + aggType: "avg" diff --git a/playbook/dataset/fcswitchport/fcswitchport-perf-flat.yml b/playbook/dataset/fcswitchport/fcswitchport-perf-flat.yml new file mode 100644 index 0000000..81be3ee --- /dev/null +++ b/playbook/dataset/fcswitchport/fcswitchport-perf-flat.yml @@ -0,0 +1,32 @@ +--- + +dummy: + +dataset: "perf-fcswitch-port" +query: + timeRange: + beginTime: 1574179200000 + endTime: 1574665590000 + granularity: "auto" + filters: + dimensions: + - field: "dimensions.object.name" + values: + - "port0" + - "port2" + dimensions: + - field: "dimensions.object.name" + index: 1 + - field: "dimensions.port.wwn" + index: 2 + - field: "timestamp" + index: 3 + metrics: + - field: "metrics.bandwidth" + aggType: "sum" + - field: "metrics.utility" + aggType: "avg" + - field: "metrics.error" + aggType: "sum" + - field: "metrics.bbCreditZero" + aggType: "sum" diff --git a/playbook/dataset/flat_query.yml b/playbook/dataset/flat_query.yml new file mode 100644 index 0000000..29e552c --- /dev/null +++ b/playbook/dataset/flat_query.yml @@ -0,0 +1,45 @@ +--- + +- name: Flat Query Dataset + hosts: localhost + vars: + pageNo: 1 + pageSize: 1000 + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Show Query + debug: + msg: "{{ query }}" + + - name: Query Dataset + uri: + url: 
"https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.datasets}}/{{dataset}}?pageNo={{pageNo}}&pageSize={{pageSize}}" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: "{{ query }}" + register: RESULT + + - name: Show Result + debug: + msg: "{{ RESULT.json }}" + when: export is not defined + + - import_tasks: ../util/json2csv.yml + vars: + data: "{{ RESULT.json.datas }}" + keys: "{{ RESULT.json.datas[0] | dict2items | json_query('[*].key') }}" + file: "{{ export }}" + when: + - export is defined + - RESULT.json.datas is defined + - RESULT.json.datas|length > 0 diff --git a/playbook/dataset/flat_query_histogram.yml b/playbook/dataset/flat_query_histogram.yml new file mode 100644 index 0000000..51e0c45 --- /dev/null +++ b/playbook/dataset/flat_query_histogram.yml @@ -0,0 +1,114 @@ +--- + +# Required Parameters: +# dataSet: data set name, see ../global.yml to get supported data sets in INVENTORY.objType.dataset +# filterValues: filter values, default filter by object.name +# metrics: metrics, see ../global.yml to find metrics +# +# Examples: +# --extra-vars "dataSet=perf-lun filterValues=['1113-001','1113-1815'] metrics=['throughput','responseTime']" + +# Optional Parameters: +# endTime: epoch in seconds, default value is current time +# timeSpan: time range before endTime, default value is 1h (1 hour), supported unit: s,m,h,d,w,M,y +# granularity: sample rate, default value is: auto, supported values: auto,1m,30m,1d +# filterDimension: filter dimension, default value is: object.name +# dimensions: a list of dimensions, default ['object.id','object.name'] +# agg: aggregate type, supported values: avg,max,min,sum +# pageNo: page NO., default 1 +# pageSize: page size, default 60 +# export: export file path +# sep: separator, default '|' +# +# Examples: +# --extra-vars "endTime=`date -d '2019-11-21 23:00:00' +%s` timeSpan=30m granularity=1m" \ +# --extra-vars "filterDimension=object.name filterValues=['1113-001','1113-1815']" \ +# --extra-vars "dimensions=['object.id','object.name']" \ +# --extra-vars "metrics=['throughput','responseTime'] agg=avg" \ +# --extra-vars "pageNo=1 pageSize=120" +# --extra-vars "export='perf-lun-last1h.csv' sep='|'" + +- name: Query Histogram Time Series data + hosts: localhost + vars: + pageNo: 1 + pageSize: 60 + granularity: "auto" # default auto, valid values: auto, 1m, 30m, 1d + filterDimension: "object.name" # default object name + agg: "avg" # default average, valid values: min, avg, max, sum + endTime: "{{ansible_date_time.epoch}}" # default current epoch (seconds), use `data +%s` to get current unix time + timeSpan: "1h" # default last 1 hour, can be s,m,h,d,w,M,y + dimensions: ['object.id','object.name'] # dimensions + + + # Generated Parameters (can be overwritten): + # beginTime: epoch in seconds, default value is endTime - timeSpan + # + # Examples: + # --extra-vars "beginTime=`date -d '2019-11-20 23:00:00' +%s`" + + unit: "{{ timeSpan[-1] if timeSpan is regex('^[0-9]+[s|m|h|d|w|M|y]$') else 's' }}" # default unit set to seconds + seconds: + s: 1 # second + m: 60 # minute + h: 3600 # hour + d: "{{ 3600 * 24 }}" # day + w: "{{ 3600 * 24 * 7 }}" # week + M: "{{ 3600 * 24 * 30 }}" # month + y: "{{ 3600 * 24 * 365 }}" # year + + beginTime: "{{ endTime|int - timeSpan|replace(unit,'')|int * seconds[unit]|int }}" # default timeSpan before current epoch (seconds) + vars_files: + - ../global.yml + gather_facts: yes + become: no + tasks: + - 
import_tasks: ../user/login.yml + + - name: Generate Indexed Dimensions + set_fact: + indexed_dimensions: "{{ indexed_dimensions|default([{ 'field': 'timestamp', 'index': 1 }]) + [{ 'field': 'dimensions.' + item.1, 'index': item.0 | int + 2 }] }}" + with_indexed_items: "{{ dimensions }}" + + - name: Generate Aggregated Metrics + set_fact: + aggregated_metrics: "{{ aggregated_metrics|default([]) + [{ 'field': 'metrics.' + item, 'aggType': agg }] }}" + with_items: "{{ metrics }}" + + - name: Post Query + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.datasets}}/{{dataSet}}?pageNo={{pageNo}}&pageSize={{pageSize}}" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + timeRange: + beginTime: "{{ beginTime }}000" + endTime: "{{ endTime }}000" + granularity: "{{ granularity }}" + filters: + dimensions: + - field: "dimensions.{{ filterDimension }}" + values: "{{ filterValues }}" + dimensions: "{{ indexed_dimensions }}" + metrics: "{{ aggregated_metrics }}" + register: RESULT + + - name: Show Result + debug: + msg: "{{ RESULT.json }}" + when: export is not defined + + - import_tasks: ../util/json2csv.yml + vars: + data: "{{ RESULT.json.datas }}" + keys: "{{ RESULT.json.datas[0] | dict2items | json_query('[*].key') }}" + file: "{{export}}" + when: + - export is defined + - RESULT.json.datas is defined + - RESULT.json.datas|length > 0 \ No newline at end of file diff --git a/playbook/dataset/pool/pool-perf-flat.yml b/playbook/dataset/pool/pool-perf-flat.yml new file mode 100644 index 0000000..4f55284 --- /dev/null +++ b/playbook/dataset/pool/pool-perf-flat.yml @@ -0,0 +1,27 @@ +--- + +dummy: + +dataset: "perf-storage-pool" +query: + timeRange: + beginTime: 1574265600000 + endTime: 1574651742000 + granularity: "auto" + filters: + dimensions: + - field: "dimensions.object.name" + values: + - "pool" + dimensions: + - field: "dimensions.object.name" + index: 1 + - field: "timestamp" + index: 2 + metrics: + - field: "metrics.throughput" + aggType: "sum" + - field: "metrics.bandwidth" + aggType: "sum" + - field: "metrics.responseTime" + aggType: "avg" diff --git a/playbook/dataset/pool/pool-stat-flat.yml b/playbook/dataset/pool/pool-stat-flat.yml new file mode 100644 index 0000000..6a0a2b1 --- /dev/null +++ b/playbook/dataset/pool/pool-stat-flat.yml @@ -0,0 +1,39 @@ +--- + +dummy: + +dataset: "stat-storage-pool" +query: + timeRange: + beginTime: 1574265600000 + endTime: 1574651742000 + granularity: "auto" + filters: + dimensions: + - field: "dimensions.object.name" + values: + - "pool" + dimensions: + - field: "dimensions.object.name" + index: 1 + - field: "timestamp" + index: 2 + metrics: + - field: "metrics.totalCapacity" + aggType: "sum" + - field: "metrics.usedCapacity" + aggType: "sum" + - field: "metrics.compressedCapacity" + aggType: "sum" + - field: "metrics.dedupedCapacity" + aggType: "sum" + - field: "metrics.subscribedCapacity" + aggType: "sum" + - field: "metrics.protectionCapacity" + aggType: "sum" + - field: "metrics.tier0Capacity" + aggType: "sum" + - field: "metrics.tier1Capacity" + aggType: "sum" + - field: "metrics.tier2Capacity" + aggType: "sum" diff --git a/playbook/dataset/storage/storage-perf-flat.yml b/playbook/dataset/storage/storage-perf-flat.yml new file mode 100644 index 0000000..f80532f --- /dev/null +++ b/playbook/dataset/storage/storage-perf-flat.yml @@ -0,0 +1,27 @@ +--- + +dummy: + +dataset: "perf-storage-device" +query: + timeRange: + 
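+    # Note: this file appears to be a ready-made variable file for ../flat_query.yml,
+    # loaded as extra vars, e.g. (illustrative):
+    #   ansible-playbook dataset/flat_query.yml -e @dataset/pool/pool-perf-flat.yml
+    # beginTime/endTime below are epoch timestamps in milliseconds; one way to build them:
+    #   echo $(( $(date -d '2019-11-21 00:00:00' +%s) * 1000 ))
+    # (flat_query_histogram.yml does the equivalent by appending "000" to a seconds epoch)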
beginTime: 1574265600000 + endTime: 1574651742000 + granularity: "auto" + filters: + dimensions: + - field: "dimensions.device.ipAddress" + values: + - "8.46.186.15" + dimensions: + - field: "dimensions.object.name" + index: 1 + - field: "timestamp" + index: 2 + metrics: + - field: "metrics.throughput" + aggType: "sum" + - field: "metrics.bandwidth" + aggType: "sum" + - field: "metrics.responseTime" + aggType: "avg" diff --git a/playbook/dataset/storage/storage-stat-flat.yml b/playbook/dataset/storage/storage-stat-flat.yml new file mode 100644 index 0000000..e6cd53b --- /dev/null +++ b/playbook/dataset/storage/storage-stat-flat.yml @@ -0,0 +1,27 @@ +--- + +dummy: + +dataset: "stat-storage-device" +query: + timeRange: + beginTime: 1574265600000 + endTime: 1574651742000 + granularity: "auto" + filters: + dimensions: + - field: "dimensions.device.ipAddress" + values: + - "8.46.186.15" + dimensions: + - field: "dimensions.object.name" + index: 1 + - field: "timestamp" + index: 2 + metrics: + - field: "metrics.totalCapacity" + aggType: "sum" + - field: "metrics.usedCapacity" + aggType: "sum" + - field: "metrics.freeDisksCapacity" + aggType: "sum" diff --git a/playbook/dataset/tier/tier-perf-flat.yml b/playbook/dataset/tier/tier-perf-flat.yml new file mode 100644 index 0000000..788a776 --- /dev/null +++ b/playbook/dataset/tier/tier-perf-flat.yml @@ -0,0 +1,27 @@ +--- + +dummy: + +dataset: "perf-lun" +query: + timeRange: + beginTime: 1574265600000 + endTime: 1574651742000 + granularity: "auto" + filters: + dimensions: + - field: "dimensions.lun.tier" + values: + - "Gold" + dimensions: + - field: "dimensions.lun.tier" + index: 1 + - field: "timestamp" + index: 2 + metrics: + - field: "metrics.throughput" + aggType: "sum" + - field: "metrics.bandwidth" + aggType: "sum" + - field: "metrics.responseTime" + aggType: "max" diff --git a/playbook/dataset/tier/tier-stat-flat.yml b/playbook/dataset/tier/tier-stat-flat.yml new file mode 100644 index 0000000..83a2e3b --- /dev/null +++ b/playbook/dataset/tier/tier-stat-flat.yml @@ -0,0 +1,25 @@ +--- + +dummy: + +dataset: "stat-lun" +query: + timeRange: + beginTime: 1574265600000 + endTime: 1574651742000 + granularity: "30m" + filters: + dimensions: + - field: "dimensions.lun.tier" + values: + - "Gold" + dimensions: + - field: "dimensions.lun.tier" + index: 1 + - field: "timestamp" + index: 2 + metrics: + - field: "metrics.totalCapacity" + aggType: "sum" + - field: "metrics.allocCapacity" + aggType: "sum" diff --git a/playbook/dataset/volume/volume-perf-flat.yml b/playbook/dataset/volume/volume-perf-flat.yml new file mode 100644 index 0000000..3874956 --- /dev/null +++ b/playbook/dataset/volume/volume-perf-flat.yml @@ -0,0 +1,28 @@ +--- + +dummy: + +dataset: "perf-lun" +query: + timeRange: + beginTime: 1581422400000 + endTime: 1581426235000 + granularity: "auto" + filters: + dimensions: + - field: "dimensions.object.name" + values: + - "DJ_AT_0000" + - "DJ_AT_0001" + dimensions: + - field: "dimensions.object.name" + index: 1 + - field: "timestamp" + index: 2 + metrics: + - field: "metrics.throughput" + aggType: "sum" + - field: "metrics.bandwidth" + aggType: "sum" + - field: "metrics.responseTime" + aggType: "avg" diff --git a/playbook/global.yml b/playbook/global.yml new file mode 100644 index 0000000..1c20e5b --- /dev/null +++ b/playbook/global.yml @@ -0,0 +1,1079 @@ +--- + +dummy: + +BASE_DIR: ~/ansible/playbook # playbook base directory + +DJ: + host: 192.168.2.10 # DJ host name or ip address + port: 26335 # DJ northbond api port, default: 26335 + user: 
nbiuser # DJ user name, the user type must be 'Third-party user', the role 'NBI User Group' must be assigned to the user + pswd: xxxxx # DJ user password + lang: en_US # DJ language setting, options: zh_CN, en_US + token: # do not change this, user/login.yml will automaticly update this when login success + +STORAGES: + - name: storage1 # Storage device name + sn: "12345678901234567890" # Storage device SN + ipList: # Storage management IP addresses + - 192.168.2.11 + - 192.168.2.12 + port: 8088 # Storage DeviceManager port, default: 8088 + user: admin # Storage user name + pswd: xxxxx # Storage user password + +URI: + sessions: plat/smapp/v1/sessions + tasks: taskmgmt/v1/tasks + projects: projectmgmt/v1/projects + azs: azmgmt/v1/availability-zones + tiers: service-policy/v1/service-levels + hosts: hostmgmt/v1/hosts + hostgroups: hostmgmt/v1/hostgroups + volumes: blockservice/v1/volumes + storages: storagemgmt/v1/storages + instances: resourcedb/v1/instances + relations: resourcedb/v1/relations + perfmgr: metrics/v1/mgr-svc + perfdata: metrics/v1/data-svc + datasets: metrics/v1/datasets + +INVENTORY: + az: + className: SYS_DjAz + attributes: + - id + - last_Modified + - creatTime + - name + relations: + storage: M_DjAzContainsStorDevice + host: M_DjAzContainsDjHost + hostgroup: M_DjAzContainsDjHostGroup + fabric: M_DjAzContainsFabric + + project: + className: SYS_DjProject + attributes: + - id + - last_Modified + - creatTime + - name + - remark + - resourceGroupId + relations: + volume: M_DjProjectContainsLun + host: M_DjProjectContainsDjHost + hostgroup: M_DjProjectContainsDjHostGroup + + tier: + className: SYS_DjTier + attributes: + - id + - last_Modified + - nativeId + - name + - type + - poolTotalCapacity + - poolUsedCapacity + relations: + pool: M_DjTierContainsStoragePool + storageport: M_DjTierContainsStoragePort + volume: M_DjTierContainsLun + objTypeId: 1126174784749568 + indicators: { + "bandwidthTiB": "1126174784815118", + "maxResponseTime": "1126174784815111", + "readBandwidth": "1126174784815107", + "readHitRatio": "1126174784815115", + "readRatio": "1126174784815114", + "readResponseTime": "1126174784815109", + "readSize": "1126174784815112", + "readThroughput": "1126174784815105", + "throughputTiB": "1126174784815117", + "writeBandwidth": "1126174784815108", + "writeHitRatio": "1126174784815116", + "writeResponseTime": "1126174784815110", + "writeSize": "1126174784815113", + "writeThroughput": "1126174784815106" + } + dimensions: + - object.id + - object.name + - object.nativeId + performance: + dataset: perf-tier + metrics: + - bandwidthTiB + - responseTime + - throughputTiB + - readBandwidth + - writeBandwidth + - readThroughput + - writeThroughput + - readHitRatio + - writeHitRatio + - readSize + - writeSize + - maxResponseTime + - readResponseTime + - writeResponseTime + - readRatio + + host: + className: SYS_DjHost + attributes: + - id + - nativeId + - last_Modified + - name + - status + - type + - accessMode + - ipAddress + - version + - ultraPathVersion + - nativeMultiPathVersion + - djProjectId + relations: + hostini: M_DjHostConsistsOfInitiator + volume: M_DjHostAttachedLun + storagehost: M_DjHostAssociateStorageHost + dimensions: + - object.id + - object.name + - object.nativeId + - host.type + - host.ipAddress + - host.status + - host.version + - host.accessMode + - host.ultraPathVersion + - host.nativeMultiPathVersion + - host.hostGroup + - host.hostGroupId + - host.hostGroupNativeId + - host.project + - host.projectId + - host.az + - host.azId + statistics: + 
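+      # The statistics dataset below can be queried through dataset/flat_query_histogram.yml;
+      # an illustrative invocation (hypothetical host name):
+      #   ansible-playbook dataset/flat_query_histogram.yml \
+      #     --extra-vars "dataSet=stat-dj-host-present filterValues=['host1'] metrics=['numOfLuns','totalCapacity']"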
dataset: stat-dj-host-present + metrics: + - count1 + - numOfLuns + - totalCapacity + - allocCapacity + - protectionCapacity + - dedupedCapacity + - compressedCapacity + + hostini: + className: SYS_DjHostInitiator + attributes: + - id + - nativeId + - last_Modified + - wwn + - type + - status + - djHostId + + hostgroup: + className: SYS_DjHostGroup + attributes: + - id + - nativeId + - last_Modified + - name + - sourceType + - djProjectId + relations: + host: M_DjHostGroupContainsDjHost + volume: M_DjHostGroupAttachedLun + + storage: + className: SYS_StorDevice + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - status + - productName + - deviceName + - ipAddress + - manufacturer + - sn + - version + - totalCapacity + - usedCapacity + - freeDisksCapacity + relations: + controller: M_StorDevConsistsOfController + storageport: M_StorDevConsistsOfStorPort + pool: M_StorDevConsistsOfStorPool + disk: M_StorDevConsistsOfStorDisk + volume: M_StorDevConsistsOfLun + diskdomain: M_StorDevConsistsOfDiskPool + storagehost: M_StorDevConsistsOfStorHost + objTypeId: 1125904201809920 + indicators: { + "bandwidth": "1125904201875458", + "cpuUsage": "1125904201875457", + "memoryUsage": "1125904201875463", + "readBandwidth": "1125904201875459", + "readThroughput": "1125904201875461", + "responseTime": "1125904201875464", + "throughput": "1125904201875465", + "writeBandwidth": "1125904201875460", + "writeThroughput": "1125904201875462" + } + dimensions: + - object.id + - object.name + - object.nativeId + - device.ipAddress + - device.status + - device.productName + - device.manufacturer + - device.sn + - device.softwareVersion + - device.az + - device.azId + performance: + dataset: perf-storage-device + metrics: + - cpuUsage + - memoryUsage + - responseTime + - throughput + - readThroughput + - writeThroughput + - bandwidth + - readBandwidth + - writeBandwidth + statistics: + dataset: stat-storage-device + metrics: + - count1 + - totalCapacity + - usedCapacity + - freeDisksCapacity + + controller: + className: SYS_Controller + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - status + - isMaster + - engine + - location + - softVer + - cpuInfo + - memorySize + - storageDeviceId + objTypeId: 1125908496777216 + indicators: { + "bandwidth": "1125908496842755", + "cpuUsage": "1125908496842753", + "memoryUsage": "1125908496842761", + "queueLength": "1125908496842760", + "readBandwidth": "1125908496842756", + "readHitRatio": "1125908496842766", + "readResponseTime": "1125908496842770", + "readThroughput": "1125908496842758", + "responseTime": "1125908496842762", + "throughput": "1125908496842763", + "writeBandwidth": "1125908496842757", + "writeCacheUsage": "1125908496842754", + "writeHitRatio": "1125908496842767", + "writeResponseTime": "1125908496842771", + "writeThroughput": "1125908496842759" + } + dimensions: + - object.id + - object.name + - object.nativeId + - controller.status + - controller.engine + - controller.location + - device.name + - device.ipAddress + - device.status + - device.productName + - device.manufacturer + - device.sn + - device.softwareVersion + - device.az + - device.azId + - device.nativeId + - device.resid + performance: + dataset: perf-controller + metrics: + - throughput + - readBandwidth + - writeBandwidth + - responseTime + - cpuUsage + - memoryUsage + - queueLength + - readThroughput + - writeThroughput + - bandwidth + - writeCacheUsage + - readHitRatio + - writeHitRatio + - 
readResponseTime + - writeResponseTime + statistics: + dataset: stat-controller-present + metrics: + - count1 + + pool: + className: SYS_StoragePool + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - status + - runningStatus + - type + - totalCapacity + - usedCapacity + - dedupedCapacity + - compressedCapacity + - protectionCapacity + - tier0Capacity + - tier1Capacity + - tier2Capacity + - tier0RaidLv + - tier1RaidLv + - tier2RaidLv + - poolId + - storageDeviceId + objTypeId: 1125912791744512 + indicators: { + "bandwidth": "1125912791810051", + "readBandwidth": "1125912791810052", + "readThroughput": "1125912791810054", + "responseTime": "1125912791810050", + "throughput": "1125912791810049", + "writeBandwidth": "1125912791810053", + "writeThroughput": "1125912791810055" + } + dimensions: + - object.id + - object.name + - object.nativeId + - pool.poolId + - pool.status + - pool.runningStatus + - pool.type + - pool.raidLevel + - pool.tier + - pool.tierId + - pool.tierNativeId + - device.name + - device.ipAddress + - device.status + - device.productName + - device.manufacturer + - device.sn + - device.softwareVersion + - device.az + - device.azId + - device.nativeId + - device.resid + performance: + dataset: perf-storage-pool + metrics: + - bandwidth + - responseTime + - throughput + - readBandwidth + - writeBandwidth + - readThroughput + - writeThroughput + statistics: + dataset: stat-storage-pool + metrics: + - count1 + - totalCapacity + - usedCapacity + - protectionCapacity + - dedupedCapacity + - compressedCapacity + - tier0Capacity + - tier1Capacity + - tier2Capacity + + disk: + className: SYS_StorageDisk + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - sn + - manufacturer + - physicalModel + - firmware + - speed + - capacity + - status + - physicalType + - logicalType + - healthScore + - diskId + - poolId + - storageDeviceId + objTypeId: 1125917086711808 + indicators: { + "bandwidth": "1125917086777346", + "queueLength": "1125917086777349", + "readThroughput": "1125917086777347", + "responseTime": "1125917086777351", + "serviceTime": "1125917086777350", + "throughput": "1125917086777352", + "utility": "1125917086777345", + "writeThroughput": "1125917086777348" + } + dimensions: + - object.id + - object.name + - object.nativeId + - disk.manufacturer + - disk.logicalType + - disk.physicalModel + - disk.physicalType + - disk.status + - disk.sn + - disk.speed + - disk.diskPool + - disk.diskPoolId + - disk.diskPoolNativeId + - device.name + - device.ipAddress + - device.status + - device.productName + - device.manufacturer + - device.sn + - device.softwareVersion + - device.az + - device.azId + - device.nativeId + - device.resid + performance: + dataset: perf-storage-disk + metrics: + - utility + - responseTime + - serviceTime + - queueLength + - throughput + - readThroughput + - writeThroughput + - bandwidth + statistics: + dataset: stat-storage-disk-present + metrics: + - count1 + - capacity + - healthScore + + diskdomain: + className: SYS_DiskPool + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - status + - runningStatus + - encryptDiskType + - totalCapacity + - usedCapacity + - freeCapacity + - spareCapacity + - usedSpareCapacity + - poolId + - storageDeviceId + + volume: + className: SYS_Lun + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - lunType + - mapped + - wwn + - totalCapacity + - 
allocCapacity + - protectionCapacity + - dedupedCapacity + - compressedCapacity + - lunId + - poolId + - storageDeviceId + - djTierId + - djProjectId + objTypeId: 1125921381679104 + indicators: { + "bandwidth": "1125921381744643", + "hitRatio": "1125921381744660", + "maxResponseTime": "1125921381744655", + "queueLength": "1125921381744650", + "readBandwidth": "1125921381744646", + "readHitRatio": "1125921381744644", + "readRatio": "1125921381744658", + "readResponseTime": "1125921381744656", + "readSize": "1125921381744652", + "readThroughput": "1125921381744648", + "responseTime": "1125921381744642", + "serviceTime": "1125921381744654", + "throughput": "1125921381744641", + "utility": "1125921381744651", + "writeBandwidth": "1125921381744647", + "writeHitRatio": "1125921381744645", + "writeRatio": "1125921381744659", + "writeResponseTime": "1125921381744657", + "writeSize": "1125921381744653", + "writeThroughput": "1125921381744649" + } + dimensions: + - object.id + - object.name + - object.nativeId + - lun.lunId + - lun.wwn + - lun.mapped + - lun.lunType + - lun.tier + - lun.tierId + - lun.tierNativeId + - lun.project + - lun.projectId + - lun.host + - lun.hostId + - lun.hostNativeId + - lun.hostGroup + - lun.hostGroupId + - lun.hostGroupNativeId + - lun.pool + - lun.poolId + - lun.poolNativeId + - device.name + - device.ipAddress + - device.status + - device.productName + - device.manufacturer + - device.sn + - device.softwareVersion + - device.az + - device.azId + - device.nativeId + - device.resid + performance: + dataset: perf-lun + metrics: + - bandwidth + - responseTime + - throughput + - readBandwidth + - writeBandwidth + - readThroughput + - writeThroughput + - readHitRatio + - writeHitRatio + - queueLength + - utility + - readSize + - writeSize + - serviceTime + - maxResponseTime + - readResponseTime + - writeResponseTime + - readRatio + - writeRatio + - hitRatio + statistics: + dataset: stat-lun + metrics: + - count1 + - totalCapacity + - allocCapacity # real occupied space, allocCapacity > totalCapacity for thick LUN + - protectionCapacity + - dedupedCapacity # not applicable for Dorado + - compressedCapacity # not applicable for Dorado + + storageport: + className: SYS_StoragePort + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - portId + - portName + - location + - connectStatus + - status + - portType + - mac + - mgmtIp + - ipv4Mask + - mgmtIpv6 + - ipv6Mask + - iscsiName + - bondId + - bondName + - wwn + - sfpStatus + - logicalType + - numOfInitiators + - speed + - maxSpeed + - storageDeviceId + objTypeId: 1125925676646400 + indicators: { + "bandwidth": "1125925676711938", + "maxResponseTime": "1125925676711959", + "queueLength": "1125925676711955", + "readBandwidth": "1125925676711939", + "readRatio": "1125925676711960", + "readResponseTime": "1125925676711952", + "readSize": "1125925676711956", + "readThroughput": "1125925676711943", + "responseTime": "1125925676711945", + "serviceTime": "1125925676711958", + "throughput": "1125925676711946", + "utility": "1125925676711951", + "writeBandwidth": "1125925676711940", + "writeRatio": "1125925676711961", + "writeResponseTime": "1125925676711953", + "writeSize": "1125925676711957", + "writeThroughput": "1125925676711944" + } + dimensions: + - object.id + - object.name + - object.nativeId + - port.portId + - port.portType + - port.connectStatus + - port.wwn + - port.location + - port.speed + - port.maxSpeed + - port.logicalType + - port.tier + - port.tierId + - port.tierNativeId 
+ - device.name + - device.ipAddress + - device.status + - device.productName + - device.manufacturer + - device.sn + - device.softwareVersion + - device.az + - device.azId + - device.nativeId + - device.resid + performance: + dataset: perf-storage-port + metrics: + - bandwidth + - throughput + - responseTime + - readBandwidth + - writeBandwidth + - readThroughput + - writeThroughput + - utility + - readResponseTime + - writeResponseTime + - queueLength + - readSize + - writeSize + - serviceTime + - maxResponseTime + - readRatio + - writeRatio + statistics: + dataset: stat-storage-port-present + metrics: + - count1 + - numberOfInitiators + + storagehost: + className: SYS_StorageHost + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - ipAddress + - type + - hostId + - storageDeviceId + - djHostId + objTypeId: 1125938561548288 + indicators: { + "bandwidth": "1125938561613829", + "maxBandwidth": "1125938561613825", + "maxResponseTime": "1125938561613839", + "maxThroughput": "1125938561613828", + "queueLength": "1125938561613827", + "readBandwidth": "1125938561613831", + "readResponseTime": "1125938561613840", + "readSize": "1125938561613832", + "readThroughput": "1125938561613833", + "readTransDelay": "1125938561613842", + "responseTime": "1125938561613838", + "serviceTime": "1125938561613837", + "throughput": "1125938561613830", + "utility": "1125938561613826", + "writeBandwidth": "1125938561613834", + "writeResponseTime": "1125938561613841", + "writeSize": "1125938561613835", + "writeThroughput": "1125938561613836", + "writeTransDelay": "1125938561613843" + } + dimensions: + - object.id + - object.name + - object.nativeId + - djhost.resId + - djhost.name + - djhost.type + - djhost.ipAddress + - djhost.status + - djhost.version + - djhost.accessMode + - djhost.ultraPathVersion + - djhost.nativeMultiPathVersion + - djhost.hostGroup + - djhost.hostGroupId + - djhost.hostGroupNativeId + - djhost.project + - djhost.projectId + - djhost.az + - djhost.azId + - device.name + - device.ipAddress + - device.status + - device.productName + - device.manufacturer + - device.sn + - device.softwareVersion + - device.az + - device.azId + - device.nativeId + - device.resid + performance: + dataset: perf-storage-host + metrics: + - maxBandwidth + - utility + - queueLength + - maxThroughput + - bandwidth + - throughput + - readBandwidth + - readSize + - readThroughput + - writeBandwidth + - writeSize + - writeThroughput + - serviceTime + - responseTime + - maxResponseTime + - readResponseTime + - writeResponseTime + - readTransDelay + - writeTransDelay + statistics: + dataset: stat-storage-host-present + metrics: + - count1 + + fcswitch: + className: SYS_FCSwitch + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - status + - syncStatus + - sn + - deviceName + - manufacturer + - productName + - ipAddress + - portNum + - domainId + - version + - wwn + - fabricWwn + - virtualFabricId + - isLogical + - fabricId + relations: + fcswitchport: M_FCSwitchContainsPort + dimensions: + - object.id + - object.name + - object.nativeId + - device.sn + - device.productName + - device.manufacturer + - device.ipAddress + - device.status + - device.wwn + - device.domainId + - device.fabricWwn + - device.fabricName + - device.isLogical + - device.az + - device.azId + statistics: + dataset: stat-fcswitch-present + metrics: + - count1 + - numberOfPorts + + fcswitchport: + className: SYS_FCSwitchPort + attributes: + - id + - nativeId + - 
last_Modified + - lastMonitorTime + - dataStatus + - name + - connectStatus + - status + - speed + - wwn + - portType + - portIndex + - portNumber + - slotNumber + - fcSwitchId + objTypeId: 1970329131941888 + indicators: { + "bandwidth": "1970329132007425", + "bandwidthRx": "1970329132007434", + "bandwidthTx": "1970329132007440", + "bbCreditZero": "1970329132007433", + "class3Discard": "1970329132007430", + "error": "1970329132007439", + "invalidCrc": "1970329132007426", + "linkFailures": "1970329132007437", + "linkReset": "1970329132007429", + "linkResetRx": "1970329132007428", + "linkResetTx": "1970329132007427", + "signalLoss": "1970329132007438", + "syncLoss": "1970329132007431", + "utility": "1970329132007436", + "utilityRx": "1970329132007432", + "utilityTx": "1970329132007435" + } + dimensions: + - object.id + - object.name + - object.nativeId + - port.status + - port.speed + - port.portType + - port.connectStatus + - port.wwn + - port.portIndex + - port.slotNumber + - port.portNumber + - port.associateFCAlias + - port.associateFCAliasId + - port.associateFCZone + - port.associateFCZoneId + - port.remoteWwn + - port.remoteNodeWwn + - host.name + - host.ipAddress + - host.project + - host.projectId + - host.hostGroup + - host.hostGroupId + - host.hostGroupNativeId + - host.nativeId + - host.resId + - storage.name + - storage.ipAddress + - storage.sn + - storage.nativeId + - storage.resId + - switch.name + - switch.sn + - switch.productName + - switch.manufacturer + - switch.ipAddress + - switch.status + - switch.wwn + - switch.domainId + - switch.fabricWwn + - switch.fabricName + - switch.isLogical + - switch.az + - switch.azId + - switch.nativeId + - switch.resId + performance: + dataset: perf-fcswitch-port + metrics: + - error + - bandwidthRx + - bandwidthTx + - invalidCrc + - class3Discard + - linkResetRx + - linkResetTx + - linkReset + - linkFailures + - signalLoss + - syncLoss + - utilityRx + - utilityTx + - bandwidth + - utility + - bbCreditZero + statistics: + dataset: stat-fcswitch-port-present + metrics: + - count1 + + fcswitchportlink: + className: FcLink + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - portWwn + - nodeWwn + - remotePortWwn + - remoteNodeWwn + + fabric: + className: SYS_Fabric + attributes: + - id + - nativeId + - last_Modified + - lastMonitorTime + - dataStatus + - name + - wwn + relations: + fcswitch: M_FabricContainsFCSwitch + fczone: M_FabricContainsZone + fcalias: M_FabricContainsAlias + + fczone: + className: SYS_FCSwitchZone + attributes: + - id + - last_Modified + - dataStatus + - name + - cfg + - status + - fabricWwn + - memberCount + - fabricId + + fczonewwn: + className: SYS_FCZoneMemberWwn + attributes: + - id + - last_Modified + - zoneId + - fabricWwn + - wwn + + fczoneport: + className: SYS_FCZoneMemberPort + attributes: + - id + - last_Modified + - zoneId + - fabricWwn + - portIndex + - domainId + + fczonealias: + className: SYS_FCZoneMemberAlias + attributes: + - id + - last_Modified + - zoneId + - fabricWwn + - aliasName + + fcalias: + className: SYS_FCSwitchAlias + attributes: + - id + - last_Modified + - dataStatus + - name + - type + - memberCount + - fabricWwn + - fabricId + + fcaliaswwn: + className: SYS_FCAliasMemberWwn + attributes: + - id + - last_Modified + - aliasId + - fabricWwn + - wwn + + fcaliasport: + className: SYS_FCAliasMemberPort + - id + - last_Modified + - aliasId + - fabricWwn + - portIndex + - domainId diff --git a/playbook/host/get_hostgroups_by_fuzzy_name.yml 
b/playbook/host/get_hostgroups_by_fuzzy_name.yml new file mode 100644 index 0000000..827b1db --- /dev/null +++ b/playbook/host/get_hostgroups_by_fuzzy_name.yml @@ -0,0 +1,34 @@ +--- + +# Required Parameters: +# hostGroupName: host group name +# +# Examples: +# --extra-vars "hostGroupName='test'" +# +- name: Get Host Groups by Fuzzy Name + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Host Groups by Fuzzy Name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostGroupName}}" + register: HOSTGROUPS + + - name: Show Host Groups + debug: + msg: "{{ HOSTGROUPS.json }}" diff --git a/playbook/host/get_hosts_by_fuzzy_name.yml b/playbook/host/get_hosts_by_fuzzy_name.yml new file mode 100644 index 0000000..37049ee --- /dev/null +++ b/playbook/host/get_hosts_by_fuzzy_name.yml @@ -0,0 +1,34 @@ +--- + +# Required Parameters: +# hostName: host name +# +# Examples: +# --extra-vars "hostName='test'" +# +- name: Get Hosts by Fuzzy Name + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Hosts by Fuzzy Name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostName}}" + register: HOSTS + + - name: Show Hosts + debug: + msg: "{{ HOSTS.json }}" diff --git a/playbook/host/list_hostgroups.yml b/playbook/host/list_hostgroups.yml new file mode 100644 index 0000000..7bf1729 --- /dev/null +++ b/playbook/host/list_hostgroups.yml @@ -0,0 +1,121 @@ +--- + +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: host_count +# sortDir: sort direction, options: desc, asc +# hostGroupName: host group name +# managedStatus: a list of managed status, options: NORMAL, TAKE_OVERING, TAKE_ERROR, TAKE_OVER_ALARM, UNKNOWN +# azName: availability zone name +# projectName: project name +# +# Examples: +# --extra-vars "hostGroupName='test'" +# --extra-vars "azName='room1' projectName='project1'" + +# Generated Parameters (can be overwritten): +# azIds: a list of availability zone IDs +# projectId: project ID +# +# Examples: +# --extra-vars "azIds=['B2012FF2ECB03CCCA03FFAAD4BA590F1'] projectId='2AC426C9F4C535A2BEEFAEE9F2EDF740'" + +- name: List Host Groups + hosts: localhost + vars: + pageNo: 1 + pageSize: 10 + sortKey: null + sortDir: null + hostGroupName: null + managedStatus: [] + azIds: [] + projectId: null + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Query AZ by name + vars: + query: "[?name=='{{ azName }}'].id" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.azs }}?az_name={{azName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: AZ + failed_when: AZ.json.az_list | json_query(query) | length != 1 + when: azName is defined + + - name: Get AZ ID + vars: + query: "[?name=='{{ azName }}'].id" + set_fact: + azIds: "{{ azIds + 
AZ.json.az_list | json_query(query) }}" + when: + - azName is defined + - AZ.json.az_list | json_query(query) | length == 1 + + - name: Query project by name + vars: + query: "[?name=='{{ projectName }}'].id" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.projects }}?name={{projectName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PROJECT + failed_when: PROJECT.json.projectList | json_query(query) | length != 1 + when: projectName is defined + + - name: Get project ID + vars: + query: "[?name=='{{ projectName }}'].id" + set_fact: + projectId: "{{ PROJECT.json.projectList | json_query(query) | first }}" + when: + - projectName is defined + - PROJECT.json.projectList | json_query(query) | length == 1 + + - name: List Host Groups + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + limit: "{{pageSize}}" + start: "{{ pageSize|int * (pageNo|int - 1) }}" + sort_key: "{{sortKey}}" + sort_dir: "{{sortDir}}" + name: "{{hostGroupName}}" + managed_status: "{{managedStatus}}" + az_ids: "{{azIds}}" + project_id: "{{projectId}}" + register: HOSTGROUPS + + - name: Show Host Groups + vars: + objList: "{{ HOSTGROUPS.json.hostgroups }}" + totalNum: "{{HOSTGROUPS.json.total}}" + sortDesc: "{{ 'True' if sortDir == 'desc' else 'False' }}" + debug: + msg: + objList: "{{ ( objList | sort(attribute=sortKey,reverse=sortDesc) ) if sortKey != 'null' else ( objList | sort(reverse=sortDesc) ) }}" + totalNum: "{{ totalNum }}" + pageSize: "{{ pageSize }}" + pageNo: "{{ pageNo }}" diff --git a/playbook/host/list_hosts.yml b/playbook/host/list_hosts.yml new file mode 100644 index 0000000..318cfb7 --- /dev/null +++ b/playbook/host/list_hosts.yml @@ -0,0 +1,133 @@ +--- + +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: initiator_count +# sortDir: sort direction, options: desc, asc +# hostName: host name +# ip: ip address +# osType: os type, options: LINUX, WINDOWS, SUSE, EULER, REDHAT, CENTOS, WINDOWSSERVER2012, SOLARIS, HPUX, AIX, XENSERVER, MACOS, VMWAREESX, ORACLE, OPENVMS +# displayStatus: display status, options: OFFLINE, NOT_RESPONDING, NORMAL, RED, GRAY, GREEN, YELLOW +# managedStatus: a list of managed status, options: NORMAL, TAKE_OVERING, TAKE_ERROR, TAKE_OVER_ALARM, UNKNOWN +# accessMode: access mode, options: ACCOUNT, NONE, VCENTER +# azName: availability zone name +# projectName: project name +# +# Examples: +# --extra-vars "accessMode='NONE' displayStatus='NORMAL' managedStatus=['NORMAL']" +# --extra-vars "azName='room1' projectName='project1'" + +# Generated Parameters (can be overwritten): +# azId: availability zone ID +# projectId: project ID +# +# Examples: +# --extra-vars "azId='B2012FF2ECB03CCCA03FFAAD4BA590F1' projectId='2AC426C9F4C535A2BEEFAEE9F2EDF740'" + +- name: List Hosts + hosts: localhost + vars: + pageNo: 1 + pageSize: 10 + sortKey: null + sortDir: null + hostName: null + ip: null + osType: null + displayStatus: null + managedStatus: [] + accessMode: null + azId: null + projectId: null + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Query AZ by name + vars: + query: "[?name=='{{ azName }}'].id" + 
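+      # query (defined above) is a JMESPath expression that picks the id of the AZ whose
+      # name equals azName exactly; failed_when (below) aborts the play unless exactly one
+      # AZ in the response matches, so azId is never set from an ambiguous result.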
uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.azs }}?az_name={{azName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: AZ + failed_when: AZ.json.az_list | json_query(query) | length != 1 + when: azName is defined + + - name: Get AZ ID + vars: + query: "[?name=='{{ azName }}'].id" + set_fact: + azId: "{{ AZ.json.az_list | json_query(query) | first }}" + when: + - azName is defined + - AZ.json.az_list | json_query(query) | length == 1 + + - name: Query project by name + vars: + query: "[?name=='{{ projectName }}'].id" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.projects }}?name={{projectName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PROJECT + failed_when: PROJECT.json.projectList | json_query(query) | length != 1 + when: projectName is defined + + - name: Get project ID + vars: + query: "[?name=='{{ projectName }}'].id" + set_fact: + projectId: "{{ PROJECT.json.projectList | json_query(query) | first }}" + when: + - projectName is defined + - PROJECT.json.projectList | json_query(query) | length == 1 + + - name: List Hosts + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + limit: "{{pageSize}}" + start: "{{ pageSize|int * (pageNo|int - 1) }}" + sort_key: "{{sortKey}}" + sort_dir: "{{sortDir}}" + name: "{{hostName}}" + ip: "{{ip}}" + os_type: "{{osType}}" + display_status: "{{displayStatus}}" + managed_status: "{{managedStatus}}" + access_mode: "{{accessMode}}" + az_id: "{{azId}}" + project_id: "{{projectId}}" + register: HOSTS + + - name: Show Hosts + vars: + objList: "{{ HOSTS.json.hosts }}" + totalNum: "{{HOSTS.json.total}}" + sortDesc: "{{ 'True' if sortDir == 'desc' else 'False' }}" + debug: + msg: + objList: "{{ ( objList | sort(attribute=sortKey,reverse=sortDesc) ) if sortKey != 'null' else ( objList | sort(reverse=sortDesc) ) }}" + totalNum: "{{ totalNum }}" + pageSize: "{{ pageSize }}" + pageNo: "{{ pageNo }}" diff --git a/playbook/host/show_host.yml b/playbook/host/show_host.yml new file mode 100644 index 0000000..2d5e1ec --- /dev/null +++ b/playbook/host/show_host.yml @@ -0,0 +1,111 @@ +--- + +# Required Parameters: +# hostName: host name, can be replaced with hostId +# +# Examples: +# --extra-vars "hostName='ansible1'" + +# Optional Parameters: +# hostId: host ID +# showPort: show host ports, default: true +# portName: port wwn or iqn +# portType: port type, options: UNKNOWN, FC, ISCSI +# portStatus: port status, options: UNKNOWN, ONLINE, OFFLINE, UNBOUND +# +# Examples: +# --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" +# --extra-vars '{"hostName": "ansible1", "showPort": false}' +# --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b' portName='10000090fa1b623e'" +# --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b' portType='ISCSI'" +# --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b' portStatus='ONLINE'" + +- name: Get Host by ID + hosts: localhost + vars: + showPort: true + portParams: "" + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Host by name + uri: + url: 
"https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostName}}" + register: HOSTS + when: hostName is defined + + - name: Get Host ID + vars: + query: "[?name=='{{ hostName }}'].id" + set_fact: + hostId: "{{ HOSTS.json.hosts | json_query(query) | first }}" + failed_when: HOSTS.json.hosts | json_query(query) | length != 1 + when: hostName is defined + + - name: Get Host Info + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/{{hostId}}/summary" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: HOST + + - name: Show Host + debug: + msg: "{{ HOST.json }}" + + - name: Set portParams - portName + set_fact: + portParams: "{{ portParams + 'port_name=' + portName + '&' }}" + when: + - showPort == true + - portName is defined + + - name: Set portParams - portType + set_fact: + portParams: "{{ portParams + 'protocol=' + portType + '&' }}" + when: + - showPort == true + - portType is defined + + - name: Set param - portStatus + set_fact: + portParams: "{{ portParams + 'status=' + portStatus + '&' }}" + when: + - showPort == true + - portStatus is defined + + - name: Get Ports + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/{{hostId}}/initiators?{{portParams}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PORT + when: + - showPort == true + + - name: Show Ports + debug: + msg: "{{ PORT.json }}" + when: + - showPort == true \ No newline at end of file diff --git a/playbook/host/show_hostgroup.yml b/playbook/host/show_hostgroup.yml new file mode 100644 index 0000000..97f2322 --- /dev/null +++ b/playbook/host/show_hostgroup.yml @@ -0,0 +1,102 @@ +--- + +# Required Parameters: +# hostGroupName: host group name, can be replaced with hostId +# +# Examples: +# --extra-vars "hostGroupName='group1'" +# +# Optional Parameters: +# hostGroupId: host group ID +# showHost: show hosts, default: true +# hostName: host name +# ip: ip address +# osType: os type, options: LINUX, WINDOWS, SUSE, EULER, REDHAT, CENTOS, WINDOWSSERVER2012, SOLARIS, HPUX, AIX, XENSERVER, MACOS, VMWAREESX, ORACLE, OPENVMS +# displayStatus: a list of display status, options: OFFLINE, NOT_RESPONDING, NORMAL, RED, GRAY, GREEN, YELLOW +# managedStatus: a list of managed status, options: NORMAL, TAKE_OVERING, TAKE_ERROR, TAKE_OVER_ALARM, UNKNOWN +# +# Examples: +# --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" +# --extra-vars '{"hostGroupName":"group1","showHost":false}' +# --extra-vars '{"hostGroupName":"group1","displayStatus":["NORMAL"],"managedStatus":["NORMAL"]}' + +- name: Get Host Group + hosts: localhost + vars: + showHost: true + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Host Group by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostGroupName}}" + register: HOSTGROUPS + when: hostGroupName is defined + + - name: Get Host Group ID + vars: + 
query: "[?name=='{{ hostGroupName }}'].id" + set_fact: + hostGroupId: "{{ HOSTGROUPS.json.hostgroups | json_query(query) | first }}" + failed_when: HOSTGROUPS.json.hostgroups | json_query(query) | length != 1 + when: hostGroupName is defined + + - name: Get Host Group + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/{{hostGroupId}}/summary" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: HOSTGROUP + + - name: Show Host Group + debug: + msg: "{{ HOSTGROUP.json }}" + + - name: List Hosts in the Host Group + vars: + hostName: null + ip: null + osType: null + displayStatus: [] + managedStatus: [] + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/{{hostGroupId}}/hosts/list" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostName}}" + ip: "{{ip}}" + os_type: "{{osType}}" + display_status: "{{displayStatus}}" + managed_status: "{{managedStatus}}" + register: HOSTS + when: + - showHost == true + + - name: Show Hosts in the Host Group + debug: + msg: "{{ HOSTS.json }}" + when: + - showHost == true diff --git a/playbook/perf/list_indicators.yml b/playbook/perf/list_indicators.yml new file mode 100644 index 0000000..42da718 --- /dev/null +++ b/playbook/perf/list_indicators.yml @@ -0,0 +1,65 @@ +--- + +- name: List Indicators + hosts: localhost + vars_files: + - ../global.yml + vars: + # Required Parameters: + # objType: object type name, see ../global.yml to get supported object types in INVENTORY + # + # Example: + # --extra-vars "objType=volume" + + # Generated Parameters (can be overwritten): + # objTypeId: object type id, see ../global.yml to get supported object types in INVENTORY.objType.objTypeId + # + # Examples: + # --extra-vars "objTypeId='1125921381679104'" + + objTypeId: "{{ INVENTORY[objType].objTypeId }}" # map objType to objTypeId + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: List Indicators + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.perfmgr}}/obj-types/{{objTypeId}}/indicators" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: INDICATORS + + - name: Show Indicators + debug: + msg: "{{ INDICATORS.json }}" + + - name: Get Indicator Details + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.perfmgr}}/indicators" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: "{{ INDICATORS.json.data.indicator_ids }}" + register: DETAILS + + - name: Show Indicator Details + debug: + msg: "{{ DETAILS.json }}" + + - name: Generate Indicator Map + set_fact: + indicator_map: "{{ indicator_map|default({}) | combine( {DETAILS.json.data[item|string].indicator_name: item|string } ) }}" + with_items: "{{INDICATORS.json.data.indicator_ids}}" + + - name: Show Indicator Map + debug: + msg: "{{ indicator_map }}" diff --git a/playbook/perf/list_obj_types.yml b/playbook/perf/list_obj_types.yml new file mode 100644 index 0000000..a7a05ef --- /dev/null +++ b/playbook/perf/list_obj_types.yml @@ -0,0 +1,25 @@ +--- + +- name: List Object Types + hosts: localhost + vars_files: + - ../global.yml + 
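+  # The object type ids returned by this playbook appear to correspond to the objTypeId
+  # values recorded in the INVENTORY section of ../global.yml (for example,
+  # 1125921381679104 for volume / SYS_Lun), which list_indicators.yml uses to look up
+  # per-type indicators.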
+  gather_facts: no
+  become: no
+  tasks:
+    - import_tasks: ../user/login.yml
+
+    - name: List Object Types
+      uri:
+        url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.perfmgr}}/obj-types"
+        method: GET
+        validate_certs: no
+        headers:
+          Accept: "application/json"
+          Content-Type: "application/json;charset=utf8"
+          X-Auth-Token: "{{DJ.token}}"
+      register: TYPES
+
+    - name: Show Object Types
+      debug:
+        msg: "{{ TYPES.json }}"
diff --git a/playbook/perf/query_history_data.yml b/playbook/perf/query_history_data.yml
new file mode 100644
index 0000000..8bd2c4a
--- /dev/null
+++ b/playbook/perf/query_history_data.yml
@@ -0,0 +1,102 @@
+---
+
+# Required Parameters:
+# objType: object type name, see ../global.yml to get supported object types in INVENTORY
+# indicators: a list of indicator names, see ../global.yml to get supported indicators in INVENTORY.objType.indicators
+# objName: object name (fuzzy)
+#
+# Examples:
+# --extra-vars "objType=volume indicators=['bandwidth','throughput','responseTime'] objName='1113-001'"
+
+# Optional Parameters:
+# endTime: epoch in seconds, default value is current time
+# timeSpan: time range before endTime, default value is 1h (1 hour), supported unit: s,m,h,d,w,M,y
+#
+# Examples:
+# --extra-vars "endTime=`date -d '2019-11-21 23:00:00' +%s` timeSpan=30m"
+
+# Generated Parameters (can be overwritten):
+# beginTime: epoch in seconds, default value is endTime - timeSpan
+# interval: sample rate enum: MINUTE/HOUR/DAY/WEEK/MONTH, default value depends on timeSpan (<=1d: MINUTE, <=1w: HOUR, >1w: DAY)
+# objTypeId: object type id, see ../global.yml to get supported object types in INVENTORY.objType.objTypeId
+# objIds: a list of object resId, use ../cmdb/list_instances.yml to get object resId
+# indicatorIds: a list of indicator ids, see ../global.yml to get supported indicators in INVENTORY.objType.indicators
+#
+# Examples:
+# --extra-vars "beginTime=`date -d '2019-11-20 23:00:00' +%s` interval=HOUR"
+# --extra-vars "objTypeId='1125921381679104' objIds=['630EA7167C22383F965664860C5FAEEC'] indicatorIds=['1125921381744641','1125921381744642','1125921381744643']"
+
+- name: Query History Performance
+  hosts: localhost
+  vars_files:
+    - ../global.yml
+  vars:
+    endTime: "{{ansible_date_time.epoch}}"   # default current epoch (seconds), use `date +%s` to get current unix time
+    timeSpan: "1h"                           # default last 1 hour, can be s,m,h,d,w,M,y
+    unit: "{{ timeSpan[-1] if timeSpan is regex('^[0-9]+[s|m|h|d|w|M|y]$') else 's' }}"   # default unit set to seconds
+    seconds:
+      s: 1                       # second
+      m: 60                      # minute
+      h: 3600                    # hour
+      d: "{{ 3600 * 24 }}"       # day
+      w: "{{ 3600 * 24 * 7 }}"   # week
+      M: "{{ 3600 * 24 * 30 }}"  # month
+      y: "{{ 3600 * 24 * 365 }}" # year
+    beginTime: "{{ endTime|int - timeSpan|replace(unit,'')|int * seconds[unit]|int }}"   # default timeSpan before current epoch (seconds)
+    timeSpanSeconds: "{{ endTime|int - beginTime|int }}"
+    interval: "{{ 'MINUTE' if timeSpanSeconds <= seconds['d'] else 'HOUR' if timeSpanSeconds <= seconds['w'] else 'DAY' }}"
+    objTypeId: "{{ INVENTORY[objType].objTypeId }}"   # map objType to objTypeId
+  gather_facts: yes
+  become: no
+  tasks:
+    - import_tasks: ../user/login.yml
+
+    - name: Get Indicator IDs   # map indicator names to IDs
+      set_fact:
+        indicatorIds: "{{ indicatorIds|default([]) + [ INVENTORY[objType].indicators[ item ] ] }}"
+      with_items: "{{ indicators }}"
+      when: indicators is defined
+
+    - name: List Instances
+      vars:
+        className: "{{ INVENTORY[objType].className }}"   # map objType to className
+        params: 
"pageNo=1&pageSize=1000&condition={\"constraint\":[{\"simple\":{\"name\":\"dataStatus\",\"operator\":\"equal\",\"value\":\"normal\"}},{\"logOp\":\"and\",\"simple\":{\"name\":\"name\",\"operator\":\"contain\",\"value\":\"{{objName|urlencode}}\"}}]}" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.instances}}/{{className}}?{{params}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: INSTANCES + when: objName is defined + + - name: Get Object IDs + set_fact: + objIds: "{{ INSTANCES.json.objList | json_query('[*].id') }}" + when: objName is defined + + - name: Get Hisotry Performance Data + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.perfdata }}/history-data/action/query" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + obj_type_id: "{{ objTypeId }}" + indicator_ids: "{{ indicatorIds }}" + obj_ids: "{{ objIds }}" + interval: "{{ interval }}" + range: BEGIN_END_TIME + begin_time: "{{ beginTime }}000" + end_time: "{{ endTime }}000" + register: PERFDATA + + - name: Show Data + debug: + msg: "{{ PERFDATA.json }}" \ No newline at end of file diff --git a/playbook/perf/show_indicators.yml b/playbook/perf/show_indicators.yml new file mode 100644 index 0000000..e5d571b --- /dev/null +++ b/playbook/perf/show_indicators.yml @@ -0,0 +1,50 @@ +--- + +- name: Show Indicators Detail + hosts: localhost + vars_files: + - ../global.yml + vars: + # Required Parameters: + # objType: object type name, see ../global.yml to get supported object types in INVENTORY + # indicators: a list of indicator names, see ../global.yml to get supported indicators in INVENTORY.objType.indicators + # + # Examples: + # --extra-vars "objType=volume indicators=['bandwidth','throughput','responseTime']" + + # Generated Parameters (can be overwritten): + # objTypeId: object type id, see ../global.yml to get supported object types in INVENTORY.objType.objTypeId + # indicatorIds: a list of indicator id, see ../global.yml to get supported indicators in INVENTORY.objType.indicators + # + # Examples: + # --extra-vars "objTypeId='1125921381679104' indicatorIds=['1125921381744641','1125921381744642','1125921381744643']" + + objTypeId: "{{ INVENTORY[objType].objTypeId }}" # map objType to objTypeId + + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Indicator IDs # map indicator names to IDs + set_fact: + indicatorIds: "{{ indicatorIds|default([]) + [ INVENTORY[objType].indicators[ item ] ] }}" + with_items: "{{ indicators }}" + when: indicators is defined + + - name: Get Indicators Detail + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.perfmgr}}/indicators" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: "{{ indicatorIds }}" + register: DETAILS + + - name: Show Indicators Detail + debug: + msg: "{{ DETAILS.json }}" diff --git a/playbook/project/get_project_by_name.yml b/playbook/project/get_project_by_name.yml new file mode 100644 index 0000000..02c917e --- /dev/null +++ b/playbook/project/get_project_by_name.yml @@ -0,0 +1,41 @@ +--- + +# Required Parameters: +# projectName: project name +# +# Examples: +# --extra-vars "projectName='project1'" +# +- name: Get Project by name + hosts: 
localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Project by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.projects }}?name={{projectName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PROJECT + + - name: Check Project + vars: + query: "[?name=='{{ projectName }}']" + debug: + msg: "No matched project: '{{ projectName }}'" + when: PROJECT.json.projectList | json_query(query) | length < 1 + + - name: Show Project + vars: + query: "[?name=='{{ projectName }}']" + debug: + msg: "{{ PROJECT.json.projectList | json_query(query) }}" + when: PROJECT.json.projectList | json_query(query) | length >= 1 diff --git a/playbook/project/list_projects.yml b/playbook/project/list_projects.yml new file mode 100644 index 0000000..f172990 --- /dev/null +++ b/playbook/project/list_projects.yml @@ -0,0 +1,50 @@ +--- + +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# projectName: project name +# +# Examples: +# --extra-vars "projectName='project1'" +# +- name: List Projects + hosts: localhost + vars: + pageNo: 1 + pageSize: 10 + params: "{{'limit=' + pageSize|string + '&start=' + (pageSize|int * (pageNo|int - 1) + 1) | string }}" + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Set params - projectName + set_fact: + params: "{{ params + '&name=' + projectName|urlencode }}" + when: + - projectName is defined + + - name: List Projects + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.projects }}?{{params}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PROJECTS + + - name: Show Projects + vars: + objList: "{{ PROJECTS.json.projectList }}" + totalNum: "{{ PROJECTS.json.total }}" + debug: + msg: + objList: "{{ objList }}" + totalNum: "{{ totalNum }}" + pageSize: "{{ pageSize }}" + pageNo: "{{ pageNo }}" diff --git a/playbook/storage/oceanstor/activate_snapshots.yml b/playbook/storage/oceanstor/activate_snapshots.yml new file mode 100644 index 0000000..f5e3ad7 --- /dev/null +++ b/playbook/storage/oceanstor/activate_snapshots.yml @@ -0,0 +1,82 @@ +--- + +# Required Parameters: +# volumes: a list of primary volumes, can be replaced with: snapshots +# suffix: snapshot name suffix +# +# Examples: +# --extra-vars '{"suffix": "20200204T232229", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}' +# +# Generated Parameters (can be overwritten): +# deviceSn: storage device SN +# snapshots: a list of snapshot names +# +# Examples: +# --extra-vars '{"deviceSn": "12323019876312325911", "snapshots": ["DJ_AT_0000_20200204T232229", "DJ_AT_0001_20200204T232229"]}' +# +- name: Activate Snapshots + hosts: localhost + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_affinity.yml + when: + - volumes is defined + + - import_tasks: login_storage.yml + + - name: Generate Snapshot Names + vars: + snapName: "{{item}}_{{suffix}}" + set_fact: + snapshots: "{{ snapshots|default([]) + [snapName] }}" + with_items: "{{ volumes }}" + when: + - volumes is defined + - suffix is defined + + - name: Query Snapshots + uri: + url: 
"https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SNAPSHOT?filter=NAME%3A%3A{{item|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: SNAPSHOTS + with_items: "{{ snapshots }}" + + - name: Get Snapshot IDs + vars: + queryId: "[*].ID" + set_fact: + snapIds: "{{ snapIds|default([]) + SNAPSHOTS.results[item.0].json.data | json_query(queryId) }}" + with_indexed_items: "{{ snapshots }}" + + - name: Show Snapshot IDs + debug: + msg: + snapIds: "{{ snapIds }}" + + - name: Activate Snapshots + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/snapshot/activate" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + SNAPSHOTLIST: "{{ snapIds }}" + register: ACTIVATE_SNAPSHOTS + + - name: Show Activate Results + debug: + msg: "{{ ACTIVATE_SNAPSHOTS.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/add_volumes_to_hypermetro_cg.yml b/playbook/storage/oceanstor/add_volumes_to_hypermetro_cg.yml new file mode 100644 index 0000000..b032072 --- /dev/null +++ b/playbook/storage/oceanstor/add_volumes_to_hypermetro_cg.yml @@ -0,0 +1,192 @@ +--- + +# Required Parameters: +# cgName: consistency group name +# primaryVolumes: a list of primary volume names +# secondaryVolumes: a list of secondary volume names, must be the same number of volumes with the primary volumes +# +# Examples: +# --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"]}' +# + +- name: Add Volumes to HyperMetro Consistency Group + hosts: localhost + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_pairs.yml + vars: + primaryVolumes: "{{primaryVolumes}}" + secondaryVolumes: "{{secondaryVolumes}}" + + - name: Set primary storage SN + set_fact: + deviceSn: "{{ devicePair.primary }}" + + - import_tasks: login_storage.yml + + - name: Query HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: HyperMetroCG + + - name: Get HyperMetro CG ID + set_fact: + cgId: "{{ HyperMetroCG.json.data[0].ID }}" + domainId: "{{ HyperMetroCG.json.data[0].DOMAINID }}" + failed_when: HyperMetroCG.json.data | length != 1 + + - name: Show HyperMetro CG ID + debug: + msg: + cgId: "{{ cgId }}" + domainId: "{{ domainId }}" + + - name: Check Exist HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair?filter=DOMAINID%3A%3A{{domainId}}%20and%20LOCALOBJID%3A%3A{{item.LOCALOBJID}}%20and%20REMOTEOBJID%3A%3A{{item.REMOTEOBJID}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ExistHyperMetroPairs + with_items: "{{ volumePairs }}" + + - name: Get Exist HyperMetro Pair IDs + vars: + queryPairId: 
"[*].ID" + queryObjId: "[*].LOCALOBJID" + set_fact: + existPairIds: "{{ existPairIds|default([]) + ExistHyperMetroPairs.results[item.0].json.data | json_query(queryPairId) }}" + existObjIds: "{{ existObjIds|default([]) + ExistHyperMetroPairs.results[item.0].json.data | json_query(queryObjId) }}" + with_indexed_items: "{{ volumePairs }}" + + - name: Pause HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair/disable_hcpair" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: PAUSE_PAIRS + with_items: "{{ existPairIds }}" + when: existPairIds|length > 0 + + - name: Show Pause Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ PAUSE_PAIRS.results | json_query(queryError) }}" + when: existPairIds|length > 0 + + - name: Create New HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + DOMAINID: "{{ domainId }}" + HCRESOURCETYPE: 1 + LOCALOBJID: "{{ item.LOCALOBJID }}" + REMOTEOBJID: "{{ item.REMOTEOBJID }}" + register: NewHyperMetroPairs + when: item.LOCALOBJID not in existObjIds + with_items: "{{ volumePairs }}" + + - name: Get HyperMetro Pair IDs + vars: + queryPairId: "[*].json.data.ID" + newPairIds: "{{ NewHyperMetroPairs.results | json_query(queryPairId) }}" + set_fact: + pairIds: "{{ existPairIds + newPairIds }}" + failed_when: pairIds|length != volumePairs|length + + - name: Show HyperMetro Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Pause HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup/stop" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: STOP_CG + + - name: Show Pause Results + debug: + msg: "{{ STOP_CG.json.error }}" + + - name: Add HyperMetro Pairs to CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/hyperMetro/associate/pair" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + ASSOCIATEOBJID: "{{ item }}" + register: ADD_PAIRS + with_items: "{{ pairIds }}" + + - name: Show Add Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ ADD_PAIRS.results | json_query(queryError) }}" + + - name: Sync HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup/sync" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SYNC_CG + + - name: Show Sync Results + debug: + msg: "{{ SYNC_CG.json.error }}" \ No newline at end 
of file diff --git a/playbook/storage/oceanstor/add_volumes_to_replication_cg.yml b/playbook/storage/oceanstor/add_volumes_to_replication_cg.yml new file mode 100644 index 0000000..001b9fb --- /dev/null +++ b/playbook/storage/oceanstor/add_volumes_to_replication_cg.yml @@ -0,0 +1,228 @@ +--- + +# Required Parameters: +# cgName: consistency group name +# primaryVolumes: a list of primary volume names +# secondaryVolumes: a list of secondary volume names, must be the same number of volumes with the primary volumes +# +# Examples: +# --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"]}' +# + +- name: Add Volumes to Replication Consistency Group + hosts: localhost + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_pairs.yml + vars: + primaryVolumes: "{{primaryVolumes}}" + secondaryVolumes: "{{secondaryVolumes}}" + + - name: Set primary storage SN + set_fact: + deviceSn: "{{ devicePair.primary }}" + remoteSn: "{{ devicePair.secondary }}" + + - import_tasks: login_storage.yml + + - name: Get Remote Devices + vars: + queryId: "[? SN=='{{remoteSn}}'].ID" + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/remote_device?range=[0-100]" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: REMOTE_DEVICES + failed_when: ( REMOTE_DEVICES.json.data is not defined ) or (REMOTE_DEVICES.json.data | json_query(queryId) | length != 1) + + - name: Get Remote Device ID + vars: + queryId: "[? SN=='{{remoteSn}}'].ID" + set_fact: + remoteDeviceId: "{{ REMOTE_DEVICES.json.data | json_query(queryId) | first}}" + + - name: Query Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ReplicationCG + + - name: Get Replication CG ID + vars: + cg: "{{ ReplicationCG.json.data[0] }}" + set_fact: + cgId: "{{ cg.ID }}" + mode: "{{ cg.REPLICATIONMODEL }}" + syncType: "{{ cg.SYNCHRONIZETYPE | default(3) }}" + recoveryPolicy: "{{ cg.RECOVERYPOLICY | default(1) }}" + syncSpeed: "{{ cg.SPEED | default(2) }}" + interval: "{{ cg.TIMINGVALINSEC | default(600) }}" + compress: "{{ cg.ENABLECOMPRESS | default(false) }}" + timeout: "{{ cg.REMTIMEOUTPERIOD | default(10) }}" + failed_when: ReplicationCG.json.data | length != 1 + + - name: Show Replication CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Check Exist Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR?filter=LOCALRESID%3A%3A{{item.LOCALOBJID}}%20and%20REMOTERESID%3A%3A{{item.REMOTEOBJID}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ExistReplicationPairs + with_items: "{{ volumePairs }}" + + - name: Get Exist Pair IDs + vars: + queryPairId: "[? REMOTEDEVICEID=='{{remoteDeviceId}}'].ID" + queryObjId: "[? 
REMOTEDEVICEID=='{{remoteDeviceId}}'].LOCALRESID" + set_fact: + existPairIds: "{{ existPairIds|default([]) + ExistReplicationPairs.results[item.0].json.data | json_query(queryPairId) }}" + existObjIds: "{{ existObjIds|default([]) + ExistReplicationPairs.results[item.0].json.data | json_query(queryObjId) }}" + with_indexed_items: "{{ volumePairs }}" + + - name: Split Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR/split" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: SPLIT_PAIRS + with_items: "{{ existPairIds }}" + when: existPairIds|length > 0 + + - name: Show Split Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ SPLIT_PAIRS.results | json_query(queryError) }}" + when: existPairIds|length > 0 + + - name: Create New Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + LOCALRESID: "{{ item.LOCALOBJID }}" + REMOTEDEVICEID: "{{ remoteDeviceId }}" + REMOTERESID: "{{ item.REMOTEOBJID }}" + SYNCHRONIZETYPE: "{{ syncType }}" + RECOVERYPOLICY: "{{ recoveryPolicy }}" + SPEED: "{{ syncSpeed }}" + TIMINGVAL: "{{ interval }}" + REPLICATIONMODEL: "{{ mode }}" + ENABLECOMPRESS: "{{ compress }}" + REMTIMEOUTPERIOD: "{{ timeout }}" + register: NewReplicationPairs + when: item.LOCALOBJID not in existObjIds + with_items: "{{ volumePairs }}" + + - name: Get Replication Pair IDs + vars: + queryPairId: "[*].json.data.ID" + newPairIds: "{{ NewReplicationPairs.results | json_query(queryPairId) }}" + set_fact: + pairIds: "{{ existPairIds + newPairIds }}" + failed_when: pairIds|length != volumePairs|length + + - name: Show All Replication Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Pause Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SPLIT_CONSISTENCY_GROUP" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SPLIT_CG + + - name: Show Pause Results + debug: + msg: "{{ SPLIT_CG.json.error }}" + + - name: Add Replication Pairs to CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/ADD_MIRROR" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + RMLIST: + - "{{ item }}" + register: ADD_PAIRS + with_items: "{{ pairIds }}" + + - name: Check Add Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ ADD_PAIRS.results | json_query(queryError) }}" + + - name: Sync Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SYNCHRONIZE_CONSISTENCY_GROUP" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ 
deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SYNC_CG + + - name: Check Sync Results + debug: + msg: "{{ SYNC_CG.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/check_volume_affinity.yml b/playbook/storage/oceanstor/check_volume_affinity.yml new file mode 100644 index 0000000..a77e56c --- /dev/null +++ b/playbook/storage/oceanstor/check_volume_affinity.yml @@ -0,0 +1,49 @@ +# Check volumes affinity +# Include this check tasks before local protection operations +# +# Required Parameters: +# volumes: a list of volume names +# +# Examples: +# - import_tasks: check_volume_affinity.yml +# vars: +# volumes: ["DJ_AT_0002", "DJ_AT_0003"] +# +# Outputs: +# deviceSn: device SN +# volumeIds: a list of volume IDs +# + +- import_tasks: ../../user/login.yml + +- name: Get Volumes + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?name={{item|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: VOLUMES + with_items: "{{ volumes }}" + +- name: Get Storage SN and Volume IDs + vars: + queryStorageSn: "[? name=='{{item.1}}'].storage_sn" + queryVolumeId: "[? name=='{{item.1}}'].volume_raw_id" + set_fact: + deviceSns: "{{ deviceSns|default([]) + VOLUMES.results[item.0].json.volumes | json_query(queryStorageSn) }}" + volumeIds: "{{ volumeIds|default([]) + VOLUMES.results[item.0].json.volumes | json_query(queryVolumeId) }}" + with_indexed_items: "{{ volumes }}" + +- name: Check Affinity + set_fact: + deviceSn: "{{ deviceSns | unique | first }}" + failed_when: deviceSns | unique | length != 1 + +- name: Show Volume IDs + debug: + msg: + deviceSn: "{{deviceSn}}" + volumeIds: "{{volumeIds}}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/check_volume_pairs.yml b/playbook/storage/oceanstor/check_volume_pairs.yml new file mode 100644 index 0000000..17fc9c3 --- /dev/null +++ b/playbook/storage/oceanstor/check_volume_pairs.yml @@ -0,0 +1,105 @@ +# Check data protection volume pairs +# Include this check tasks before remote protection actions +# +# Required Parameters: +# primaryVolumes: a list of primary volume names +# secondaryVolumes: a list of secondary volume names, must be the same number of volumes with the primary volumes +# +# Examples: +# - import_tasks: check_volume_pairs.yml +# vars: +# primaryVolumes: ["DJ_AT_0002", "DJ_AT_0003"] +# secondaryVolumes: ["DJ_BC_0002", "DJ_BC_0003"] +# +# Outputs: +# devicePair: a pair of device SN: [primaryDeviceSN, secondaryDeviceSN] +# volumePairs: a list of volume pairs: [ [primaryVolumeId1, secondaryVolumeId1], [primaryVolumeId2, secondaryVolumeId2],] +# + +- import_tasks: ../../user/login.yml + +- name: Get Primary Volumes + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?name={{item|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PRIMARY_VOLUMES + with_items: "{{ primaryVolumes }}" + +- name: Get Primary Storage SN and Volume IDs + vars: + queryStorageSn: "[? name=='{{item.1}}'].storage_sn" + queryVolumeId: "[? name=='{{item.1}}'].volume_raw_id" + queryVolumeSize: "[? 
name=='{{item.1}}'].capacity" + set_fact: + primaryStorageSns: "{{ primaryStorageSns|default([]) + PRIMARY_VOLUMES.results[item.0].json.volumes | json_query(queryStorageSn) }}" + primaryVolumeIds: "{{ primaryVolumeIds|default([]) + PRIMARY_VOLUMES.results[item.0].json.volumes | json_query(queryVolumeId) }}" + primaryVolumeSize: "{{ primaryVolumeSize|default([]) + PRIMARY_VOLUMES.results[item.0].json.volumes | json_query(queryVolumeSize) }}" + with_indexed_items: "{{ primaryVolumes }}" + +- name: Check Primary Volumes Affinity + set_fact: + primaryStorageSn: "{{ primaryStorageSns | unique | first }}" + failed_when: primaryStorageSns | unique | length != 1 + +- name: Show Primary Volumes + debug: + msg: + primaryStorageSn: "{{primaryStorageSn}}" + primaryVolumeIds: "{{primaryVolumeIds}}" + primaryVolumeSize: "{{primaryVolumeSize}}" + +- name: Get Secondary Volumes + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?name={{item|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: SECONDARY_VOLUMES + with_items: "{{ secondaryVolumes }}" + +- name: Get Secondary Storage SN and Volume IDs + vars: + queryStorageSn: "[? name=='{{item.1}}'].storage_sn" + queryVolumeId: "[? name=='{{item.1}}'].volume_raw_id" + queryVolumeSize: "[? name=='{{item.1}}'].capacity" + set_fact: + secondaryStorageSns: "{{ secondaryStorageSns|default([]) + SECONDARY_VOLUMES.results[item.0].json.volumes | json_query(queryStorageSn) }}" + secondaryVolumeIds: "{{ secondaryVolumeIds|default([]) + SECONDARY_VOLUMES.results[item.0].json.volumes | json_query(queryVolumeId) }}" + secondaryVolumeSize: "{{ secondaryVolumeSize|default([]) + SECONDARY_VOLUMES.results[item.0].json.volumes | json_query(queryVolumeSize) }}" + with_indexed_items: "{{ secondaryVolumes }}" + +- name: Check Secondary Volumes Affinity + set_fact: + secondaryStorageSn: "{{ secondaryStorageSns | unique | first }}" + failed_when: secondaryStorageSns | unique | length != 1 + +- name: Show Secondary Volumes + debug: + msg: + secondaryStorageSn: "{{secondaryStorageSn}}" + secondaryVolumeIds: "{{secondaryVolumeIds}}" + secondaryVolumeSize: "{{secondaryVolumeSize}}" + failed_when: primaryVolumeIds|length != secondaryVolumeIds|length + +- name: Generate Volume ID Pairs + vars: + LOCALOBJID: "{{ item.1 }}" + REMOTEOBJID: "{{ secondaryVolumeIds[item.0] }}" + set_fact: + volumePairs: "{{ volumePairs|default([]) + [{'LOCALOBJID': LOCALOBJID, 'REMOTEOBJID': REMOTEOBJID}] }}" + failed_when: primaryVolumeSize[item.0]|int != secondaryVolumeSize[item.0]|int + with_indexed_items: "{{ primaryVolumeIds }}" + +- name: Generate Storage SN Pair + set_fact: + devicePair: + primary: "{{ primaryStorageSn }}" + secondary: "{{ secondaryStorageSn }}" diff --git a/playbook/storage/oceanstor/create_hypermetro_cg.yml b/playbook/storage/oceanstor/create_hypermetro_cg.yml new file mode 100644 index 0000000..18e1174 --- /dev/null +++ b/playbook/storage/oceanstor/create_hypermetro_cg.yml @@ -0,0 +1,217 @@ +--- + +# Required Parameters: +# cgName: consistency group name +# primaryVolumes: a list of primary volume names +# secondaryVolumes: a list of secondary volume names, must be the same number of volumes with the primary volumes +# +# Examples: +# --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0000", "DJ_AT_0001"], "secondaryVolumes": ["DJ_BC_0000", "DJ_BC_0001"]}' +# +# Optional Parameters: +# syncSpeed: initial speed, default: 2, options: 
1/low, 2/medium, 3/high, 4/highest + +- name: Create HyperMetro Consistency Group + hosts: localhost + vars: + syncSpeed: 2 + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_pairs.yml + vars: + primaryVolumes: "{{primaryVolumes}}" + secondaryVolumes: "{{secondaryVolumes}}" + + - name: Set primary storage SN + set_fact: + deviceSn: "{{ devicePair.primary }}" + + - import_tasks: login_storage.yml + + - name: Get HyperMetro Domain + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroDomain" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: HyperMetroDomains + + - name: Get HyperMetro Domain ID + vars: + remoteDevices: "{{ item.REMOTEDEVICES }}" + set_fact: + domainId: "{{ item.ID }}" + when: remoteDevices[0].devESN == secondaryStorageSn + with_items: "{{ HyperMetroDomains.json.data }}" + + - name: Show HyperMetro Domain ID + debug: + msg: + domainId: "{{ domainId }}" + + - name: Check Exist HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair?filter=DOMAINID%3A%3A{{domainId}}%20and%20LOCALOBJID%3A%3A{{item.LOCALOBJID}}%20and%20REMOTEOBJID%3A%3A{{item.REMOTEOBJID}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ExistHyperMetroPairs + with_items: "{{ volumePairs }}" + + - name: Get Exist HyperMetro Pair IDs + vars: + queryPairId: "[*].ID" + queryObjId: "[*].LOCALOBJID" + set_fact: + existPairIds: "{{ existPairIds|default([]) + ExistHyperMetroPairs.results[item.0].json.data | json_query(queryPairId) }}" + existObjIds: "{{ existObjIds|default([]) + ExistHyperMetroPairs.results[item.0].json.data | json_query(queryObjId) }}" + with_indexed_items: "{{ volumePairs }}" + + - name: Pause HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair/disable_hcpair" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: PAUSE_PAIRS + with_items: "{{ existPairIds }}" + when: existPairIds|length > 0 + + - name: Show Pause Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ PAUSE_PAIRS.results | json_query(queryError) }}" + when: existPairIds|length > 0 + + - name: Create New HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + DOMAINID: "{{ domainId }}" + HCRESOURCETYPE: 1 + LOCALOBJID: "{{ item.LOCALOBJID }}" + REMOTEOBJID: "{{ item.REMOTEOBJID }}" + register: NewHyperMetroPairs + when: item.LOCALOBJID not in existObjIds + with_items: "{{ volumePairs }}" + + - name: Get HyperMetro Pair IDs + vars: + queryPairId: "[*].json.data.ID" + newPairIds: "{{ NewHyperMetroPairs.results | json_query(queryPairId) }}" + set_fact: + pairIds: "{{ existPairIds + newPairIds }}" + failed_when: 
pairIds|length != volumePairs|length + + - name: Show All HyperMetro Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Check HyperMetro CG name conflicts + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup?filter=NAME%3A%3A{{cgName}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ExistHyperMetroCG + failed_when: ExistHyperMetroCG.json.data is defined + + - name: Create HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + NAME: "{{ cgName }}" + SPEED: "{{ syncSpeed }}" + DOMAINID: "{{ domainId }}" + register: HyperMetroCG + + - name: Get HyperMetro CG ID + set_fact: + cgId: "{{ HyperMetroCG.json.data.ID }}" + + - name: Show HyperMetro CG + debug: + msg: + cgId: "{{ cgId }}" + + - name: Add HyperMetro Pairs to CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/hyperMetro/associate/pair" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + ASSOCIATEOBJID: "{{ item }}" + register: ADD_PAIRS + with_items: "{{ pairIds }}" + + - name: Check Add Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ ADD_PAIRS.results | json_query(queryError) }}" + + - name: Sync HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup/sync" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SYNC + + - name: Check Sync Results + debug: + msg: "{{ SYNC.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/create_replication_cg.yml b/playbook/storage/oceanstor/create_replication_cg.yml new file mode 100644 index 0000000..a1083ce --- /dev/null +++ b/playbook/storage/oceanstor/create_replication_cg.yml @@ -0,0 +1,250 @@ +--- + +# Required Parameters: +# cgName: consistency group name +# primaryVolumes: a list of primary volume names +# secondaryVolumes: a list of secondary volume names, must be the same number of volumes with the primary volumes +# mode: replication mode, options: 1/sync, 2/async +# +# Examples: +# --extra-vars '{"cgName": "cg1", "mode": 2, "primaryVolumes": ["DJ_AT_0000", "DJ_AT_0001"], "secondaryVolumes": ["DJ_BC_0000", "DJ_BC_0001"]}' +# +# Optional Parameters: +# recoveryPolicy: recover policy, default: 1, options: 1/automatic, 2/manual +# syncSpeed: initial speed, default: 2, options: 1/low, 2/medium, 3/high, 4/highest +# +# Examples: +# --extra-vars '{"recoverPolicy": 2, "syncSpeed": 4}' +# +# Optional Parameters (async mode): +# syncType: synchronize type for async replication, default: 3, options: 1/manual, 2/wait after last sync begin, 3/wait after last sync ends +# interval synchronize interval in seconds (when syncType is 
not manual), default: 600, options: 10 ~ 86400 +# compress: enable compress for async replication, default false, options: true, false +# +# Examples: +# --extra-vars '{"syncType": 2, "interval": 300, "compress": true}' +# +# Optional Parameters (sync mode): +# timeout: remote I/O timeout threshold in seconds, default: 10, options: 10~30, or set to 255 to disable timeout +# +# Examples: +# --extra-vars '{"timeout": 30}' + + +- name: Create Replication Consistency Group + hosts: localhost + vars: + syncType: 3 + interval: 600 + compress: false + timeout: 10 + recoveryPolicy: 1 + syncSpeed: 2 + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_pairs.yml + vars: + primaryVolumes: "{{primaryVolumes}}" + secondaryVolumes: "{{secondaryVolumes}}" + + - name: Set storage SN + set_fact: + deviceSn: "{{ devicePair.primary }}" + remoteSn: "{{ devicePair.secondary }}" + + - import_tasks: login_storage.yml + + - name: Get Remote Devices + vars: + queryId: "[? SN=='{{remoteSn}}'].ID" + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/remote_device?range=[0-100]" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: REMOTE_DEVICES + failed_when: ( REMOTE_DEVICES.json.data is not defined ) or (REMOTE_DEVICES.json.data | json_query(queryId) | length != 1) + + - name: Get Remote Device ID + vars: + queryId: "[? SN=='{{remoteSn}}'].ID" + set_fact: + remoteDeviceId: "{{ REMOTE_DEVICES.json.data | json_query(queryId) | first}}" + + - name: Check Exist Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR?filter=LOCALRESID%3A%3A{{item.LOCALOBJID}}%20and%20REMOTERESID%3A%3A{{item.REMOTEOBJID}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ExistReplicationPairs + with_items: "{{ volumePairs }}" + + - name: Get Exist Pair IDs + vars: + queryPairId: "[? REMOTEDEVICEID=='{{remoteDeviceId}}'].ID" + queryObjId: "[? 
REMOTEDEVICEID=='{{remoteDeviceId}}'].LOCALRESID" + set_fact: + existPairIds: "{{ existPairIds|default([]) + ExistReplicationPairs.results[item.0].json.data | json_query(queryPairId) }}" + existObjIds: "{{ existObjIds|default([]) + ExistReplicationPairs.results[item.0].json.data | json_query(queryObjId) }}" + with_indexed_items: "{{ volumePairs }}" + + - name: Split Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR/split" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: SPLIT_PAIRS + with_items: "{{ existPairIds }}" + when: existPairIds|length > 0 + + - name: Show Split Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ SPLIT_PAIRS.results | json_query(queryError) }}" + when: existPairIds|length > 0 + + - name: Create New Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + LOCALRESID: "{{ item.LOCALOBJID }}" + REMOTEDEVICEID: "{{ remoteDeviceId }}" + REMOTERESID: "{{ item.REMOTEOBJID }}" + SYNCHRONIZETYPE: "{{ syncType }}" + RECOVERYPOLICY: "{{ recoveryPolicy }}" + SPEED: "{{ syncSpeed }}" + TIMINGVAL: "{{ interval }}" + REPLICATIONMODEL: "{{ mode }}" + ENABLECOMPRESS: "{{ compress }}" + REMTIMEOUTPERIOD: "{{ timeout }}" + register: NewReplicationPairs + when: item.LOCALOBJID not in existObjIds + with_items: "{{ volumePairs }}" + + - name: Get Replication Pair IDs + vars: + queryPairId: "[*].json.data.ID" + newPairIds: "{{ NewReplicationPairs.results | json_query(queryPairId) }}" + set_fact: + pairIds: "{{ existPairIds + newPairIds }}" + failed_when: pairIds|length != volumePairs|length + + - name: Show All Replication Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Check Replication CG name conflicts + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP?filter=NAME%3A%3A{{cgName}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ExistReplicationCG + failed_when: ExistReplicationCG.json.data is defined + + - name: Create Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + NAME: "{{ cgName }}" + SYNCHRONIZETYPE: "{{ syncType }}" + RECOVERYPOLICY: "{{ recoveryPolicy }}" + SPEED: "{{ syncSpeed }}" + TIMINGVALINSEC: "{{ interval }}" + REPLICATIONMODEL: "{{ mode }}" + ENABLECOMPRESS: "{{ compress }}" + register: ReplicationCG + + - name: Get Replication CG ID + set_fact: + cgId: "{{ ReplicationCG.json.data.ID }}" + + - name: Show Replication CG + debug: + msg: + cgId: "{{ cgId }}" + + - name: Add Replication Pairs to CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/ADD_MIRROR" + 
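# ADD_MIRROR associates replication pairs with the consistency group: each pair ID from pairIds is sent in RMLIST, one request per pair +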
method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + RMLIST: + - "{{ item }}" + register: ADD_PAIRS + with_items: "{{ pairIds }}" + + - name: Check Add Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ ADD_PAIRS.results | json_query(queryError) }}" + + - name: Sync Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SYNCHRONIZE_CONSISTENCY_GROUP" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SYNC + + - name: Check Sync Results + debug: + msg: "{{ SYNC.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/create_snapshots.yml b/playbook/storage/oceanstor/create_snapshots.yml new file mode 100644 index 0000000..688253d --- /dev/null +++ b/playbook/storage/oceanstor/create_snapshots.yml @@ -0,0 +1,75 @@ +--- + +# Required Parameters: +# volumes: a list of primary volumes +# +# Examples: +# --extra-vars '{"volumes": ["DJ_AT_0000", "DJ_AT_0001"]}' +# +# Optional Parameters: +# suffix: snapshot name suffix, default: yyyymmddThhmiss (each snapshot is named volumeName_suffix) +# +# Examples: +# --extra-vars '{"suffix": "20200204", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}' +# + +- name: Create and Activate Snapshots + hosts: localhost + vars: + suffix: "{{ ansible_date_time.iso8601_basic_short }}" + vars_files: + - ../../global.yml + gather_facts: yes + become: no + tasks: + - import_tasks: check_volume_affinity.yml + + - import_tasks: login_storage.yml + + - name: Create Snapshots + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/snapshot" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + NAME: "{{ volumes[item.0] }}_{{ suffix }}" + PARENTTYPE: 11 # 11: LUN, 27: Snapshot + PARENTID: "{{ item.1 }}" + register: SNAPSHOTS + with_indexed_items: "{{ volumeIds }}" + + - name: Get Snapshot IDs + vars: + queryId: "[*].json.data.ID" + set_fact: + snapIds: "{{ SNAPSHOTS.results | json_query(queryId) }}" + + - name: Show Snapshot IDs + debug: + msg: + snapIds: "{{ snapIds }}" + + - name: Activate Snapshots + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/snapshot/activate" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + SNAPSHOTLIST: "{{ snapIds }}" + register: ACTIVATE_SNAPSHOTS + + - name: Show Activate Results + debug: + msg: "{{ ACTIVATE_SNAPSHOTS.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/deactivate_snapshots.yml b/playbook/storage/oceanstor/deactivate_snapshots.yml new file mode 100644 index 0000000..dba9342 --- /dev/null +++ b/playbook/storage/oceanstor/deactivate_snapshots.yml @@ -0,0 +1,85 @@ +--- + +# Required Parameters: +# volumes: a list of primary volumes, can be replaced with: snapshots +# suffix: snapshot name suffix +# +# Examples: +# --extra-vars '{"suffix": "20200204T232229",
"volumes": ["DJ_AT_0000", "DJ_AT_0001"]}' +# +# Generated Parameters (can be overwritten): +# deviceSn: storage device SN +# snapshots: a list of snapshot names +# +# Examples: +# --extra-vars '{"deviceSn": "12323019876312325911", "snapshots": ["DJ_AT_0000_20200204T232229", "DJ_AT_0001_20200204T232229"]}' +# +- name: Deactivate Snapshots + hosts: localhost + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_affinity.yml + when: + - volumes is defined + + - import_tasks: login_storage.yml + + - name: Generate Snapshot Names + vars: + snapName: "{{item}}_{{suffix}}" + set_fact: + snapshots: "{{ snapshots|default([]) + [snapName] }}" + with_items: "{{ volumes }}" + when: + - volumes is defined + - suffix is defined + + - name: Query Snapshots + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SNAPSHOT?filter=NAME%3A%3A{{item|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: SNAPSHOTS + with_items: "{{ snapshots }}" + + - name: Get Snapshot IDs + vars: + queryId: "[*].ID" + set_fact: + snapIds: "{{ snapIds|default([]) + SNAPSHOTS.results[item.0].json.data | json_query(queryId) }}" + with_indexed_items: "{{ snapshots }}" + + - name: Show Snapshot IDs + debug: + msg: + snapIds: "{{ snapIds }}" + + - name: Deactivate Snapshots + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/snapshot/stop" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: DEACTIVATE_SNAPSHOTS + with_items: "{{ snapIds }}" + + - name: Show Deactivate Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ DEACTIVATE_SNAPSHOTS.results | json_query(queryError) }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/delete_hypermetro_cg.yml b/playbook/storage/oceanstor/delete_hypermetro_cg.yml new file mode 100644 index 0000000..48f2c42 --- /dev/null +++ b/playbook/storage/oceanstor/delete_hypermetro_cg.yml @@ -0,0 +1,174 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# cgName: consistency group name +# +# Examples: +# --extra-vars "deviceName='storage1' cgName='cg1'" +# +# Optional Parameters: +# deviceSn: storage device SN +# deletePairs: delete pairs after remove from CG, default: yes, options: yes, no +# +# Examples: +# --extra-vars '{"deviceSn":"12323019876312325911", "cgName":"cg1", "deletePairs": no}' +# +- name: Delete HyperMetro Consistency Group + hosts: localhost + vars: + deletePairs: yes + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: login_storage.yml + + - name: Query HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: HyperMetroCG + failed_when: HyperMetroCG.json.data is not defined + + - name: Get HyperMetro CG ID + set_fact: + cgId: "{{ HyperMetroCG.json.data[0].ID }}" + domainId: 
"{{ HyperMetroCG.json.data[0].DOMAINID }}" + + - name: Show HyperMetro CG ID + debug: + msg: + cgId: "{{ cgId }}" + domainId: "{{ domainId }}" + + - name: Query HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair?filter=CGID%3A%3A{{cgId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: HyperMetroPairs + + - name: Get HyperMetro Pair IDs + vars: + query: "[*].ID" + set_fact: + pairIds: "{{ HyperMetroPairs.json.data | json_query(query) }}" + + - name: Show HyperMetro Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Pause HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup/stop" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: STOP_CG + + - name: Show Pause Results + debug: + msg: "{{ STOP_CG.json.error }}" + + - name: Remove HyperMetro Pairs from CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/hyperMetro/associate/pair?ID={{cgId}}&ASSOCIATEOBJID={{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: REMOVE_PAIRS + with_items: "{{ pairIds }}" + when: pairIds|length > 0 + + - name: Show Remove Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ REMOVE_PAIRS.results | json_query(queryError) }}" + when: pairIds|length > 0 + + - name: Delete HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup/{{cgId}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_CG + + - name: Show Delete CG Results + debug: + msg: "{{ DELETE_CG.json.error }}" + + - name: Delete HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair/{{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_PAIRS + with_items: "{{ pairIds }}" + when: deletePairs and pairIds|length > 0 + + - name: Show Delete Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ DELETE_PAIRS.results | json_query(queryError) }}" + when: deletePairs and pairIds|length > 0 + + - name: Sync HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair/synchronize_hcpair" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: SYNC_PAIRS + with_items: "{{ pairIds }}" + when: not deletePairs and pairIds|length > 0 + + - name: Show Sync Pair Results + vars: + queryError: "[*].json.error" + debug: 
+ msg: "{{ SYNC_PAIRS.results | json_query(queryError) }}" + when: not deletePairs and pairIds|length > 0 \ No newline at end of file diff --git a/playbook/storage/oceanstor/delete_replication_cg.yml b/playbook/storage/oceanstor/delete_replication_cg.yml new file mode 100644 index 0000000..5e5cd3e --- /dev/null +++ b/playbook/storage/oceanstor/delete_replication_cg.yml @@ -0,0 +1,178 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replace with deviceSn +# cgName: consistency group name +# +# Examples: +# --extra-vars "deviceName='storage1' cgName='cg1'" +# +# Optional Parameters: +# deviceSn: storage device SN +# deletePairs: delete pairs after remove from CG, default: yes, options: yes, no +# +# Examples: +# --extra-vars '{"deviceSn":"12323019876312325911", "cgName":"cg1", "deletePairs": no}' +# + +- name: Delete Replication Consistency Group + hosts: localhost + vars: + deletePairs: yes + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: login_storage.yml + + - name: Query Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ReplicationCG + failed_when: ReplicationCG.json.data is not defined + + - name: Get Replication CG ID + set_fact: + cgId: "{{ ReplicationCG.json.data[0].ID }}" + + - name: Show Replication CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Query Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR?filter=CGID%3A%3A{{cgId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ReplicationPairs + + - name: Get Replication Pair IDs + vars: + query: "[*].ID" + set_fact: + pairIds: "{{ ReplicationPairs.json.data | json_query(query) }}" + + - name: Show Replication Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Pause Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SPLIT_CONSISTENCY_GROUP" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SPLIT_CG + + - name: Show Pause Results + debug: + msg: "{{ SPLIT_CG.json.error }}" + + - name: Remove Replication Pairs from CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/DEL_MIRROR" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + RMLIST: + - "{{ item }}" + register: REMOVE_PAIRS + with_items: "{{ pairIds }}" + when: pairIds|length > 0 + + - name: Show Remove Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ REMOVE_PAIRS.results | json_query(queryError) }}" + when: pairIds|length > 0 + + - name: Delete Replication CG + uri: + url: 
"https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP/{{cgId}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_CG + + - name: Show Delete CG Results + debug: + msg: "{{ DELETE_CG.json.error }}" + + - name: Delete Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR/{{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_PAIRS + with_items: "{{ pairIds }}" + when: deletePairs and pairIds|length > 0 + + - name: Show Delete Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ DELETE_PAIRS.results | json_query(queryError) }}" + when: deletePairs and pairIds|length > 0 + + - name: Sync Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR/sync" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: SYNC_PAIRS + with_items: "{{ pairIds }}" + when: not deletePairs and pairIds|length > 0 + + - name: Show Sync Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ SYNC_PAIRS.results | json_query(queryError) }}" + when: not deletePairs and pairIds|length > 0 \ No newline at end of file diff --git a/playbook/storage/oceanstor/delete_snapshots.yml b/playbook/storage/oceanstor/delete_snapshots.yml new file mode 100644 index 0000000..065b187 --- /dev/null +++ b/playbook/storage/oceanstor/delete_snapshots.yml @@ -0,0 +1,82 @@ +--- + +# Required Parameters: +# volumes: a list of primary volumes, can be replaced with: snapshots +# suffix: snapshot name suffix +# +# Examples: +# --extra-vars '{"suffix": "20200204T232229", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}' +# +# Generated Parameters (can be overwritten): +# deviceSn: storage device SN +# snapshots: a list of snapshot names +# +# Examples: +# --extra-vars '{"deviceSn": "12323019876312325911", "snapshots": ["DJ_AT_0000_20200204T232229", "DJ_AT_0001_20200204T232229"]}' +# +- name: Delete Snapshots + hosts: localhost + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_affinity.yml + when: + - volumes is defined + + - import_tasks: login_storage.yml + + - name: Generate Snapshot Names + vars: + snapName: "{{item}}_{{suffix}}" + set_fact: + snapshots: "{{ snapshots|default([]) + [snapName] }}" + with_items: "{{ volumes }}" + when: + - volumes is defined + - suffix is defined + + - name: Query Snapshots + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SNAPSHOT?filter=NAME%3A%3A{{item|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: SNAPSHOTS + with_items: "{{ snapshots }}" + + - name: Get Snapshot IDs + vars: + queryId: "[*].ID" + set_fact: + snapIds: "{{ snapIds|default([]) + SNAPSHOTS.results[item.0].json.data | json_query(queryId) }}" + with_indexed_items: "{{ 
snapshots }}" + + - name: Show Snapshot IDs + debug: + msg: + snapIds: "{{ snapIds }}" + + - name: Delete Snapshots + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/snapshot/{{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_SNAPSHOTS + with_items: "{{ snapIds }}" + + - name: Show Delete Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ DELETE_SNAPSHOTS.results | json_query(queryError) }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/add_volumes_to_clone_cg.yml b/playbook/storage/oceanstor/dorado/add_volumes_to_clone_cg.yml new file mode 100644 index 0000000..3ede54f --- /dev/null +++ b/playbook/storage/oceanstor/dorado/add_volumes_to_clone_cg.yml @@ -0,0 +1,142 @@ +--- + +# Required Parameters: +# cgName: consistency group name +# volumes: a list of volume names +# +# Examples: +# --extra-vars '{"cgName": "cg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"]}' +# +# Generated Parameters (can be overwritten) +# suffix: clone LUN name suffix, default: volumeName_yyyymmddThhmiss +# +# Examples: +# --extra-vars '{"cgName": "cg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"], "suffix": "20200205" }' + +- name: Add Volumes to Clone Consistency Group + hosts: localhost + vars: + suffix: "{{ ansible_date_time.iso8601_basic_short }}" + vars_files: + - ../../../global.yml + gather_facts: yes + become: no + tasks: + - import_tasks: ../check_volume_affinity.yml + + - import_tasks: ../login_storage.yml + + - name: Query Clone CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup?cgType=1&filter=name%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: CLONE_CG + failed_when: CLONE_CG.json.data is not defined + + - name: Get Clone CG ID + set_fact: + cgId: "{{ CLONE_CG.json.data[0].ID }}" + + - name: Show Clone CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Create Clone Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clonepair/create" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + sourceID: "{{ item.1 }}" + name: "{{ volumes[item.0] }}_{{suffix}}" + register: CLONE_PAIRS + with_indexed_items: "{{ volumeIds }}" + + - name: Get Clone Pair IDs + vars: + queryPairId: "[*].json.data.ID" + set_fact: + pairIds: "{{ CLONE_PAIRS.results | json_query(queryPairId) }}" + + - name: Show Clone Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Sync Clone Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clonepair/synchronize" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + copyAction: 3 # 0:start, 1:pause, 2:stop, 3:resume + register: SYNC_PAIRS + with_items: "{{ pairIds }}" + + - name: Show Sync Pair Result + vars: + queryError: "[*].json.error" + debug: + msg: "{{ SYNC_PAIRS.results | json_query(queryError) }}" + + - name: Wait Sync 
Complete + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clonepair/{{item}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: WAIT_SYNC + vars: + syncStatus: "{{ WAIT_SYNC.json.data.syncStatus }}" + retries: 1440 + delay: 60 + until: syncStatus != '1' # 0:unsynced, 1:syncing, 2:normal, 3:sync_paused + with_items: "{{ pairIds }}" + + - name: Add Clone Pair to CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup/create_associate" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + ASSOCIATEOBJTYPE: 57702 + ASSOCIATEOBJID: "{{ item }}" + register: ADD_PAIRS + with_items: "{{ pairIds }}" + + - name: Show Add Pair Result + vars: + queryError: "[*].json.error" + debug: + msg: "{{ ADD_PAIRS.results | json_query(queryError) }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/add_volumes_to_pg.yml b/playbook/storage/oceanstor/dorado/add_volumes_to_pg.yml new file mode 100644 index 0000000..755a166 --- /dev/null +++ b/playbook/storage/oceanstor/dorado/add_volumes_to_pg.yml @@ -0,0 +1,68 @@ +--- + +# Required Parameters: +# pgName: protection group name +# volumes: a list of primary volumes +# +# Examples: +# --extra-vars '{"pgName": "pg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"]}' +# + +- name: Add Volumes to Protection Group + hosts: localhost + vars_files: + - ../../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../check_volume_affinity.yml + + - import_tasks: ../login_storage.yml + + - name: Query PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup?filter=protectGroupName%3A%3A{{pgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: PG + failed_when: PG.json.data is not defined + + - name: Get PG ID + vars: + pg: "{{ PG.json.data[0] }} " + set_fact: + pgId: "{{ pg.protectGroupId }}" + + - name: Show PG ID + debug: + msg: + pgId: "{{ pgId }}" + + - name: Add Volumes to PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup/associate" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + protectGroupId: "{{ pgId }}" + ASSOCIATEOBJTYPE: 11 # 11: LUN + ASSOCIATEOBJID: "{{ item }}" + register: ADD_VOLUMES + with_items: "{{ volumeIds }}" + + - name: Show Add Volume Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ ADD_VOLUMES.results | json_query(queryError) }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/create_clone_cg.yml b/playbook/storage/oceanstor/dorado/create_clone_cg.yml new file mode 100644 index 0000000..fecb768 --- /dev/null +++ b/playbook/storage/oceanstor/dorado/create_clone_cg.yml @@ -0,0 +1,76 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# pgName: protection group name +# +# Examples: +# --extra-vars "deviceName='storage1' pgName='pg1'" + +# Optional 
Parameters: +# deviceSn: storage device SN +# cgName: clone consistency group name, default: pgName_yyyymmddThhmiss +# sync: whether to sync immediately, default: yes, options: yes, no +# syncSpeed: sync speed, default: 2, options: 1:low, 2:medium, 3:high, 4:highest +# +# Examples: +# --extra-vars '{"deviceSn": "21023598258765432076", "pgName": "pg1", "cgName": "cg1", "sync": yes, "syncSpeed": 4}' + +- name: Create Clone Consistency Group + hosts: localhost + vars: + cgName: "{{ pgName }}_{{ ansible_date_time.iso8601_basic_short }}" + sync: yes + syncSpeed: 2 + vars_files: + - ../../../global.yml + gather_facts: yes + become: no + tasks: + - import_tasks: ../login_storage.yml + + - name: Query PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup?filter=protectGroupName%3A%3A{{pgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: PG + failed_when: PG.json.data is not defined + + - name: Get PG ID + vars: + pg: "{{ PG.json.data[0] }} " + set_fact: + pgId: "{{ pg.protectGroupId }}" + + - name: Show PG ID + debug: + msg: + pgId: "{{ pgId }}" + + - name: Create Clone CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + name: "{{ cgName }}" + sourcePgId: "{{ pgId }}" + isNeedSynchronize: "{{ sync }}" + copyRate: "{{ syncSpeed }}" + register: CLONE_CG + + - name: Show Clone CG + debug: + msg: "{{ CLONE_CG.json }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/create_pg.yml b/playbook/storage/oceanstor/dorado/create_pg.yml new file mode 100644 index 0000000..3dce8ca --- /dev/null +++ b/playbook/storage/oceanstor/dorado/create_pg.yml @@ -0,0 +1,81 @@ +--- + +# Required Parameters: +# pgName: protection group name +# volumes: a list of primary volumes +# +# Examples: +# --extra-vars '{"pgName": "pg1", "volumes": ["DJ_AT_0000", "DJ_AT_0001"]}' +# + +- name: Create Protection Group + hosts: localhost + vars_files: + - ../../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../check_volume_affinity.yml + + - import_tasks: ../login_storage.yml + + - name: Check PG name conflicts + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup?filter=protectGroupName%3A%3A{{pgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ExistPG + failed_when: ExistPG.json.data is defined + + - name: Create PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + protectGroupName: "{{ pgName }}" + register: PG + + - name: Get PG ID + set_fact: + pgId: "{{ PG.json.data.protectGroupId }}" + + - name: Show PG ID + debug: + msg: + pgId: "{{ pgId }}" + + - name: Add Volumes to PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup/associate" + method: POST + validate_certs: 
no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + protectGroupId: "{{ pgId }}" + ASSOCIATEOBJTYPE: 11 # 11: LUN + ASSOCIATEOBJID: "{{ item }}" + register: ADD_VOLUMES + with_items: "{{ volumeIds }}" + + - name: Show Add Volume Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ ADD_VOLUMES.results | json_query(queryError) }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/create_snapshot_cg.yml b/playbook/storage/oceanstor/dorado/create_snapshot_cg.yml new file mode 100644 index 0000000..ed0ac9c --- /dev/null +++ b/playbook/storage/oceanstor/dorado/create_snapshot_cg.yml @@ -0,0 +1,71 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# pgName: protection group name +# +# Examples: +# --extra-vars "deviceName='storage1' pgName='pg1'" + +# Optional Parameters: +# deviceSn: storage device SN +# cgName: snapshot consistency group name, default: pgName_YYYYMMDDHH24MISS +# +# Examples: +# --extra-vars "deviceSn='12323019876312325911' pgName='pg1' cgName='pg1_20200204'" + + +- name: Create Snapshot Consistency Group + hosts: localhost + vars: + cgName: "{{ pgName }}_{{ ansible_date_time.iso8601_basic_short }}" + vars_files: + - ../../../global.yml + gather_facts: yes + become: no + tasks: + - import_tasks: ../login_storage.yml + + - name: Query PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup?filter=protectGroupName%3A%3A{{pgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: PG + failed_when: PG.json.data is not defined + + - name: Get PG ID + vars: + pg: "{{ PG.json.data[0] }} " + set_fact: + pgId: "{{ pg.protectGroupId }}" + + - name: Show PG ID + debug: + msg: + pgId: "{{ pgId }}" + + - name: Create Snapshot CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SNAPSHOT_CONSISTENCY_GROUP" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + NAME: "{{ cgName }}" + PARENTID: "{{ pgId }}" + register: SNAP_CG + + - name: Show Snapshot CG + debug: + msg: "{{ SNAP_CG.json }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/delete_clone_cg.yml b/playbook/storage/oceanstor/dorado/delete_clone_cg.yml new file mode 100644 index 0000000..c745fc4 --- /dev/null +++ b/playbook/storage/oceanstor/dorado/delete_clone_cg.yml @@ -0,0 +1,69 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# cgName: clone consistency group name +# +# Examples: +# --extra-vars "deviceName='storage1' cgName='cg1'" +# +# Optional Parameters: +# deviceSn: storage device SN +# deleteReplica: delete replica LUN, default: no, options: yes, no +# +# Examples: +# --extra-vars '{"deviceSn": "21023598258765432076", "cgName": "cg1", "deleteReplica": yes}' +# + +- name: Delete Clone Consistency Group + hosts: localhost + vars: + deleteReplica: no + vars_files: + - ../../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../login_storage.yml + + - name: Query Clone CG + uri: + url: 
"https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup?cgType=1&filter=name%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: CLONE_CG + failed_when: CLONE_CG.json.data is not defined + + - name: Get Clone CG ID + set_fact: + cgId: "{{ CLONE_CG.json.data[0].ID }}" + + - name: Show Clone CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Delete Clone CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + isDeleteDstLun: "{{ deleteReplica }}" + register: DELETE_CG + + - name: Show Delete Result + debug: + msg: "{{ DELETE_CG.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/delete_pg.yml b/playbook/storage/oceanstor/dorado/delete_pg.yml new file mode 100644 index 0000000..9a3edd3 --- /dev/null +++ b/playbook/storage/oceanstor/dorado/delete_pg.yml @@ -0,0 +1,113 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# pgName: protection group name +# +# Examples: +# --extra-vars "deviceName='storage1' pgName='pg1'" +# +# Optional Parameters: +# deviceSn: storage device SN +# +# Examples: +# --extra-vars "deviceSn='12323019876312325911' pgName='pg1'" + +- name: Delete Protection Group + hosts: localhost + vars_files: + - ../../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../login_storage.yml + + - name: Query PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup?filter=protectGroupName%3A%3A{{pgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: PG + failed_when: PG.json.data is not defined + + - name: Check Replicas + vars: + pg: "{{ PG.json.data[0] }} " + replicaNum: "{{ pg.cdpGroupNum|int + pg.cloneGroupNum|int + pg.replicationGroupNum|int + pg.snapshotGroupNum|int + pg.drStarNum|int +pg.hyperMetroGroupNum|int}}" + fail: + msg: "Cannot be deleted, replicas exists" + when: replicaNum|int > 0 + + - name: Get PG ID + vars: + pg: "{{ PG.json.data[0] }} " + set_fact: + pgId: "{{ pg.protectGroupId }}" + lunNum: "{{ pg.lunNum }}" + + - name: Show PG ID + debug: + msg: + pgId: "{{ pgId }}" + + - name: Query Volumes in PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/lun/associate?ASSOCIATEOBJTYPE=57846&ASSOCIATEOBJID={{pgId}}&range=[0-{{lunNum}}]" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: VOLUMES + when: lunNum|int > 0 + + - name: Get Volume IDs + vars: + query: "[*].ID" + set_fact: + volumeIds: "{{ VOLUMES.json.data | json_query(query) }}" + when: lunNum|int > 0 + + - name: Remove Volumes from PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup/associate?protectGroupId={{pgId}}&ASSOCIATEOBJTYPE=11&ASSOCIATEOBJID={{item}}" + method: DELETE + validate_certs: no + headers: + Accept: 
"application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: REMOVE_VOLUMES + with_items: "{{ volumeIds }}" + when: lunNum|int > 0 + + - name: Show Remove Volume Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ REMOVE_VOLUMES.results | json_query(queryError) }}" + when: lunNum|int > 0 + + - name: Delete PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup/{{pgId}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_PG + + - name: Show Delete PG Results + debug: + msg: "{{ DELETE_PG.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/delete_snapshot_cg.yml b/playbook/storage/oceanstor/dorado/delete_snapshot_cg.yml new file mode 100644 index 0000000..e7a04f0 --- /dev/null +++ b/playbook/storage/oceanstor/dorado/delete_snapshot_cg.yml @@ -0,0 +1,61 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# cgName: snapshot consistency group name +# +# Examples: +# --extra-vars "deviceName='storage1' cgName='pg1_20200204'" + +# Optional Parameters: +# deviceSn: storage device SN +# +# Examples: +# --extra-vars "deviceSn='12323019876312325911' cgName='pg1_20200204'" + +- name: Delete Snapshot Consistency Group + hosts: localhost + vars_files: + - ../../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../login_storage.yml + + - name: Query Snapshot CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SNAPSHOT_CONSISTENCY_GROUP?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: SNAP_CG + failed_when: SNAP_CG.json.data is not defined + + - name: Get Snapshot CG ID + set_fact: + cgId: "{{ SNAP_CG.json.data[0].ID }}" + + - name: Show Snapshot CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Delete Snapshot CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SNAPSHOT_CONSISTENCY_GROUP/{{cgId}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_CG + + - name: Show Delete Result + debug: + msg: "{{ DELETE_CG.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/reactivate_snapshot_cg.yml b/playbook/storage/oceanstor/dorado/reactivate_snapshot_cg.yml new file mode 100644 index 0000000..fa287ba --- /dev/null +++ b/playbook/storage/oceanstor/dorado/reactivate_snapshot_cg.yml @@ -0,0 +1,64 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# cgName: snapshot consistency group name +# +# Examples: +# --extra-vars "deviceName='storage1' cgName='pg1_20200204'" + +# Optional Parameters: +# deviceSn: storage device SN +# +# Examples: +# --extra-vars "deviceSn='12323019876312325911' cgName='pg1_20200204'" + +- name: Reactivate Snapshot Consistency Group + hosts: localhost + vars_files: + - ../../../global.yml + gather_facts: yes + become: no + tasks: + - import_tasks: ../login_storage.yml + + - 
name: Query Snapshot CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SNAPSHOT_CONSISTENCY_GROUP?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: SNAP_CG + failed_when: SNAP_CG.json.data is not defined + + - name: Get Snapshot CG ID + set_fact: + cgId: "{{ SNAP_CG.json.data[0].ID }}" + + - name: Show Snapshot CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Reactivate Snapshot CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/snapshot_consistency_group/restore" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: REACTIVATE_CG + + - name: Show Reactivate Result + debug: + msg: "{{ REACTIVATE_CG.json.error }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/remove_volumes_from_clone_cg.yml b/playbook/storage/oceanstor/dorado/remove_volumes_from_clone_cg.yml new file mode 100644 index 0000000..3c33d7b --- /dev/null +++ b/playbook/storage/oceanstor/dorado/remove_volumes_from_clone_cg.yml @@ -0,0 +1,125 @@ +--- + +# Required Parameters: +# cgName: consistency group name +# volumes: a list of volume names +# +# Examples: +# --extra-vars '{"cgName": "cg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"]}' +# +# Optional Parameters: +# deletePairs: delete pairs after remove from CG, default: yes, options: yes, no +# deleteReplica: delete replica LUN, default: no, options: yes, no +# +# Examples: +# --extra-vars '{"cgName": "cg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"], "deletePairs": yes, "deleteReplica": yes}' +# + +- name: Remove Volumes from Clone Consistency Group + hosts: localhost + vars: + deletePairs: yes + deleteReplica: no + vars_files: + - ../../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../check_volume_affinity.yml + + - import_tasks: ../login_storage.yml + + - name: Query Clone CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup?cgType=1&filter=name%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: CLONE_CG + failed_when: CLONE_CG.json.data is not defined + + - name: Get Clone CG ID + set_fact: + cgId: "{{ CLONE_CG.json.data[0].ID }}" + + - name: Show Clone CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Query Clone Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/clonepair/associate?ASSOCIATEOBJTYPE=57703&ASSOCIATEOBJID={{cgId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: CLONE_PAIRS + + - name: Get Clone Pair IDs + vars: + query: "[? 
sourceID=='{{item}}'].ID" + pairId: "{{ CLONE_PAIRS.json.data | json_query(query) }}" + set_fact: + pairIds: "{{ pairIds|default([]) + pairId }}" + with_items: "{{ volumeIds }}" + + - name: Show Clone Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Remove Clone Pairs from CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup/remove_associate" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + ASSOCIATEOBJTYPE: 57702 + ASSOCIATEOBJID: "{{ item }}" + register: REMOVE_PAIRS + with_items: "{{ pairIds }}" + + - name: Show Remove Pair Result + vars: + queryError: "[*].json.error" + debug: + msg: "{{ REMOVE_PAIRS.results | json_query(queryError) }}" + + - name: Delete Clone Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clonepair/{{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + isDeleteDstLun: "{{ deleteReplica }}" + register: DELETE_PAIRS + with_items: "{{ pairIds }}" + when: deletePairs + + - name: Show Delete Pair Result + vars: + queryError: "[*].json.error" + debug: + msg: "{{ DELETE_PAIRS.results | json_query(queryError) }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/remove_volumes_from_pg.yml b/playbook/storage/oceanstor/dorado/remove_volumes_from_pg.yml new file mode 100644 index 0000000..873b58d --- /dev/null +++ b/playbook/storage/oceanstor/dorado/remove_volumes_from_pg.yml @@ -0,0 +1,63 @@ +--- + +# Required Parameters: +# pgName: protection group name +# volumes: a list of primary volumes +# +# Examples: +# --extra-vars '{"pgName": "pg1", "volumes": ["DJ_AT_0002", "DJ_AT_0003"]}' +# + +- name: Remove Volumes From Protection Group + hosts: localhost + vars_files: + - ../../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../check_volume_affinity.yml + + - import_tasks: ../login_storage.yml + + - name: Query PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup?filter=protectGroupName%3A%3A{{pgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: PG + failed_when: PG.json.data is not defined + + - name: Get PG ID + vars: + pg: "{{ PG.json.data[0] }} " + set_fact: + pgId: "{{ pg.protectGroupId }}" + + - name: Show PG ID + debug: + msg: + pgId: "{{ pgId }}" + + - name: Remove Volumes from PG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/protectgroup/associate?protectGroupId={{pgId}}&ASSOCIATEOBJTYPE=11&ASSOCIATEOBJID={{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: REMOVE_VOLUMES + with_items: "{{ volumeIds }}" + + - name: Show Remove Volume Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ REMOVE_VOLUMES.results | json_query(queryError) }}" \ No newline at end of file diff --git a/playbook/storage/oceanstor/dorado/sync_clone_cg.yml 
b/playbook/storage/oceanstor/dorado/sync_clone_cg.yml new file mode 100644 index 0000000..68c0023 --- /dev/null +++ b/playbook/storage/oceanstor/dorado/sync_clone_cg.yml @@ -0,0 +1,110 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with deviceSn +# cgName: clone consistency group name +# +# Examples: +# --extra-vars "deviceName='storage1' cgName='cg1'" + +# Optional Parameters: +# deviceSn: storage device SN +# waitSync: wait until sync complete, default: no, options: yes, no +# syncSpeed: sync speed, options: 1:low, 2:medium, 3:high, 4:highest +# +# Examples: +# --extra-vars '{"deviceSn":"21023598258765432076", "cgName":"cg1", "waitSync": yes, "syncSpeed": 4}' + +- name: Synchronize Clone Consistency Group + hosts: localhost + vars: + waitSync: no + vars_files: + - ../../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../login_storage.yml + + - name: Query Clone CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup?cgType=1&filter=name%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: CLONE_CG + failed_when: CLONE_CG.json.data is not defined + + - name: Get Clone CG ID + vars: + syncStatus: "{{ CLONE_CG.json.data[0].syncStatus }}" # 0:unsynced, 1:syncing, 2:normal, 3:sync_paused + set_fact: + cgId: "{{ CLONE_CG.json.data[0].ID }}" + copyAction: "{{ 0 if syncStatus in ['0', '2'] else 3 if syncStatus == '3' else -1 if syncStatus == '1' else -2 }}" # 0:start, 1:pause, 2:stop, 3:continue + failed_when: syncStatus not in ['0','1','2','3'] + + - name: Show Clone CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Set Sync Speed + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + copyRate: "{{ syncSpeed }}" + register: SYNC_SPEED + when: syncSpeed is defined + + - name: Start Sync Clone CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup/synchronize" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + copyAction: "{{ copyAction }}" + register: SYNC_CG + when: copyAction|int >= 0 + + - name: Show Sync Result + debug: + msg: "{{ SYNC_CG.json.error }}" + when: copyAction|int >= 0 + + - name: Wait Sync Complete + uri: + url: "https://{{deviceHost}}:{{devicePort}}/api/v2/clone_consistentgroup/{{cgId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: CLONE_CG + vars: + syncStatus: "{{ CLONE_CG.json.data.syncStatus }}" + retries: 1440 + delay: 60 + until: syncStatus != '1' # 0:unsynced, 1:syncing, 2:normal, 3:sync_paused + when: waitSync \ No newline at end of file diff --git a/playbook/storage/oceanstor/login_storage.yml b/playbook/storage/oceanstor/login_storage.yml new file mode 100644 index 0000000..0838eaf --- /dev/null +++ 
b/playbook/storage/oceanstor/login_storage.yml @@ -0,0 +1,126 @@ +# Include these login tasks before operating on the DeviceManager REST API +# +# Required Parameters: +# deviceName: Storage name defined in the ../global.yml STORAGES list, can be replaced with deviceSn +# +# Examples: +# - import_tasks: login_storage.yml +# vars: +# deviceName: "Storage.11.150" +# +# Optional Parameters: +# deviceSn: Storage SN defined in the ../global.yml STORAGES list +# +# Examples: +# - import_tasks: login_storage.yml +# vars: +# deviceSn: "12323019876312325911" + +- name: Load Storage Auth Info with Device Name + vars: + querySn: "[? name=='{{deviceName}}'].sn" + queryIpList: "[? name=='{{deviceName}}'].ipList" + queryPort: "[? name=='{{deviceName}}'].port" + queryUser: "[? name=='{{deviceName}}'].user" + queryPswd: "[? name=='{{deviceName}}'].pswd" + set_fact: + deviceSn: "{{ STORAGES | json_query(querySn) | first }}" + deviceIpList: "{{ STORAGES | json_query(queryIpList) | first }}" + devicePort: "{{ STORAGES | json_query(queryPort) | first }}" + deviceUser: "{{ STORAGES | json_query(queryUser) | first }}" + devicePswd: "{{ STORAGES | json_query(queryPswd) | first }}" + failed_when: STORAGES | json_query(querySn) | length != 1 + when: deviceName is defined + +- name: Load Storage Auth Info with Device SN + vars: + queryName: "[? sn=='{{deviceSn}}'].name" + queryIpList: "[? sn=='{{deviceSn}}'].ipList" + queryPort: "[? sn=='{{deviceSn}}'].port" + queryUser: "[? sn=='{{deviceSn}}'].user" + queryPswd: "[? sn=='{{deviceSn}}'].pswd" + set_fact: + deviceName: "{{ STORAGES | json_query(queryName) | first }}" + deviceIpList: "{{ STORAGES | json_query(queryIpList) | first }}" + devicePort: "{{ STORAGES | json_query(queryPort) | first }}" + deviceUser: "{{ STORAGES | json_query(queryUser) | first }}" + devicePswd: "{{ STORAGES | json_query(queryPswd) | first }}" + failed_when: STORAGES | json_query(queryName) | length != 1 + when: + - deviceName is not defined + - deviceSn is defined + +- name: Check IP Address + wait_for: + host: "{{ item }}" + port: "{{ devicePort }}" + timeout: 1 + ignore_errors: true + with_items: "{{ deviceIpList }}" + register: CHECK_IP + when: deviceIpList is defined + +- name: Set Accessible IP + vars: + queryHost: "[? 
failed==`false`].item" + set_fact: + deviceHost: "{{ CHECK_IP.results | json_query(queryHost) | first }}" + failed_when: CHECK_IP.results | json_query(queryHost) | length == 0 + when: deviceIpList is defined + +- name: Load Existing Session + include_vars: + file: "{{BASE_DIR}}/storage/oceanstor/sessions.json" + name: SESSIONS + +- name: Set Token + set_fact: + deviceToken: "{{ SESSIONS[deviceHost].token }}" + deviceSession: "{{ SESSIONS[deviceHost].session }}" + when: deviceHost in SESSIONS + +- name: Validate Existing Session + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/system/" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: SYSTEM + ignore_errors: yes + when: deviceHost in SESSIONS + +- name: Login to Storage + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/xxxxx/sessions" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + body_format: json + body: + username: "{{ deviceUser }}" + password: "{{ devicePswd }}" + scope: 0 + register: LOGIN + failed_when: LOGIN.json.error.code|int != 0 + when: (deviceHost not in SESSIONS) or (SYSTEM.json.error.code|int != 0) + +- name: Update Token + set_fact: + deviceToken: "{{ LOGIN.json.data.iBaseToken }}" + deviceSession: "{{ LOGIN.cookies.session }}" + when: + - (deviceHost not in SESSIONS) or (SYSTEM.json.error.code|int != 0) + - LOGIN.json.data.iBaseToken is defined + +- name: Save Session + local_action: copy content={{ SESSIONS | combine( { deviceHost:{'token':deviceToken,'session':deviceSession} } ) }} dest={{BASE_DIR}}/storage/oceanstor/sessions.json + when: + - (deviceHost not in SESSIONS) or (SYSTEM.json.error.code|int != 0) + - LOGIN.json.data.iBaseToken is defined + diff --git a/playbook/storage/oceanstor/remove_volumes_from_hypermetro_cg.yml b/playbook/storage/oceanstor/remove_volumes_from_hypermetro_cg.yml new file mode 100644 index 0000000..7c1efdc --- /dev/null +++ b/playbook/storage/oceanstor/remove_volumes_from_hypermetro_cg.yml @@ -0,0 +1,183 @@ +--- + +# Required Parameters: +# cgName: consistency group name +# primaryVolumes: a list of primary volume names +# secondaryVolumes: a list of secondary volume names, must be the same number of volumes with the primary volumes +# +# Examples: +# --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"]}' +# +# Optional Parameters: +# deletePairs: delete pairs after remove from CG, default: yes, options: yes, no +# +- name: Remove Volumes from HyperMetro Consistency Group + hosts: localhost + vars: + deletePairs: yes + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_pairs.yml + vars: + primaryVolumes: "{{primaryVolumes}}" + secondaryVolumes: "{{secondaryVolumes}}" + + - name: Set primary storage SN + set_fact: + deviceSn: "{{ devicePair.primary }}" + + - import_tasks: login_storage.yml + + - name: Query HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: 
HyperMetroCG + + - name: Get HyperMetro CG ID + set_fact: + cgId: "{{ HyperMetroCG.json.data[0].ID }}" + domainId: "{{ HyperMetroCG.json.data[0].DOMAINID }}" + failed_when: HyperMetroCG.json.data | length != 1 + + - name: Show HyperMetro CG ID + debug: + msg: + cgId: "{{ cgId }}" + domainId: "{{ domainId }}" + + - name: Query HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair?filter=CGID%3A%3A{{cgId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: HyperMetroPairs + + - name: Get HyperMetro Pair IDs + vars: + query: "[? LOCALOBJID=='{{item.LOCALOBJID}}' && REMOTEOBJID=='{{item.REMOTEOBJID}}'].ID" + pairId: "{{ HyperMetroPairs.json.data | json_query(query) | first }}" + set_fact: + pairIds: "{{ pairIds|default([]) + [pairId] }}" + with_items: "{{ volumePairs }}" + + - name: Show HyperMetro Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Pause HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup/stop" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: STOP_CG + + - name: Show Pause Results + debug: + msg: "{{ STOP_CG.json.error }}" + + - name: Remove HyperMetro Pairs from CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/hyperMetro/associate/pair?ID={{cgId}}&ASSOCIATEOBJID={{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: REMOVE_PAIRS + with_items: "{{ pairIds }}" + + - name: Show Remove Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ REMOVE_PAIRS.results | json_query(queryError) }}" + + - name: Sync HyperMetro CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetro_ConsistentGroup/sync" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SYNC_CG + + - name: Show Sync Results + debug: + msg: "{{ SYNC_CG.json.error }}" + + - name: Delete HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair/{{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_PAIRS + with_items: "{{ pairIds }}" + when: deletePairs and pairIds|length > 0 + + - name: Show Delete Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ DELETE_PAIRS.results | json_query(queryError) }}" + when: deletePairs and pairIds|length > 0 + + - name: Sync HyperMetro Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/HyperMetroPair/synchronize_hcpair" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: 
"application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: SYNC_PAIRS + with_items: "{{ pairIds }}" + when: not deletePairs and pairIds|length > 0 + + - name: Show Sync Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ SYNC_PAIRS.results | json_query(queryError) }}" + when: not deletePairs and pairIds|length > 0 \ No newline at end of file diff --git a/playbook/storage/oceanstor/remove_volumes_from_replication_cg.yml b/playbook/storage/oceanstor/remove_volumes_from_replication_cg.yml new file mode 100644 index 0000000..1bf43e0 --- /dev/null +++ b/playbook/storage/oceanstor/remove_volumes_from_replication_cg.yml @@ -0,0 +1,188 @@ +--- + +# Required Parameters: +# cgName: consistency group name +# primaryVolumes: a list of primary volume names +# secondaryVolumes: a list of secondary volume names, must be the same number of volumes with the primary volumes +# +# Examples: +# --extra-vars '{"cgName": "cg1", "primaryVolumes": ["DJ_AT_0002", "DJ_AT_0003"], "secondaryVolumes": ["DJ_BC_0002", "DJ_BC_0003"]}' +# +# Optional Parameters: +# deletePairs: delete pairs after remove from CG, default: yes, options: yes, no +# +- name: Remove Volumes from Replication Consistency Group + hosts: localhost + vars: + deletePairs: yes + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: check_volume_pairs.yml + vars: + primaryVolumes: "{{primaryVolumes}}" + secondaryVolumes: "{{secondaryVolumes}}" + + - name: Set primary storage SN + set_fact: + deviceSn: "{{ devicePair.primary }}" + + - import_tasks: login_storage.yml + + - name: Query Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ReplicationCG + + - name: Get Replication CG ID + set_fact: + cgId: "{{ ReplicationCG.json.data[0].ID }}" + failed_when: ReplicationCG.json.data | length != 1 + + - name: Show Replication CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Query Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR?filter=CGID%3A%3A{{cgId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ReplicationPairs + + - name: Get Replication Pair IDs + vars: + query: "[? 
LOCALRESID=='{{item.LOCALOBJID}}' && REMOTERESID=='{{item.REMOTEOBJID}}'].ID" + pairId: "{{ ReplicationPairs.json.data | json_query(query) | first }}" + set_fact: + pairIds: "{{ pairIds|default([]) + [pairId] }}" + with_items: "{{ volumePairs }}" + + - name: Show Replication Pair IDs + debug: + msg: + pairIds: "{{ pairIds }}" + + - name: Pause Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SPLIT_CONSISTENCY_GROUP" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SPLIT_CG + + - name: Show Pause Results + debug: + msg: "{{ SPLIT_CG.json.error }}" + + - name: Remove Replication Pairs from CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/DEL_MIRROR" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + RMLIST: + - "{{ item }}" + register: REMOVE_PAIRS + with_items: "{{ pairIds }}" + when: pairIds|length > 0 + + - name: Show Remove Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ REMOVE_PAIRS.results | json_query(queryError) }}" + when: pairIds|length > 0 + + - name: Sync Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SYNCHRONIZE_CONSISTENCY_GROUP" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SYNC_CG + + - name: Check Sync Results + debug: + msg: "{{ SYNC_CG.json.error }}" + + - name: Delete Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR/{{item}}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: DELETE_PAIRS + with_items: "{{ pairIds }}" + when: deletePairs and pairIds|length > 0 + + - name: Show Delete Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ DELETE_PAIRS.results | json_query(queryError) }}" + when: deletePairs and pairIds|length > 0 + + - name: Sync Replication Pairs + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/REPLICATIONPAIR/sync" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ item }}" + register: SYNC_PAIRS + with_items: "{{ pairIds }}" + when: not deletePairs and pairIds|length > 0 + + - name: Show Sync Pair Results + vars: + queryError: "[*].json.error" + debug: + msg: "{{ SYNC_PAIRS.results | json_query(queryError) }}" + when: not deletePairs and pairIds|length > 0 \ No newline at end of file diff --git a/playbook/storage/oceanstor/sessions.json b/playbook/storage/oceanstor/sessions.json new file mode 100644 index 0000000..9e26dfe --- /dev/null +++ b/playbook/storage/oceanstor/sessions.json @@ -0,0 +1 @@ +{} \ No newline at end of file 
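# Note: sessions.json is the DeviceManager session cache maintained by the "Save Session" task in
# login_storage.yml, which merges one entry per storage management IP (deviceHost). A minimal sketch
# of the cached structure after a successful login; the IP, token and session values below are
# illustrative placeholders only, not values produced by any real array:
#
# {
#   "192.0.2.11": {
#     "token": "<iBaseToken returned by the login request>",
#     "session": "<session cookie value>"
#   }
# }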
diff --git a/playbook/storage/oceanstor/switchover_replication_cg.yml b/playbook/storage/oceanstor/switchover_replication_cg.yml new file mode 100644 index 0000000..69271cd --- /dev/null +++ b/playbook/storage/oceanstor/switchover_replication_cg.yml @@ -0,0 +1,144 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replace with deviceSn +# cgName: consistency group name +# +# Examples: +# --extra-vars "deviceName='storage1' cgName='cg1'" +# +# Optional Parameters: +# deviceSn: storage device SN +# +# Examples: +# +# --extra-vars "deviceSn='12323019876312325911' cgName='cg1'" + + +- name: Switchover Replication Consistency Group + hosts: localhost + vars: + deletePairs: yes + vars_files: + - ../../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: login_storage.yml + + - name: Query Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP?filter=NAME%3A%3A{{cgName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + register: ReplicationCG + failed_when: ReplicationCG.json.data is not defined + + - name: Get Replication CG ID + set_fact: + cgId: "{{ ReplicationCG.json.data[0].ID }}" + + - name: Show Replication CG ID + debug: + msg: + cgId: "{{ cgId }}" + + - name: Pause Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SPLIT_CONSISTENCY_GROUP" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SPLIT_CG + + - name: Show Pause Results + debug: + msg: "{{ SPLIT_CG.json.error }}" + + - name: Set Secondary to Read/Write + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP/{{cgId}}" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + SECRESACCESS: 3 # 2: read-only, 3: read/write + register: SET_RW + + - name: Show Set Read/Write Results + debug: + msg: "{{ SET_RW.json.error }}" + + - name: Switchover Replication CG + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SWITCH_GROUP_ROLE" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SWITCH_CG + + - name: Show Switchover CG Results + debug: + msg: "{{ SWITCH_CG.json.error }}" + + - name: Set Secondary to Read-Only + uri: + url: "https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/CONSISTENTGROUP/{{cgId}}" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + SECRESACCESS: 2 # 2: read-only, 3: read/write + register: SET_RO + + - name: Show Set Read-Only Results + debug: + msg: "{{ SET_RO.json.error }}" + + - name: Sync Replication CG + uri: + url: 
"https://{{deviceHost}}:{{devicePort}}/deviceManager/rest/{{deviceSn}}/SYNCHRONIZE_CONSISTENCY_GROUP" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + iBaseToken: "{{ deviceToken }}" + Cookie: "session={{ deviceSession }}" + body_format: json + body: + ID: "{{ cgId }}" + register: SYNC_CG + + - name: Check Sync Results + debug: + msg: "{{ SYNC_CG.json.error }}" \ No newline at end of file diff --git a/playbook/storage/sync_storage.yml b/playbook/storage/sync_storage.yml new file mode 100644 index 0000000..5330079 --- /dev/null +++ b/playbook/storage/sync_storage.yml @@ -0,0 +1,89 @@ +--- + +# Required Parameters: +# deviceName: storage device name, can be replaced with storageId +# +# Examples: +# --extra-vars "deviceName='Storage-5500'" +# +# Generated Parameters (can be overwritten): +# deviceId: storage device ID +# +# Examples: +# --extra-vars "deviceId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" + +- name: Sync Storage + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Query Storages + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.storages }}?start=1&limit=1000" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: DEVICES + when: deviceName is defined + + - name: Get Storage ID + vars: + query: "[?name=='{{ deviceName }}'].id" + set_fact: + deviceId: "{{ DEVICES.json.datas | json_query(query) | first }}" + failed_when: DEVICES.json.datas | json_query(query) | length != 1 + when: deviceName is defined + + - name: Show Params + debug: + msg: + id: "{{deviceId}}" + + - name: Sync Storage + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.storages }}/refresh" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + id: "{{deviceId}}" + register: SYNCTASK + + - name: Wait Sync Start + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.storages }}/{{deviceId}}/detail" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: DETAIL + retries: 60 + delay: 1 + until: DETAIL.json.syn_status|int == 1 # 0/NotSync, 1/Syncing, 2/Synced, 3/Unknown + + - name: Wait Sync Complete + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.storages }}/{{deviceId}}/detail" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: DETAIL + retries: 60 + delay: 5 + until: DETAIL.json.syn_status|int != 1 # 0/NotSync, 1/Syncing, 2/Synced, 3/Unknown \ No newline at end of file diff --git a/playbook/task/get_task_by_id.yml b/playbook/task/get_task_by_id.yml new file mode 100644 index 0000000..0910e8e --- /dev/null +++ b/playbook/task/get_task_by_id.yml @@ -0,0 +1,37 @@ +--- + +# Required Parameters: +# taskId: Task ID +# +# Examples: +# --extra-vars "taskId='bd5f2b70-d416-4d61-8e1a-f763e68dbbe1'" + +- name: Get Task + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Task + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}/{{taskId}}" + method: GET + validate_certs: no + 
headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + + - name: Task Details + vars: + statusMap: { '1': 'Not Start', '2': 'Running', '3': 'Succeeded', '4': 'Partially Succeeded', '5': 'Failed', '6': 'Timeout'} + query: "[?id=='{{ taskId }}'].status" + status: "{{ TASKS.json | json_query(query) | first }}" + debug: + msg: + Detail: "{{ TASKS.json }}" + Status: "{{statusMap[status]}}" diff --git a/playbook/task/list_tasks.yml b/playbook/task/list_tasks.yml new file mode 100644 index 0000000..25dc49c --- /dev/null +++ b/playbook/task/list_tasks.yml @@ -0,0 +1,105 @@ +--- + +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: name, status, start_time, end_time +# sortDir: sort direction, default: asc, options: desc, asc +# taskName: task name +# ownerName: owner name +# status: task status, options: 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout +# startTimeFrom: query tasks which's start time after this, epoch in seconds +# startTimeTo: query tasks which's start time before this, epoch in seconds +# endTimeFrom: query tasks which's end time after this, epoch in seconds +# endTimeTo: query tasks which's end time before this, epoch in seconds +# +# Examples: +# --extra-vars "sortKey='start_time' sortDir='desc'" +# --extra-vars "taskName='Delete volume'" +# --extra-vars "status=3" +# --extra-vars "startTimeFrom=`date -d '12:00:00' +%s` startTimeTo=`date -d '16:00:00' +%s`" +# --extra-vars "endTimeFrom=`date -d '12:00:00' +%s` endTimeTo=`date -d '16:00:00' +%s`" + +- name: List Tasks + hosts: localhost + vars: + pageNo: 1 + pageSize: 10 + params: "{{'limit=' + pageSize|string + '&start=' + (pageSize|int * (pageNo|int - 1)) | string }}" + sortDir: asc + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Set params - sortKey & sortDir + vars: + sortAttr: "{{ ('name_en' if DJ.lang == 'en_US' else 'name_cn') if sortKey == 'name' else sortKey }}" + set_fact: + sortAttr: "{{ sortAttr }}" + params: "{{ params + '&sort_key=' + sortAttr + '&sort_dir=' + sortDir }}" + when: sortKey is defined + + - name: Set params - taskName + set_fact: + params: "{{ params + ('&name_en=' if DJ.lang == 'en_US' else '&name_cn=') + taskName|urlencode }}" + when: taskName is defined + + - name: Set params - ownerName + set_fact: + params: "{{ params + '&owner_name=' + ownerName|urlencode }}" + when: ownerName is defined + + - name: Set params - status + set_fact: + params: "{{ params + '&status=' + status }}" + when: status is defined + + - name: Set params - startTimeFrom + set_fact: + params: "{{ params + '&start_time_from=' + startTimeFrom + '000' }}" + when: startTimeFrom is defined + + - name: Set params - startTimeTo + set_fact: + params: "{{ params + '&start_time_to=' + startTimeTo + '000' }}" + when: startTimeTo is defined + + - name: Set params - endTimeFrom + set_fact: + params: "{{ params + '&end_time_from=' + endTimeFrom + '000' }}" + when: endTimeFrom is defined + + - name: Set params - endTimeTo + set_fact: + params: "{{ params + '&end_time_to=' + endTimeTo + '000' }}" + when: endTimeTo is defined + + - name: Show Param + debug: + msg: "{{params}}" + + - name: List Tasks + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}?{{params}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: 
"application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + + - name: Show Tasks + vars: + objList: "{{ TASKS.json.tasks }}" + totalNum: "{{TASKS.json.total}}" + sortDesc: "{{ 'True' if sortDir == 'desc' else 'False' }}" + debug: + msg: + objList: "{{ ( objList | sort(attribute=sortAttr,reverse=sortDesc) ) if sortAttr is defined else ( objList | sort(reverse=sortDesc) ) }}" + totalNum: "{{ totalNum }}" + pageSize: "{{ pageSize }}" + pageNo: "{{ pageNo }}" diff --git a/playbook/task/wait_task_complete.yml b/playbook/task/wait_task_complete.yml new file mode 100644 index 0000000..8128c98 --- /dev/null +++ b/playbook/task/wait_task_complete.yml @@ -0,0 +1,47 @@ +--- +# Required Parameters: +# taskId: Task ID +# +# Optional Parameters: +# seconds: wait seconds, default 300 +# +# Examples: +# --extra-vars "taskId=bd5f2b70-d416-4d61-8e1a-f763e68dbbe1 seconds=60" + +- name: Wait Task Complete + hosts: localhost + vars: + seconds: 300 + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Wait Task Complete + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}/{{taskId}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + vars: + query: "[?id=='{{ taskId }}'].status" + retries: "{{ (seconds|int / 5) | int }}" + delay: 5 + until: (TASKS.json | json_query(query) | first | int) not in [1, 2] # 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout + + - name: Task Details + vars: + statusMap: { '1': 'Not Start', '2': 'Running', '3': 'Succeeded', '4': 'Partially Succeeded', '5': 'Failed', '6': 'Timeout'} + query: "[?id=='{{ taskId }}'].status" + status: "{{ TASKS.json | json_query(query) | first }}" + debug: + msg: + Detail: "{{ TASKS.json }}" + Result: "{{statusMap[status]}}" + diff --git a/playbook/tier/get_tier_by_name.yml b/playbook/tier/get_tier_by_name.yml new file mode 100644 index 0000000..540186c --- /dev/null +++ b/playbook/tier/get_tier_by_name.yml @@ -0,0 +1,41 @@ +--- + +# Required Parameters: +# tierName: service level name +# +# Examples: +# --extra-vars "tierName='Gold'" +# +- name: Get Tier by name + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Tier by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tiers }}?name={{tierName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TIER + + - name: Check Tier + vars: + query: "[?name=='{{ tierName }}']" + debug: + msg: "No matched service level: '{{ tierName }}'" + when: TIER.json['service-levels'] | json_query(query) | length < 1 + + - name: Show Tier + vars: + query: "[?name=='{{ tierName }}']" + debug: + msg: "{{ TIER.json['service-levels'] | json_query(query) }}" + when: TIER.json['service-levels'] | json_query(query) | length >= 1 diff --git a/playbook/tier/list_tiers.yml b/playbook/tier/list_tiers.yml new file mode 100644 index 0000000..5a986b6 --- /dev/null +++ b/playbook/tier/list_tiers.yml @@ -0,0 +1,109 @@ +--- +# Optional Parameters: +# detail: show detail, default: false, options: true, false +# sortKey: sort key, options: name, total_capacity, created_at +# sortDir: sort direction, default: asc, options: desc, asc +# tierName: service level name 
+# azName: availability zone name +# projectName: project name +# +# Examples: +# --extra-vars "tierName='Gold'" +# --extra-vars "sortKey='total_capacity' sortDir='desc'" +# --extra-vars "azName='room1' projectName='project1'" + +- name: List Tiers + hosts: localhost + vars: + detail: false + params: "detail={{ detail }}" + sortDir: asc + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Set params - sortKey & sortDir + set_fact: + params: "{{ params + '&sort_key=' + sortKey + '&sort_dir=' + sortDir }}" + when: + - sortKey is defined + + - name: Set params - tierName + set_fact: + params: "{{ params + '&name=' + tierName|urlencode }}" + when: + - tierName is defined + + - name: Query AZ by name + vars: + query: "[?name=='{{ azName }}'].id" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.azs }}?az_name={{azName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: AZ + failed_when: AZ.json.az_list | json_query(query) | length != 1 + when: azName is defined + + - name: Set params - azName + vars: + query: "[?name=='{{ azName }}'].id" + set_fact: + params: "{{ params + '&available_zone_id=' + AZ.json.az_list | json_query(query) | first }}" + when: + - azName is defined + - AZ.json.az_list | json_query(query) | length == 1 + + - name: Query project by name + vars: + query: "[?name=='{{ projectName }}'].id" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.projects }}?name={{projectName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PROJECT + failed_when: PROJECT.json.projectList | json_query(query) | length != 1 + when: projectName is defined + + - name: Set params - projectName + vars: + query: "[?name=='{{ projectName }}'].id" + set_fact: + params: "{{ params + '&project_id=' + PROJECT.json.projectList | json_query(query) | first }}" + when: + - projectName is defined + - PROJECT.json.projectList | json_query(query) | length == 1 + + - name: List Tiers + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tiers }}?{{params}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TIERS + + - name: Show Tiers + vars: + objList: "{{ TIERS.json[\"service-levels\"] }}" + totalNum: "{{ TIERS.json[\"service-levels\"] | length }}" + sortDesc: "{{ 'True' if sortDir == 'desc' else 'False' }}" + debug: + msg: + objList: "{{ ( objList | sort(attribute=sortKey,reverse=sortDesc) ) if sortKey is defined else ( objList | sort(reverse=sortDesc) ) }}" + totalNum: "{{ totalNum }}" + pageSize: "{{ totalNum }}" + pageNo: "1" diff --git a/playbook/user/login.yml b/playbook/user/login.yml new file mode 100644 index 0000000..e5a4b56 --- /dev/null +++ b/playbook/user/login.yml @@ -0,0 +1,50 @@ +# Include this tasks at the beginning of playbooks to login to DJ +# +# Required to load var file ../global.yml +# +# Examples: +# vars_files: +# - ../global.yml +# tasks: +# - import_tasks: ../user/login.yml + +- name: Check DJ Session + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}?status=2" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + 
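# Accept 401 as well as 200: a 401 here just means the cached token has expired, and the tasks below then log in again and refresh DJ.token and global.yml. +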
status_code: 200, 401 + register: CHECK + +- name: Login DJ + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.sessions }}" + method: PUT + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + body_format: json + body: + grantType: "password" + userName: "{{DJ.user}}" + value: "{{DJ.pswd}}" + register: SESSION + when: CHECK.status|int == 401 # 401: Unauthorized + +- name: Update DJ Token + set_fact: + DJ: "{{ DJ | combine({'token': SESSION.json.accessSession}) }}" + when: CHECK.status|int == 401 + +- name: Update global.yml + replace: + path: "{{BASE_DIR}}/global.yml" + regexp: '^ token:.*$' + replace: " token: {{DJ.token}}" + when: CHECK.status|int == 401 + diff --git a/playbook/user/logout.yml b/playbook/user/logout.yml new file mode 100644 index 0000000..9c47370 --- /dev/null +++ b/playbook/user/logout.yml @@ -0,0 +1,22 @@ +# Include this tasks at the end of playbooks if need to logout +# +# Required to load var file ../global.yml +# +# Examples: +# +# vars_files: +# - ../global.yml +# tasks: +# - import_tasks: ../user/login.yml +# +# - import_tasks: ../user/logout.yml + +- name: Logout DJ + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.sessions }}" + method: DELETE + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" diff --git a/playbook/util/json2csv.py b/playbook/util/json2csv.py new file mode 100644 index 0000000..e1f9da3 --- /dev/null +++ b/playbook/util/json2csv.py @@ -0,0 +1,43 @@ +#!/usr/bin/python +# -*- encoding:utf-8 -*- + +# Required parameters: +# data: a list of dict +# keys: keys to export +# file: file to export to +# +# Optional parameters: +# sep: separator, default '|' + +import ast +import csv +import argparse + +def parse_args(): + parser = argparse.ArgumentParser() + parser.add_argument('-d', '--data', type=str, required=True, help='A list of dicts') + parser.add_argument('-k', '--keys', type=str, required=True, help='Keys to export') + parser.add_argument('-f', '--file', type=str, required=True, help='File to export to') + parser.add_argument('-s', '--sep', type=str, required=False, default='|', help='Separator, default: |') + return parser.parse_args() + +if __name__ == '__main__': + args = parse_args() + keys = ast.literal_eval(args.keys) + data = ast.literal_eval(args.data) + + csv_file = open(args.file, 'w') + csv_writer = csv.writer(csv_file, delimiter = args.sep) + + csv_writer.writerow(keys) + + for dict in data: + values = [] + for key in keys: + if key in dict: + values.append(dict[key]) + else: + values.append('') + csv_writer.writerow(values) + csv_file.close() +# end if __main__ \ No newline at end of file diff --git a/playbook/util/json2csv.yml b/playbook/util/json2csv.yml new file mode 100644 index 0000000..3f558aa --- /dev/null +++ b/playbook/util/json2csv.yml @@ -0,0 +1,10 @@ +# Required Parameters: +# data: a list of dict +# keys: keys to export +# file: file to export to +# +# Optional Parameters: +# sep: separator, default '|' + +- name: Export to {{file}} + local_action: command python "{{BASE_DIR}}"/util/json2csv.py -d "{{data}}" -k "{{keys}}" -f "{{file}}" -s "{{ sep | default('|') }}" \ No newline at end of file diff --git a/playbook/volume/attach_volumes_to_host.yml b/playbook/volume/attach_volumes_to_host.yml new file mode 100644 index 0000000..b2bfb9f --- /dev/null +++ b/playbook/volume/attach_volumes_to_host.yml @@ -0,0 +1,117 @@ +--- + +# Required Parameters: +# volumeName: 
volume fuzzy name, can be instead with volumeIds +# hostName: host name, can be instead with hostId +# +# Examples: +# --extra-vars "volumeName='ansibleC_' hostName='79rbazhs'" +# +# Generated Parameters (can be overwritten): +# volumeIds: a list of volume IDs +# hostId: host ID +# +# Examples: +# --extra-vars '{"volumeIds": ["9bff610a-6b5b-42db-87ac-dc74bc724525","507dcef9-205a-405c-a794-e791330560a1"]}' \ +# --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" + +- name: Attach Volumes to Host + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Volumes by Fuzzy Name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?limit=1000&start=0&name={{volumeName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: VOLUMES + when: volumeName is defined + + - name: Get Volume ID List + set_fact: + volumeIds: "{{ VOLUMES.json.volumes | json_query('[*].id') }}" + failed_when: VOLUMES.json.volumes | length < 1 + when: volumeName is defined + + - name: Get Host by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostName}}" + register: HOSTS + when: hostName is defined + + - name: Get Host ID + vars: + query: "[?name=='{{ hostName }}'].id" + set_fact: + hostId: "{{ HOSTS.json.hosts | json_query(query) | first }}" + failed_when: HOSTS.json.hosts | json_query(query) | length != 1 + when: hostName is defined + + - name: Show Param + debug: + msg: + volume_ids: "{{ volumeIds }}" + host_id: "{{ hostId }}" + + - name: Attach Volumes + vars: + query: "[?name=='{{ hostName }}'].id" + hostId: "{{ HOSTS.json.hosts | json_query(query) | first }}" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}/host-mapping" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + status_code: 202 + body_format: json + body: + volume_ids: "{{ volumeIds }}" + host_id: "{{ hostId }}" + register: ATTACH_VOLUME + + - name: Wait Task Complete + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}/{{ ATTACH_VOLUME.json.task_id }}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + vars: + query: "[?id=='{{ ATTACH_VOLUME.json.task_id }}'].status" + retries: 60 + delay: 5 + until: (TASKS.json | json_query(query) | first | int) not in [1, 2] # 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout + + - name: Task Details + vars: + statusMap: { '1': 'Not Start', '2': 'Running', '3': 'Succeeded', '4': 'Partially Succeeded', '5': 'Failed', '6': 'Timeout'} + query: "[?id=='{{ ATTACH_VOLUME.json.task_id }}'].status" + status: "{{ TASKS.json | json_query(query) | first }}" + debug: + msg: + Detail: "{{ TASKS.json }}" + Result: "{{statusMap[status]}}" diff --git a/playbook/volume/attach_volumes_to_hostgroup.yml b/playbook/volume/attach_volumes_to_hostgroup.yml new file mode 100644 index 0000000..ead4373 --- /dev/null +++ b/playbook/volume/attach_volumes_to_hostgroup.yml @@ -0,0 +1,117 @@ +--- + +# Required 
Parameters: +# volumeName: volume fuzzy name, can be instead with volumeIds +# hostGroupName: host group name, can be instead with hostGroupId +# +# Examples: +# --extra-vars "volumeName='ansibleC_' hostGroupName='exclusive-df06cf7456dc485d'" +# +# Generated Parameters (can be overwritten): +# volumeIds: a list of volume IDs +# hostGroupId: host group ID +# +# Examples: +# --extra-vars '{"volumeIds": ["9bff610a-6b5b-42db-87ac-dc74bc724525","507dcef9-205a-405c-a794-e791330560a1"]}' \ +# --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" + +- name: Attach Volumes to Host Group + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Volumes by Fuzzy Name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?limit=1000&start=0&name={{volumeName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: VOLUMES + when: volumeName is defined + + - name: Get Volume ID List + set_fact: + volumeIds: "{{ VOLUMES.json.volumes | json_query('[*].id') }}" + failed_when: VOLUMES.json.volumes | length < 1 + when: volumeName is defined + + - name: Get Host Group by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostGroupName}}" + register: HOSTGROUPS + when: hostGroupName is defined + + - name: Get Host Group ID + vars: + query: "[?name=='{{ hostGroupName }}'].id" + set_fact: + hostGroupId: "{{ HOSTGROUPS.json.hostgroups | json_query(query) | first }}" + failed_when: HOSTGROUPS.json.hostgroups | json_query(query) | length != 1 + when: hostGroupName is defined + + - name: Show Param + debug: + msg: + volume_ids: "{{ volumeIds }}" + hostgroup_id: "{{ hostGroupId }}" + + - name: Attach Volumes + vars: + query: "[?name=='{{ hostGroupName }}'].id" + hostGroupId: "{{ HOSTGROUPS.json.hostgroups | json_query(query) | first }}" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}/hostgroup-mapping" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + status_code: 202 + body_format: json + body: + volume_ids: "{{ volumeIds }}" + hostgroup_id: "{{ hostGroupId }}" + register: ATTACH_VOLUME + + - name: Wait Task Complete + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}/{{ ATTACH_VOLUME.json.task_id }}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + vars: + query: "[?id=='{{ ATTACH_VOLUME.json.task_id }}'].status" + retries: 60 + delay: 5 + until: (TASKS.json | json_query(query) | first | int) not in [1, 2] # 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout + + - name: Task Details + vars: + statusMap: { '1': 'Not Start', '2': 'Running', '3': 'Succeeded', '4': 'Partially Succeeded', '5': 'Failed', '6': 'Timeout'} + query: "[?id=='{{ ATTACH_VOLUME.json.task_id }}'].status" + status: "{{ TASKS.json | json_query(query) | first }}" + debug: + msg: + Detail: "{{ TASKS.json }}" + Result: "{{statusMap[status]}}" diff --git a/playbook/volume/create_volume.yml 
b/playbook/volume/create_volume.yml new file mode 100644 index 0000000..08c7b21 --- /dev/null +++ b/playbook/volume/create_volume.yml @@ -0,0 +1,264 @@ +--- + +# Required Parameters: +# volumes: a list of volumes: [{ +# name: volume name or prefix, +# capacity: capacity in GiB, +# count: number of volumes, +# start_suffix: suffix start number, default 0 +# }] +# tierName: service level name, can be instead with tierId +# +# Examples: +# --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible1_", "capacity": 10, "count": 2}] }' +# --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible1_", "capacity": 10, "count": 2, "start_suffix": 2}] }' +# --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible2_", "capacity": 10, "count": 2}, {"name": "ansible3_", "capacity": 10, "count": 2}] }' +# +# Optional Parameters: +# projectName: project name +# azName: availability zone name +# affinity: create multiple volumes on 1 storage, default: true, options: true, false +# affinityVolume: create target volume on the same storage of this affinityVolume +# hostName: map to host +# hostGroupName: map to host group +# +# Examples: +# --extra-vars "projectName='project1'" --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible4_", "capacity": 10, "count": 2}] }' +# --extra-vars "azName='room1'" --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible5_", "capacity": 10, "count": 2}] }' +# --extra-vars "affinity='false'" --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible6_", "capacity": 10, "count": 2}] }' +# --extra-vars "affinityVolume='ansible1_0000'" --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible7_", "capacity": 10, "count": 2}] }' +# --extra-vars "hostName='79rbazhs'" --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible8_", "capacity": 10, "count": 2}] }' +# --extra-vars "hostGroupName='exclusive-df06cf7456dc485d'" --extra-vars '{"tierName": "Gold", "volumes": [{"name": "ansible9_", "capacity": 10, "count": 2}] }' +# +# Generated Parameters (can be overwritten): +# tierId: service level ID +# projectId: project ID +# azId: az ID +# affinityVolumeId affinity volume ID +# hostId: host ID +# hostGroupId host group ID +# +# Examples: +# --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleA_", "capacity": 10, "count": 2}] }' +# --extra-vars "projectId='2AC426C9F4C535A2BEEFAEE9F2EDF740'" --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleB_", "capacity": 10, "count": 2}] }' +# --extra-vars "azId='02B770926FCB3AE5A413E8A74F9A576B'" --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleC_", "capacity": 10, "count": 2}] }' +# --extra-vars "affinityVolumeId='cfe7eb0f-73f8-4110-bff4-07cb46121566'" --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleD_", "capacity": 10, "count": 2}] }' +# --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleE_", "capacity": 10, "count": 2}] }' +# --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" --extra-vars '{"tierId": "bdd129e1-6fbf-4456-91d8-d1fe426bf8e0", "volumes": [{"name": "ansibleF_", "capacity": 10, "count": 2}] }' + +- name: Create Volumes + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Query Tier by Name + uri: + 
url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tiers }}?name={{tierName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TIER + when: tierName is defined + + - name: Get Tier ID + vars: + query: "[?name=='{{ tierName }}'].id" + set_fact: + tierId: "{{ TIER.json[\"service-levels\"] | json_query(query) | first }}" + failed_when: TIER.json['service-levels'] | json_query(query) | length != 1 + when: tierName is defined + + - name: Set Param - volumes, tierId, affinity + set_fact: + params: + volumes: "{{ volumes }}" + service_level_id: "{{ tierId }}" + scheduler_hints: + affinity: "{{ affinity | default('true') }}" + + - name: Query project by name + vars: + query: "[?name=='{{ projectName }}'].id" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.projects }}?name={{projectName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PROJECT + when: projectName is defined + + - name: Get project ID + vars: + query: "[?name=='{{ projectName }}'].id" + set_fact: + projectId: "{{ PROJECT.json.projectList | json_query(query) | first }}" + failed_when: PROJECT.json.projectList | json_query(query) | length != 1 + when: projectName is defined + + - name: Set Param - projectId + set_fact: + params: "{{ params | combine( { 'project_id': projectId } ) }}" + when: projectId is defined + + - name: Query AZ by name + vars: + query: "[?name=='{{ azName }}'].id" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.azs }}?az_name={{azName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: AZ + when: azName is defined + + - name: Get AZ ID + vars: + query: "[?name=='{{ azName }}'].id" + set_fact: + azId: "{{ AZ.json.az_list | json_query(query) | first }}" + failed_when: AZ.json.az_list | json_query(query) | length != 1 + when: azName is defined + + - name: Set Param - azId + set_fact: + params: "{{ params | combine( { 'availability_zone': azId } ) }}" + when: azId is defined + + - name: Query Host by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostName}}" + register: HOST + when: hostName is defined + + - name: Get Host ID + vars: + query: "[?name=='{{ hostName }}'].id" + set_fact: + hostId: "{{ HOST.json.hosts | json_query(query) | first }}" + failed_when: HOST.json.hosts | json_query(query) | length != 1 + when: hostName is defined + + - name: Set Param - hostId + set_fact: + params: "{{ params | combine( { 'mapping': { 'host_id': hostId } } ) }}" + when: hostId is defined + + - name: Query Host Group by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostGroupName}}" + register: HOSTGROUP + when: hostGroupName is defined + + - name: Get Host Group ID + vars: + query: "[?name=='{{ hostGroupName }}'].id" + set_fact: + hostGroupId: "{{ HOSTGROUP.json.hostgroups | 
json_query(query) | first }}" + failed_when: HOSTGROUP.json.hostgroups | json_query(query) | length != 1 + when: hostGroupName is defined + + - name: Set Param - hostGroupId + set_fact: + params: "{{ params | combine( { 'mapping': { 'hostgroup_id': hostGroupId } } ) }}" + when: hostGroupId is defined + + - name: Query Affinity Volume by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?name={{affinityVolume}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: AFFVOL + when: affinityVolume is defined + + - name: Get Affinity Volume ID + vars: + query: "[?name=='{{ affinityVolume }}'].id" + set_fact: + affinityVolumeId: "{{ AFFVOL.json.volumes | json_query(query) | first }}" + failed_when: AFFVOL.json.volumes | json_query(query) | length != 1 + when: affinityVolume is defined + + - name: Set Param - affinityVolumeId + set_fact: + params: "{{ params | combine( { 'scheduler_hints': { 'affinity': true, 'affinity_volume': affinityVolumeId } } ) }}" + when: affinityVolumeId is defined + + - name: Show Param + debug: + msg: "{{params}}" + + - name: Create Volumes + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + status_code: 202 + body_format: json + body: "{{params}}" + register: CREATE_VOLUME + + - name: Wait Task Complete + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}/{{ CREATE_VOLUME.json.task_id }}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + vars: + query: "[?id=='{{ CREATE_VOLUME.json.task_id }}'].status" + retries: 60 + delay: 5 + until: (TASKS.json | json_query(query) | first | int) not in [1, 2] # 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout + + - name: Show Task Details + vars: + statusMap: { '1': 'Not Start', '2': 'Running', '3': 'Succeeded', '4': 'Partially Succeeded', '5': 'Failed', '6': 'Timeout'} + query: "[?id=='{{ CREATE_VOLUME.json.task_id }}'].status" + status: "{{ TASKS.json | json_query(query) | first }}" + debug: + msg: + Detail: "{{ TASKS.json }}" + Result: "{{statusMap[status]}}" + diff --git a/playbook/volume/delete_volumes_by_fuzzy_name.yml b/playbook/volume/delete_volumes_by_fuzzy_name.yml new file mode 100644 index 0000000..35b5561 --- /dev/null +++ b/playbook/volume/delete_volumes_by_fuzzy_name.yml @@ -0,0 +1,72 @@ +--- + +# Required Parameters: +# volumeName: volume name +# +# Examples: +# --extra-vars "volumeName='ansible'" + +- name: Delete Volumes by Fuzzy Name + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: List Volumes + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?limit=1000&start=0&name={{volumeName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: VOLUMES + + - name: Show Volumes + debug: + msg: "{{ VOLUMES.json.volumes | json_query('[*].name') }}" + + - name: Delete Volumes + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}/delete" + method: POST + validate_certs: no + headers: + Accept: 
"application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + status_code: 202 + body_format: json + body: + volume_ids: "{{ VOLUMES.json.volumes | json_query('[*].id') }}" + register: DELETE_VOLUME + + - name: Wait Task Complete + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}/{{ DELETE_VOLUME.json.task_id }}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + vars: + query: "[?id=='{{ DELETE_VOLUME.json.task_id }}'].status" + retries: 60 + delay: 5 + until: (TASKS.json | json_query(query) | first | int) not in [1, 2] # 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout + + - name: Task Details + vars: + statusMap: { '1': 'Not Start', '2': 'Running', '3': 'Succeeded', '4': 'Partially Succeeded', '5': 'Failed', '6': 'Timeout'} + query: "[?id=='{{ DELETE_VOLUME.json.task_id }}'].status" + status: "{{ TASKS.json | json_query(query) | first }}" + debug: + msg: + Detail: "{{ TASKS.json }}" + Result: "{{statusMap[status]}}" \ No newline at end of file diff --git a/playbook/volume/detach_volumes_from_host.yml b/playbook/volume/detach_volumes_from_host.yml new file mode 100644 index 0000000..bba2393 --- /dev/null +++ b/playbook/volume/detach_volumes_from_host.yml @@ -0,0 +1,117 @@ +--- + +# Required Parameters: +# volumeName: volume fuzzy name, can be instead with volumeIds +# hostName: host name, can be instead with hostId +# +# Examples: +# --extra-vars "volumeName='ansibleC_' hostName='79rbazhs'" +# +# Generated Parameters (can be overwritten): +# volumeIds: a list of volume IDs +# hostId: host ID +# +# Examples: +# --extra-vars '{"volumeIds": ["9bff610a-6b5b-42db-87ac-dc74bc724525","507dcef9-205a-405c-a794-e791330560a1"]}' \ +# --extra-vars "hostId='32fb302d-25cb-4e4b-83d6-03f03498a69b'" + +- name: Detach Volumes from Host + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Volumes by Fuzzy Name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?limit=1000&start=0&name={{volumeName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: VOLUMES + when: volumeName is defined + + - name: Get Volume ID List + set_fact: + volumeIds: "{{ VOLUMES.json.volumes | json_query('[*].id') }}" + failed_when: VOLUMES.json.volumes | length < 1 + when: volumeName is defined + + - name: Get Host by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostName}}" + register: HOSTS + when: hostName is defined + + - name: Get Host ID + vars: + query: "[?name=='{{ hostName }}'].id" + set_fact: + hostId: "{{ HOSTS.json.hosts | json_query(query) | first }}" + failed_when: HOSTS.json.hosts | json_query(query) | length != 1 + when: hostName is defined + + - name: Show Param + debug: + msg: + volume_ids: "{{ volumeIds }}" + host_id: "{{ hostId }}" + + - name: Detach Volumes + vars: + query: "[?name=='{{ hostName }}'].id" + hostId: "{{ HOSTS.json.hosts | json_query(query) | first }}" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes 
}}/host-unmapping" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + status_code: 202 + body_format: json + body: + volume_ids: "{{ volumeIds }}" + host_id: "{{ hostId }}" + register: DETACH_VOLUME + + - name: Wait Task Complete + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}/{{ DETACH_VOLUME.json.task_id }}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + vars: + query: "[?id=='{{ DETACH_VOLUME.json.task_id }}'].status" + retries: 60 + delay: 5 + until: (TASKS.json | json_query(query) | first | int) not in [1, 2] # 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout + + - name: Task Details + vars: + statusMap: { '1': 'Not Start', '2': 'Running', '3': 'Succeeded', '4': 'Partially Succeeded', '5': 'Failed', '6': 'Timeout'} + query: "[?id=='{{ DETACH_VOLUME.json.task_id }}'].status" + status: "{{ TASKS.json | json_query(query) | first }}" + debug: + msg: + Detail: "{{ TASKS.json }}" + Result: "{{statusMap[status]}}" diff --git a/playbook/volume/detach_volumes_from_hostgroup.yml b/playbook/volume/detach_volumes_from_hostgroup.yml new file mode 100644 index 0000000..8c62d8a --- /dev/null +++ b/playbook/volume/detach_volumes_from_hostgroup.yml @@ -0,0 +1,117 @@ +--- + +# Required Parameters: +# volumeName: volume fuzzy name, can be instead with volumeIds +# hostGroupName: host group name, can be instead with hostGroupId +# +# Examples: +# --extra-vars "volumeName='ansibleC_' hostGroupName='exclusive-df06cf7456dc485d'" +# +# Generated Parameters (can be overwritten): +# volumeIds: a list of volume IDs +# hostGroupId: host group ID +# +# Examples: +# --extra-vars '{"volumeIds": ["9bff610a-6b5b-42db-87ac-dc74bc724525","507dcef9-205a-405c-a794-e791330560a1"]}' \ +# --extra-vars "hostGroupId='bade27c4-6a27-449c-a9c2-d8d122e9b360'" + +- name: Detach Volumes from Host Group + hosts: localhost + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Get Volumes by Fuzzy Name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?limit=1000&start=0&name={{volumeName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: VOLUMES + when: volumeName is defined + + - name: Get Volume ID List + set_fact: + volumeIds: "{{ VOLUMES.json.volumes | json_query('[*].id') }}" + failed_when: VOLUMES.json.volumes | length < 1 + when: volumeName is defined + + - name: Get Host Group by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostGroupName}}" + register: HOSTGROUPS + when: hostGroupName is defined + + - name: Get Host Group ID + vars: + query: "[?name=='{{ hostGroupName }}'].id" + set_fact: + hostGroupId: "{{ HOSTGROUPS.json.hostgroups | json_query(query) | first }}" + failed_when: HOSTGROUPS.json.hostgroups | json_query(query) | length != 1 + when: hostGroupName is defined + + - name: Show Param + debug: + msg: + volume_ids: "{{ volumeIds }}" + hostgroup_id: "{{ hostGroupId }}" + + - name: Detach Volumes + 
vars: + query: "[?name=='{{ hostGroupName }}'].id" + hostGroupId: "{{ HOSTGROUPS.json.hostgroups | json_query(query) | first }}" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}/hostgroup-unmapping" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + status_code: 202 + body_format: json + body: + volume_ids: "{{ volumeIds }}" + hostgroup_id: "{{ hostGroupId }}" + register: DETACH_VOLUME + + - name: Wait Task Complete + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tasks }}/{{ DETACH_VOLUME.json.task_id }}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TASKS + vars: + query: "[?id=='{{ DETACH_VOLUME.json.task_id }}'].status" + retries: 60 + delay: 5 + until: (TASKS.json | json_query(query) | first | int) not in [1, 2] # 1/not_start, 2/running, 3/succeeded, 4/partially_succeeded, 5/failed, 6/timeout + + - name: Task Details + vars: + statusMap: { '1': 'Not Start', '2': 'Running', '3': 'Succeeded', '4': 'Partially Succeeded', '5': 'Failed', '6': 'Timeout'} + query: "[?id=='{{ DETACH_VOLUME.json.task_id }}'].status" + status: "{{ TASKS.json | json_query(query) | first }}" + debug: + msg: + Detail: "{{ TASKS.json }}" + Result: "{{statusMap[status]}}" diff --git a/playbook/volume/get_volumes_by_fuzzy_name.yml b/playbook/volume/get_volumes_by_fuzzy_name.yml new file mode 100644 index 0000000..d04ce36 --- /dev/null +++ b/playbook/volume/get_volumes_by_fuzzy_name.yml @@ -0,0 +1,44 @@ +--- + +# Required Parameters: +# volumeName: volume name +# +# Examples: +# --extra-vars "volumeName='ansible'" +# +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# +# Examples: +# --extra-vars "pageNo=1 pageSize=100 volumeName='ansible'" + +- name: Get Volumes by Fuzzy Name + hosts: localhost + vars: + pageNo: 1 # page number + pageSize: 10 # page size + pageStart: "{{ pageSize|int * (pageNo|int - 1) }}" + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: List Volumes + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?limit={{pageSize}}&start={{pageStart}}&name={{volumeName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: VOLUMES + + - name: Show Volumes + debug: + msg: + Detail: "{{ VOLUMES.json }}" + Matches: "{{ VOLUMES.json.volumes | json_query('[*].name') }}" diff --git a/playbook/volume/list_volumes.yml b/playbook/volume/list_volumes.yml new file mode 100644 index 0000000..7b2f9bc --- /dev/null +++ b/playbook/volume/list_volumes.yml @@ -0,0 +1,271 @@ +--- + +# Optional Parameters: +# pageNo: page number, default 1 +# pageSize: page size, default: 10 +# sortKey: sort key, options: size +# sortDir: sort direction, default: asc, options: desc, asc +# volumeName: volume name +# volumeWwn: volume WWN +# status: volume status, options: creating, normal, mapping, unmapping, deleting, error, expanding +# allocType: allocate type, options: thin, thick +# attached: is attached, options: true, false +# mode: service mode, options: service, non-service, all +# tierName: service level name +# projectName: project name +# hostName: host name +# hostGroupName: host group name +# deviceName: storage device name +# 
poolName: storage pool name +# +# Examples: +# --extra-vars "projectName='project1'" + +# Generated Parameters (can be overwritten): +# tierId: service level ID +# projectId: project ID +# hostId: host ID +# hostGroupId: host group ID +# deviceId: storage device ID +# poolId: storage pool ID +# +# Examples: +# --extra-vars "projectId='2AC426C9F4C535A2BEEFAEE9F2EDF740'" + +- name: List Volumes + hosts: localhost + vars: + pageNo: 1 + pageSize: 10 + params: "{{'limit=' + pageSize|string + '&start=' + (pageSize|int * (pageNo|int - 1)) | string }}" + sortDir: asc + vars_files: + - ../global.yml + gather_facts: no + become: no + tasks: + - import_tasks: ../user/login.yml + + - name: Set params - sortKey & sortDir + set_fact: + params: "{{ params + '&sort_key=' + sortKey + '&sort_dir=' + sortDir }}" + when: sortKey is defined + + - name: Set params - volumeName + set_fact: + params: "{{ params + '&name=' + volumeName|urlencode }}" + when: volumeName is defined + + - name: Set params - volumeWwn + set_fact: + params: "{{ params + '&volume_wwn=' + volumeWwn }}" + when: volumeWwn is defined + + - name: Set params - status + set_fact: + params: "{{ params + '&status=' + status }}" + when: status is defined + + - name: Set params - allocType + set_fact: + params: "{{ params + '&allocate_type=' + allocType }}" + when: allocType is defined + + - name: Set params - attached + set_fact: + params: "{{ params + '&attached=' + attached }}" + when: attached is defined + + - name: Set params - mode + set_fact: + params: "{{ params + '&query_mode=' + mode }}" + when: mode is defined + + - name: Query Tier by Name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.tiers }}?name={{tierName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: TIER + when: tierName is defined + + - name: Get Tier ID + vars: + query: "[?name=='{{ tierName }}'].id" + set_fact: + tierId: "{{ TIER.json[\"service-levels\"] | json_query(query) | first }}" + failed_when: TIER.json['service-levels'] | json_query(query) | length != 1 + when: tierName is defined + + - name: Set params - tierId + set_fact: + params: "{{ params + '&service_level_id=' + tierId }}" + when: tierId is defined + + - name: Query project by name + vars: + query: "[?name=='{{ projectName }}'].id" + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.projects }}?name={{projectName|urlencode}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: PROJECT + when: projectName is defined + + - name: Get project ID + vars: + query: "[?name=='{{ projectName }}'].id" + set_fact: + projectId: "{{ PROJECT.json.projectList | json_query(query) | first }}" + failed_when: PROJECT.json.projectList | json_query(query) | length != 1 + when: projectName is defined + + - name: Set params - projectId + set_fact: + params: "{{ params + '&project_id=' + projectId }}" + when: projectId is defined + + - name: Query Host by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hosts }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostName}}" + register: HOST + when: hostName is defined + + - name: Get Host ID + vars: + query: "[?name=='{{ hostName }}'].id" + set_fact: + hostId: 
"{{ HOST.json.hosts | json_query(query) | first }}" + failed_when: HOST.json.hosts | json_query(query) | length != 1 + when: hostName is defined + + - name: Set params - hostId + set_fact: + params: "{{ params + '&host_id=' + hostId }}" + when: hostId is defined + + - name: Query Host Group by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.hostgroups }}/summary" + method: POST + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + body_format: json + body: + name: "{{hostGroupName}}" + register: HOSTGROUP + when: hostGroupName is defined + + - name: Get Host Group ID + vars: + query: "[?name=='{{ hostGroupName }}'].id" + set_fact: + hostGroupId: "{{ HOSTGROUP.json.hostgroups | json_query(query) | first }}" + failed_when: HOSTGROUP.json.hostgroups | json_query(query) | length != 1 + when: hostGroupName is defined + + - name: Set params - hostGroupId + set_fact: + params: "{{ params + '&hostgroup_id=' + hostGroupId }}" + when: hostGroupId is defined + + - name: Query Device by name + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.instances}}/{{INVENTORY.storage.className}}?pageNo=1&pageSize=10&condition={\"constraint\":[{\"simple\":{\"name\":\"dataStatus\",\"operator\":\"equal\",\"value\":\"normal\"}},{\"logOp\":\"and\",\"simple\":{\"name\":\"deviceName\",\"operator\":\"equal\",\"value\":\"{{deviceName|urlencode}}\"}}]}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: DEVICE + when: deviceName is defined + + - name: Get Device ID + vars: + query: "[?deviceName=='{{ deviceName }}'].nativeId" + set_fact: + deviceId: "{{ DEVICE.json.objList | json_query(query) | first }}" + failed_when: DEVICE.json.objList | json_query(query) | length != 1 + when: deviceName is defined + + - name: Set params - deviceId + set_fact: + params: "{{ params + '&storage_id=' + deviceId }}" + when: deviceId is defined + + - name: Query Pool by deviceId and poolName + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{URI.instances}}/{{INVENTORY.pool.className}}?pageNo=1&pageSize=10&condition={\"constraint\":[{\"simple\":{\"name\":\"nativeId\",\"operator\":\"contain\",\"value\":\"nedn={{deviceId}}\"}},{\"logOp\":\"and\",\"simple\":{\"name\":\"name\",\"operator\":\"equal\",\"value\":\"{{poolName|urlencode}}\"}}]}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: POOL + when: + - poolName is defined + - deviceId is defined + + - name: Get Pool ID + vars: + query: "[?name=='{{ poolName }}'].poolId" + set_fact: + poolId: "{{ POOL.json.objList | json_query(query) | first }}" + failed_when: POOL.json.objList | json_query(query) | length != 1 + when: + - poolName is defined + - deviceId is defined + + - name: Set params - poolId + set_fact: + params: "{{ params + '&pool_raw_id=' + poolId }}" + when: poolId is defined + + - name: Show Param + debug: + msg: "{{params}}" + + - name: List Volumes + uri: + url: "https://{{DJ.host}}:{{DJ.port}}/rest/{{ URI.volumes }}?{{params}}" + method: GET + validate_certs: no + headers: + Accept: "application/json" + Content-Type: "application/json;charset=utf8" + X-Auth-Token: "{{DJ.token}}" + register: VOLUMES + + - name: Show Volumes + vars: + objList: "{{ VOLUMES.json.volumes }}" + sortDesc: "{{ 'True' if sortDir == 'desc' else 'False' }}" + 
sortAttr: "{{ ('capacity' if sortKey == 'size' else sortKey) if sortKey is defined else 'null' }}" + debug: + msg: + objList: "{{ ( objList | sort(attribute=sortAttr,reverse=sortDesc) ) if sortAttr != 'null' else ( objList | sort(reverse=sortDesc) ) }}" + totalNum: "{{VOLUMES.json.count}}" + pageSize: "{{ pageSize }}" + pageNo: "{{ pageNo }}" diff --git a/setup/ansible-setup.md b/setup/ansible-setup.md new file mode 100644 index 0000000..ff0dddf --- /dev/null +++ b/setup/ansible-setup.md @@ -0,0 +1,90 @@ +# Setup Anbile + +## Setup Ansible for RHEL7, CentOS7, EulerOS2.5 + +### Configure yum source (local image as example) + +```shell +# Mount local image +mount /dev/sr0 /iso + +# Configure yum repository +cat /etc/yum.repos.d/local.repo +[local] +name=local +baseurl=file:///iso +enabled=1 +gpgcheck=0 +``` + +### Download Ansible Binary and Dependencies + +[python-paramiko-2.1.1-4.el7.noarch.rpm](http://mirror.centos.org/centos/7/extras/x86_64/Packages/python-paramiko-2.1.1-4.el7.noarch.rpm) + +[sshpass-1.06-2.el7.x86_64.rpm](http://mirror.centos.org/centos/7/extras/x86_64/Packages/sshpass-1.06-2.el7.x86_64.rpm) + +[python2-jmespath-0.9.0-3.el7.noarch.rpm](http://mirror.centos.org/centos/7/extras/x86_64/Packages/python2-jmespath-0.9.0-3.el7.noarch.rpm) + +[python-httplib2-0.9.2-1.el7.noarch.rpm](http://mirror.centos.org/centos/7/extras/x86_64/Packages/python-httplib2-0.9.2-1.el7.noarch.rpm) + +[python-passlib-1.6.5-2.el7.noarch.rpm](http://mirror.centos.org/centos/7/extras/x86_64/Packages/python-passlib-1.6.5-2.el7.noarch.rpm) + +[ansible-2.4.2.0-2.el7.noarch.rpm](http://mirror.centos.org/centos/7/extras/x86_64/Packages/ansible-2.4.2.0-2.el7.noarch.rpm) + +### Install Ansible + +```shell +yum install python-paramiko-2.1.1-4.el7.noarch.rpm \ + sshpass-1.06-2.el7.x86_64.rpm \ + python2-jmespath-0.9.0-3.el7.noarch.rpm \ + python-httplib2-0.9.2-1.el7.noarch.rpm \ + python-passlib-1.6.5-2.el7.noarch.rpm \ + ansible-2.4.2.0-2.el7.noarch.rpm +``` + +### Download and install the following dependencies if it's required when install + +[libyaml-0.1.4-11.el7_0.x86_64.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/libyaml-0.1.4-11.el7_0.x86_64.rpm) + +[PyYAML-3.10-11.el7.x86_64.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/PyYAML-3.10-11.el7.x86_64.rpm) + +[python-babel-0.9.6-8.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-babel-0.9.6-8.el7.noarch.rpm) + +[python-markupsafe-0.11-10.el7.x86_64.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-markupsafe-0.11-10.el7.x86_64.rpm) + +[python-jinja2-2.7.2-4.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-jinja2-2.7.2-4.el7.noarch.rpm) + +[python-ply-3.4-11.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-ply-3.4-11.el7.noarch.rpm) + +[python-pycparser-2.14-1.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-pycparser-2.14-1.el7.noarch.rpm) + +[python-cffi-1.6.0-5.el7.x86_64.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-cffi-1.6.0-5.el7.x86_64.rpm) + +[python-enum34-1.0.4-1.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-enum34-1.0.4-1.el7.noarch.rpm) + +[python-idna-2.4-1.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-idna-2.4-1.el7.noarch.rpm) + +[python-ipaddress-1.0.16-2.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-ipaddress-1.0.16-2.el7.noarch.rpm) + 
+[python2-pyasn1-0.1.9-7.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python2-pyasn1-0.1.9-7.el7.noarch.rpm) + +[python-setuptools-0.9.8-7.el7.noarch.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python-setuptools-0.9.8-7.el7.noarch.rpm) + +[python2-cryptography-1.7.2-2.el7.x86_64.rpm](http://mirror.centos.org/centos/7/os/x86_64/Packages/python2-cryptography-1.7.2-2.el7.x86_64.rpm) + +```shell +yum install libyaml-0.1.4-11.el7_0.x86_64.rpm \ + PyYAML-3.10-11.el7.x86_64.rpm \ + python-babel-0.9.6-8.el7.noarch.rpm \ + python-markupsafe-0.11-10.el7.x86_64.rpm \ + python-jinja2-2.7.2-4.el7.noarch.rpm \ + python-ply-3.4-11.el7.noarch.rpm \ + python-pycparser-2.14-1.el7.noarch.rpm \ + python-cffi-1.6.0-5.el7.x86_64.rpm \ + python-enum34-1.0.4-1.el7.noarch.rpm \ + python-idna-2.4-1.el7.noarch.rpm \ + python-ipaddress-1.0.16-2.el7.noarch.rpm \ + python2-pyasn1-0.1.9-7.el7.noarch.rpm \ + python-setuptools-0.9.8-7.el7.noarch.rpm \ + python2-cryptography-1.7.2-2.el7.x86_64.rpm +``` \ No newline at end of file
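+
+### Verify the Installation
+
+A quick optional check that the pieces the playbooks depend on are in place: the `ansible` command itself and the `jmespath` library required by the `json_query` filter used throughout the playbooks.
+
+```shell
+# Expect an Ansible 2.4.x version string and a jmespath version with no ImportError
+ansible --version
+python -c "import jmespath; print(jmespath.__version__)"
+```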