From 80228d8aaa3f848ffd338e111345ade249fb466d Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Mon, 12 Aug 2024 13:43:15 -0600 Subject: [PATCH 01/13] Solved merge conflicts --- CHANGELOG.rst | 45 +++++ README.md | 13 +- changelogs/.plugin-cache.yaml | 2 +- changelogs/changelog.yaml | 125 ++++++++++++ ...summary.yml => v1.11.0-beta.1_summary.yml} | 2 +- docs/source/modules/zos_archive.rst | 35 +++- docs/source/modules/zos_backup_restore.rst | 19 +- docs/source/modules/zos_blockinfile.rst | 18 ++ docs/source/modules/zos_copy.rst | 34 ++-- docs/source/modules/zos_data_set.rst | 41 ++-- docs/source/modules/zos_encode.rst | 9 +- docs/source/modules/zos_fetch.rst | 2 +- docs/source/modules/zos_find.rst | 68 ++++++- docs/source/modules/zos_job_submit.rst | 10 +- docs/source/modules/zos_lineinfile.rst | 18 ++ docs/source/modules/zos_mvs_raw.rst | 39 ++++ docs/source/modules/zos_operator.rst | 2 +- docs/source/modules/zos_unarchive.rst | 15 +- docs/source/release_notes.rst | 139 +++++++++++-- .../source/resources/releases_maintenance.rst | 5 + galaxy.yml | 2 +- meta/ibm_zos_core_meta.yml | 4 +- plugins/action/zos_fetch.py | 2 +- plugins/action/zos_job_submit.py | 5 + plugins/action/zos_script.py | 2 +- plugins/doc_fragments/template.py | 2 +- plugins/module_utils/backup.py | 5 +- plugins/module_utils/data_set.py | 74 +++---- plugins/module_utils/vtoc.py | 2 +- plugins/modules/zos_apf.py | 26 ++- plugins/modules/zos_archive.py | 4 +- plugins/modules/zos_blockinfile.py | 2 +- plugins/modules/zos_encode.py | 2 +- plugins/modules/zos_find.py | 1 - plugins/modules/zos_lineinfile.py | 4 +- plugins/modules/zos_mvs_raw.py | 4 +- plugins/modules/zos_unarchive.py | 6 +- .../modules/test_module_security.py | 2 +- tests/functional/modules/test_zos_apf_func.py | 5 - .../modules/test_zos_backup_restore.py | 4 +- .../modules/test_zos_blockinfile_func.py | 50 ++++- .../functional/modules/test_zos_copy_func.py | 93 +++++++++ .../modules/test_zos_mvs_raw_func.py | 184 ++++++++++++++++++ 
43 files changed, 974 insertions(+), 152 deletions(-) rename changelogs/fragments/{v1.10.0_summary.yml => v1.11.0-beta.1_summary.yml} (92%) diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 9efc1ea61..d23ceb7ed 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -4,6 +4,51 @@ ibm.ibm\_zos\_core Release Notes .. contents:: Topics +v1.11.0-beta.1 +============== + +Release Summary +--------------- + +Release Date: '2024-08-05' +This changelog describes all changes made to the modules and plugins included +in this collection. The release date is the date the changelog is created. +For additional details such as required dependencies and availability review +the collection's `release notes `__ + +Minor Changes +------------- + +- zos_apf - Change input to auto-escape 'library' names containing symbols (https://github.com/ansible-collections/ibm_zos_core/pull/1493). +- zos_archive - Added support for GDG and GDS relative name notation to archive data sets. Added support for data set names with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1511). +- zos_backup_restore - Added support for GDS relative name notation to include or exclude data sets when operation is backup. Added support for data set names with special characters like $, /#, and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1527). +- zos_blockinfile - Added support for GDG and GDS relative name notation to use a data set, and to back up in new generations. Added support for data set names with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1516). +- zos_copy - add support for copying generation data sets (GDS) and generation data groups (GDG), as well as using a GDS for backup. (https://github.com/ansible-collections/ibm_zos_core/pull/1564). +- zos_data_set - Added support for GDG and GDS relative name notation to create, delete, catalog and uncatalog a data set.
Added support for data set names with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1504). +- zos_encode - add support for encoding generation data sets (GDS), as well as using a GDS for backup. (https://github.com/ansible-collections/ibm_zos_core/pull/1531). +- zos_fetch - add support for fetching generation data groups and generation data sets. (https://github.com/ansible-collections/ibm_zos_core/pull/1519) +- zos_find - added support for GDG/GDS and special characters (https://github.com/ansible-collections/ibm_zos_core/pull/1518). +- zos_job_submit - Improved the copy to remote mechanism to avoid using deepcopy that could result in failure for some systems. (https://github.com/ansible-collections/ibm_zos_core/pull/1561). +- zos_job_submit - add support for generation data groups and generation data sets as sources for jobs. (https://github.com/ansible-collections/ibm_zos_core/pull/1497) +- zos_lineinfile - Added support for GDG and GDS relative name notation to use a data set, and to back up in new generations. Added support for data set names with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1516). +- zos_mount - Added support for data set names with special characters ($, /#, /- and @). This is for both src and backup data set names. (https://github.com/ansible-collections/ibm_zos_core/pull/1631). +- zos_tso_command - Added support for GDG and GDS relative name notation to use a data set name. Added support for data set names with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1563). +- zos_mvs_raw - Added support for GDG and GDS relative name notation to use a data set. Added support for data set names with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1525). +- zos_mvs_raw - Added support for GDG and GDS relative positive name notation to use a data set.
(https://github.com/ansible-collections/ibm_zos_core/pull/1541). +- zos_mvs_raw - Redesigned the wrappers of DD classes to use the arguments properly. (https://github.com/ansible-collections/ibm_zos_core/pull/1470). +- zos_script - Improved the copy to remote mechanism to avoid using deepcopy that could result in failure for some systems. (https://github.com/ansible-collections/ibm_zos_core/pull/1561). +- zos_unarchive - Added support for data set names with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1511). +- zos_unarchive - Improved the copy to remote mechanism to avoid using deepcopy that could result in failure for some systems. (https://github.com/ansible-collections/ibm_zos_core/pull/1561). + +Bugfixes +-------- + +- module_util/data_set.py - DataSet.data_set_cataloged function previously only returned True or False, but failed to account for exceptions which occurred during the LISTCAT. The fix now raises an MVSCmdExecError if the return code from LISTCAT is too high. (https://github.com/ansible-collections/ibm_zos_core/pull/1535). +- zos_copy - a regression in version 1.4.0 made the module stop automatically computing member names when copying a single file into a PDS/E. Fix now lets a user copy a single file into a PDS/E without adding a member in the dest option. (https://github.com/ansible-collections/ibm_zos_core/pull/1570). +- zos_copy - module would use opercmd to check if a non-existent destination data set is locked. Fix now only checks if the destination is already present. (https://github.com/ansible-collections/ibm_zos_core/pull/1623). +- zos_job_submit - Errors of type UnicodeDecodeError, JSONDecodeError, TypeError and KeyError were not being propagated when encountered; now the error message includes the error type. (https://github.com/ansible-collections/ibm_zos_core/pull/1560). +- zos_mvs_raw - The first character of each line in DD output was missing. The change now includes the first character of each line.
(https://github.com/ansible-collections/ibm_zos_core/pull/1543). + v1.10.0 ======= diff --git a/README.md b/README.md index 629ce15b4..e0d274bad 100644 --- a/README.md +++ b/README.md @@ -36,7 +36,7 @@ To upgrade the collection to the latest available version, run the following com ansible-galaxy collection install ibm.ibm_zos_core --upgrade ``` -
You can also install a specific version of the collection, for example, if you need to downgrade for some reason. Use the following syntax to install version 1.0.0: +
You can also install a specific version of the collection, for example, if you need to install a different version. Use the following syntax to install version 1.0.0: ```sh ansible-galaxy collection install ibm.ibm_zos_core:1.0.0 @@ -123,7 +123,7 @@ environment_vars: ## Testing -All releases, will meet the following test criteria. +All releases will meet the following test criteria. * 100% success for [Functional](https://github.com/ansible-collections/ibm_zos_core/tree/dev/tests/functional) tests. * 100% success for [Unit](https://github.com/ansible-collections/ibm_zos_core/tree/dev/tests/unit) tests. @@ -134,9 +134,9 @@ All releases, will meet the following test criteria.
This release of the collection was tested with following dependencies. * ansible-core v2.15.x -* Python 3.9.x +* Python 3.11.x * IBM Open Enterprise SDK for Python 3.11.x -* IBM Z Open Automation Utilities (ZOAU) 1.3.0.x +* IBM Z Open Automation Utilities (ZOAU) 1.3.1.x * z/OS V2R5 This release introduces case sensitivity for option values and includes a porting guide in the [release notes](https://ibm.github.io/z_ansible_collections_doc/ibm_zos_core/docs/source/release_notes.html) to assist with which option values will need to be updated. @@ -177,9 +177,10 @@ For Galaxy and GitHub users, to see the supported ansible-core versions, review | Version | Status | Release notes | Changelogs | |----------|----------------|---------------|------------| -| 1.11.x | In development | unreleased | unreleased | +| 1.12.x | In development | unreleased | unreleased | +| 1.11.x | In preview | [Release notes](https://ibm.github.io/z_ansible_collections_doc/ibm_zos_core/docs/source/release_notes.html#version-1-11-0-beta.1) | [Changelogs](https://github.com/ansible-collections/ibm_zos_core/blob/v1.11.0-beta.1/CHANGELOG.rst) | | 1.10.x | Current | [Release notes](https://ibm.github.io/z_ansible_collections_doc/ibm_zos_core/docs/source/release_notes.html#version-1-10-0) | [Changelogs](https://github.com/ansible-collections/ibm_zos_core/blob/v1.10.0/CHANGELOG.rst) | -| 1.9.x | Released | [Release notes](https://ibm.github.io/z_ansible_collections_doc/ibm_zos_core/docs/source/release_notes.html#version-1-9-0) | [Changelogs](https://github.com/ansible-collections/ibm_zos_core/blob/v1.9.0/CHANGELOG.rst) | +| 1.9.x | Released | [Release notes](https://ibm.github.io/z_ansible_collections_doc/ibm_zos_core/docs/source/release_notes.html#version-1-9-2) | [Changelogs](https://github.com/ansible-collections/ibm_zos_core/blob/v1.9.2/CHANGELOG.rst) | | 1.8.x | Released | [Release notes](https://ibm.github.io/z_ansible_collections_doc/ibm_zos_core/docs/source/release_notes.html#version-1-8-0) | 
[Changelogs](https://github.com/ansible-collections/ibm_zos_core/blob/v1.8.0/CHANGELOG.rst) | | 1.7.x | Released | [Release notes](https://ibm.github.io/z_ansible_collections_doc/ibm_zos_core/docs/source/release_notes.html#version-1-7-0) | [Changelogs](https://github.com/ansible-collections/ibm_zos_core/blob/v1.7.0/CHANGELOG.rst) | | 1.6.x | Released | [Release notes](https://ibm.github.io/z_ansible_collections_doc/ibm_zos_core/docs/source/release_notes.html#version-1-6-0) | [Changelogs](https://github.com/ansible-collections/ibm_zos_core/blob/v1.6.0/CHANGELOG.rst) | diff --git a/changelogs/.plugin-cache.yaml b/changelogs/.plugin-cache.yaml index e5bd167b7..dcc631cd0 100644 --- a/changelogs/.plugin-cache.yaml +++ b/changelogs/.plugin-cache.yaml @@ -135,4 +135,4 @@ plugins: strategy: {} test: {} vars: {} -version: 1.10.0-beta.1 +version: 1.11.0-beta.1 diff --git a/changelogs/changelog.yaml b/changelogs/changelog.yaml index 4d9648079..3c48425d7 100644 --- a/changelogs/changelog.yaml +++ b/changelogs/changelog.yaml @@ -259,6 +259,131 @@ releases: - 992-fix-sanity4to6.yml - v1.10.0-beta.1_summary.yml release_date: '2024-05-08' + 1.11.0-beta.1: + changes: + bugfixes: + - module_util/data_set.py - DataSet.data_set_cataloged function previously only + returned True or False, but failed to account for exceptions which occurred + during the LISTCAT. The fix now raises an MVSCmdExecError if the return code + from LISTCAT is too high. (https://github.com/ansible-collections/ibm_zos_core/pull/1535). + - zos_copy - a regression in version 1.4.0 made the module stop automatically + computing member names when copying a single file into a PDS/E. Fix now lets + a user copy a single file into a PDS/E without adding a member in the dest + option. (https://github.com/ansible-collections/ibm_zos_core/pull/1570). + - zos_copy - module would use opercmd to check if a non existent destination + data set is locked. Fix now only checks if the destination is already present. 
+ (https://github.com/ansible-collections/ibm_zos_core/pull/1623). + - zos_job_submit - Errors of type UnicodeDecodeError, JSONDecodeError, TypeError + and KeyError were not being propagated when encountered; now the error message + includes the error type. (https://github.com/ansible-collections/ibm_zos_core/pull/1560). + - zos_mvs_raw - The first character of each line in DD output was missing. The + change now includes the first character of each line. (https://github.com/ansible-collections/ibm_zos_core/pull/1543). + minor_changes: + - zos_apf - Change input to auto-escape 'library' names containing symbols (https://github.com/ansible-collections/ibm_zos_core/pull/1493). + - zos_archive - Added support for GDG and GDS relative name notation to archive + data sets. Added support for data set names with special characters like $, + /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1511). + - zos_backup_restore - Added support for GDS relative name notation to include or + exclude data sets when operation is backup. Added support for data set names + with special characters like $, /#, and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1527). + - zos_blockinfile - Added support for GDG and GDS relative name notation to + use a data set, and to back up in new generations. Added support for data set + names with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1516). + - zos_copy - add support for copying generation data sets (GDS) and generation + data groups (GDG), as well as using a GDS for backup. (https://github.com/ansible-collections/ibm_zos_core/pull/1564). + - zos_data_set - Added support for GDG and GDS relative name notation to create, + delete, catalog and uncatalog a data set. Added support for data set names + with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1504).
+ - zos_encode - add support for encoding generation data sets (GDS), as well + as using a GDS for backup. (https://github.com/ansible-collections/ibm_zos_core/pull/1531). + - zos_fetch - add support for fetching generation data groups and generation + data sets. (https://github.com/ansible-collections/ibm_zos_core/pull/1519) + - zos_find - added support for GDG/GDS and special characters (https://github.com/ansible-collections/ibm_zos_core/pull/1518). + - zos_job_submit - Improved the copy to remote mechanism to avoid using deepcopy + that could result in failure for some systems. (https://github.com/ansible-collections/ibm_zos_core/pull/1561). + - zos_job_submit - add support for generation data groups and generation data + sets as sources for jobs. (https://github.com/ansible-collections/ibm_zos_core/pull/1497) + - zos_lineinfile - Added support for GDG and GDS relative name notation to use + a data set, and to back up in new generations. Added support for data set names + with special characters like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1516). + - zos_mount - Added support for data set names with special characters ($, /#, + /- and @). This is for both src and backup data set names. (https://github.com/ansible-collections/ibm_zos_core/pull/1631). + - zos_mvs_raw - Added support for GDG and GDS relative name notation to use + a data set. Added support for data set names with special characters like + $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1525). + - zos_mvs_raw - Added support for GDG and GDS relative positive name notation + to use a data set. (https://github.com/ansible-collections/ibm_zos_core/pull/1541). + - zos_mvs_raw - Redesigned the wrappers of DD classes to use the arguments properly. + (https://github.com/ansible-collections/ibm_zos_core/pull/1470). + - zos_tso_command - Added support for GDG and GDS relative name notation to use + a data set name.
Added support for data set names with special characters + like $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1563). + - zos_script - Improved the copy to remote mechanism to avoid using deepcopy + that could result in failure for some systems. (https://github.com/ansible-collections/ibm_zos_core/pull/1561). + - zos_unarchive - Added support for data set names with special characters like + $, /#, /- and @. (https://github.com/ansible-collections/ibm_zos_core/pull/1511). + - zos_unarchive - Improved the copy to remote mechanism to avoid using deepcopy + that could result in failure for some systems. (https://github.com/ansible-collections/ibm_zos_core/pull/1561). + release_summary: 'Release Date: ''2024-08-05'' + + This changelog describes all changes made to the modules and plugins included + + in this collection. The release date is the date the changelog is created. + + For additional details such as required dependencies and availability review + + the collection''s `release notes `__' + fragments: + - 1170-enhancememt-make-pipeline-217-compatible.yml + - 1323-Update_docstring-dd_statement.yml + - 1334-update-docstring-mcs_cmd.yml + - 1335-update-docstring-template.yml + - 1337-update-docstring-vtoc.yml + - 1338-update-docstring-zoau_version_checker.yml + - 1342-update-docstring-zos_backup_restore.yml + - 1343-update-docstring-zos_blockinline.yml + - 1344-update-docstring-zos_copy.yml + - 1361-update-docstring-zos_operator.yml + - 1362-update-docstring-file.yml + - 1363-update-docstring-system.yml + - 1374-enhancement-zos-find-gdg-gds-special-chars.yml + - 1380-enhancement-add-sybols-zos_apf.yml + - 1384-update-docstring-backup.yml + - 1385-update-docstring-better_arg_parser.yml + - 1386-gdg-symbols-support.yml + - 1387-update-docstring-copy.yml + - 1415-Update_docstring-zos_archive.yml + - 1470-redesign_mvs_raw.yml + - 1484-update-ac-tool-ansible-lint.yml + - 1488-zos_copy-refactor-force.yml + - 1495-default-values-data-set-class.yml + 
- 1496-fix-gds-resolve.yml + - 1497-gdg-support-zos-job-submit.yml + - 1504-zos_data_set-gdg-support.yml + - 1507-zos_operator-docs.yml + - 1511-zos_archive_unarchive-gdg-support.yml + - 1512-bugfix-zos_job_submit-error-type.yml + - 1515-gdg_batch_creation.yml + - 1516-lineinfile_blockinfile_gdgsgds_and_special_character_support.yml + - 1519-zos_fetch-gdg-support.yml + - 1525-mvs_raw_support_gdg_gds_special_character.yml + - 1527-zos_backup-gdg.yml + - 1531-zos_encode_gdg_support.yml + - 1535-raise-error-in-module-util-data_set-function-data_set_cataloged.yml + - 1541-output_mvs_raw_gds_positive_was_false_positive.yml + - 1543-mvs_raw_fix_verbose_and_first_character.yml + - 1550-lower_case_idcams_utility.yml + - 1552-readme-support-updates.yml + - 1553-Console_parallel.yml + - 1561-remove_deep_copy.yml + - 1563-zos_tso_command-gdg-support.yml + - 1564-zos_copy_gdg_support.yml + - 1565-remove-deprecated-pipes-library.yml + - 1570-compute-member-name-zos_copy.yml + - 1623-zos_copy-avoid-opercmd.yml + - 1631-enabler-zos_mount-special-character-support.yml + - v1.11.0-beta.1_summary.yml + release_date: '2024-08-05' 1.2.1: changes: bugfixes: diff --git a/changelogs/fragments/v1.10.0_summary.yml b/changelogs/fragments/v1.11.0-beta.1_summary.yml similarity index 92% rename from changelogs/fragments/v1.10.0_summary.yml rename to changelogs/fragments/v1.11.0-beta.1_summary.yml index 129c40746..5c1d60f94 100644 --- a/changelogs/fragments/v1.10.0_summary.yml +++ b/changelogs/fragments/v1.11.0-beta.1_summary.yml @@ -1,5 +1,5 @@ release_summary: | - Release Date: '2024-06-11' + Release Date: '2024-08-05' This changelog describes all changes made to the modules and plugins included in this collection. The release date is the date the changelog is created. 
For additional details such as required dependencies and availability review diff --git a/docs/source/modules/zos_archive.rst b/docs/source/modules/zos_archive.rst index b900fdcdb..8676d4cb7 100644 --- a/docs/source/modules/zos_archive.rst +++ b/docs/source/modules/zos_archive.rst @@ -35,7 +35,9 @@ src USS file paths should be absolute paths. - MVS data sets supported types are: \ :literal:`SEQ`\ , \ :literal:`PDS`\ , \ :literal:`PDSE`\ . + GDS relative notation is supported. + + MVS data sets supported types are: ``SEQ``, ``PDS``, ``PDSE``. VSAMs are not supported. @@ -126,7 +128,7 @@ dest exclude - Remote absolute path, glob, or list of paths, globs or data set name patterns for the file, files or data sets to exclude from src list and glob expansion. + Remote absolute path, glob, or list of paths, globs, data set name patterns or generation data sets (GDSs) in relative notation for the file, files or data sets to exclude from src list and glob expansion. Patterns (wildcards) can contain one of the following, \`?\`, \`\*\`. 
@@ -348,7 +350,7 @@ Examples name: tar # Archive multiple files - - name: Compress list of files into a zip + - name: Archive list of files into a zip zos_archive: src: - /tmp/archive/foo.txt @@ -358,7 +360,7 @@ Examples name: zip # Archive one data set into terse - - name: Compress data set into a terse + - name: Archive data set into a terse zos_archive: src: "USER.ARCHIVE.TEST" dest: "USER.ARCHIVE.RESULT.TRS" @@ -366,7 +368,7 @@ Examples name: terse # Use terse with different options - - name: Compress data set into a terse, specify pack algorithm and use adrdssu + - name: Archive data set into a terse, specify pack algorithm and use adrdssu zos_archive: src: "USER.ARCHIVE.TEST" dest: "USER.ARCHIVE.RESULT.TRS" @@ -377,7 +379,7 @@ Examples use_adrdssu: true # Use a pattern to store - - name: Compress data set pattern using xmit + - name: Archive data set pattern using xmit zos_archive: src: "USER.ARCHIVE.*" exclude_sources: "USER.ARCHIVE.EXCLUDE.*" @@ -385,6 +387,27 @@ Examples format: name: xmit + - name: Archive multiple GDSs into a terse + zos_archive: + src: + - "USER.GDG(0)" + - "USER.GDG(-1)" + - "USER.GDG(-2)" + dest: "USER.ARCHIVE.RESULT.TRS" + format: + name: terse + format_options: + use_adrdssu: true + + - name: Archive multiple data sets into a new GDS + zos_archive: + src: "USER.ARCHIVE.*" + dest: "USER.GDG(+1)" + format: + name: terse + format_options: + use_adrdssu: true + diff --git a/docs/source/modules/zos_backup_restore.rst b/docs/source/modules/zos_backup_restore.rst index e8216dd3e..68ca12aa5 100644 --- a/docs/source/modules/zos_backup_restore.rst +++ b/docs/source/modules/zos_backup_restore.rst @@ -49,7 +49,9 @@ data_sets include When \ :emphasis:`operation=backup`\ , specifies a list of data sets or data set patterns to include in the backup. - When \ :emphasis:`operation=restore`\ , specifies a list of data sets or data set patterns to include when restoring from a backup. + When *operation=backup*, GDS relative names are supported.
+ + When *operation=restore*, specifies a list of data sets or data set patterns to include when restoring from a backup. The single asterisk, \ :literal:`\*`\ , is used in place of exactly one qualifier. In addition, it can be used to indicate to DFSMSdss that only part of a qualifier has been specified. @@ -66,7 +68,9 @@ data_sets exclude When \ :emphasis:`operation=backup`\ , specifies a list of data sets or data set patterns to exclude from the backup. - When \ :emphasis:`operation=restore`\ , specifies a list of data sets or data set patterns to exclude when restoring from a backup. + When *operation=backup*, GDS relative names are supported. + + When *operation=restore*, specifies a list of data sets or data set patterns to exclude when restoring from a backup. The single asterisk, \ :literal:`\*`\ , is used in place of exactly one qualifier. In addition, it can be used to indicate that only part of a qualifier has been specified." @@ -122,6 +126,8 @@ backup_name There are no enforced conventions for backup names. However, using a common extension like \ :literal:`.dzp`\ for UNIX files and \ :literal:`.DZP`\ for data sets will improve readability. + GDS relative names are supported when *operation=restore*. + | **required**: True | **type**: str @@ -235,6 +241,15 @@ Examples exclude: user.private.* backup_name: MY.BACKUP.DZP + - name: Backup a list of GDSs to data set my.backup.dzp + zos_backup_restore: + operation: backup + data_sets: + include: + - user.gdg(-1) + - user.gdg(0) + backup_name: my.backup.dzp + - name: Backup all datasets matching the pattern USER.** to UNIX file /tmp/temp_backup.dzp, ignore recoverable errors. zos_backup_restore: operation: backup diff --git a/docs/source/modules/zos_blockinfile.rst b/docs/source/modules/zos_blockinfile.rst index 8cd6f756c..3bf2cd85b 100644 --- a/docs/source/modules/zos_blockinfile.rst +++ b/docs/source/modules/zos_blockinfile.rst @@ -33,6 +33,8 @@ src The USS file must be an absolute pathname.
+ Generation data set (GDS) relative name of a generation already created, e.g. ``SOME.CREATION(-1)``. + | **required**: True | **type**: str @@ -122,6 +124,8 @@ backup The backup file name will be returned on either success or failure of module execution such that data can be retrieved. + Use a generation data set (GDS) relative positive name, e.g. ``SOME.CREATION(+1)``. + | **required**: False | **type**: bool | **default**: False @@ -281,6 +285,20 @@ Examples marker_end: "End Ansible Block Insertion 2" block: "{{ CONTENT }}" + - name: Add a block to a GDS + zos_blockinfile: + src: TEST.SOME.CREATION(0) + insertafter: EOF + block: "{{ CONTENT }}" + + - name: Add a block to a data set and back up in a new generation of a GDS + zos_blockinfile: + src: SOME.CREATION.TEST + insertbefore: BOF + backup: true + backup_name: CREATION.GDS(+1) + block: "{{ CONTENT }}" + diff --git a/docs/source/modules/zos_copy.rst b/docs/source/modules/zos_copy.rst index 69639e39a..f424548f7 100644 --- a/docs/source/modules/zos_copy.rst +++ b/docs/source/modules/zos_copy.rst @@ -69,6 +69,8 @@ backup_name If \ :emphasis:`backup\_name`\ is a generation data set (GDS), it must be a relative positive name (for example, \ :literal:`HLQ.USER.GDG(+1)`\ ). + | **required**: False | **type**: str @@ -99,7 +101,9 @@ dest If \ :literal:`dest`\ is a nonexistent data set, the attributes assigned will depend on the type of \ :literal:`src`\ . If \ :literal:`src`\ is a USS file, \ :literal:`dest`\ will have a Fixed Block (FB) record format and the remaining attributes will be computed. If \ :emphasis:`is\_binary=true`\ , \ :literal:`dest`\ will have a Fixed Block (FB) record format with a record length of 80, block size of 32760, and the remaining attributes will be computed.
If \ :emphasis:`executable=true`\ ,\ :literal:`dest`\ will have an Undefined (U) record format with a record length of 0, block size of 32760, and the remaining attributes will be computed. - When \ :literal:`dest`\ is a data set, precedence rules apply. If \ :literal:`dest\_data\_set`\ is set, this will take precedence over an existing data set. If \ :literal:`dest`\ is an empty data set, the empty data set will be written with the expectation its attributes satisfy the copy. Lastly, if no precendent rule has been exercised, \ :literal:`dest`\ will be created with the same attributes of \ :literal:`src`\ . + If ``src`` is a file and ``dest`` a partitioned data set, ``dest`` does not need to include a member in its value; the module can automatically compute the resulting member name from ``src``. + + When ``dest`` is a data set, precedence rules apply. If ``dest_data_set`` is set, this will take precedence over an existing data set. If ``dest`` is an empty data set, the empty data set will be written with the expectation its attributes satisfy the copy. Lastly, if no precedent rule has been exercised, ``dest`` will be created with the same attributes as ``src``. When the \ :literal:`dest`\ is an existing VSAM (KSDS) or VSAM (ESDS), then source can be an ESDS, a KSDS or an RRDS. The VSAM (KSDS) or VSAM (ESDS) \ :literal:`dest`\ will be deleted and recreated following the process outlined in the \ :literal:`volume`\ option. @@ -107,11 +111,11 @@ dest When \ :literal:`dest`\ is and existing VSAM (LDS), then source must be an LDS. The VSAM (LDS) will be deleted and recreated following the process outlined in the \ :literal:`volume`\ option. - \ :literal:`dest`\ can be a previously allocated generation data set (GDS) or a new GDS. + ``dest`` can be a previously allocated generation data set (GDS) or a new GDS. - When \ :literal:`dest`\ is a generation data group (GDG), \ :literal:`src`\ must be a GDG too.
The copy will allocate successive new generations in \ :literal:`dest`\ , the module will verify it has enough available generations before starting the copy operations. + When ``dest`` is a generation data group (GDG), ``src`` must be a GDG too. The copy will allocate successive new generations in ``dest``; the module will verify it has enough available generations before starting the copy operations. - When \ :literal:`dest`\ is a data set, you can override storage management rules by specifying \ :literal:`volume`\ if the storage class being used has GUARANTEED\_SPACE=YES specified, otherwise, the allocation will fail. See \ :literal:`volume`\ for more volume related processes. + When ``dest`` is a data set, you can override storage management rules by specifying ``volume`` if the storage class being used has GUARANTEED_SPACE=YES specified; otherwise, the allocation will fail. See ``volume`` for more volume related processes. | **required**: True | **type**: str @@ -308,6 +312,10 @@ src If \ :literal:`src`\ is a generation data group (GDG), \ :literal:`dest`\ can be another GDG or a USS directory. + If ``src`` is a generation data set (GDS), it must be a previously allocated one. + Wildcards can be used to copy multiple PDS/PDSE members to another PDS/PDSE. Required unless using \ :literal:`content`\ . @@ -346,6 +354,8 @@ dest_data_set Some attributes only apply when \ :literal:`dest`\ is a generation data group (GDG). | **required**: False | **type**: dict @@ -483,18 +493,18 @@ dest_data_set limit - Sets the \ :emphasis:`limit`\ attribute for a GDG. + Sets the *limit* attribute for a GDG. Specifies the maximum number, from 1 to 255(up to 999 if extended), of generations that can be associated with the GDG being defined. - \ :emphasis:`limit`\ is required when \ :emphasis:`type=gdg`\ .
+ *limit* is required when *type=gdg*. | **required**: False | **type**: int empty - Sets the \ :emphasis:`empty`\ attribute for a GDG. + Sets the *empty* attribute for a GDG. If false, removes only the oldest GDS entry when a new GDS is created that causes GDG limit to be exceeded. @@ -505,7 +515,7 @@ dest_data_set scratch - Sets the \ :emphasis:`scratch`\ attribute for a GDG. + Sets the *scratch* attribute for a GDG. Specifies what action is to be taken for a generation data set located on disk volumes when the data set is uncataloged from the GDG base as a result of EMPTY/NOEMPTY processing. @@ -514,16 +524,16 @@ dest_data_set purge - Sets the \ :emphasis:`purge`\ attribute for a GDG. + Sets the *purge* attribute for a GDG. - Specifies whether to override expiration dates when a generation data set (GDS) is rolled off and the \ :literal:`scratch`\ option is set. + Specifies whether to override expiration dates when a generation data set (GDS) is rolled off and the ``scratch`` option is set. | **required**: False | **type**: bool extended - Sets the \ :emphasis:`extended`\ attribute for a GDG. + Sets the *extended* attribute for a GDG. If false, allow up to 255 generation data sets (GDSs) to be associated with the GDG. @@ -534,7 +544,7 @@ dest_data_set fifo - Sets the \ :emphasis:`fifo`\ attribute for a GDG. + Sets the *fifo* attribute for a GDG. If false, the order is the newest GDS defined to the oldest GDS. This is the default value. diff --git a/docs/source/modules/zos_data_set.rst b/docs/source/modules/zos_data_set.rst index caed66ba9..3b1b64870 100644 --- a/docs/source/modules/zos_data_set.rst +++ b/docs/source/modules/zos_data_set.rst @@ -59,7 +59,10 @@ state If \ :emphasis:`state=absent`\ and \ :emphasis:`volumes`\ is provided, and the data set is found in the catalog, the module compares the catalog volume attributes to the provided \ :emphasis:`volumes`\ . 
If the volume attributes are different, the cataloged data set will be uncataloged temporarily while the requested data set to be deleted is cataloged. The module will catalog the original data set on completion; if the attempts to catalog fail, no action is taken. Module completes successfully with \ :emphasis:`changed=False`\ .

- If \ :emphasis:`state=absent`\ and \ :emphasis:`type=gdg`\ and the GDG base has active generations the module will complete successfully with \ :emphasis:`changed=False`\ . To remove it option \ :emphasis:`force`\ needs to be used. If the GDG base does not have active generations the module will complete successfully with \ :emphasis:`changed=True`\ .
+ If *state=absent* and *type=gdg* and the GDG base has active generations, the module will complete successfully with *changed=False*. To remove it, option *force* needs to be used. If the GDG base does not have active generations, the module will complete successfully with *changed=True*.
+
+
+ If *state=present* and the data set does not exist on the managed node, create and catalog the data set; module completes successfully with *changed=True*.

 If \ :emphasis:`state=present`\ and the data set does not exist on the managed node, create and catalog the data set, module completes successfully with \ :emphasis:`changed=True`\ .

@@ -239,7 +242,7 @@ key_length

 empty
- Sets the \ :emphasis:`empty`\ attribute for Generation Data Groups.
+ Sets the *empty* attribute for Generation Data Groups.

 If false, removes only the oldest GDS entry when a new GDS is created that causes GDG limit to be exceeded.

@@ -252,7 +255,7 @@ empty

 extended
- Sets the \ :emphasis:`extended`\ attribute for Generation Data Groups.
+ Sets the *extended* attribute for Generation Data Groups.

 If false, allow up to 255 generation data sets (GDSs) to be associated with the GDG.

@@ -265,7 +268,7 @@ extended

 fifo
- Sets the \ :emphasis:`fifo`\ attribute for Generation Data Groups.
+ Sets the *fifo* attribute for Generation Data Groups. If false, the order is the newest GDS defined to the oldest GDS. This is the default value. @@ -278,27 +281,27 @@ fifo limit - Sets the \ :emphasis:`limit`\ attribute for Generation Data Groups. + Sets the *limit* attribute for Generation Data Groups. Specifies the maximum number, from 1 to 255(up to 999 if extended), of GDS that can be associated with the GDG being defined. - \ :emphasis:`limit`\ is required when \ :emphasis:`type=gdg`\ . + *limit* is required when *type=gdg*. | **required**: False | **type**: int purge - Sets the \ :emphasis:`purge`\ attribute for Generation Data Groups. + Sets the *purge* attribute for Generation Data Groups. - Specifies whether to override expiration dates when a generation data set (GDS) is rolled off and the \ :literal:`scratch`\ option is set. + Specifies whether to override expiration dates when a generation data set (GDS) is rolled off and the ``scratch`` option is set. | **required**: False | **type**: bool scratch - Sets the \ :emphasis:`scratch`\ attribute for Generation Data Groups. + Sets the *scratch* attribute for Generation Data Groups. Specifies what action is to be taken for a generation data set located on disk volumes when the data set is uncataloged from the GDG base as a result of EMPTY/NOEMPTY processing. @@ -356,9 +359,9 @@ force The \ :emphasis:`force=True`\ option enables sharing of data sets through the disposition \ :emphasis:`DISP=SHR`\ . - The \ :emphasis:`force=True`\ only applies to data set members when \ :emphasis:`state=absent`\ and \ :emphasis:`type=member`\ and when removing a GDG base with active generations. + The *force=True* only applies to data set members when *state=absent* and *type=member* and when removing a GDG base with active generations. - If \ :emphasis:`force=True`\ , \ :emphasis:`type=gdg`\ and \ :emphasis:`state=absent`\ it will force remove a GDG base with active generations. 
+ If *force=True*, *type=gdg* and *state=absent* it will force remove a GDG base with active generations. | **required**: False | **type**: bool @@ -582,7 +585,7 @@ batch empty - Sets the \ :emphasis:`empty`\ attribute for Generation Data Groups. + Sets the *empty* attribute for Generation Data Groups. If false, removes only the oldest GDS entry when a new GDS is created that causes GDG limit to be exceeded. @@ -595,7 +598,7 @@ batch extended - Sets the \ :emphasis:`extended`\ attribute for Generation Data Groups. + Sets the *extended* attribute for Generation Data Groups. If false, allow up to 255 generation data sets (GDSs) to be associated with the GDG. @@ -608,7 +611,7 @@ batch fifo - Sets the \ :emphasis:`fifo`\ attribute for Generation Data Groups. + Sets the *fifo* attribute for Generation Data Groups. If false, the order is the newest GDS defined to the oldest GDS. This is the default value. @@ -621,27 +624,27 @@ batch limit - Sets the \ :emphasis:`limit`\ attribute for Generation Data Groups. + Sets the *limit* attribute for Generation Data Groups. Specifies the maximum number, from 1 to 255(up to 999 if extended), of GDS that can be associated with the GDG being defined. - \ :emphasis:`limit`\ is required when \ :emphasis:`type=gdg`\ . + *limit* is required when *type=gdg*. | **required**: False | **type**: int purge - Sets the \ :emphasis:`purge`\ attribute for Generation Data Groups. + Sets the *purge* attribute for Generation Data Groups. - Specifies whether to override expiration dates when a generation data set (GDS) is rolled off and the \ :literal:`scratch`\ option is set. + Specifies whether to override expiration dates when a generation data set (GDS) is rolled off and the ``scratch`` option is set. | **required**: False | **type**: bool scratch - Sets the \ :emphasis:`scratch`\ attribute for Generation Data Groups. + Sets the *scratch* attribute for Generation Data Groups. 
Specifies what action is to be taken for a generation data set located on disk volumes when the data set is uncataloged from the GDG base as a result of EMPTY/NOEMPTY processing.

diff --git a/docs/source/modules/zos_encode.rst b/docs/source/modules/zos_encode.rst
index 51bcca12d..2c5bd4e1d 100644
--- a/docs/source/modules/zos_encode.rst
+++ b/docs/source/modules/zos_encode.rst
@@ -69,7 +71,7 @@ src
 dest
 The location where the converted characters are output.

- The destination \ :emphasis:`dest`\ can be a UNIX System Services (USS) file or path, PS (sequential data set), PDS, PDSE, member of a PDS or PDSE, a generation data set (GDS) or KSDS (VSAM data set).
+ The destination *dest* can be a UNIX System Services (USS) file or path, PS (sequential data set), PDS, PDSE, member of a PDS or PDSE, a generation data set (GDS) or KSDS (VSAM data set).

 If the length of the PDSE member name used in \ :emphasis:`dest`\ is greater than 8 characters, the member name will be truncated when written out.

@@ -77,7 +79,7 @@ dest
 The USS file or path must be an absolute pathname.

- If \ :emphasis:`dest`\ is a data set, it must be already allocated.
+ If *dest* is a data set, it must be already allocated.

 | **required**: False
 | **type**: str

@@ -106,6 +108,8 @@ backup_name
 If \ :emphasis:`backup\_name`\ is a generation data set (GDS), it must be a relative positive name (for example, \ :literal:`HLQ.USER.GDG(+1)`\ ).

+ If *backup_name* is a generation data set (GDS), it must be a relative positive name (for example, ``HLQ.USER.GDG(+1)``).
+ | **required**: False | **type**: str @@ -279,7 +283,6 @@ Examples - Notes ----- diff --git a/docs/source/modules/zos_fetch.rst b/docs/source/modules/zos_fetch.rst index 23d58c864..800eee88f 100644 --- a/docs/source/modules/zos_fetch.rst +++ b/docs/source/modules/zos_fetch.rst @@ -20,7 +20,7 @@ Synopsis - When fetching a sequential data set, the destination file name will be the same as the data set name. - When fetching a PDS or PDSE, the destination will be a directory with the same name as the PDS or PDSE. - When fetching a PDS/PDSE member, destination will be a file. -- Files that already exist at \ :literal:`dest`\ will be overwritten if they are different than \ :literal:`src`\ . +- Files that already exist at ``dest`` will be overwritten if they are different than ``src``. - When fetching a GDS, the relative name will be resolved to its absolute one. - When fetching a generation data group, the destination will be a directory with the same name as the GDG. diff --git a/docs/source/modules/zos_find.rst b/docs/source/modules/zos_find.rst index 83082b5c0..027940ff5 100644 --- a/docs/source/modules/zos_find.rst +++ b/docs/source/modules/zos_find.rst @@ -121,10 +121,12 @@ resource_type \ :literal:`cluster`\ refers to a VSAM cluster. The \ :literal:`data`\ and \ :literal:`index`\ are the data and index components of a VSAM cluster. + ``gdg`` refers to Generation Data Groups. The module searches based on the GDG base name. + | **required**: False | **type**: str | **default**: nonvsam - | **choices**: nonvsam, cluster, data, index + | **choices**: nonvsam, cluster, data, index, gdg volume @@ -135,6 +137,60 @@ volume | **elements**: str +empty + A GDG attribute, only valid when ``resource_type=gdg``. + + If provided, will search for data sets with *empty* attribute set as provided. + + | **required**: False + | **type**: bool + + +extended + A GDG attribute, only valid when ``resource_type=gdg``. 
+ + If provided, will search for data sets with *extended* attribute set as provided. + + | **required**: False + | **type**: bool + + +fifo + A GDG attribute, only valid when ``resource_type=gdg``. + + If provided, will search for data sets with *fifo* attribute set as provided. + + | **required**: False + | **type**: bool + + +limit + A GDG attribute, only valid when ``resource_type=gdg``. + + If provided, will search for data sets with *limit* attribute set as provided. + + | **required**: False + | **type**: int + + +purge + A GDG attribute, only valid when ``resource_type=gdg``. + + If provided, will search for data sets with *purge* attribute set as provided. + + | **required**: False + | **type**: bool + + +scratch + A GDG attribute, only valid when ``resource_type=gdg``. + + If provided, will search for data sets with *scratch* attribute set as provided. + + | **required**: False + | **type**: bool + + Examples @@ -185,6 +241,16 @@ Examples - USER.* resource_type: cluster + - name: Find all Generation Data Groups starting with the word 'USER' and specific GDG attributes. + zos_find: + patterns: + - USER.* + resource_type: gdg + limit: 30 + scratch: true + purge: true + + diff --git a/docs/source/modules/zos_job_submit.rst b/docs/source/modules/zos_job_submit.rst index bec95cb54..b848365e2 100644 --- a/docs/source/modules/zos_job_submit.rst +++ b/docs/source/modules/zos_job_submit.rst @@ -31,11 +31,11 @@ Parameters src The source file or data set containing the JCL to submit. - It could be a physical sequential data set, a partitioned data set qualified by a member or a path (e.g. \ :literal:`USER.TEST`\ , \ :literal:`USER.JCL(TEST)`\ ), or a generation data set from a generation data group (for example, \ :literal:`USER.TEST.GDG(-2)`\ ). + It could be a physical sequential data set, a partitioned data set qualified by a member or a path (e.g. 
``USER.TEST``, ``USER.JCL(TEST)``), or a generation data set from a generation data group (for example, ``USER.TEST.GDG(-2)``).

- Or a USS file. (e.g \ :literal:`/u/tester/demo/sample.jcl`\ )
+ Or a USS file. (e.g. ``/u/tester/demo/sample.jcl``)

- Or a LOCAL file in ansible control node. (e.g \ :literal:`/User/tester/ansible-playbook/sample.jcl`\ )
+ Or a LOCAL file on the Ansible control node. (e.g. ``/User/tester/ansible-playbook/sample.jcl``)

 When using a generation data set, only already created generations are valid. If either the relative name is positive, or negative but not found, the module will fail.

@@ -46,11 +46,11 @@ src

 location
 The JCL location. Supported choices are \ :literal:`data\_set`\ , \ :literal:`uss`\ or \ :literal:`local`\ .

- \ :literal:`data\_set`\ can be a PDS, PDSE, sequential data set, or a generation data set.
+ ``data_set`` can be a PDS, PDSE, sequential data set, or a generation data set.

 \ :literal:`uss`\ means the JCL location is located in UNIX System Services (USS).

- \ :literal:`local`\ means locally to the Ansible control node.
+ ``local`` means the JCL is local to the Ansible control node.

 | **required**: False
 | **type**: str

diff --git a/docs/source/modules/zos_lineinfile.rst b/docs/source/modules/zos_lineinfile.rst
index e8d0b0eb2..c1ed7284d 100644
--- a/docs/source/modules/zos_lineinfile.rst
+++ b/docs/source/modules/zos_lineinfile.rst
@@ -33,6 +33,8 @@ src
 The USS file must be an absolute pathname.

+ A generation data set (GDS) relative name of a generation already created is also accepted, e.g. ``SOME.CREATION(-1)``.
+
 | **required**: True
 | **type**: str

@@ -139,6 +141,8 @@ backup
 The backup file name will be returned on either success or failure of module execution such that data can be retrieved.
+ A backup into a new generation can be requested with a generation data set (GDS) relative positive name, e.g. ``SOME.CREATION(+1)``.
+
 | **required**: False
 | **type**: bool
 | **default**: False

@@ -248,6 +252,20 @@ Examples
 line: 'Should be a working test now'
 force: true

+ - name: Add a line to a GDS
+ zos_lineinfile:
+ src: SOME.CREATION(-2)
+ insertafter: EOF
+ line: 'Should be a working test now'
+
+ - name: Add a line to a data set and back up into a new generation of a GDS
+ zos_lineinfile:
+ src: SOME.CREATION.TEST
+ insertafter: EOF
+ backup: true
+ backup_name: CREATION.GDS(+1)
+ line: 'Should be a working test now'
+

diff --git a/docs/source/modules/zos_mvs_raw.rst b/docs/source/modules/zos_mvs_raw.rst
index f48418264..2c5b65a61 100644
--- a/docs/source/modules/zos_mvs_raw.rst
+++ b/docs/source/modules/zos_mvs_raw.rst
@@ -105,6 +105,10 @@ dds
 data_set_name
 The data set name.

+ A data set name can be a GDS relative name.
+
+ When using a GDS relative name that is a positive generation, *disposition=new* must be used.
+
 | **required**: False
 | **type**: str

@@ -840,6 +844,10 @@ dds
 data_set_name
 The data set name.

+ A data set name can be a GDS relative name.
+
+ When using a GDS relative name that is a positive generation, *disposition=new* must be used.
+
 | **required**: False
 | **type**: str

@@ -1748,6 +1756,37 @@ Examples
 VOLUMES(222222) -
 UNIQUE)

+ - name: List data sets matching pattern in catalog,
+ save output to a new generation of a GDG.
+ zos_mvs_raw:
+ program_name: idcams
+ auth: true
+ dds:
+ - dd_data_set:
+ dd_name: sysprint
+ data_set_name: TEST.CREATION(+1)
+ disposition: new
+ return_content:
+ type: text
+ - dd_input:
+ dd_name: sysin
+ content: " LISTCAT ENTRIES('SOME.DATASET.*')"
+
+ - name: List data sets matching pattern in catalog,
+ save output to a GDS already created.
+ zos_mvs_raw:
+ program_name: idcams
+ auth: true
+ dds:
+ - dd_data_set:
+ dd_name: sysprint
+ data_set_name: TEST.CREATION(-2)
+ return_content:
+ type: text
+ - dd_input:
+ dd_name: sysin
+ content: " LISTCAT ENTRIES('SOME.DATASET.*')"
+

diff --git a/docs/source/modules/zos_operator.rst b/docs/source/modules/zos_operator.rst
index 8f7e76df1..e29c59346 100644
--- a/docs/source/modules/zos_operator.rst
+++ b/docs/source/modules/zos_operator.rst
@@ -100,7 +100,7 @@ Notes
 -----

 .. note::
- Commands may need to use specific prefixes like $, they can be discovered by issuing the following command \ :literal:`D OPDATA,PREFIX`\ .
+ Commands may need to use specific prefixes like $; they can be discovered by issuing the command ``D OPDATA,PREFIX``.

diff --git a/docs/source/modules/zos_unarchive.rst b/docs/source/modules/zos_unarchive.rst
index ed6a26a8f..42a4db897 100644
--- a/docs/source/modules/zos_unarchive.rst
+++ b/docs/source/modules/zos_unarchive.rst
@@ -39,6 +39,8 @@ src
 MVS data sets supported types are \ :literal:`SEQ`\ , \ :literal:`PDS`\ , \ :literal:`PDSE`\ .

+ GDS relative names are supported, e.g. ``USER.GDG(-1)``.
+
 | **required**: True
 | **type**: str

@@ -149,7 +151,9 @@ owner
 include
 A list of directories, files or data set names to extract from the archive.

- When \ :literal:`include`\ is set, only those files will we be extracted leaving the remaining files in the archive.
+ GDS relative names are supported, e.g. ``USER.GDG(-1)``.
+
+ When ``include`` is set, only those files will be extracted, leaving the remaining files in the archive.

 Mutually exclusive with exclude.

@@ -161,6 +165,8 @@ include

 exclude
 List the directory and file or data set names that you would like to exclude from the unarchive action.

+ GDS relative names are supported, e.g. ``USER.GDG(-1)``.
+
 Mutually exclusive with include.
| **required**: False

@@ -385,6 +391,13 @@ Examples
 - USER.ARCHIVE.TEST1
 - USER.ARCHIVE.TEST2

+ # Unarchive a GDS
+ - name: Unarchive a generation data set packed with terse.
+ zos_unarchive:
+ src: "USER.ARCHIVE(0)"
+ format:
+ name: terse
+
 # List option
 - name: List content from XMIT
 zos_unarchive:
diff --git a/docs/source/release_notes.rst b/docs/source/release_notes.rst
index c8c2f6e96..521a8f9da 100644
--- a/docs/source/release_notes.rst
+++ b/docs/source/release_notes.rst
@@ -6,6 +6,119 @@ Releases
 ========

+Version 1.11.0-beta.1
+=====================
+
+Release Summary
+---------------
+
+Release Date: '2024-08-05'
+This changelog describes all changes made to the modules and plugins included
+in this collection. The release date is the date the changelog is created.
+For additional details such as required dependencies and availability review
+the collection's `release notes `__
+
+Minor Changes
+-------------
+
+- ``zos_apf`` - Added support that auto-escapes 'library' names containing symbols.
+- ``zos_archive`` - Added support for GDG and GDS relative name notation to archive data sets. Added support for data set names with special characters like $, /#, /- and @.
+- ``zos_backup_restore`` - Added support for GDS relative name notation to include or exclude data sets when operation is backup. Added support for data set names with special characters like $, /#, and @.
+- ``zos_blockinfile`` - Added support for GDG and GDS relative name notation to specify a data set, and to back up into new generations. Added support for data set names with special characters like $, /#, /- and @.
+- ``zos_copy`` - Added support for copying from and copying to generation data sets (GDS) and generation data groups (GDG) including using a GDS for backup.
+- ``zos_data_set`` - Added support for GDG and GDS relative name notation to create, delete, catalog and uncatalog a data set. Added support for data set names with special characters like $, /#, /- and @.
+- ``zos_encode`` - Added support for converting the encodings of generation data sets (GDS). Also added support to backup into GDS. +- ``zos_fetch`` - Added support for fetching generation data groups (GDG) and generation data sets (GDS). Added support for specifying data set names with special characters like $, /#, /- and @. +- ``zos_find`` - Added support for finding generation data groups (GDG) and generation data sets (GDS). Added support for specifying data set names with special characters like $, /#, /- and @. +- ``zos_job_submit`` + + - Improved the mechanism for copying to remote systems by removing the use of deepcopy, which had previously resulted in the module failing on some systems. + - Added support for running JCL stored in generation data groups (GDG) and generation data sets (GDS). + +- ``zos_lineinfile`` - Added support for GDG and GDS relative name notation to specify the target data set and to backup into new generations. Added support for data set names with special characters like $, /#, /- and @. +- ``zos_mount`` - Added support for data set names with special characters ($, /#, /- and @). +- ``zos_mvs_raw`` - Added support for GDG and GDS relative name notation to specify data set names. Added support for data set names with special characters like $, /#, /- and @. +- ``zos_script`` - Improved the mechanism for copying to remote systems by removing the use of deepcopy, which had previously resulted in the module failing on some systems. +- ``zos_tso_command`` - Added support for using GDG and GDS relative name notation in running TSO commands. Added support for data set names with special characters like $, /#, /- and @. +- ``zos_unarchive`` + + - Added support for data set names with special characters like $, /#, /- and @. + - Improved the mechanism for copying to remote systems by removing the use of deepcopy, which had previously resulted in the module failing on some systems. 
+ +Bugfixes +-------- + +- ``zos_copy`` + + - a regression in version 1.4.0 made the module stop automatically computing member names when copying a single file into a PDS/E. Fix now lets a user copy a single file into a PDS/E without adding a member in the dest option. + - module would use opercmd to check if a non existent destination data set is locked. Fix now only checks if the destination is already present. + +- ``zos_data_set`` - When checking if a data set is cataloged, module failed to account for exceptions which occurred during the LISTCAT. The fix now raises an MVSCmdExecError if the return code from LISTCAT is too high. +- ``zos_job_submit`` - The module was not propagating any error types including UnicodeDecodeError, JSONDecodeError, TypeError, KeyError when encountered. The fix now shares the type error in the error message. +- ``zos_mvs_raw`` - The first character of each line in dd_output was missing. The fix now includes the first character of each line. + +Availability +------------ + +* `Galaxy`_ +* `GitHub`_ + +Requirements +------------ + +The IBM z/OS core collection has several dependencies, please review the `z/OS core support matrix`_ to understand both the +controller and z/OS managed node dependencies. + +Known Issues +------------ +- ``zos_job_submit`` - when setting 'location' to 'local' and not specifying the from and to encoding, the modules defaults are not read leaving the file in its original encoding; explicitly set the encodings instead of relying on the default. +- ``zos_job_submit`` - when submitting JCL, the response value returned for **byte_count** is incorrect. +- ``zos_apf`` - When trying to remove a library that contains the '$' character in the name from APF(authorized program facility), operation will fail. +- In the past, choices could be defined in either lower or upper case. Now, only the case that is identified in the docs can be set, this is so that the collection can continue to maintain certified status. 
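Many of the 1.11.0-beta.1 changes above add GDS relative name notation such as ``USER.GDG(0)``, ``USER.GDG(-1)`` or ``USER.GDG(+1)``. As a rough illustration of the semantics only — this is not the collection's resolver (ZOAU performs the actual resolution on the managed node) and the helper name is hypothetical — a sketch of mapping a non-positive relative index onto a cataloged generation list, oldest first:

```python
def resolve_gds_relative(base, generations, relative):
    """Map a GDS relative name like BASE(0) or BASE(-1) to an absolute name.

    `generations` is the list of cataloged absolute generation names,
    oldest first (e.g. ['USER.GDG.G0001V00', ...]). Relative 0 is the
    newest generation, -1 the one before it, and so on. Positive
    relatives refer to generations that do not exist yet, so they
    cannot be resolved against the catalog.
    """
    if relative > 0:
        raise ValueError("positive relative names refer to unallocated generations")
    index = len(generations) - 1 + relative
    if index < 0:
        raise ValueError("no such generation: {0}({1})".format(base, relative))
    return generations[index]
```

Under this model, modules that read an existing generation accept 0 or negative relatives, while modules that allocate (for example a backup into ``(+1)``) require a positive relative, which matches the *disposition=new* requirement noted for ``zos_mvs_raw``.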
+ + +Version 1.9.2 +============= + +Bugfixes +-------- + +- ``zos_copy`` - when creating the destination data set, the module would unnecessarily check if a data set is locked by another process. The module no longer performs this check when it creates the data set. + +Availability +------------ + +* `Automation Hub`_ +* `Galaxy`_ +* `GitHub`_ + +Requirements +------------ + +The IBM z/OS core collection has several dependencies, please review the `z/OS core support matrix`_ to understand both the +controller and z/OS managed node dependencies. + +Known Issues +------------ + +- ``zos_job_submit`` - when setting 'location' to 'LOCAL' and not specifying the from and to encoding, the modules defaults are not read leaving the file in its original encoding; explicitly set the encodings instead of relying on the default. +- ``zos_job_submit`` - when submitting JCL, the response value returned for **byte_count** is incorrect. + +- ``zos_job_submit``, ``zos_job_output``, ``zos_operator_action_query`` - encounters UTF-8 decoding errors when interacting with results that contain non-printable UTF-8 characters in the response. This has been addressed in this release and corrected with **ZOAU version 1.2.5.6** or later. + + - If the appropriate level of ZOAU can not be installed, some options are to: + + - Specify that the ASA assembler option be enabled to instruct the assembler to use ANSI control characters instead of machine code control characters. + - Ignore module errors by using **ignore_errors:true** for a specific playbook task. + - If the error is resulting from a batch job, add **ignore_errors:true** to the task and capture the output into a registered variable to extract the + job ID with a regular expression. Then use ``zos_job_output`` to display the DD without the non-printable character such as the DD **JESMSGLG**. 
+ - If the error is the result of a batch job, set option **return_output** to false so that no DDs are read which could contain the non-printable UTF-8 characters. + +- ``zos_data_set`` - An undocumented option **size** was defined in module **zos_data_set**, this has been removed to satisfy collection certification, use the intended and documented **space_primary** option. + +- In the past, choices could be defined in either lower or upper case. Now, only the case that is identified in the docs can be set, this is so that the collection can continue to maintain certified status. + + Version 1.10.0 ============== @@ -134,19 +247,6 @@ Bugfixes - ``zos_find`` - Option size failed if a PDS/E matched the pattern, now filtering on utilized size for a PDS/E is supported. - ``zos_mvs_raw`` - Option **tmp_hlq** when creating temporary data sets was previously ignored, now the option honors the High Level Qualifier for temporary data sets created during the module execution. -Availability ------------- - -* `Automation Hub`_ -* `Galaxy`_ -* `GitHub`_ - -Requirements ------------- - -The IBM z/OS core collection has several dependencies, please review the `z/OS core support matrix`_ to understand both the -controller and z/OS managed node dependencies. - Known Issues ------------ @@ -165,7 +265,18 @@ Known Issues - ``zos_data_set`` - An undocumented option **size** was defined in module **zos_data_set**, this has been removed to satisfy collection certification, use the intended and documented **space_primary** option. -- In the past, choices could be defined in either lower or upper case. Now, only the case that is identified in the docs can be set, this is so that the collection can continue to maintain certified status. 
+Availability +------------ + +* `Automation Hub`_ +* `Galaxy`_ +* `GitHub`_ + +Requirements +------------ + +The IBM z/OS core collection has several dependencies, please review the `z/OS core support matrix`_ to understand both the +controller and z/OS managed node dependencies. Version 1.9.0 ============= diff --git a/docs/source/resources/releases_maintenance.rst b/docs/source/resources/releases_maintenance.rst index 391456769..df4ee6754 100644 --- a/docs/source/resources/releases_maintenance.rst +++ b/docs/source/resources/releases_maintenance.rst @@ -89,6 +89,11 @@ The z/OS managed node includes several shells, currently the only supported shel +---------+----------------------------+---------------------------------------------------+---------------+---------------+ | Version | Controller | Managed Node | GA | End of Life | +=========+============================+===================================================+===============+===============+ +| 1.11.x |- `ansible-core`_ >=2.15.x |- `z/OS`_ V2R4 - V2Rx | In preview | TBD | +| |- `Ansible`_ >=8.0.x |- `z/OS shell`_ | | | +| |- `AAP`_ >=2.4 |- IBM `Open Enterprise SDK for Python`_ | | | +| | |- IBM `Z Open Automation Utilities`_ >=1.3.1 | | | ++---------+----------------------------+---------------------------------------------------+---------------+---------------+ | 1.10.x |- `ansible-core`_ >=2.15.x |- `z/OS`_ V2R4 - V2Rx | 21 June 2024 | 21 June 2026 | | |- `Ansible`_ >=8.0.x |- `z/OS shell`_ | | | | |- `AAP`_ >=2.4 |- IBM `Open Enterprise SDK for Python`_ | | | diff --git a/galaxy.yml b/galaxy.yml index 2e9d280dc..910442ef8 100644 --- a/galaxy.yml +++ b/galaxy.yml @@ -6,7 +6,7 @@ namespace: ibm name: ibm_zos_core # The collection version -version: "1.10.0" +version: "1.11.0-beta.1" # Collection README file readme: README.md diff --git a/meta/ibm_zos_core_meta.yml b/meta/ibm_zos_core_meta.yml index 5bc58ec94..16ee31ca9 100644 --- a/meta/ibm_zos_core_meta.yml +++ b/meta/ibm_zos_core_meta.yml @@ -1,5 
+1,5 @@
 name: ibm_zos_core
-version: "1.10.0"
+version: "1.11.0-beta.1"

 managed_requirements:
 - name: "IBM Open Enterprise SDK for Python"
@@ -7,4 +7,4 @@ managed_requirements:
 - name: "Z Open Automation Utilities"
 version:
- - ">=1.3.0"
+ - ">=1.3.1"
diff --git a/plugins/action/zos_fetch.py b/plugins/action/zos_fetch.py
index c3e4ec1ee..4d0a0c11b 100644
--- a/plugins/action/zos_fetch.py
+++ b/plugins/action/zos_fetch.py
@@ -276,7 +276,7 @@ def run(self, tmp=None, task_vars=None):
 local_checksum = _get_file_checksum(dest)

 # ********************************************************** #
- # Fetch remote data.
+ # Fetch remote data. #
 # ********************************************************** #
 try:
 if ds_type in SUPPORTED_DS_TYPES:
diff --git a/plugins/action/zos_script.py b/plugins/action/zos_script.py
index e481052a5..d51c48ddf 100644
--- a/plugins/action/zos_script.py
+++ b/plugins/action/zos_script.py
@@ -1,4 +1,4 @@
-# Copyright (c) IBM Corporation 2023
+# Copyright (c) IBM Corporation 2023, 2024
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at diff --git a/plugins/doc_fragments/template.py b/plugins/doc_fragments/template.py index 1eea4ad3d..2215c0a4a 100644 --- a/plugins/doc_fragments/template.py +++ b/plugins/doc_fragments/template.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- -# Copyright (c) IBM Corporation 2022, 2023 +# Copyright (c) IBM Corporation 2022, 2024 # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at diff --git a/plugins/module_utils/backup.py b/plugins/module_utils/backup.py index b881d6321..d94495231 100644 --- a/plugins/module_utils/backup.py +++ b/plugins/module_utils/backup.py @@ -139,7 +139,10 @@ def mvs_file_backup(dsn, bk_dsn=None, tmphlq=None): rc, out, err = _copy_pds(dsn, bk_dsn) if rc != 0: raise BackupError( - "Unable to backup data set {0} to {1}".format(dsn, bk_dsn) + "Unable to backup data set {0} to {1}.".format(dsn, bk_dsn), + rc=rc, + stdout=out, + stderr=err ) return bk_dsn diff --git a/plugins/module_utils/data_set.py b/plugins/module_utils/data_set.py index 60cf56061..89a5fc2ac 100644 --- a/plugins/module_utils/data_set.py +++ b/plugins/module_utils/data_set.py @@ -1999,24 +1999,24 @@ def create(self, tmp_hlq=None, replace=True, force=False): Indicates if changes were made. 
""" arguments = { - "name" : self.name, - "raw_name" : self.raw_name, - "type" : self.data_set_type, - "space_primary" : self.space_primary, - "space_secondary" : self.space_secondary, - "space_type" : self.space_type, - "record_format" : self.record_format, - "record_length" : self.record_length, - "block_size" : self.block_size, - "directory_blocks" : self.directory_blocks, - "key_length" : self.key_length, - "key_offset" : self.key_offset, - "sms_storage_class" : self.sms_storage_class, - "sms_data_class" : self.sms_data_class, - "sms_management_class" : self.sms_management_class, - "volumes" : self.volumes, - "tmp_hlq" : tmp_hlq, - "force" : force, + "name": self.name, + "raw_name": self.raw_name, + "type": self.data_set_type, + "space_primary": self.space_primary, + "space_secondary": self.space_secondary, + "space_type": self.space_type, + "record_format": self.record_format, + "record_length": self.record_length, + "block_size": self.block_size, + "directory_blocks": self.directory_blocks, + "key_length": self.key_length, + "key_offset": self.key_offset, + "sms_storage_class": self.sms_storage_class, + "sms_data_class": self.sms_data_class, + "sms_management_class": self.sms_management_class, + "volumes": self.volumes, + "tmp_hlq": tmp_hlq, + "force": force, } formatted_args = DataSet._build_zoau_args(**arguments) changed = False @@ -2048,25 +2048,25 @@ def ensure_present(self, tmp_hlq=None, replace=False, force=False): Indicates if changes were made. 
""" arguments = { - "name" : self.name, - "raw_name" : self.raw_name, - "type" : self.data_set_type, - "space_primary" : self.space_primary, - "space_secondary" : self.space_secondary, - "space_type" : self.space_type, - "record_format" : self.record_format, - "record_length" : self.record_length, - "block_size" : self.block_size, - "directory_blocks" : self.directory_blocks, - "key_length" : self.key_length, - "key_offset" : self.key_offset, - "sms_storage_class" : self.sms_storage_class, - "sms_data_class" : self.sms_data_class, - "sms_management_class" : self.sms_management_class, - "volumes" : self.volumes, - "replace" : replace, - "tmp_hlq" : tmp_hlq, - "force" : force, + "name": self.name, + "raw_name": self.raw_name, + "type": self.data_set_type, + "space_primary": self.space_primary, + "space_secondary": self.space_secondary, + "space_type": self.space_type, + "record_format": self.record_format, + "record_length": self.record_length, + "block_size": self.block_size, + "directory_blocks": self.directory_blocks, + "key_length": self.key_length, + "key_offset": self.key_offset, + "sms_storage_class": self.sms_storage_class, + "sms_data_class": self.sms_data_class, + "sms_management_class": self.sms_management_class, + "volumes": self.volumes, + "replace": replace, + "tmp_hlq": tmp_hlq, + "force": force, } rc = DataSet.ensure_present(**arguments) self.set_state("present") diff --git a/plugins/module_utils/vtoc.py b/plugins/module_utils/vtoc.py index 12cd25656..34a9c7c3a 100644 --- a/plugins/module_utils/vtoc.py +++ b/plugins/module_utils/vtoc.py @@ -1,4 +1,4 @@ -# Copyright (c) IBM Corporation 2020 +# Copyright (c) IBM Corporation 2020, 2024 # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at diff --git a/plugins/modules/zos_apf.py b/plugins/modules/zos_apf.py index 024ef8baa..ceeea04de 100644 --- a/plugins/modules/zos_apf.py +++ b/plugins/modules/zos_apf.py @@ -292,6 +292,7 @@ type: str ''' +import os import re import json from ansible.module_utils._text import to_text @@ -312,7 +313,7 @@ # supported data set types -DS_TYPE = ['PS', 'PO'] +DS_TYPE = data_set.DataSet.MVS_SEQ.union(data_set.DataSet.MVS_PARTITIONED) def backupOper(module, src, backup, tmphlq=None): @@ -340,11 +341,15 @@ def backupOper(module, src, backup, tmphlq=None): fail_json Creating backup has failed. """ - # analysis the file type - ds_utils = data_set.DataSetUtils(src) - file_type = ds_utils.ds_type() + file_type = None + if data_set.is_data_set(src): + file_type = data_set.DataSet.data_set_type(src) + else: + if os.path.exists(src): + file_type = 'USS' + if file_type != 'USS' and file_type not in DS_TYPE: - message = "{0} data set type is NOT supported".format(str(file_type)) + message = "Data set {0} of type {1} is NOT supported".format(src, str(file_type)) module.fail_json(msg=message) # backup can be True(bool) or none-zero length string. string indicates that backup_name was provided. @@ -357,8 +362,17 @@ def backupOper(module, src, backup, tmphlq=None): backup_name = Backup.uss_file_backup(src, backup_name=backup, compress=False) else: backup_name = Backup.mvs_file_backup(dsn=src, bk_dsn=backup, tmphlq=tmphlq) + except Backup.BackupError as exc: + module.fail_json( + msg=exc.msg, + rc=exc.rc, + stdout=exc.stdout, + stderr=exc.stderr + ) except Exception: - module.fail_json(msg="creating backup has failed") + module.fail_json( + msg="An error occurred during backup." 
+ ) return backup_name diff --git a/plugins/modules/zos_archive.py b/plugins/modules/zos_archive.py index 08e2111a9..fb0ef7100 100644 --- a/plugins/modules/zos_archive.py +++ b/plugins/modules/zos_archive.py @@ -380,7 +380,7 @@ format: name: terse format_options: - use_adrdssu: True + use_adrdssu: true - name: Archive multiple data sets into a new GDS zos_archive: @@ -389,7 +389,7 @@ format: name: terse format_options: - use_adrdssu: True + use_adrdssu: true ''' RETURN = r''' diff --git a/plugins/modules/zos_blockinfile.py b/plugins/modules/zos_blockinfile.py index 8c1485152..3c89162cd 100644 --- a/plugins/modules/zos_blockinfile.py +++ b/plugins/modules/zos_blockinfile.py @@ -293,7 +293,7 @@ zos_blockinfile: src: SOME.CREATION.TEST insertbefore: BOF - backup: True + backup: true backup_name: CREATION.GDS(+1) block: "{{ CONTENT }}" ''' diff --git a/plugins/modules/zos_encode.py b/plugins/modules/zos_encode.py index 40b70a0fd..a17fcb7ed 100644 --- a/plugins/modules/zos_encode.py +++ b/plugins/modules/zos_encode.py @@ -616,7 +616,7 @@ def run_module(): result["dest"] = dest if ds_type_dest == "GDG": - raise EncodeError("Encoding of a whole generation data group is not yet supported.") + raise EncodeError("Encoding of a whole generation data group is not supported.") new_src = src_data_set.name if src_data_set else src new_dest = dest_data_set.name if dest_data_set else dest diff --git a/plugins/modules/zos_find.py b/plugins/modules/zos_find.py index de272bfd0..8b50a157d 100644 --- a/plugins/modules/zos_find.py +++ b/plugins/modules/zos_find.py @@ -234,7 +234,6 @@ limit: 30 scratch: true purge: true - """ diff --git a/plugins/modules/zos_lineinfile.py b/plugins/modules/zos_lineinfile.py index 38fb5d116..0decb77a6 100644 --- a/plugins/modules/zos_lineinfile.py +++ b/plugins/modules/zos_lineinfile.py @@ -37,7 +37,7 @@ PS (sequential data set), member of a PDS or PDSE, PDS, PDSE. - The USS file must be an absolute pathname. 
- Generation data set (GDS) relative name of generation already - created. C(e.g. SOME.CREATION(-1).) + created. C(e.g. SOME.CREATION(-1\).) type: str aliases: [ path, destfile, name ] required: true @@ -251,7 +251,7 @@ zos_lineinfile: src: SOME.CREATION.TEST insertafter: EOF - backup: True + backup: true backup_name: CREATION.GDS(+1) line: 'Should be a working test now' """ diff --git a/plugins/modules/zos_mvs_raw.py b/plugins/modules/zos_mvs_raw.py index b382baf25..a79b8832d 100644 --- a/plugins/modules/zos_mvs_raw.py +++ b/plugins/modules/zos_mvs_raw.py @@ -89,7 +89,7 @@ description: - The data set name. - A data set name can be a GDS relative name. - - When using GDS relative name and it is a positive generation, disposition new must be used. + - When using GDS relative name and it is a positive generation, I(disposition=new) must be used. type: str required: false type: @@ -708,7 +708,7 @@ description: - The data set name. - A data set name can be a GDS relative name. - - When using GDS relative name and it is a positive generation, disposition new must be used. + - When using GDS relative name and it is a positive generation, I(disposition=new) must be used. type: str required: false type: diff --git a/plugins/modules/zos_unarchive.py b/plugins/modules/zos_unarchive.py index 43312f449..f180e069f 100644 --- a/plugins/modules/zos_unarchive.py +++ b/plugins/modules/zos_unarchive.py @@ -36,7 +36,7 @@ - I(src) can be a USS file or MVS data set name. - USS file paths should be absolute paths. - MVS data sets supported types are C(SEQ), C(PDS), C(PDSE). - - GDS relative names are supported C(e.g. USER.GDG(-1)). + - GDS relative names are supported C(e.g. USER.GDG(-1\)). type: str required: true format: @@ -146,7 +146,7 @@ description: - A list of directories, files or data set names to extract from the archive. - - GDS relative names are supported C(e.g. USER.GDG(-1)). + - GDS relative names are supported C(e.g. USER.GDG(-1\)). 
- When C(include) is set, only those files will be extracted leaving the remaining files in the archive. - Mutually exclusive with exclude. @@ -157,7 +157,7 @@ description: - List the directory and file or data set names that you would like to exclude from the unarchive action. - - GDS relative names are supported C(e.g. USER.GDG(-1)). + - GDS relative names are supported C(e.g. USER.GDG(-1\)). - Mutually exclusive with include. type: list elements: str diff --git a/tests/functional/modules/test_module_security.py b/tests/functional/modules/test_module_security.py index 744d8f595..4c3af3c15 100644 --- a/tests/functional/modules/test_module_security.py +++ b/tests/functional/modules/test_module_security.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- -# Copyright (c) IBM Corporation 2020 +# Copyright (c) IBM Corporation 2020, 2024 # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at diff --git a/tests/functional/modules/test_zos_apf_func.py b/tests/functional/modules/test_zos_apf_func.py index 4bb0e9041..29a25aee1 100644 --- a/tests/functional/modules/test_zos_apf_func.py +++ b/tests/functional/modules/test_zos_apf_func.py @@ -264,11 +264,6 @@ def test_add_del_volume_persist(ansible_zos_module, volumes_with_vvds): clean_test_env(hosts, test_info) -# keyword: ENABLE-FOR-1-3 -# Test commented because there is a failure in ZOAU 1.2.x, that should be fixed in 1.3.x, so -# whoever works in issue https://github.com/ansible-collections/ibm_zos_core/issues/726 -# should uncomment this test as part of the validation process. 
- def test_batch_add_del(ansible_zos_module, volumes_with_vvds): try: hosts = ansible_zos_module diff --git a/tests/functional/modules/test_zos_backup_restore.py b/tests/functional/modules/test_zos_backup_restore.py index fff5bd6aa..1b01bebc7 100644 --- a/tests/functional/modules/test_zos_backup_restore.py +++ b/tests/functional/modules/test_zos_backup_restore.py @@ -828,7 +828,7 @@ def test_backup_into_gds(ansible_zos_module, dstype): assert result.get("changed") is True assert result.get("module_stderr") is None ds_to_write = f"{ds_name}(MEM)" if dstype in ['pds', 'pdse'] else ds_name - results = hosts.all.shell(cmd=f"decho 'test line' \"{ds_to_write}\"") + results = hosts.all.shell(cmd=f"decho 'test line' '{ds_to_write}'") for result in results.contacted.values(): assert result.get("changed") is True assert result.get("module_stderr") is None @@ -852,5 +852,5 @@ def test_backup_into_gds(ansible_zos_module, dstype): assert result.get("changed") is True assert result.get("module_stderr") is None finally: - hosts.all.shell(cmd=f"drm ANSIBLE.* ") + hosts.all.shell(cmd=f"drm ANSIBLE.* ; drm OMVSADM.*") diff --git a/tests/functional/modules/test_zos_blockinfile_func.py b/tests/functional/modules/test_zos_blockinfile_func.py index 2f9e6d3c2..84d0850da 100644 --- a/tests/functional/modules/test_zos_blockinfile_func.py +++ b/tests/functional/modules/test_zos_blockinfile_func.py @@ -1539,17 +1539,39 @@ def test_uss_encoding(ansible_zos_module, encoding): results = hosts.all.zos_blockinfile(**params) for result in results.contacted.values(): assert result.get("changed") == 1 - results = hosts.all.shell(cmd="cat {0}".format(params["path"])) + results = hosts.all.shell(cmd="cat \"//'{0}'\" ".format(params["src"])) + for result in results.contacted.values(): + assert result.get("stdout") == "# BEGIN ANSIBLE MANAGED BLOCK\nZOAU_ROOT=/mvsutil-develop_dsed\nZOAU_HOME=$ZOAU_ROOT\nZOAU_DIR=$ZOAU_ROOT\n# END ANSIBLE MANAGED BLOCK" + + params["src"] = ds_name + "(-1)" + results = 
hosts.all.zos_blockinfile(**params) + for result in results.contacted.values(): + assert result.get("changed") == 1 + results = hosts.all.shell(cmd="cat \"//'{0}'\" ".format(params["src"])) + for result in results.contacted.values(): + assert result.get("stdout") == "# BEGIN ANSIBLE MANAGED BLOCK\nZOAU_ROOT=/mvsutil-develop_dsed\nZOAU_HOME=$ZOAU_ROOT\nZOAU_DIR=$ZOAU_ROOT\n# END ANSIBLE MANAGED BLOCK" + + params_w_bck = dict(insertafter="eof", block="export ZOAU_ROOT\nexport ZOAU_HOME\nexport ZOAU_DIR", state="present", backup=True, backup_name=ds_name + "(+1)") + params_w_bck["src"] = ds_name + "(-1)" + results = hosts.all.zos_blockinfile(**params_w_bck) + for result in results.contacted.values(): + assert result.get("changed") == 1 + assert result.get("rc") == 0 + backup = ds_name + "(0)" + results = hosts.all.shell(cmd="cat \"//'{0}'\" ".format(backup)) + for result in results.contacted.values(): + assert result.get("stdout") == "# BEGIN ANSIBLE MANAGED BLOCK\nZOAU_ROOT=/mvsutil-develop_dsed\nZOAU_HOME=$ZOAU_ROOT\nZOAU_DIR=$ZOAU_ROOT\n# END ANSIBLE MANAGED BLOCK" + + params["src"] = ds_name + "(-3)" + results = hosts.all.zos_blockinfile(**params) for result in results.contacted.values(): - assert result.get("stdout") == EXPECTED_ENCODING + assert result.get("changed") == 0 finally: - remove_uss_environment(ansible_zos_module) + hosts.all.shell(cmd="""drm "ANSIBLE.*" """) @pytest.mark.ds -@pytest.mark.parametrize("dstype", DS_TYPE) -@pytest.mark.parametrize("encoding", ["IBM-1047"]) -def test_ds_encoding(ansible_zos_module, encoding, dstype): +def test_special_characters_ds_insert_block(ansible_zos_module): hosts = ansible_zos_module ds_type = dstype insert_data = "Insert this string" @@ -1592,9 +1614,21 @@ def test_ds_encoding(ansible_zos_module, encoding, dstype): ) results = hosts.all.shell(cmd="cat \"//'{0}'\" ".format(params["path"])) for result in results.contacted.values(): - assert result.get("stdout") == EXPECTED_ENCODING + assert result.get("stdout") == 
"# BEGIN ANSIBLE MANAGED BLOCK\nZOAU_ROOT=/mvsutil-develop_dsed\nZOAU_HOME=$ZOAU_ROOT\nZOAU_DIR=$ZOAU_ROOT\n# END ANSIBLE MANAGED BLOCK" + + params_w_bck = dict(insertafter="eof", block="export ZOAU_ROOT\nexport ZOAU_HOME\nexport ZOAU_DIR", state="present", backup=True, backup_name=backup) + params_w_bck["src"] = ds_name + results = hosts.all.zos_blockinfile(**params_w_bck) + for result in results.contacted.values(): + assert result.get("changed") == 1 + assert result.get("rc") == 0 + backup = backup.replace('$', "\$") + results = hosts.all.shell(cmd="cat \"//'{0}'\" ".format(backup)) + for result in results.contacted.values(): + assert result.get("stdout") == "# BEGIN ANSIBLE MANAGED BLOCK\nZOAU_ROOT=/mvsutil-develop_dsed\nZOAU_HOME=$ZOAU_ROOT\nZOAU_DIR=$ZOAU_ROOT\n# END ANSIBLE MANAGED BLOCK" + finally: - remove_ds_environment(ansible_zos_module, ds_name) + hosts.all.shell(cmd="""drm "ANSIBLE.*" """) ######################### diff --git a/tests/functional/modules/test_zos_copy_func.py b/tests/functional/modules/test_zos_copy_func.py index e8e37375c..76c75dd32 100644 --- a/tests/functional/modules/test_zos_copy_func.py +++ b/tests/functional/modules/test_zos_copy_func.py @@ -2348,6 +2348,18 @@ def test_copy_ps_to_existing_uss_file(ansible_zos_module, force): src_ds = TEST_PS dest = "/tmp/ddchkpt" + hosts = ansible_zos_module + mlq_size = 3 + cobol_src_pds = get_tmp_ds_name(mlq_size) + cobol_src_mem = "HELLOCBL" + cobol_src_mem2 = "HICBL2" + src_lib = get_tmp_ds_name(mlq_size) + dest_lib = get_tmp_ds_name(mlq_size) + dest_lib_aliases = get_tmp_ds_name(mlq_size) + pgm_mem = "HELLO" + pgm2_mem = "HELLO2" + pgm_mem_alias = "ALIAS1" + pgm2_mem_alias = "ALIAS2" try: hosts.all.file(path=dest, state="touch") @@ -2372,6 +2384,23 @@ def test_copy_ps_to_existing_uss_file(ansible_zos_module, force): finally: hosts.all.file(path=dest, state="absent") + else: + # copy src loadlib to dest library pds w/o aliases + copy_res = hosts.all.zos_copy( + src="{0}".format(src_lib), + 
dest="{0}".format(dest_lib), + remote_src=True, + executable=True, + aliases=False + ) + # copy src loadlib to dest library pds w aliases + copy_res_aliases = hosts.all.zos_copy( + src="{0}".format(src_lib), + dest="{0}".format(dest_lib_aliases), + remote_src=True, + executable=True, + aliases=True + ) @pytest.mark.uss @pytest.mark.seq @@ -2414,6 +2443,69 @@ def test_copy_ps_to_non_existing_ps(ansible_zos_module): cmd="cat \"//'{0}'\"".format(dest), executable=SHELL_EXECUTABLE ) + # Copying the remote loadlibs in USS to a local dir. + # This section ONLY handles ONE host, so if we ever use multiple hosts to + # test, we will need to update this code. + remote_user = hosts["options"]["user"] + # Removing a trailing comma because the framework saves the hosts list as a + # string instead of a list. + remote_host = hosts["options"]["inventory"].replace(",", "") + + tmp_folder = tempfile.TemporaryDirectory(prefix="tmpfetch") + cmd = [ + "sftp", + "-r", + f"{remote_user}@{remote_host}:{uss_location}", + f"{tmp_folder.name}" + ] + with subprocess.Popen(cmd, shell=False, stdout=subprocess.PIPE) as sftp_proc: + result = sftp_proc.stdout.read() + + source_path = os.path.join(tmp_folder.name, os.path.basename(uss_location)) + + if not is_created: + # ensure dest data sets absent for this variation of the test case. + hosts.all.zos_data_set(name=dest_lib, state="absent") + else: + # allocate dest loadlib to copy over without an alias. + hosts.all.zos_data_set( + name=dest_lib, + state="present", + type="pdse", + record_format="u", + record_length=0, + block_size=32760, + space_primary=2, + space_type="m", + replace=True + ) + + if not is_created: + # dest data set does not exist, specify it in dest_dataset param. 
+ # copy src loadlib to dest library pds w/o aliases + copy_res = hosts.all.zos_copy( + src=source_path, + dest="{0}".format(dest_lib), + executable=True, + aliases=False, + dest_data_set={ + 'type': "pdse", + 'record_format': "u", + 'record_length': 0, + 'block_size': 32760, + 'space_primary': 2, + 'space_type': "m", + } + ) + else: + # copy src loadlib to dest library pds w/o aliases + copy_res = hosts.all.zos_copy( + src=source_path, + dest="{0}".format(dest_lib), + executable=True, + aliases=False + ) + for result in copy_res.contacted.values(): assert result.get("msg") is None assert result.get("changed") is True @@ -2480,6 +2572,7 @@ def test_copy_ps_to_non_empty_ps(ansible_zos_module, force): assert result.get("rc") == 0 assert result.get("stdout") != "" finally: + hosts.all.shell(cmd='rm -r /tmp/c') hosts.all.zos_data_set(name=dest, state="absent") diff --git a/tests/functional/modules/test_zos_mvs_raw_func.py b/tests/functional/modules/test_zos_mvs_raw_func.py index 00dd56e31..64152915a 100644 --- a/tests/functional/modules/test_zos_mvs_raw_func.py +++ b/tests/functional/modules/test_zos_mvs_raw_func.py @@ -24,6 +24,7 @@ EXISTING_DATA_SET = "user.private.proclib" DEFAULT_PATH = "/tmp/testdir" DEFAULT_PATH_WITH_FILE = f"{DEFAULT_PATH}/testfile" +DEFAULT_PATH_WITH_FILE = f"{DEFAULT_PATH}/testfile" DEFAULT_DD = "MYDD" SYSIN_DD = "SYSIN" SYSPRINT_DD = "SYSPRINT" @@ -55,6 +56,12 @@ def test_failing_name_format(ansible_zos_module): "data_set_name":"!!^&.BAD.NAME" } }], + dds=[{ + "dd_data_set":{ + "dd_name":DEFAULT_DD, + "data_set_name":"!!^&.BAD.NAME" + } + }], ) for result in results.contacted.values(): assert "ValueError" in result.get("msg") @@ -209,6 +216,7 @@ def test_new_disposition_for_data_set_members(ansible_zos_module): hosts = ansible_zos_module default_data_set = get_tmp_ds_name() default_data_set_with_member = default_data_set + '(MEM)' + default_data_set_with_member = default_data_set + '(MEM)' hosts.all.zos_data_set(name=default_data_set, 
state="absent") idcams_dataset, idcams_listcat_dataset_cmd = get_temp_idcams_dataset(hosts) @@ -254,6 +262,7 @@ def test_dispositions_for_existing_data_set_members(ansible_zos_module, disposit hosts = ansible_zos_module default_data_set = get_tmp_ds_name() default_data_set_with_member = default_data_set + '(MEM)' + default_data_set_with_member = default_data_set + '(MEM)' hosts.all.zos_data_set( name=default_data_set, type="pds", state="present", replace=True ) @@ -356,6 +365,7 @@ def test_normal_dispositions_data_set( ("b", 3, 1, 56664), ("k", 3, 1, 56664), ("m", 3, 1, 3003192), + ("m", 3, 1, 3003192), ], ) def test_space_types(ansible_zos_module, space_type, primary, secondary, expected): @@ -393,6 +403,7 @@ def test_space_types(ansible_zos_module, space_type, primary, secondary, expecte ], ) + results2 = hosts.all.command(cmd=f"dls -l -s {default_data_set}") results2 = hosts.all.command(cmd=f"dls -l -s {default_data_set}") for result in results.contacted.values(): @@ -443,6 +454,7 @@ def test_data_set_types_non_vsam(ansible_zos_module, data_set_type, volumes_on_s ], ) results = hosts.all.command(cmd=f"dls {default_data_set}") + results = hosts.all.command(cmd=f"dls {default_data_set}") for result in results.contacted.values(): assert "BGYSC1103E" not in result.get("stderr", "") @@ -480,6 +492,15 @@ def test_data_set_types_vsam(ansible_zos_module, data_set_type, volumes_on_syste "volumes":[volume_1], }, } + { + "dd_data_set":{ + "dd_name":SYSPRINT_DD, + "data_set_name":default_data_set, + "disposition":"new", + "type":data_set_type, + "volumes":[volume_1], + }, + } if data_set_type != "ksds" else { "dd_data_set":{ @@ -503,6 +524,7 @@ def test_data_set_types_vsam(ansible_zos_module, data_set_type, volumes_on_syste # * we hope to see EDC5041I An error was detected at the system level when opening a file. 
# * because that means data set exists and is VSAM so we can't read it results = hosts.all.command(cmd=f"head \"//'{default_data_set}'\"") + results = hosts.all.command(cmd=f"head \"//'{default_data_set}'\"") for result in results.contacted.values(): assert "EDC5041I" in result.get("stderr", "") or "EDC5049I" in result.get("stderr", "") finally: @@ -547,10 +569,12 @@ def test_record_formats(ansible_zos_module, record_format, volumes_on_systems): ], ) + results = hosts.all.command(cmd=f"dls -l {default_data_set}") results = hosts.all.command(cmd=f"dls -l {default_data_set}") for result in results.contacted.values(): assert str(f" {record_format.upper()} ") in result.get("stdout", "") + assert str(f" {record_format.upper()} ") in result.get("stdout", "") finally: hosts.all.zos_data_set(name=default_data_set, state="absent") if idcams_dataset: @@ -1014,6 +1038,7 @@ def test_input_large(ansible_zos_module): contents = "" for i in range(50000): contents += f"this is line {i}\n" + contents += f"this is line {i}\n" results = hosts.all.zos_mvs_raw( program_name="idcams", auth=True, @@ -1035,6 +1060,23 @@ def test_input_large(ansible_zos_module): "content":contents } }, + { + "dd_data_set":{ + "dd_name":SYSPRINT_DD, + "data_set_name":default_data_set, + "disposition":"new", + "type":"seq", + "return_content":{ + "type":"text" + }, + }, + }, + { + "dd_input":{ + "dd_name":SYSIN_DD, + "content":contents + } + }, ], ) for result in results.contacted.values(): @@ -1078,6 +1120,23 @@ def test_input_provided_as_list(ansible_zos_module): "content":contents } }, + { + "dd_data_set":{ + "dd_name":SYSPRINT_DD, + "data_set_name":default_data_set, + "disposition":"new", + "type":"seq", + "return_content":{ + "type":"text" + }, + }, + }, + { + "dd_input":{ + "dd_name":SYSIN_DD, + "content":contents + } + }, ], ) for result in results.contacted.values(): @@ -1264,6 +1323,7 @@ def test_create_new_file(ansible_zos_module): ], ) results2 = hosts.all.command(cmd=f"cat 
{DEFAULT_PATH_WITH_FILE}") + results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1301,6 +1361,7 @@ def test_write_to_existing_file(ansible_zos_module): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") + results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1426,6 +1487,7 @@ def test_file_path_options(ansible_zos_module, access_group, status_group): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") + results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1468,6 +1530,7 @@ def test_file_block_size(ansible_zos_module, block_size): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") + results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1510,6 +1573,7 @@ def test_file_record_length(ansible_zos_module, record_length): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") + results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1552,6 +1616,7 @@ def test_file_record_format(ansible_zos_module, record_format): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") + results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", 
{}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1715,6 +1780,7 @@ def test_concatenation_with_data_set_dd_and_response(ansible_zos_module): hosts = ansible_zos_module default_data_set = get_tmp_ds_name() default_data_set_2 = get_tmp_ds_name() + default_data_set_2 = get_tmp_ds_name() hosts.all.zos_data_set(name=default_data_set, state="absent") hosts.all.zos_data_set(name=default_data_set_2, state="absent") idcams_dataset, idcams_listcat_dataset_cmd = get_temp_idcams_dataset(hosts) @@ -1723,6 +1789,27 @@ def test_concatenation_with_data_set_dd_and_response(ansible_zos_module): program_name="idcams", auth=True, dds=[ + { + "dd_concat":{ + "dd_name":SYSPRINT_DD, + "dds":[ + { + "dd_data_set":{ + "data_set_name":default_data_set, + "disposition":"new", + "type":"seq", + "return_content":{ + "type":"text" + }, + } + }, + { + "dd_data_set":{ + "data_set_name":default_data_set_2, + "disposition":"new", + "type":"seq", + } + }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -1773,6 +1860,7 @@ def test_concatenation_with_data_set_dd_with_replace_and_backup(ansible_zos_modu hosts = ansible_zos_module default_data_set = get_tmp_ds_name() default_data_set_2 = get_tmp_ds_name() + default_data_set_2 = get_tmp_ds_name() hosts.all.zos_data_set(name=default_data_set, state="present", type="seq") hosts.all.zos_data_set(name=default_data_set_2, state="present", type="seq") idcams_dataset, idcams_listcat_dataset_cmd = get_temp_idcams_dataset(hosts) @@ -1781,6 +1869,31 @@ def test_concatenation_with_data_set_dd_with_replace_and_backup(ansible_zos_modu program_name="idcams", auth=True, dds=[ + { + "dd_concat":{ + "dd_name":SYSPRINT_DD, + "dds":[ + { + "dd_data_set":{ + "data_set_name":default_data_set, + "disposition":"new", + "type":"seq", + "replace":True, + "backup":True, + "return_content":{ + "type":"text" + }, + } + }, + { + "dd_data_set":{ + "data_set_name":default_data_set_2, + "disposition":"new", + "type":"seq", + "replace":True, + "backup":True, + } + }, { 
"dd_concat":{ "dd_name":SYSPRINT_DD, @@ -1832,6 +1945,7 @@ def test_concatenation_with_data_set_dd_with_replace_and_backup(ansible_zos_modu assert ( result.get("backups")[1].get("original_name").lower() == default_data_set_2.lower() + == default_data_set_2.lower() ) assert result.get("ret_code", {}).get("code", -1) == 0 assert len(result.get("dd_names", [])) > 0 @@ -1850,6 +1964,8 @@ def test_concatenation_with_data_set_member(ansible_zos_module): default_data_set = get_tmp_ds_name() default_data_set_2 = get_tmp_ds_name() default_data_set_with_member = default_data_set + '(MEM)' + default_data_set_2 = get_tmp_ds_name() + default_data_set_with_member = default_data_set + '(MEM)' hosts.all.zos_data_set(name=default_data_set, state="present", type="pds") hosts.all.zos_data_set(name=default_data_set_2, state="absent") idcams_dataset, idcams_listcat_dataset_cmd = get_temp_idcams_dataset(hosts) @@ -1858,6 +1974,25 @@ def test_concatenation_with_data_set_member(ansible_zos_module): program_name="idcams", auth=True, dds=[ + { + "dd_concat":{ + "dd_name":SYSPRINT_DD, + "dds":[ + { + "dd_data_set":{ + "data_set_name":default_data_set_with_member, + "return_content":{ + "type":"text" + }, + } + }, + { + "dd_data_set":{ + "data_set_name":default_data_set_2, + "disposition":"new", + "type":"seq", + } + }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -1890,6 +2025,7 @@ def test_concatenation_with_data_set_member(ansible_zos_module): ) results2 = hosts.all.shell( cmd=f"cat \"//'{default_data_set_with_member}'\"" + cmd=f"cat \"//'{default_data_set_with_member}'\"" ) for result in results.contacted.values(): @@ -1910,6 +2046,7 @@ def test_concatenation_with_unix_dd_and_response_datasets(ansible_zos_module): try: hosts = ansible_zos_module default_data_set_2 = get_tmp_ds_name() + default_data_set_2 = get_tmp_ds_name() hosts.all.file(path=DEFAULT_PATH, state="directory") hosts.all.file(path=DEFAULT_PATH_WITH_FILE, state="absent") hosts.all.zos_data_set(name=default_data_set_2, 
state="absent") @@ -1919,6 +2056,25 @@ def test_concatenation_with_unix_dd_and_response_datasets(ansible_zos_module): program_name="idcams", auth=True, dds=[ + { + "dd_concat":{ + "dd_name":SYSPRINT_DD, + "dds":[ + { + "dd_unix":{ + "path":DEFAULT_PATH_WITH_FILE, + "return_content":{ + "type":"text" + }, + } + }, + { + "dd_data_set":{ + "data_set_name":default_data_set_2, + "disposition":"new", + "type":"seq", + } + }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -1973,6 +2129,26 @@ def test_concatenation_with_unix_dd_and_response_uss(ansible_zos_module): program_name="idcams", auth=True, dds=[ + { + "dd_concat":{ + "dd_name":SYSPRINT_DD, + "dds":[ + { + "dd_unix":{ + "path":DEFAULT_PATH_WITH_FILE, + "return_content":{ + "type":"text" + }, + } + }, + { + "dd_input":{ + "content":"Hello world!", + "return_content":{ + "type":"text" + }, + } + }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -2269,6 +2445,14 @@ def test_authorized_program_run_authorized(ansible_zos_module): }, }, }, + { + "dd_output":{ + "dd_name":SYSPRINT_DD, + "return_content":{ + "type":"text" + }, + }, + }, ], ) for result in results.contacted.values(): From 176e63b4e8182a0a728f4edd12b170f75168539c Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Mon, 12 Aug 2024 15:11:59 -0600 Subject: [PATCH 02/13] Updated release notes --- docs/source/release_notes.rst | 9 --------- 1 file changed, 9 deletions(-) diff --git a/docs/source/release_notes.rst b/docs/source/release_notes.rst index 521a8f9da..45f3f100a 100644 --- a/docs/source/release_notes.rst +++ b/docs/source/release_notes.rst @@ -9,15 +9,6 @@ Releases Version 1.11.0-beta.1 ===================== -Release Summary ---------------- - -Release Date: '2024-08-05' -This changelog describes all changes made to the modules and plugins included -in this collection. The release date is the date the changelog is created. 
-For additional details such as required dependencies and availability review -the collections `release notes `__ - Minor Changes ------------- From 0ac81703227ec17d6a4931da7c6be23d64b53a74 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Mon, 12 Aug 2024 15:17:34 -0600 Subject: [PATCH 03/13] Fixed trailing parenthesis --- docs/source/modules/zos_blockinfile.rst | 2 +- plugins/modules/zos_blockinfile.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/source/modules/zos_blockinfile.rst b/docs/source/modules/zos_blockinfile.rst index 3bf2cd85b..6c07f4e22 100644 --- a/docs/source/modules/zos_blockinfile.rst +++ b/docs/source/modules/zos_blockinfile.rst @@ -33,7 +33,7 @@ src The USS file must be an absolute pathname. - Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1``.) + Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1).`` | **required**: True | **type**: str diff --git a/plugins/modules/zos_blockinfile.py b/plugins/modules/zos_blockinfile.py index 3c89162cd..78cf68770 100644 --- a/plugins/modules/zos_blockinfile.py +++ b/plugins/modules/zos_blockinfile.py @@ -39,7 +39,7 @@ PS (sequential data set), member of a PDS or PDSE, PDS, PDSE. - The USS file must be an absolute pathname. - Generation data set (GDS) relative name of generation already - created. C(e.g. SOME.CREATION(-1).) + created. C(e.g. SOME.CREATION(-1\).) 
type: str aliases: [ path, destfile, name ] required: true From 58c8d2ea53e30221100de639e56a5a26d5836d37 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Tue, 13 Aug 2024 17:06:02 -0600 Subject: [PATCH 04/13] Updated zos_mvs_raw merge conflicts --- .../modules/test_zos_mvs_raw_func.py | 197 +----------------- 1 file changed, 7 insertions(+), 190 deletions(-) diff --git a/tests/functional/modules/test_zos_mvs_raw_func.py b/tests/functional/modules/test_zos_mvs_raw_func.py index 64152915a..230367175 100644 --- a/tests/functional/modules/test_zos_mvs_raw_func.py +++ b/tests/functional/modules/test_zos_mvs_raw_func.py @@ -24,7 +24,6 @@ EXISTING_DATA_SET = "user.private.proclib" DEFAULT_PATH = "/tmp/testdir" DEFAULT_PATH_WITH_FILE = f"{DEFAULT_PATH}/testfile" -DEFAULT_PATH_WITH_FILE = f"{DEFAULT_PATH}/testfile" DEFAULT_DD = "MYDD" SYSIN_DD = "SYSIN" SYSPRINT_DD = "SYSPRINT" @@ -56,12 +55,6 @@ def test_failing_name_format(ansible_zos_module): "data_set_name":"!!^&.BAD.NAME" } }], - dds=[{ - "dd_data_set":{ - "dd_name":DEFAULT_DD, - "data_set_name":"!!^&.BAD.NAME" - } - }], ) for result in results.contacted.values(): assert "ValueError" in result.get("msg") @@ -216,7 +209,6 @@ def test_new_disposition_for_data_set_members(ansible_zos_module): hosts = ansible_zos_module default_data_set = get_tmp_ds_name() default_data_set_with_member = default_data_set + '(MEM)' - default_data_set_with_member = default_data_set + '(MEM)' hosts.all.zos_data_set(name=default_data_set, state="absent") idcams_dataset, idcams_listcat_dataset_cmd = get_temp_idcams_dataset(hosts) @@ -262,7 +254,6 @@ def test_dispositions_for_existing_data_set_members(ansible_zos_module, disposit hosts = ansible_zos_module default_data_set = get_tmp_ds_name() default_data_set_with_member = default_data_set + '(MEM)' - default_data_set_with_member = default_data_set + '(MEM)' hosts.all.zos_data_set( name=default_data_set, type="pds", state="present", replace=True ) @@ -365,7 +356,6 @@ def 
test_normal_dispositions_data_set( ("b", 3, 1, 56664), ("k", 3, 1, 56664), ("m", 3, 1, 3003192), - ("m", 3, 1, 3003192), ], ) def test_space_types(ansible_zos_module, space_type, primary, secondary, expected): @@ -403,7 +393,6 @@ def test_space_types(ansible_zos_module, space_type, primary, secondary, expecte ], ) - results2 = hosts.all.command(cmd=f"dls -l -s {default_data_set}") results2 = hosts.all.command(cmd=f"dls -l -s {default_data_set}") for result in results.contacted.values(): @@ -454,7 +443,6 @@ def test_data_set_types_non_vsam(ansible_zos_module, data_set_type, volumes_on_s ], ) results = hosts.all.command(cmd=f"dls {default_data_set}") - results = hosts.all.command(cmd=f"dls {default_data_set}") for result in results.contacted.values(): assert "BGYSC1103E" not in result.get("stderr", "") @@ -492,15 +480,6 @@ def test_data_set_types_vsam(ansible_zos_module, data_set_type, volumes_on_syste "volumes":[volume_1], }, } - { - "dd_data_set":{ - "dd_name":SYSPRINT_DD, - "data_set_name":default_data_set, - "disposition":"new", - "type":data_set_type, - "volumes":[volume_1], - }, - } if data_set_type != "ksds" else { "dd_data_set":{ @@ -524,7 +503,6 @@ def test_data_set_types_vsam(ansible_zos_module, data_set_type, volumes_on_syste # * we hope to see EDC5041I An error was detected at the system level when opening a file. 
# * because that means data set exists and is VSAM so we can't read it results = hosts.all.command(cmd=f"head \"//'{default_data_set}'\"") - results = hosts.all.command(cmd=f"head \"//'{default_data_set}'\"") for result in results.contacted.values(): assert "EDC5041I" in result.get("stderr", "") or "EDC5049I" in result.get("stderr", "") finally: @@ -569,12 +547,10 @@ def test_record_formats(ansible_zos_module, record_format, volumes_on_systems): ], ) - results = hosts.all.command(cmd=f"dls -l {default_data_set}") results = hosts.all.command(cmd=f"dls -l {default_data_set}") for result in results.contacted.values(): assert str(f" {record_format.upper()} ") in result.get("stdout", "") - assert str(f" {record_format.upper()} ") in result.get("stdout", "") finally: hosts.all.zos_data_set(name=default_data_set, state="absent") if idcams_dataset: @@ -587,7 +563,7 @@ def test_record_formats(ansible_zos_module, record_format, volumes_on_systems): ("text", "IDCAMS SYSTEM"), ( "base64", - "\udcc9\udcc4\udcc3\udcc1\udcd4\udce2@@\udce2\udce8\udce2\udce3\udcc5", + "������@@������", ), ], ) @@ -644,7 +620,7 @@ def test_return_content_type(ansible_zos_module, return_content_type, expected, @pytest.mark.parametrize( "src_encoding,response_encoding,expected", [ - ("iso8859-1", "ibm-1047", "qcfe\udcebB||BTBFg\udceb|Bg\udcfdGqfgB"), + ("iso8859-1", "ibm-1047", "qcfe�B||BTBFg�|Bg�GqfgB||"), ( "ibm-1047", "iso8859-1", @@ -1038,7 +1014,6 @@ def test_input_large(ansible_zos_module): contents = "" for i in range(50000): contents += f"this is line {i}\n" - contents += f"this is line {i}\n" results = hosts.all.zos_mvs_raw( program_name="idcams", auth=True, @@ -1060,23 +1035,6 @@ def test_input_large(ansible_zos_module): "content":contents } }, - { - "dd_data_set":{ - "dd_name":SYSPRINT_DD, - "data_set_name":default_data_set, - "disposition":"new", - "type":"seq", - "return_content":{ - "type":"text" - }, - }, - }, - { - "dd_input":{ - "dd_name":SYSIN_DD, - "content":contents - } - }, ], ) 
for result in results.contacted.values(): @@ -1120,23 +1078,6 @@ def test_input_provided_as_list(ansible_zos_module): "content":contents } }, - { - "dd_data_set":{ - "dd_name":SYSPRINT_DD, - "data_set_name":default_data_set, - "disposition":"new", - "type":"seq", - "return_content":{ - "type":"text" - }, - }, - }, - { - "dd_input":{ - "dd_name":SYSIN_DD, - "content":contents - } - }, ], ) for result in results.contacted.values(): @@ -1155,7 +1096,7 @@ def test_input_provided_as_list(ansible_zos_module): ("text", "LISTCAT ENTRIES"), ( "base64", - "@\udcd3\udcc9\udce2\udce3\udcc3\udcc1\udce3@\udcc5\udcd5\udce3\udcd9\udcc9\udcc5", + "@�������@�������", ), ], ) @@ -1206,7 +1147,8 @@ def test_input_return_content_types(ansible_zos_module, return_content_type, exp ( "iso8859-1", "ibm-1047", - "|\udceeqBFfeF|g\udcefF\udcfdqgB\udcd4\udcd0", + "|�qBFfeF|g�F�qgB��", + ), ( "ibm-1047", @@ -1323,7 +1265,6 @@ def test_create_new_file(ansible_zos_module): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") - results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1361,7 +1302,6 @@ def test_write_to_existing_file(ansible_zos_module): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") - results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1487,7 +1427,6 @@ def test_file_path_options(ansible_zos_module, access_group, status_group): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") - results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1530,7 +1469,6 @@ def 
test_file_block_size(ansible_zos_module, block_size): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") - results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1573,7 +1511,6 @@ def test_file_record_length(ansible_zos_module, record_length): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") - results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1616,7 +1553,6 @@ def test_file_record_format(ansible_zos_module, record_format): ], ) results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") - results2 = hosts.all.command(cmd=f"cat {DEFAULT_PATH_WITH_FILE}") for result in results.contacted.values(): assert result.get("ret_code", {}).get("code", -1) == 0 for result in results2.contacted.values(): @@ -1633,7 +1569,7 @@ def test_file_record_format(ansible_zos_module, record_format): ("text", "IDCAMS SYSTEM"), ( "base64", - "@\udcd3\udcc9\udce2\udce3\udcc3\udcc1\udce3@\udcc5\udcd5\udce3\udcd9\udcc9\udcc5", + "�������@@������@��������@", ), ], ) @@ -1679,7 +1615,7 @@ def test_file_return_content(ansible_zos_module, return_content_type, expected): @pytest.mark.parametrize( "src_encoding,response_encoding,expected", [ - ("iso8859-1", "ibm-1047", "qcfe\udcebB||BTBFg\udceb|Bg\udcfdGqfgB"), + ("iso8859-1", "ibm-1047", "qcfe�B||BTBFg�|Bg�GqfgB|"), ( "ibm-1047", "iso8859-1", @@ -1780,7 +1716,6 @@ def test_concatenation_with_data_set_dd_and_response(ansible_zos_module): hosts = ansible_zos_module default_data_set = get_tmp_ds_name() default_data_set_2 = get_tmp_ds_name() - default_data_set_2 = get_tmp_ds_name() hosts.all.zos_data_set(name=default_data_set, state="absent") hosts.all.zos_data_set(name=default_data_set_2, 
state="absent") idcams_dataset, idcams_listcat_dataset_cmd = get_temp_idcams_dataset(hosts) @@ -1789,27 +1724,6 @@ def test_concatenation_with_data_set_dd_and_response(ansible_zos_module): program_name="idcams", auth=True, dds=[ - { - "dd_concat":{ - "dd_name":SYSPRINT_DD, - "dds":[ - { - "dd_data_set":{ - "data_set_name":default_data_set, - "disposition":"new", - "type":"seq", - "return_content":{ - "type":"text" - }, - } - }, - { - "dd_data_set":{ - "data_set_name":default_data_set_2, - "disposition":"new", - "type":"seq", - } - }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -1860,7 +1774,6 @@ def test_concatenation_with_data_set_dd_with_replace_and_backup(ansible_zos_modu hosts = ansible_zos_module default_data_set = get_tmp_ds_name() default_data_set_2 = get_tmp_ds_name() - default_data_set_2 = get_tmp_ds_name() hosts.all.zos_data_set(name=default_data_set, state="present", type="seq") hosts.all.zos_data_set(name=default_data_set_2, state="present", type="seq") idcams_dataset, idcams_listcat_dataset_cmd = get_temp_idcams_dataset(hosts) @@ -1869,31 +1782,6 @@ def test_concatenation_with_data_set_dd_with_replace_and_backup(ansible_zos_modu program_name="idcams", auth=True, dds=[ - { - "dd_concat":{ - "dd_name":SYSPRINT_DD, - "dds":[ - { - "dd_data_set":{ - "data_set_name":default_data_set, - "disposition":"new", - "type":"seq", - "replace":True, - "backup":True, - "return_content":{ - "type":"text" - }, - } - }, - { - "dd_data_set":{ - "data_set_name":default_data_set_2, - "disposition":"new", - "type":"seq", - "replace":True, - "backup":True, - } - }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -1945,7 +1833,6 @@ def test_concatenation_with_data_set_dd_with_replace_and_backup(ansible_zos_modu assert ( result.get("backups")[1].get("original_name").lower() == default_data_set_2.lower() - == default_data_set_2.lower() ) assert result.get("ret_code", {}).get("code", -1) == 0 assert len(result.get("dd_names", [])) > 0 @@ -1964,8 +1851,6 @@ def 
test_concatenation_with_data_set_member(ansible_zos_module): default_data_set = get_tmp_ds_name() default_data_set_2 = get_tmp_ds_name() default_data_set_with_member = default_data_set + '(MEM)' - default_data_set_2 = get_tmp_ds_name() - default_data_set_with_member = default_data_set + '(MEM)' hosts.all.zos_data_set(name=default_data_set, state="present", type="pds") hosts.all.zos_data_set(name=default_data_set_2, state="absent") idcams_dataset, idcams_listcat_dataset_cmd = get_temp_idcams_dataset(hosts) @@ -1974,25 +1859,6 @@ def test_concatenation_with_data_set_member(ansible_zos_module): program_name="idcams", auth=True, dds=[ - { - "dd_concat":{ - "dd_name":SYSPRINT_DD, - "dds":[ - { - "dd_data_set":{ - "data_set_name":default_data_set_with_member, - "return_content":{ - "type":"text" - }, - } - }, - { - "dd_data_set":{ - "data_set_name":default_data_set_2, - "disposition":"new", - "type":"seq", - } - }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -2025,7 +1891,6 @@ def test_concatenation_with_data_set_member(ansible_zos_module): ) results2 = hosts.all.shell( cmd=f"cat \"//'{default_data_set_with_member}'\"" - cmd=f"cat \"//'{default_data_set_with_member}'\"" ) for result in results.contacted.values(): @@ -2046,7 +1911,6 @@ def test_concatenation_with_unix_dd_and_response_datasets(ansible_zos_module): try: hosts = ansible_zos_module default_data_set_2 = get_tmp_ds_name() - default_data_set_2 = get_tmp_ds_name() hosts.all.file(path=DEFAULT_PATH, state="directory") hosts.all.file(path=DEFAULT_PATH_WITH_FILE, state="absent") hosts.all.zos_data_set(name=default_data_set_2, state="absent") @@ -2056,25 +1920,6 @@ def test_concatenation_with_unix_dd_and_response_datasets(ansible_zos_module): program_name="idcams", auth=True, dds=[ - { - "dd_concat":{ - "dd_name":SYSPRINT_DD, - "dds":[ - { - "dd_unix":{ - "path":DEFAULT_PATH_WITH_FILE, - "return_content":{ - "type":"text" - }, - } - }, - { - "dd_data_set":{ - "data_set_name":default_data_set_2, - "disposition":"new", - 
"type":"seq", - } - }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -2129,26 +1974,6 @@ def test_concatenation_with_unix_dd_and_response_uss(ansible_zos_module): program_name="idcams", auth=True, dds=[ - { - "dd_concat":{ - "dd_name":SYSPRINT_DD, - "dds":[ - { - "dd_unix":{ - "path":DEFAULT_PATH_WITH_FILE, - "return_content":{ - "type":"text" - }, - } - }, - { - "dd_input":{ - "content":"Hello world!", - "return_content":{ - "type":"text" - }, - } - }, { "dd_concat":{ "dd_name":SYSPRINT_DD, @@ -2445,14 +2270,6 @@ def test_authorized_program_run_authorized(ansible_zos_module): }, }, }, - { - "dd_output":{ - "dd_name":SYSPRINT_DD, - "return_content":{ - "type":"text" - }, - }, - }, ], ) for result in results.contacted.values(): From b80a809075309d323c0ad0c23a0de20ddcb1c9b6 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Tue, 13 Aug 2024 17:09:20 -0600 Subject: [PATCH 05/13] Updated test_zos_mvs_raw git merge from dev branch --- tests/functional/modules/test_zos_mvs_raw_func.py | 13 ++++++------- 1 file changed, 6 insertions(+), 7 deletions(-) diff --git a/tests/functional/modules/test_zos_mvs_raw_func.py b/tests/functional/modules/test_zos_mvs_raw_func.py index 230367175..00dd56e31 100644 --- a/tests/functional/modules/test_zos_mvs_raw_func.py +++ b/tests/functional/modules/test_zos_mvs_raw_func.py @@ -563,7 +563,7 @@ def test_record_formats(ansible_zos_module, record_format, volumes_on_systems): ("text", "IDCAMS SYSTEM"), ( "base64", - "������@@������", + "\udcc9\udcc4\udcc3\udcc1\udcd4\udce2@@\udce2\udce8\udce2\udce3\udcc5", ), ], ) @@ -620,7 +620,7 @@ def test_return_content_type(ansible_zos_module, return_content_type, expected, @pytest.mark.parametrize( "src_encoding,response_encoding,expected", [ - ("iso8859-1", "ibm-1047", "qcfe�B||BTBFg�|Bg�GqfgB||"), + ("iso8859-1", "ibm-1047", "qcfe\udcebB||BTBFg\udceb|Bg\udcfdGqfgB"), ( "ibm-1047", "iso8859-1", @@ -1096,7 +1096,7 @@ def test_input_provided_as_list(ansible_zos_module): ("text", "LISTCAT ENTRIES"), ( 
"base64", - "@�������@�������", + "@\udcd3\udcc9\udce2\udce3\udcc3\udcc1\udce3@\udcc5\udcd5\udce3\udcd9\udcc9\udcc5", ), ], ) @@ -1147,8 +1147,7 @@ def test_input_return_content_types(ansible_zos_module, return_content_type, exp ( "iso8859-1", "ibm-1047", - "|�qBFfeF|g�F�qgB��", - + "|\udceeqBFfeF|g\udcefF\udcfdqgB\udcd4\udcd0", ), ( "ibm-1047", @@ -1569,7 +1568,7 @@ def test_file_record_format(ansible_zos_module, record_format): ("text", "IDCAMS SYSTEM"), ( "base64", - "�������@@������@��������@", + "@\udcd3\udcc9\udce2\udce3\udcc3\udcc1\udce3@\udcc5\udcd5\udce3\udcd9\udcc9\udcc5", ), ], ) @@ -1615,7 +1614,7 @@ def test_file_return_content(ansible_zos_module, return_content_type, expected): @pytest.mark.parametrize( "src_encoding,response_encoding,expected", [ - ("iso8859-1", "ibm-1047", "qcfe�B||BTBFg�|Bg�GqfgB|"), + ("iso8859-1", "ibm-1047", "qcfe\udcebB||BTBFg\udceb|Bg\udcfdGqfgB"), ( "ibm-1047", "iso8859-1", From 6e0b8201babb10ed3c0041d6d467a1ad27a144ac Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Tue, 13 Aug 2024 17:11:02 -0600 Subject: [PATCH 06/13] Removed unused import --- plugins/action/zos_job_submit.py | 4 ---- 1 file changed, 4 deletions(-) diff --git a/plugins/action/zos_job_submit.py b/plugins/action/zos_job_submit.py index 3f9006a34..90b0670ac 100644 --- a/plugins/action/zos_job_submit.py +++ b/plugins/action/zos_job_submit.py @@ -26,10 +26,6 @@ display = Display() -from ansible_collections.ibm.ibm_zos_core.plugins.module_utils import template - - -display = Display() class ActionModule(ActionBase): From 87368140794fd331e2cc26570f128787bed6c77c Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Tue, 13 Aug 2024 17:14:45 -0600 Subject: [PATCH 07/13] Updated RST files --- docs/source/modules/zos_apf.rst | 68 ++-- docs/source/modules/zos_archive.rst | 90 ++--- docs/source/modules/zos_backup_restore.rst | 66 ++-- docs/source/modules/zos_blockinfile.rst | 56 +-- docs/source/modules/zos_copy.rst | 193 +++++----- 
docs/source/modules/zos_data_set.rst | 153 ++++---- docs/source/modules/zos_encode.rst | 35 +- docs/source/modules/zos_fetch.rst | 18 +- docs/source/modules/zos_find.rst | 21 +- docs/source/modules/zos_gather_facts.rst | 14 +- docs/source/modules/zos_job_output.rst | 16 +- docs/source/modules/zos_job_query.rst | 20 +- docs/source/modules/zos_job_submit.rst | 61 +-- docs/source/modules/zos_lineinfile.rst | 72 ++-- docs/source/modules/zos_mount.rst | 86 ++--- docs/source/modules/zos_mvs_raw.rst | 354 +++++++++--------- docs/source/modules/zos_operator.rst | 2 +- .../modules/zos_operator_action_query.rst | 20 +- docs/source/modules/zos_ping.rst | 8 +- docs/source/modules/zos_script.rst | 39 +- docs/source/modules/zos_tso_command.rst | 4 +- docs/source/modules/zos_unarchive.rst | 62 +-- docs/source/modules/zos_volume_init.rst | 34 +- 23 files changed, 743 insertions(+), 749 deletions(-) diff --git a/docs/source/modules/zos_apf.rst b/docs/source/modules/zos_apf.rst index 265d3fff5..a94fdc95e 100644 --- a/docs/source/modules/zos_apf.rst +++ b/docs/source/modules/zos_apf.rst @@ -37,7 +37,7 @@ library state - Ensure that the library is added \ :literal:`state=present`\ or removed \ :literal:`state=absent`\ . + Ensure that the library is added ``state=present`` or removed ``state=absent``. The APF list format has to be "DYNAMIC". @@ -58,24 +58,24 @@ force_dynamic volume - The identifier for the volume containing the library specified in the \ :literal:`library`\ parameter. The values must be one the following. + The identifier for the volume containing the library specified in the ``library`` parameter. The values must be one the following. 1. The volume serial number. - 2. Six asterisks \ :literal:`\*\*\*\*\*\*`\ , indicating that the system must use the volume serial number of the current system residence (SYSRES) volume. + 2. Six asterisks ``******``, indicating that the system must use the volume serial number of the current system residence (SYSRES) volume. - 3. 
\*MCAT\*, indicating that the system must use the volume serial number of the volume containing the master catalog. + 3. *MCAT*, indicating that the system must use the volume serial number of the volume containing the master catalog. - If \ :literal:`volume`\ is not specified, \ :literal:`library`\ has to be cataloged. + If ``volume`` is not specified, ``library`` has to be cataloged. | **required**: False | **type**: str sms - Indicates that the library specified in the \ :literal:`library`\ parameter is managed by the storage management subsystem (SMS), and therefore no volume is associated with the library. + Indicates that the library specified in the ``library`` parameter is managed by the storage management subsystem (SMS), and therefore no volume is associated with the library. - If \ :literal:`sms=True`\ , \ :literal:`volume`\ value will be ignored. + If ``sms=True``, ``volume`` value will be ignored. | **required**: False | **type**: bool @@ -83,13 +83,13 @@ sms operation - Change APF list format to "DYNAMIC" \ :literal:`operation=set\_dynamic`\ or "STATIC" \ :literal:`operation=set\_static`\ + Change APF list format to "DYNAMIC" ``operation=set_dynamic`` or "STATIC" ``operation=set_static`` - Display APF list current format \ :literal:`operation=check\_format`\ + Display APF list current format ``operation=check_format`` - Display APF list entries when \ :literal:`operation=list`\ \ :literal:`library`\ , \ :literal:`volume`\ and \ :literal:`sms`\ will be used as filters. + Display APF list entries when ``operation=list`` ``library``, ``volume`` and ``sms`` will be used as filters. - If \ :literal:`operation`\ is not set, add or remove operation will be ignored. + If ``operation`` is not set, add or remove operation will be ignored. | **required**: False | **type**: str @@ -99,23 +99,23 @@ operation tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. 
- The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str persistent - Add/remove persistent entries to or from \ :emphasis:`data\_set\_name`\ + Add/remove persistent entries to or from *data_set_name* - \ :literal:`library`\ will not be persisted or removed if \ :literal:`persistent=None`\ + ``library`` will not be persisted or removed if ``persistent=None`` | **required**: False | **type**: dict data_set_name - The data set name used for persisting or removing a \ :literal:`library`\ from the APF list. + The data set name used for persisting or removing a ``library`` from the APF list. | **required**: True | **type**: str @@ -124,13 +124,13 @@ persistent marker The marker line template. - \ :literal:`{mark}`\ will be replaced with "BEGIN" and "END". + ``{mark}`` will be replaced with "BEGIN" and "END". - Using a custom marker without the \ :literal:`{mark}`\ variable may result in the block being repeatedly inserted on subsequent playbook runs. + Using a custom marker without the ``{mark}`` variable may result in the block being repeatedly inserted on subsequent playbook runs. - \ :literal:`{mark}`\ length may not exceed 72 characters. + ``{mark}`` length may not exceed 72 characters. - The timestamp (\) used in the default marker follows the '+%Y%m%d-%H%M%S' date format + The timestamp () used in the default marker follows the '+%Y%m%d-%H%M%S' date format | **required**: False | **type**: str @@ -138,9 +138,9 @@ persistent backup - Creates a backup file or backup data set for \ :emphasis:`data\_set\_name`\ , including the timestamp information to ensure that you retrieve the original APF list defined in \ :emphasis:`data\_set\_name`\ ". 
+ Creates a backup file or backup data set for *data_set_name*, including the timestamp information to ensure that you retrieve the original APF list defined in *data_set_name*". - \ :emphasis:`backup\_name`\ can be used to specify a backup file name if \ :emphasis:`backup=true`\ . + *backup_name* can be used to specify a backup file name if *backup=true*. The backup file name will be return on either success or failure of module execution such that data can be retrieved. @@ -152,11 +152,11 @@ persistent backup_name Specify the USS file name or data set name for the destination backup. - If the source \ :emphasis:`data\_set\_name`\ is a USS file or path, the backup\_name name must be a file or path name, and the USS file or path must be an absolute path name. + If the source *data_set_name* is a USS file or path, the backup_name name must be a file or path name, and the USS file or path must be an absolute path name. - If the source is an MVS data set, the backup\_name must be an MVS data set name. + If the source is an MVS data set, the backup_name must be an MVS data set name. - If the backup\_name is not provided, the default backup\_name will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp. For example, \ :literal:`/path/file\_name.2020-04-23-08-32-29-bak.tar`\ . + If the backup_name is not provided, the default backup_name will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp. For example, ``/path/file_name.2020-04-23-08-32-29-bak.tar``. If the source is an MVS data set, it will be a data set with a random name generated by calling the ZOAU API. The MVS backup data set recovery can be done by renaming it. @@ -168,9 +168,9 @@ persistent batch A list of dictionaries for adding or removing libraries. 
- This is mutually exclusive with \ :literal:`library`\ , \ :literal:`volume`\ , \ :literal:`sms`\ + This is mutually exclusive with ``library``, ``volume``, ``sms`` - Can be used with \ :literal:`persistent`\ + Can be used with ``persistent`` | **required**: False | **type**: list @@ -185,24 +185,24 @@ batch volume - The identifier for the volume containing the library specified on the \ :literal:`library`\ parameter. The values must be one of the following. + The identifier for the volume containing the library specified on the ``library`` parameter. The values must be one of the following. 1. The volume serial number - 2. Six asterisks \ :literal:`\*\*\*\*\*\*`\ , indicating that the system must use the volume serial number of the current system residence (SYSRES) volume. + 2. Six asterisks ``******``, indicating that the system must use the volume serial number of the current system residence (SYSRES) volume. - 3. \*MCAT\*, indicating that the system must use the volume serial number of the volume containing the master catalog. + 3. *MCAT*, indicating that the system must use the volume serial number of the volume containing the master catalog. - If \ :literal:`volume`\ is not specified, \ :literal:`library`\ has to be cataloged. + If ``volume`` is not specified, ``library`` has to be cataloged. | **required**: False | **type**: str sms - Indicates that the library specified in the \ :literal:`library`\ parameter is managed by the storage management subsystem (SMS), and therefore no volume is associated with the library. + Indicates that the library specified in the ``library`` parameter is managed by the storage management subsystem (SMS), and therefore no volume is associated with the library. - If true \ :literal:`volume`\ will be ignored. + If true ``volume`` will be ignored. | **required**: False | **type**: bool @@ -283,9 +283,9 @@ Return Values stdout The stdout from ZOAU command apfadm. Output varies based on the type of operation. 
- state\> stdout of the executed operator command (opercmd), "SETPROG" from ZOAU command apfadm + state> stdout of the executed operator command (opercmd), "SETPROG" from ZOAU command apfadm - operation\> stdout of operation options list\> Returns a list of dictionaries of APF list entries [{'vol': 'PP0L6P', 'ds': 'DFH.V5R3M0.CICS.SDFHAUTH'}, {'vol': 'PP0L6P', 'ds': 'DFH.V5R3M0.CICS.SDFJAUTH'}, ...] set\_dynamic\> Set to DYNAMIC set\_static\> Set to STATIC check\_format\> DYNAMIC or STATIC + operation> stdout of operation options list> Returns a list of dictionaries of APF list entries [{'vol': 'PP0L6P', 'ds': 'DFH.V5R3M0.CICS.SDFHAUTH'}, {'vol': 'PP0L6P', 'ds': 'DFH.V5R3M0.CICS.SDFJAUTH'}, ...] set_dynamic> Set to DYNAMIC set_static> Set to STATIC check_format> DYNAMIC or STATIC | **returned**: always | **type**: str diff --git a/docs/source/modules/zos_archive.rst b/docs/source/modules/zos_archive.rst index 8676d4cb7..bca1c5e82 100644 --- a/docs/source/modules/zos_archive.rst +++ b/docs/source/modules/zos_archive.rst @@ -20,7 +20,7 @@ Synopsis - Sources for archiving must be on the remote z/OS system. - Supported sources are USS (UNIX System Services) or z/OS data sets. - The archive remains on the remote z/OS system. -- For supported archive formats, see option \ :literal:`format`\ . +- For supported archive formats, see option ``format``. @@ -70,7 +70,7 @@ format terse_pack - Compression option for use with the terse format, \ :emphasis:`name=terse`\ . + Compression option for use with the terse format, *name=terse*. Pack will compress records in a data set so that the output results in lossless data compression. @@ -90,14 +90,14 @@ format If the data set provided exists, the data set must have the following attributes: LRECL=255, BLKSIZE=3120, and RECFM=VB - When providing the \ :emphasis:`xmit\_log\_data\_set`\ name, ensure there is adequate space. + When providing the *xmit_log_data_set* name, ensure there is adequate space. 
| **required**: False | **type**: str use_adrdssu - If set to true, the \ :literal:`zos\_archive`\ module will use Data Facility Storage Management Subsystem data set services (DFSMSdss) program ADRDSSU to compress data sets into a portable format before using \ :literal:`xmit`\ or \ :literal:`terse`\ . + If set to true, the ``zos_archive`` module will use Data Facility Storage Management Subsystem data set services (DFSMSdss) program ADRDSSU to compress data sets into a portable format before using ``xmit`` or ``terse``. | **required**: False | **type**: bool @@ -109,19 +109,19 @@ format dest The remote absolute path or data set where the archive should be created. - \ :emphasis:`dest`\ can be a USS file or MVS data set name. + *dest* can be a USS file or MVS data set name. - If \ :emphasis:`dest`\ has missing parent directories, they will be created. + If *dest* has missing parent directories, they will be created. - If \ :emphasis:`dest`\ is a nonexistent USS file, it will be created. + If *dest* is a nonexistent USS file, it will be created. - If \ :emphasis:`dest`\ is an existing file or data set and \ :emphasis:`force=true`\ , the existing \ :emphasis:`dest`\ will be deleted and recreated with attributes defined in the \ :emphasis:`dest\_data\_set`\ option or computed by the module. + If *dest* is an existing file or data set and *force=true*, the existing *dest* will be deleted and recreated with attributes defined in the *dest_data_set* option or computed by the module. - If \ :emphasis:`dest`\ is an existing file or data set and \ :emphasis:`force=false`\ or not specified, the module exits with a note to the user. + If *dest* is an existing file or data set and *force=false* or not specified, the module exits with a note to the user. - Destination data set attributes can be set using \ :emphasis:`dest\_data\_set`\ . + Destination data set attributes can be set using *dest_data_set*. 
- Destination data set space will be calculated based on space of source data sets provided and/or found by expanding the pattern name. Calculating space can impact module performance. Specifying space attributes in the \ :emphasis:`dest\_data\_set`\ option will improve performance. + Destination data set space will be calculated based on space of source data sets provided and/or found by expanding the pattern name. Calculating space can impact module performance. Specifying space attributes in the *dest_data_set* option will improve performance. | **required**: True | **type**: str @@ -130,9 +130,9 @@ dest exclude Remote absolute path, glob, or list of paths, globs, data set name patterns or generation data sets (GDSs) in relative notation for the file, files or data sets to exclude from src list and glob expansion. - Patterns (wildcards) can contain one of the following, \`?\`, \`\*\`. + Patterns (wildcards) can contain one of the following, `?`, `*`. - \* matches everything. + * matches everything. ? matches any single character. @@ -146,7 +146,7 @@ group When left unspecified, it uses the current group of the current use unless you are root, in which case it can preserve the previous ownership. - This option is only applicable if \ :literal:`dest`\ is USS, otherwise ignored. + This option is only applicable if ``dest`` is USS, otherwise ignored. | **required**: False | **type**: str @@ -155,13 +155,13 @@ group mode The permission of the destination archive file. - If \ :literal:`dest`\ is USS, this will act as Unix file mode, otherwise ignored. + If ``dest`` is USS, this will act as Unix file mode, otherwise ignored. - It should be noted that modes are octal numbers. The user must either add a leading zero so that Ansible's YAML parser knows it is an octal number (like \ :literal:`0644`\ or \ :literal:`01777`\ )or quote it (like \ :literal:`'644'`\ or \ :literal:`'1777'`\ ) so Ansible receives a string and can do its own conversion from string into number. 
Giving Ansible a number without following one of these rules will end up with a decimal number which will have unexpected results. + It should be noted that modes are octal numbers. The user must either add a leading zero so that Ansible's YAML parser knows it is an octal number (like ``0644`` or ``01777``)or quote it (like ``'644'`` or ``'1777'``) so Ansible receives a string and can do its own conversion from string into number. Giving Ansible a number without following one of these rules will end up with a decimal number which will have unexpected results. The mode may also be specified as a symbolic mode (for example, 'u+rwx' or 'u=rw,g=r,o=r') or a special string 'preserve'. - \ :emphasis:`mode=preserve`\ means that the file will be given the same permissions as the src file. + *mode=preserve* means that the file will be given the same permissions as the src file. | **required**: False | **type**: str @@ -172,14 +172,14 @@ owner When left unspecified, it uses the current user unless you are root, in which case it can preserve the previous ownership. - This option is only applicable if \ :literal:`dest`\ is USS, otherwise ignored. + This option is only applicable if ``dest`` is USS, otherwise ignored. | **required**: False | **type**: str remove - Remove any added source files , trees or data sets after module \ `zos\_archive <./zos_archive.html>`__\ adds them to the archive. Source files, trees and data sets are identified with option \ :emphasis:`src`\ . + Remove any added source files , trees or data sets after module `zos_archive <./zos_archive.html>`_ adds them to the archive. Source files, trees and data sets are identified with option *src*. | **required**: False | **type**: bool @@ -187,7 +187,7 @@ remove dest_data_set - Data set attributes to customize a \ :literal:`dest`\ data set to be archived into. + Data set attributes to customize a ``dest`` data set to be archived into. 
| **required**: False | **type**: dict @@ -210,18 +210,18 @@ dest_data_set space_primary - If the destination \ :emphasis:`dest`\ data set does not exist , this sets the primary space allocated for the data set. + If the destination *dest* data set does not exist, this sets the primary space allocated for the data set. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. | **required**: False | **type**: int space_secondary - If the destination \ :emphasis:`dest`\ data set does not exist , this sets the secondary space allocated for the data set. + If the destination *dest* data set does not exist, this sets the secondary space allocated for the data set. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. | **required**: False | **type**: int @@ -230,7 +230,7 @@ dest_data_set space_type If the destination data set does not exist, this sets the unit of measurement to use when defining primary and secondary space. - Valid units of size are \ :literal:`k`\ , \ :literal:`m`\ , \ :literal:`g`\ , \ :literal:`cyl`\ , and \ :literal:`trk`\ . + Valid units of size are ``k``, ``m``, ``g``, ``cyl``, and ``trk``. | **required**: False | **type**: str @@ -238,7 +238,7 @@ dest_data_set record_format - If the destination data set does not exist, this sets the format of the data set. (e.g \ :literal:`FB`\ ) + If the destination data set does not exist, this sets the format of the data set. (e.g. ``FB``) Choices are case-sensitive. @@ -315,18 +315,18 @@ dest_data_set tmp_hlq Override the default high level qualifier (HLQ) for temporary data sets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the environment variable value \ :literal:`TMPHLQ`\ is used. 
+ The default HLQ is the Ansible user used to execute the module and if that is not available, then the environment variable value ``TMPHLQ`` is used. | **required**: False | **type**: str force - If set to \ :literal:`true`\ and the remote file or data set \ :literal:`dest`\ will be deleted. Otherwise it will be created with the \ :literal:`dest\_data\_set`\ attributes or default values if \ :literal:`dest\_data\_set`\ is not specified. + If set to ``true`` and the remote file or data set ``dest`` exists, it will be deleted. Otherwise it will be created with the ``dest_data_set`` attributes or default values if ``dest_data_set`` is not specified. - If set to \ :literal:`false`\ , the file or data set will only be copied if the destination does not exist. + If set to ``false``, the file or data set will only be copied if the destination does not exist. - If set to \ :literal:`false`\ and destination exists, the module exits with a note to the user. + If set to ``false`` and destination exists, the module exits with a note to the user. | **required**: False | **type**: bool @@ -397,7 +397,7 @@ Examples format: name: terse format_options: - use_adrdssu: True + use_adrdssu: true - name: Archive multiple data sets into a new GDS zos_archive: @@ -406,7 +406,7 @@ Examples format: name: terse format_options: - use_adrdssu: True + use_adrdssu: true @@ -415,11 +415,11 @@ Notes ----- .. note:: - This module does not perform a send or transmit operation to a remote node. If you want to transport the archive you can use zos\_fetch to retrieve to the controller and then zos\_copy or zos\_unarchive for copying to a remote or send to the remote and then unpack the archive respectively. + This module does not perform a send or transmit operation to a remote node. If you want to transport the archive you can use zos_fetch to retrieve to the controller and then zos_copy or zos_unarchive for copying to a remote or send to the remote and then unpack the archive respectively. 
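The transport note above (archive on the host, then fetch to the controller) can be sketched as two tasks; the data set pattern and paths here are hypothetical, and a follow-on zos_copy or zos_unarchive task would move the archive to its final target:

```yaml
# Hypothetical sketch: pack data sets with ADRDSSU, then fetch the archive
# to the Ansible controller for onward transport.
- name: Archive data sets into a terse archive
  zos_archive:
    src: "USER.APP.*"
    dest: "USER.APP.ARCHIVE"
    format:
      name: terse
      format_options:
        use_adrdssu: true

- name: Retrieve the archive to the controller as a binary file
  zos_fetch:
    src: "USER.APP.ARCHIVE"
    dest: /tmp/archives/
    is_binary: true
```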
- When packing and using \ :literal:`use\_adrdssu`\ flag the module will take up to two times the space indicated in \ :literal:`dest\_data\_set`\ . + When packing and using the ``use_adrdssu`` flag, the module will take up to two times the space indicated in ``dest_data_set``. - tar, zip, bz2 and pax are archived using python \ :literal:`tarfile`\ library which uses the latest version available for each format, for compatibility when opening from system make sure to use the latest available version for the intended format. + tar, zip, bz2 and pax are archived using the Python ``tarfile`` library, which uses the latest version available for each format; for compatibility when opening from the system, make sure to use the latest available version for the intended format. @@ -439,27 +439,27 @@ Return Values state - The state of the input \ :literal:`src`\ . + The state of the input ``src``. - \ :literal:`absent`\ when the source files or data sets were removed. + ``absent`` when the source files or data sets were removed. - \ :literal:`present`\ when the source files or data sets were not removed. + ``present`` when the source files or data sets were not removed. - \ :literal:`incomplete`\ when \ :literal:`remove`\ was true and the source files or data sets were not removed. + ``incomplete`` when ``remove`` was true and the source files or data sets were not removed. | **returned**: always | **type**: str dest_state - The state of the \ :emphasis:`dest`\ file or data set. + The state of the *dest* file or data set. - \ :literal:`absent`\ when the file does not exist. + ``absent`` when the file does not exist. - \ :literal:`archive`\ when the file is an archive. + ``archive`` when the file is an archive. - \ :literal:`compress`\ when the file is compressed, but not an archive. + ``compress`` when the file is compressed, but not an archive. - \ :literal:`incomplete`\ when the file is an archive, but some files under \ :emphasis:`src`\ were not found. 
+ ``incomplete`` when the file is an archive, but some files under *src* were not found. | **returned**: success | **type**: str @@ -477,7 +477,7 @@ archived | **type**: list arcroot - If \ :literal:`src`\ is a list of USS files, this returns the top most parent folder of the list of files, otherwise is empty. + If ``src`` is a list of USS files, this returns the top most parent folder of the list of files, otherwise is empty. | **returned**: always | **type**: str diff --git a/docs/source/modules/zos_backup_restore.rst b/docs/source/modules/zos_backup_restore.rst index 68ca12aa5..f1183a84c 100644 --- a/docs/source/modules/zos_backup_restore.rst +++ b/docs/source/modules/zos_backup_restore.rst @@ -47,38 +47,38 @@ data_sets include - When \ :emphasis:`operation=backup`\ , specifies a list of data sets or data set patterns to include in the backup. + When *operation=backup*, specifies a list of data sets or data set patterns to include in the backup. When *operation=backup* GDS relative names are supported. When *operation=restore*, specifies a list of data sets or data set patterns to include when restoring from a backup. - The single asterisk, \ :literal:`\*`\ , is used in place of exactly one qualifier. In addition, it can be used to indicate to DFSMSdss that only part of a qualifier has been specified. + The single asterisk, ``*``, is used in place of exactly one qualifier. In addition, it can be used to indicate to DFSMSdss that only part of a qualifier has been specified. - When used with other qualifiers, the double asterisk, \ :literal:`\*\*`\ , indicates either the nonexistence of leading, trailing, or middle qualifiers, or the fact that they play no role in the selection process. + When used with other qualifiers, the double asterisk, ``**``, indicates either the nonexistence of leading, trailing, or middle qualifiers, or the fact that they play no role in the selection process. Two asterisks are the maximum permissible in a qualifier. 
If there are two asterisks in a qualifier, they must be the first and last characters. - A question mark \ :literal:`?`\ or percent sign \ :literal:`%`\ matches a single character. + A question mark ``?`` or percent sign ``%`` matches a single character. | **required**: False | **type**: raw exclude - When \ :emphasis:`operation=backup`\ , specifies a list of data sets or data set patterns to exclude from the backup. + When *operation=backup*, specifies a list of data sets or data set patterns to exclude from the backup. When *operation=backup* GDS relative names are supported. When *operation=restore*, specifies a list of data sets or data set patterns to exclude when restoring from a backup. - The single asterisk, \ :literal:`\*`\ , is used in place of exactly one qualifier. In addition, it can be used to indicate that only part of a qualifier has been specified." + The single asterisk, ``*``, is used in place of exactly one qualifier. In addition, it can be used to indicate that only part of a qualifier has been specified. - When used with other qualifiers, the double asterisk, \ :literal:`\*\*`\ , indicates either the nonexistence of leading, trailing, or middle qualifiers, or the fact that they play no role in the selection process. + When used with other qualifiers, the double asterisk, ``**``, indicates either the nonexistence of leading, trailing, or middle qualifiers, or the fact that they play no role in the selection process. Two asterisks are the maximum permissible in a qualifier. If there are two asterisks in a qualifier, they must be the first and last characters. - A question mark \ :literal:`?`\ or percent sign \ :literal:`%`\ matches a single character. + A question mark ``?`` or percent sign ``%`` matches a single character. | **required**: False | **type**: raw @@ -88,22 +88,22 @@ data_sets volume This applies to both data set restores and volume restores. 
- When \ :emphasis:`operation=backup`\ and \ :emphasis:`data\_sets`\ are provided, specifies the volume that contains the data sets to backup. + When *operation=backup* and *data_sets* are provided, specifies the volume that contains the data sets to backup. - When \ :emphasis:`operation=restore`\ , specifies the volume the backup should be restored to. + When *operation=restore*, specifies the volume the backup should be restored to. - \ :emphasis:`volume`\ is required when restoring a full volume backup. + *volume* is required when restoring a full volume backup. | **required**: False | **type**: str full_volume - When \ :emphasis:`operation=backup`\ and \ :emphasis:`full\_volume=True`\ , specifies that the entire volume provided to \ :emphasis:`volume`\ should be backed up. + When *operation=backup* and *full_volume=True*, specifies that the entire volume provided to *volume* should be backed up. - When \ :emphasis:`operation=restore`\ and \ :emphasis:`full\_volume=True`\ , specifies that the volume should be restored (default is dataset). + When *operation=restore* and *full_volume=True*, specifies that the volume should be restored (default is dataset). - \ :emphasis:`volume`\ must be provided when \ :emphasis:`full\_volume=True`\ . + *volume* must be provided when *full_volume=True*. | **required**: False | **type**: bool @@ -113,18 +113,18 @@ full_volume temp_volume Specifies a particular volume on which the temporary data sets should be created during the backup and restore process. - When \ :emphasis:`operation=backup`\ and \ :emphasis:`backup\_name`\ is a data set, specifies the volume the backup should be placed in. + When *operation=backup* and *backup_name* is a data set, specifies the volume the backup should be placed in. | **required**: False | **type**: str backup_name - When \ :emphasis:`operation=backup`\ , the destination data set or UNIX file to hold the backup. 
+ When *operation=backup*, the destination data set or UNIX file to hold the backup. - When \ :emphasis:`operation=restore`\ , the destination data set or UNIX file backup to restore. + When *operation=restore*, the destination data set or UNIX file backup to restore. - There are no enforced conventions for backup names. However, using a common extension like \ :literal:`.dzp`\ for UNIX files and \ :literal:`.DZP`\ for data sets will improve readability. + There are no enforced conventions for backup names. However, using a common extension like ``.dzp`` for UNIX files and ``.DZP`` for data sets will improve readability. GDS relative names are supported when *operation=restore*. @@ -141,9 +141,9 @@ recover overwrite - When \ :emphasis:`operation=backup`\ , specifies if an existing data set or UNIX file matching \ :emphasis:`backup\_name`\ should be deleted. + When *operation=backup*, specifies if an existing data set or UNIX file matching *backup_name* should be deleted. - When \ :emphasis:`operation=restore`\ , specifies if the module should overwrite existing data sets with matching name on the target device. + When *operation=restore*, specifies if the module should overwrite existing data sets with matching name on the target device. | **required**: False | **type**: bool @@ -151,35 +151,35 @@ overwrite sms_storage_class - When \ :emphasis:`operation=restore`\ , specifies the storage class to use. The storage class will also be used for temporary data sets created during restore process. + When *operation=restore*, specifies the storage class to use. The storage class will also be used for temporary data sets created during restore process. - When \ :emphasis:`operation=backup`\ , specifies the storage class to use for temporary data sets created during backup process. + When *operation=backup*, specifies the storage class to use for temporary data sets created during backup process. 
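Putting the wildcard and backup options described above together, a backup task might look like this (a sketch only; the data set patterns and file names are hypothetical):

```yaml
# Hypothetical sketch: back up USER.APP.* data sets to a UNIX dump file,
# excluding one data set, using the single-asterisk qualifier rule above.
- name: Back up data sets matching USER.APP.* to a dump file
  zos_backup_restore:
    operation: backup
    data_sets:
      include: USER.APP.*
      exclude: USER.APP.LOADLIB
    backup_name: /tmp/user_app_backup.dzp
    overwrite: true
    recover: true
```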
- If neither of \ :emphasis:`sms\_storage\_class`\ or \ :emphasis:`sms\_management\_class`\ are specified, the z/OS system's Automatic Class Selection (ACS) routines will be used. + If neither of *sms_storage_class* or *sms_management_class* are specified, the z/OS system's Automatic Class Selection (ACS) routines will be used. | **required**: False | **type**: str sms_management_class - When \ :emphasis:`operation=restore`\ , specifies the management class to use. The management class will also be used for temporary data sets created during restore process. + When *operation=restore*, specifies the management class to use. The management class will also be used for temporary data sets created during restore process. - When \ :emphasis:`operation=backup`\ , specifies the management class to use for temporary data sets created during backup process. + When *operation=backup*, specifies the management class to use for temporary data sets created during backup process. - If neither of \ :emphasis:`sms\_storage\_class`\ or \ :emphasis:`sms\_management\_class`\ are specified, the z/OS system's Automatic Class Selection (ACS) routines will be used. + If neither of *sms_storage_class* or *sms_management_class* are specified, the z/OS system's Automatic Class Selection (ACS) routines will be used. | **required**: False | **type**: str space - If \ :emphasis:`operation=backup`\ , specifies the amount of space to allocate for the backup. Please note that even when backing up to a UNIX file, backup contents will be temporarily held in a data set. + If *operation=backup*, specifies the amount of space to allocate for the backup. Please note that even when backing up to a UNIX file, backup contents will be temporarily held in a data set. - If \ :emphasis:`operation=restore`\ , specifies the amount of space to allocate for data sets temporarily created during the restore process. 
+ If *operation=restore*, specifies the amount of space to allocate for data sets temporarily created during the restore process. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. - When \ :emphasis:`full\_volume=True`\ , \ :emphasis:`space`\ defaults to \ :literal:`1`\ , otherwise default is \ :literal:`25`\ + When *full_volume=True*, *space* defaults to ``1``, otherwise default is ``25`` | **required**: False | **type**: int @@ -188,9 +188,9 @@ space space_type The unit of measurement to use when defining data set space. - Valid units of size are \ :literal:`k`\ , \ :literal:`m`\ , \ :literal:`g`\ , \ :literal:`cyl`\ , and \ :literal:`trk`\ . + Valid units of size are ``k``, ``m``, ``g``, ``cyl``, and ``trk``. - When \ :emphasis:`full\_volume=True`\ , \ :emphasis:`space\_type`\ defaults to \ :literal:`g`\ , otherwise default is \ :literal:`m`\ + When *full_volume=True*, *space_type* defaults to ``g``, otherwise default is ``m`` | **required**: False | **type**: str @@ -209,7 +209,7 @@ hlq tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup data sets. - The default HLQ is the Ansible user that executes the module and if that is not available, then the value of \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user that executes the module and if that is not available, then the value of ``TMPHLQ`` is used. | **required**: False | **type**: str diff --git a/docs/source/modules/zos_blockinfile.rst b/docs/source/modules/zos_blockinfile.rst index 6c07f4e22..deacb25e3 100644 --- a/docs/source/modules/zos_blockinfile.rst +++ b/docs/source/modules/zos_blockinfile.rst @@ -33,16 +33,16 @@ src The USS file must be an absolute pathname. - Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1.)`` + Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1.)`` 
| **required**: True | **type**: str state - Whether the block should be inserted or replaced using \ :emphasis:`state=present`\ . + Whether the block should be inserted or replaced using *state=present*. - Whether the block should be removed using \ :emphasis:`state=absent`\ . + Whether the block should be removed using *state=absent*. | **required**: False | **type**: str @@ -53,9 +53,9 @@ state marker The marker line template. - \ :literal:`{mark}`\ will be replaced with the values \ :literal:`in marker\_begin`\ (default="BEGIN") and \ :literal:`marker\_end`\ (default="END"). + ``{mark}`` will be replaced with the values ``in marker_begin`` (default="BEGIN") and ``marker_end`` (default="END"). - Using a custom marker without the \ :literal:`{mark}`\ variable may result in the block being repeatedly inserted on subsequent playbook runs. + Using a custom marker without the ``{mark}`` variable may result in the block being repeatedly inserted on subsequent playbook runs. | **required**: False | **type**: str @@ -65,7 +65,7 @@ marker block The text to insert inside the marker lines. - Multi-line can be separated by '\\n'. + Multi-line can be separated by '\n'. Any double-quotation marks will be removed. @@ -76,11 +76,11 @@ block insertafter If specified, the block will be inserted after the last match of the specified regular expression. - A special value \ :literal:`EOF`\ for inserting a block at the end of the file is available. + A special value ``EOF`` for inserting a block at the end of the file is available. - If a specified regular expression has no matches, \ :literal:`EOF`\ will be used instead. + If a specified regular expression has no matches, ``EOF`` will be used instead. - Choices are EOF or '\*regex\*'. + Choices are EOF or '*regex*'. Default is EOF. @@ -91,18 +91,18 @@ insertafter insertbefore If specified, the block will be inserted before the last match of specified regular expression. 
- A special value \ :literal:`BOF`\ for inserting the block at the beginning of the file is available. + A special value ``BOF`` for inserting the block at the beginning of the file is available. If a specified regular expression has no matches, the block will be inserted at the end of the file. - Choices are BOF or '\*regex\*'. + Choices are BOF or '*regex*'. | **required**: False | **type**: str marker_begin - This will be inserted at \ :literal:`{mark}`\ in the opening ansible block marker. + This will be inserted at ``{mark}`` in the opening ansible block marker. | **required**: False | **type**: str @@ -110,7 +110,7 @@ marker_begin marker_end - This will be inserted at \ :literal:`{mark}`\ in the closing ansible block marker. + This will be inserted at ``{mark}`` in the closing ansible block marker. | **required**: False | **type**: str @@ -118,9 +118,9 @@ marker_end backup - Specifies whether a backup of destination should be created before editing the source \ :emphasis:`src`\ . + Specifies whether a backup of destination should be created before editing the source *src*. - When set to \ :literal:`true`\ , the module creates a backup file or data set. + When set to ``true``, the module creates a backup file or data set. The backup file name will be returned on either success or failure of module execution such that data can be retrieved. @@ -134,15 +134,15 @@ backup backup_name Specify the USS file name or data set name for the destination backup. - If the source \ :emphasis:`src`\ is a USS file or path, the backup\_name name must be a file or path name, and the USS file or path must be an absolute path name. + If the source *src* is a USS file or path, the backup_name name must be a file or path name, and the USS file or path must be an absolute path name. - If the source is an MVS data set, the backup\_name name must be an MVS data set name, and the dataset must not be preallocated. 
+ If the source is an MVS data set, the backup_name name must be an MVS data set name, and the dataset must not be preallocated. - If the backup\_name is not provided, the default backup\_name name will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp, e.g. \ :literal:`/path/file\_name.2020-04-23-08-32-29-bak.tar`\ . + If the backup_name is not provided, the default backup_name name will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp, e.g. ``/path/file_name.2020-04-23-08-32-29-bak.tar``. If the source is an MVS data set, it will be a data set with a random name generated by calling the ZOAU API. The MVS backup data set recovery can be done by renaming it. - If \ :emphasis:`src`\ is a data set member and backup\_name is not provided, the data set member will be backed up to the same partitioned data set with a randomly generated member name. + If *src* is a data set member and backup_name is not provided, the data set member will be backed up to the same partitioned data set with a randomly generated member name. | **required**: False | **type**: str @@ -151,14 +151,14 @@ backup_name tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str encoding - The character set of the source \ :emphasis:`src`\ . \ `zos\_blockinfile <./zos_blockinfile.html>`__\ requires it to be provided with correct encoding to read the content of a USS file or data set. If this parameter is not provided, this module assumes that USS file or data set is encoded in IBM-1047. 
+ The character set of the source *src*. `zos_blockinfile <./zos_blockinfile.html>`_ requires it to be provided with correct encoding to read the content of a USS file or data set. If this parameter is not provided, this module assumes that USS file or data set is encoded in IBM-1047. Supported character sets rely on the charset conversion utility (iconv) version; the most common character sets are supported. @@ -172,7 +172,7 @@ force This is helpful when a data set is being used in a long running process such as a started task and you are wanting to update or read. - The \ :literal:`force`\ option enables sharing of data sets through the disposition \ :emphasis:`DISP=SHR`\ . + The ``force`` option enables sharing of data sets through the disposition *DISP=SHR*. | **required**: False | **type**: bool @@ -295,7 +295,7 @@ Examples zos_blockinfile: src: SOME.CREATION.TEST insertbefore: BOF - backup: True + backup: true backup_name: CREATION.GDS(+1) block: "{{ CONTENT }}" @@ -308,13 +308,13 @@ Notes .. note:: It is the playbook author or user's responsibility to avoid files that should not be encoded, such as binary files. A user is described as the remote user, configured either for the playbook or playbook tasks, who can also obtain escalated privileges to execute as root or another user. - All data sets are always assumed to be cataloged. If an uncataloged data set needs to be encoded, it should be cataloged first. The \ `zos\_data\_set <./zos_data_set.html>`__\ module can be used to catalog uncataloged data sets. + All data sets are always assumed to be cataloged. If an uncataloged data set needs to be encoded, it should be cataloged first. The `zos_data_set <./zos_data_set.html>`_ module can be used to catalog uncataloged data sets. - For supported character sets used to encode data, refer to the \ `documentation `__\ . + For supported character sets used to encode data, refer to the `documentation `_. 
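The ``marker`` template described above, where ``{mark}`` is replaced with ``marker_begin`` and ``marker_end``, can be illustrated with a task sketch (the file path and block contents are hypothetical):

```yaml
# Hypothetical sketch: insert a block delimited by custom markers.
# {mark} expands to "OPEN" and "CLOSE", so the file gains
# "# OPEN IBM MANAGED BLOCK" and "# CLOSE IBM MANAGED BLOCK" lines.
- name: Insert an environment block with custom markers
  zos_blockinfile:
    src: /etc/profile
    marker: "# {mark} IBM MANAGED BLOCK"
    marker_begin: OPEN
    marker_end: CLOSE
    block: |
      export ZOAU_HOME=/usr/lpp/zoau
      export PATH=$ZOAU_HOME/bin:$PATH
```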
- When using \`\`with\_\*\`\` loops be aware that if you do not set a unique mark the block will be overwritten on each iteration. + When using ``with_*`` loops, be aware that if you do not set a unique mark the block will be overwritten on each iteration. - When more then one block should be handled in a file you must change the \ :emphasis:`marker`\ per task. + When more than one block should be handled in a file, you must change the *marker* per task. @@ -333,7 +333,7 @@ Return Values changed - Indicates if the source was modified. Value of 1 represents \`true\`, otherwise \`false\`. + Indicates if the source was modified. Value of 1 represents `true`, otherwise `false`. | **returned**: success | **type**: bool diff --git a/docs/source/modules/zos_copy.rst b/docs/source/modules/zos_copy.rst index f424548f7..b6d164a84 100644 --- a/docs/source/modules/zos_copy.rst +++ b/docs/source/modules/zos_copy.rst @@ -16,7 +16,7 @@ zos_copy -- Copy data to z/OS Synopsis -------- -- The \ `zos\_copy <./zos_copy.html>`__\ module copies a file or data set from a local or a remote machine to a location on the remote machine. +- The `zos_copy <./zos_copy.html>`_ module copies a file or data set from a local or a remote machine to a location on the remote machine. @@ -27,17 +27,17 @@ Parameters asa_text - If set to \ :literal:`true`\ , indicates that either \ :literal:`src`\ or \ :literal:`dest`\ or both contain ASA control characters. + If set to ``true``, indicates that either ``src`` or ``dest`` or both contain ASA control characters. - When \ :literal:`src`\ is a USS file and \ :literal:`dest`\ is a data set, the copy will preserve ASA control characters in the destination. + When ``src`` is a USS file and ``dest`` is a data set, the copy will preserve ASA control characters in the destination. - When \ :literal:`src`\ is a data set containing ASA control characters and \ :literal:`dest`\ is a USS file, the copy will put all control characters as plain text in the destination. 
+ When ``src`` is a data set containing ASA control characters and ``dest`` is a USS file, the copy will put all control characters as plain text in the destination. - If \ :literal:`dest`\ is a non-existent data set, it will be created with record format Fixed Block with ANSI format (FBA). + If ``dest`` is a non-existent data set, it will be created with record format Fixed Block with ANSI format (FBA). - If neither \ :literal:`src`\ or \ :literal:`dest`\ have record format Fixed Block with ANSI format (FBA) or Variable Block with ANSI format (VBA), the module will fail. + If neither ``src`` or ``dest`` have record format Fixed Block with ANSI format (FBA) or Variable Block with ANSI format (VBA), the module will fail. - This option is only valid for text files. If \ :literal:`is\_binary`\ is \ :literal:`true`\ or \ :literal:`executable`\ is \ :literal:`true`\ as well, the module will fail. + This option is only valid for text files. If ``is_binary`` is ``true`` or ``executable`` is ``true`` as well, the module will fail. | **required**: False | **type**: bool @@ -47,7 +47,7 @@ asa_text backup Specifies whether a backup of the destination should be created before copying data. - When set to \ :literal:`true`\ , the module creates a backup file or data set. + When set to ``true``, the module creates a backup file or data set. The backup file name will be returned on either success or failure of module execution such that data can be retrieved. @@ -59,15 +59,13 @@ backup backup_name Specify a unique USS file name or data set name for the destination backup. - If the destination \ :literal:`dest`\ is a USS file or path, the \ :literal:`backup\_name`\ must be an absolute path name. + If the destination ``dest`` is a USS file or path, the ``backup_name`` must be an absolute path name. 
- If the destination is an MVS data set name, the \ :literal:`backup\_name`\ provided must meet data set naming conventions of one or more qualifiers, each from one to eight characters long, that are delimited by periods. + If the destination is an MVS data set name, the ``backup_name`` provided must meet data set naming conventions of one or more qualifiers, each from one to eight characters long, that are delimited by periods. - If the \ :literal:`backup\_name`\ is not provided, the default \ :literal:`backup\_name`\ will be used. If the \ :literal:`dest`\ is a USS file or USS path, the name of the backup file will be the destination file or path name appended with a timestamp, e.g. \ :literal:`/path/file\_name.2020-04-23-08-32-29-bak.tar`\ . If the \ :literal:`dest`\ is an MVS data set, it will be a data set with a randomly generated name. + If the ``backup_name`` is not provided, the default ``backup_name`` will be used. If the ``dest`` is a USS file or USS path, the name of the backup file will be the destination file or path name appended with a timestamp, e.g. ``/path/file_name.2020-04-23-08-32-29-bak.tar``. If the ``dest`` is an MVS data set, it will be a data set with a randomly generated name. - If \ :literal:`dest`\ is a data set member and \ :literal:`backup\_name`\ is not provided, the data set member will be backed up to the same partitioned data set with a randomly generated member name. + If ``dest`` is a data set member and ``backup_name`` is not provided, the data set member will be backed up to the same partitioned data set with a randomly generated member name. If *backup_name* is a generation data set (GDS), it must be a relative positive name (for example, ``HLQ.USER.GDG(+1)``). 
@@ -76,11 +74,11 @@ backup_name content - When used instead of \ :literal:`src`\ , sets the contents of a file or data set directly to the specified value. + When used instead of ``src``, sets the contents of a file or data set directly to the specified value. - Works only when \ :literal:`dest`\ is a USS file, sequential data set, or a partitioned data set member. + Works only when ``dest`` is a USS file, sequential data set, or a partitioned data set member. - If \ :literal:`dest`\ is a directory, then content will be copied to \ :literal:`/path/to/dest/inline\_copy`\ . + If ``dest`` is a directory, then content will be copied to ``/path/to/dest/inline_copy``. | **required**: False | **type**: str @@ -89,27 +87,27 @@ content dest The remote absolute path or data set where the content should be copied to. - \ :literal:`dest`\ can be a USS file, directory or MVS data set name. + ``dest`` can be a USS file, directory or MVS data set name. - If \ :literal:`dest`\ has missing parent directories, they will be created. + If ``dest`` has missing parent directories, they will be created. - If \ :literal:`dest`\ is a nonexistent USS file, it will be created. + If ``dest`` is a nonexistent USS file, it will be created. - If \ :literal:`dest`\ is a new USS file or replacement, the file will be appropriately tagged with either the system's default locale or the encoding option defined. If the USS file is a replacement, the user must have write authority to the file either through ownership, group or other permissions, else the module will fail. + If ``dest`` is a new USS file or replacement, the file will be appropriately tagged with either the system's default locale or the encoding option defined. If the USS file is a replacement, the user must have write authority to the file either through ownership, group or other permissions, else the module will fail. 
- If \ :literal:`dest`\ is a nonexistent data set, it will be created following the process outlined here and in the \ :literal:`volume`\ option. + If ``dest`` is a nonexistent data set, it will be created following the process outlined here and in the ``volume`` option. - If \ :literal:`dest`\ is a nonexistent data set, the attributes assigned will depend on the type of \ :literal:`src`\ . If \ :literal:`src`\ is a USS file, \ :literal:`dest`\ will have a Fixed Block (FB) record format and the remaining attributes will be computed. If \ :emphasis:`is\_binary=true`\ , \ :literal:`dest`\ will have a Fixed Block (FB) record format with a record length of 80, block size of 32760, and the remaining attributes will be computed. If \ :emphasis:`executable=true`\ ,\ :literal:`dest`\ will have an Undefined (U) record format with a record length of 0, block size of 32760, and the remaining attributes will be computed. + If ``dest`` is a nonexistent data set, the attributes assigned will depend on the type of ``src``. If ``src`` is a USS file, ``dest`` will have a Fixed Block (FB) record format and the remaining attributes will be computed. If *is_binary=true*, ``dest`` will have a Fixed Block (FB) record format with a record length of 80, block size of 32760, and the remaining attributes will be computed. If *executable=true*, ``dest`` will have an Undefined (U) record format with a record length of 0, block size of 32760, and the remaining attributes will be computed. If ``src`` is a file and ``dest`` is a partitioned data set, ``dest`` does not need to include a member in its value; the module can automatically compute the resulting member name from ``src``. When ``dest`` is a data set, precedence rules apply. If ``dest_data_set`` is set, this will take precedence over an existing data set. If ``dest`` is an empty data set, the empty data set will be written with the expectation its attributes satisfy the copy.
Lastly, if no precedent rule has been exercised, ``dest`` will be created with the same attributes as ``src``. - When the \ :literal:`dest`\ is an existing VSAM (KSDS) or VSAM (ESDS), then source can be an ESDS, a KSDS or an RRDS. The VSAM (KSDS) or VSAM (ESDS) \ :literal:`dest`\ will be deleted and recreated following the process outlined in the \ :literal:`volume`\ option. + When the ``dest`` is an existing VSAM (KSDS) or VSAM (ESDS), then source can be an ESDS, a KSDS or an RRDS. The VSAM (KSDS) or VSAM (ESDS) ``dest`` will be deleted and recreated following the process outlined in the ``volume`` option. - When the \ :literal:`dest`\ is an existing VSAM (RRDS), then the source must be an RRDS. The VSAM (RRDS) will be deleted and recreated following the process outlined in the \ :literal:`volume`\ option. + When the ``dest`` is an existing VSAM (RRDS), then the source must be an RRDS. The VSAM (RRDS) will be deleted and recreated following the process outlined in the ``volume`` option. - When \ :literal:`dest`\ is and existing VSAM (LDS), then source must be an LDS. The VSAM (LDS) will be deleted and recreated following the process outlined in the \ :literal:`volume`\ option. + When ``dest`` is an existing VSAM (LDS), then source must be an LDS. The VSAM (LDS) will be deleted and recreated following the process outlined in the ``volume`` option. ``dest`` can be a previously allocated generation data set (GDS) or a new GDS. @@ -124,9 +122,9 @@ dest encoding Specifies which encodings the destination file or data set should be converted from and to. - If \ :literal:`encoding`\ is not provided, the module determines which local and remote charsets to convert the data from and to. Note that this is only done for text data and not binary data. + If ``encoding`` is not provided, the module determines which local and remote charsets to convert the data from and to. Note that this is only done for text data and not binary data.
- Only valid if \ :literal:`is\_binary`\ is false. + Only valid if ``is_binary`` is false. | **required**: False | **type**: dict @@ -150,22 +148,22 @@ encoding tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str force - If set to \ :literal:`true`\ and the remote file or data set \ :literal:`dest`\ is empty, the \ :literal:`dest`\ will be reused. + If set to ``true`` and the remote file or data set ``dest`` is empty, the ``dest`` will be reused. - If set to \ :literal:`true`\ and the remote file or data set \ :literal:`dest`\ is NOT empty, the \ :literal:`dest`\ will be deleted and recreated with the \ :literal:`src`\ data set attributes, otherwise it will be recreated with the \ :literal:`dest`\ data set attributes. + If set to ``true`` and the remote file or data set ``dest`` is NOT empty, the ``dest`` will be deleted and recreated with the ``src`` data set attributes, otherwise it will be recreated with the ``dest`` data set attributes. - To backup data before any deletion, see parameters \ :literal:`backup`\ and \ :literal:`backup\_name`\ . + To backup data before any deletion, see parameters ``backup`` and ``backup_name``. - If set to \ :literal:`false`\ , the file or data set will only be copied if the destination does not exist. + If set to ``false``, the file or data set will only be copied if the destination does not exist. - If set to \ :literal:`false`\ and destination exists, the module exits with a note to the user. + If set to ``false`` and destination exists, the module exits with a note to the user. 
| **required**: False | **type**: bool @@ -173,11 +171,11 @@ force force_lock - By default, when \ :literal:`dest`\ is a MVS data set and is being used by another process with DISP=SHR or DISP=OLD the module will fail. Use \ :literal:`force\_lock`\ to bypass this check and continue with copy. + By default, when ``dest`` is an MVS data set and is being used by another process with DISP=SHR or DISP=OLD, the module will fail. Use ``force_lock`` to bypass this check and continue with the copy. - If set to \ :literal:`true`\ and destination is a MVS data set opened by another process then zos\_copy will try to copy using DISP=SHR. + If set to ``true`` and the destination is an MVS data set opened by another process, then zos_copy will try to copy using DISP=SHR. - Using \ :literal:`force\_lock`\ uses operations that are subject to race conditions and can lead to data loss, use with caution. + Using ``force_lock`` uses operations that are subject to race conditions and can lead to data loss; use with caution. If a data set member has aliases, and is not a program object, copying that member to a dataset that is in use will result in the aliases not being preserved in the target dataset. When this scenario occurs the module will fail. @@ -187,21 +185,21 @@ force_lock ignore_sftp_stderr - During data transfer through SFTP, the module fails if the SFTP command directs any content to stderr. The user is able to override this behavior by setting this parameter to \ :literal:`true`\ . By doing so, the module would essentially ignore the stderr stream produced by SFTP and continue execution. + During data transfer through SFTP, the SFTP command directs content to stderr. By default, the module essentially ignores the stderr stream produced by SFTP and continues execution. The user is able to override this behavior by setting this parameter to ``false``. By doing so, any content written to stderr is considered an error by Ansible and will cause the module to fail.
- When Ansible verbosity is set to greater than 3, either through the command line interface (CLI) using \ :strong:`-vvvv`\ or through environment variables such as \ :strong:`verbosity = 4`\ , then this parameter will automatically be set to \ :literal:`true`\ . + When Ansible verbosity is set to greater than 3, either through the command line interface (CLI) using **-vvvv** or through environment variables such as **verbosity = 4**, then this parameter will automatically be set to ``true``. | **required**: False | **type**: bool - | **default**: False + | **default**: True is_binary - If set to \ :literal:`true`\ , indicates that the file or data set to be copied is a binary file or data set. + If set to ``true``, indicates that the file or data set to be copied is a binary file or data set. - When \ :emphasis:`is\_binary=true`\ , no encoding conversion is applied to the content, all content transferred retains the original state. + When *is_binary=true*, no encoding conversion is applied to the content; all content transferred retains the original state. - Use \ :emphasis:`is\_binary=true`\ when copying a Database Request Module (DBRM) to retain the original state of the serialized SQL statements of a program. + Use *is_binary=true* when copying a Database Request Module (DBRM) to retain the original state of the serialized SQL statements of a program. | **required**: False | **type**: bool @@ -209,15 +207,15 @@ is_binary executable - If set to \ :literal:`true`\ , indicates that the file or library to be copied is an executable. + If set to ``true``, indicates that the file or library to be copied is an executable. - If the \ :literal:`src`\ executable has an alias, the alias information is also copied. If the \ :literal:`dest`\ is Unix, the alias is not visible in Unix, even though the information is there and will be visible if copied to a library. + If the ``src`` executable has an alias, the alias information is also copied.
If the ``dest`` is Unix, the alias is not visible in Unix, even though the information is there and will be visible if copied to a library. - If \ :emphasis:`executable=true`\ , and \ :literal:`dest`\ is a data set, it must be a PDS or PDSE (library). + If *executable=true*, and ``dest`` is a data set, it must be a PDS or PDSE (library). - If \ :literal:`dest`\ is a nonexistent data set, the library attributes assigned will be Undefined (U) record format with a record length of 0, block size of 32760 and the remaining attributes will be computed. + If ``dest`` is a nonexistent data set, the library attributes assigned will be Undefined (U) record format with a record length of 0, block size of 32760 and the remaining attributes will be computed. - If \ :literal:`dest`\ is a file, execute permission for the user will be added to the file (\`\`u+x\`\`). + If ``dest`` is a file, execute permission for the user will be added to the file (``u+x``). | **required**: False | **type**: bool @@ -225,9 +223,9 @@ executable aliases - If set to \ :literal:`true`\ , indicates that any aliases found in the source (USS file, USS dir, PDS/E library or member) are to be preserved during the copy operation. + If set to ``true``, indicates that any aliases found in the source (USS file, USS dir, PDS/E library or member) are to be preserved during the copy operation. - Aliases are implicitly preserved when libraries are copied over to USS destinations. That is, when \ :literal:`executable=True`\ and \ :literal:`dest`\ is a USS file or directory, this option will be ignored. + Aliases are implicitly preserved when libraries are copied over to USS destinations. That is, when ``executable=True`` and ``dest`` is a USS file or directory, this option will be ignored. Copying of aliases for text-based data sets from USS sources or to USS destinations is not currently supported. 
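The interaction of ``executable`` and ``aliases`` described above can be sketched as a playbook task; this is an illustrative example only, and the data set names (``IBMUSER.SRC.LOADLIB``, ``IBMUSER.DST.LOADLIB``) and member name are hypothetical:

```yaml
# Copy a program object between PDSE libraries on the managed node,
# preserving any aliases of the member (hypothetical data set names).
- name: Copy an executable member and keep its aliases
  ibm.ibm_zos_core.zos_copy:
    src: IBMUSER.SRC.LOADLIB(PGM1)
    dest: IBMUSER.DST.LOADLIB(PGM1)
    remote_src: true
    executable: true
    aliases: true
```

Note that if ``dest`` were a USS path instead, ``aliases`` would be ignored because alias information is implicitly preserved when copying libraries to USS.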
@@ -249,7 +247,7 @@ group When left unspecified, it uses the current group of the current user unless you are root, in which case it can preserve the previous ownership. - This option is only applicable if \ :literal:`dest`\ is USS, otherwise ignored. + This option is only applicable if ``dest`` is USS, otherwise ignored. | **required**: False | **type**: str @@ -258,13 +256,13 @@ group mode The permission of the destination file or directory. - If \ :literal:`dest`\ is USS, this will act as Unix file mode, otherwise ignored. + If ``dest`` is USS, this will act as Unix file mode, otherwise ignored. - It should be noted that modes are octal numbers. The user must either add a leading zero so that Ansible's YAML parser knows it is an octal number (like \ :literal:`0644`\ or \ :literal:`01777`\ )or quote it (like \ :literal:`'644'`\ or \ :literal:`'1777'`\ ) so Ansible receives a string and can do its own conversion from string into number. Giving Ansible a number without following one of these rules will end up with a decimal number which will have unexpected results. + It should be noted that modes are octal numbers. The user must either add a leading zero so that Ansible's YAML parser knows it is an octal number (like ``0644`` or ``01777``) or quote it (like ``'644'`` or ``'1777'``) so Ansible receives a string and can do its own conversion from string into number. Giving Ansible a number without following one of these rules will end up with a decimal number which will have unexpected results. - The mode may also be specified as a symbolic mode (for example, \`\`u+rwx\`\` or \`\`u=rw,g=r,o=r\`\`) or a special string \`preserve\`. + The mode may also be specified as a symbolic mode (for example, ``u+rwx`` or ``u=rw,g=r,o=r``) or the special string ``preserve``. - \ :emphasis:`mode=preserve`\ means that the file will be given the same permissions as the source file. + *mode=preserve* means that the file will be given the same permissions as the source file.
| **required**: False | **type**: str @@ -275,16 +273,16 @@ owner When left unspecified, it uses the current user unless you are root, in which case it can preserve the previous ownership. - This option is only applicable if \ :literal:`dest`\ is USS, otherwise ignored. + This option is only applicable if ``dest`` is USS, otherwise ignored. | **required**: False | **type**: str remote_src - If set to \ :literal:`false`\ , the module searches for \ :literal:`src`\ at the local machine. + If set to ``false``, the module searches for ``src`` at the local machine. - If set to \ :literal:`true`\ , the module goes to the remote/target machine for \ :literal:`src`\ . + If set to ``true``, the module goes to the remote/target machine for ``src``. | **required**: False | **type**: bool @@ -294,23 +292,19 @@ remote_src src Path to a file/directory or name of a data set to copy to remote z/OS system. - If \ :literal:`remote\_src`\ is true, then \ :literal:`src`\ must be the path to a Unix System Services (USS) file, name of a data set, or data set member. - - If \ :literal:`src`\ is a local path or a USS path, it can be absolute or relative. + If ``remote_src`` is true, then ``src`` must be the path to a Unix System Services (USS) file, name of a data set, or data set member. - If \ :literal:`src`\ is a directory, \ :literal:`dest`\ must be a partitioned data set or a USS directory. + If ``src`` is a local path or a USS path, it can be absolute or relative. - If \ :literal:`src`\ is a file and \ :literal:`dest`\ ends with "/" or is a directory, the file is copied to the directory with the same filename as \ :literal:`src`\ . + If ``src`` is a directory, ``dest`` must be a partitioned data set or a USS directory. - If \ :literal:`src`\ is a directory and ends with "/", the contents of it will be copied into the root of \ :literal:`dest`\ . If it doesn't end with "/", the directory itself will be copied. 
+ If ``src`` is a file and ``dest`` ends with "/" or is a directory, the file is copied to the directory with the same filename as ``src``. - If \ :literal:`src`\ is a directory or a file, file names will be truncated and/or modified to ensure a valid name for a data set or member. + If ``src`` is a directory and ends with "/", the contents of it will be copied into the root of ``dest``. If it doesn't end with "/", the directory itself will be copied. - If \ :literal:`src`\ is a VSAM data set, \ :literal:`dest`\ must also be a VSAM. + If ``src`` is a directory or a file, file names will be truncated and/or modified to ensure a valid name for a data set or member. - If \ :literal:`src`\ is a generation data set (GDS), it must be a previously allocated one. - - If \ :literal:`src`\ is a generation data group (GDG), \ :literal:`dest`\ can be another GDG or a USS directory. + If ``src`` is a VSAM data set, ``dest`` must also be a VSAM. If ``src`` is a generation data set (GDS), it must be a previously allocated one. @@ -318,7 +312,7 @@ src Wildcards can be used to copy multiple PDS/PDSE members to another PDS/PDSE. - Required unless using \ :literal:`content`\ . + Required unless using ``content``. | **required**: False | **type**: str @@ -335,24 +329,22 @@ validate volume - If \ :literal:`dest`\ does not exist, specify which volume \ :literal:`dest`\ should be allocated to. + If ``dest`` does not exist, specify which volume ``dest`` should be allocated to. Only valid when the destination is an MVS data set. The volume must already be present on the device. - If no volume is specified, storage management rules will be used to determine the volume where \ :literal:`dest`\ will be allocated. + If no volume is specified, storage management rules will be used to determine the volume where ``dest`` will be allocated. 
- If the storage administrator has specified a system default unit name and you do not set a \ :literal:`volume`\ name for non-system-managed data sets, then the system uses the volumes associated with the default unit name. Check with your storage administrator to determine whether a default unit name has been specified. + If the storage administrator has specified a system default unit name and you do not set a ``volume`` name for non-system-managed data sets, then the system uses the volumes associated with the default unit name. Check with your storage administrator to determine whether a default unit name has been specified. | **required**: False | **type**: str dest_data_set - Data set attributes to customize a \ :literal:`dest`\ data set to be copied into. - - Some attributes only apply when \ :literal:`dest`\ is a generation data group (GDG). + Data set attributes to customize a ``dest`` data set to be copied into. Some attributes only apply when ``dest`` is a generation data group (GDG). @@ -369,18 +361,18 @@ dest_data_set space_primary - If the destination \ :emphasis:`dest`\ data set does not exist , this sets the primary space allocated for the data set. + If the destination *dest* data set does not exist, this sets the primary space allocated for the data set. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. | **required**: False | **type**: int space_secondary - If the destination \ :emphasis:`dest`\ data set does not exist , this sets the secondary space allocated for the data set. + If the destination *dest* data set does not exist, this sets the secondary space allocated for the data set. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*.
| **required**: False | **type**: int @@ -389,7 +381,7 @@ dest_data_set space_type If the destination data set does not exist, this sets the unit of measurement to use when defining primary and secondary space. - Valid units of size are \ :literal:`k`\ , \ :literal:`m`\ , \ :literal:`g`\ , \ :literal:`cyl`\ , and \ :literal:`trk`\ . + Valid units of size are ``k``, ``m``, ``g``, ``cyl``, and ``trk``. | **required**: False | **type**: str @@ -397,7 +389,7 @@ dest_data_set record_format - If the destination data set does not exist, this sets the format of the data set. (e.g \ :literal:`fb`\ ) + If the destination data set does not exist, this sets the format of the data set. (e.g. ``fb``) Choices are case-sensitive. @@ -434,9 +426,9 @@ dest_data_set key_offset The key offset to use when creating a KSDS data set. - \ :emphasis:`key\_offset`\ is required when \ :emphasis:`type=ksds`\ . + *key_offset* is required when *type=ksds*. - \ :emphasis:`key\_offset`\ should only be provided when \ :emphasis:`type=ksds`\ + *key_offset* should only be provided when *type=ksds*. | **required**: False | **type**: int @@ -445,9 +437,9 @@ dest_data_set key_length The key length to use when creating a KSDS data set. - \ :emphasis:`key\_length`\ is required when \ :emphasis:`type=ksds`\ . + *key_length* is required when *type=ksds*. - \ :emphasis:`key\_length`\ should only be provided when \ :emphasis:`type=ksds`\ + *key_length* should only be provided when *type=ksds*. | **required**: False | **type**: int @@ -556,13 +548,13 @@ dest_data_set use_template - Whether the module should treat \ :literal:`src`\ as a Jinja2 template and render it before continuing with the rest of the module. + Whether the module should treat ``src`` as a Jinja2 template and render it before continuing with the rest of the module. - Only valid when \ :literal:`src`\ is a local file or directory. + Only valid when ``src`` is a local file or directory.
- All variables defined in inventory files, vars files and the playbook will be passed to the template engine, as well as \ `Ansible special variables `__\ , such as \ :literal:`playbook\_dir`\ , \ :literal:`ansible\_version`\ , etc. + All variables defined in inventory files, vars files and the playbook will be passed to the template engine, as well as `Ansible special variables `_, such as ``playbook_dir``, ``ansible_version``, etc. - If variables defined in different scopes share the same name, Ansible will apply variable precedence to them. You can see the complete precedence order \ `in Ansible's documentation `__\ + If variables defined in different scopes share the same name, Ansible will apply variable precedence to them. You can see the complete precedence order `in Ansible's documentation `_ | **required**: False | **type**: bool @@ -572,9 +564,9 @@ use_template template_parameters Options to set the way Jinja2 will process templates. - Jinja2 already sets defaults for the markers it uses, you can find more information at its \ `official documentation `__\ . + Jinja2 already sets defaults for the markers it uses, you can find more information at its `official documentation `_. - These options are ignored unless \ :literal:`use\_template`\ is true. + These options are ignored unless ``use_template`` is true. | **required**: False | **type**: dict @@ -653,7 +645,7 @@ template_parameters trim_blocks Whether Jinja2 should remove the first newline after a block is removed. - Setting this option to \ :literal:`False`\ will result in newlines being added to the rendered template. This could create invalid code when working with JCL templates or empty records in destination data sets. + Setting this option to ``False`` will result in newlines being added to the rendered template. This could create invalid code when working with JCL templates or empty records in destination data sets. 
| **required**: False | **type**: bool @@ -673,8 +665,11 @@ template_parameters | **required**: False | **type**: str - | **default**: \\n - | **choices**: \\n, \\r, \\r\\n + | **default**: \n + | **choices**: \n, \r, \r\n auto_reload @@ -900,17 +895,17 @@ Notes .. note:: Destination data sets are assumed to be in catalog. When trying to copy to an uncataloged data set, the module assumes that the data set does not exist and will create it. - Destination will be backed up if either \ :literal:`backup`\ is \ :literal:`true`\ or \ :literal:`backup\_name`\ is provided. If \ :literal:`backup`\ is \ :literal:`false`\ but \ :literal:`backup\_name`\ is provided, task will fail. + Destination will be backed up if either ``backup`` is ``true`` or ``backup_name`` is provided. If ``backup`` is ``false`` but ``backup_name`` is provided, task will fail. When copying local files or directories, temporary storage will be used on the remote z/OS system. The size of the temporary storage will correspond to the size of the file or directory being copied. Temporary files will always be deleted, regardless of success or failure of the copy task. VSAM data sets can only be copied to other VSAM data sets. - For supported character sets used to encode data, refer to the \ `documentation `__\ . + For supported character sets used to encode data, refer to the `documentation `_. This module uses SFTP (Secure File Transfer Protocol) for the underlying transfer protocol; SCP (secure copy protocol) and Co:Z SFTP are not supported. In the case of Co:z SFTP, you can exempt the Ansible user id on z/OS from using Co:Z thus falling back to using standard SFTP. If the module detects SCP, it will temporarily use SFTP for transfers, if not available, the module will fail. - Beginning in version 1.8.x, zos\_copy will no longer attempt to correct a copy of a data type member into a PDSE that contains program objects.
You can control this behavior using module option \ :literal:`executable`\ that will signify an executable is being copied into a PDSE with other executables. Mixing data type members with program objects will result in a (FSUM8976,./zos\_copy.html) error. + Beginning in version 1.8.x, zos_copy will no longer attempt to correct a copy of a data type member into a PDSE that contains program objects. You can control this behavior using module option ``executable`` that will signify an executable is being copied into a PDSE with other executables. Mixing data type members with program objects will result in a (FSUM8976,./zos_copy.html) error. It is the playbook author or user's responsibility to ensure they have appropriate authority to the RACF FACILITY resource class. A user is described as the remote user, configured either for the playbook or playbook tasks, who can also obtain escalated privileges to execute as root or another user. @@ -1021,7 +1016,7 @@ destination_attributes checksum - SHA256 checksum of the file after running zos\_copy. + SHA256 checksum of the file after running zos_copy. | **returned**: When ``validate=true`` and if ``dest`` is USS | **type**: str diff --git a/docs/source/modules/zos_data_set.rst b/docs/source/modules/zos_data_set.rst index 3b1b64870..7a56cfe84 100644 --- a/docs/source/modules/zos_data_set.rst +++ b/docs/source/modules/zos_data_set.rst @@ -28,11 +28,11 @@ Parameters name - The name of the data set being managed. (e.g \ :literal:`USER.TEST`\ ) + The name of the data set being managed. (e.g. ``USER.TEST``) - If \ :emphasis:`name`\ is not provided, a randomized data set name will be generated with the HLQ matching the module-runners username. + If *name* is not provided, a randomized data set name will be generated with the HLQ matching the module runner's username. - Required if \ :emphasis:`type=member`\ or \ :emphasis:`state!=present`\ and not using \ :emphasis:`batch`\ .
+ Required if *type=member* or *state!=present* and not using *batch*. | **required**: False | **type**: str @@ -41,22 +41,22 @@ name state The final state desired for specified data set. - If \ :emphasis:`state=absent`\ and the data set does not exist on the managed node, no action taken, module completes successfully with \ :emphasis:`changed=False`\ . + If *state=absent* and the data set does not exist on the managed node, no action taken, module completes successfully with *changed=False*. - If \ :emphasis:`state=absent`\ and the data set does exist on the managed node, remove the data set, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=absent* and the data set does exist on the managed node, remove the data set, module completes successfully with *changed=True*. - If \ :emphasis:`state=absent`\ and \ :emphasis:`type=member`\ and \ :emphasis:`force=True`\ , the data set will be opened with \ :emphasis:`DISP=SHR`\ such that the entire data set can be accessed by other processes while the specified member is deleted. + If *state=absent* and *type=member* and *force=True*, the data set will be opened with *DISP=SHR* such that the entire data set can be accessed by other processes while the specified member is deleted. - If \ :emphasis:`state=absent`\ and \ :emphasis:`volumes`\ is provided, and the data set is not found in the catalog, the module attempts to perform catalog using supplied \ :emphasis:`name`\ and \ :emphasis:`volumes`\ . If the attempt to catalog the data set catalog is successful, then the data set is removed. Module completes successfully with \ :emphasis:`changed=True`\ . + If *state=absent* and *volumes* is provided, and the data set is not found in the catalog, the module attempts to perform catalog using supplied *name* and *volumes*. If the attempt to catalog the data set catalog is successful, then the data set is removed. Module completes successfully with *changed=True*. 
- If \ :emphasis:`state=absent`\ and \ :emphasis:`volumes`\ is provided, and the data set is not found in the catalog, the module attempts to perform catalog using supplied \ :emphasis:`name`\ and \ :emphasis:`volumes`\ . If the attempt to catalog the data set catalog fails, then no action is taken. Module completes successfully with \ :emphasis:`changed=False`\ . + If *state=absent* and *volumes* is provided, and the data set is not found in the catalog, the module attempts to perform catalog using supplied *name* and *volumes*. If the attempt to catalog the data set catalog fails, then no action is taken. Module completes successfully with *changed=False*. - If \ :emphasis:`state=absent`\ and \ :emphasis:`volumes`\ is provided, and the data set is found in the catalog, the module compares the catalog volume attributes to the provided \ :emphasis:`volumes`\ . If the volume attributes are different, the cataloged data set will be uncataloged temporarily while the requested data set be deleted is cataloged. The module will catalog the original data set on completion, if the attempts to catalog fail, no action is taken. Module completes successfully with \ :emphasis:`changed=False`\ . + If *state=absent* and *volumes* is provided, and the data set is found in the catalog, the module compares the catalog volume attributes to the provided *volumes*. If the volume attributes are different, the cataloged data set will be uncataloged temporarily while the requested data set be deleted is cataloged. The module will catalog the original data set on completion, if the attempts to catalog fail, no action is taken. Module completes successfully with *changed=False*. If *state=absent* and *type=gdg* and the GDG base has active generations the module will complete successfully with *changed=False*. To remove it option *force* needs to be used. If the GDG base does not have active generations the module will complete successfully with *changed=True*. 
@@ -65,31 +65,28 @@ state If *state=present* and the data set does not exist on the managed node, create and catalog the data set, module completes successfully with *changed=True*. - If \ :emphasis:`state=present`\ and the data set does not exist on the managed node, create and catalog the data set, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=present* and *replace=True* and the data set is present on the managed node the existing data set is deleted, and a new data set is created and cataloged with the desired attributes, module completes successfully with *changed=True*. - If \ :emphasis:`state=present`\ and \ :emphasis:`replace=True`\ and the data set is present on the managed node the existing data set is deleted, and a new data set is created and cataloged with the desired attributes, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=present* and *replace=False* and the data set is present on the managed node, no action taken, module completes successfully with *changed=False*. - If \ :emphasis:`state=present`\ and \ :emphasis:`replace=False`\ and the data set is present on the managed node, no action taken, module completes successfully with \ :emphasis:`changed=False`\ . + If *state=present* and *type=member* and the member does not exist in the data set, create a member formatted to store data, module completes successfully with *changed=True*. Note, a PDSE does not allow a mixture of formats such that there is executables (program objects) and data. The member created is formatted to store data, not an executable. - If \ :emphasis:`state=present`\ and \ :emphasis:`type=member`\ and the member does not exist in the data set, create a member formatted to store data, module completes successfully with \ :emphasis:`changed=True`\ . Note, a PDSE does not allow a mixture of formats such that there is executables (program objects) and data. 
The member created is formatted to store data, not an executable. + If *state=cataloged* and *volumes* is provided and the data set is already cataloged, no action taken, module completes successfully with *changed=False*. - If \ :emphasis:`state=cataloged`\ and \ :emphasis:`volumes`\ is provided and the data set is already cataloged, no action taken, module completes successfully with \ :emphasis:`changed=False`\ . + If *state=cataloged* and *volumes* is provided and the data set is not cataloged, module attempts to perform catalog using supplied *name* and *volumes*. If the attempt to catalog the data set is successful, module completes successfully with *changed=True*. - If \ :emphasis:`state=cataloged`\ and \ :emphasis:`volumes`\ is provided and the data set is not cataloged, module attempts to perform catalog using supplied \ :emphasis:`name`\ and \ :emphasis:`volumes`\ . If the attempt to catalog the data set catalog is successful, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=cataloged* and *volumes* is provided and the data set is not cataloged, module attempts to perform catalog using supplied *name* and *volumes*. If the attempt to catalog the data set fails, returns failure with *changed=False*. - If \ :emphasis:`state=cataloged`\ and \ :emphasis:`volumes`\ is provided and the data set is not cataloged, module attempts to perform catalog using supplied \ :emphasis:`name`\ and \ :emphasis:`volumes`\ . If the attempt to catalog the data set catalog fails, returns failure with \ :emphasis:`changed=False`\ . + If *state=uncataloged* and the data set is not found, no action taken, module completes successfully with *changed=False*. - If \ :emphasis:`state=uncataloged`\ and the data set is not found, no action taken, module completes successfully with \ :emphasis:`changed=False`\ . 
- - - If \ :emphasis:`state=uncataloged`\ and the data set is found, the data set is uncataloged, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=uncataloged* and the data set is found, the data set is uncataloged, module completes successfully with *changed=True*. | **required**: False @@ -99,9 +96,9 @@ state type - The data set type to be used when creating a data set. (e.g \ :literal:`pdse`\ ). + The data set type to be used when creating a data set. (e.g ``pdse``). - \ :literal:`member`\ expects to be used with an existing partitioned data set. + ``member`` expects to be used with an existing partitioned data set. Choices are case-sensitive. @@ -114,7 +111,7 @@ type space_primary The amount of primary space to allocate for the dataset. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. | **required**: False | **type**: int @@ -124,7 +121,7 @@ space_primary space_secondary The amount of secondary space to allocate for the dataset. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. | **required**: False | **type**: int @@ -134,7 +131,7 @@ space_secondary space_type The unit of measurement to use when defining primary and secondary space. - Valid units of size are \ :literal:`k`\ , \ :literal:`m`\ , \ :literal:`g`\ , \ :literal:`cyl`\ , and \ :literal:`trk`\ . + Valid units of size are ``k``, ``m``, ``g``, ``cyl``, and ``trk``. | **required**: False | **type**: str @@ -143,11 +140,11 @@ space_type record_format - The format of the data set. (e.g \ :literal:`FB`\ ) + The format of the data set. (e.g ``FB``) Choices are case-sensitive. - When \ :emphasis:`type=ksds`\ , \ :emphasis:`type=esds`\ , \ :emphasis:`type=rrds`\ , \ :emphasis:`type=lds`\ or \ :emphasis:`type=zfs`\ then \ :emphasis:`record\_format=None`\ , these types do not have a default \ :emphasis:`record\_format`\ . 
+ When *type=ksds*, *type=esds*, *type=rrds*, *type=lds* or *type=zfs* then *record_format=None*, these types do not have a default *record_format*. | **required**: False | **type**: str @@ -222,9 +219,9 @@ directory_blocks key_offset The key offset to use when creating a KSDS data set. - \ :emphasis:`key\_offset`\ is required when \ :emphasis:`type=ksds`\ . + *key_offset* is required when *type=ksds*. - \ :emphasis:`key\_offset`\ should only be provided when \ :emphasis:`type=ksds`\ + *key_offset* should only be provided when *type=ksds* | **required**: False | **type**: int @@ -233,9 +230,9 @@ key_offset key_length The key length to use when creating a KSDS data set. - \ :emphasis:`key\_length`\ is required when \ :emphasis:`type=ksds`\ . + *key_length* is required when *type=ksds*. - \ :emphasis:`key\_length`\ should only be provided when \ :emphasis:`type=ksds`\ + *key_length* should only be provided when *type=ksds* | **required**: False | **type**: int @@ -310,19 +307,19 @@ scratch volumes - If cataloging a data set, \ :emphasis:`volumes`\ specifies the name of the volume(s) where the data set is located. + If cataloging a data set, *volumes* specifies the name of the volume(s) where the data set is located. - If creating a data set, \ :emphasis:`volumes`\ specifies the volume(s) where the data set should be created. + If creating a data set, *volumes* specifies the volume(s) where the data set should be created. - If \ :emphasis:`volumes`\ is provided when \ :emphasis:`state=present`\ , and the data set is not found in the catalog, \ `zos\_data\_set <./zos_data_set.html>`__\ will check the volume table of contents to see if the data set exists. If the data set does exist, it will be cataloged. + If *volumes* is provided when *state=present*, and the data set is not found in the catalog, `zos_data_set <./zos_data_set.html>`_ will check the volume table of contents to see if the data set exists. If the data set does exist, it will be cataloged. 
- If \ :emphasis:`volumes`\ is provided when \ :emphasis:`state=absent`\ and the data set is not found in the catalog, \ `zos\_data\_set <./zos_data_set.html>`__\ will check the volume table of contents to see if the data set exists. If the data set does exist, it will be cataloged and promptly removed from the system. + If *volumes* is provided when *state=absent* and the data set is not found in the catalog, `zos_data_set <./zos_data_set.html>`_ will check the volume table of contents to see if the data set exists. If the data set does exist, it will be cataloged and promptly removed from the system. - \ :emphasis:`volumes`\ is required when \ :emphasis:`state=cataloged`\ . + *volumes* is required when *state=cataloged*. Accepts a string when using a single volume and a list of strings when using multiple. @@ -331,12 +328,12 @@ volumes replace - When \ :emphasis:`replace=True`\ , and \ :emphasis:`state=present`\ , existing data set matching \ :emphasis:`name`\ will be replaced. + When *replace=True*, and *state=present*, existing data set matching *name* will be replaced. Replacement is performed by deleting the existing data set and creating a new data set with the same name and desired attributes. Since the existing data set will be deleted prior to creating the new data set, no data set will exist if creation of the new data set fails. - If \ :emphasis:`replace=True`\ , all data in the original data set will be lost. + If *replace=True*, all data in the original data set will be lost. | **required**: False | **type**: bool @@ -346,7 +343,7 @@ replace tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. 
| **required**: False | **type**: str @@ -357,7 +354,7 @@ force This is helpful when a data set is being used in a long running process such as a started task and you are wanting to delete a member. - The \ :emphasis:`force=True`\ option enables sharing of data sets through the disposition \ :emphasis:`DISP=SHR`\ . + The *force=True* option enables sharing of data sets through the disposition *DISP=SHR*. The *force=True* only applies to data set members when *state=absent* and *type=member* and when removing a GDG base with active generations. @@ -377,11 +374,11 @@ batch name - The name of the data set being managed. (e.g \ :literal:`USER.TEST`\ ) + The name of the data set being managed. (e.g ``USER.TEST``) - If \ :emphasis:`name`\ is not provided, a randomized data set name will be generated with the HLQ matching the module-runners username. + If *name* is not provided, a randomized data set name will be generated with the HLQ matching the module runner's username. - Required if \ :emphasis:`type=member`\ or \ :emphasis:`state!=present`\ + Required if *type=member* or *state!=present* | **required**: False | **type**: str @@ -390,49 +387,49 @@ batch state The final state desired for specified data set. - If \ :emphasis:`state=absent`\ and the data set does not exist on the managed node, no action taken, module completes successfully with \ :emphasis:`changed=False`\ . + If *state=absent* and the data set does not exist on the managed node, no action taken, module completes successfully with *changed=False*. - If \ :emphasis:`state=absent`\ and the data set does exist on the managed node, remove the data set, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=absent* and the data set does exist on the managed node, remove the data set, module completes successfully with *changed=True*. 
- If \ :emphasis:`state=absent`\ and \ :emphasis:`type=member`\ and \ :emphasis:`force=True`\ , the data set will be opened with \ :emphasis:`DISP=SHR`\ such that the entire data set can be accessed by other processes while the specified member is deleted. + If *state=absent* and *type=member* and *force=True*, the data set will be opened with *DISP=SHR* such that the entire data set can be accessed by other processes while the specified member is deleted. - If \ :emphasis:`state=absent`\ and \ :emphasis:`volumes`\ is provided, and the data set is not found in the catalog, the module attempts to perform catalog using supplied \ :emphasis:`name`\ and \ :emphasis:`volumes`\ . If the attempt to catalog the data set catalog is successful, then the data set is removed. Module completes successfully with \ :emphasis:`changed=True`\ . + If *state=absent* and *volumes* is provided, and the data set is not found in the catalog, the module attempts to perform catalog using supplied *name* and *volumes*. If the attempt to catalog the data set is successful, then the data set is removed. Module completes successfully with *changed=True*. - If \ :emphasis:`state=absent`\ and \ :emphasis:`volumes`\ is provided, and the data set is not found in the catalog, the module attempts to perform catalog using supplied \ :emphasis:`name`\ and \ :emphasis:`volumes`\ . If the attempt to catalog the data set catalog fails, then no action is taken. Module completes successfully with \ :emphasis:`changed=False`\ . + If *state=absent* and *volumes* is provided, and the data set is not found in the catalog, the module attempts to perform catalog using supplied *name* and *volumes*. If the attempt to catalog the data set fails, then no action is taken. Module completes successfully with *changed=False*. 
- If \ :emphasis:`state=absent`\ and \ :emphasis:`volumes`\ is provided, and the data set is found in the catalog, the module compares the catalog volume attributes to the provided \ :emphasis:`volumes`\ . If they volume attributes are different, the cataloged data set will be uncataloged temporarily while the requested data set be deleted is cataloged. The module will catalog the original data set on completion, if the attempts to catalog fail, no action is taken. Module completes successfully with \ :emphasis:`changed=False`\ . + If *state=absent* and *volumes* is provided, and the data set is found in the catalog, the module compares the catalog volume attributes to the provided *volumes*. If the volume attributes are different, the cataloged data set will be uncataloged temporarily while the requested data set to be deleted is cataloged. The module will catalog the original data set on completion; if the attempts to catalog fail, no action is taken. Module completes successfully with *changed=False*. - If \ :emphasis:`state=present`\ and the data set does not exist on the managed node, create and catalog the data set, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=present* and the data set does not exist on the managed node, create and catalog the data set, module completes successfully with *changed=True*. - If \ :emphasis:`state=present`\ and \ :emphasis:`replace=True`\ and the data set is present on the managed node the existing data set is deleted, and a new data set is created and cataloged with the desired attributes, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=present* and *replace=True* and the data set is present on the managed node the existing data set is deleted, and a new data set is created and cataloged with the desired attributes, module completes successfully with *changed=True*. 
- If \ :emphasis:`state=present`\ and \ :emphasis:`replace=False`\ and the data set is present on the managed node, no action taken, module completes successfully with \ :emphasis:`changed=False`\ . + If *state=present* and *replace=False* and the data set is present on the managed node, no action taken, module completes successfully with *changed=False*. - If \ :emphasis:`state=present`\ and \ :emphasis:`type=member`\ and the member does not exist in the data set, create a member formatted to store data, module completes successfully with \ :emphasis:`changed=True`\ . Note, a PDSE does not allow a mixture of formats such that there is executables (program objects) and data. The member created is formatted to store data, not an executable. + If *state=present* and *type=member* and the member does not exist in the data set, create a member formatted to store data, module completes successfully with *changed=True*. Note, a PDSE does not allow a mixture of formats such that there are executables (program objects) and data. The member created is formatted to store data, not an executable. - If \ :emphasis:`state=cataloged`\ and \ :emphasis:`volumes`\ is provided and the data set is already cataloged, no action taken, module completes successfully with \ :emphasis:`changed=False`\ . + If *state=cataloged* and *volumes* is provided and the data set is already cataloged, no action taken, module completes successfully with *changed=False*. - If \ :emphasis:`state=cataloged`\ and \ :emphasis:`volumes`\ is provided and the data set is not cataloged, module attempts to perform catalog using supplied \ :emphasis:`name`\ and \ :emphasis:`volumes`\ . If the attempt to catalog the data set catalog is successful, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=cataloged* and *volumes* is provided and the data set is not cataloged, module attempts to perform catalog using supplied *name* and *volumes*. 
If the attempt to catalog the data set is successful, module completes successfully with *changed=True*. - If \ :emphasis:`state=cataloged`\ and \ :emphasis:`volumes`\ is provided and the data set is not cataloged, module attempts to perform catalog using supplied \ :emphasis:`name`\ and \ :emphasis:`volumes`\ . If the attempt to catalog the data set catalog fails, returns failure with \ :emphasis:`changed=False`\ . + If *state=cataloged* and *volumes* is provided and the data set is not cataloged, module attempts to perform catalog using supplied *name* and *volumes*. If the attempt to catalog the data set fails, returns failure with *changed=False*. - If \ :emphasis:`state=uncataloged`\ and the data set is not found, no action taken, module completes successfully with \ :emphasis:`changed=False`\ . + If *state=uncataloged* and the data set is not found, no action taken, module completes successfully with *changed=False*. - If \ :emphasis:`state=uncataloged`\ and the data set is found, the data set is uncataloged, module completes successfully with \ :emphasis:`changed=True`\ . + If *state=uncataloged* and the data set is found, the data set is uncataloged, module completes successfully with *changed=True*. | **required**: False @@ -442,9 +439,9 @@ batch type - The data set type to be used when creating a data set. (e.g \ :literal:`pdse`\ ) + The data set type to be used when creating a data set. (e.g ``pdse``) - \ :literal:`member`\ expects to be used with an existing partitioned data set. + ``member`` expects to be used with an existing partitioned data set. Choices are case-sensitive. @@ -457,7 +454,7 @@ batch space_primary The amount of primary space to allocate for the dataset. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. | **required**: False | **type**: int @@ -467,7 +464,7 @@ batch space_secondary The amount of secondary space to allocate for the dataset. 
- The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. | **required**: False | **type**: int @@ -477,7 +474,7 @@ batch space_type The unit of measurement to use when defining primary and secondary space. - Valid units of size are \ :literal:`k`\ , \ :literal:`m`\ , \ :literal:`g`\ , \ :literal:`cyl`\ , and \ :literal:`trk`\ . + Valid units of size are ``k``, ``m``, ``g``, ``cyl``, and ``trk``. | **required**: False | **type**: str @@ -486,11 +483,11 @@ batch record_format - The format of the data set. (e.g \ :literal:`FB`\ ) + The format of the data set. (e.g ``FB``) Choices are case-sensitive. - When \ :emphasis:`type=ksds`\ , \ :emphasis:`type=esds`\ , \ :emphasis:`type=rrds`\ , \ :emphasis:`type=lds`\ or \ :emphasis:`type=zfs`\ then \ :emphasis:`record\_format=None`\ , these types do not have a default \ :emphasis:`record\_format`\ . + When *type=ksds*, *type=esds*, *type=rrds*, *type=lds* or *type=zfs* then *record_format=None*, these types do not have a default *record_format*. | **required**: False | **type**: str @@ -565,9 +562,9 @@ batch key_offset The key offset to use when creating a KSDS data set. - \ :emphasis:`key\_offset`\ is required when \ :emphasis:`type=ksds`\ . + *key_offset* is required when *type=ksds*. - \ :emphasis:`key\_offset`\ should only be provided when \ :emphasis:`type=ksds`\ + *key_offset* should only be provided when *type=ksds* | **required**: False | **type**: int @@ -576,9 +573,9 @@ batch key_length The key length to use when creating a KSDS data set. - \ :emphasis:`key\_length`\ is required when \ :emphasis:`type=ksds`\ . + *key_length* is required when *type=ksds*. 
- \ :emphasis:`key\_length`\ should only be provided when \ :emphasis:`type=ksds`\ + *key_length* should only be provided when *type=ksds* | **required**: False | **type**: int @@ -653,19 +650,19 @@ batch volumes - If cataloging a data set, \ :emphasis:`volumes`\ specifies the name of the volume(s) where the data set is located. + If cataloging a data set, *volumes* specifies the name of the volume(s) where the data set is located. - If creating a data set, \ :emphasis:`volumes`\ specifies the volume(s) where the data set should be created. + If creating a data set, *volumes* specifies the volume(s) where the data set should be created. - If \ :emphasis:`volumes`\ is provided when \ :emphasis:`state=present`\ , and the data set is not found in the catalog, \ `zos\_data\_set <./zos_data_set.html>`__\ will check the volume table of contents to see if the data set exists. If the data set does exist, it will be cataloged. + If *volumes* is provided when *state=present*, and the data set is not found in the catalog, `zos_data_set <./zos_data_set.html>`_ will check the volume table of contents to see if the data set exists. If the data set does exist, it will be cataloged. - If \ :emphasis:`volumes`\ is provided when \ :emphasis:`state=absent`\ and the data set is not found in the catalog, \ `zos\_data\_set <./zos_data_set.html>`__\ will check the volume table of contents to see if the data set exists. If the data set does exist, it will be cataloged and promptly removed from the system. + If *volumes* is provided when *state=absent* and the data set is not found in the catalog, `zos_data_set <./zos_data_set.html>`_ will check the volume table of contents to see if the data set exists. If the data set does exist, it will be cataloged and promptly removed from the system. - \ :emphasis:`volumes`\ is required when \ :emphasis:`state=cataloged`\ . + *volumes* is required when *state=cataloged*. 
Accepts a string when using a single volume and a list of strings when using multiple. @@ -674,12 +671,12 @@ batch replace - When \ :emphasis:`replace=True`\ , and \ :emphasis:`state=present`\ , existing data set matching \ :emphasis:`name`\ will be replaced. + When *replace=True*, and *state=present*, existing data set matching *name* will be replaced. Replacement is performed by deleting the existing data set and creating a new data set with the same name and desired attributes. Since the existing data set will be deleted prior to creating the new data set, no data set will exist if creation of the new data set fails. - If \ :emphasis:`replace=True`\ , all data in the original data set will be lost. + If *replace=True*, all data in the original data set will be lost. | **required**: False | **type**: bool @@ -691,9 +688,9 @@ batch This is helpful when a data set is being used in a long running process such as a started task and you are wanting to delete a member. - The \ :emphasis:`force=True`\ option enables sharing of data sets through the disposition \ :emphasis:`DISP=SHR`\ . + The *force=True* option enables sharing of data sets through the disposition *DISP=SHR*. - The \ :emphasis:`force=True`\ only applies to data set members when \ :emphasis:`state=absent`\ and \ :emphasis:`type=member`\ . + The *force=True* only applies to data set members when *state=absent* and *type=member*. | **required**: False | **type**: bool diff --git a/docs/source/modules/zos_encode.rst b/docs/source/modules/zos_encode.rst index 2c5bd4e1d..860a150bf 100644 --- a/docs/source/modules/zos_encode.rst +++ b/docs/source/modules/zos_encode.rst @@ -37,7 +37,7 @@ encoding from - The character set of the source \ :emphasis:`src`\ . + The character set of the source *src*. | **required**: False | **type**: str @@ -45,7 +45,7 @@ encoding to - The destination \ :emphasis:`dest`\ character set for the output to be written as. 
+ The destination *dest* character set for the output to be written as. | **required**: False | **type**: str @@ -58,9 +58,7 @@ src The USS path or file must be an absolute pathname. - If \ :emphasis:`src`\ is a USS directory, all files will be encoded. - - Encoding a whole generation data group (GDG) is not supported. + If *src* is a USS directory, all files will be encoded. Encoding a whole generation data group (GDG) is not supported. @@ -73,9 +71,9 @@ dest The destination *dest* can be a UNIX System Services (USS) file or path, PS (sequential data set), PDS, PDSE, member of a PDS or PDSE, a generation data set (GDS) or KSDS (VSAM data set). - If the length of the PDSE member name used in \ :emphasis:`dest`\ is greater than 8 characters, the member name will be truncated when written out. + If the length of the PDSE member name used in *dest* is greater than 8 characters, the member name will be truncated when written out. - If \ :emphasis:`dest`\ is not specified, the \ :emphasis:`src`\ will be used as the destination and will overwrite the \ :emphasis:`src`\ with the character set in the option \ :emphasis:`to\_encoding`\ . + If *dest* is not specified, the *src* will be used as the destination and will overwrite the *src* with the character set in the option *to_encoding*. The USS file or path must be an absolute pathname. @@ -86,9 +84,9 @@ dest backup - Creates a backup file or backup data set for \ :emphasis:`dest`\ , including the timestamp information to ensure that you retrieve the original file. + Creates a backup file or backup data set for *dest*, including the timestamp information to ensure that you retrieve the original file. - \ :emphasis:`backup\_name`\ can be used to specify a backup file name if \ :emphasis:`backup=true`\ . + *backup_name* can be used to specify a backup file name if *backup=true*. | **required**: False | **type**: bool @@ -98,15 +96,13 @@ backup backup_name Specify the USS file name or data set name for the dest backup. 
- If dest is a USS file or path, \ :emphasis:`backup\_name`\ must be a file or path name, and the USS path or file must be an absolute pathname. + If dest is a USS file or path, *backup_name* must be a file or path name, and the USS path or file must be an absolute pathname. - If dest is an MVS data set, the \ :emphasis:`backup\_name`\ must be an MVS data set name. + If dest is an MVS data set, the *backup_name* must be an MVS data set name. - If \ :emphasis:`backup\_name`\ is not provided, the default backup name will be used. The default backup name for a USS file or path will be the destination file or path name appended with a timestamp, e.g. /path/file\_name.2020-04-23-08-32-29-bak.tar. If dest is an MVS data set, the default backup name will be a random name generated by IBM Z Open Automation Utilities. + If *backup_name* is not provided, the default backup name will be used. The default backup name for a USS file or path will be the destination file or path name appended with a timestamp, e.g. /path/file_name.2020-04-23-08-32-29-bak.tar. If dest is an MVS data set, the default backup name will be a random name generated by IBM Z Open Automation Utilities. - \ :literal:`backup\_name`\ will be returned on either success or failure of module execution such that data can be retrieved. - - If \ :emphasis:`backup\_name`\ is a generation data set (GDS), it must be a relative positive name (for example, \ :literal:`HLQ.USER.GDG(+1)`\ ). + ``backup_name`` will be returned on either success or failure of module execution such that data can be retrieved. If *backup_name* is a generation data set (GDS), it must be a relative positive name (for example, ``HLQ.USER.GDG(+1)``). @@ -117,7 +113,7 @@ backup_name backup_compress Determines if backups to USS files or paths should be compressed. - \ :emphasis:`backup\_compress`\ is only used when \ :emphasis:`backup=true`\ . + *backup_compress* is only used when *backup=true*. 
| **required**: False | **type**: bool @@ -127,7 +123,7 @@ backup_compress tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str @@ -283,6 +279,7 @@ Examples + Notes ----- @@ -291,7 +288,7 @@ Notes All data sets are always assumed to be cataloged. If an uncataloged data set needs to be encoded, it should be cataloged first. - For supported character sets used to encode data, refer to the \ `documentation `__\ . + For supported character sets used to encode data, refer to the `documentation `_. @@ -304,7 +301,7 @@ Return Values src - The location of the input characters identified in option \ :emphasis:`src`\ . + The location of the input characters identified in option *src*. | **returned**: always | **type**: str diff --git a/docs/source/modules/zos_fetch.rst b/docs/source/modules/zos_fetch.rst index 800eee88f..e3f0df325 100644 --- a/docs/source/modules/zos_fetch.rst +++ b/docs/source/modules/zos_fetch.rst @@ -98,7 +98,7 @@ encoding from - The character set of the source \ :emphasis:`src`\ . + The character set of the source *src*. Supported character sets rely on the charset conversion utility (iconv) version; the most common character sets are supported. @@ -107,7 +107,7 @@ encoding to - The destination \ :emphasis:`dest`\ character set for the output to be written as. + The destination *dest* character set for the output to be written as. Supported character sets rely on the charset conversion utility (iconv) version; the most common character sets are supported. @@ -119,20 +119,20 @@ encoding tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. 
- The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str ignore_sftp_stderr - During data transfer through sftp, the module fails if the sftp command directs any content to stderr. The user is able to override this behavior by setting this parameter to \ :literal:`true`\ . By doing so, the module would essentially ignore the stderr stream produced by sftp and continue execution. + During data transfer through SFTP, the SFTP command directs content to stderr. By default, the module essentially ignores the stderr stream produced by SFTP and continues execution. The user is able to override this behavior by setting this parameter to ``false``. By doing so, any content written to stderr is considered an error by Ansible and will cause the module to fail. - When Ansible verbosity is set to greater than 3, either through the command line interface (CLI) using \ :strong:`-vvvv`\ or through environment variables such as \ :strong:`verbosity = 4`\ , then this parameter will automatically be set to \ :literal:`true`\ . + When Ansible verbosity is set to greater than 3, either through the command line interface (CLI) using **-vvvv** or through environment variables such as **verbosity = 4**, then this parameter will automatically be set to ``true``. | **required**: False | **type**: bool - | **default**: False + | **default**: True @@ -216,13 +216,13 @@ Notes .. note:: When fetching PDSE and VSAM data sets, temporary storage will be used on the remote z/OS system. After the PDSE or VSAM data set is successfully transferred, the temporary storage will be deleted. The size of the temporary storage will correspond to the size of PDSE or VSAM data set being fetched. If module execution fails, the temporary storage will be deleted. 
- To ensure optimal performance, data integrity checks for PDS, PDSE, and members of PDS or PDSE are done through the transfer methods used. As a result, the module response will not include the \ :literal:`checksum`\ parameter. + To ensure optimal performance, data integrity checks for PDS, PDSE, and members of PDS or PDSE are done through the transfer methods used. As a result, the module response will not include the ``checksum`` parameter. All data sets are always assumed to be cataloged. If an uncataloged data set needs to be fetched, it should be cataloged first. Fetching HFS or ZFS type data sets is currently not supported. - For supported character sets used to encode data, refer to the \ `documentation `__\ . + For supported character sets used to encode data, refer to the `documentation `_. This module uses SFTP (Secure File Transfer Protocol) for the underlying transfer protocol; SCP (secure copy protocol) and Co:Z SFTP are not supported. In the case of Co:z SFTP, you can exempt the Ansible user id on z/OS from using Co:Z thus falling back to using standard SFTP. If the module detects SCP, it will temporarily use SFTP for transfers, if not available, the module will fail. @@ -283,7 +283,7 @@ data_set_type | **sample**: PDSE note - Notice of module failure when \ :literal:`fail\_on\_missing`\ is false. + Notice of module failure when ``fail_on_missing`` is false. | **returned**: failure and fail_on_missing=false | **type**: str diff --git a/docs/source/modules/zos_find.rst b/docs/source/modules/zos_find.rst index 027940ff5..5c23a28a7 100644 --- a/docs/source/modules/zos_find.rst +++ b/docs/source/modules/zos_find.rst @@ -18,7 +18,7 @@ Synopsis -------- - Return a list of data sets based on specific criteria. - Multiple criteria can be added (AND'd) together. -- The \ :literal:`zos\_find`\ module can only find MVS data sets. Use the \ `find `__\ module to find USS files. +- The ``zos_find`` module can only find MVS data sets. 
Use the `find `_ module to find USS files. @@ -44,9 +44,9 @@ age age_stamp Choose the age property against which to compare age. - \ :literal:`creation\_date`\ is the date the data set was created and \ :literal:`ref\_date`\ is the date the data set was last referenced. + ``creation_date`` is the date the data set was created and ``ref_date`` is the date the data set was last referenced. - \ :literal:`ref\_date`\ is only applicable to sequential and partitioned data sets. + ``ref_date`` is only applicable to sequential and partitioned data sets. | **required**: False | **type**: str @@ -80,7 +80,7 @@ patterns This parameter expects a list, which can be either comma separated or YAML. - If \ :literal:`pds\_patterns`\ is provided, \ :literal:`patterns`\ must be member patterns. + If ``pds_patterns`` is provided, ``patterns`` must be member patterns. When searching for members within a PDS/PDSE, pattern can be a regular expression. @@ -107,7 +107,7 @@ pds_patterns Required when searching for data set members. - Valid only for \ :literal:`nonvsam`\ resource types. Otherwise ignored. + Valid only for ``nonvsam`` resource types. Otherwise ignored. | **required**: False | **type**: list @@ -117,9 +117,9 @@ pds_patterns resource_type The type of resource to search. - \ :literal:`nonvsam`\ refers to one of SEQ, LIBRARY (PDSE), PDS, LARGE, BASIC, EXTREQ, or EXTPREF. + ``nonvsam`` refers to one of SEQ, LIBRARY (PDSE), PDS, LARGE, BASIC, EXTREQ, or EXTPREF. - \ :literal:`cluster`\ refers to a VSAM cluster. The \ :literal:`data`\ and \ :literal:`index`\ are the data and index components of a VSAM cluster. + ``cluster`` refers to a VSAM cluster. The ``data`` and ``index`` are the data and index components of a VSAM cluster. ``gdg`` refers to Generation Data Groups. The module searches based on the GDG base name. @@ -253,16 +253,15 @@ Examples - Notes ----- .. note:: - Only cataloged data sets will be searched. 
If an uncataloged data set needs to be searched, it should be cataloged first. The \ `zos\_data\_set <./zos_data_set.html>`__\ module can be used to catalog uncataloged data sets. + Only cataloged data sets will be searched. If an uncataloged data set needs to be searched, it should be cataloged first. The `zos_data_set <./zos_data_set.html>`_ module can be used to catalog uncataloged data sets. - The \ `zos\_find <./zos_find.html>`__\ module currently does not support wildcards for high level qualifiers. For example, \ :literal:`SOME.\*.DATA.SET`\ is a valid pattern, but \ :literal:`\*.DATA.SET`\ is not. + The `zos_find <./zos_find.html>`_ module currently does not support wildcards for high level qualifiers. For example, ``SOME.*.DATA.SET`` is a valid pattern, but ``*.DATA.SET`` is not. - If a data set pattern is specified as \ :literal:`USER.\*`\ , the matching data sets will have two name segments such as \ :literal:`USER.ABC`\ , \ :literal:`USER.XYZ`\ etc. If a wildcard is specified as \ :literal:`USER.\*.ABC`\ , the matching data sets will have three name segments such as \ :literal:`USER.XYZ.ABC`\ , \ :literal:`USER.TEST.ABC`\ etc. + If a data set pattern is specified as ``USER.*``, the matching data sets will have two name segments such as ``USER.ABC``, ``USER.XYZ`` etc. If a wildcard is specified as ``USER.*.ABC``, the matching data sets will have three name segments such as ``USER.XYZ.ABC``, ``USER.TEST.ABC`` etc. The time taken to execute the module is proportional to the number of data sets present on the system and how large the data sets are. diff --git a/docs/source/modules/zos_gather_facts.rst b/docs/source/modules/zos_gather_facts.rst index 02a56fd23..0247ffd96 100644 --- a/docs/source/modules/zos_gather_facts.rst +++ b/docs/source/modules/zos_gather_facts.rst @@ -17,8 +17,8 @@ zos_gather_facts -- Gather z/OS system facts. Synopsis -------- - Retrieve variables from target z/OS systems. 
-- Variables are added to the \ :emphasis:`ansible\_facts`\ dictionary, available to playbooks. -- Apply filters on the \ :emphasis:`gather\_subset`\ list to reduce the variables that are added to the \ :emphasis:`ansible\_facts`\ dictionary. +- Variables are added to the *ansible_facts* dictionary, available to playbooks. +- Apply filters on the *gather_subset* list to reduce the variables that are added to the *ansible_facts* dictionary. - Note, the module will fail fast if any unsupported options are provided. This is done to raise awareness of a failure in an automation setting. @@ -32,7 +32,7 @@ Parameters gather_subset If specified, it will collect facts that come under the specified subset (eg. ipl will return ipl facts). Specifying subsets is recommended to reduce time in gathering facts when the facts needed are in a specific subset. - The following subsets are available \ :literal:`ipl`\ , \ :literal:`cpu`\ , \ :literal:`sys`\ , and \ :literal:`iodf`\ . Depending on the version of ZOAU, additional subsets may be available. + The following subsets are available ``ipl``, ``cpu``, ``sys``, and ``iodf``. Depending on the version of ZOAU, additional subsets may be available. | **required**: False | **type**: list @@ -41,13 +41,13 @@ gather_subset filter - Filter out facts from the \ :emphasis:`ansible\_facts`\ dictionary. + Filter out facts from the *ansible_facts* dictionary. - Uses shell-style \ `fnmatch `__\ pattern matching to filter out the collected facts. + Uses shell-style `fnmatch `_ pattern matching to filter out the collected facts. - An empty list means 'no filter', same as providing '\*'. + An empty list means 'no filter', same as providing '*'. - Filtering is performed after the facts are gathered such that no compute is saved when filtering. Filtering only reduces the number of variables that are added to the \ :emphasis:`ansible\_facts`\ dictionary. To restrict the facts that are collected, refer to the \ :emphasis:`gather\_subset`\ parameter. 
+ Filtering is performed after the facts are gathered such that no compute is saved when filtering. Filtering only reduces the number of variables that are added to the *ansible_facts* dictionary. To restrict the facts that are collected, refer to the *gather_subset* parameter. | **required**: False | **type**: list diff --git a/docs/source/modules/zos_job_output.rst b/docs/source/modules/zos_job_output.rst index 59e37aeb9..efea6ea2a 100644 --- a/docs/source/modules/zos_job_output.rst +++ b/docs/source/modules/zos_job_output.rst @@ -18,9 +18,9 @@ Synopsis -------- - Display the z/OS job output for a given criteria (Job id/Job name/owner) with/without a data definition name as a filter. - At least provide a job id/job name/owner. -- The job id can be specific such as "STC02560", or one that uses a pattern such as "STC\*" or "\*". -- The job name can be specific such as "TCPIP", or one that uses a pattern such as "TCP\*" or "\*". -- The owner can be specific such as "IBMUSER", or one that uses a pattern like "\*". +- The job id can be specific such as "STC02560", or one that uses a pattern such as "STC*" or "*". +- The job name can be specific such as "TCPIP", or one that uses a pattern such as "TCP*" or "*". +- The owner can be specific such as "IBMUSER", or one that uses a pattern like "*". - If there is no ddname, or if ddname="?", output of all the ddnames under the given job will be displayed. @@ -32,21 +32,21 @@ Parameters job_id - The z/OS job ID of the job containing the spool file. (e.g "STC02560", "STC\*") + The z/OS job ID of the job containing the spool file. (e.g "STC02560", "STC*") | **required**: False | **type**: str job_name - The name of the batch job. (e.g "TCPIP", "C\*") + The name of the batch job. (e.g "TCPIP", "C*") | **required**: False | **type**: str owner - The owner who ran the job. (e.g "IBMUSER", "\*") + The owner who ran the job. 
(e.g "IBMUSER", "*") | **required**: False | **type**: str @@ -97,7 +97,7 @@ Return Values jobs - The output information for a list of jobs matching specified criteria. If no job status is found, this will return ret\_code dictionary with parameter msg\_txt = The job could not be found. + The output information for a list of jobs matching specified criteria. If no job status is found, this will return ret_code dictionary with parameter msg_txt = The job could not be found. | **returned**: success | **type**: list @@ -416,7 +416,7 @@ jobs | **sample**: CC 0000 msg_code - Return code extracted from the \`msg\` so that it can be evaluated. For example, ABEND(S0C4) would yield "S0C4". + Return code extracted from the `msg` so that it can be evaluated. For example, ABEND(S0C4) would yield "S0C4". | **type**: str | **sample**: S0C4 diff --git a/docs/source/modules/zos_job_query.rst b/docs/source/modules/zos_job_query.rst index e4da71341..ea320dfc3 100644 --- a/docs/source/modules/zos_job_query.rst +++ b/docs/source/modules/zos_job_query.rst @@ -17,8 +17,8 @@ zos_job_query -- Query job status Synopsis -------- - List z/OS job(s) and the current status of the job(s). -- Uses job\_name to filter the jobs by the job name. -- Uses job\_id to filter the jobs by the job identifier. +- Uses job_name to filter the jobs by the job name. +- Uses job_id to filter the jobs by the job identifier. - Uses owner to filter the jobs by the job owner. - Uses system to filter the jobs by system where the job is running (or ran) on. @@ -35,9 +35,9 @@ job_name A job name can be up to 8 characters long. - The \ :emphasis:`job\_name`\ can contain include multiple wildcards. + The *job_name* can contain multiple wildcards. - The asterisk (\`\*\`) wildcard will match zero or more specified characters. + The asterisk (`*`) wildcard will match zero or more specified characters. 
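The *job_name* wildcard matching described above can be sketched as a task. This is illustrative only; the job name pattern is hypothetical:

```yaml
# Sketch only: "TCP*" would match job names such as TCPIP or TCPDATA.
- name: Query all jobs whose name begins with TCP
  ibm.ibm_zos_core.zos_job_query:
    job_name: "TCP*"
  register: query_result
```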
| **required**: False | **type**: str @@ -56,13 +56,13 @@ owner job_id The job id that has been assigned to the job. - A job id must begin with \`STC\`, \`JOB\`, \`TSU\` and are followed by up to 5 digits. + A job id must begin with `STC`, `JOB`, `TSU` and is followed by up to 5 digits. - When a job id is greater than 99,999, the job id format will begin with \`S\`, \`J\`, \`T\` and are followed by 7 digits. + When a job id is greater than 99,999, the job id format will begin with `S`, `J`, `T` and is followed by 7 digits. - The \ :emphasis:`job\_id`\ can contain include multiple wildcards. + The *job_id* can contain multiple wildcards. - The asterisk (\`\*\`) wildcard will match zero or more specified characters. + The asterisk (`*`) wildcard will match zero or more specified characters. | **required**: False | **type**: str @@ -122,7 +122,7 @@ changed | **type**: bool jobs - The output information for a list of jobs matching specified criteria. If no job status is found, this will return ret\_code dictionary with parameter msg\_txt = The job could not be found. + The output information for a list of jobs matching specified criteria. If no job status is found, this will return ret_code dictionary with parameter msg_txt = The job could not be found. | **returned**: success | **type**: list @@ -211,7 +211,7 @@ jobs | **sample**: CC 0000 msg_code - Return code extracted from the \`msg\` so that it can be evaluated. For example, ABEND(S0C4) would yield "S0C4". + Return code extracted from the `msg` so that it can be evaluated. For example, ABEND(S0C4) would yield "S0C4". | **type**: str | **sample**: S0C4 diff --git a/docs/source/modules/zos_job_submit.rst b/docs/source/modules/zos_job_submit.rst index b848365e2..6808137a6 100644 --- a/docs/source/modules/zos_job_submit.rst +++ b/docs/source/modules/zos_job_submit.rst @@ -44,11 +44,11 @@ src location - The JCL location. 
Supported choices are \ :literal:`data\_set`\ , \ :literal:`uss`\ or \ :literal:`local`\ . + The JCL location. Supported choices are ``data_set``, ``uss`` or ``local``. ``data_set`` can be a PDS, PDSE, sequential data set, or a generation data set. - \ :literal:`uss`\ means the JCL location is located in UNIX System Services (USS). + ``uss`` means the JCL location is located in UNIX System Services (USS). ``local`` means locally to the Ansible control node. @@ -59,9 +59,9 @@ location wait_time_s - Option \ :emphasis:`wait\_time\_s`\ is the total time that module \ `zos\_job\_submit <./zos_job_submit.html>`__\ will wait for a submitted job to complete. The time begins when the module is executed on the managed node. + Option *wait_time_s* is the total time that module `zos_job_submit <./zos_job_submit.html>`_ will wait for a submitted job to complete. The time begins when the module is executed on the managed node. - \ :emphasis:`wait\_time\_s`\ is measured in seconds and must be a value greater than 0 and less than 86400. + *wait_time_s* is measured in seconds and must be a value greater than 0 and less than 86400. | **required**: False | **type**: int @@ -88,9 +88,9 @@ return_output volume The volume serial (VOLSER) is where the data set resides. The option is required only when the data set is not cataloged on the system. - When configured, the \ `zos\_job\_submit <./zos_job_submit.html>`__\ will try to catalog the data set for the volume serial. If it is not able to, the module will fail. + When configured, the `zos_job_submit <./zos_job_submit.html>`_ will try to catalog the data set for the volume serial. If it is not able to, the module will fail. - Ignored for \ :emphasis:`location=uss`\ and \ :emphasis:`location=local`\ . + Ignored for *location=uss* and *location=local*. | **required**: False | **type**: str @@ -99,7 +99,7 @@ volume encoding Specifies which encoding the local JCL file should be converted from and to, before submitting the job. 
- This option is only supported for when \ :emphasis:`location=local`\ . + This option is only supported when *location=local*. If this parameter is not provided, and the z/OS systems default encoding can not be identified, the JCL file will be converted from UTF-8 to IBM-1047 by default, otherwise the module will detect the z/OS system encoding. @@ -131,13 +131,13 @@ encoding use_template - Whether the module should treat \ :literal:`src`\ as a Jinja2 template and render it before continuing with the rest of the module. + Whether the module should treat ``src`` as a Jinja2 template and render it before continuing with the rest of the module. - Only valid when \ :literal:`src`\ is a local file or directory. + Only valid when ``src`` is a local file or directory. - All variables defined in inventory files, vars files and the playbook will be passed to the template engine, as well as \ `Ansible special variables `__\ , such as \ :literal:`playbook\_dir`\ , \ :literal:`ansible\_version`\ , etc. + All variables defined in inventory files, vars files and the playbook will be passed to the template engine, as well as `Ansible special variables `_, such as ``playbook_dir``, ``ansible_version``, etc. - If variables defined in different scopes share the same name, Ansible will apply variable precedence to them. You can see the complete precedence order \ `in Ansible's documentation `__\ + If variables defined in different scopes share the same name, Ansible will apply variable precedence to them. You can see the complete precedence order `in Ansible's documentation `_ | **required**: False | **type**: bool @@ -147,9 +147,9 @@ use_template template_parameters Options to set the way Jinja2 will process templates. - Jinja2 already sets defaults for the markers it uses, you can find more information at its \ `official documentation `__\ . + Jinja2 already sets defaults for the markers it uses, you can find more information at its `official documentation `_. 
- These options are ignored unless \ :literal:`use\_template`\ is true. + These options are ignored unless ``use_template`` is true. | **required**: False | **type**: dict @@ -228,7 +228,7 @@ template_parameters trim_blocks Whether Jinja2 should remove the first newline after a block is removed. - Setting this option to \ :literal:`False`\ will result in newlines being added to the rendered template. This could create invalid code when working with JCL templates or empty records in destination data sets. + Setting this option to ``False`` will result in newlines being added to the rendered template. This could create invalid code when working with JCL templates or empty records in destination data sets. | **required**: False | **type**: bool @@ -248,8 +248,11 @@ template_parameters | **required**: False | **type**: str - | **default**: \\n - | **choices**: \\n, \\r, \\r\\n + | **default**: \n + | **choices**: \n, \r, \r\n auto_reload @@ -330,9 +333,9 @@ Notes ----- .. note:: - For supported character sets used to encode data, refer to the \ `documentation `__\ . + For supported character sets used to encode data, refer to the `documentation `_. - This module uses \ `zos\_copy <./zos_copy.html>`__\ to copy local scripts to the remote machine which uses SFTP (Secure File Transfer Protocol) for the underlying transfer protocol; SCP (secure copy protocol) and Co:Z SFTP are not supported. + This module uses `zos_copy <./zos_copy.html>`_ to copy local scripts to the remote machine which uses SFTP (Secure File Transfer Protocol) for the underlying transfer protocol; SCP (secure copy protocol) and Co:Z SFTP are not supported. 
In the case of Co:z SFTP, you can exempt the Ansible user id on z/OS from using Co:Z thus falling back to using standard SFTP. If the module detects SCP, it will temporarily use SFTP for transfers, if not available, the module will fail. @@ -345,7 +348,7 @@ Return Values jobs - List of jobs output. If no job status is found, this will return an empty ret\_code with msg\_txt explanation. + List of jobs output. If no job status is found, this will return an empty ret_code with msg_txt explanation. | **returned**: success | **type**: list @@ -692,25 +695,25 @@ jobs msg Job status resulting from the job submission. - Job status \`ABEND\` indicates the job ended abnormally. + Job status `ABEND` indicates the job ended abnormally. - Job status \`AC\` indicates the job is active, often a started task or job taking long. + Job status `AC` indicates the job is active, often a started task or job taking long. - Job status \`CAB\` indicates a converter abend. + Job status `CAB` indicates a converter abend. - Job status \`CANCELED\` indicates the job was canceled. + Job status `CANCELED` indicates the job was canceled. - Job status \`CNV\` indicates a converter error. + Job status `CNV` indicates a converter error. - Job status \`FLU\` indicates the job was flushed. + Job status `FLU` indicates the job was flushed. - Job status \`JCLERR\` or \`JCL ERROR\` indicates the JCL has an error. + Job status `JCLERR` or `JCL ERROR` indicates the JCL has an error. - Job status \`SEC\` or \`SEC ERROR\` indicates the job as encountered a security error. + Job status `SEC` or `SEC ERROR` indicates the job has encountered a security error. - Job status \`SYS\` indicates a system failure. + Job status `SYS` indicates a system failure. - Job status \`?\` indicates status can not be determined. + Job status `?` indicates status can not be determined. Jobs where status can not be determined will result in None (NULL). 
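Since ``ret_code.msg`` carries statuses such as `ABEND` or `CC`, a playbook can branch on the result after submission. This is a sketch only, not part of the patch; ``USER.JCL(MYJOB)`` is a hypothetical PDS member:

```yaml
# Sketch only: USER.JCL(MYJOB) is a placeholder member name.
- name: Submit JCL from a cataloged data set
  ibm.ibm_zos_core.zos_job_submit:
    src: USER.JCL(MYJOB)
    location: data_set
  register: job_output

- name: Fail unless the job ended with CC 0000
  ansible.builtin.assert:
    that:
      - job_output.jobs[0].ret_code.msg == "CC 0000"
```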
diff --git a/docs/source/modules/zos_lineinfile.rst b/docs/source/modules/zos_lineinfile.rst index c1ed7284d..da0108bfb 100644 --- a/docs/source/modules/zos_lineinfile.rst +++ b/docs/source/modules/zos_lineinfile.rst @@ -33,7 +33,7 @@ src The USS file must be an absolute pathname. - Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1.)`` + Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1.)`` | **required**: True | **type**: str @@ -42,13 +42,13 @@ src regexp The regular expression to look for in every line of the USS file or data set. - For \ :literal:`state=present`\ , the pattern to replace if found. Only the last line found will be replaced. + For ``state=present``, the pattern to replace if found. Only the last line found will be replaced. - For \ :literal:`state=absent`\ , the pattern of the line(s) to remove. + For ``state=absent``, the pattern of the line(s) to remove. - If the regular expression is not matched, the line will be added to the USS file or data set in keeping with \ :literal:`insertbefore`\ or \ :literal:`insertafter`\ settings. + If the regular expression is not matched, the line will be added to the USS file or data set in keeping with ``insertbefore`` or ``insertafter`` settings. - When modifying a line the regexp should typically match both the initial state of the line as well as its state after replacement by \ :literal:`line`\ to ensure idempotence. + When modifying a line the regexp should typically match both the initial state of the line as well as its state after replacement by ``line`` to ensure idempotence. | **required**: False | **type**: str @@ -66,22 +66,22 @@ state line The line to insert/replace into the USS file or data set. - Required for \ :literal:`state=present`\ . + Required for ``state=present``. 
- If \ :literal:`backrefs`\ is set, may contain backreferences that will get expanded with the \ :literal:`regexp`\ capture groups if the regexp matches. + If ``backrefs`` is set, may contain backreferences that will get expanded with the ``regexp`` capture groups if the regexp matches. | **required**: False | **type**: str backrefs - Used with \ :literal:`state=present`\ . + Used with ``state=present``. - If set, \ :literal:`line`\ can contain backreferences (both positional and named) that will get populated if the \ :literal:`regexp`\ matches. + If set, ``line`` can contain backreferences (both positional and named) that will get populated if the ``regexp`` matches. - This parameter changes the operation of the module slightly; \ :literal:`insertbefore`\ and \ :literal:`insertafter`\ will be ignored, and if the \ :literal:`regexp`\ does not match anywhere in the USS file or data set, the USS file or data set will be left unchanged. + This parameter changes the operation of the module slightly; ``insertbefore`` and ``insertafter`` will be ignored, and if the ``regexp`` does not match anywhere in the USS file or data set, the USS file or data set will be left unchanged. - If the \ :literal:`regexp`\ does match, the last matching line will be replaced by the expanded line parameter. + If the ``regexp`` does match, the last matching line will be replaced by the expanded line parameter. | **required**: False | **type**: bool @@ -89,23 +89,23 @@ backrefs insertafter - Used with \ :literal:`state=present`\ . + Used with ``state=present``. If specified, the line will be inserted after the last match of specified regular expression. If the first match is required, use(firstmatch=yes). - A special value is available; \ :literal:`EOF`\ for inserting the line at the end of the USS file or data set. + A special value is available; ``EOF`` for inserting the line at the end of the USS file or data set. 
If the specified regular expression has no matches, EOF will be used instead. - If \ :literal:`insertbefore`\ is set, default value \ :literal:`EOF`\ will be ignored. + If ``insertbefore`` is set, default value ``EOF`` will be ignored. - If regular expressions are passed to both \ :literal:`regexp`\ and \ :literal:`insertafter`\ , \ :literal:`insertafter`\ is only honored if no match for \ :literal:`regexp`\ is found. + If regular expressions are passed to both ``regexp`` and ``insertafter``, ``insertafter`` is only honored if no match for ``regexp`` is found. - May not be used with \ :literal:`backrefs`\ or \ :literal:`insertbefore`\ . + May not be used with ``backrefs`` or ``insertbefore``. - Choices are EOF or '\*regex\*' + Choices are EOF or '*regex*' Default is EOF @@ -114,30 +114,30 @@ insertafter insertbefore - Used with \ :literal:`state=present`\ . + Used with ``state=present``. If specified, the line will be inserted before the last match of specified regular expression. - If the first match is required, use \ :literal:`firstmatch=yes`\ . + If the first match is required, use ``firstmatch=yes``. - A value is available; \ :literal:`BOF`\ for inserting the line at the beginning of the USS file or data set. + A value is available; ``BOF`` for inserting the line at the beginning of the USS file or data set. If the specified regular expression has no matches, the line will be inserted at the end of the USS file or data set. - If regular expressions are passed to both \ :literal:`regexp`\ and \ :literal:`insertbefore`\ , \ :literal:`insertbefore`\ is only honored if no match for \ :literal:`regexp`\ is found. + If regular expressions are passed to both ``regexp`` and ``insertbefore``, ``insertbefore`` is only honored if no match for ``regexp`` is found. - May not be used with \ :literal:`backrefs`\ or \ :literal:`insertafter`\ . + May not be used with ``backrefs`` or ``insertafter``. 
- Choices are BOF or '\*regex\*' + Choices are BOF or '*regex*' | **required**: False | **type**: str backup - Creates a backup file or backup data set for \ :emphasis:`src`\ , including the timestamp information to ensure that you retrieve the original file. + Creates a backup file or backup data set for *src*, including the timestamp information to ensure that you retrieve the original file. - \ :emphasis:`backup\_name`\ can be used to specify a backup file name if \ :emphasis:`backup=true`\ . + *backup_name* can be used to specify a backup file name if *backup=true*. The backup file name will be return on either success or failure of module execution such that data can be retrieved. @@ -151,11 +151,11 @@ backup backup_name Specify the USS file name or data set name for the destination backup. - If the source \ :emphasis:`src`\ is a USS file or path, the backup\_name must be a file or path name, and the USS file or path must be an absolute path name. + If the source *src* is a USS file or path, the backup_name must be a file or path name, and the USS file or path must be an absolute path name. - If the source is an MVS data set, the backup\_name must be an MVS data set name. + If the source is an MVS data set, the backup_name must be an MVS data set name. - If the backup\_name is not provided, the default backup\_name will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp, e.g. \ :literal:`/path/file\_name.2020-04-23-08-32-29-bak.tar`\ . + If the backup_name is not provided, the default backup_name will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp, e.g. ``/path/file_name.2020-04-23-08-32-29-bak.tar``. If the source is an MVS data set, it will be a data set with a random name generated by calling the ZOAU API. The MVS backup data set recovery can be done by renaming it. 
@@ -166,16 +166,16 @@ backup_name tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str firstmatch - Used with \ :literal:`insertafter`\ or \ :literal:`insertbefore`\ . + Used with ``insertafter`` or ``insertbefore``. - If set, \ :literal:`insertafter`\ and \ :literal:`insertbefore`\ will work with the first line that matches the given regular expression. + If set, ``insertafter`` and ``insertbefore`` will work with the first line that matches the given regular expression. | **required**: False | **type**: bool @@ -183,7 +183,7 @@ firstmatch encoding - The character set of the source \ :emphasis:`src`\ . \ `zos\_lineinfile <./zos_lineinfile.html>`__\ requires to be provided with correct encoding to read the content of USS file or data set. If this parameter is not provided, this module assumes that USS file or data set is encoded in IBM-1047. + The character set of the source *src*. `zos_lineinfile <./zos_lineinfile.html>`_ requires to be provided with correct encoding to read the content of USS file or data set. If this parameter is not provided, this module assumes that USS file or data set is encoded in IBM-1047. Supported character sets rely on the charset conversion utility (iconv) version; the most common character sets are supported. @@ -197,7 +197,7 @@ force This is helpful when a data set is being used in a long running process such as a started task and you are wanting to update or read. - The \ :literal:`force`\ option enables sharing of data sets through the disposition \ :emphasis:`DISP=SHR`\ . + The ``force`` option enables sharing of data sets through the disposition *DISP=SHR*. 
| **required**: False | **type**: bool @@ -262,7 +262,7 @@ Examples zos_lineinfile: src: SOME.CREATION.TEST insertafter: EOF - backup: True + backup: true backup_name: CREATION.GDS(+1) line: 'Should be a working test now' @@ -277,7 +277,7 @@ Notes All data sets are always assumed to be cataloged. If an uncataloged data set needs to be encoded, it should be cataloged first. - For supported character sets used to encode data, refer to the \ `documentation `__\ . + For supported character sets used to encode data, refer to the `documentation `_. @@ -290,7 +290,7 @@ Return Values changed - Indicates if the source was modified. Value of 1 represents \`true\`, otherwise \`false\`. + Indicates if the source was modified. Value of 1 represents `true`, otherwise `false`. | **returned**: success | **type**: bool diff --git a/docs/source/modules/zos_mount.rst b/docs/source/modules/zos_mount.rst index 5bd283453..3b30be909 100644 --- a/docs/source/modules/zos_mount.rst +++ b/docs/source/modules/zos_mount.rst @@ -16,9 +16,9 @@ zos_mount -- Mount a z/OS file system. Synopsis -------- -- The module \ `zos\_mount <./zos_mount.html>`__\ can manage mount operations for a z/OS UNIX System Services (USS) file system data set. -- The \ :emphasis:`src`\ data set must be unique and a Fully Qualified Name (FQN). -- The \ :emphasis:`path`\ will be created if needed. +- The module `zos_mount <./zos_mount.html>`_ can manage mount operations for a z/OS UNIX System Services (USS) file system data set. +- The *src* data set must be unique and a Fully Qualified Name (FQN). +- The *path* will be created if needed. @@ -31,7 +31,7 @@ Parameters path The absolute path name onto which the file system is to be mounted. - The \ :emphasis:`path`\ is case sensitive and must be less than or equal 1023 characters long. + The *path* is case sensitive and must be less than or equal 1023 characters long. 
| **required**: True | **type**: str @@ -40,9 +40,9 @@ path src The name of the file system to be added to the file system hierarchy. - The file system \ :emphasis:`src`\ must be a data set of type \ :emphasis:`fs\_type`\ . + The file system *src* must be a data set of type *fs_type*. - The file system \ :emphasis:`src`\ data set must be cataloged. + The file system *src* data set must be cataloged. | **required**: True | **type**: str @@ -53,7 +53,7 @@ fs_type The physical file systems data set format to perform the logical mount. - The \ :emphasis:`fs\_type`\ is required to be lowercase. + The *fs_type* is required to be lowercase. | **required**: True | **type**: str @@ -63,25 +63,25 @@ fs_type state The desired status of the described mount (choice). - If \ :emphasis:`state=mounted`\ and \ :emphasis:`src`\ are not in use, the module will add the file system entry to the parmlib member \ :emphasis:`persistent/data\_store`\ if not present. The \ :emphasis:`path`\ will be updated, the device will be mounted and the module will complete successfully with \ :emphasis:`changed=True`\ . + If *state=mounted* and *src* are not in use, the module will add the file system entry to the parmlib member *persistent/data_store* if not present. The *path* will be updated, the device will be mounted and the module will complete successfully with *changed=True*. - If \ :emphasis:`state=mounted`\ and \ :emphasis:`src`\ are in use, the module will add the file system entry to the parmlib member \ :emphasis:`persistent/data\_store`\ if not present. The \ :emphasis:`path`\ will not be updated, the device will not be mounted and the module will complete successfully with \ :emphasis:`changed=False`\ . + If *state=mounted* and *src* are in use, the module will add the file system entry to the parmlib member *persistent/data_store* if not present. The *path* will not be updated, the device will not be mounted and the module will complete successfully with *changed=False*. 
- If \ :emphasis:`state=unmounted`\ and \ :emphasis:`src`\ are in use, the module will \ :strong:`not`\ add the file system entry to the parmlib member \ :emphasis:`persistent/data\_store`\ . The device will be unmounted and the module will complete successfully with \ :emphasis:`changed=True`\ . + If *state=unmounted* and *src* are in use, the module will **not** add the file system entry to the parmlib member *persistent/data_store*. The device will be unmounted and the module will complete successfully with *changed=True*. - If \ :emphasis:`state=unmounted`\ and \ :emphasis:`src`\ are not in use, the module will \ :strong:`not`\ add the file system entry to parmlib member \ :emphasis:`persistent/data\_store`\ .The device will remain unchanged and the module will complete with \ :emphasis:`changed=False`\ . + If *state=unmounted* and *src* are not in use, the module will **not** add the file system entry to parmlib member *persistent/data_store*. The device will remain unchanged and the module will complete with *changed=False*. - If \ :emphasis:`state=present`\ , the module will add the file system entry to the provided parmlib member \ :emphasis:`persistent/data\_store`\ if not present. The module will complete successfully with \ :emphasis:`changed=True`\ . + If *state=present*, the module will add the file system entry to the provided parmlib member *persistent/data_store* if not present. The module will complete successfully with *changed=True*. - If \ :emphasis:`state=absent`\ , the module will remove the file system entry to the provided parmlib member \ :emphasis:`persistent/data\_store`\ if present. The module will complete successfully with \ :emphasis:`changed=True`\ . + If *state=absent*, the module will remove the file system entry from the provided parmlib member *persistent/data_store* if present. The module will complete successfully with *changed=True*. 
- If \ :emphasis:`state=remounted`\ , the module will \ :strong:`not`\ add the file system entry to parmlib member \ :emphasis:`persistent/data\_store`\ . The device will be unmounted and mounted, the module will complete successfully with \ :emphasis:`changed=True`\ . + If *state=remounted*, the module will **not** add the file system entry to parmlib member *persistent/data_store*. The device will be unmounted and mounted, and the module will complete successfully with *changed=True*. | **required**: False @@ -91,7 +91,7 @@ state persistent - Add or remove mount command entries to provided \ :emphasis:`data\_store`\ + Add or remove mount command entries to provided *data_store* | **required**: False | **type**: dict @@ -105,9 +105,9 @@ persistent backup - Creates a backup file or backup data set for \ :emphasis:`data\_store`\ , including the timestamp information to ensure that you retrieve the original parameters defined in \ :emphasis:`data\_store`\ . + Creates a backup file or backup data set for *data_store*, including the timestamp information to ensure that you retrieve the original parameters defined in *data_store*. - \ :emphasis:`backup\_name`\ can be used to specify a backup file name if \ :emphasis:`backup=true`\ . + *backup_name* can be used to specify a backup file name if *backup=true*. The backup file name will be returned on either success or failure of module execution such that data can be retrieved. @@ -119,11 +119,11 @@ persistent backup_name Specify the USS file name or data set name for the destination backup. - If the source \ :emphasis:`data\_store`\ is a USS file or path, the \ :emphasis:`backup\_name`\ name can be relative or absolute for file or path name. + If the source *data_store* is a USS file or path, the *backup_name* name can be relative or absolute for file or path name. - If the source is an MVS data set, the backup\_name must be an MVS data set name. 
+ If the source is an MVS data set, the backup_name must be an MVS data set name. - If the backup\_name is not provided, the default \ :emphasis:`backup\_name`\ will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp. For example, \ :literal:`/path/file\_name.2020-04-23-08-32-29-bak.tar`\ . + If the backup_name is not provided, the default *backup_name* will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp. For example, ``/path/file_name.2020-04-23-08-32-29-bak.tar``. If the source is an MVS data set, it will be a data set with a random name generated by calling the ZOAU API. The MVS backup data set recovery can be done by renaming it. @@ -132,9 +132,9 @@ persistent comment - If provided, this is used as a comment that surrounds the command in the \ :emphasis:`persistent/data\_store`\ + If provided, this is used as a comment that surrounds the command in the *persistent/data_store* - Comments are used to encapsulate the \ :emphasis:`persistent/data\_store`\ entry such that they can easily be understood and located. + Comments are used to encapsulate the *persistent/data_store* entry such that they can easily be understood and located. | **required**: False | **type**: list @@ -145,7 +145,7 @@ persistent unmount_opts Describes how the unmount will be performed. - For more on coded character set identifiers, review the IBM documentation topic \ :strong:`UNMOUNT - Remove a file system from the file hierarchy`\ . + For more on coded character set identifiers, review the IBM documentation topic **UNMOUNT - Remove a file system from the file hierarchy**. | **required**: False | **type**: str @@ -156,13 +156,13 @@ unmount_opts mount_opts Options available to the mount. - If \ :emphasis:`mount\_opts=ro`\ on a mounted/remount, mount is performed read-only. 
+ If *mount_opts=ro* on a mounted/remount, mount is performed read-only. - If \ :emphasis:`mount\_opts=same`\ and (unmount\_opts=remount), mount is opened in the same mode as previously opened. + If *mount_opts=same* and (unmount_opts=remount), mount is opened in the same mode as previously opened. - If \ :emphasis:`mount\_opts=nowait`\ , mount is performed asynchronously. + If *mount_opts=nowait*, mount is performed asynchronously. - If \ :emphasis:`mount\_opts=nosecurity`\ , security checks are not enforced for files in this file system. + If *mount_opts=nosecurity*, security checks are not enforced for files in this file system. | **required**: False | **type**: str @@ -184,11 +184,11 @@ tag_untagged When the file system is unmounted, the tags are lost. - If \ :emphasis:`tag\_untagged=notext`\ none of the untagged files in the file system are automatically converted during file reading and writing. + If *tag_untagged=notext* none of the untagged files in the file system are automatically converted during file reading and writing. - If \ :emphasis:`tag\_untagged=text`\ each untagged file is implicitly marked as containing pure text data that can be converted. + If *tag_untagged=text* each untagged file is implicitly marked as containing pure text data that can be converted. - If this flag is used, use of tag\_ccsid is encouraged. + If this flag is used, use of tag_ccsid is encouraged. | **required**: False | **type**: str @@ -198,13 +198,13 @@ tag_untagged tag_ccsid Identifies the coded character set identifier (ccsid) to be implicitly set for the untagged file. - For more on coded character set identifiers, review the IBM documentation topic \ :strong:`Coded Character Sets`\ . + For more on coded character set identifiers, review the IBM documentation topic **Coded Character Sets**. Specified as a decimal value from 0 to 65535. However, when TEXT is specified, the value must be between 0 and 65535. 
The value is not checked as being valid and the corresponding code page is not checked as being installed. - Required when \ :emphasis:`tag\_untagged=TEXT`\ . + Required when *tag_untagged=TEXT*. | **required**: False | **type**: int @@ -214,10 +214,10 @@ allow_uid Specifies whether the SETUID and SETGID mode bits on an executable in this file system are considered. Also determines whether the APF extended attribute or the Program Control extended attribute is honored. - If \ :emphasis:`allow\_uid=True`\ the SETUID and SETGID mode bits are considered when a program in this file system is run. SETUID is the default. + If *allow_uid=True* the SETUID and SETGID mode bits are considered when a program in this file system is run. SETUID is the default. - If \ :emphasis:`allow\_uid=False`\ the SETUID and SETGID mode bits are ignored when a program in this file system is run. The program runs as though the SETUID and SETGID mode bits were not set. Also, if you specify the NOSETUID option on MOUNT, the APF extended attribute and the Program Control Bit values are ignored. + If *allow_uid=False* the SETUID and SETGID mode bits are ignored when a program in this file system is run. The program runs as though the SETUID and SETGID mode bits were not set. Also, if you specify the NOSETUID option on MOUNT, the APF extended attribute and the Program Control Bit values are ignored. | **required**: False @@ -226,10 +226,10 @@ allow_uid sysname - For systems participating in shared file system, \ :emphasis:`sysname`\ specifies the particular system on which a mount should be performed. This system will then become the owner of the file system mounted. This system must be IPLed with SYSPLEX(YES). + For systems participating in shared file system, *sysname* specifies the particular system on which a mount should be performed. This system will then become the owner of the file system mounted. This system must be IPLed with SYSPLEX(YES). 
- \ :emphasis:`sysname`\ is the name of a system participating in shared file system. The name must be 1-8 characters long; the valid characters are A-Z, 0-9, $, @, and #. + *sysname* is the name of a system participating in shared file system. The name must be 1-8 characters long; the valid characters are A-Z, 0-9, $, @, and #. | **required**: False @@ -240,13 +240,13 @@ automove These parameters apply only in a sysplex where systems are exploiting the shared file system capability. They specify what happens to the ownership of a file system when a shutdown, PFS termination, dead system takeover, or file system move occurs. The default setting is AUTOMOVE where the file system will be randomly moved to another system (no system list used). - \ :emphasis:`automove=automove`\ indicates that ownership of the file system can be automatically moved to another system participating in a shared file system. + *automove=automove* indicates that ownership of the file system can be automatically moved to another system participating in a shared file system. - \ :emphasis:`automove=noautomove`\ prevents movement of the file system's ownership in some situations. + *automove=noautomove* prevents movement of the file system's ownership in some situations. - \ :emphasis:`automove=unmount`\ allows the file system to be unmounted in some situations. + *automove=unmount* allows the file system to be unmounted in some situations. | **required**: False @@ -275,7 +275,7 @@ automove_list tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str @@ -388,7 +388,7 @@ Notes If an uncataloged data set needs to be fetched, it should be cataloged first. 
- Uncataloged data sets can be cataloged using the \ `zos\_data\_set <./zos_data_set.html>`__\ module. + Uncataloged data sets can be cataloged using the `zos_data_set <./zos_data_set.html>`_ module. @@ -466,7 +466,7 @@ persistent | **sample**: SYS1.FILESYS(PRMAABAK) comment - The text that was used in markers around the \ :emphasis:`Persistent/data\_store`\ entry. + The text that was used in markers around the *Persistent/data_store* entry. | **returned**: always | **type**: list @@ -528,7 +528,7 @@ allow_uid true sysname - \ :emphasis:`sysname`\ specifies the particular system on which a mount should be performed. + *sysname* specifies the particular system on which a mount should be performed. | **returned**: if Non-None | **type**: str diff --git a/docs/source/modules/zos_mvs_raw.rst b/docs/source/modules/zos_mvs_raw.rst index 2c5b65a61..817951fe3 100644 --- a/docs/source/modules/zos_mvs_raw.rst +++ b/docs/source/modules/zos_mvs_raw.rst @@ -45,9 +45,9 @@ parm auth Determines whether this program should run with authorized privileges. - If \ :emphasis:`auth=true`\ , the program runs as APF authorized. + If *auth=true*, the program runs as APF authorized. - If \ :emphasis:`auth=false`\ , the program runs as unauthorized. + If *auth=false*, the program runs as unauthorized. | **required**: False | **type**: bool @@ -57,7 +57,7 @@ auth verbose Determines if verbose output should be returned from the underlying utility used by this module. - When \ :emphasis:`verbose=true`\ verbose output is returned on module failure. + When *verbose=true* verbose output is returned on module failure. | **required**: False | **type**: bool @@ -67,19 +67,19 @@ verbose dds The input data source. - \ :emphasis:`dds`\ supports 6 types of sources + *dds* supports 6 types of sources - 1. \ :emphasis:`dd\_data\_set`\ for data set files. + 1. *dd_data_set* for data set files. - 2. \ :emphasis:`dd\_unix`\ for UNIX files. + 2. *dd_unix* for UNIX files. - 3. 
\ :emphasis:`dd\_input`\ for in-stream data set. + 3. *dd_input* for in-stream data set. - 4. \ :emphasis:`dd\_dummy`\ for no content input. + 4. *dd_dummy* for no content input. - 5. \ :emphasis:`dd\_concat`\ for a data set concatenation. + 5. *dd_concat* for a data set concatenation. - 6. \ :emphasis:`dds`\ supports any combination of source types. + 6. *dds* supports any combination of source types. | **required**: False | **type**: list @@ -89,7 +89,7 @@ dds dd_data_set Specify a data set. - \ :emphasis:`dd\_data\_set`\ can reference an existing data set or be used to define a new data set to be created during execution. + *dd_data_set* can reference an existing data set or be used to define a new data set to be created during execution. | **required**: False | **type**: dict @@ -114,7 +114,7 @@ dds type - The data set type. Only required when \ :emphasis:`disposition=new`\ . + The data set type. Only required when *disposition=new*. Maps to DSNTYPE on z/OS. @@ -124,7 +124,7 @@ dds disposition - \ :emphasis:`disposition`\ indicates the status of a data set. + *disposition* indicates the status of a data set. Defaults to shr. @@ -134,7 +134,7 @@ dds disposition_normal - \ :emphasis:`disposition\_normal`\ indicates what to do with the data set after a normal termination of the program. + *disposition_normal* indicates what to do with the data set after a normal termination of the program. | **required**: False | **type**: str @@ -142,7 +142,7 @@ dds disposition_abnormal - \ :emphasis:`disposition\_abnormal`\ indicates what to do with the data set after an abnormal termination of the program. + *disposition_abnormal* indicates what to do with the data set after an abnormal termination of the program. | **required**: False | **type**: str @@ -150,15 +150,15 @@ dds reuse - Determines if a data set should be reused if \ :emphasis:`disposition=new`\ and if a data set with a matching name already exists. 
+ Determines if a data set should be reused if *disposition=new* and if a data set with a matching name already exists. - If \ :emphasis:`reuse=true`\ , \ :emphasis:`disposition`\ will be automatically switched to \ :literal:`SHR`\ . + If *reuse=true*, *disposition* will be automatically switched to ``SHR``. - If \ :emphasis:`reuse=false`\ , and a data set with a matching name already exists, allocation will fail. + If *reuse=false*, and a data set with a matching name already exists, allocation will fail. - Mutually exclusive with \ :emphasis:`replace`\ . + Mutually exclusive with *replace*. - \ :emphasis:`reuse`\ is only considered when \ :emphasis:`disposition=new`\ + *reuse* is only considered when *disposition=new* | **required**: False | **type**: bool @@ -166,17 +166,17 @@ dds replace - Determines if a data set should be replaced if \ :emphasis:`disposition=new`\ and a data set with a matching name already exists. + Determines if a data set should be replaced if *disposition=new* and a data set with a matching name already exists. - If \ :emphasis:`replace=true`\ , the original data set will be deleted, and a new data set created. + If *replace=true*, the original data set will be deleted, and a new data set created. - If \ :emphasis:`replace=false`\ , and a data set with a matching name already exists, allocation will fail. + If *replace=false*, and a data set with a matching name already exists, allocation will fail. - Mutually exclusive with \ :emphasis:`reuse`\ . + Mutually exclusive with *reuse*. - \ :emphasis:`replace`\ is only considered when \ :emphasis:`disposition=new`\ + *replace* is only considered when *disposition=new* - \ :emphasis:`replace`\ will result in loss of all data in the original data set unless \ :emphasis:`backup`\ is specified. + *replace* will result in loss of all data in the original data set unless *backup* is specified. 
| **required**: False | **type**: bool @@ -184,9 +184,9 @@ dds backup - Determines if a backup should be made of an existing data set when \ :emphasis:`disposition=new`\ , \ :emphasis:`replace=true`\ , and a data set with the desired name is found. + Determines if a backup should be made of an existing data set when *disposition=new*, *replace=true*, and a data set with the desired name is found. - \ :emphasis:`backup`\ is only used when \ :emphasis:`replace=true`\ . + *backup* is only used when *replace=true*. | **required**: False | **type**: bool @@ -194,7 +194,7 @@ dds space_type - The unit of measurement to use when allocating space for a new data set using \ :emphasis:`space\_primary`\ and \ :emphasis:`space\_secondary`\ . + The unit of measurement to use when allocating space for a new data set using *space_primary* and *space_secondary*. | **required**: False | **type**: str @@ -204,9 +204,9 @@ dds space_primary The primary amount of space to allocate for a new data set. - The value provided to \ :emphasis:`space\_type`\ is used as the unit of space for the allocation. + The value provided to *space_type* is used as the unit of space for the allocation. - Not applicable when \ :emphasis:`space\_type=blklgth`\ or \ :emphasis:`space\_type=reclgth`\ . + Not applicable when *space_type=blklgth* or *space_type=reclgth*. | **required**: False | **type**: int @@ -215,9 +215,9 @@ dds space_secondary When primary allocation of space is filled, secondary space will be allocated with the provided size as needed. - The value provided to \ :emphasis:`space\_type`\ is used as the unit of space for the allocation. + The value provided to *space_type* is used as the unit of space for the allocation. - Not applicable when \ :emphasis:`space\_type=blklgth`\ or \ :emphasis:`space\_type=reclgth`\ . + Not applicable when *space_type=blklgth* or *space_type=reclgth*. 
| **required**: False | **type**: int @@ -235,7 +235,7 @@ dds sms_management_class The desired management class for a new SMS-managed data set. - \ :emphasis:`sms\_management\_class`\ is ignored if specified for an existing data set. + *sms_management_class* is ignored if specified for an existing data set. All values must be between 1-8 alpha-numeric characters. @@ -246,7 +246,7 @@ dds sms_storage_class The desired storage class for a new SMS-managed data set. - \ :emphasis:`sms\_storage\_class`\ is ignored if specified for an existing data set. + *sms_storage_class* is ignored if specified for an existing data set. All values must be between 1-8 alpha-numeric characters. @@ -257,7 +257,7 @@ dds sms_data_class The desired data class for a new SMS-managed data set. - \ :emphasis:`sms\_data\_class`\ is ignored if specified for an existing data set. + *sms_data_class* is ignored if specified for an existing data set. All values must be between 1-8 alpha-numeric characters. @@ -268,7 +268,7 @@ dds block_size The maximum length of a block in bytes. - Default is dependent on \ :emphasis:`record\_format`\ + Default is dependent on *record_format* | **required**: False | **type**: int @@ -284,9 +284,9 @@ dds key_label The label for the encryption key used by the system to encrypt the data set. - \ :emphasis:`key\_label`\ is the public name of a protected encryption key in the ICSF key repository. + *key_label* is the public name of a protected encryption key in the ICSF key repository. - \ :emphasis:`key\_label`\ should only be provided when creating an extended format data set. + *key_label* should only be provided when creating an extended format data set. Maps to DSKEYLBL on z/OS. @@ -308,7 +308,7 @@ dds Key label must have a private key associated with it. - \ :emphasis:`label`\ can be a maximum of 64 characters. + *label* can be a maximum of 64 characters. Maps to KEYLAB1 on z/OS. 
@@ -317,9 +317,9 @@ dds encoding - How the label for the key encrypting key specified by \ :emphasis:`label`\ is encoded by the Encryption Key Manager. + How the label for the key encrypting key specified by *label* is encoded by the Encryption Key Manager. - \ :emphasis:`encoding`\ can either be set to \ :literal:`l`\ for label encoding, or \ :literal:`h`\ for hash encoding. + *encoding* can either be set to ``l`` for label encoding, or ``h`` for hash encoding. Maps to KEYCD1 on z/OS. @@ -343,7 +343,7 @@ dds Key label must have a private key associated with it. - \ :emphasis:`label`\ can be a maximum of 64 characters. + *label* can be a maximum of 64 characters. Maps to KEYLAB2 on z/OS. @@ -352,9 +352,9 @@ dds encoding - How the label for the key encrypting key specified by \ :emphasis:`label`\ is encoded by the Encryption Key Manager. + How the label for the key encrypting key specified by *label* is encoded by the Encryption Key Manager. - \ :emphasis:`encoding`\ can either be set to \ :literal:`l`\ for label encoding, or \ :literal:`h`\ for hash encoding. + *encoding* can either be set to ``l`` for label encoding, or ``h`` for hash encoding. Maps to KEYCD2 on z/OS. @@ -367,7 +367,7 @@ dds key_length The length of the keys used in a new data set. - If using SMS, setting \ :emphasis:`key\_length`\ overrides the key length defined in the SMS data class of the data set. + If using SMS, setting *key_length* overrides the key length defined in the SMS data class of the data set. Valid values are (0-255 non-vsam), (1-255 vsam). @@ -380,14 +380,14 @@ dds The first byte of a logical record is position 0. - Provide \ :emphasis:`key\_offset`\ only for VSAM key-sequenced data sets. + Provide *key_offset* only for VSAM key-sequenced data sets. | **required**: False | **type**: int record_length - The logical record length. (e.g \ :literal:`80`\ ). + The logical record length. (e.g ``80``). For variable data sets, the length must include the 4-byte prefix area. 
@@ -421,11 +421,11 @@ dds type The type of the content to be returned. - \ :literal:`text`\ means return content in encoding specified by \ :emphasis:`response\_encoding`\ . + ``text`` means return content in encoding specified by *response_encoding*. - \ :emphasis:`src\_encoding`\ and \ :emphasis:`response\_encoding`\ are only used when \ :emphasis:`type=text`\ . + *src_encoding* and *response_encoding* are only used when *type=text*. - \ :literal:`base64`\ means return content in binary mode. + ``base64`` means return content in binary mode. | **required**: True | **type**: str @@ -467,7 +467,7 @@ dds path The path to an existing UNIX file. - Or provide the path to an new created UNIX file when \ :emphasis:`status\_group=OCREAT`\ . + Or provide the path to a newly created UNIX file when *status_group=OCREAT*. The provided path must be absolute. @@ -492,7 +492,7 @@ dds mode - The file access attributes when the UNIX file is created specified in \ :emphasis:`path`\ . + The file access attributes when the UNIX file specified in *path* is created. Specify the mode as an octal number similarly to chmod. @@ -503,47 +503,47 @@ dds status_group - The status for the UNIX file specified in \ :emphasis:`path`\ . + The status for the UNIX file specified in *path*. - If you do not specify a value for the \ :emphasis:`status\_group`\ parameter, the module assumes that the pathname exists, searches for it, and fails the module if the pathname does not exist. + If you do not specify a value for the *status_group* parameter, the module assumes that the pathname exists, searches for it, and fails the module if the pathname does not exist. Maps to PATHOPTS status group file options on z/OS. You can specify up to 6 choices. - \ :emphasis:`oappend`\ sets the file offset to the end of the file before each write, so that data is written at the end of the file. + *oappend* sets the file offset to the end of the file before each write, so that data is written at the end of the file. 
- \ :emphasis:`ocreat`\ specifies that if the file does not exist, the system is to create it. If a directory specified in the pathname does not exist, a new directory and a new file are not created. If the file already exists and \ :emphasis:`oexcl`\ was not specified, the system allows the program to use the existing file. If the file already exists and \ :emphasis:`oexcl`\ was specified, the system fails the allocation and the job step. + *ocreat* specifies that if the file does not exist, the system is to create it. If a directory specified in the pathname does not exist, a new directory and a new file are not created. If the file already exists and *oexcl* was not specified, the system allows the program to use the existing file. If the file already exists and *oexcl* was specified, the system fails the allocation and the job step. - \ :emphasis:`oexcl`\ specifies that if the file does not exist, the system is to create it. If the file already exists, the system fails the allocation and the job step. The system ignores \ :emphasis:`oexcl`\ if \ :emphasis:`ocreat`\ is not also specified. + *oexcl* specifies that if the file does not exist, the system is to create it. If the file already exists, the system fails the allocation and the job step. The system ignores *oexcl* if *ocreat* is not also specified. - \ :emphasis:`onoctty`\ specifies that if the PATH parameter identifies a terminal device, opening of the file does not make the terminal device the controlling terminal for the process. + *onoctty* specifies that if the PATH parameter identifies a terminal device, opening of the file does not make the terminal device the controlling terminal for the process. - \ :emphasis:`ononblock`\ specifies the following, depending on the type of file + *ononblock* specifies the following, depending on the type of file For a FIFO special file - 1. 
With \ :emphasis:`ononblock`\ specified and \ :emphasis:`ordonly`\ access, an open function for reading-only returns without delay. + 1. With *ononblock* specified and *ordonly* access, an open function for reading-only returns without delay. - 2. With \ :emphasis:`ononblock`\ not specified and \ :emphasis:`ordonly`\ access, an open function for reading-only blocks (waits) until a process opens the file for writing. + 2. With *ononblock* not specified and *ordonly* access, an open function for reading-only blocks (waits) until a process opens the file for writing. - 3. With \ :emphasis:`ononblock`\ specified and \ :emphasis:`owronly`\ access, an open function for writing-only returns an error if no process currently has the file open for reading. + 3. With *ononblock* specified and *owronly* access, an open function for writing-only returns an error if no process currently has the file open for reading. - 4. With \ :emphasis:`ononblock`\ not specified and \ :emphasis:`owronly`\ access, an open function for writing-only blocks (waits) until a process opens the file for reading. + 4. With *ononblock* not specified and *owronly* access, an open function for writing-only blocks (waits) until a process opens the file for reading. 5. For a character special file that supports nonblocking open - 6. If \ :emphasis:`ononblock`\ is specified, an open function returns without blocking (waiting) until the device is ready or available. Device response depends on the type of device. + 6. If *ononblock* is specified, an open function returns without blocking (waiting) until the device is ready or available. Device response depends on the type of device. - 7. If \ :emphasis:`ononblock`\ is not specified, an open function blocks (waits) until the device is ready or available. + 7. If *ononblock* is not specified, an open function blocks (waits) until the device is ready or available. - \ :emphasis:`ononblock`\ has no effect on other file types. 
+ *ononblock* has no effect on other file types. - \ :emphasis:`osync`\ specifies that the system is to move data from buffer storage to permanent storage before returning control from a callable service that performs a write. + *osync* specifies that the system is to move data from buffer storage to permanent storage before returning control from a callable service that performs a write. - \ :emphasis:`otrunc`\ specifies that the system is to truncate the file length to zero if all the following are true: the file specified exists, the file is a regular file, and the file successfully opened with \ :emphasis:`ordwr`\ or \ :emphasis:`owronly`\ . + *otrunc* specifies that the system is to truncate the file length to zero if all the following are true: the file specified exists, the file is a regular file, and the file successfully opened with *ordwr* or *owronly*. - When \ :emphasis:`otrunc`\ is specified, the system does not change the mode and owner. \ :emphasis:`otrunc`\ has no effect on FIFO special files or character special files. + When *otrunc* is specified, the system does not change the mode and owner. *otrunc* has no effect on FIFO special files or character special files. | **required**: False | **type**: list @@ -552,7 +552,7 @@ dds access_group - The kind of access to request for the UNIX file specified in \ :emphasis:`path`\ . + The kind of access to request for the UNIX file specified in *path*. | **required**: False | **type**: str @@ -560,7 +560,7 @@ dds file_data_type - The type of data that is (or will be) stored in the file specified in \ :emphasis:`path`\ . + The type of data that is (or will be) stored in the file specified in *path*. Maps to FILEDATA on z/OS. @@ -573,7 +573,7 @@ dds block_size The block size, in bytes, for the UNIX file. 
- Default is dependent on \ :emphasis:`record\_format`\ + Default is dependent on *record_format* | **required**: False | **type**: int @@ -582,7 +582,7 @@ dds record_length The logical record length for the UNIX file. - \ :emphasis:`record\_length`\ is required in situations where the data will be processed as records and therefore, \ :emphasis:`record\_length`\ , \ :emphasis:`block\_size`\ and \ :emphasis:`record\_format`\ need to be supplied since a UNIX file would normally be treated as a stream of bytes. + *record_length* is required in situations where the data will be processed as records and therefore, *record_length*, *block_size* and *record_format* need to be supplied since a UNIX file would normally be treated as a stream of bytes. Maps to LRECL on z/OS. @@ -593,7 +593,7 @@ dds record_format The record format for the UNIX file. - \ :emphasis:`record\_format`\ is required in situations where the data will be processed as records and therefore, \ :emphasis:`record\_length`\ , \ :emphasis:`block\_size`\ and \ :emphasis:`record\_format`\ need to be supplied since a UNIX file would normally be treated as a stream of bytes. + *record_format* is required in situations where the data will be processed as records and therefore, *record_length*, *block_size* and *record_format* need to be supplied since a UNIX file would normally be treated as a stream of bytes. | **required**: False | **type**: str @@ -612,11 +612,11 @@ dds type The type of the content to be returned. - \ :literal:`text`\ means return content in encoding specified by \ :emphasis:`response\_encoding`\ . + ``text`` means return content in encoding specified by *response_encoding*. - \ :emphasis:`src\_encoding`\ and \ :emphasis:`response\_encoding`\ are only used when \ :emphasis:`type=text`\ . + *src_encoding* and *response_encoding* are only used when *type=text*. - \ :literal:`base64`\ means return content in binary mode. + ``base64`` means return content in binary mode. 
| **required**: True | **type**: str @@ -642,7 +642,7 @@ dds dd_input - \ :emphasis:`dd\_input`\ is used to specify an in-stream data set. + *dd_input* is used to specify an in-stream data set. Input will be saved to a temporary data set with a record length of 80. @@ -660,15 +660,15 @@ dds content The input contents for the DD. - \ :emphasis:`dd\_input`\ supports single or multiple lines of input. + *dd_input* supports single or multiple lines of input. Multi-line input can be provided as a multi-line string or a list of strings with 1 line per list item. If a list of strings is provided, newlines will be added to each of the lines when used as input. - If a multi-line string is provided, use the proper block scalar style. YAML supports both \ `literal `__\ and \ `folded `__\ scalars. It is recommended to use the literal style indicator "|" with a block indentation indicator, for example; \ :emphasis:`content: | 2`\ is a literal block style indicator with a 2 space indentation, the entire block will be indented and newlines preserved. The block indentation range is 1 - 9. While generally unnecessary, YAML does support block \ `chomping `__\ indicators "+" and "-" as well. + If a multi-line string is provided, use the proper block scalar style. YAML supports both `literal `_ and `folded `_ scalars. It is recommended to use the literal style indicator "|" with a block indentation indicator, for example; *content: | 2* is a literal block style indicator with a 2 space indentation, the entire block will be indented and newlines preserved. The block indentation range is 1 - 9. While generally unnecessary, YAML does support block `chomping `_ indicators "+" and "-" as well. - When using the \ :emphasis:`content`\ option for instream-data, the module will ensure that all lines contain a blank in columns 1 and 2 and add blanks when not present while retaining a maximum length of 80 columns for any line. 
This is true for all \ :emphasis:`content`\ types; string, list of strings and when using a YAML block indicator. + When using the *content* option for instream-data, the module will ensure that all lines contain a blank in columns 1 and 2 and add blanks when not present while retaining a maximum length of 80 columns for any line. This is true for all *content* types; string, list of strings and when using a YAML block indicator. | **required**: True | **type**: raw @@ -686,11 +686,11 @@ dds type The type of the content to be returned. - \ :literal:`text`\ means return content in encoding specified by \ :emphasis:`response\_encoding`\ . + ``text`` means return content in encoding specified by *response_encoding*. - \ :emphasis:`src\_encoding`\ and \ :emphasis:`response\_encoding`\ are only used when \ :emphasis:`type=text`\ . + *src_encoding* and *response_encoding* are only used when *type=text*. - \ :literal:`base64`\ means return content in binary mode. + ``base64`` means return content in binary mode. | **required**: True | **type**: str @@ -700,7 +700,7 @@ dds src_encoding The encoding of the data set on the z/OS system. - for \ :emphasis:`dd\_input`\ , \ :emphasis:`src\_encoding`\ should generally not need to be changed. + for *dd_input*, *src_encoding* should generally not need to be changed. | **required**: False | **type**: str @@ -718,7 +718,7 @@ dds dd_output - Use \ :emphasis:`dd\_output`\ to specify - Content sent to the DD should be returned to the user. + Use *dd_output* to specify - Content sent to the DD should be returned to the user. | **required**: False | **type**: dict @@ -743,11 +743,11 @@ dds type The type of the content to be returned. - \ :literal:`text`\ means return content in encoding specified by \ :emphasis:`response\_encoding`\ . + ``text`` means return content in encoding specified by *response_encoding*. - \ :emphasis:`src\_encoding`\ and \ :emphasis:`response\_encoding`\ are only used when \ :emphasis:`type=text`\ . 
+ *src_encoding* and *response_encoding* are only used when *type=text*. - \ :literal:`base64`\ means return content in binary mode. + ``base64`` means return content in binary mode. | **required**: True | **type**: str @@ -757,7 +757,7 @@ dds src_encoding The encoding of the data set on the z/OS system. - for \ :emphasis:`dd\_input`\ , \ :emphasis:`src\_encoding`\ should generally not need to be changed. + for *dd_input*, *src_encoding* should generally not need to be changed. | **required**: False | **type**: str @@ -775,9 +775,9 @@ dds dd_dummy - Use \ :emphasis:`dd\_dummy`\ to specify - No device or external storage space is to be allocated to the data set. - No disposition processing is to be performed on the data set. + Use *dd_dummy* to specify - No device or external storage space is to be allocated to the data set. - No disposition processing is to be performed on the data set. - \ :emphasis:`dd\_dummy`\ accepts no content input. + *dd_dummy* accepts no content input. | **required**: False | **type**: dict @@ -792,7 +792,7 @@ dds dd_vio - \ :emphasis:`dd\_vio`\ is used to handle temporary data sets. + *dd_vio* is used to handle temporary data sets. VIO data sets reside in the paging space; but, to the problem program and the access method, the data sets appear to reside on a direct access storage device. @@ -811,7 +811,7 @@ dds dd_concat - \ :emphasis:`dd\_concat`\ is used to specify a data set concatenation. + *dd_concat* is used to specify a data set concatenation. | **required**: False | **type**: dict @@ -825,7 +825,7 @@ dds dds - A list of DD statements, which can contain any of the following types: \ :emphasis:`dd\_data\_set`\ , \ :emphasis:`dd\_unix`\ , and \ :emphasis:`dd\_input`\ . + A list of DD statements, which can contain any of the following types: *dd_data_set*, *dd_unix*, and *dd_input*. | **required**: False | **type**: list @@ -835,7 +835,7 @@ dds dd_data_set Specify a data set. 
- \ :emphasis:`dd\_data\_set`\ can reference an existing data set. The data set referenced with \ :literal:`data\_set\_name`\ must be allocated before the module \ `zos\_mvs\_raw <./zos_mvs_raw.html>`__\ is run, you can use \ `zos\_data\_set <./zos_data_set.html>`__\ to allocate a data set. + *dd_data_set* can reference an existing data set. The data set referenced with ``data_set_name`` must be allocated before the module `zos_mvs_raw <./zos_mvs_raw.html>`_ is run, you can use `zos_data_set <./zos_data_set.html>`_ to allocate a data set. | **required**: False | **type**: dict @@ -853,7 +853,7 @@ dds type - The data set type. Only required when \ :emphasis:`disposition=new`\ . + The data set type. Only required when *disposition=new*. Maps to DSNTYPE on z/OS. @@ -863,7 +863,7 @@ dds disposition - \ :emphasis:`disposition`\ indicates the status of a data set. + *disposition* indicates the status of a data set. Defaults to shr. @@ -873,7 +873,7 @@ dds disposition_normal - \ :emphasis:`disposition\_normal`\ indicates what to do with the data set after normal termination of the program. + *disposition_normal* indicates what to do with the data set after normal termination of the program. | **required**: False | **type**: str @@ -881,7 +881,7 @@ dds disposition_abnormal - \ :emphasis:`disposition\_abnormal`\ indicates what to do with the data set after abnormal termination of the program. + *disposition_abnormal* indicates what to do with the data set after abnormal termination of the program. | **required**: False | **type**: str @@ -889,15 +889,15 @@ dds reuse - Determines if data set should be reused if \ :emphasis:`disposition=new`\ and a data set with matching name already exists. + Determines if data set should be reused if *disposition=new* and a data set with matching name already exists. - If \ :emphasis:`reuse=true`\ , \ :emphasis:`disposition`\ will be automatically switched to \ :literal:`SHR`\ . 
+ If *reuse=true*, *disposition* will be automatically switched to ``SHR``. - If \ :emphasis:`reuse=false`\ , and a data set with a matching name already exists, allocation will fail. + If *reuse=false*, and a data set with a matching name already exists, allocation will fail. - Mutually exclusive with \ :emphasis:`replace`\ . + Mutually exclusive with *replace*. - \ :emphasis:`reuse`\ is only considered when \ :emphasis:`disposition=new`\ + *reuse* is only considered when *disposition=new* | **required**: False | **type**: bool @@ -905,17 +905,17 @@ dds replace - Determines if data set should be replaced if \ :emphasis:`disposition=new`\ and a data set with matching name already exists. + Determines if data set should be replaced if *disposition=new* and a data set with matching name already exists. - If \ :emphasis:`replace=true`\ , the original data set will be deleted, and a new data set created. + If *replace=true*, the original data set will be deleted, and a new data set created. - If \ :emphasis:`replace=false`\ , and a data set with a matching name already exists, allocation will fail. + If *replace=false*, and a data set with a matching name already exists, allocation will fail. - Mutually exclusive with \ :emphasis:`reuse`\ . + Mutually exclusive with *reuse*. - \ :emphasis:`replace`\ is only considered when \ :emphasis:`disposition=new`\ + *replace* is only considered when *disposition=new* - \ :emphasis:`replace`\ will result in loss of all data in the original data set unless \ :emphasis:`backup`\ is specified. + *replace* will result in loss of all data in the original data set unless *backup* is specified. | **required**: False | **type**: bool @@ -923,9 +923,9 @@ dds backup - Determines if a backup should be made of existing data set when \ :emphasis:`disposition=new`\ , \ :emphasis:`replace=true`\ , and a data set with the desired name is found. 
+ Determines if a backup should be made of existing data set when *disposition=new*, *replace=true*, and a data set with the desired name is found. - \ :emphasis:`backup`\ is only used when \ :emphasis:`replace=true`\ . + *backup* is only used when *replace=true*. | **required**: False | **type**: bool @@ -933,7 +933,7 @@ dds space_type - The unit of measurement to use when allocating space for a new data set using \ :emphasis:`space\_primary`\ and \ :emphasis:`space\_secondary`\ . + The unit of measurement to use when allocating space for a new data set using *space_primary* and *space_secondary*. | **required**: False | **type**: str @@ -943,9 +943,9 @@ dds space_primary The primary amount of space to allocate for a new data set. - The value provided to \ :emphasis:`space\_type`\ is used as the unit of space for the allocation. + The value provided to *space_type* is used as the unit of space for the allocation. - Not applicable when \ :emphasis:`space\_type=blklgth`\ or \ :emphasis:`space\_type=reclgth`\ . + Not applicable when *space_type=blklgth* or *space_type=reclgth*. | **required**: False | **type**: int @@ -954,9 +954,9 @@ dds space_secondary When primary allocation of space is filled, secondary space will be allocated with the provided size as needed. - The value provided to \ :emphasis:`space\_type`\ is used as the unit of space for the allocation. + The value provided to *space_type* is used as the unit of space for the allocation. - Not applicable when \ :emphasis:`space\_type=blklgth`\ or \ :emphasis:`space\_type=reclgth`\ . + Not applicable when *space_type=blklgth* or *space_type=reclgth*. | **required**: False | **type**: int @@ -974,7 +974,7 @@ dds sms_management_class The desired management class for a new SMS-managed data set. - \ :emphasis:`sms\_management\_class`\ is ignored if specified for an existing data set. + *sms_management_class* is ignored if specified for an existing data set. All values must be between 1-8 alpha-numeric characters. 
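The documented interaction of *reuse*, *replace* and *backup* under *disposition=new* can be sketched as follows. This is an illustrative Python sketch of the rules stated above, not the module's actual implementation; the step strings are hypothetical.

```python
def resolve_new_allocation(exists, reuse=False, replace=False, backup=False):
    """Sketch of the documented disposition=new rules for dd_data_set."""
    # reuse and replace are mutually exclusive per the documentation.
    if reuse and replace:
        raise ValueError("reuse and replace are mutually exclusive")
    if not exists:
        # No name clash: allocation proceeds normally.
        return ["allocate new data set"]
    if reuse:
        # disposition is automatically switched to SHR for the existing data set.
        return ["open existing data set with disposition SHR"]
    if replace:
        steps = []
        if backup:
            # backup is only honored together with replace=true.
            steps.append("back up existing data set")
        steps += ["delete existing data set", "allocate new data set"]
        return steps
    # Neither reuse nor replace: a matching name makes allocation fail.
    raise RuntimeError("allocation fails: data set already exists")
```

For example, `resolve_new_allocation(True, replace=True, backup=True)` backs up, deletes, then re-allocates, which is why *replace* without *backup* loses all data in the original data set.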
@@ -985,7 +985,7 @@ dds sms_storage_class The desired storage class for a new SMS-managed data set. - \ :emphasis:`sms\_storage\_class`\ is ignored if specified for an existing data set. + *sms_storage_class* is ignored if specified for an existing data set. All values must be between 1-8 alpha-numeric characters. @@ -996,7 +996,7 @@ dds sms_data_class The desired data class for a new SMS-managed data set. - \ :emphasis:`sms\_data\_class`\ is ignored if specified for an existing data set. + *sms_data_class* is ignored if specified for an existing data set. All values must be between 1-8 alpha-numeric characters. @@ -1007,7 +1007,7 @@ dds block_size The maximum length of a block in bytes. - Default is dependent on \ :emphasis:`record\_format`\ + Default is dependent on *record_format* | **required**: False | **type**: int @@ -1023,9 +1023,9 @@ dds key_label The label for the encryption key used by the system to encrypt the data set. - \ :emphasis:`key\_label`\ is the public name of a protected encryption key in the ICSF key repository. + *key_label* is the public name of a protected encryption key in the ICSF key repository. - \ :emphasis:`key\_label`\ should only be provided when creating an extended format data set. + *key_label* should only be provided when creating an extended format data set. Maps to DSKEYLBL on z/OS. @@ -1047,7 +1047,7 @@ dds Key label must have a private key associated with it. - \ :emphasis:`label`\ can be a maximum of 64 characters. + *label* can be a maximum of 64 characters. Maps to KEYLAB1 on z/OS. @@ -1056,9 +1056,9 @@ dds encoding - How the label for the key encrypting key specified by \ :emphasis:`label`\ is encoded by the Encryption Key Manager. + How the label for the key encrypting key specified by *label* is encoded by the Encryption Key Manager. - \ :emphasis:`encoding`\ can either be set to \ :literal:`l`\ for label encoding, or \ :literal:`h`\ for hash encoding. 
+ *encoding* can either be set to ``l`` for label encoding, or ``h`` for hash encoding. Maps to KEYCD1 on z/OS. @@ -1082,7 +1082,7 @@ dds Key label must have a private key associated with it. - \ :emphasis:`label`\ can be a maximum of 64 characters. + *label* can be a maximum of 64 characters. Maps to KEYLAB2 on z/OS. @@ -1091,9 +1091,9 @@ dds encoding - How the label for the key encrypting key specified by \ :emphasis:`label`\ is encoded by the Encryption Key Manager. + How the label for the key encrypting key specified by *label* is encoded by the Encryption Key Manager. - \ :emphasis:`encoding`\ can either be set to \ :literal:`l`\ for label encoding, or \ :literal:`h`\ for hash encoding. + *encoding* can either be set to ``l`` for label encoding, or ``h`` for hash encoding. Maps to KEYCD2 on z/OS. @@ -1106,7 +1106,7 @@ dds key_length The length of the keys used in a new data set. - If using SMS, setting \ :emphasis:`key\_length`\ overrides the key length defined in the SMS data class of the data set. + If using SMS, setting *key_length* overrides the key length defined in the SMS data class of the data set. Valid values are (0-255 non-vsam), (1-255 vsam). @@ -1119,14 +1119,14 @@ dds The first byte of a logical record is position 0. - Provide \ :emphasis:`key\_offset`\ only for VSAM key-sequenced data sets. + Provide *key_offset* only for VSAM key-sequenced data sets. | **required**: False | **type**: int record_length - The logical record length. (e.g \ :literal:`80`\ ). + The logical record length. (e.g ``80``). For variable data sets, the length must include the 4-byte prefix area. @@ -1160,11 +1160,11 @@ dds type The type of the content to be returned. - \ :literal:`text`\ means return content in encoding specified by \ :emphasis:`response\_encoding`\ . + ``text`` means return content in encoding specified by *response_encoding*. - \ :emphasis:`src\_encoding`\ and \ :emphasis:`response\_encoding`\ are only used when \ :emphasis:`type=text`\ . 
+ *src_encoding* and *response_encoding* are only used when *type=text*. - \ :literal:`base64`\ means return content in binary mode. + ``base64`` means return content in binary mode. | **required**: True | **type**: str @@ -1199,7 +1199,7 @@ dds path The path to an existing UNIX file. - Or provide the path to an new created UNIX file when \ :emphasis:`status\_group=ocreat`\ . + Or provide the path to a newly created UNIX file when *status_group=ocreat*. The provided path must be absolute. @@ -1224,7 +1224,7 @@ dds mode - The file access attributes when the UNIX file is created specified in \ :emphasis:`path`\ . + The file access attributes when the UNIX file specified in *path* is created. Specify the mode as an octal number similar to chmod. @@ -1235,47 +1235,47 @@ dds status_group - The status for the UNIX file specified in \ :emphasis:`path`\ . + The status for the UNIX file specified in *path*. - If you do not specify a value for the \ :emphasis:`status\_group`\ parameter the module assumes that the pathname exists, searches for it, and fails the module if the pathname does not exist. + If you do not specify a value for the *status_group* parameter, the module assumes that the pathname exists, searches for it, and fails the module if the pathname does not exist. Maps to PATHOPTS status group file options on z/OS. You can specify up to 6 choices. - \ :emphasis:`oappend`\ sets the file offset to the end of the file before each write, so that data is written at the end of the file. + *oappend* sets the file offset to the end of the file before each write, so that data is written at the end of the file. - \ :emphasis:`ocreat`\ specifies that if the file does not exist, the system is to create it. If a directory specified in the pathname does not exist, one is not created, and the new file is not created. If the file already exists and \ :emphasis:`oexcl`\ was not specified, the system allows the program to use the existing file.
If the file already exists and \ :emphasis:`oexcl`\ was specified, the system fails the allocation and the job step. + *ocreat* specifies that if the file does not exist, the system is to create it. If a directory specified in the pathname does not exist, one is not created, and the new file is not created. If the file already exists and *oexcl* was not specified, the system allows the program to use the existing file. If the file already exists and *oexcl* was specified, the system fails the allocation and the job step. - \ :emphasis:`oexcl`\ specifies that if the file does not exist, the system is to create it. If the file already exists, the system fails the allocation and the job step. The system ignores \ :emphasis:`oexcl`\ if \ :emphasis:`ocreat`\ is not also specified. + *oexcl* specifies that if the file does not exist, the system is to create it. If the file already exists, the system fails the allocation and the job step. The system ignores *oexcl* if *ocreat* is not also specified. - \ :emphasis:`onoctty`\ specifies that if the PATH parameter identifies a terminal device, opening of the file does not make the terminal device the controlling terminal for the process. + *onoctty* specifies that if the PATH parameter identifies a terminal device, opening of the file does not make the terminal device the controlling terminal for the process. - \ :emphasis:`ononblock`\ specifies the following, depending on the type of file + *ononblock* specifies the following, depending on the type of file For a FIFO special file - 1. With \ :emphasis:`ononblock`\ specified and \ :emphasis:`ordonly`\ access, an open function for reading-only returns without delay. + 1. With *ononblock* specified and *ordonly* access, an open function for reading-only returns without delay. - 2. With \ :emphasis:`ononblock`\ not specified and \ :emphasis:`ordonly`\ access, an open function for reading-only blocks (waits) until a process opens the file for writing. + 2. 
With *ononblock* not specified and *ordonly* access, an open function for reading-only blocks (waits) until a process opens the file for writing. - 3. With \ :emphasis:`ononblock`\ specified and \ :emphasis:`owronly`\ access, an open function for writing-only returns an error if no process currently has the file open for reading. + 3. With *ononblock* specified and *owronly* access, an open function for writing-only returns an error if no process currently has the file open for reading. - 4. With \ :emphasis:`ononblock`\ not specified and \ :emphasis:`owronly`\ access, an open function for writing-only blocks (waits) until a process opens the file for reading. + 4. With *ononblock* not specified and *owronly* access, an open function for writing-only blocks (waits) until a process opens the file for reading. 5. For a character special file that supports nonblocking open - 6. If \ :emphasis:`ononblock`\ is specified, an open function returns without blocking (waiting) until the device is ready or available. Device response depends on the type of device. + 6. If *ononblock* is specified, an open function returns without blocking (waiting) until the device is ready or available. Device response depends on the type of device. - 7. If \ :emphasis:`ononblock`\ is not specified, an open function blocks (waits) until the device is ready or available. + 7. If *ononblock* is not specified, an open function blocks (waits) until the device is ready or available. - \ :emphasis:`ononblock`\ has no effect on other file types. + *ononblock* has no effect on other file types. - \ :emphasis:`osync`\ specifies that the system is to move data from buffer storage to permanent storage before returning control from a callable service that performs a write. + *osync* specifies that the system is to move data from buffer storage to permanent storage before returning control from a callable service that performs a write. 
- \ :emphasis:`otrunc`\ specifies that the system is to truncate the file length to zero if all the following are true: the file specified exists, the file is a regular file, and the file successfully opened with \ :emphasis:`ordwr`\ or \ :emphasis:`owronly`\ . + *otrunc* specifies that the system is to truncate the file length to zero if all the following are true: the file specified exists, the file is a regular file, and the file successfully opened with *ordwr* or *owronly*. - When \ :emphasis:`otrunc`\ is specified, the system does not change the mode and owner. \ :emphasis:`otrunc`\ has no effect on FIFO special files or character special files. + When *otrunc* is specified, the system does not change the mode and owner. *otrunc* has no effect on FIFO special files or character special files. | **required**: False | **type**: list @@ -1284,7 +1284,7 @@ dds access_group - The kind of access to request for the UNIX file specified in \ :emphasis:`path`\ . + The kind of access to request for the UNIX file specified in *path*. | **required**: False | **type**: str @@ -1292,7 +1292,7 @@ dds file_data_type - The type of data that is (or will be) stored in the file specified in \ :emphasis:`path`\ . + The type of data that is (or will be) stored in the file specified in *path*. Maps to FILEDATA on z/OS. @@ -1305,7 +1305,7 @@ dds block_size The block size, in bytes, for the UNIX file. - Default is dependent on \ :emphasis:`record\_format`\ + Default is dependent on *record_format* | **required**: False | **type**: int @@ -1314,7 +1314,7 @@ dds record_length The logical record length for the UNIX file. - \ :emphasis:`record\_length`\ is required in situations where the data will be processed as records and therefore, \ :emphasis:`record\_length`\ , \ :emphasis:`block\_size`\ and \ :emphasis:`record\_format`\ need to be supplied since a UNIX file would normally be treated as a stream of bytes. 
+ *record_length* is required in situations where the data will be processed as records and therefore, *record_length*, *block_size* and *record_format* need to be supplied since a UNIX file would normally be treated as a stream of bytes. Maps to LRECL on z/OS. @@ -1325,7 +1325,7 @@ dds record_format The record format for the UNIX file. - \ :emphasis:`record\_format`\ is required in situations where the data will be processed as records and therefore, \ :emphasis:`record\_length`\ , \ :emphasis:`block\_size`\ and \ :emphasis:`record\_format`\ need to be supplied since a UNIX file would normally be treated as a stream of bytes. + *record_format* is required in situations where the data will be processed as records and therefore, *record_length*, *block_size* and *record_format* need to be supplied since a UNIX file would normally be treated as a stream of bytes. | **required**: False | **type**: str @@ -1344,11 +1344,11 @@ dds type The type of the content to be returned. - \ :literal:`text`\ means return content in encoding specified by \ :emphasis:`response\_encoding`\ . + ``text`` means return content in encoding specified by *response_encoding*. - \ :emphasis:`src\_encoding`\ and \ :emphasis:`response\_encoding`\ are only used when \ :emphasis:`type=text`\ . + *src_encoding* and *response_encoding* are only used when *type=text*. - \ :literal:`base64`\ means return content in binary mode. + ``base64`` means return content in binary mode. | **required**: True | **type**: str @@ -1374,7 +1374,7 @@ dds dd_input - \ :emphasis:`dd\_input`\ is used to specify an in-stream data set. + *dd_input* is used to specify an in-stream data set. Input will be saved to a temporary data set with a record length of 80. @@ -1385,15 +1385,15 @@ dds content The input contents for the DD. - \ :emphasis:`dd\_input`\ supports single or multiple lines of input. + *dd_input* supports single or multiple lines of input. 
Multi-line input can be provided as a multi-line string or a list of strings with 1 line per list item. If a list of strings is provided, newlines will be added to each of the lines when used as input. - If a multi-line string is provided, use the proper block scalar style. YAML supports both \ `literal `__\ and \ `folded `__\ scalars. It is recommended to use the literal style indicator "|" with a block indentation indicator, for example; \ :emphasis:`content: | 2`\ is a literal block style indicator with a 2 space indentation, the entire block will be indented and newlines preserved. The block indentation range is 1 - 9. While generally unnecessary, YAML does support block \ `chomping `__\ indicators "+" and "-" as well. + If a multi-line string is provided, use the proper block scalar style. YAML supports both `literal `_ and `folded `_ scalars. It is recommended to use the literal style indicator "|" with a block indentation indicator, for example; *content: | 2* is a literal block style indicator with a 2 space indentation, the entire block will be indented and newlines preserved. The block indentation range is 1 - 9. While generally unnecessary, YAML does support block `chomping `_ indicators "+" and "-" as well. - When using the \ :emphasis:`content`\ option for instream-data, the module will ensure that all lines contain a blank in columns 1 and 2 and add blanks when not present while retaining a maximum length of 80 columns for any line. This is true for all \ :emphasis:`content`\ types; string, list of strings and when using a YAML block indicator. + When using the *content* option for instream-data, the module will ensure that all lines contain a blank in columns 1 and 2 and add blanks when not present while retaining a maximum length of 80 columns for any line. This is true for all *content* types; string, list of strings and when using a YAML block indicator. 
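The column rules described for *content* can be illustrated with a small sketch. This is a hypothetical helper, not the module's code; in particular, truncating over-long lines is an assumption here, since the documentation only states that a maximum of 80 columns is retained.

```python
def normalize_instream(lines):
    """Sketch of the documented dd_input content rules: every line gets
    blanks in columns 1 and 2, and no line exceeds 80 columns."""
    out = []
    for line in lines:
        if not line.startswith("  "):
            # Add blanks in columns 1 and 2 when not already present.
            line = "  " + line.lstrip()
        # Retain a maximum length of 80 columns (truncation is an assumption).
        out.append(line[:80])
    return out
```

For example, `normalize_instream(["DELETE DATASET.NAME"])` yields `["  DELETE DATASET.NAME"]`, matching the requirement that instream data never starts in column 1.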
| **required**: True | **type**: raw @@ -1411,11 +1411,11 @@ dds type The type of the content to be returned. - \ :literal:`text`\ means return content in encoding specified by \ :emphasis:`response\_encoding`\ . + ``text`` means return content in encoding specified by *response_encoding*. - \ :emphasis:`src\_encoding`\ and \ :emphasis:`response\_encoding`\ are only used when \ :emphasis:`type=text`\ . + *src_encoding* and *response_encoding* are only used when *type=text*. - \ :literal:`base64`\ means return content in binary mode. + ``base64`` means return content in binary mode. | **required**: True | **type**: str @@ -1425,7 +1425,7 @@ dds src_encoding The encoding of the data set on the z/OS system. - for \ :emphasis:`dd\_input`\ , \ :emphasis:`src\_encoding`\ should generally not need to be changed. + for *dd_input*, *src_encoding* should generally not need to be changed. | **required**: False | **type**: str @@ -1448,7 +1448,7 @@ dds tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str @@ -1794,11 +1794,11 @@ Notes ----- .. note:: - When executing programs using \ `zos\_mvs\_raw <./zos_mvs_raw.html>`__\ , you may encounter errors that originate in the programs implementation. Two such known issues are noted below of which one has been addressed with an APAR. + When executing programs using `zos_mvs_raw <./zos_mvs_raw.html>`_, you may encounter errors that originate in the programs implementation. Two such known issues are noted below of which one has been addressed with an APAR. - 1. 
\ `zos\_mvs\_raw <./zos_mvs_raw.html>`__\ module execution fails when invoking Database Image Copy 2 Utility or Database Recovery Utility in conjunction with FlashCopy or Fast Replication. + 1. `zos_mvs_raw <./zos_mvs_raw.html>`_ module execution fails when invoking Database Image Copy 2 Utility or Database Recovery Utility in conjunction with FlashCopy or Fast Replication. - 2. \ `zos\_mvs\_raw <./zos_mvs_raw.html>`__\ module execution fails when invoking DFSRRC00 with parm "UPB,PRECOMP", "UPB, POSTCOMP" or "UPB,PRECOMP,POSTCOMP". This issue is addressed by APAR PH28089. + 2. `zos_mvs_raw <./zos_mvs_raw.html>`_ module execution fails when invoking DFSRRC00 with parm "UPB,PRECOMP", "UPB, POSTCOMP" or "UPB,PRECOMP,POSTCOMP". This issue is addressed by APAR PH28089. 3. When executing a program, refer to the program's documentation as each program's requirements can vary from DDs, instream-data indentation and continuation characters. @@ -1876,7 +1876,7 @@ backups | **type**: str backup_name - The name of the data set containing the backup of content from data set in original\_name. + The name of the data set containing the backup of content from data set in original_name. | **type**: str diff --git a/docs/source/modules/zos_operator.rst index e29c59346..2bd53fc83 100644 --- a/docs/source/modules/zos_operator.rst +++ b/docs/source/modules/zos_operator.rst @@ -56,7 +56,7 @@ wait_time_s This option is helpful on a busy system requiring more time to execute commands. - Setting \ :emphasis:`wait`\ can instruct if execution should wait the full \ :emphasis:`wait\_time\_s`\ . + Setting *wait* can instruct if execution should wait the full *wait_time_s*.
| **required**: False | **type**: int diff --git a/docs/source/modules/zos_operator_action_query.rst index b7956c8b8..ba9398b50 100644 --- a/docs/source/modules/zos_operator_action_query.rst +++ b/docs/source/modules/zos_operator_action_query.rst @@ -31,7 +31,7 @@ system If the system name is not specified, all outstanding messages for that system and for the local systems attached to it are returned. - A trailing asterisk, (\*) wildcard is supported. + A trailing asterisk, (*) wildcard is supported. | **required**: False | **type**: str @@ -42,7 +42,7 @@ message_id If the message identifier is not specified, all outstanding messages for all message identifiers are returned. - A trailing asterisk, (\*) wildcard is supported. + A trailing asterisk, (*) wildcard is supported. | **required**: False | **type**: str @@ -53,7 +53,7 @@ job_name If the message job name is not specified, all outstanding messages for all job names are returned. - A trailing asterisk, (\*) wildcard is supported. + A trailing asterisk, (*) wildcard is supported. | **required**: False | **type**: str @@ -69,24 +69,24 @@ message_filter filter - Specifies the substring or regex to match to the outstanding messages, see \ :emphasis:`use\_regex`\ . + Specifies the substring or regex to match to the outstanding messages, see *use_regex*. All special characters in a filter string that are not a regex are escaped. - Valid Python regular expressions are supported. See \ `the official documentation `__\ for more information. + Valid Python regular expressions are supported. See `the official documentation `_ for more information. - Regular expressions are compiled with the flag \ :strong:`re.DOTALL`\ which makes the \ :strong:`'.'`\ special character match any character including a newline." + Regular expressions are compiled with the flag **re.DOTALL** which makes the **'.'** special character match any character including a newline.
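The **re.DOTALL** behavior described above can be seen with a short Python snippet; the message text is an illustrative example of a multi-line operator message, not actual module output.

```python
import re

# Multi-line message: the text to match spans a newline.
message = "IEE094D SPECIFY OPERAND(S)\nFOR DUMP COMMAND"

default = re.compile(r"SPECIFY.*DUMP")            # '.' stops at the newline
dotall = re.compile(r"SPECIFY.*DUMP", re.DOTALL)  # '.' crosses the newline

assert default.search(message) is None
assert dotall.search(message) is not None
```

Because the module compiles filters with re.DOTALL, a filter like `SPECIFY.*DUMP` matches even when the operand list and the command appear on different lines of the outstanding message.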
| **required**: True | **type**: str use_regex - Indicates that the value for \ :emphasis:`filter`\ is a regex or a string to match. + Indicates that the value for *filter* is a regex or a string to match. - If False, the module assumes that \ :emphasis:`filter`\ is not a regex and matches the \ :emphasis:`filter`\ substring on the outstanding messages. + If False, the module assumes that *filter* is not a regex and matches the *filter* substring on the outstanding messages. - If True, the module creates a regex from the \ :emphasis:`filter`\ string and matches it to the outstanding messages. + If True, the module creates a regex from the *filter* string and matches it to the outstanding messages. | **required**: False | **type**: bool @@ -222,7 +222,7 @@ actions | **sample**: STC01537 message_text - Content of the outstanding message requiring operator action awaiting a reply. If \ :emphasis:`message\_filter`\ is set, \ :emphasis:`message\_text`\ will be filtered accordingly. + Content of the outstanding message requiring operator action awaiting a reply. If *message_filter* is set, *message_text* will be filtered accordingly. | **returned**: success | **type**: str diff --git a/docs/source/modules/zos_ping.rst b/docs/source/modules/zos_ping.rst index acb901790..a4405b473 100644 --- a/docs/source/modules/zos_ping.rst +++ b/docs/source/modules/zos_ping.rst @@ -16,9 +16,9 @@ zos_ping -- Ping z/OS and check dependencies. Synopsis -------- -- \ `zos\_ping <./zos_ping.html>`__\ verifies the presence of z/OS Web Client Enablement Toolkit, iconv, and Python. -- \ `zos\_ping <./zos_ping.html>`__\ returns \ :literal:`pong`\ when the target host is not missing any required dependencies. -- If the target host is missing optional dependencies, the \ `zos\_ping <./zos_ping.html>`__\ will return one or more warning messages. +- `zos_ping <./zos_ping.html>`_ verifies the presence of z/OS Web Client Enablement Toolkit, iconv, and Python. 
+- `zos_ping <./zos_ping.html>`_ returns ``pong`` when the target host is not missing any required dependencies. +- If the target host is missing optional dependencies, the `zos_ping <./zos_ping.html>`_ module will return one or more warning messages. - If a required dependency is missing from the target host, an explanatory message will be returned with the module failure. @@ -44,7 +44,7 @@ Notes ----- .. note:: - This module is written in REXX and relies on the SCP protocol to transfer the source to the managed z/OS node and encode it in the managed nodes default encoding, eg IBM-1047. Starting with OpenSSH 9.0, it switches from SCP to use SFTP by default, meaning transfers are no longer treated as text and are transferred as binary preserving the source files encoding resulting in a module failure. If you are using OpenSSH 9.0 (ssh -V) or later, you can instruct SSH to use SCP by adding the entry \ :literal:`scp\_extra\_args="-O"`\ into the ini file named \ :literal:`ansible.cfg`\ . + This module is written in REXX and relies on the SCP protocol to transfer the source to the managed z/OS node and encode it in the managed node's default encoding, e.g. IBM-1047. Starting with OpenSSH 9.0, it switches from SCP to use SFTP by default, meaning transfers are no longer treated as text and are transferred as binary, preserving the source file's encoding and resulting in a module failure. If you are using OpenSSH 9.0 (ssh -V) or later, you can instruct SSH to use SCP by adding the entry ``scp_extra_args="-O"`` into the ini file named ``ansible.cfg``. diff --git a/docs/source/modules/zos_script.rst b/docs/source/modules/zos_script.rst index d2977c486..821f11a9c 100644 --- a/docs/source/modules/zos_script.rst +++ b/docs/source/modules/zos_script.rst @@ -16,7 +16,7 @@ zos_script -- Run scripts in z/OS Synopsis -------- -- The \ `zos\_script <./zos_script.html>`__\ module runs a local or remote script in the remote machine.
+- The `zos_script <./zos_script.html>`_ module runs a local or remote script in the remote machine. @@ -56,7 +56,7 @@ creates encoding Specifies which encodings the script should be converted from and to. - If \ :literal:`encoding`\ is not provided, the module determines which local and remote charsets to convert the data from and to. + If ``encoding`` is not provided, the module determines which local and remote charsets to convert the data from and to. | **required**: False | **type**: dict @@ -87,9 +87,9 @@ executable remote_src - If set to \ :literal:`false`\ , the module will search the script in the controller. + If set to ``false``, the module will search the script in the controller. - If set to \ :literal:`true`\ , the module will search the script in the remote machine. + If set to ``true``, the module will search the script in the remote machine. | **required**: False | **type**: bool @@ -103,13 +103,13 @@ removes use_template - Whether the module should treat \ :literal:`src`\ as a Jinja2 template and render it before continuing with the rest of the module. + Whether the module should treat ``src`` as a Jinja2 template and render it before continuing with the rest of the module. - Only valid when \ :literal:`src`\ is a local file or directory. + Only valid when ``src`` is a local file or directory. - All variables defined in inventory files, vars files and the playbook will be passed to the template engine, as well as \ `Ansible special variables `__\ , such as \ :literal:`playbook\_dir`\ , \ :literal:`ansible\_version`\ , etc. + All variables defined in inventory files, vars files and the playbook will be passed to the template engine, as well as `Ansible special variables `_, such as ``playbook_dir``, ``ansible_version``, etc. - If variables defined in different scopes share the same name, Ansible will apply variable precedence to them. 
You can see the complete precedence order \ `in Ansible's documentation `__\ + If variables defined in different scopes share the same name, Ansible will apply variable precedence to them. You can see the complete precedence order `in Ansible's documentation `_ | **required**: False | **type**: bool @@ -119,9 +119,9 @@ use_template template_parameters Options to set the way Jinja2 will process templates. - Jinja2 already sets defaults for the markers it uses, you can find more information at its \ `official documentation `__\ . + Jinja2 already sets defaults for the markers it uses, you can find more information at its `official documentation `_. - These options are ignored unless \ :literal:`use\_template`\ is true. + These options are ignored unless ``use_template`` is true. | **required**: False | **type**: dict @@ -200,7 +200,7 @@ template_parameters trim_blocks Whether Jinja2 should remove the first newline after a block is removed. - Setting this option to \ :literal:`False`\ will result in newlines being added to the rendered template. This could create invalid code when working with JCL templates or empty records in destination data sets. + Setting this option to ``False`` will result in newlines being added to the rendered template. This could create invalid code when working with JCL templates or empty records in destination data sets. | **required**: False | **type**: bool @@ -220,8 +220,11 @@ template_parameters | **required**: False | **type**: str - | **default**: \\n - | **choices**: \\n, \\r, \\r\\n + | **default**: \\n + + | **choices**: \\n, \\r, \\r\\n + auto_reload
Refer to \ `Ansible's documentation `__\ for more information. + The location in the z/OS system where local scripts will be copied to can be configured through Ansible's ``remote_tmp`` option. Refer to `Ansible's documentation `_ for more information. All local scripts copied to a remote z/OS system will be removed from the managed node before the module finishes executing. @@ -298,13 +301,13 @@ Notes The module will only add execution permissions for the file owner. - If executing REXX scripts, make sure to include a newline character on each line of the file. Otherwise, the interpreter may fail and return error \ :literal:`BPXW0003I`\ . + If executing REXX scripts, make sure to include a newline character on each line of the file. Otherwise, the interpreter may fail and return error ``BPXW0003I``. - For supported character sets used to encode data, refer to the \ `documentation `__\ . + For supported character sets used to encode data, refer to the `documentation `_. - This module uses \ `zos\_copy <./zos_copy.html>`__\ to copy local scripts to the remote machine which uses SFTP (Secure File Transfer Protocol) for the underlying transfer protocol; SCP (secure copy protocol) and Co:Z SFTP are not supported. In the case of Co:z SFTP, you can exempt the Ansible user id on z/OS from using Co:Z thus falling back to using standard SFTP. If the module detects SCP, it will temporarily use SFTP for transfers, if not available, the module will fail. + This module uses `zos_copy <./zos_copy.html>`_ to copy local scripts to the remote machine which uses SFTP (Secure File Transfer Protocol) for the underlying transfer protocol; SCP (secure copy protocol) and Co:Z SFTP are not supported. In the case of Co:z SFTP, you can exempt the Ansible user id on z/OS from using Co:Z thus falling back to using standard SFTP. If the module detects SCP, it will temporarily use SFTP for transfers, if not available, the module will fail. 
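A minimal sketch of the templating workflow these notes describe; the script path and the *trim_blocks* setting are illustrative assumptions, not values taken from the documentation:

.. code-block:: yaml+jinja

   - name: Render a local Jinja2-templated REXX script and run it on the managed node
     zos_script:
       cmd: "{{ playbook_dir }}/scripts/report.rexx"
       use_template: true
       template_parameters:
         trim_blocks: true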
- This module executes scripts inside z/OS UNIX System Services. For running REXX scripts contained in data sets or CLISTs, consider issuing a TSO command with \ `zos\_tso\_command <./zos_tso_command.html>`__\ . + This module executes scripts inside z/OS UNIX System Services. For running REXX scripts contained in data sets or CLISTs, consider issuing a TSO command with `zos_tso_command <./zos_tso_command.html>`_. The community script module does not rely on Python to execute scripts on a managed node, while this module does. Python must be present on the remote machine. diff --git a/docs/source/modules/zos_tso_command.rst b/docs/source/modules/zos_tso_command.rst index b35c13a1b..4af6b1b52 100644 --- a/docs/source/modules/zos_tso_command.rst +++ b/docs/source/modules/zos_tso_command.rst @@ -40,7 +40,7 @@ commands max_rc Specifies the maximum return code allowed for a TSO command. - If more than one TSO command is submitted, the \ :emphasis:`max\_rc`\ applies to all TSO commands. + If more than one TSO command is submitted, the *max_rc* applies to all TSO commands. | **required**: False | **type**: int @@ -119,7 +119,7 @@ output max_rc Specifies the maximum return code allowed for a TSO command. - If more than one TSO command is submitted, the \ :emphasis:`max\_rc`\ applies to all TSO commands. + If more than one TSO command is submitted, the *max_rc* applies to all TSO commands. | **returned**: always | **type**: int diff --git a/docs/source/modules/zos_unarchive.rst b/docs/source/modules/zos_unarchive.rst index 42a4db897..b3a4ff7cd 100644 --- a/docs/source/modules/zos_unarchive.rst +++ b/docs/source/modules/zos_unarchive.rst @@ -16,8 +16,8 @@ zos_unarchive -- Unarchive files and data sets in z/OS. Synopsis -------- -- The \ :literal:`zos\_unarchive`\ module unpacks an archive after optionally transferring it to the remote system. -- For supported archive formats, see option \ :literal:`format`\ . 
+- The ``zos_unarchive`` module unpacks an archive after optionally transferring it to the remote system. +- For supported archive formats, see option ``format``. - Supported sources are USS (UNIX System Services) or z/OS data sets. - Mixing MVS data sets with USS files for unarchiving is not supported. - The archive is sent to the remote as binary, so no encoding is performed. @@ -33,13 +33,13 @@ Parameters src The remote absolute path or data set of the archive to be uncompressed. - \ :emphasis:`src`\ can be a USS file or MVS data set name. + *src* can be a USS file or MVS data set name. USS file paths should be absolute paths. - MVS data sets supported types are \ :literal:`SEQ`\ , \ :literal:`PDS`\ , \ :literal:`PDSE`\ . + MVS data sets supported types are ``SEQ``, ``PDS``, ``PDSE``. - GDS relative names are supported ``e.g. USER.GDG(-1)``. + GDS relative names are supported ``e.g. USER.GDG(-1)``. | **required**: True | **type**: str @@ -74,14 +74,14 @@ format If the data set provided exists, the data set must have the following attributes: LRECL=255, BLKSIZE=3120, and RECFM=VB - When providing the \ :emphasis:`xmit\_log\_data\_set`\ name, ensure there is adequate space. + When providing the *xmit_log_data_set* name, ensure there is adequate space. | **required**: False | **type**: str use_adrdssu - If set to true, the \ :literal:`zos\_archive`\ module will use Data Facility Storage Management Subsystem data set services (DFSMSdss) program ADRDSSU to uncompress data sets from a portable format after using \ :literal:`xmit`\ or \ :literal:`terse`\ . + If set to true, the ``zos_archive`` module will use Data Facility Storage Management Subsystem data set services (DFSMSdss) program ADRDSSU to uncompress data sets from a portable format after using ``xmit`` or ``terse``. | **required**: False | **type**: bool @@ -89,7 +89,7 @@ format dest_volumes - When \ :emphasis:`use\_adrdssu=True`\ , specify the volume the data sets will be written to.
+ When *use_adrdssu=True*, specify the volume the data sets will be written to. If no volume is specified, storage management rules will be used to determine the volume where the file will be unarchived. @@ -105,7 +105,7 @@ format dest The remote absolute path or data set where the content should be unarchived to. - \ :emphasis:`dest`\ can be a USS file, directory or MVS data set name. + *dest* can be a USS file, directory or MVS data set name. If dest has missing parent directories, they will not be created. @@ -118,7 +118,7 @@ group When left unspecified, it uses the current group of the current user unless you are root, in which case it can preserve the previous ownership. - This option is only applicable if \ :literal:`dest`\ is USS, otherwise ignored. + This option is only applicable if ``dest`` is USS, otherwise ignored. | **required**: False | **type**: str @@ -127,13 +127,13 @@ group mode The permission of the uncompressed files. - If \ :literal:`dest`\ is USS, this will act as Unix file mode, otherwise ignored. + If ``dest`` is USS, this will act as Unix file mode, otherwise ignored. - It should be noted that modes are octal numbers. The user must either add a leading zero so that Ansible's YAML parser knows it is an octal number (like \ :literal:`0644`\ or \ :literal:`01777`\ )or quote it (like \ :literal:`'644'`\ or \ :literal:`'1777'`\ ) so Ansible receives a string and can do its own conversion from string into number. Giving Ansible a number without following one of these rules will end up with a decimal number which will have unexpected results. + It should be noted that modes are octal numbers. The user must either add a leading zero so that Ansible's YAML parser knows it is an octal number (like ``0644`` or ``01777``) or quote it (like ``'644'`` or ``'1777'``) so Ansible receives a string and can do its own conversion from string into number.
Giving Ansible a number without following one of these rules will end up with a decimal number which will have unexpected results. - The mode may also be specified as a symbolic mode (for example, \`\`u+rwx\`\` or \`\`u=rw,g=r,o=r\`\`) or a special string \`preserve\`. + The mode may also be specified as a symbolic mode (for example, ``u+rwx`` or ``u=rw,g=r,o=r``) or a special string ``preserve``. - \ :emphasis:`mode=preserve`\ means that the file will be given the same permissions as the source file. + *mode=preserve* means that the file will be given the same permissions as the source file. | **required**: False | **type**: str @@ -151,7 +151,7 @@ owner include A list of directories, files or data set names to extract from the archive. - GDS relative names are supported ``e.g. USER.GDG(-1)``. + GDS relative names are supported ``e.g. USER.GDG(-1)``. When ``include`` is set, only those files will be extracted, leaving the remaining files in the archive. @@ -165,7 +165,7 @@ include exclude List the directory and file or data set names that you would like to exclude from the unarchive action. - GDS relative names are supported ``e.g. USER.GDG(-1)``. + GDS relative names are supported ``e.g. USER.GDG(-1)``. Mutually exclusive with include. @@ -183,7 +183,7 @@ list dest_data_set - Data set attributes to customize a \ :literal:`dest`\ data set that the archive will be copied into. + Data set attributes to customize a ``dest`` data set that the archive will be copied into. | **required**: False | **type**: dict @@ -206,18 +206,18 @@ dest_data_set space_primary - If the destination \ :emphasis:`dest`\ data set does not exist , this sets the primary space allocated for the data set. + If the destination *dest* data set does not exist, this sets the primary space allocated for the data set. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*.
| **required**: False | **type**: int space_secondary - If the destination \ :emphasis:`dest`\ data set does not exist , this sets the secondary space allocated for the data set. + If the destination *dest* data set does not exist, this sets the secondary space allocated for the data set. - The unit of space used is set using \ :emphasis:`space\_type`\ . + The unit of space used is set using *space_type*. | **required**: False | **type**: int @@ -226,7 +226,7 @@ dest_data_set space_type If the destination data set does not exist, this sets the unit of measurement to use when defining primary and secondary space. - Valid units of size are \ :literal:`k`\ , \ :literal:`m`\ , \ :literal:`g`\ , \ :literal:`cyl`\ , and \ :literal:`trk`\ . + Valid units of size are ``k``, ``m``, ``g``, ``cyl``, and ``trk``. | **required**: False | **type**: str @@ -234,7 +234,7 @@ dest_data_set record_format - If the destination data set does not exist, this sets the format of the data set. (e.g \ :literal:`fb`\ ) + If the destination data set does not exist, this sets the format of the data set. (e.g. ``fb``) Choices are case-sensitive. @@ -271,9 +271,9 @@ dest_data_set key_offset The key offset to use when creating a KSDS data set. - \ :emphasis:`key\_offset`\ is required when \ :emphasis:`type=ksds`\ . + *key_offset* is required when *type=ksds*. - \ :emphasis:`key\_offset`\ should only be provided when \ :emphasis:`type=ksds`\ + *key_offset* should only be provided when *type=ksds*. | **required**: False | **type**: int @@ -282,9 +282,9 @@ dest_data_set key_length The key length to use when creating a KSDS data set. - \ :emphasis:`key\_length`\ is required when \ :emphasis:`type=ksds`\ . + *key_length* is required when *type=ksds*.
- \ :emphasis:`key\_length`\ should only be provided when \ :emphasis:`type=ksds`\ + *key_length* should only be provided when *type=ksds* | **required**: False | **type**: int @@ -333,7 +333,7 @@ dest_data_set tmp_hlq Override the default high level qualifier (HLQ) for temporary data sets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the environment variable value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the environment variable value ``TMPHLQ`` is used. | **required**: False | **type**: str @@ -348,9 +348,9 @@ force remote_src - If set to true, \ :literal:`zos\_unarchive`\ retrieves the archive from the remote system. + If set to true, ``zos_unarchive`` retrieves the archive from the remote system. - If set to false, \ :literal:`zos\_unarchive`\ searches the local machine (Ansible controller) for the archive. + If set to false, ``zos_unarchive`` searches the local machine (Ansible controller) for the archive. | **required**: False | **type**: bool @@ -417,7 +417,7 @@ Notes .. note:: VSAMs are not supported. - This module uses \ `zos\_copy <./zos_copy.html>`__\ to copy local scripts to the remote machine which uses SFTP (Secure File Transfer Protocol) for the underlying transfer protocol; SCP (secure copy protocol) and Co:Z SFTP are not supported. In the case of Co:z SFTP, you can exempt the Ansible user id on z/OS from using Co:Z thus falling back to using standard SFTP. If the module detects SCP, it will temporarily use SFTP for transfers, if not available, the module will fail. + This module uses `zos_copy <./zos_copy.html>`_ to copy local scripts to the remote machine which uses SFTP (Secure File Transfer Protocol) for the underlying transfer protocol; SCP (secure copy protocol) and Co:Z SFTP are not supported. 
In the case of Co:z SFTP, you can exempt the Ansible user id on z/OS from using Co:Z thus falling back to using standard SFTP. If the module detects SCP, it will temporarily use SFTP for transfers, if not available, the module will fail. diff --git a/docs/source/modules/zos_volume_init.rst b/docs/source/modules/zos_volume_init.rst index a2b6f25ab..5647ad998 100644 --- a/docs/source/modules/zos_volume_init.rst +++ b/docs/source/modules/zos_volume_init.rst @@ -17,14 +17,14 @@ zos_volume_init -- Initialize volumes or minidisks. Synopsis -------- - Initialize a volume or minidisk on z/OS. -- \ :emphasis:`zos\_volume\_init`\ will create the volume label and entry into the volume table of contents (VTOC). +- *zos_volume_init* will create the volume label and entry into the volume table of contents (VTOC). - Volumes are used for storing data and executable programs. - A minidisk is a portion of a disk that is linked to your virtual machine. - A VTOC lists the data sets that reside on a volume, their location, size, and other attributes. -- \ :emphasis:`zos\_volume\_init`\ uses the ICKDSF command INIT to initialize a volume. In some cases the command could be protected by facility class \`STGADMIN.ICK.INIT\`. Protection occurs when the class is active, and the class profile is defined. Ensure the user executing the Ansible task is permitted to execute ICKDSF command INIT, otherwise, any user can use the command. -- ICKDSF is an Authorized Program Facility (APF) program on z/OS, \ :emphasis:`zos\_volume\_init`\ will run in authorized mode but if the program ICKDSF is not APF authorized, the task will end. +- *zos_volume_init* uses the ICKDSF command INIT to initialize a volume. In some cases the command could be protected by facility class ``STGADMIN.ICK.INIT``. Protection occurs when the class is active, and the class profile is defined. Ensure the user executing the Ansible task is permitted to execute ICKDSF command INIT, otherwise, any user can use the command.
+- ICKDSF is an Authorized Program Facility (APF) program on z/OS, *zos_volume_init* will run in authorized mode but if the program ICKDSF is not APF authorized, the task will end. - Note that defaults set on target z/OS systems may override ICKDSF parameters. -- If is recommended that data on the volume is backed up as the \ :emphasis:`zos\_volume\_init`\ module will not perform any backups. You can use the \ `zos\_backup\_restore <./zos_backup_restore.html>`__\ module to backup a volume. +- It is recommended that data on the volume is backed up as the *zos_volume_init* module will not perform any backups. You can use the `zos_backup_restore <./zos_backup_restore.html>`_ module to back up a volume. @@ -35,9 +35,9 @@ Parameters address - \ :emphasis:`address`\ is a 3 or 4 digit hexadecimal number that specifies the address of the volume or minidisk. + *address* is a 3 or 4 digit hexadecimal number that specifies the address of the volume or minidisk. - \ :emphasis:`address`\ can be the number assigned to the device (device number) when it is installed or the virtual address. + *address* can be the number assigned to the device (device number) when it is installed or the virtual address. | **required**: True | **type**: str @@ -46,15 +46,15 @@ address verify_volid Verify that the volume serial matches what is on the existing volume or minidisk. - \ :emphasis:`verify\_volid`\ must be 1 to 6 alphanumeric characters or \ :literal:`\*NONE\*`\ . + *verify_volid* must be 1 to 6 alphanumeric characters or ``*NONE*``. - To verify that a volume serial number does not exist, use \ :emphasis:`verify\_volid=\*NONE\*`\ . + To verify that a volume serial number does not exist, use *verify_volid=\*NONE\**. - If \ :emphasis:`verify\_volid`\ is specified and the volume serial number does not match that found on the volume or minidisk, initialization does not complete.
+ If *verify_volid* is specified and the volume serial number does not match that found on the volume or minidisk, initialization does not complete. - If \ :emphasis:`verify\_volid=\*NONE\*`\ is specified and a volume serial is found on the volume or minidisk, initialization does not complete. + If *verify_volid=\*NONE\** is specified and a volume serial is found on the volume or minidisk, initialization does not complete. - Note, this option is \ :strong:`not`\ a boolean, leave it blank to skip the verification. + Note, this option is **not** a boolean, leave it blank to skip the verification. | **required**: False | **type**: str @@ -73,11 +73,11 @@ volid Expects 1-6 alphanumeric, national ($,#,@) or special characters. - A \ :emphasis:`volid`\ with less than 6 characters will be padded with spaces. + A *volid* with less than 6 characters will be padded with spaces. - A \ :emphasis:`volid`\ can also be referred to as volser or volume serial number. + A *volid* can also be referred to as volser or volume serial number. - When \ :emphasis:`volid`\ is not specified for a previously initialized volume or minidisk, the volume serial number will remain unchanged. + When *volid* is not specified for a previously initialized volume or minidisk, the volume serial number will remain unchanged. | **required**: False | **type**: str @@ -99,7 +99,7 @@ index The VTOC index enhances the performance of VTOC access. - When set to \ :emphasis:`false`\ , no index will be created. + When set to *false*, no index will be created. | **required**: False | **type**: bool @@ -109,7 +109,7 @@ index sms_managed Specifies that the volume be managed by Storage Management System (SMS). - If \ :emphasis:`sms\_managed`\ is \ :emphasis:`true`\ then \ :emphasis:`index`\ must also be \ :emphasis:`true`\ . + If *sms_managed* is *true* then *index* must also be *true*.
| **required**: False | **type**: bool @@ -127,7 +127,7 @@ verify_volume_empty tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup datasets. - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value \ :literal:`TMPHLQ`\ is used. + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. | **required**: False | **type**: str From 8d2138df41a063bcc21cf06ef00f7052a740a693 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Tue, 13 Aug 2024 18:21:18 -0600 Subject: [PATCH 08/13] Fixed pep8 issue --- plugins/action/zos_job_submit.py | 1 - 1 file changed, 1 deletion(-) diff --git a/plugins/action/zos_job_submit.py b/plugins/action/zos_job_submit.py index 90b0670ac..20c8e28db 100644 --- a/plugins/action/zos_job_submit.py +++ b/plugins/action/zos_job_submit.py @@ -27,7 +27,6 @@ display = Display() - class ActionModule(ActionBase): def run(self, tmp=None, task_vars=None): """ handler for file transfer operations """ From ec705f58ae378555e8c349520bca157dfdc43ea4 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Wed, 14 Aug 2024 16:19:07 -0600 Subject: [PATCH 09/13] Updated backslashes for samples --- docs/source/modules/zos_apf.rst-e | 318 ++++++++++++++++++ docs/source/modules/zos_backup_restore.rst | 30 -- docs/source/modules/zos_blockinfile.rst | 2 +- .../source/resources/releases_maintenance.rst | 2 +- plugins/doc_fragments/template.py-e | 120 +++++++ plugins/modules/zos_blockinfile.py | 2 +- 6 files changed, 441 insertions(+), 33 deletions(-) create mode 100644 docs/source/modules/zos_apf.rst-e create mode 100644 plugins/doc_fragments/template.py-e diff --git a/docs/source/modules/zos_apf.rst-e b/docs/source/modules/zos_apf.rst-e new file mode 100644 index 000000000..b758d3129 --- /dev/null +++ b/docs/source/modules/zos_apf.rst-e @@ -0,0 +1,318 @@ + +:github_url: 
https://github.com/ansible-collections/ibm_zos_core/blob/dev/plugins/modules/zos_apf.py + +.. _zos_apf_module: + + +zos_apf -- Add or remove libraries to Authorized Program Facility (APF) +======================================================================= + + + +.. contents:: + :local: + :depth: 1 + + +Synopsis +-------- +- Adds or removes libraries to Authorized Program Facility (APF). +- Manages APF statement persistent entries to a data set or data set member. +- Changes APF list format to "DYNAMIC" or "STATIC". +- Gets the current APF list entries. + + + + + +Parameters +---------- + + +library + The library name to be added or removed from the APF list. + + | **required**: False + | **type**: str + + +state + Ensure that the library is added ``state=present`` or removed ``state=absent``. + + The APF list format has to be "DYNAMIC". + + | **required**: False + | **type**: str + | **default**: present + | **choices**: absent, present + + +force_dynamic + Will force the APF list format to "DYNAMIC" before adding or removing libraries. + + If the format is "STATIC", the format will be changed to "DYNAMIC". + + | **required**: False + | **type**: bool + | **default**: False + + +volume + The identifier for the volume containing the library specified in the ``library`` parameter. The values must be one of the following. + + 1. The volume serial number. + + 2. Six asterisks ``******``, indicating that the system must use the volume serial number of the current system residence (SYSRES) volume. + + 3. *MCAT*, indicating that the system must use the volume serial number of the volume containing the master catalog. + + If ``volume`` is not specified, ``library`` has to be cataloged. + + | **required**: False + | **type**: str + + +sms + Indicates that the library specified in the ``library`` parameter is managed by the storage management subsystem (SMS), and therefore no volume is associated with the library. + + If ``sms=True``, ``volume`` value will be ignored.
+ + | **required**: False + | **type**: bool + | **default**: False + + +operation + Change APF list format to "DYNAMIC" ``operation=set_dynamic`` or "STATIC" ``operation=set_static`` + + Display APF list current format ``operation=check_format`` + + Display APF list entries when ``operation=list``; ``library``, ``volume`` and ``sms`` will be used as filters. + + If ``operation`` is not set, add or remove operation will be ignored. + + | **required**: False + | **type**: str + | **choices**: set_dynamic, set_static, check_format, list + + +tmp_hlq + Override the default high level qualifier (HLQ) for temporary and backup datasets. + + The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. + + | **required**: False + | **type**: str + + +persistent + Add/remove persistent entries to or from *data_set_name* + + ``library`` will not be persisted or removed if ``persistent=None`` + + | **required**: False + | **type**: dict + + + data_set_name + The data set name used for persisting or removing a ``library`` from the APF list. + + | **required**: True + | **type**: str + + + marker + The marker line template. + + ``{mark}`` will be replaced with "BEGIN" and "END". + + Using a custom marker without the ``{mark}`` variable may result in the block being repeatedly inserted on subsequent playbook runs. + + ``{mark}`` length may not exceed 72 characters. + + The timestamp () used in the default marker follows the '+%Y%m%d-%H%M%S' date format. + + | **required**: False + | **type**: str + | **default**: /* {mark} ANSIBLE MANAGED BLOCK */ + + + backup + Creates a backup file or backup data set for *data_set_name*, including the timestamp information to ensure that you retrieve the original APF list defined in *data_set_name*. + + *backup_name* can be used to specify a backup file name if *backup=true*.
+ + The backup file name will be returned on either success or failure of module execution such that data can be retrieved. + + | **required**: False + | **type**: bool + | **default**: False + + + backup_name + Specify the USS file name or data set name for the destination backup. + + If the source *data_set_name* is a USS file or path, the backup_name must be a file or path name, and the USS file or path must be an absolute path name. + + If the source is an MVS data set, the backup_name must be an MVS data set name. + + If the backup_name is not provided, the default backup_name will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp. For example, ``/path/file_name.2020-04-23-08-32-29-bak.tar``. + + If the source is an MVS data set, it will be a data set with a random name generated by calling the ZOAU API. The MVS backup data set recovery can be done by renaming it. + + | **required**: False + | **type**: str + + + +batch + A list of dictionaries for adding or removing libraries. + + This is mutually exclusive with ``library``, ``volume``, ``sms`` + + Can be used with ``persistent`` + + | **required**: False + | **type**: list + | **elements**: dict + + + library + The library name to be added or removed from the APF list. + + | **required**: True + | **type**: str + + + volume + The identifier for the volume containing the library specified on the ``library`` parameter. The values must be one of the following. + + 1. The volume serial number + + 2. Six asterisks ``******``, indicating that the system must use the volume serial number of the current system residence (SYSRES) volume. + + 3. *MCAT*, indicating that the system must use the volume serial number of the volume containing the master catalog. + + If ``volume`` is not specified, ``library`` has to be cataloged.
+ + | **required**: False + | **type**: str + + + sms + Indicates that the library specified in the ``library`` parameter is managed by the storage management subsystem (SMS), and therefore no volume is associated with the library. + + If true ``volume`` will be ignored. + + | **required**: False + | **type**: bool + | **default**: False + + + + + +Examples +-------- + +.. code-block:: yaml+jinja + + + - name: Add a library to the APF list + zos_apf: + library: SOME.SEQUENTIAL.DATASET + volume: T12345 + - name: Add a library (cataloged) to the APF list and persistence + zos_apf: + library: SOME.SEQUENTIAL.DATASET + force_dynamic: true + persistent: + data_set_name: SOME.PARTITIONED.DATASET(MEM) + - name: Remove a library from the APF list and persistence + zos_apf: + state: absent + library: SOME.SEQUENTIAL.DATASET + volume: T12345 + persistent: + data_set_name: SOME.PARTITIONED.DATASET(MEM) + - name: Batch libraries with custom marker, persistence for the APF list + zos_apf: + persistent: + data_set_name: "SOME.PARTITIONED.DATASET(MEM)" + marker: "/* {mark} PROG001 USR0010 */" + batch: + - library: SOME.SEQ.DS1 + - library: SOME.SEQ.DS2 + sms: true + - library: SOME.SEQ.DS3 + volume: T12345 + - name: Print the APF list matching library pattern or volume serial number + zos_apf: + operation: list + library: SOME.SEQ.* + volume: T12345 + - name: Set the APF list format to STATIC + zos_apf: + operation: set_static + + + + +Notes +----- + +.. note:: + It is the playbook author or user's responsibility to ensure they have appropriate authority to the RACF® FACILITY resource class. A user is described as the remote user, configured either for the playbook or playbook tasks, who can also obtain escalated privileges to execute as root or another user. + + To add or delete the APF list entry for library libname, you must have UPDATE authority to the RACF® FACILITY resource class entity CSVAPF.libname, or there must be no FACILITY class profile that protects that entity. 
+ + To change the format of the APF list to dynamic, you must have UPDATE authority to the RACF FACILITY resource class profile CSVAPF.MVS.SETPROG.FORMAT.DYNAMIC, or there must be no FACILITY class profile that protects that entity. + + To change the format of the APF list back to static, you must have UPDATE authority to the RACF FACILITY resource class profile CSVAPF.MVS.SETPROG.FORMAT.STATIC, or there must be no FACILITY class profile that protects that entity. + + + + + + + +Return Values +------------- + + +stdout + The stdout from ZOAU command apfadm. Output varies based on the type of operation. + + state> stdout of the executed operator command (opercmd), "SETPROG" from ZOAU command apfadm + + operation> stdout of operation options list> Returns a list of dictionaries of APF list entries [{'vol': 'PP0L6P', 'ds': 'DFH.V5R3M0.CICS.SDFHAUTH'}, {'vol': 'PP0L6P', 'ds': 'DFH.V5R3M0.CICS.SDFJAUTH'}, ...] set_dynamic> Set to DYNAMIC set_static> Set to STATIC check_format> DYNAMIC or STATIC + + | **returned**: always + | **type**: str + +stderr + The error messages from ZOAU command apfadm + + | **returned**: always + | **type**: str + | **sample**: BGYSC1310E ADD Error: Dataset COMMON.LINKLIB volume COMN01 is already present in APF list. + +rc + The return code from ZOAU command apfadm + + | **returned**: always + | **type**: int + +msg + The module messages + + | **returned**: failure + | **type**: str + | **sample**: Parameter verification failed + +backup_name + Name of the backup file or data set that was created. + + | **returned**: if backup=true, always + | **type**: str + diff --git a/docs/source/modules/zos_backup_restore.rst b/docs/source/modules/zos_backup_restore.rst index fdec87b0b..9d6656ac3 100644 --- a/docs/source/modules/zos_backup_restore.rst +++ b/docs/source/modules/zos_backup_restore.rst @@ -48,21 +48,17 @@ data_sets include When *operation=backup*, specifies a list of data sets or data set patterns to include in the backup. 
- When *operation=backup*, specifies a list of data sets or data set patterns to include in the backup. When *operation=backup* GDS relative names are supported. When *operation=restore*, specifies a list of data sets or data set patterns to include when restoring from a backup. - The single asterisk, ``*``, is used in place of exactly one qualifier. In addition, it can be used to indicate to DFSMSdss that only part of a qualifier has been specified. The single asterisk, ``*``, is used in place of exactly one qualifier. In addition, it can be used to indicate to DFSMSdss that only part of a qualifier has been specified. - When used with other qualifiers, the double asterisk, ``**``, indicates either the nonexistence of leading, trailing, or middle qualifiers, or the fact that they play no role in the selection process. When used with other qualifiers, the double asterisk, ``**``, indicates either the nonexistence of leading, trailing, or middle qualifiers, or the fact that they play no role in the selection process. Two asterisks are the maximum permissible in a qualifier. If there are two asterisks in a qualifier, they must be the first and last characters. - A question mark ``?`` or percent sign ``%`` matches a single character. A question mark ``?`` or percent sign ``%`` matches a single character. | **required**: False @@ -71,21 +67,17 @@ data_sets exclude When *operation=backup*, specifies a list of data sets or data set patterns to exclude from the backup. - When *operation=backup*, specifies a list of data sets or data set patterns to exclude from the backup. When *operation=backup* GDS relative names are supported. When *operation=restore*, specifies a list of data sets or data set patterns to exclude when restoring from a backup. - The single asterisk, ``*``, is used in place of exactly one qualifier. In addition, it can be used to indicate that only part of a qualifier has been specified." 
The single asterisk, ``*``, is used in place of exactly one qualifier. In addition, it can be used to indicate that only part of a qualifier has been specified." - When used with other qualifiers, the double asterisk, ``**``, indicates either the nonexistence of leading, trailing, or middle qualifiers, or the fact that they play no role in the selection process. When used with other qualifiers, the double asterisk, ``**``, indicates either the nonexistence of leading, trailing, or middle qualifiers, or the fact that they play no role in the selection process. Two asterisks are the maximum permissible in a qualifier. If there are two asterisks in a qualifier, they must be the first and last characters. - A question mark ``?`` or percent sign ``%`` matches a single character. A question mark ``?`` or percent sign ``%`` matches a single character. | **required**: False @@ -96,13 +88,10 @@ data_sets volume This applies to both data set restores and volume restores. - When *operation=backup* and *data_sets* are provided, specifies the volume that contains the data sets to backup. When *operation=backup* and *data_sets* are provided, specifies the volume that contains the data sets to backup. - When *operation=restore*, specifies the volume the backup should be restored to. When *operation=restore*, specifies the volume the backup should be restored to. - *volume* is required when restoring a full volume backup. *volume* is required when restoring a full volume backup. | **required**: False @@ -111,12 +100,9 @@ volume full_volume When *operation=backup* and *full_volume=True*, specifies that the entire volume provided to *volume* should be backed up. - When *operation=backup* and *full_volume=True*, specifies that the entire volume provided to *volume* should be backed up. - When *operation=restore* and *full_volume=True*, specifies that the volume should be restored (default is dataset). 
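As an aside, the ``include``/``exclude`` wildcard rules described above — a single ``*`` for exactly one qualifier (or part of one), ``**`` for any number of qualifiers, and ``?`` or ``%`` for a single character — can be sketched as a translation to regular expressions. This is an illustration only; the real filtering is performed by DFSMSdss, and ``dsname_filter_to_regex`` is a hypothetical helper, not part of the collection:

```python
import re

def dsname_filter_to_regex(pattern):
    """Compile a DFSMSdss-style data set filter into a regex (sketch only).

    Requires Python 3.7+, where re.escape() leaves '%' unescaped.
    """
    regex = re.escape(pattern.upper())
    # '.**' / '**.' stand for any number of qualifiers, including none.
    regex = regex.replace(r"\.\*\*", r"(?:\.[^.]+)*")
    regex = regex.replace(r"\*\*\.", r"(?:[^.]+\.)*")
    # A lone '*' stands for one qualifier or the remainder of one.
    regex = regex.replace(r"\*", r"[^.]*")
    # '?' and '%' each match exactly one character.
    regex = regex.replace(r"\?", r"[^.]").replace("%", r"[^.]")
    return re.compile(regex + r"\Z")
```

For example, ``USER.*`` matches ``USER.TEST`` but not ``USER.A.B``, while ``USER.**.DATA`` matches both ``USER.DATA`` and ``USER.A.B.DATA``.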
When *operation=restore* and *full_volume=True*, specifies that the volume should be restored (default is dataset). - *volume* must be provided when *full_volume=True*. *volume* must be provided when *full_volume=True*. | **required**: False @@ -127,7 +113,6 @@ full_volume temp_volume Specifies a particular volume on which the temporary data sets should be created during the backup and restore process. - When *operation=backup* and *backup_name* is a data set, specifies the volume the backup should be placed in. When *operation=backup* and *backup_name* is a data set, specifies the volume the backup should be placed in. | **required**: False @@ -157,9 +142,7 @@ recover overwrite When *operation=backup*, specifies if an existing data set or UNIX file matching *backup_name* should be deleted. - When *operation=backup*, specifies if an existing data set or UNIX file matching *backup_name* should be deleted. - When *operation=restore*, specifies if the module should overwrite existing data sets with matching name on the target device. When *operation=restore*, specifies if the module should overwrite existing data sets with matching name on the target device. | **required**: False @@ -169,12 +152,9 @@ overwrite sms_storage_class When *operation=restore*, specifies the storage class to use. The storage class will also be used for temporary data sets created during restore process. - When *operation=restore*, specifies the storage class to use. The storage class will also be used for temporary data sets created during restore process. - When *operation=backup*, specifies the storage class to use for temporary data sets created during backup process. When *operation=backup*, specifies the storage class to use for temporary data sets created during backup process. - If neither of *sms_storage_class* or *sms_management_class* are specified, the z/OS system's Automatic Class Selection (ACS) routines will be used. 
If neither of *sms_storage_class* or *sms_management_class* are specified, the z/OS system's Automatic Class Selection (ACS) routines will be used. | **required**: False @@ -183,12 +163,9 @@ sms_storage_class sms_management_class When *operation=restore*, specifies the management class to use. The management class will also be used for temporary data sets created during restore process. - When *operation=restore*, specifies the management class to use. The management class will also be used for temporary data sets created during restore process. - When *operation=backup*, specifies the management class to use for temporary data sets created during backup process. When *operation=backup*, specifies the management class to use for temporary data sets created during backup process. - If neither of *sms_storage_class* or *sms_management_class* are specified, the z/OS system's Automatic Class Selection (ACS) routines will be used. If neither of *sms_storage_class* or *sms_management_class* are specified, the z/OS system's Automatic Class Selection (ACS) routines will be used. | **required**: False @@ -197,15 +174,11 @@ sms_management_class space If *operation=backup*, specifies the amount of space to allocate for the backup. Please note that even when backing up to a UNIX file, backup contents will be temporarily held in a data set. - If *operation=backup*, specifies the amount of space to allocate for the backup. Please note that even when backing up to a UNIX file, backup contents will be temporarily held in a data set. - If *operation=restore*, specifies the amount of space to allocate for data sets temporarily created during the restore process. If *operation=restore*, specifies the amount of space to allocate for data sets temporarily created during the restore process. - The unit of space used is set using *space_type*. The unit of space used is set using *space_type*. 
- When *full_volume=True*, *space* defaults to ``1``, otherwise default is ``25`` When *full_volume=True*, *space* defaults to ``1``, otherwise default is ``25`` | **required**: False @@ -215,10 +188,8 @@ space space_type The unit of measurement to use when defining data set space. - Valid units of size are ``k``, ``m``, ``g``, ``cyl``, and ``trk``. Valid units of size are ``k``, ``m``, ``g``, ``cyl``, and ``trk``. - When *full_volume=True*, *space_type* defaults to ``g``, otherwise default is ``m`` When *full_volume=True*, *space_type* defaults to ``g``, otherwise default is ``m`` | **required**: False @@ -238,7 +209,6 @@ hlq tmp_hlq Override the default high level qualifier (HLQ) for temporary and backup data sets. - The default HLQ is the Ansible user that executes the module and if that is not available, then the value of ``TMPHLQ`` is used. The default HLQ is the Ansible user that executes the module and if that is not available, then the value of ``TMPHLQ`` is used. | **required**: False diff --git a/docs/source/modules/zos_blockinfile.rst b/docs/source/modules/zos_blockinfile.rst index deacb25e3..fdd98d0f8 100644 --- a/docs/source/modules/zos_blockinfile.rst +++ b/docs/source/modules/zos_blockinfile.rst @@ -33,7 +33,7 @@ src The USS file must be an absolute pathname. - Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1\``.) + Generation data set (GDS) relative name of generation already created. ``e.g. 
SOME.CREATION(-1).`` | **required**: True | **type**: str diff --git a/docs/source/resources/releases_maintenance.rst b/docs/source/resources/releases_maintenance.rst index df4ee6754..9a5adbce8 100644 --- a/docs/source/resources/releases_maintenance.rst +++ b/docs/source/resources/releases_maintenance.rst @@ -89,7 +89,7 @@ The z/OS managed node includes several shells, currently the only supported shel +---------+----------------------------+---------------------------------------------------+---------------+---------------+ | Version | Controller | Managed Node | GA | End of Life | +=========+============================+===================================================+===============+===============+ -| 1.11.x |- `ansible-core`_ >=2.15.x |- `z/OS`_ V2R4 - V2Rx | In preview | TBD | +| 1.11.x |- `ansible-core`_ >=2.15.x |- `z/OS`_ V2R4 - V3Rx | In preview | TBD | | |- `Ansible`_ >=8.0.x |- `z/OS shell`_ | | | | |- `AAP`_ >=2.4 |- IBM `Open Enterprise SDK for Python`_ | | | | | |- IBM `Z Open Automation Utilities`_ >=1.3.1 | | | diff --git a/plugins/doc_fragments/template.py-e b/plugins/doc_fragments/template.py-e new file mode 100644 index 000000000..af96f7b9d --- /dev/null +++ b/plugins/doc_fragments/template.py-e @@ -0,0 +1,120 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) IBM Corporation 2022, 2024 +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# http://www.apache.org/licenses/LICENSE-2.0 +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
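The doc fragment added in this patch defines the per-module Jinja2 template options. As an illustration (the module choice and all option values here are hypothetical, and the template path is invented), a task could set a few of them like this:

```yaml
- name: Submit rendered JCL, treating src as a Jinja2 template
  ibm.ibm_zos_core.zos_job_submit:
    src: "{{ playbook_dir }}/jcl/sample.j2"
    location: local
    use_template: true
    template_parameters:
      variable_start_string: "(("
      variable_end_string: "))"
      keep_trailing_newline: false
```

Custom variable markers such as ``((`` / ``))`` are useful when the JCL itself contains ``{{`` sequences that should not be interpreted by Jinja2.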
+ +from __future__ import (absolute_import, division, print_function) +__metaclass__ = type + + +class ModuleDocFragment(object): + + DOCUMENTATION = r''' +options: + use_template: + description: + - Whether the module should treat C(src) as a Jinja2 template and + render it before continuing with the rest of the module. + - Only valid when C(src) is a local file or directory. + - All variables defined in inventory files, vars files and the playbook + will be passed to the template engine, + as well as L(Ansible special variables,https://docs.ansible.com/ansible/latest/reference_appendices/special_variables.html#special-variables), + such as C(playbook_dir), C(ansible_version), etc. + - If variables defined in different scopes share the same name, Ansible will + apply variable precedence to them. You can see the complete precedence order + L(in Ansible's documentation,https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#understanding-variable-precedence) + type: bool + default: false + template_parameters: + description: + - Options to set the way Jinja2 will process templates. + - Jinja2 already sets defaults for the markers it uses, you can find more + information at its L(official documentation,https://jinja.palletsprojects.com/en/latest/templates/). + - These options are ignored unless C(use_template) is true. + required: false + type: dict + suboptions: + variable_start_string: + description: + - Marker for the beginning of a statement to print a variable in Jinja2. + type: str + default: '{{' + variable_end_string: + description: + - Marker for the end of a statement to print a variable in Jinja2. + type: str + default: '}}' + block_start_string: + description: + - Marker for the beginning of a block in Jinja2. + type: str + default: '{%' + block_end_string: + description: + - Marker for the end of a block in Jinja2. 
+ type: str + default: '%}' + comment_start_string: + description: + - Marker for the beginning of a comment in Jinja2. + type: str + default: '{#' + comment_end_string: + description: + - Marker for the end of a comment in Jinja2. + type: str + default: '#}' + line_statement_prefix: + description: + - Prefix used by Jinja2 to identify line-based statements. + type: str + required: false + line_comment_prefix: + description: + - Prefix used by Jinja2 to identify comment lines. + type: str + required: false + lstrip_blocks: + description: + - Whether Jinja2 should strip leading spaces from the start of a line + to a block. + type: bool + default: false + trim_blocks: + description: + - Whether Jinja2 should remove the first newline after a block. + - Setting this option to C(False) will result in newlines being added to + the rendered template. This could create invalid code when working with + JCL templates or empty records in destination data sets. + type: bool + default: true + keep_trailing_newline: + description: + - Whether Jinja2 should keep the first trailing newline at the end of a + template after rendering. + type: bool + default: false + newline_sequence: + description: + - Sequence that starts a newline in a template. + type: str + default: '\\n' + choices: + - '\\n' + - '\\r' + - '\\r\\n' + auto_reload: + description: + - Whether to reload a template file when it has changed after the task + has started. + type: bool + default: false +''' diff --git a/plugins/modules/zos_blockinfile.py b/plugins/modules/zos_blockinfile.py index 9a3065833..ab6d2a0dd 100644 --- a/plugins/modules/zos_blockinfile.py +++ b/plugins/modules/zos_blockinfile.py @@ -39,7 +39,7 @@ PS (sequential data set), member of a PDS or PDSE, PDS, PDSE. - The USS file must be an absolute pathname. - Generation data set (GDS) relative name of generation already - created. C(e.g. SOME.CREATION(-1\).) + created. ``e.g.
SOME.CREATION(-1).`` type: str aliases: [ path, destfile, name ] required: true From dd079ad50f381ca6e188ce0fcdb97e7cca8ae4d5 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Wed, 14 Aug 2024 16:28:20 -0600 Subject: [PATCH 10/13] Fixed backslashes in docs --- plugins/modules/zos_job_submit.py | 4 ++-- plugins/modules/zos_lineinfile.py | 2 +- plugins/modules/zos_unarchive.py | 6 +++--- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/plugins/modules/zos_job_submit.py b/plugins/modules/zos_job_submit.py index e6e191060..d91b511c3 100644 --- a/plugins/modules/zos_job_submit.py +++ b/plugins/modules/zos_job_submit.py @@ -36,9 +36,9 @@ description: - The source file or data set containing the JCL to submit. - It could be a physical sequential data set, a partitioned data set - qualified by a member or a path (e.g. C(USER.TEST), V(USER.JCL(TEST\))), + qualified by a member or a path (e.g. C(USER.TEST), ``USER.JCL(TEST)``), or a generation data set from a generation data group - (for example, V(USER.TEST.GDG(-2\))). + (for example, ``USER.TEST.GDG(-2)``). - Or a USS file. (e.g C(/u/tester/demo/sample.jcl)) - Or a LOCAL file in ansible control node. (e.g C(/User/tester/ansible-playbook/sample.jcl)) diff --git a/plugins/modules/zos_lineinfile.py b/plugins/modules/zos_lineinfile.py index ba8a1c4be..c5f262fe0 100644 --- a/plugins/modules/zos_lineinfile.py +++ b/plugins/modules/zos_lineinfile.py @@ -37,7 +37,7 @@ PS (sequential data set), member of a PDS or PDSE, PDS, PDSE. - The USS file must be an absolute pathname. - Generation data set (GDS) relative name of generation already - created. C(e.g. SOME.CREATION(-1\).) + created. ``e.g. 
SOME.CREATION(-1).`` type: str aliases: [ path, destfile, name ] required: true diff --git a/plugins/modules/zos_unarchive.py b/plugins/modules/zos_unarchive.py index 7a09cd025..f5febbf90 100644 --- a/plugins/modules/zos_unarchive.py +++ b/plugins/modules/zos_unarchive.py @@ -36,7 +36,7 @@ - I(src) can be a USS file or MVS data set name. - USS file paths should be absolute paths. - MVS data sets supported types are C(SEQ), C(PDS), C(PDSE). - - GDS relative names are supported C(e.g. USER.GDG(-1\)). + - GDS relative names are supported ``e.g. USER.GDG(-1)``. type: str required: true format: @@ -146,7 +146,7 @@ description: - A list of directories, files or data set names to extract from the archive. - - GDS relative names are supported C(e.g. USER.GDG(-1\)). + - GDS relative names are supported ``e.g. USER.GDG(-1)``. - When C(include) is set, only those files will be extracted, leaving the remaining files in the archive. - Mutually exclusive with exclude. @@ -157,7 +157,7 @@ description: - List the directory and file or data set names that you would like to exclude from the unarchive action. - - GDS relative names are supported C(e.g. USER.GDG(-1\)). + - GDS relative names are supported ``e.g. USER.GDG(-1)``. - Mutually exclusive with include. type: list elements: str From f1ff1788c0936090a8610222a75fe5910a248f53 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Wed, 14 Aug 2024 16:35:08 -0600 Subject: [PATCH 11/13] Updated RSTs --- docs/source/modules/zos_job_submit.rst | 2 +- docs/source/modules/zos_lineinfile.rst | 2 +- docs/source/modules/zos_unarchive.rst | 6 +++--- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/source/modules/zos_job_submit.rst b/docs/source/modules/zos_job_submit.rst index 6808137a6..d22911889 100644 --- a/docs/source/modules/zos_job_submit.rst +++ b/docs/source/modules/zos_job_submit.rst @@ -31,7 +31,7 @@ Parameters src The source file or data set containing the JCL to submit.
- It could be a physical sequential data set, a partitioned data set qualified by a member or a path (e.g. ``USER.TEST``, V(USER.JCL(TEST\))), or a generation data set from a generation data group (for example, V(USER.TEST.GDG(-2\))). + It could be a physical sequential data set, a partitioned data set qualified by a member or a path (e.g. ``USER.TEST``, ``USER.JCL(TEST)``), or a generation data set from a generation data group (for example, ``USER.TEST.GDG(-2)``). Or a USS file. (e.g ``/u/tester/demo/sample.jcl``) diff --git a/docs/source/modules/zos_lineinfile.rst b/docs/source/modules/zos_lineinfile.rst index da0108bfb..1db6545c5 100644 --- a/docs/source/modules/zos_lineinfile.rst +++ b/docs/source/modules/zos_lineinfile.rst @@ -33,7 +33,7 @@ src The USS file must be an absolute pathname. - Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1\``.) + Generation data set (GDS) relative name of generation already created. ``e.g. SOME.CREATION(-1).`` | **required**: True | **type**: str diff --git a/docs/source/modules/zos_unarchive.rst b/docs/source/modules/zos_unarchive.rst index b3a4ff7cd..89b4b065c 100644 --- a/docs/source/modules/zos_unarchive.rst +++ b/docs/source/modules/zos_unarchive.rst @@ -39,7 +39,7 @@ src MVS data sets supported types are ``SEQ``, ``PDS``, ``PDSE``. - GDS relative names are supported ``e.g. USER.GDG(-1\``). + GDS relative names are supported ``e.g. USER.GDG(-1)``. | **required**: True | **type**: str @@ -151,7 +151,7 @@ owner include A list of directories, files or data set names to extract from the archive. - GDS relative names are supported ``e.g. USER.GDG(-1\``). + GDS relative names are supported ``e.g. USER.GDG(-1)``. When ``include`` is set, only those files will be extracted, leaving the remaining files in the archive. @@ -165,7 +165,7 @@ include exclude List the directory and file or data set names that you would like to exclude from the unarchive action.
- GDS relative names are supported ``e.g. USER.GDG(-1\``). + GDS relative names are supported ``e.g. USER.GDG(-1)``. Mutually exclusive with include. From daabd5405598798744453b2b8eec98c11440c343 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Wed, 14 Aug 2024 17:00:43 -0600 Subject: [PATCH 12/13] Updated newline sequences --- docs/source/modules/zos_copy.rst | 7 ++----- docs/source/modules/zos_job_submit.rst | 7 ++----- docs/source/modules/zos_script.rst | 7 ++----- 3 files changed, 6 insertions(+), 15 deletions(-) diff --git a/docs/source/modules/zos_copy.rst b/docs/source/modules/zos_copy.rst index b6d164a84..8e8cb42bf 100644 --- a/docs/source/modules/zos_copy.rst +++ b/docs/source/modules/zos_copy.rst @@ -665,11 +665,8 @@ template_parameters | **required**: False | **type**: str - | **default**: - - | **choices**: -, , - + | **default**: \\n + | **choices**: \\n, \\r, \\r\\n auto_reload diff --git a/docs/source/modules/zos_job_submit.rst b/docs/source/modules/zos_job_submit.rst index d22911889..573b4f4bd 100644 --- a/docs/source/modules/zos_job_submit.rst +++ b/docs/source/modules/zos_job_submit.rst @@ -248,11 +248,8 @@ template_parameters | **required**: False | **type**: str - | **default**: - - | **choices**: -, , - + | **default**: \\n + | **choices**: \\n, \\r, \\r\\n auto_reload diff --git a/docs/source/modules/zos_script.rst b/docs/source/modules/zos_script.rst index 821f11a9c..10660d38a 100644 --- a/docs/source/modules/zos_script.rst +++ b/docs/source/modules/zos_script.rst @@ -220,11 +220,8 @@ template_parameters | **required**: False | **type**: str - | **default**: - - | **choices**: -, , - + | **default**: \\n + | **choices**: \\n, \\r, \\r\\n auto_reload From 663ff08495f1b4d252d164b8d07a1f9948bfce56 Mon Sep 17 00:00:00 2001 From: Fernando Flores Date: Wed, 14 Aug 2024 17:01:47 -0600 Subject: [PATCH 13/13] Removed extra file --- docs/source/modules/zos_apf.rst-e | 318 ------------------------------ 1 file changed, 318 deletions(-) delete mode 
100644 docs/source/modules/zos_apf.rst-e diff --git a/docs/source/modules/zos_apf.rst-e b/docs/source/modules/zos_apf.rst-e deleted file mode 100644 index b758d3129..000000000 --- a/docs/source/modules/zos_apf.rst-e +++ /dev/null @@ -1,318 +0,0 @@ - -:github_url: https://github.com/ansible-collections/ibm_zos_core/blob/dev/plugins/modules/zos_apf.py - -.. _zos_apf_module: - - -zos_apf -- Add or remove libraries to Authorized Program Facility (APF) -======================================================================= - - - -.. contents:: - :local: - :depth: 1 - - -Synopsis --------- -- Adds or removes libraries to Authorized Program Facility (APF). -- Manages APF statement persistent entries to a data set or data set member. -- Changes APF list format to "DYNAMIC" or "STATIC". -- Gets the current APF list entries. - - - - - -Parameters ----------- - - -library - The library name to be added or removed from the APF list. - - | **required**: False - | **type**: str - - -state - Ensure that the library is added ``state=present`` or removed ``state=absent``. - - The APF list format has to be "DYNAMIC". - - | **required**: False - | **type**: str - | **default**: present - | **choices**: absent, present - - -force_dynamic - Will force the APF list format to "DYNAMIC" before adding or removing libraries. - - If the format is "STATIC", the format will be changed to "DYNAMIC". - - | **required**: False - | **type**: bool - | **default**: False - - -volume - The identifier for the volume containing the library specified in the ``library`` parameter. The values must be one the following. - - 1. The volume serial number. - - 2. Six asterisks ``******``, indicating that the system must use the volume serial number of the current system residence (SYSRES) volume. - - 3. *MCAT*, indicating that the system must use the volume serial number of the volume containing the master catalog. - - If ``volume`` is not specified, ``library`` has to be cataloged. 
- - | **required**: False - | **type**: str - - -sms - Indicates that the library specified in the ``library`` parameter is managed by the storage management subsystem (SMS), and therefore no volume is associated with the library. - - If ``sms=True``, ``volume`` value will be ignored. - - | **required**: False - | **type**: bool - | **default**: False - - -operation - Change APF list format to "DYNAMIC" ``operation=set_dynamic`` or "STATIC" ``operation=set_static`` - - Display APF list current format ``operation=check_format`` - - Display APF list entries when ``operation=list`` ``library``, ``volume`` and ``sms`` will be used as filters. - - If ``operation`` is not set, add or remove operation will be ignored. - - | **required**: False - | **type**: str - | **choices**: set_dynamic, set_static, check_format, list - - -tmp_hlq - Override the default high level qualifier (HLQ) for temporary and backup datasets. - - The default HLQ is the Ansible user used to execute the module and if that is not available, then the value ``TMPHLQ`` is used. - - | **required**: False - | **type**: str - - -persistent - Add/remove persistent entries to or from *data_set_name* - - ``library`` will not be persisted or removed if ``persistent=None`` - - | **required**: False - | **type**: dict - - - data_set_name - The data set name used for persisting or removing a ``library`` from the APF list. - - | **required**: True - | **type**: str - - - marker - The marker line template. - - ``{mark}`` will be replaced with "BEGIN" and "END". - - Using a custom marker without the ``{mark}`` variable may result in the block being repeatedly inserted on subsequent playbook runs. - - ``{mark}`` length may not exceed 72 characters. 
- - The timestamp used in the default marker follows the '+%Y%m%d-%H%M%S' date format. - - | **required**: False - | **type**: str - | **default**: /* {mark} ANSIBLE MANAGED BLOCK */ - - - backup - Creates a backup file or backup data set for *data_set_name*, including the timestamp information to ensure that you retrieve the original APF list defined in *data_set_name*. - - *backup_name* can be used to specify a backup file name if *backup=true*. - - The backup file name will be returned on either success or failure of module execution such that data can be retrieved. - - | **required**: False - | **type**: bool - | **default**: False - - - backup_name - Specify the USS file name or data set name for the destination backup. - - If the source *data_set_name* is a USS file or path, the backup_name must be a file or path name, and the USS file or path must be an absolute path name. - - If the source is an MVS data set, the backup_name must be an MVS data set name. - - If the backup_name is not provided, the default backup_name will be used. If the source is a USS file or path, the name of the backup file will be the source file or path name appended with a timestamp. For example, ``/path/file_name.2020-04-23-08-32-29-bak.tar``. - - If the source is an MVS data set, it will be a data set with a random name generated by calling the ZOAU API. The MVS backup data set recovery can be done by renaming it. - - | **required**: False - | **type**: str - - - -batch - A list of dictionaries for adding or removing libraries. - - This is mutually exclusive with ``library``, ``volume`` and ``sms``. - - Can be used with ``persistent``. - - | **required**: False - | **type**: list - | **elements**: dict - - - library - The library name to be added or removed from the APF list. - - | **required**: True - | **type**: str - - - volume - The identifier for the volume containing the library specified on the ``library`` parameter. The values must be one of the following. - - 1. 
The volume serial number. - - 2. Six asterisks ``******``, indicating that the system must use the volume serial number of the current system residence (SYSRES) volume. - - 3. *MCAT*, indicating that the system must use the volume serial number of the volume containing the master catalog. - - If ``volume`` is not specified, ``library`` has to be cataloged. - - | **required**: False - | **type**: str - - - sms - Indicates that the library specified in the ``library`` parameter is managed by the storage management subsystem (SMS), and therefore no volume is associated with the library. - - If ``sms=True``, ``volume`` will be ignored. - - | **required**: False - | **type**: bool - | **default**: False - - - - - Examples -------- - .. code-block:: yaml+jinja - - - - name: Add a library to the APF list - zos_apf: - library: SOME.SEQUENTIAL.DATASET - volume: T12345 - - name: Add a library (cataloged) to the APF list and persistence - zos_apf: - library: SOME.SEQUENTIAL.DATASET - force_dynamic: true - persistent: - data_set_name: SOME.PARTITIONED.DATASET(MEM) - - name: Remove a library from the APF list and persistence - zos_apf: - state: absent - library: SOME.SEQUENTIAL.DATASET - volume: T12345 - persistent: - data_set_name: SOME.PARTITIONED.DATASET(MEM) - - name: Batch libraries with custom marker, persistence for the APF list - zos_apf: - persistent: - data_set_name: "SOME.PARTITIONED.DATASET(MEM)" - marker: "/* {mark} PROG001 USR0010 */" - batch: - - library: SOME.SEQ.DS1 - - library: SOME.SEQ.DS2 - sms: true - - library: SOME.SEQ.DS3 - volume: T12345 - - name: Print the APF list matching library pattern or volume serial number - zos_apf: - operation: list - library: SOME.SEQ.* - volume: T12345 - - name: Set the APF list format to STATIC - zos_apf: - operation: set_static - - - - Notes ----- - .. note:: - It is the playbook author or user's responsibility to ensure they have appropriate authority to the RACF® FACILITY resource class.
A user is described as the remote user, configured either for the playbook or playbook tasks, who can also obtain escalated privileges to execute as root or another user. - - To add or delete the APF list entry for library libname, you must have UPDATE authority to the RACF® FACILITY resource class entity CSVAPF.libname, or there must be no FACILITY class profile that protects that entity. - - To change the format of the APF list to dynamic, you must have UPDATE authority to the RACF FACILITY resource class profile CSVAPF.MVS.SETPROG.FORMAT.DYNAMIC, or there must be no FACILITY class profile that protects that entity. - - To change the format of the APF list back to static, you must have UPDATE authority to the RACF FACILITY resource class profile CSVAPF.MVS.SETPROG.FORMAT.STATIC, or there must be no FACILITY class profile that protects that entity. - - - - - - - -Return Values ------------- - - -stdout - The stdout from ZOAU command apfadm. Output varies based on the type of operation. - - state> stdout of the executed operator command (opercmd), "SETPROG" from ZOAU command apfadm - - operation> stdout of the operation options: list> Returns a list of dictionaries of APF list entries [{'vol': 'PP0L6P', 'ds': 'DFH.V5R3M0.CICS.SDFHAUTH'}, {'vol': 'PP0L6P', 'ds': 'DFH.V5R3M0.CICS.SDFJAUTH'}, ...]; set_dynamic> Set to DYNAMIC; set_static> Set to STATIC; check_format> DYNAMIC or STATIC - - | **returned**: always - | **type**: str - -stderr - The error messages from ZOAU command apfadm. - - | **returned**: always - | **type**: str - | **sample**: BGYSC1310E ADD Error: Dataset COMMON.LINKLIB volume COMN01 is already present in APF list. - -rc - The return code from ZOAU command apfadm. - - | **returned**: always - | **type**: int - -msg - The module messages. - - | **returned**: failure - | **type**: str - | **sample**: Parameter verification failed - -backup_name - Name of the backup file or data set that was created. - - | **returned**: if backup=true, always - | **type**: str -
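The return values documented above can drive conditional follow-on tasks. As a brief, hedged sketch (the task names and the registered variable ``apf_result`` are illustrative, not part of the module documentation; that ``check_format`` reports "DYNAMIC" or "STATIC" in ``stdout`` and that ``rc`` is 0 on success are taken from the descriptions above):

```yaml
# Illustrative sketch: register the module result and branch on the
# documented return values (rc, stdout).
- name: Check the current APF list format
  zos_apf:
    operation: check_format
  register: apf_result

# Only switch the list to DYNAMIC when it is currently STATIC.
- name: Set the APF list format to DYNAMIC if needed
  zos_apf:
    operation: set_dynamic
  when: apf_result.rc == 0 and apf_result.stdout is search("STATIC")
```

Alternatively, setting ``force_dynamic: true`` on an add or remove task achieves the same effect without a separate query.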