diff --git a/main/faq/index.html b/main/faq/index.html index bb8fc18c4..1a7f1c562 100644 --- a/main/faq/index.html +++ b/main/faq/index.html @@ -1727,6 +1727,34 @@ + + +
  • + + Why am I seeing AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' when running ANTA + + + + +
  • + +
  • + + pip install -U pyopenssl>22.0 + + +
  • @@ -1799,6 +1827,34 @@ +
  • + +
  • + + Why am I seeing AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' when running ANTA + + + + +
  • + +
  • + + pip install -U pyopenssl>22.0 + + +
  • @@ -1860,8 +1916,20 @@
How can I resolve this error? As per the urllib3 v2 migration guide, the root cause of this error is an incompatibility with older OpenSSL versions. For example, users on RHEL7 might consider upgrading to RHEL8, which supports the required OpenSSL version.

  • -
    +

Why am I seeing AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' when running ANTA?

    +

When running anta commands after installation, some users might encounter the following error:

    +
    AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms'
    +
    +

The error is the result of an incompatibility between the cryptography and pyopenssl packages when installing asyncssh, which is a requirement of ANTA.
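Before upgrading, you can confirm which versions of the involved packages are currently installed straight from Python (a small sketch using the standard importlib.metadata module):

# Print the installed versions of the packages involved in the incompatibility
from importlib.metadata import PackageNotFoundError, version

for package in ("pyOpenSSL", "cryptography", "asyncssh"):
    try:
        print(f"{package}: {version(package)}")
    except PackageNotFoundError:
        print(f"{package}: not installed")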

    +
    How can I resolve this error?
    +
      +
    1. +

      Upgrade pyopenssl

      +

      pip install -U "pyopenssl>22.0"
      +

      Still facing issues?

      +
    2. +

    If you’ve tried the above solutions and continue to experience problems, please report the issue in our GitHub repository.


    @@ -1869,7 +1937,7 @@

    Still facing issues?August 18, 2023 + September 21, 2023 diff --git a/main/getting-started/index.html b/main/getting-started/index.html index a75849cbc..fffd4d1f8 100644 --- a/main/getting-started/index.html +++ b/main/getting-started/index.html @@ -402,8 +402,8 @@
  • - - Report per host + + Report in JSON format
  • @@ -1869,8 +1869,8 @@
  • - - Report per host + + Report in JSON format
  • @@ -2069,9 +2069,6 @@

    Test your network text ANTA command to check network states with text result tpl-report ANTA command to check network state with templated report -
    -

    Currently, to be able to run anta nrfu --help, you need to provide ANTA with the mandatory input parameters (username, password and inventory); otherwise the CLI will report an issue. This is tracked in: https://github.com/arista-netdevops-community/anta/issues/263

    -

    To run the NRFU, you need to select an output format amongst [“json”, “table”, “text”, “tpl-report”]. For a first usage, table is recommended. By default, all test results for all devices are rendered, but this can be changed to a report per test case or per host.

    Default report using table
    anta \
    @@ -2136,7 +2133,7 @@ 
    Report in text mode :: VerifyMlagConfigSanity :: SKIPPED (MLAG is disabled) [...]
    -
    Report per host
    +
    Report in JSON format
    $ anta \
         --username tom \
         --password arista123 \
    @@ -2188,7 +2185,7 @@ 
    Report per hostAugust 18, 2023 + September 21, 2023 diff --git a/main/search/search_index.json b/main/search/search_index.json index 8a1edc412..2da1a0eab 100644 --- a/main/search/search_index.json +++ b/main/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#arista-network-test-automation-anta-framework","title":"Arista Network Test Automation (ANTA) Framework","text":"

    ANTA is a Python framework that automates tests for Arista devices.

    • ANTA provides a set of tests to validate the state of your network
    • ANTA can be used to:
      • Automate NRFU (Network Ready For Use) tests on a preproduction network
      • Automate tests on a live network (periodically or on demand)
    • ANTA can be used with:
      • The ANTA CLI
      • As a Python library in your own application

    # Install ANTA CLI\n$ pip install anta\n\n# Run ANTA CLI\n$ anta --help\nUsage: anta [OPTIONS] COMMAND [ARGS]...\n\n  Arista Network Test Automation (ANTA) CLI\n\nOptions:\n  --version                       Show the version and exit.\n  --username TEXT                 Username to connect to EOS  [env var:\n                                  ANTA_USERNAME; required]\n--password TEXT                 Password to connect to EOS that must be\n                                  provided. It can be prompted using '--\n                                  prompt' option.  [env var: ANTA_PASSWORD]\n--enable-password TEXT          Password to access EOS Privileged EXEC mode.\n                                  It can be prompted using '--prompt' option.\n                                  Requires '--enable' option.  [env var:\n                                  ANTA_ENABLE_PASSWORD]\n--enable                        Some commands may require EOS Privileged\n                                  EXEC mode. This option tries to access this\n                                  mode before sending a command to the device.\n                                  [env var: ANTA_ENABLE]\n-P, --prompt                    Prompt for passwords if they are not\n                                  provided.\n  --timeout INTEGER               Global connection timeout  [env var:\n                                  ANTA_TIMEOUT; default: 30]\n--insecure                      Disable SSH Host Key validation  [env var:\n                                  ANTA_INSECURE]\n-i, --inventory FILE            Path to the inventory YAML file  [env var:\n                                  ANTA_INVENTORY; required]\n--log-file FILE                 Send the logs to a file. If logging level is\n                                  DEBUG, only INFO or higher will be sent to\n                                  stdout.  [env var: ANTA_LOG_FILE]\n--log-level, --log [CRITICAL|ERROR|WARNING|INFO|DEBUG]\nANTA logging level  [env var:\n                                  ANTA_LOG_LEVEL; default: INFO]\n--ignore-status                 Always exit with success  [env var:\n                                  ANTA_IGNORE_STATUS]\n--ignore-error                  Only report failures and not errors  [env\n                                  var: ANTA_IGNORE_ERROR]\n--help                          Show this message and exit.\n\nCommands:\n  debug  Debug commands for building ANTA\n  exec   Execute commands to inventory devices\n  get    Get data from/to ANTA\n  nrfu   Run NRFU against inventory devices\n

    username, password, enable-password, enable, timeout and insecure values are the same for all devices

    "},{"location":"#documentation","title":"Documentation","text":"

    The documentation is published on the ANTA package website. A demo repository is also available to facilitate your journey with ANTA.

    "},{"location":"#contribution-guide","title":"Contribution guide","text":"

    Contributions are welcome. Please refer to the contribution guide.

    "},{"location":"#credits","title":"Credits","text":"

    Thank you to Ang\u00e9lique Phillipps, Colin MacGiollaE\u00e1in, Khelil Sator, Matthieu Tache, Onur Gashi, Paul Lavelle, Guillaume Mulocher and Thomas Grimonet for their contributions and guidance.

    "},{"location":"contribution/","title":"Contributions","text":""},{"location":"contribution/#how-to-contribute-to-anta","title":"How to contribute to ANTA","text":"

    The contribution model is based on a fork model. Don\u2019t push to arista-netdevops-community/anta directly. Always create a branch in your forked repository and open a PR.

    To help development, open your PR as soon as possible, even in draft mode. It helps others know what you are working on and avoids duplicate PRs.

    "},{"location":"contribution/#create-a-development-environement","title":"Create a development environement","text":"

    Run the following commands to create an ANTA development environment:

    # Clone repository\n$ git clone https://github.com/arista-netdevops-community/anta.git\n$ cd anta\n\n# Install ANTA in editable mode and its development tools\n$ pip install -e .[dev]\n\n# Verify installation\n$ pip list -e\nPackage Version Editable project location\n------- ------- -------------------------\nanta    0.7.2   /mnt/lab/projects/anta\n

    Then, tox is configured with a few environments to run CI locally:

    $ tox list -d\ndefault environments:\nclean  -> Erase previous coverage reports\nlint   -> Check the code style\ntype   -> Check typing\npy38   -> Run pytest with py38\npy39   -> Run pytest with py39\npy310  -> Run pytest with py310\npy311  -> Run pytest with py311\nreport -> Generate coverage report\n
    "},{"location":"contribution/#code-linting","title":"Code linting","text":"
    tox -e lint\n[...]\nlint: commands[0]> black --check --diff --color .\nAll done! \u2728 \ud83c\udf70 \u2728\n104 files would be left unchanged.\nlint: commands[1]> isort --check --diff --color .\nSkipped 7 files\nlint: commands[2]> flake8 --max-line-length=165 --config=/dev/null anta\nlint: commands[3]> flake8 --max-line-length=165 --config=/dev/null tests\nlint: commands[4]> pylint anta\n\n--------------------------------------------------------------------\nYour code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)\n\n.pkg: _exit> python /Users/guillaumemulocher/.pyenv/versions/3.8.13/envs/anta/lib/python3.8/site-packages/pyproject_api/_backend.py True setuptools.build_meta\n  lint: OK (19.26=setup[5.83]+cmd[1.50,0.76,1.19,1.20,8.77] seconds)\ncongratulations :) (19.56 seconds)\n
    "},{"location":"contribution/#code-typing","title":"Code Typing","text":"
    tox -e type\n\n[...]\ntype: commands[0]> mypy --config-file=pyproject.toml anta\nSuccess: no issues found in 52 source files\n.pkg: _exit> python /Users/guillaumemulocher/.pyenv/versions/3.8.13/envs/anta/lib/python3.8/site-packages/pyproject_api/_backend.py True setuptools.build_meta\n  type: OK (46.66=setup[24.20]+cmd[22.46] seconds)\ncongratulations :) (47.01 seconds)\n

    NOTE: Typing is configured quite strictly; do not hesitate to reach out if you have any questions, struggles, or nightmares.

    "},{"location":"contribution/#unit-tests","title":"Unit tests","text":"

    To keep code quality high, we require a Pytest unit test for every test implemented in ANTA.

    Each submodule should have its own pytest section under tests/units/anta_tests/<submodule-name>.py.

    "},{"location":"contribution/#how-to-write-a-unit-test-for-an-antatest-subclass","title":"How to write a unit test for an AntaTest subclass","text":"

    The Python modules in the tests/units/anta_tests folder define test parameters for AntaTest subclass unit tests. A generic test function is written for all unit tests in the tests.lib.anta module. The pytest_generate_tests function defined in conftest.py is called during test collection. It parametrizes the generic test function based on the DATA data structure defined in the tests.units.anta_tests modules. See https://docs.pytest.org/en/7.3.x/how-to/parametrize.html#basic-pytest-generate-tests-example
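    For illustration, a simplified sketch of that parametrization mechanism (not ANTA's exact conftest.py) could look like this:

    # Simplified sketch of the DATA-driven parametrization (not ANTA's exact conftest.py)
    import pytest

    def pytest_generate_tests(metafunc: "pytest.Metafunc") -> None:
        """Parametrize the generic test function from the DATA list defined in each test module."""
        if "data" in metafunc.fixturenames:
            data = getattr(metafunc.module, "DATA", [])
            metafunc.parametrize("data", data, ids=[entry["name"] for entry in data])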

    The DATA structure is a list of dictionaries used to parametrize the test. The list elements have the following keys:
    • name (str): Test name as displayed by Pytest.
    • test (AntaTest): An AntaTest subclass imported in the test module - e.g. VerifyUptime.
    • eos_data (list[dict]): List of data mocking EOS returned data to be passed to the test.
    • inputs (dict): Dictionary to instantiate the test inputs as defined in the class from test.
    • expected (dict): Expected test result structure: a dictionary containing a key result with one of the allowed statuses (Literal['success', 'failure', 'unset', 'skipped', 'error']) and optionally a key messages, which is a list(str); each message is expected to be a substring of one of the actual messages in the TestResult object.

    In order for your unit tests to be correctly collected, you need to import the generic test function even if not used in the Python module.

    Test example for anta.tests.system.VerifyUptime AntaTest.

    # Import the generic test function\nfrom tests.lib.anta import test  # noqa: F401\n\n# Import your AntaTest\nfrom anta.tests.system import VerifyUptime\n\n# Define test parameters\nDATA: list[dict[str, Any]] = [\n   {\n        # Arbitrary test name\n        \"name\": \"success\",\n        # Must be an AntaTest definition\n        \"test\": VerifyUptime,\n        # Data returned by EOS on which the AntaTest is tested\n        \"eos_data\": [{\"upTime\": 1186689.15, \"loadAvg\": [0.13, 0.12, 0.09], \"users\": 1, \"currentTime\": 1683186659.139859}],\n        # Dictionary to instantiate VerifyUptime.Input\n        \"inputs\": {\"minimum\": 666},\n        # Expected test result\n        \"expected\": {\"result\": \"success\"},\n    },\n    {\n        \"name\": \"failure\",\n        \"test\": VerifyUptime,\n        \"eos_data\": [{\"upTime\": 665.15, \"loadAvg\": [0.13, 0.12, 0.09], \"users\": 1, \"currentTime\": 1683186659.139859}],\n        \"inputs\": {\"minimum\": 666},\n        # If the test returns messages, it needs to be expected otherwise test will fail.\n        # NB: expected messages only needs to be included in messages returned by the test. Exact match is not required.\n        \"expected\": {\"result\": \"failure\", \"messages\": [\"Device uptime is 665.15 seconds\"]},\n    },\n]\n
    "},{"location":"contribution/#git-pre-commit-hook","title":"Git Pre-commit hook","text":"
    pip install pre-commit\npre-commit install\n

    When running a commit or a pre-commit check:

    \u276f echo \"import foobaz\" > test.py && git add test.py\n\u276f pre-commit\npylint...................................................................Failed\n- hook id: pylint\n- exit code: 22\n\n************* Module test\ntest.py:1:0: C0114: Missing module docstring (missing-module-docstring)\ntest.py:1:0: E0401: Unable to import 'foobaz' (import-error)\ntest.py:1:0: W0611: Unused import foobaz (unused-import)\n

    NOTE: It can happen that pre-commit and tox disagree on something; in that case, please open an issue on GitHub so we can take a look. It is most probably a wrong configuration on our side.

    "},{"location":"contribution/#configure-mypypath","title":"Configure MYPYPATH","text":"

    In some cases, mypy can complain about not having MYPYPATH configured in your shell. This is especially the case when you update both an ANTA test and its unit test. You can configure this environment variable with:

    # Option 1: use local folder\nexport MYPYPATH=.\n\n# Option 2: use absolute path\nexport MYPYPATH=/path/to/your/local/anta/repository\n
    "},{"location":"contribution/#documentation","title":"Documentation","text":"

    mkdocs is used to generate the documentation. A PR should always update the documentation to avoid documentation debt.

    "},{"location":"contribution/#install-documentation-requirements","title":"Install documentation requirements","text":"

    Run pip to install the documentation requirements from the root of the repo:

    pip install -e .[doc]\n
    "},{"location":"contribution/#testing-documentation","title":"Testing documentation","text":"

    You can then check the documentation locally using the following command from the root of the repo:

    mkdocs serve\n

    By default, mkdocs listens on http://127.0.0.1:8000/. If you need to expose the documentation on another IP or port (for instance, all IPs on port 8080), use the following command:

    mkdocs serve --dev-addr=0.0.0.0:8080\n
    "},{"location":"contribution/#build-class-diagram","title":"Build class diagram","text":"

    To build a class diagram to use in the API documentation, you can use pyreverse (part of pylint) with graphviz installed for JPEG generation.

    pyreverse anta --colorized -a1 -s1 -o jpeg -m true -k --output-directory docs/imgs/uml/ -c <FQDN anta class>\n

    The image will be generated under docs/imgs/uml/ and can be inserted in your documentation.

    "},{"location":"contribution/#checking-links","title":"Checking links","text":"

    Writing documentation is crucial but managing links can be cumbersome. To be sure there are no dead links, you can use muffet with the following command:

    muffet -c 2 --color=always http://127.0.0.1:8000 -e fonts.gstatic.com\n
    "},{"location":"contribution/#continuous-integration","title":"Continuous Integration","text":"

    GitHub Actions is used to test git pushes and pull requests. The workflows are defined in this directory. We can view the results here.

    "},{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#frequently-asked-questions-faq","title":"Frequently Asked Questions (FAQ)","text":""},{"location":"faq/#why-am-i-seeing-an-importerror-related-to-urllib3-when-running-anta","title":"Why am I seeing an ImportError related to urllib3 when running ANTA?","text":"

    When running the anta --help command, some users might encounter the following error:

    ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.0.2k-fips  26 Jan 2017'. See: https://github.com/urllib3/urllib3/issues/2168\n

    This error arises due to a compatibility issue between urllib3 v2.0 and older versions of OpenSSL.
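    You can confirm which OpenSSL version your Python interpreter's ssl module is linked against directly from Python:

    # Show the OpenSSL version the 'ssl' module was compiled against
    # urllib3 v2.0 requires OpenSSL 1.1.1 or newer
    import ssl

    print(ssl.OPENSSL_VERSION)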

    "},{"location":"faq/#how-can-i-resolve-this-error","title":"How can I resolve this error?","text":"
    1. Workaround: Downgrade urllib3

      If you need a quick fix, you can temporarily downgrade the urllib3 package:

      pip3 uninstall urllib3\n\npip3 install urllib3==1.26.15\n
    2. Recommended: Upgrade System or Libraries:

      As per the urllib3 v2 migration guide, the root cause of this error is an incompatibility with older OpenSSL versions. For example, users on RHEL7 might consider upgrading to RHEL8, which supports the required OpenSSL version.

    "},{"location":"faq/#still-facing-issues","title":"Still facing issues?","text":"

    If you\u2019ve tried the above solutions and continue to experience problems, please report the issue in our GitHub repository.

    "},{"location":"getting-started/","title":"Getting Started","text":""},{"location":"getting-started/#getting-started","title":"Getting Started","text":"

    This section shows how to use ANTA with a basic configuration. All examples are based on the Arista Test Drive (ATD) topology, which you can access by reaching out to your preferred SE.

    "},{"location":"getting-started/#installation","title":"Installation","text":"

    The easiest way to install the ANTA package is with Python (>=3.8) and pip:

    pip install anta\n

    For more details about how to install the package, please see the requirements and installation section.

    "},{"location":"getting-started/#configure-arista-eos-devices","title":"Configure Arista EOS devices","text":"

    For ANTA to be able to connect to your target devices, you need to configure your management interface:

    vrf instance MGMT\n!\ninterface Management0\n   description oob_management\n   vrf MGMT\n   ip address 192.168.0.10/24\n!\n

    Then, configure access to eAPI:

    !\nmanagement api http-commands\n   protocol https port 443\n   no shutdown\n   vrf MGMT\n      no shutdown\n   !\n!\n
    "},{"location":"getting-started/#create-your-inventory","title":"Create your inventory","text":"

    ANTA uses an inventory to list the target devices for the tests. You can create a file manually with this format:

    anta_inventory:\nhosts:\n- host: 192.168.0.10\nname: spine01\ntags: ['fabric', 'spine']\n- host: 192.168.0.11\nname: spine02\ntags: ['fabric', 'spine']\n- host: 192.168.0.12\nname: leaf01\ntags: ['fabric', 'leaf']\n- host: 192.168.0.13\nname: leaf02\ntags: ['fabric', 'leaf']\n- host: 192.168.0.14\nname: leaf03\ntags: ['fabric', 'leaf']\n- host: 192.168.0.15\nname: leaf04\ntags: ['fabric', 'leaf']\n

    You can read more details about how to build your inventory here.

    "},{"location":"getting-started/#test-catalog","title":"Test Catalog","text":"

    To test your network, ANTA relies on a test catalog that lists all the tests to run against your inventory. A test catalog references Python tests in a YAML file.

    The structure to follow is:

    <anta_tests_submodule>:\n- <anta_tests_submodule function name>:\n<test function option>:\n<test function option value>\n

    You can read more details about how to build your catalog here.

    Here is an example for basic tests:

    # Load anta.tests.software\nanta.tests.software:\n- VerifyEOSVersion: # Verifies the device is running one of the allowed EOS version.\nversions: # List of allowed EOS versions.\n- 4.25.4M\n- 4.26.1F\n- '4.28.3M-28837868.4283M (engineering build)'\n- VerifyTerminAttrVersion:\nversions:\n- v1.22.1\n\nanta.tests.system:\n- VerifyUptime: # Verifies the device uptime is higher than a value.\nminimum: 1\n- VerifyNTP:\n- VerifySyslog:\n\nanta.tests.mlag:\n- VerifyMlagStatus:\n- VerifyMlagInterfaces:\n- VerifyMlagConfigSanity:\n\nanta.tests.configuration:\n- VerifyZeroTouch: # Verifies ZeroTouch is disabled.\n- VerifyRunningConfigDiffs:\n
    "},{"location":"getting-started/#test-your-network","title":"Test your network","text":"

    ANTA comes with a generic CLI entrypoint to run tests in your network. It requires an inventory file as well as a test catalog.

    This entrypoint has multiple options to manage test coverage and reporting.

    # Generic ANTA options\n$ anta\nUsage: anta [OPTIONS] COMMAND [ARGS]...\n\n  Arista Network Test Automation (ANTA) CLI\n\nOptions:\n  --version                       Show the version and exit.\n  --username TEXT                 Username to connect to EOS  [env var:\n                                  ANTA_USERNAME; required]\n--password TEXT                 Password to connect to EOS that must be\n                                  provided. It can be prompted using '--\n                                  prompt' option.  [env var: ANTA_PASSWORD]\n--enable-password TEXT          Password to access EOS Privileged EXEC mode.\n                                  It can be prompted using '--prompt' option.\n                                  Requires '--enable' option.  [env var:\n                                  ANTA_ENABLE_PASSWORD]\n--enable                        Some commands may require EOS Privileged\n                                  EXEC mode. This option tries to access this\n                                  mode before sending a command to the device.\n                                  [env var: ANTA_ENABLE]\n-P, --prompt                    Prompt for passwords if they are not\n                                  provided.\n  --timeout INTEGER               Global connection timeout  [env var:\n                                  ANTA_TIMEOUT; default: 30]\n--insecure                      Disable SSH Host Key validation  [env var:\n                                  ANTA_INSECURE]\n-i, --inventory FILE            Path to the inventory YAML file  [env var:\n                                  ANTA_INVENTORY; required]\n--log-file FILE                 Send the logs to a file. If logging level is\n                                  DEBUG, only INFO or higher will be sent to\n                                  stdout.  [env var: ANTA_LOG_FILE]\n--log-level, --log [CRITICAL|ERROR|WARNING|INFO|DEBUG]\nANTA logging level  [env var:\n                                  ANTA_LOG_LEVEL; default: INFO]\n--ignore-status                 Always exit with success  [env var:\n                                  ANTA_IGNORE_STATUS]\n--ignore-error                  Only report failures and not errors  [env\n                                  var: ANTA_IGNORE_ERROR]\n--help                          Show this message and exit.\n\nCommands:\n  debug  Debug commands for building ANTA\n  exec   Execute commands to inventory devices\n  get    Get data from/to ANTA\n  nrfu   Run NRFU against inventory devices\n
    # NRFU part of ANTA\n$ anta nrfu --help\nUsage: anta nrfu [OPTIONS] COMMAND [ARGS]...\n\n  Run NRFU against inventory devices\n\nOptions:\n  -c, --catalog FILE  Path to the tests catalog YAML file  [env var:\n                      ANTA_NRFU_CATALOG; required]\n--help              Show this message and exit.\n\nCommands:\n  json        ANTA command to check network state with JSON result\n  table       ANTA command to check network states with table result\n  text        ANTA command to check network states with text result\n  tpl-report  ANTA command to check network state with templated report\n

    Currently, to be able to run anta nrfu --help, you need to provide ANTA with the mandatory input parameters (username, password and inventory); otherwise the CLI will report an issue. This is tracked in: https://github.com/arista-netdevops-community/anta/issues/263

    To run the NRFU, you need to select an output format amongst [\u201cjson\u201d, \u201ctable\u201d, \u201ctext\u201d, \u201ctpl-report\u201d]. For a first usage, table is recommended. By default, all test results for all devices are rendered, but this can be changed to a report per test case or per host.

    "},{"location":"getting-started/#default-report-using-table","title":"Default report using table","text":"
    anta \\\n--username tom \\\n--password arista123 \\\n--enable \\\n--enable-password t \\\n--inventory .personal/inventory_atd.yml \\\nnrfu --catalog .personal/tests-bases.yml table --tags leaf\n\n\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Settings \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Running ANTA tests:                                  \u2502\n\u2502 - ANTA Inventory contains 6 devices (AsyncEOSDevice) \u2502\n\u2502 - Tests catalog contains 10 tests                    \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n[10:17:24] INFO     Running ANTA tests...                                                                                                           runner.py:75\n  \u2022 Running NRFU Tests...100% \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 40/40 \u2022 0:00:02 \u2022 0:00:00\n\n                                                                       All tests results\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Device IP \u2503 Test Name                \u2503 Test Status \u2503 Message(s)       \u2503 Test description                                                     \u2503 Test category 
\u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 leaf01    \u2502 VerifyEOSVersion         \u2502 success     \u2502                  \u2502 Verifies the device is running one of the allowed EOS version.       \u2502 software      \u2502\n\u2502 leaf01    \u2502 VerifyTerminAttrVersion  \u2502 success     \u2502                  \u2502 Verifies the device is running one of the allowed TerminAttr         \u2502 software      \u2502\n\u2502           \u2502                          \u2502             \u2502                  \u2502 version.                                                             \u2502               \u2502\n\u2502 leaf01    \u2502 VerifyUptime             \u2502 success     \u2502                  \u2502 Verifies the device uptime is higher than a value.                   \u2502 system        \u2502\n\u2502 leaf01    \u2502 VerifyNTP                \u2502 success     \u2502                  \u2502 Verifies NTP is synchronised.                                        \u2502 system        \u2502\n\u2502 leaf01    \u2502 VerifySyslog             \u2502 success     \u2502                  \u2502 Verifies the device had no syslog message with a severity of warning \u2502 system        \u2502\n\u2502           \u2502                          \u2502             \u2502                  \u2502 (or a more severe message) during the last 7 days.                   \u2502               \u2502\n\u2502 leaf01    \u2502 VerifyMlagStatus         \u2502 skipped     \u2502 MLAG is disabled \u2502 This test verifies the health status of the MLAG configuration.      \u2502 mlag          \u2502\n\u2502 leaf01    \u2502 VerifyMlagInterfaces     \u2502 skipped     \u2502 MLAG is disabled \u2502 This test verifies there are no inactive or active-partial MLAG      \u2502 mlag          \u2502\n[...]\n\u2502 leaf04    \u2502 VerifyMlagConfigSanity   \u2502 skipped     \u2502 MLAG is disabled \u2502 This test verifies there are no MLAG config-sanity inconsistencies.  \u2502 mlag          \u2502\n\u2502 leaf04    \u2502 VerifyZeroTouch          \u2502 success     \u2502                  \u2502 Verifies ZeroTouch is disabled.                                      
\u2502 configuration \u2502\n\u2502 leaf04    \u2502 VerifyRunningConfigDiffs \u2502 success     \u2502                  \u2502                                                                      \u2502 configuration \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
    "},{"location":"getting-started/#report-in-text-mode","title":"Report in text mode","text":"
    $ anta \\\n--username tom \\\n--password arista123 \\\n--enable \\\n--enable-password t \\\n--inventory .personal/inventory_atd.yml \\\nnrfu --catalog .personal/tests-bases.yml text --tags leaf\n\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Settings \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Running ANTA tests:                                  \u2502\n\u2502 - ANTA Inventory contains 6 devices (AsyncEOSDevice) \u2502\n\u2502 - Tests catalog contains 10 tests                    \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n[10:20:47] INFO     Running ANTA tests...                                                                                                           runner.py:75\n  \u2022 Running NRFU Tests...100% \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 40/40 \u2022 0:00:01 \u2022 0:00:00\nleaf01 :: VerifyEOSVersion :: SUCCESS\nleaf01 :: VerifyTerminAttrVersion :: SUCCESS\nleaf01 :: VerifyUptime :: SUCCESS\nleaf01 :: VerifyNTP :: SUCCESS\nleaf01 :: VerifySyslog :: SUCCESS\nleaf01 :: VerifyMlagStatus :: SKIPPED (MLAG is disabled)\nleaf01 :: VerifyMlagInterfaces :: SKIPPED (MLAG is disabled)\nleaf01 :: VerifyMlagConfigSanity :: SKIPPED (MLAG is disabled)\n[...]\n
    "},{"location":"getting-started/#report-per-host","title":"Report per host","text":"
    $ anta \\\n--username tom \\\n--password arista123 \\\n--enable \\\n--enable-password t \\\n--inventory .personal/inventory_atd.yml \\\nnrfu --catalog .personal/tests-bases.yml json --tags leaf\n\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Settings \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Running ANTA tests:                                  \u2502\n\u2502 - ANTA Inventory contains 6 devices (AsyncEOSDevice) \u2502\n\u2502 - Tests catalog contains 10 tests                    \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n[10:21:51] INFO     Running ANTA tests...                                                                                                           runner.py:75\n  \u2022 Running NRFU Tests...100% \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 40/40 \u2022 0:00:02 \u2022 0:00:00\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 JSON results of all tests                                                                                                                                    
\u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n[\n{\n\"name\": \"leaf01\",\n    \"test\": \"VerifyEOSVersion\",\n    \"categories\": [\n\"software\"\n],\n    \"description\": \"Verifies the device is running one of the allowed EOS version.\",\n    \"result\": \"success\",\n    \"messages\": [],\n    \"custom_field\": \"None\",\n  },\n  {\n\"name\": \"leaf01\",\n    \"test\": \"VerifyTerminAttrVersion\",\n    \"categories\": [\n\"software\"\n],\n    \"description\": \"Verifies the device is running one of the allowed TerminAttr version.\",\n    \"result\": \"success\",\n    \"messages\": [],\n    \"custom_field\": \"None\",\n  },\n[...]\n]\n

    You can find more information under the usage section of the website.

    "},{"location":"requirements-and-installation/","title":"Installation","text":""},{"location":"requirements-and-installation/#anta-requirements","title":"ANTA Requirements","text":""},{"location":"requirements-and-installation/#python-version","title":"Python version","text":"

    Python 3 (>=3.8) is required:

    python --version\nPython 3.9.9\n
    "},{"location":"requirements-and-installation/#install-anta-package","title":"Install ANTA package","text":"

    This installation will deploy the test collection, scripts and all their Python requirements.

    The ANTA package and the CLI require some packages that are not part of the Python standard library. They are indicated in the pyproject.toml file, under dependencies.
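    If you want to list those dependencies from an installed copy of ANTA, the standard importlib.metadata module can print them (a small sketch):

    # Print the declared dependencies of the installed anta distribution
    from importlib.metadata import requires

    for requirement in requires("anta") or []:
        print(requirement)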

    "},{"location":"requirements-and-installation/#install-from-pypi-server","title":"Install from Pypi server","text":"
    pip install anta\n
    "},{"location":"requirements-and-installation/#install-anta-from-github","title":"Install ANTA from github","text":"
    pip install git+https://github.com/arista-netdevops-community/anta.git\n\n# You can even specify the branch, tag or commit:\npip install git+https://github.com/arista-netdevops-community/anta.git@<cool-feature-branch>\npip install git+https://github.com/arista-netdevops-community/anta.git@<cool-tag>\npip install git+https://github.com/arista-netdevops-community/anta.git@<more-or-less-cool-hash>\n
    "},{"location":"requirements-and-installation/#check-installation","title":"Check installation","text":"

    After installing ANTA, verify the installation with the following commands:

    # Check ANTA has been installed in your python path\npip list | grep anta\n\n# Check scripts are in your $PATH\n# Path may differ but it means CLI is in your path\nwhich anta\n/home/tom/.pyenv/shims/anta\n

    Warning

    Before running the anta --version command, please be aware that some users have reported issues related to the urllib3 package. If you encounter an error at this step, please refer to our FAQ page for guidance on resolving it.

    # Check ANTA version\nanta --version\nanta, version v0.7.2\n
    "},{"location":"requirements-and-installation/#eos-requirements","title":"EOS Requirements","text":"

    To get ANTA working, the targeted Arista EOS devices must have the following configuration (assuming you connect to the device using the Management interface in the MGMT VRF):

    configure\n!\nvrf instance MGMT\n!\ninterface Management1\n   description oob_management\n   vrf MGMT\n   ip address 10.73.1.105/24\n!\nend\n

    Enable eAPI in the MGMT VRF:

    configure\n!\nmanagement api http-commands\n   protocol https port 443\n   no shutdown\n   vrf MGMT\n      no shutdown\n!\nend\n

    The switch now accepts HTTPS requests containing a list of CLI commands on port 443 in the MGMT VRF.

    Run these EOS commands to verify:

    show management http-server\nshow management api http-commands\n
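    From the host that will run ANTA, a plain TCP check is a quick way to confirm the eAPI port is reachable (a minimal sketch; replace the address with your device's management IP):

    # Minimal TCP reachability check of the eAPI HTTPS port
    import socket

    def eapi_port_open(host: str, port: int = 443, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(eapi_port_open("10.73.1.105"))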
    "},{"location":"usage-inventory-catalog/","title":"Inventory & Tests catalog","text":""},{"location":"usage-inventory-catalog/#inventory-and-catalog-definition","title":"Inventory and Catalog definition","text":"

    This page describes how to create an inventory and a test catalog.

    "},{"location":"usage-inventory-catalog/#create-an-inventory-file","title":"Create an inventory file","text":"

    The anta CLI needs an inventory file to list all devices to test. This inventory is a YAML file with the following keys:

    anta_inventory:\nhosts:\n- host: < ip address value >\nport: < TCP port for eAPI. Default is 443 (Optional)>\nname: < name to display in report. Default is host:port (Optional) >\ntags: < list of tags to use to filter inventory during tests. Default is ['all']. (Optional) >\nnetworks:\n- network: < network using CIDR notation >\ntags: < list of tags to use to filter inventory during tests. Default is ['all']. (Optional) >\nranges:\n- start: < first ip address value of the range >\nend: < last ip address value of the range >\ntags: < list of tags to use to filter inventory during tests. Default is ['all']. (Optional) >\n

    Your inventory file can be based on any of these 3 keys and MUST start with the anta_inventory key. A full description of the inventory model is available in the API documentation.

    An inventory example:

    ---\nanta_inventory:\nhosts:\n- host: 192.168.0.10\nname: spine01\ntags: ['fabric', 'spine']\n- host: 192.168.0.11\nname: spine02\ntags: ['fabric', 'spine']\nnetworks:\n- network: '192.168.110.0/24'\ntags: ['fabric', 'leaf']\nranges:\n- start: 10.0.0.9\nend: 10.0.0.11\ntags: ['fabric', 'l2leaf']\n
    "},{"location":"usage-inventory-catalog/#test-catalog","title":"Test Catalog","text":"

    In addition to your inventory file, you also have to define a catalog of tests to execute against all your devices. This catalog lists all your tests and their parameters. Its format is a YAML file whose keys are the Python paths of the test modules.

    "},{"location":"usage-inventory-catalog/#default-tests-catalog","title":"Default tests catalog","text":"

    All tests are located under the anta.tests module and are categorised per family (one submodule per family). So to run the software version test, you can do:

    anta.tests.software:\n- VerifyEosVersion:\n

    It will load the test VerifyEosVersion located in anta.tests.software. But since this test has parameters, we will create a catalog with the following structure:

    anta.tests.software:\n- VerifyEosVersion:\n# List of allowed EOS versions.\nversions:\n- 4.25.4M\n- 4.26.1F\n

    To get a list of all available tests and their respective parameters, you can read the tests section of this website.

    The following example gives a very minimal test catalog you can use in almost any situation:

    ---\n# Load anta.tests.software\nanta.tests.software:\n# Verifies the device is running one of the allowed EOS version.\n- VerifyEosVersion:\n# List of allowed EOS versions.\nversions:\n- 4.25.4M\n- 4.26.1F\n\n# Load anta.tests.system\nanta.tests.system:\n# Verifies the device uptime is higher than a value.\n- VerifyUptime:\nminimum: 1\n\n# Load anta.tests.configuration\nanta.tests.configuration:\n# Verifies ZeroTouch is disabled.\n- VerifyZeroTouch:\n- VerifyRunningConfigDiffs:\n
    "},{"location":"usage-inventory-catalog/#custom-tests-catalog","title":"Custom tests catalog","text":"

    In case you want to leverage your own test collection, you can use the following syntax:

    <your package name>:\n- <your test in your package name>:\n

    So for instance, it could be:

    titom73.tests.system:\n- VerifyPlatform:\ntype: ['cEOS-LAB']\n

    How to create custom tests

    To create your custom tests, you should refer to the following documentation.

    "},{"location":"usage-inventory-catalog/#customize-test-description-and-categories","title":"Customize test description and categories","text":"

    It might be interesting to use your own categories and customized test descriptions to build a better report for your environment. ANTA comes with a handy feature to define your own categories and descriptions in the report.

    In your test catalog, use the result_overwrite dictionary with categories and description to overwrite these values in your report:

    anta.tests.configuration:\n- VerifyZeroTouch: # Verifies ZeroTouch is disabled.\nresult_overwrite:\ncategories: ['demo', 'pr296']\ndescription: A custom test\n- VerifyRunningConfigDiffs:\nanta.tests.interfaces:\n- VerifyInterfaceUtilization:\n

    Once you run anta nrfu table, you will see the following output:

    \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Device IP \u2503 Test Name                  \u2503 Test Status \u2503 Message(s) \u2503 Test description                              \u2503 Test category \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 spine01   \u2502 VerifyZeroTouch            \u2502 success     \u2502            \u2502 A custom test                                 \u2502 demo, pr296   \u2502\n\u2502 spine01   \u2502 VerifyRunningConfigDiffs   \u2502 success     \u2502            \u2502                                               \u2502 configuration \u2502\n\u2502 spine01   \u2502 VerifyInterfaceUtilization \u2502 success     \u2502            \u2502 Verifies interfaces utilization is below 75%. \u2502 interfaces    \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
    "},{"location":"advanced_usages/as-python-lib/","title":"ANTA as a Python Library","text":"

    ANTA is a Python library that can be used in user applications. This section describes how you can leverage ANTA Python modules to help you create your own NRFU solution.

    Tip

    If you are unfamiliar with asyncio, refer to the Python documentation relevant to your Python version - https://docs.python.org/3/library/asyncio.html

    "},{"location":"advanced_usages/as-python-lib/#antadevice-abstract-class","title":"AntaDevice Abstract Class","text":"

    A device is represented in ANTA as an instance of a subclass of the AntaDevice abstract class. There are a few abstract methods that need to be implemented by child classes:

    • The collect() coroutine is in charge of collecting outputs of AntaCommand instances.
    • The refresh() coroutine is in charge of updating attributes of the AntaDevice instance. These attributes are used by AntaInventory to filter out unreachable devices or by AntaTest to skip devices based on their hardware models.

    The copy() coroutine is used to copy files to and from the device. It does not need to be implemented if your tests do not use it.
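    As an illustration, a minimal, hypothetical subclass could look like the sketch below (the import path and method signatures are assumptions for illustration only; refer to the AntaDevice definition for the exact abstract interface):

    # Hypothetical AntaDevice subclass; import path and signatures are assumptions, not the exact ANTA API
    from anta.device import AntaDevice  # assumed import path
    from anta.models import AntaCommand

    class DummyDevice(AntaDevice):
        """Device that returns canned outputs instead of talking to real hardware."""

        async def collect(self, command: AntaCommand) -> None:
            # Fill the command output from a local store instead of calling eAPI
            command.output = {"dummy": True}

        async def refresh(self) -> None:
            # Pretend the device is reachable and report a fake hardware model
            self.is_online = True
            self.established = True
            self.hw_model = "cEOS-LAB"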

    "},{"location":"advanced_usages/as-python-lib/#asynceosdevice-class","title":"AsyncEOSDevice Class","text":"

    The AsyncEOSDevice class is an implementation of AntaDevice for Arista EOS. It uses the aio-eapi eAPI client and the AsyncSSH library.

    • The collect() coroutine collects AntaCommand outputs using eAPI.
    • The refresh() coroutine tries to open a TCP connection on the eAPI port and updates the is_online attribute accordingly. If the TCP connection succeeds, it sends a show version command to gather the hardware model of the device and updates the established and hw_model attributes.
    • The copy() coroutine copies files to and from the device using the SCP protocol.
    "},{"location":"advanced_usages/as-python-lib/#antainventory-class","title":"AntaInventory Class","text":"

    The AntaInventory class is a subclass of the standard Python type dict. The keys of this dictionary are the device names, and the values are AntaDevice instances.

    AntaInventory provides methods to interact with the ANTA inventory:

    • The add_device() method adds an AntaDevice instance to the inventory. Adding an entry to AntaInventory with a key different from the device name is not allowed.
    • The get_inventory() method returns a new AntaInventory instance with devices filtered out based on the method inputs.
    • The connect_inventory() coroutine will execute the refresh() coroutines of all the devices in the inventory.
    • The parse() static method creates an AntaInventory instance from a YAML file and returns it. The devices are AsyncEOSDevice instances.

    To parse a YAML inventory file and print the devices connection status:

    \"\"\"\nExample\n\"\"\"\nimport asyncio\n\nfrom anta.inventory import AntaInventory\n\n\nasync def main(inv: AntaInventory) -> None:\n\"\"\"\n    Take an AntaInventory and:\n    1. try to connect to every device in the inventory\n    2. print a message for every device connection status\n    \"\"\"\n    await inv.connect_inventory()\n\n    for device in inv.values():\n        if device.established:\n            print(f\"Device {device.name} is online\")\n        else:\n            print(f\"Could not connect to device {device.name}\")\n\nif __name__ == \"__main__\":\n    # Create the AntaInventory instance\n    inventory = AntaInventory.parse(\n        inventory_file=\"inv.yml\",\n        username=\"arista\",\n        password=\"@rista123\",\n        timeout=15,\n    )\n\n    # Run the main coroutine\n    res = asyncio.run(main(inventory))\n
    How to create your inventory file

    Please visit this dedicated section for how to use inventory and catalog files.

    To run a list of EOS commands on the reachable devices from the inventory:

    \"\"\"\nExample\n\"\"\"\n# This is needed to run the script for python < 3.10 for typing annotations\nfrom __future__ import annotations\n\nimport asyncio\nfrom pprint import pprint\n\nfrom anta.inventory import AntaInventory\nfrom anta.models import AntaCommand\n\n\nasync def main(inv: AntaInventory, commands: list[str]) -> dict[str, list[AntaCommand]]:\n\"\"\"\n    Take an AntaInventory and a list of commands as string and:\n    1. try to connect to every device in the inventory\n    2. collect the results of the commands from each device\n\n    Returns:\n      a dictionary where key is the device name and the value is the list of AntaCommand ran towards the device\n    \"\"\"\n    await inv.connect_inventory()\n\n    # Make a list of coroutine to run commands towards each connected device\n    coros = []\n    # dict to keep track of the commands per device\n    result_dict = {}\n    for name, device in inv.get_inventory(established_only=True).items():\n        anta_commands = [AntaCommand(command=command, ofmt=\"json\") for command in commands]\n        result_dict[name] = anta_commands\n        coros.append(device.collect_commands(anta_commands))\n\n    # Run the coroutines\n    await asyncio.gather(*coros)\n\n    return result_dict\n\n\nif __name__ == \"__main__\":\n    # Create the AntaInventory instance\n    inventory = AntaInventory.parse(\n        inventory_file=\"inv.yml\",\n        username=\"arista\",\n        password=\"@rista123\",\n        timeout=15,\n    )\n\n    # Create a list of commands with json output\n    commands = [\"show version\", \"show ip bgp summary\"]\n\n    # Run the main asyncio  entry point\n    res = asyncio.run(main(inventory, commands))\n\n    pprint(res)\n

    "},{"location":"advanced_usages/as-python-lib/#use-tests-from-anta","title":"Use tests from ANTA","text":"

    All the test classes inherit from the same abstract base class, AntaTest. The class definition indicates which commands are required for the test, and the user should focus only on writing the test function with optional keyword arguments. Upon creation, the instance of the class instantiates a TestResult object that can be accessed later on to check the status of the test ([unset, skipped, success, failure, error]).

    "},{"location":"advanced_usages/as-python-lib/#test-structure","title":"Test structure","text":"

    All tests are built on a class named AntaTest which provides a complete toolset for a test:

    • Object creation
    • Test definition
    • TestResult definition
    • Abstracted method to collect data

This approach means that every test you create is based on this AntaTest class. In addition, you have to provide a few elements:

    • name: Name of the test
• description: A human-readable description of your test
• categories: a list of categories used to sort tests.
• commands: a list of commands to run. This list must contain AntaCommand instances, which are described in the next part of this document.

    Here is an example of a hardware test related to device temperature:

from __future__ import annotations\n\nimport logging\nfrom typing import Any, Dict, List, Optional, cast\n\nfrom anta.models import AntaTest, AntaCommand\n\n\nclass VerifyTemperature(AntaTest):\n\"\"\"\n    Verifies device temperature is currently OK.\n    \"\"\"\n\n    # The test name\n    name = \"VerifyTemperature\"\n    # A small description of the test, usually the first line of the class docstring\n    description = \"Verifies device temperature is currently OK\"\n    # The category of the test, usually the module name\n    categories = [\"hardware\"]\n    # The command(s) used for the test. Could be a template instead\n    commands = [AntaCommand(command=\"show system environment temperature\", ofmt=\"json\")]\n\n    # Decorator\n    @AntaTest.anta_test\n    # abstract method that must be defined by the child Test class\n    def test(self) -> None:\n\"\"\"Run VerifyTemperature validation\"\"\"\n        command_output = cast(Dict[str, Dict[Any, Any]], self.instance_commands[0].output)\n        temperature_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        if temperature_status == \"temperatureOk\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device temperature is not OK, systemStatus: {temperature_status}\")\n

When you run the test, the object will automatically call its anta.models.AntaTest.collect() method to get the device output for each command if no pre-collected data was given to the test. This method loops over the commands and calls the anta.inventory.models.InventoryDevice.collect() method, which is in charge of managing the device connection and retrieving the data.

    run test offline

You can also pass EOS data directly to your test if you want to validate data collected in a different workflow. An example is provided below for information:

    test = VerifyTemperature(mocked_device, eos_data=test_data[\"eos_data\"])\nasyncio.run(test.test())\n

The test function is always the same and must be defined with the @AntaTest.anta_test decorator. This function takes at least one argument, which is an anta.inventory.models.InventoryDevice object. In some cases a test relies on additional inputs from the user, for instance the number of expected peers or other expected values. All parameters must come with a default value, and the test function should validate the parameter values (at this stage this is the only place where validation can be done, but there are plans to improve this).

    class VerifyTemperature(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self) -> None:\n        pass\n\nclass VerifyTransceiversManufacturers(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self, manufacturers: Optional[List[str]] = None) -> None:\n        # validate the manufactures parameter\n        pass\n

The test itself does not return any value, but the result is directly available from your AntaTest object, which exposes an anta.result_manager.models.TestResult object with the result, the name of the test and optional messages:

    • name (str): Device name where the test has run.
• test (str): Name of the test run on the device.
• categories (List[str]): List of categories the TestResult belongs to, by default the AntaTest categories.
• description (str): TestResult description, by default the AntaTest description.
• result (str): Result of the test. Can be one of [“unset”, “success”, “failure”, “error”, “skipped”].
    • message (str, optional): Message to report after the test if any.
    • custom_field (str, optional): Custom field to store a string for flexibility in integrating with ANTA
    from anta.tests.hardware import VerifyTemperature\n\ntest = VerifyTemperature(mocked_device, eos_data=test_data[\"eos_data\"])\nasyncio.run(test.test())\nassert test.result.result == \"success\"\n
    "},{"location":"advanced_usages/as-python-lib/#classes-for-commands","title":"Classes for commands","text":"

    To make it easier to get data, ANTA defines 2 different classes to manage commands to send to devices:

    "},{"location":"advanced_usages/as-python-lib/#antacommand-class","title":"AntaCommand Class","text":"

Represents a command with the following information:

    • Command to run
• Output format expected
    • eAPI version
    • Output of the command

    Usage example:

    from anta.models import AntaCommand\n\ncmd1 = AntaCommand(command=\"show zerotouch\")\ncmd2 = AntaCommand(command=\"show running-config diffs\", ofmt=\"text\")\n

    Command revision and version

• Most EOS commands return a JSON structure according to a model (some commands may not be modeled, hence the need to sometimes use the text output format).
• The model can change over time (adding features, …) and when the model is changed in a non-backward-compatible way, the revision number is bumped. The initial model starts with revision 1.
• A revision applies to a particular CLI command whereas a version is global to an eAPI call. The version is internally translated to a specific revision for each CLI command in the RPC call. The currently supported version values are 1 and latest.
• A revision takes precedence over a version (e.g. if a command is run with version=”latest” and revision=1, the first revision of the model is returned)
• By default eAPI returns the first revision of each model to ensure that integration with existing tools is not broken when upgrading. This is done by using version=1 by default in eAPI calls.

    ANTA uses by default version=\"latest\" in AntaCommand. For some commands, you may want to run them with a different revision or version.

    For instance the VerifyRoutingTableSize test leverages the first revision of show bfd peers:

# revision 1 as later revisions introduce additional nesting for type\ncommands = [AntaCommand(command=\"show bfd peers\", revision=1)]\n
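
Conversely, an AntaCommand can be pinned to the original model by setting the version instead of a specific revision (illustrative snippet, not taken from a built-in test):

# Illustrative only: request the original model by version instead of a specific revision\ncommands = [AntaCommand(command=\"show version\", version=1)]\n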
    "},{"location":"advanced_usages/as-python-lib/#antatemplate-class","title":"AntaTemplate Class","text":"

Because some commands require more dynamic input than a static command with no parameters, ANTA supports command templates: you define a template in your test class and the user provides parameters when creating the test object.

class RunArbitraryTemplateCommand(AntaTest):\n\"\"\"\n    Run an EOS command and return result\n    Based on AntaTest to build relevant output for pytest\n    \"\"\"\n\n    name = \"Run arbitrary EOS command\"\n    description = \"To be used only with anta debug commands\"\n    template = AntaTemplate(template=\"show interfaces {ifd}\")\n    categories = [\"debug\"]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Grab the JSON output of the collected command\n        response = self.instance_commands[0].json_output\n        errdisabled_interfaces = [interface for interface, value in response[\"interfaceStatuses\"].items() if value[\"linkStatus\"] == \"errdisabled\"]\n        ...\n\n\nparams = [{\"ifd\": \"Ethernet2\"}, {\"ifd\": \"Ethernet49/1\"}]\nrun_command1 = RunArbitraryTemplateCommand(device_anta, params)\n

In this example, the test expects the list of interfaces to check from the user and will only check the interfaces provided in params.

    "},{"location":"advanced_usages/custom-tests/","title":"Developing ANTA tests","text":"

This documentation applies both to creating tests in ANTA and to creating your own test package.

    ANTA is not only a Python library with a CLI and a collection of built-in tests, it is also a framework you can extend by building your own tests.

    "},{"location":"advanced_usages/custom-tests/#generic-approach","title":"Generic approach","text":"

    A test is a Python class where a test function is defined and will be run by the framework.

ANTA provides an abstract class AntaTest. This class does the heavy lifting and provides the logic to define, collect and test data. The code below is an example of a simple test in ANTA, which is an AntaTest subclass:

    from anta.models import AntaTest, AntaCommand\nfrom anta.decorators import skip_on_platforms\n\n\nclass VerifyTemperature(AntaTest):\n\"\"\"\n    This test verifies if the device temperature is within acceptable limits.\n\n    Expected Results:\n      * success: The test will pass if the device temperature is currently OK: 'temperatureOk'.\n      * failure: The test will fail if the device temperature is NOT OK.\n    \"\"\"\n\n    name = \"VerifyTemperature\"\n    description = \"Verifies if the device temperature is within the acceptable range.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment temperature\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        temperature_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        if temperature_status == \"temperatureOk\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device temperature exceeds acceptable limits. Current system status: '{temperature_status}'\")\n

AntaTest also provides more advanced capabilities, like AntaCommand templating using the AntaTemplate class, or test input definition and validation using the AntaTest.Input pydantic model. These are discussed in the sections below.

    "},{"location":"advanced_usages/custom-tests/#antatest-structure","title":"AntaTest structure","text":""},{"location":"advanced_usages/custom-tests/#class-attributes","title":"Class Attributes","text":"
    • name (str): Name of the test. Used during reporting.
• description (str): A human-readable description of your test.
• categories (list[str]): A list of categories to which the test belongs.
• commands (list[Union[AntaTemplate, AntaCommand]]): A list of commands to collect from devices. This list must contain AntaCommand or AntaTemplate instances. Rendering AntaTemplate instances will be discussed later.

    Info

    All these class attributes are mandatory. If any attribute is missing, a NotImplementedError exception will be raised during class instantiation.

    "},{"location":"advanced_usages/custom-tests/#instance-attributes","title":"Instance Attributes","text":"

    Info

    You can access an instance attribute in your code using the self reference. E.g. you can access the test input values using self.inputs.

    Logger object

ANTA already provides comprehensive logging at every step of a test execution. The AntaTest class also provides a logger attribute that is a Python logger specific to the test instance. See the Python documentation for more information.
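
For example, inside the test() method you could emit a message tied to this specific test instance (illustrative only):

# Illustrative only: the message goes through the logger dedicated to this test instance\nself.logger.debug(\"Command outputs collected, evaluating temperature status\")\n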

    AntaDevice object

    Even if device is not a private attribute, you should not need to access this object in your code.

    "},{"location":"advanced_usages/custom-tests/#test-inputs","title":"Test Inputs","text":"

AntaTest.Input is a pydantic model that allows test developers to define their test inputs. pydantic provides out-of-the-box error handling for test input validation based on the type hints defined by the test developer.

    The base definition of AntaTest.Input provides common test inputs for all AntaTest instances:

    "},{"location":"advanced_usages/custom-tests/#input-model","title":"Input model","text":""},{"location":"advanced_usages/custom-tests/#resultoverwrite-model","title":"ResultOverwrite model","text":"

    Attributes:

• description (Optional[str]): overwrite TestResult.description
• categories (Optional[List[str]]): overwrite TestResult.categories
• custom_field (Optional[str]): a free string that will be included in the TestResult object

    Note

The pydantic model is configured with extra=forbid, which makes input validation fail if extra fields are provided.
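
In practice this means that passing an unknown field raises a pydantic ValidationError. A minimal illustration, reusing the VerifyTemperature example above with a made-up field name:

from pydantic import ValidationError\n\ntry:\n    # made_up_field is not a declared input, so validation fails because of extra=forbid\n    VerifyTemperature.Input(made_up_field=42)\nexcept ValidationError as exc:\n    print(exc)\n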

    "},{"location":"advanced_usages/custom-tests/#methods","title":"Methods","text":"
    • test(self) -> None: This is an abstract method that must be implemented. It contains the test logic that can access the collected command outputs using the instance_commands instance attribute, access the test inputs using the inputs instance attribute and must set the result instance attribute accordingly. It must be implemented using the AntaTest.anta_test decorator that provides logging and will collect commands before executing the test() method.
• render(self, template: AntaTemplate) -> list[AntaCommand]: This method only needs to be implemented if AntaTemplate instances are present in the commands class attribute. It will be called for every AntaTemplate occurrence and must return a list of AntaCommand using the AntaTemplate.render() method. It can access test inputs using the inputs instance attribute.
    "},{"location":"advanced_usages/custom-tests/#test-execution","title":"Test execution","text":"

    Below is a high level description of the test execution flow in ANTA:

    1. ANTA will parse the test catalog to get the list of AntaTest subclasses to instantiate and their associated input values. We consider a single AntaTest subclass in the following steps.

    2. ANTA will instantiate the AntaTest subclass and a single device will be provided to the test instance. The Input model defined in the class will also be instantiated at this moment. If any ValidationError is raised, the test execution will be stopped.

    3. If there is any AntaTemplate instance in the commands class attribute, render() will be called for every occurrence. At this moment, the instance_commands attribute has been initialized. If any rendering error occurs, the test execution will be stopped.

    4. The AntaTest.anta_test decorator will collect the commands from the device and update the instance_commands attribute with the outputs. If any collection error occurs, the test execution will be stopped.

    5. The test() method is executed.

    "},{"location":"advanced_usages/custom-tests/#writing-an-antatest-subclass","title":"Writing an AntaTest subclass","text":"

    In this section, we will go into all the details of writing an AntaTest subclass.

    "},{"location":"advanced_usages/custom-tests/#class-definition","title":"Class definition","text":"

    Import anta.models.AntaTest and define your own class. Define the mandatory class attributes using anta.models.AntaCommand, anta.models.AntaTemplate or both.

    from anta.models import AntaTest, AntaCommand, AntaTemplate\n\n\nclass <YourTestName>(AntaTest):\n\"\"\"\n    <a docstring description of your test>\n    \"\"\"\n\n    name = \"YourTestName\"                                           # should be your class name\n    description = \"<test description in human reading format>\"\n    categories = [\"<arbitrary category>\", \"<another arbitrary category>\"]\n    commands = [\n        AntaCommand(\n            command=\"<EOS command to run>\",\n            ofmt=\"<command format output>\",\n            version=\"<eAPI version to use>\",\n            revision=\"<revision to use for the command>\",           # revision has precedence over version\n        ),\n        AntaTemplate(\n            template=\"<Python f-string to render an EOS command>\",\n            ofmt=\"<command format output>\",\n            version=\"<eAPI version to use>\",\n            revision=\"<revision to use for the command>\",           # revision has precedence over version\n        )\n    ]\n
    "},{"location":"advanced_usages/custom-tests/#inputs-definition","title":"Inputs definition","text":"

    If the user needs to provide inputs for your test, you need to define a pydantic model that defines the schema of the test inputs:

    class <YourTestName>(AntaTest):\n    ...\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        <input field name>: <input field type>\n\"\"\"<input field docstring>\"\"\"\n

To define an input field type, refer to the pydantic documentation about types. You can also leverage anta.custom_types, which provides reusable types defined in ANTA tests.

    Regarding required, optional and nullable fields, refer to this documentation on how to define them.

    Note

    All the pydantic features are supported. For instance you can define validators for complex input validation.
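
As an illustration, a hypothetical test expecting a minimum number of BGP peers in a given VRF could declare its inputs as follows (class and field names are made up for the example):

class VerifyExampleBgpPeers(AntaTest):\n    ...\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        minimum_peers: int\n        \"\"\"Minimum number of expected BGP peers\"\"\"\n        vrf: str = \"default\"\n        \"\"\"VRF in which the peers are expected\"\"\"\n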

    "},{"location":"advanced_usages/custom-tests/#template-rendering","title":"Template rendering","text":"

    Define the render() method if you have AntaTemplate instances in your commands class attribute:

    class <YourTestName>(AntaTest):\n    ...\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(<template param>=input_value) for input_value in self.inputs.<input_field>]\n

    You can access test inputs and render as many AntaCommand as desired.
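
For instance, a test whose commands contain AntaTemplate(template=\"show interfaces {interface}\") and whose inputs carry a list of interface names could render one command per interface (class and field names are illustrative):

class VerifyExampleInterfaces(AntaTest):\n    ...\n    commands = [AntaTemplate(template=\"show interfaces {interface}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        interfaces: list[str]\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        # One rendered command per interface provided in the test inputs\n        return [template.render(interface=interface) for interface in self.inputs.interfaces]\n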

    "},{"location":"advanced_usages/custom-tests/#test-definition","title":"Test definition","text":"

    Implement the test() method with your test logic:

    class <YourTestName>(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self) -> None:\n        pass\n

The logic usually includes the following stages:

1. Parse the command outputs using the self.instance_commands instance attribute.

2. If needed, access the test inputs using the self.inputs instance attribute and write your conditional logic.

3. Set the result instance attribute to reflect the test result by either calling self.result.is_success() or self.result.is_failure(\"<FAILURE REASON>\"). Sometimes, setting the test result to skipped using self.result.is_skipped(\"<SKIPPED REASON>\") can make sense (e.g. testing the OSPF neighbor states but no neighbor was found). However, you should not need to catch any exception and set the test result to error since the error handling is done by the framework, see below.

    The example below is based on the VerifyTemperature test.

    class VerifyTemperature(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Grab output of the collected command\n        command_output = self.instance_commands[0].json_output\n\n        # Do your test: In this example we check a specific field of the JSON output from EOS\n        temperature_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        if temperature_status == \"temperatureOk\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device temperature exceeds acceptable limits. Current system status: '{temperature_status}'\")\n

As you can see, there is no error handling to do in your code. Everything is packaged in the AntaTest.anta_test decorator, and below is a simple example of an error captured when trying to access a dictionary with an incorrect key:

    class VerifyTemperature(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Grab output of the collected command\n        command_output = self.instance_commands[0].json_output\n\n        # Access the dictionary with an incorrect key\n        command_output['incorrectKey']\n
    ERROR    Exception raised for test VerifyTemperature (on device 192.168.0.10) - KeyError ('incorrectKey')\n

    Get stack trace for debugging

If you want to access the full exception stack trace, you can run ANTA in debug mode by setting the ANTA_DEBUG environment variable to true. Example:

    $ ANTA_DEBUG=true anta nrfu --catalog test_custom.yml text\n

    "},{"location":"advanced_usages/custom-tests/#test-decorators","title":"Test decorators","text":"

In addition to the required AntaTest.anta_test decorator, ANTA offers a set of optional decorators for further test customization:

    • anta.decorators.deprecated_test: Use this to log a message of WARNING severity when a test is deprecated.
    • anta.decorators.skip_on_platforms: Use this to skip tests for functionalities that are not supported on specific platforms.
    • anta.decorators.check_bgp_family_enable: Use this to skip tests when a particular BGP address family is not configured on the device.

    Warning

    The check_bgp_family_enable decorator is deprecated and will eventually be removed in a future major release of ANTA. For more details, please refer to the BGP tests section.

    from anta.decorators import skip_on_platforms\n\nclass VerifyTemperature(AntaTest):\n    ...\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        pass\n
    "},{"location":"advanced_usages/custom-tests/#access-your-custom-tests-in-the-test-catalog","title":"Access your custom tests in the test catalog","text":"

This section is required only if you are not merging your development into ANTA. Otherwise, just follow the contribution guide.

For that, you need to create your own Python package as described in this hitchhiker’s guide to packaging Python code. We assume it is well known and won’t focus on this aspect. Your package must be importable by ANTA, hence available in the module search path sys.path (you can use PYTHONPATH for example).

It is very similar to what is documented in the catalog section, but you have to use your own package name.

Let’s say the custom Python package is anta_titom73 and the test is defined in the anta_titom73.dc_project Python module; the test catalog would look like:

    anta_titom73.dc_project:\n- VerifyFeatureX:\nminimum: 1\n
    And now you can run your NRFU tests with the CLI:

    anta nrfu text --catalog test_custom.yml\nspine01 :: verify_dynamic_vlan :: FAILURE (Device has 0 configured, we expect at least 1)\nspine02 :: verify_dynamic_vlan :: FAILURE (Device has 0 configured, we expect at least 1)\nleaf01 :: verify_dynamic_vlan :: SUCCESS\nleaf02 :: verify_dynamic_vlan :: SUCCESS\nleaf03 :: verify_dynamic_vlan :: SUCCESS\nleaf04 :: verify_dynamic_vlan :: SUCCESS\n
    "},{"location":"api/device/","title":"Device models","text":""},{"location":"api/device/#antadevice-base-class","title":"AntaDevice base class","text":""},{"location":"api/device/#uml-representation","title":"UML representation","text":""},{"location":"api/device/#anta.device.AntaDevice","title":"AntaDevice","text":"
    AntaDevice(name: str, tags: Optional[list[str]] = None)\n

    Bases: ABC

Abstract class representing a device in ANTA. An implementation of this class must override the abstract coroutines collect() and refresh().

    Attributes:

• name (str): Device name
• is_online (bool): True if the device IP is reachable and a port can be open
• established (bool): True if remote command execution succeeds
• hw_model (Optional[str]): Hardware model of the device
• tags (list[str]): List of tags for this device

Parameters:

• name (str): Device name. Required.
• tags (Optional[list[str]]): list of tags for this device. Default: None.

Source code in anta/device.py
    def __init__(self, name: str, tags: Optional[list[str]] = None) -> None:\n\"\"\"\n    Constructor of AntaDevice\n\n    Args:\n        name: Device name\n        tags: list of tags for this device\n    \"\"\"\n    self.name: str = name\n    self.hw_model: Optional[str] = None\n    self.tags: list[str] = tags if tags is not None else []\n    self.is_online: bool = False\n    self.established: bool = False\n\n    # Ensure tag 'all' is always set\n    if DEFAULT_TAG not in self.tags:\n        self.tags.append(DEFAULT_TAG)\n
    "},{"location":"api/device/#anta.device.AntaDevice.collect","title":"collect abstractmethod async","text":"
    collect(command: AntaCommand) -> None\n

    Collect device command output. This abstract coroutine can be used to implement any command collection method for a device in ANTA.

    The collect() implementation needs to populate the output attribute of the AntaCommand object passed as argument.

    If a failure occurs, the collect() implementation is expected to catch the exception and implement proper logging, the output attribute of the AntaCommand object passed as argument would be None in this case.

    Parameters:

• command (AntaCommand): the command to collect. Required.

Source code in anta/device.py
    @abstractmethod\nasync def collect(self, command: AntaCommand) -> None:\n\"\"\"\n    Collect device command output.\n    This abstract coroutine can be used to implement any command collection method\n    for a device in ANTA.\n\n    The `collect()` implementation needs to populate the `output` attribute\n    of the `AntaCommand` object passed as argument.\n\n    If a failure occurs, the `collect()` implementation is expected to catch the\n    exception and implement proper logging, the `output` attribute of the\n    `AntaCommand` object passed as argument would be `None` in this case.\n\n    Args:\n        command: the command to collect\n    \"\"\"\n
    "},{"location":"api/device/#anta.device.AntaDevice.collect_commands","title":"collect_commands async","text":"
    collect_commands(commands: list[AntaCommand]) -> None\n

    Collect multiple commands.

    Parameters:

• commands (list[AntaCommand]): the commands to collect. Required.

Source code in anta/device.py
    async def collect_commands(self, commands: list[AntaCommand]) -> None:\n\"\"\"\n    Collect multiple commands.\n\n    Args:\n        commands: the commands to collect\n    \"\"\"\n    await asyncio.gather(*(self.collect(command=command) for command in commands))\n
    "},{"location":"api/device/#anta.device.AntaDevice.copy","title":"copy async","text":"
    copy(sources: list[Path], destination: Path, direction: Literal['to', 'from'] = 'from') -> None\n

    Copy files to and from the device, usually through SCP. It is not mandatory to implement this for a valid AntaDevice subclass.

    Parameters:

• sources (list[Path]): List of files to copy to or from the device. Required.
• destination (Path): Local or remote destination when copying the files. Can be a folder. Required.
• direction (Literal['to', 'from']): Defines if this coroutine copies files to or from the device. Default: 'from'.

Source code in anta/device.py
    async def copy(self, sources: list[Path], destination: Path, direction: Literal[\"to\", \"from\"] = \"from\") -> None:\n\"\"\"\n    Copy files to and from the device, usually through SCP.\n    It is not mandatory to implement this for a valid AntaDevice subclass.\n\n    Args:\n        sources: List of files to copy to or from the device.\n        destination: Local or remote destination when copying the files. Can be a folder.\n        direction: Defines if this coroutine copies files to or from the device.\n    \"\"\"\n    raise NotImplementedError(f\"copy() method has not been implemented in {self.__class__.__name__} definition\")\n
    "},{"location":"api/device/#anta.device.AntaDevice.refresh","title":"refresh abstractmethod async","text":"
    refresh() -> None\n

    Update attributes of an AntaDevice instance.

    This coroutine must update the following attributes of AntaDevice
    • is_online: When the device IP is reachable and a port can be open
    • established: When a command execution succeeds
    • hw_model: The hardware model of the device
    Source code in anta/device.py
    @abstractmethod\nasync def refresh(self) -> None:\n\"\"\"\n    Update attributes of an AntaDevice instance.\n\n    This coroutine must update the following attributes of AntaDevice:\n        - `is_online`: When the device IP is reachable and a port can be open\n        - `established`: When a command execution succeeds\n        - `hw_model`: The hardware model of the device\n    \"\"\"\n
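
To illustrate the contract above, a rough sketch of a custom AntaDevice implementation that replays pre-collected outputs could look like the following; the class name and the outputs dictionary are made up for the example and are not part of the library:

from __future__ import annotations\n\nfrom typing import Any, Optional\n\nfrom anta.device import AntaDevice\nfrom anta.models import AntaCommand\n\n\nclass DryRunDevice(AntaDevice):\n    \"\"\"Hypothetical AntaDevice subclass replaying pre-collected command outputs\"\"\"\n\n    def __init__(self, name: str, outputs: dict[str, Any], tags: Optional[list[str]] = None) -> None:\n        super().__init__(name, tags)\n        self._outputs = outputs\n\n    async def refresh(self) -> None:\n        # No real connection is made: simply mark the device as usable\n        self.is_online = True\n        self.established = True\n        self.hw_model = \"dry-run\"\n\n    async def collect(self, command: AntaCommand) -> None:\n        # Serve the output from the local store, or None if the command is unknown\n        command.output = self._outputs.get(command.command)\n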
    "},{"location":"api/device/#async-eos-device-class","title":"Async EOS device class","text":""},{"location":"api/device/#uml-representation_1","title":"UML representation","text":""},{"location":"api/device/#anta.device.AsyncEOSDevice","title":"AsyncEOSDevice","text":"
    AsyncEOSDevice(host: str, username: str, password: str, name: Optional[str] = None, enable: bool = False, enable_password: Optional[str] = None, port: Optional[int] = None, ssh_port: Optional[int] = 22, tags: Optional[list[str]] = None, timeout: Optional[float] = None, insecure: bool = False, proto: Literal['http', 'https'] = 'https')\n

    Bases: AntaDevice

    Implementation of AntaDevice for EOS using aio-eapi.

    Attributes:

• name: Device name
• is_online: True if the device IP is reachable and a port can be open
• established: True if remote command execution succeeds
• hw_model: Hardware model of the device
• tags: List of tags for this device

Parameters:

• host (str): Device FQDN or IP. Required.
• username (str): Username to connect to eAPI and SSH. Required.
• password (str): Password to connect to eAPI and SSH. Required.
• name (Optional[str]): Device name. Default: None.
• enable (bool): Device needs privileged access. Default: False.
• enable_password (Optional[str]): Password used to gain privileged access on EOS. Default: None.
• port (Optional[int]): eAPI port. Defaults to 80 if proto is 'http' or 443 if proto is 'https'. Default: None.
• ssh_port (Optional[int]): SSH port. Default: 22.
• tags (Optional[list[str]]): List of tags for this device. Default: None.
• timeout (Optional[float]): Timeout value in seconds for outgoing connections. Defaults to 10 secs. Default: None.
• insecure (bool): Disable SSH Host Key validation. Default: False.
• proto (Literal['http', 'https']): eAPI protocol. Value can be 'http' or 'https'. Default: 'https'.

Source code in anta/device.py
    def __init__(  # pylint: disable=R0913\n    self,\n    host: str,\n    username: str,\n    password: str,\n    name: Optional[str] = None,\n    enable: bool = False,\n    enable_password: Optional[str] = None,\n    port: Optional[int] = None,\n    ssh_port: Optional[int] = 22,\n    tags: Optional[list[str]] = None,\n    timeout: Optional[float] = None,\n    insecure: bool = False,\n    proto: Literal[\"http\", \"https\"] = \"https\",\n) -> None:\n\"\"\"\n    Constructor of AsyncEOSDevice\n\n    Args:\n        host: Device FQDN or IP\n        username: Username to connect to eAPI and SSH\n        password: Password to connect to eAPI and SSH\n        name: Device name\n        enable: Device needs privileged access\n        enable_password: Password used to gain privileged access on EOS\n        port: eAPI port. Defaults to 80 is proto is 'http' or 443 if proto is 'https'.\n        ssh_port: SSH port\n        tags: List of tags for this device\n        timeout: Timeout value in seconds for outgoing connections. Default to 10 secs.\n        insecure: Disable SSH Host Key validation\n        proto: eAPI protocol. Value can be 'http' or 'https'\n    \"\"\"\n    if name is None:\n        name = f\"{host}{f':{port}' if port else ''}\"\n    super().__init__(name, tags)\n    self.enable = enable\n    self._enable_password = enable_password\n    self._session: Device = Device(host=host, port=port, username=username, password=password, proto=proto, timeout=timeout)\n    ssh_params: dict[str, Any] = {}\n    if insecure:\n        ssh_params.update({\"known_hosts\": None})\n    self._ssh_opts: SSHClientConnectionOptions = SSHClientConnectionOptions(host=host, port=ssh_port, username=username, password=password, **ssh_params)\n
    "},{"location":"api/device/#anta.device.AsyncEOSDevice.collect","title":"collect async","text":"
    collect(command: AntaCommand) -> None\n

    Collect device command output from EOS using aio-eapi.

    Supports outformat json and text as output structure. Gain privileged access using the enable_password attribute of the AntaDevice instance if populated.

    Parameters:

• command (AntaCommand): the command to collect. Required.

Source code in anta/device.py
    async def collect(self, command: AntaCommand) -> None:\n\"\"\"\n    Collect device command output from EOS using aio-eapi.\n\n    Supports outformat `json` and `text` as output structure.\n    Gain privileged access using the `enable_password` attribute\n    of the `AntaDevice` instance if populated.\n\n    Args:\n        command: the command to collect\n    \"\"\"\n    try:\n        commands = []\n        if self.enable and self._enable_password is not None:\n            commands.append(\n                {\n                    \"cmd\": \"enable\",\n                    \"input\": str(self._enable_password),\n                }\n            )\n        elif self.enable:\n            # No password\n            commands.append({\"cmd\": \"enable\"})\n        if command.revision:\n            commands.append({\"cmd\": command.command, \"revision\": command.revision})\n        else:\n            commands.append({\"cmd\": command.command})\n        response = await self._session.cli(\n            commands=commands,\n            ofmt=command.ofmt,\n            version=command.version,\n        )\n        # remove first dict related to enable command\n        # only applicable to json output\n        if command.ofmt in [\"json\", \"text\"]:\n            # selecting only our command output\n            response = response[-1]\n        command.output = response\n        logger.debug(f\"{self.name}: {command}\")\n\n    except EapiCommandError as e:\n        message = f\"Command '{command.command}' failed on {self.name}\"\n        anta_log_exception(e, message, logger)\n        command.failed = e\n    except (HTTPError, ConnectError) as e:\n        message = f\"Cannot connect to device {self.name}\"\n        anta_log_exception(e, message, logger)\n        command.failed = e\n    except Exception as e:  # pylint: disable=broad-exception-caught\n        message = f\"Exception raised while collecting command '{command.command}' on device {self.name}\"\n        anta_log_exception(e, message, logger)\n        command.failed = e\n        logger.debug(command)\n
    "},{"location":"api/device/#anta.device.AsyncEOSDevice.copy","title":"copy async","text":"
    copy(sources: list[Path], destination: Path, direction: Literal['to', 'from'] = 'from') -> None\n

    Copy files to and from the device using asyncssh.scp().

    Parameters:

• sources (list[Path]): List of files to copy to or from the device. Required.
• destination (Path): Local or remote destination when copying the files. Can be a folder. Required.
• direction (Literal['to', 'from']): Defines if this coroutine copies files to or from the device. Default: 'from'.

Source code in anta/device.py
    async def copy(self, sources: list[Path], destination: Path, direction: Literal[\"to\", \"from\"] = \"from\") -> None:\n\"\"\"\n    Copy files to and from the device using asyncssh.scp().\n\n    Args:\n        sources: List of files to copy to or from the device.\n        destination: Local or remote destination when copying the files. Can be a folder.\n        direction: Defines if this coroutine copies files to or from the device.\n    \"\"\"\n    async with asyncssh.connect(\n        host=self._ssh_opts.host,\n        port=self._ssh_opts.port,\n        tunnel=self._ssh_opts.tunnel,\n        family=self._ssh_opts.family,\n        local_addr=self._ssh_opts.local_addr,\n        options=self._ssh_opts,\n    ) as conn:\n        src: Union[list[tuple[SSHClientConnection, Path]], list[Path]]\n        dst: Union[tuple[SSHClientConnection, Path], Path]\n        if direction == \"from\":\n            src = [(conn, file) for file in sources]\n            dst = destination\n            for file in sources:\n                logger.info(f\"Copying '{file}' from device {self.name} to '{destination}' locally\")\n        elif direction == \"to\":\n            src = sources\n            dst = (conn, destination)\n            for file in sources:\n                logger.info(f\"Copying '{file}' to device {self.name} to '{destination}' remotely\")\n        else:\n            logger.critical(f\"'direction' argument to copy() fonction is invalid: {direction}\")\n            return\n        await asyncssh.scp(src, dst)\n
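
A hedged usage sketch of this coroutine, retrieving a file from the device into a local folder (the device variable and paths are placeholders):

import asyncio\nfrom pathlib import Path\n\n# Illustrative only: copy a file from the device to /tmp locally\nasyncio.run(device.copy(sources=[Path(\"/mnt/flash/startup-config\")], destination=Path(\"/tmp\"), direction=\"from\"))\n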
    "},{"location":"api/device/#anta.device.AsyncEOSDevice.refresh","title":"refresh async","text":"
    refresh() -> None\n

    Update attributes of an AsyncEOSDevice instance.

This coroutine must update the following attributes of AsyncEOSDevice:
• is_online: When a device IP is reachable and a port can be open
• established: When a command execution succeeds
• hw_model: The hardware model of the device

    Source code in anta/device.py
    async def refresh(self) -> None:\n\"\"\"\n    Update attributes of an AsyncEOSDevice instance.\n\n    This coroutine must update the following attributes of AsyncEOSDevice:\n    - is_online: When a device IP is reachable and a port can be open\n    - established: When a command execution succeeds\n    - hw_model: The hardware model of the device\n    \"\"\"\n    # Refresh command\n    COMMAND: str = \"show version\"\n    # Hardware model definition in show version\n    HW_MODEL_KEY: str = \"modelName\"\n    logger.debug(f\"Refreshing device {self.name}\")\n    self.is_online = await self._session.check_connection()\n    if self.is_online:\n        try:\n            response = await self._session.cli(command=COMMAND)\n        except EapiCommandError as e:\n            logger.warning(f\"Cannot get hardware information from device {self.name}: {e.errmsg}\")\n        except (HTTPError, ConnectError) as e:\n            logger.warning(f\"Cannot get hardware information from device {self.name}: {exc_to_str(e)}\")\n        else:\n            if HW_MODEL_KEY in response:\n                self.hw_model = response[HW_MODEL_KEY]\n            else:\n                logger.warning(f\"Cannot get hardware information from device {self.name}: cannot parse '{COMMAND}'\")\n    else:\n        logger.warning(f\"Could not connect to device {self.name}: cannot open eAPI port\")\n    self.established = bool(self.is_online and self.hw_model)\n
    "},{"location":"api/inventory/","title":"Inventory module","text":""},{"location":"api/inventory/#anta.inventory.AntaInventory","title":"AntaInventory","text":"

    Bases: dict

    Inventory abstraction for ANTA framework.

    "},{"location":"api/inventory/#anta.inventory.AntaInventory.add_device","title":"add_device","text":"
    add_device(device: AntaDevice) -> None\n

    Add a device to final inventory.

    Parameters:

• device (AntaDevice): Device object to be added. Required.

Source code in anta/inventory/__init__.py
    def add_device(self, device: AntaDevice) -> None:\n\"\"\"Add a device to final inventory.\n\n    Args:\n        device: Device object to be added\n    \"\"\"\n    self[device.name] = device\n
    "},{"location":"api/inventory/#anta.inventory.AntaInventory.connect_inventory","title":"connect_inventory async","text":"
    connect_inventory() -> None\n

    Run refresh() coroutines for all AntaDevice objects in this inventory.

    Source code in anta/inventory/__init__.py
    async def connect_inventory(self) -> None:\n\"\"\"Run `refresh()` coroutines for all AntaDevice objects in this inventory.\"\"\"\n    logger.debug(\"Refreshing devices...\")\n    results = await asyncio.gather(\n        *(device.refresh() for device in self.values()),\n        return_exceptions=True,\n    )\n    for r in results:\n        if isinstance(r, Exception):\n            message = \"Error when refreshing inventory\"\n            anta_log_exception(r, message, logger)\n
    "},{"location":"api/inventory/#anta.inventory.AntaInventory.get_inventory","title":"get_inventory","text":"
    get_inventory(established_only: bool = False, tags: Optional[list[str]] = None) -> AntaInventory\n

    Returns a filtered inventory.

    Parameters:

• established_only (bool): Whether or not to include only established devices. Default: False.
• tags (Optional[list[str]]): List of tags to filter devices. Default: None.

Returns:

• AntaInventory: An inventory with filtered AntaDevice objects.

    Source code in anta/inventory/__init__.py
    def get_inventory(self, established_only: bool = False, tags: Optional[list[str]] = None) -> AntaInventory:\n\"\"\"\n    Returns a filtered inventory.\n\n    Args:\n        established_only: Whether or not to include only established devices. Default False.\n        tags: List of tags to filter devices.\n\n    Returns:\n        AntaInventory: An inventory with filtered AntaDevice objects.\n    \"\"\"\n\n    def _filter_devices(device: AntaDevice) -> bool:\n\"\"\"\n        Helper function to select the devices based on the input tags\n        and the requirement for an established connection.\n        \"\"\"\n        if tags is not None and all(tag not in tags for tag in device.tags):\n            return False\n        return bool(not established_only or device.established)\n\n    devices: list[AntaDevice] = list(filter(_filter_devices, self.values()))\n    result = AntaInventory()\n    for device in devices:\n        result.add_device(device)\n    return result\n
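
A typical usage, assuming an inventory already built with AntaInventory.parse() and refreshed with connect_inventory(), filters on connection state and tags (the tag value is a placeholder):

# Illustrative only: keep established devices carrying the \"spine\" tag\nspines = inventory.get_inventory(established_only=True, tags=[\"spine\"])\nfor name, device in spines.items():\n    print(f\"{name}: {device.hw_model}\")\n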
    "},{"location":"api/inventory/#anta.inventory.AntaInventory.parse","title":"parse staticmethod","text":"
    parse(inventory_file: str, username: str, password: str, enable: bool = False, enable_password: Optional[str] = None, timeout: Optional[float] = None, insecure: bool = False) -> AntaInventory\n

    Create an AntaInventory instance from an inventory file. The inventory devices are AsyncEOSDevice instances.

    Parameters:

• inventory_file (str): Path to the inventory YAML file where the user has described the inputs. Required.
• username (str): Username to use to connect to devices. Required.
• password (str): Password to use to connect to devices. Required.
• enable (bool): Whether or not the commands need to be run in enable mode towards the devices. Default: False.
• timeout (float, optional): timeout in seconds for every API call. Default: None.

Raises:

• InventoryRootKeyError: Root key of inventory is missing.
• InventoryIncorrectSchema: Inventory file is not following AntaInventory Schema.
• InventoryUnknownFormat: Output format is not supported.

    Source code in anta/inventory/__init__.py
    @staticmethod\ndef parse(\n    inventory_file: str,\n    username: str,\n    password: str,\n    enable: bool = False,\n    enable_password: Optional[str] = None,\n    timeout: Optional[float] = None,\n    insecure: bool = False,\n) -> AntaInventory:\n    # pylint: disable=too-many-arguments\n\"\"\"\n    Create an AntaInventory instance from an inventory file.\n    The inventory devices are AsyncEOSDevice instances.\n\n    Args:\n        inventory_file (str): Path to inventory YAML file where user has described his inputs\n        username (str): Username to use to connect to devices\n        password (str): Password to use to connect to devices\n        enable (bool): Whether or not the commands need to be run in enable mode towards the devices\n        timeout (float, optional): timeout in seconds for every API call.\n\n    Raises:\n        InventoryRootKeyError: Root key of inventory is missing.\n        InventoryIncorrectSchema: Inventory file is not following AntaInventory Schema.\n        InventoryUnknownFormat: Output format is not supported.\n    \"\"\"\n\n    inventory = AntaInventory()\n    kwargs: dict[str, Any] = {\n        \"username\": username,\n        \"password\": password,\n        \"enable\": enable,\n        \"enable_password\": enable_password,\n        \"timeout\": timeout,\n        \"insecure\": insecure,\n    }\n    kwargs = {k: v for k, v in kwargs.items() if v is not None}\n\n    with open(inventory_file, \"r\", encoding=\"UTF-8\") as file:\n        data = safe_load(file)\n\n    # Load data using Pydantic\n    try:\n        inventory_input = AntaInventoryInput(**data[AntaInventory.INVENTORY_ROOT_KEY])\n    except KeyError as exc:\n        logger.error(f\"Inventory root key is missing: {AntaInventory.INVENTORY_ROOT_KEY}\")\n        raise InventoryRootKeyError(f\"Inventory root key ({AntaInventory.INVENTORY_ROOT_KEY}) is not defined in your inventory\") from exc\n    except ValidationError as exc:\n        logger.error(\"Inventory data are not compliant with inventory models\")\n        raise InventoryIncorrectSchema(f\"Inventory is not following the schema: {str(exc)}\") from exc\n\n    # Read data from input\n    AntaInventory._parse_hosts(inventory_input, inventory, **kwargs)\n    AntaInventory._parse_networks(inventory_input, inventory, **kwargs)\n    AntaInventory._parse_ranges(inventory_input, inventory, **kwargs)\n\n    return inventory\n
    "},{"location":"api/inventory/#anta.inventory.exceptions","title":"exceptions","text":"

    Manage Exception in Inventory module.

    "},{"location":"api/inventory/#anta.inventory.exceptions.InventoryIncorrectSchema","title":"InventoryIncorrectSchema","text":"

    Bases: Exception

    Error when user data does not follow ANTA schema.

    "},{"location":"api/inventory/#anta.inventory.exceptions.InventoryRootKeyError","title":"InventoryRootKeyError","text":"

    Bases: Exception

    Error raised when inventory root key is not found.

    "},{"location":"api/inventory.models.input/","title":"Inventory models","text":""},{"location":"api/inventory.models.input/#anta.inventory.models.AntaInventoryInput","title":"AntaInventoryInput","text":"

    Bases: BaseModel

    User\u2019s inventory model.

    Attributes:

• networks (list[AntaInventoryNetwork], Optional): List of AntaInventoryNetwork objects for networks.
• hosts (list[AntaInventoryHost], Optional): List of AntaInventoryHost objects for hosts.
• range (list[AntaInventoryRange], Optional): List of AntaInventoryRange objects for ranges.

    "},{"location":"api/inventory.models.input/#anta.inventory.models.AntaInventoryHost","title":"AntaInventoryHost","text":"

    Bases: BaseModel

    Host definition for user\u2019s inventory.

    Attributes:

• host (IPvAnyAddress): IPv4 or IPv6 address of the device
• port (int): (Optional) eAPI port to use. Default is 443.
• name (str): (Optional) Name to display during tests report. Default is hostname:port
• tags (list[str]): List of attached tags read from inventory file.

    "},{"location":"api/inventory.models.input/#anta.inventory.models.AntaInventoryNetwork","title":"AntaInventoryNetwork","text":"

    Bases: BaseModel

    Network definition for user\u2019s inventory.

    Attributes:

• network (IPvAnyNetwork): Subnet to use for testing.
• tags (list[str]): List of attached tags read from inventory file.

    "},{"location":"api/inventory.models.input/#anta.inventory.models.AntaInventoryRange","title":"AntaInventoryRange","text":"

    Bases: BaseModel

    IP Range definition for user\u2019s inventory.

    Attributes:

• start (IPvAnyAddress): IPv4 or IPv6 address for the beginning of the range.
• stop (IPvAnyAddress): IPv4 or IPv6 address for the end of the range.
• tags (list[str]): List of attached tags read from inventory file.

    "},{"location":"api/models/","title":"Test models","text":""},{"location":"api/models/#test-definition","title":"Test definition","text":""},{"location":"api/models/#uml-diagram","title":"UML Diagram","text":""},{"location":"api/models/#anta.models.AntaTest","title":"AntaTest","text":"
    AntaTest(device: AntaDevice, inputs: Optional[dict[str, Any]], eos_data: Optional[list[dict[Any, Any] | str]] = None)\n

    Bases: ABC

    Abstract class defining a test in ANTA

    The goal of this class is to handle the heavy lifting and make writing a test as simple as possible.

    Examples:

    The following is an example of an AntaTest subclass implementation:

        class VerifyReachability(AntaTest):\n        name = \"VerifyReachability\"\n        description = \"Test the network reachability to one or many destination IP(s).\"\n        categories = [\"connectivity\"]\n        commands = [AntaTemplate(template=\"ping vrf {vrf} {dst} source {src} repeat 2\")]\n\n        class Input(AntaTest.Input):\n            hosts: list[Host]\n            class Host(BaseModel):\n                dst: IPv4Address\n                src: IPv4Address\n                vrf: str = \"default\"\n\n        def render(self, template: AntaTemplate) -> list[AntaCommand]:\n            return [template.render({\"dst\": host.dst, \"src\": host.src, \"vrf\": host.vrf}) for host in self.inputs.hosts]\n\n        @AntaTest.anta_test\n        def test(self) -> None:\n            failures = []\n            for command in self.instance_commands:\n                if command.params and (\"src\" and \"dst\") in command.params:\n                    src, dst = command.params[\"src\"], command.params[\"dst\"]\n                if \"2 received\" not in command.json_output[\"messages\"][0]:\n                    failures.append((str(src), str(dst)))\n            if not failures:\n                self.result.is_success()\n            else:\n                self.result.is_failure(f\"Connectivity test failed for the following source-destination pairs: {failures}\")\n
Attributes:

• device: AntaDevice instance on which this test is run
• inputs: AntaTest.Input instance carrying the test inputs
• instance_commands: List of AntaCommand instances of this test
• result: TestResult instance representing the result of this test
• logger: Python logger for this test instance

    Parameters:

• device (AntaDevice): AntaDevice instance on which the test will be run. Required.
• inputs (Optional[dict[str, Any]]): dictionary of attributes used to instantiate the AntaTest.Input instance. Required.
• eos_data (Optional[list[dict[Any, Any] | str]]): Populate outputs of the test commands instead of collecting from devices. This list must have the same length and order as the instance_commands instance attribute. Default: None.

Source code in anta/models.py
    def __init__(\n    self,\n    device: AntaDevice,\n    inputs: Optional[dict[str, Any]],\n    eos_data: Optional[list[dict[Any, Any] | str]] = None,\n):\n\"\"\"AntaTest Constructor\n\n    Args:\n        device: AntaDevice instance on which the test will be run\n        inputs: dictionary of attributes used to instantiate the AntaTest.Input instance\n        eos_data: Populate outputs of the test commands instead of collecting from devices.\n                  This list must have the same length and order than the `instance_commands` instance attribute.\n    \"\"\"\n    self.logger: logging.Logger = logging.getLogger(f\"{self.__module__}.{self.__class__.__name__}\")\n    self.device: AntaDevice = device\n    self.inputs: AntaTest.Input\n    self.instance_commands: list[AntaCommand] = []\n    self.result: TestResult = TestResult(name=device.name, test=self.name, categories=self.categories, description=self.description)\n    self._init_inputs(inputs)\n    if self.result.result == \"unset\":\n        self._init_commands(eos_data)\n
    "},{"location":"api/models/#anta.models.AntaTest.collected","title":"collected property","text":"
    collected: bool\n

    Returns True if all commands for this test have been collected.

    "},{"location":"api/models/#anta.models.AntaTest.failed_commands","title":"failed_commands property","text":"
    failed_commands: list[AntaCommand]\n

    Returns a list of all the commands that have failed.

    "},{"location":"api/models/#anta.models.AntaTest.Input","title":"Input","text":"

    Bases: BaseModel

    Class defining inputs for a test in ANTA.

    Examples:

    A valid test catalog will look like the following:

    <Python module>:\n- <AntaTest subclass>:\nresult_overwrite:\ncategories:\n- \"Overwritten category 1\"\ndescription: \"Test with overwritten description\"\ncustom_field: \"Test run by John Doe\"\n
Attributes:

• result_overwrite: Define fields to overwrite in the TestResult object

    "},{"location":"api/models/#anta.models.AntaTest.Input.ResultOverwrite","title":"ResultOverwrite","text":"

    Bases: BaseModel

    Test inputs model to overwrite result fields

    Attributes:

• description (Optional[str]): overwrite TestResult.description
• categories (Optional[List[str]]): overwrite TestResult.categories
• custom_field (Optional[str]): a free string that will be included in the TestResult object

    "},{"location":"api/models/#anta.models.AntaTest.anta_test","title":"anta_test staticmethod","text":"
    anta_test(function: F) -> Callable[..., Coroutine[Any, Any, TestResult]]\n

    Decorator for the test() method.

    This decorator implements (in this order):

    1. Instantiate the command outputs if eos_data is provided to the test() method
    2. Collect the commands from the device
    3. Run the test() method
4. Catches any exception in test() user code and sets the result instance attribute
    Source code in anta/models.py
    @staticmethod\ndef anta_test(function: F) -> Callable[..., Coroutine[Any, Any, TestResult]]:\n\"\"\"\n    Decorator for the `test()` method.\n\n    This decorator implements (in this order):\n\n    1. Instantiate the command outputs if `eos_data` is provided to the `test()` method\n    2. Collect the commands from the device\n    3. Run the `test()` method\n    4. Catches any exception in `test()` user code and set the `result` instance attribute\n    \"\"\"\n\n    @wraps(function)\n    async def wrapper(\n        self: AntaTest,\n        eos_data: Optional[list[dict[Any, Any] | str]] = None,\n        **kwargs: Any,\n    ) -> TestResult:\n\"\"\"\n        Args:\n            eos_data: Populate outputs of the test commands instead of collecting from devices.\n                      This list must have the same length and order than the `instance_commands` instance attribute.\n\n        Returns:\n            result: TestResult instance attribute populated with error status if any\n        \"\"\"\n\n        def format_td(seconds: float, digits: int = 3) -> str:\n            isec, fsec = divmod(round(seconds * 10**digits), 10**digits)\n            return f\"{timedelta(seconds=isec)}.{fsec:0{digits}.0f}\"\n\n        start_time = time.time()\n        if self.result.result != \"unset\":\n            return self.result\n\n        # TODO maybe_skip decorators\n\n        # Data\n        if eos_data is not None:\n            self.save_commands_data(eos_data)\n            self.logger.debug(f\"Test {self.name} initialized with input data {eos_data}\")\n\n        # If some data is missing, try to collect\n        if not self.collected:\n            await self.collect()\n            if self.result.result != \"unset\":\n                return self.result\n\n        try:\n            if self.failed_commands:\n                self.result.is_error(\n                    message=\"\\n\".join(\n                        [f\"{cmd.command} has failed: {exc_to_str(cmd.failed)}\" if cmd.failed else f\"{cmd.command} has failed\" for cmd in self.failed_commands]\n                    )\n                )\n                return self.result\n            function(self, **kwargs)\n        except Exception as e:  # pylint: disable=broad-exception-caught\n            message = f\"Exception raised for test {self.name} (on device {self.device.name})\"\n            anta_log_exception(e, message, self.logger)\n            self.result.is_error(message=exc_to_str(e))\n\n        test_duration = time.time() - start_time\n        self.logger.debug(f\"Executing test {self.name} on device {self.device.name} took {format_td(test_duration)}\")\n\n        AntaTest.update_progress()\n        return self.result\n\n    return wrapper\n
    "},{"location":"api/models/#anta.models.AntaTest.collect","title":"collect async","text":"
    collect() -> None\n

    Method used to collect outputs of all commands of this test class from the device of this test instance.

    Source code in anta/models.py
    async def collect(self) -> None:\n\"\"\"\n    Method used to collect outputs of all commands of this test class from the device of this test instance.\n    \"\"\"\n    try:\n        await self.device.collect_commands(self.instance_commands)\n    except Exception as e:  # pylint: disable=broad-exception-caught\n        message = f\"Exception raised while collecting commands for test {self.name} (on device {self.device.name})\"\n        anta_log_exception(e, message, self.logger)\n        self.result.is_error(message=exc_to_str(e))\n
    "},{"location":"api/models/#anta.models.AntaTest.render","title":"render","text":"
    render(template: AntaTemplate) -> list[AntaCommand]\n

    Render an AntaTemplate instance of this AntaTest using the provided AntaTest.Input instance at self.inputs.

    This is not an abstract method because it does not need to be implemented if there is no AntaTemplate for this test.

    Source code in anta/models.py
    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n\"\"\"Render an AntaTemplate instance of this AntaTest using the provided\n       AntaTest.Input instance at self.inputs.\n\n    This is not an abstract method because it does not need to be implemented if there is\n    no AntaTemplate for this test.\"\"\"\n    raise NotImplementedError(f\"AntaTemplate are provided but render() method has not been implemented for {self.__module__}.{self.name}\")\n
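    The snippet below is a hypothetical test (not part of ANTA) sketching when render() has to be implemented: the class declares an AntaTemplate in its commands attribute, mirroring the class layout of the documented tests, and builds one AntaCommand per input value.

    from typing import List

    from anta.models import AntaCommand, AntaTemplate, AntaTest


    class VerifyVlanExample(AntaTest):
        """Hypothetical test rendering one command per VLAN ID."""

        name = "VerifyVlanExample"
        description = "Example of a test rendering its commands from a template"
        categories = ["vlan"]
        commands = [AntaTemplate(template="show vlan {vlan_id}")]

        class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring
            vlan_ids: List[int]
            """VLAN IDs used to render the template"""

        def render(self, template: AntaTemplate) -> list[AntaCommand]:
            # One rendered command per VLAN ID taken from the validated inputs
            return [template.render(vlan_id=vlan_id) for vlan_id in self.inputs.vlan_ids]

        @AntaTest.anta_test
        def test(self) -> None:
            self.result.is_success()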
    "},{"location":"api/models/#anta.models.AntaTest.save_commands_data","title":"save_commands_data","text":"
    save_commands_data(eos_data: list[dict[str, Any] | str]) -> None\n

    Populate output of all AntaCommand instances in instance_commands

    Source code in anta/models.py
    def save_commands_data(self, eos_data: list[dict[str, Any] | str]) -> None:\n\"\"\"Populate output of all AntaCommand instances in `instance_commands`\"\"\"\n    if len(eos_data) != len(self.instance_commands):\n        self.result.is_error(message=\"Test initialization error: Trying to save more data than there are commands for the test\")\n        return\n    for index, data in enumerate(eos_data or []):\n        self.instance_commands[index].output = data\n
    "},{"location":"api/models/#anta.models.AntaTest.test","title":"test abstractmethod","text":"
    test() -> Coroutine[Any, Any, TestResult]\n

    This abstract method is the core of the test logic. It must set the correct status of the result instance attribute with the appropriate outcome of the test.

    Examples:

    It must be implemented using the AntaTest.anta_test decorator:

    @AntaTest.anta_test\ndef test(self) -> None:\n    self.result.is_success()\n    for command in self.instance_commands:\n        if not self._test_command(command): # _test_command() is an arbitrary test logic\n            self.result.is_failure(\"Failure reason\")\n

    Source code in anta/models.py
    @abstractmethod\ndef test(self) -> Coroutine[Any, Any, TestResult]:\n\"\"\"\n    This abstract method is the core of the test logic.\n    It must set the correct status of the `result` instance attribute\n    with the appropriate outcome of the test.\n\n    Examples:\n    It must be implemented using the `AntaTest.anta_test` decorator:\n        ```python\n        @AntaTest.anta_test\n        def test(self) -> None:\n            self.result.is_success()\n            for command in self.instance_commands:\n                if not self._test_command(command): # _test_command() is an arbitrary test logic\n                    self.result.is_failure(\"Failure reson\")\n        ```\n    \"\"\"\n
    "},{"location":"api/models/#command-definition","title":"Command definition","text":""},{"location":"api/models/#uml-diagram_1","title":"UML Diagram","text":""},{"location":"api/models/#anta.models.AntaCommand","title":"AntaCommand","text":"

    Bases: BaseModel

    Class to define a command.

    Info

    eAPI models are revisioned; this means that if a model is modified in a non-backwards-compatible way, its revision is bumped up (revisions are numbers, with a default value of 1).

    By default an eAPI request will return revision 1 of the model instance, this ensures that older management software will not suddenly stop working when a switch is upgraded. A revision applies to a particular CLI command whereas a version is global and is internally translated to a specific revision for each CLI command in the RPC.

    Revision has precedence over version.

    Attributes:

    Name Type Description command str

    Device command

    version Literal[1, 'latest']

    eAPI version - valid values are 1 or \u201clatest\u201d - default is \u201clatest\u201d

    revision Optional[conint(ge=1, le=99)]

    eAPI revision of the command. Valid values are 1 to 99. Revision has precedence over version.

    ofmt Literal['json', 'text']

    eAPI output - json or text - default is json

    template Optional[AntaTemplate]

    AntaTemplate object used to render this command

    params Optional[Dict[str, Any]]

    dictionary of variables with string values to render the template

    failed Optional[Exception]

    If the command execution fails, the Exception object is stored in this field
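    A short sketch of building commands with the fields documented above (the command strings and the revision number are only illustrative):

    from anta.models import AntaCommand

    # Defaults: JSON output, eAPI version "latest"
    cmd = AntaCommand(command="show version")

    # Text output instead of JSON
    cmd_text = AntaCommand(command="show running-config diffs", ofmt="text")

    # Pin a specific revision; revision has precedence over version
    cmd_rev = AntaCommand(command="show bgp evpn route-type mac-ip", revision=2)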

    "},{"location":"api/models/#anta.models.AntaCommand.collected","title":"collected property","text":"
    collected: bool\n

    Return True if the command has been collected

    "},{"location":"api/models/#anta.models.AntaCommand.json_output","title":"json_output property","text":"
    json_output: dict[str, Any]\n

    Get the command output as JSON

    "},{"location":"api/models/#anta.models.AntaCommand.text_output","title":"text_output property","text":"
    text_output: str\n

    Get the command output as a string

    "},{"location":"api/models/#template-definition","title":"Template definition","text":""},{"location":"api/models/#uml-diagram_2","title":"UML Diagram","text":""},{"location":"api/models/#anta.models.AntaTemplate","title":"AntaTemplate","text":"

    Bases: BaseModel

    Class to define a command template as a Python f-string. It can render a command from parameters.

    Attributes:

    Name Type Description template str

    Python f-string. Example: \u2018show vlan {vlan_id}\u2019

    version Literal[1, 'latest']

    eAPI version - valid values are 1 or \u201clatest\u201d - default is \u201clatest\u201d

    revision Optional[conint(ge=1, le=99)]

    Revision of the command. Valid values are 1 to 99. Revision has precedence over version.

    ofmt Literal['json', 'text']

    eAPI output - json or text - default is json

    "},{"location":"api/models/#anta.models.AntaTemplate.render","title":"render","text":"
    render(**params: dict[str, Any]) -> AntaCommand\n

    Render an AntaCommand from an AntaTemplate instance. Keep the parameters used in the AntaTemplate instance.

    Parameters:

    Name Type Description Default params dict[str, Any]

    dictionary of variables with string values to render the Python f-string

    {}

    Returns:

    Name Type Description command AntaCommand

    The rendered AntaCommand. This AntaCommand instance has a template attribute that references this AntaTemplate instance.

    Source code in anta/models.py
    def render(self, **params: dict[str, Any]) -> AntaCommand:\n\"\"\"Render an AntaCommand from an AntaTemplate instance.\n    Keep the parameters used in the AntaTemplate instance.\n\n    Args:\n        params: dictionary of variables with string values to render the Python f-string\n\n    Returns:\n        command: The rendered AntaCommand.\n                 This AntaCommand instance have a template attribute that references this\n                 AntaTemplate instance.\n    \"\"\"\n    try:\n        return AntaCommand(command=self.template.format(**params), ofmt=self.ofmt, version=self.version, revision=self.revision, template=self, params=params)\n    except KeyError as e:\n        raise AntaTemplateRenderError(self, e.args[0]) from e\n
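    Reusing the documented 'show vlan {vlan_id}' example, a minimal rendering sketch:

    from anta.models import AntaTemplate

    template = AntaTemplate(template="show vlan {vlan_id}")

    # render() formats the f-string and keeps a reference to the template
    command = template.render(vlan_id=10)
    print(command.command)               # show vlan 10
    print(command.template is template)  # True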
    "},{"location":"api/report_manager/","title":"Report Manager module","text":""},{"location":"api/report_manager/#anta.reporter.ReportTable","title":"ReportTable","text":"
    ReportTable()\n

    TableReport: generates a rich Table based on TestResult objects.

    Source code in anta/reporter/__init__.py
    def __init__(self) -> None:\n\"\"\"\n    __init__ Class constructor\n    \"\"\"\n    self.colors = []\n    self.colors.append(ColorManager(level=\"success\", color=RICH_COLOR_PALETTE.SUCCESS))\n    self.colors.append(ColorManager(level=\"failure\", color=RICH_COLOR_PALETTE.FAILURE))\n    self.colors.append(ColorManager(level=\"error\", color=RICH_COLOR_PALETTE.ERROR))\n    self.colors.append(ColorManager(level=\"skipped\", color=RICH_COLOR_PALETTE.SKIPPED))\n
    "},{"location":"api/report_manager/#anta.reporter.ReportTable.report_all","title":"report_all","text":"
    report_all(result_manager: ResultManager, host: Optional[str] = None, testcase: Optional[str] = None, title: str = 'All tests results') -> Table\n

    Create a table report with all tests for one or all devices.

    Create table with full output: Host / Test / Status / Message

    Parameters:

    Name Type Description Default result_manager ResultManager

    A manager with a list of tests.

    required host str

    IP Address of a host to search for. Defaults to None.

    None testcase str

    A test name to search for. Defaults to None.

    None title str

    Title for the report. Defaults to \u2018All tests results\u2019.

    'All tests results'

    Returns:

    Name Type Description Table Table

    A fully populated rich Table

    Source code in anta/reporter/__init__.py
    def report_all(\n    self,\n    result_manager: ResultManager,\n    host: Optional[str] = None,\n    testcase: Optional[str] = None,\n    title: str = \"All tests results\",\n) -> Table:\n\"\"\"\n    Create a table report with all tests for one or all devices.\n\n    Create table with full output: Host / Test / Status / Message\n\n    Args:\n        result_manager (ResultManager): A manager with a list of tests.\n        host (str, optional): IP Address of a host to search for. Defaults to None.\n        testcase (str, optional): A test name to search for. Defaults to None.\n        title (str, optional): Title for the report. Defaults to 'All tests results'.\n\n    Returns:\n        Table: A fully populated rich Table\n    \"\"\"\n    table = Table(title=title)\n    headers = [\"Device\", \"Test Name\", \"Test Status\", \"Message(s)\", \"Test description\", \"Test category\"]\n    table = self._build_headers(headers=headers, table=table)\n\n    for result in result_manager.get_results(output_format=\"list\"):\n        # pylint: disable=R0916\n        if (host is None and testcase is None) or (host is not None and str(result.name) == host) or (testcase is not None and testcase == str(result.test)):\n            state = self._color_result(status=str(result.result), output_type=\"str\")\n            message = self._split_list_to_txt_list(result.messages) if len(result.messages) > 0 else \"\"\n            categories = \", \".join(result.categories)\n            table.add_row(str(result.name), result.test, state, message, result.description, categories)\n    return table\n
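    Assuming a populated ResultManager named manager (see the Result Manager module below), a hedged sketch of printing the full report with rich:

    from rich.console import Console

    from anta.reporter import ReportTable

    reporter = ReportTable()

    # One row per TestResult; host and testcase can optionally narrow the table
    table = reporter.report_all(result_manager=manager, title="All tests results")
    Console().print(table)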
    "},{"location":"api/report_manager/#anta.reporter.ReportTable.report_summary_hosts","title":"report_summary_hosts","text":"
    report_summary_hosts(result_manager: ResultManager, host: Optional[str] = None, title: str = 'Summary per host') -> Table\n

    Create a table report with results aggregated per host.

    Create table with full output: Host / Number of success / Number of failure / Number of error / List of nodes in error or failure

    Parameters:

    Name Type Description Default result_manager ResultManager

    A manager with a list of tests.

    required host str

    IP Address of a host to search for. Defaults to None.

    None title str

    Title for the report. Defaults to ‘Summary per host’.

    'Summary per host'

    Returns:

    Name Type Description Table Table

    A fully populated rich Table

    Source code in anta/reporter/__init__.py
    def report_summary_hosts(\n    self,\n    result_manager: ResultManager,\n    host: Optional[str] = None,\n    title: str = \"Summary per host\",\n) -> Table:\n\"\"\"\n    Create a table report with result agregated per host.\n\n    Create table with full output: Host / Number of success / Number of failure / Number of error / List of nodes in error or failure\n\n    Args:\n        result_manager (ResultManager): A manager with a list of tests.\n        host (str, optional): IP Address of a host to search for. Defaults to None.\n        title (str, optional): Title for the report. Defaults to 'All tests results'.\n\n    Returns:\n        Table: A fully populated rich Table\n    \"\"\"\n    table = Table(title=title)\n    headers = [\n        \"Device\",\n        \"# of success\",\n        \"# of skipped\",\n        \"# of failure\",\n        \"# of errors\",\n        \"List of failed or error test cases\",\n    ]\n    table = self._build_headers(headers=headers, table=table)\n    for host_read in result_manager.get_hosts():\n        if host is None or str(host_read) == host:\n            results = result_manager.get_result_by_host(host_read)\n            logger.debug(\"data to use for computation\")\n            logger.debug(f\"{host}: {results}\")\n            nb_failure = len([result for result in results if result.result == \"failure\"])\n            nb_error = len([result for result in results if result.result == \"error\"])\n            list_failure = [str(result.test) for result in results if result.result in [\"failure\", \"error\"]]\n            nb_success = len([result for result in results if result.result == \"success\"])\n            nb_skipped = len([result for result in results if result.result == \"skipped\"])\n            table.add_row(\n                str(host_read),\n                str(nb_success),\n                str(nb_skipped),\n                str(nb_failure),\n                str(nb_error),\n                str(list_failure),\n            )\n    return table\n
    "},{"location":"api/report_manager/#anta.reporter.ReportTable.report_summary_tests","title":"report_summary_tests","text":"
    report_summary_tests(result_manager: ResultManager, testcase: Optional[str] = None, title: str = 'Summary per test case') -> Table\n

    Create a table report with results aggregated per test.

    Create table with full output: Test / Number of success / Number of failure / Number of error / List of nodes in error or failure

    Parameters:

    Name Type Description Default result_manager ResultManager

    A manager with a list of tests.

    required testcase str

    A test name to search for. Defaults to None.

    None title str

    Title for the report. Defaults to ‘Summary per test case’.

    'Summary per test case'

    Returns:

    Name Type Description Table Table

    A fully populated rich Table

    Source code in anta/reporter/__init__.py
    def report_summary_tests(\n    self,\n    result_manager: ResultManager,\n    testcase: Optional[str] = None,\n    title: str = \"Summary per test case\",\n) -> Table:\n\"\"\"\n    Create a table report with result agregated per test.\n\n    Create table with full output: Test / Number of success / Number of failure / Number of error / List of nodes in error or failure\n\n    Args:\n        result_manager (ResultManager): A manager with a list of tests.\n        testcase (str, optional): A test name to search for. Defaults to None.\n        title (str, optional): Title for the report. Defaults to 'All tests results'.\n\n    Returns:\n        Table: A fully populated rich Table\n    \"\"\"\n    # sourcery skip: class-extract-method\n    table = Table(title=title)\n    headers = [\n        \"Test Case\",\n        \"# of success\",\n        \"# of skipped\",\n        \"# of failure\",\n        \"# of errors\",\n        \"List of failed or error nodes\",\n    ]\n    table = self._build_headers(headers=headers, table=table)\n    for testcase_read in result_manager.get_testcases():\n        if testcase is None or str(testcase_read) == testcase:\n            results = result_manager.get_result_by_test(testcase_read)\n            nb_failure = len([result for result in results if result.result == \"failure\"])\n            nb_error = len([result for result in results if result.result == \"error\"])\n            list_failure = [str(result.name) for result in results if result.result in [\"failure\", \"error\"]]\n            nb_success = len([result for result in results if result.result == \"success\"])\n            nb_skipped = len([result for result in results if result.result == \"skipped\"])\n            table.add_row(\n                testcase_read,\n                str(nb_success),\n                str(nb_skipped),\n                str(nb_failure),\n                str(nb_error),\n                str(list_failure),\n            )\n    return table\n
    "},{"location":"api/report_manager_models/","title":"Report Manager models","text":""},{"location":"api/report_manager_models/#anta.reporter.models.ColorManager","title":"ColorManager","text":"

    Bases: BaseModel

    Color management for status report.

    Attributes:

    Name Type Description level str

    Test result value.

    color str

    Associated color.

    "},{"location":"api/report_manager_models/#anta.reporter.models.ColorManager.string","title":"string","text":"
    string() -> str\n

    Build a string with the color code

    Returns:

    Name Type Description str str

    String with level and its associated color

    Source code in anta/reporter/models.py
    def string(self) -> str:\n\"\"\"\n    Build an str with color code\n\n    Returns:\n        str: String with level and its associated color\n    \"\"\"\n    return f\"[{self.color}]{self.level}\"\n
    "},{"location":"api/report_manager_models/#anta.reporter.models.ColorManager.style_rich","title":"style_rich","text":"
    style_rich() -> Text\n

    Build a rich Text object with the color applied

    Returns:

    Name Type Description Text Text

    object with level string and its associated color.

    Source code in anta/reporter/models.py
    def style_rich(self) -> Text:\n\"\"\"\n    Build a rich Text syntax with color\n\n    Returns:\n        Text: object with level string and its associated color.\n    \"\"\"\n    return Text(self.level, style=self.color)\n
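    A tiny usage sketch; the color value is illustrative, as ANTA normally builds these objects internally from its RICH_COLOR_PALETTE:

    from anta.reporter.models import ColorManager

    color = ColorManager(level="success", color="green")

    print(color.string())           # "[green]success" - rich markup string
    rich_text = color.style_rich()  # rich.text.Text("success", style="green")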
    "},{"location":"api/result_manager/","title":"Result Manager module","text":""},{"location":"api/result_manager/#result-manager-definition","title":"Result Manager definition","text":""},{"location":"api/result_manager/#uml-diagram","title":"UML Diagram","text":""},{"location":"api/result_manager/#anta.result_manager.ResultManager","title":"ResultManager","text":"
    ResultManager()\n

    Helper to manage Test Results and generate reports.

    Examples:

    Create Inventory:\n\n    inventory_anta = AntaInventory.parse(\n        inventory_file='examples/inventory.yml',\n        username='ansible',\n        password='ansible',\n        timeout=0.5\n    )\n\nCreate Result Manager:\n\n    manager = ResultManager()\n\nRun tests for all connected devices:\n\n    for device in inventory_anta.get_inventory():\n        manager.add_test_result(\n            VerifyNTP(device=device).test()\n        )\n        manager.add_test_result(\n            VerifyEOSVersion(device=device).test(version='4.28.3M')\n        )\n\nPrint result in native format:\n\n    manager.get_results()\n    [\n        TestResult(\n            host=IPv4Address('192.168.0.10'),\n            test='VerifyNTP',\n            result='failure',\n            message=\"device is not running NTP correctly\"\n        ),\n        TestResult(\n            host=IPv4Address('192.168.0.10'),\n            test='VerifyEOSVersion',\n            result='success',\n            message=None\n        ),\n    ]\n

    The status of the class is initialized to \u201cunset\u201d

    Then when adding a test with a status that is NOT \u2018error\u2019 the following table shows the updated status:

    | Current Status | Added test Status                | Updated Status |
    | -------------- | -------------------------------- | -------------- |
    | unset          | Any                              | Any            |
    | skipped        | unset, skipped                   | skipped        |
    | skipped        | success                          | success        |
    | skipped        | failure                          | failure        |
    | success        | unset, skipped, success          | success        |
    | success        | failure                          | failure        |
    | failure        | unset, skipped, success, failure | failure        |

    If the status of the added test is error, the status is untouched and the error_status is set to True.

    Source code in anta/result_manager/__init__.py
    def __init__(self) -> None:\n\"\"\"\n    Class constructor.\n\n    The status of the class is initialized to \"unset\"\n\n    Then when adding a test with a status that is NOT 'error' the following\n    table shows the updated status:\n\n    | Current Status |         Added test Status       | Updated Status |\n    | -------------- | ------------------------------- | -------------- |\n    |      unset     |              Any                |       Any      |\n    |     skipped    |         unset, skipped          |     skipped    |\n    |     skipped    |            success              |     success    |\n    |     skipped    |            failure              |     failure    |\n    |     success    |     unset, skipped, success     |     success    |\n    |     success    |            failure              |     failure    |\n    |     failure    | unset, skipped success, failure |     failure    |\n\n    If the status of the added test is error, the status is untouched and the\n    error_status is set to True.\n    \"\"\"\n    self._result_entries = ListResult()\n    # Initialize status\n    self.status: TestStatus = \"unset\"\n    self.error_status = False\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.add_test_result","title":"add_test_result","text":"
    add_test_result(entry: TestResult) -> None\n

    Add a result to the list

    Parameters:

    Name Type Description Default entry TestResult

    TestResult data to add to the report

    required Source code in anta/result_manager/__init__.py
    def add_test_result(self, entry: TestResult) -> None:\n\"\"\"Add a result to the list\n\n    Args:\n        entry (TestResult): TestResult data to add to the report\n    \"\"\"\n    self._result_entries.append(entry)\n    self._update_status(entry.result)\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.add_test_results","title":"add_test_results","text":"
    add_test_results(entries: list[TestResult]) -> None\n

    Add a list of results to the list

    Parameters:

    Name Type Description Default entries list[TestResult]

    List of TestResult data to add to the report

    required Source code in anta/result_manager/__init__.py
    def add_test_results(self, entries: list[TestResult]) -> None:\n\"\"\"Add a list of results to the list\n\n    Args:\n        entries (list[TestResult]): List of TestResult data to add to the report\n    \"\"\"\n    for e in entries:\n        self.add_test_result(e)\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_hosts","title":"get_hosts","text":"
    get_hosts() -> list[str]\n

    Get the list of IP addresses in the current manager.

    Returns:

    Type Description list[str]

    list[str]: List of IP addresses.

    Source code in anta/result_manager/__init__.py
    def get_hosts(self) -> list[str]:\n\"\"\"\n    Get list of IP addresses in current manager.\n\n    Returns:\n        list[str]: List of IP addresses.\n    \"\"\"\n    result_list = []\n    for testcase in self._result_entries:\n        if str(testcase.name) not in result_list:\n            result_list.append(str(testcase.name))\n    return result_list\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_result_by_host","title":"get_result_by_host","text":"
    get_result_by_host(host_ip: str, output_format: str = 'native') -> Any\n

    Get the list of test results for a given host.

    Parameters:

    Name Type Description Default host_ip str

    IP Address of the host to use to filter results.

    required output_format str

    format selector. Can be either native/list. Defaults to \u2018native\u2019.

    'native'

    Returns:

    Name Type Description Any Any

    List of results related to the host.

    Source code in anta/result_manager/__init__.py
    def get_result_by_host(self, host_ip: str, output_format: str = \"native\") -> Any:\n\"\"\"\n    Get list of test result for a given host.\n\n    Args:\n        host_ip (str): IP Address of the host to use to filter results.\n        output_format (str, optional): format selector. Can be either native/list. Defaults to 'native'.\n\n    Returns:\n        Any: List of results related to the host.\n    \"\"\"\n    if output_format == \"list\":\n        return [result for result in self._result_entries if str(result.name) == host_ip]\n\n    result_manager_filtered = ListResult()\n    for result in self._result_entries:\n        if str(result.name) == host_ip:\n            result_manager_filtered.append(result)\n    return result_manager_filtered\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_result_by_test","title":"get_result_by_test","text":"
    get_result_by_test(test_name: str, output_format: str = 'native') -> Any\n

    Get the list of test results for a given test.

    Parameters:

    Name Type Description Default test_name str

    Test name to use to filter results

    required output_format str

    format selector. Can be either native/list. Defaults to \u2018native\u2019.

    'native'

    Returns:

    Type Description Any

    list[TestResult]: List of results related to the test.

    Source code in anta/result_manager/__init__.py
    def get_result_by_test(self, test_name: str, output_format: str = \"native\") -> Any:\n\"\"\"\n    Get list of test result for a given test.\n\n    Args:\n        test_name (str): Test name to use to filter results\n        output_format (str, optional): format selector. Can be either native/list. Defaults to 'native'.\n\n    Returns:\n        list[TestResult]: List of results related to the test.\n    \"\"\"\n    if output_format == \"list\":\n        return [result for result in self._result_entries if str(result.test) == test_name]\n\n    result_manager_filtered = ListResult()\n    for result in self._result_entries:\n        if result.test == test_name:\n            result_manager_filtered.append(result)\n    return result_manager_filtered\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_results","title":"get_results","text":"
    get_results(output_format: str = 'native') -> Any\n

    Expose the list of all test results in different formats

    Supported formats:
    • native: ListResults format
    • list: a list of TestResult
    • json: a native JSON format

    Parameters:

    Name Type Description Default output_format str

    format selector. Can be either native/list/json. Defaults to \u2018native\u2019.

    'native'

    Returns:

    Name Type Description any Any

    List of results.

    Source code in anta/result_manager/__init__.py
    def get_results(self, output_format: str = \"native\") -> Any:\n\"\"\"\n    Expose list of all test results in different format\n\n    Support multiple format:\n      - native: ListResults format\n      - list: a list of TestResult\n      - json: a native JSON format\n\n    Args:\n        output_format (str, optional): format selector. Can be either native/list/json. Defaults to 'native'.\n\n    Returns:\n        any: List of results.\n    \"\"\"\n    if output_format == \"list\":\n        return list(self._result_entries)\n\n    if output_format == \"json\":\n        return json.dumps(pydantic_to_dict(self._result_entries), indent=4)\n\n    if output_format == \"native\":\n        # Default return for native format.\n        return self._result_entries\n    raise ValueError(f\"{output_format} is not a valid value ['list', 'json', 'native']\")\n
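    A short sketch of the three supported formats, assuming a populated manager:

    # "native" (default): the internal ListResult container
    native_results = manager.get_results()

    # "list": a plain list of TestResult objects, convenient to iterate over
    for result in manager.get_results(output_format="list"):
        print(result.name, result.test, result.result)

    # "json": a JSON string ready to be written to a file
    print(manager.get_results(output_format="json"))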
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_status","title":"get_status","text":"
    get_status(ignore_error: bool = False) -> str\n

    Returns the current status including error_status if ignore_error is False

    Source code in anta/result_manager/__init__.py
    def get_status(self, ignore_error: bool = False) -> str:\n\"\"\"\n    Returns the current status including error_status if ignore_error is False\n    \"\"\"\n    return \"error\" if self.error_status and not ignore_error else self.status\n
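    A brief sketch of how the aggregated status and the error flag interact, following the table above (manager is again assumed to be populated):

    overall = manager.get_status()
    if overall == "error":
        # At least one test ended in error; ignore_error=True falls back to the
        # success/failure/skipped aggregation described in the table above
        overall = manager.get_status(ignore_error=True)

    print(f"Overall status: {overall}")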
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_testcases","title":"get_testcases","text":"
    get_testcases() -> list[str]\n

    Get the list of names of all test cases in the current manager.

    Returns:

    Type Description list[str]

    list[str]: List of names for all tests.

    Source code in anta/result_manager/__init__.py
    def get_testcases(self) -> list[str]:\n\"\"\"\n    Get list of name of all test cases in current manager.\n\n    Returns:\n        list[str]: List of names for all tests.\n    \"\"\"\n    result_list = []\n    for testcase in self._result_entries:\n        if str(testcase.test) not in result_list:\n            result_list.append(str(testcase.test))\n    return result_list\n
    "},{"location":"api/result_manager_models/","title":"Result Manager models","text":""},{"location":"api/result_manager_models/#test-result-model","title":"Test Result model","text":""},{"location":"api/result_manager_models/#uml-diagram","title":"UML Diagram","text":""},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult","title":"TestResult","text":"

    Bases: BaseModel

    Describe the result of a test from a single device.

    Attributes:

    Name Type Description name str

    Device name where the test has run.

    test str

    Name of the test run on the device.

    categories List[str]

    List of categories the TestResult belongs to, by default the AntaTest categories.

    description str

    TestResult description, by default the AntaTest description.

    results str

    Result of the test. Can be one of [\u201cunset\u201d, \u201csuccess\u201d, \u201cfailure\u201d, \u201cerror\u201d, \u201cskipped\u201d].

    message str

    Message to report after the test if any.

    error Optional[Exception]

    Exception object if the test result is “error” and an Exception occurred

    custom_field Optional[str]

    Custom field to store a string for flexibility in integrating with ANTA

    "},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult.is_error","title":"is_error","text":"
    is_error(message: str | None = None, exception: Exception | None = None) -> None\n

    Helper to set status to error

    Parameters:

    Name Type Description Default exception Exception | None

    Optional Exception object related to the error

    None Source code in anta/result_manager/models.py
    def is_error(self, message: str | None = None, exception: Exception | None = None) -> None:\n\"\"\"\n    Helper to set status to error\n\n    Args:\n        exception: Optional Exception objet related to the error\n    \"\"\"\n    self._set_status(\"error\", message)\n    self.error = exception\n
    "},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult.is_failure","title":"is_failure","text":"
    is_failure(message: str | None = None) -> None\n

    Helper to set status to failure

    Parameters:

    Name Type Description Default message str | None

    Optional message related to the test

    None Source code in anta/result_manager/models.py
    def is_failure(self, message: str | None = None) -> None:\n\"\"\"\n    Helper to set status to failure\n\n    Args:\n        message: Optional message related to the test\n    \"\"\"\n    self._set_status(\"failure\", message)\n
    "},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult.is_skipped","title":"is_skipped","text":"
    is_skipped(message: str | None = None) -> None\n

    Helper to set status to skipped

    Parameters:

    Name Type Description Default message str | None

    Optional message related to the test

    None Source code in anta/result_manager/models.py
    def is_skipped(self, message: str | None = None) -> None:\n\"\"\"\n    Helper to set status to skipped\n\n    Args:\n        message: Optional message related to the test\n    \"\"\"\n    self._set_status(\"skipped\", message)\n
    "},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult.is_success","title":"is_success","text":"
    is_success(message: str | None = None) -> None\n

    Helper to set status to success

    Parameters:

    Name Type Description Default message str | None

    Optional message related to the test

    None Source code in anta/result_manager/models.py
    def is_success(self, message: str | None = None) -> None:\n\"\"\"\n    Helper to set status to success\n\n    Args:\n        message: Optional message related to the test\n    \"\"\"\n    self._set_status(\"success\", message)\n
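    These helpers are normally called from inside a test() method; the class below is hypothetical and the command output key it checks is illustrative:

    from anta.models import AntaCommand, AntaTest


    class VerifyModelNameExample(AntaTest):
        """Hypothetical test illustrating the TestResult helpers."""

        name = "VerifyModelNameExample"
        description = "Illustrates is_skipped / is_success / is_failure"
        categories = ["example"]
        commands = [AntaCommand(command="show version")]

        @AntaTest.anta_test
        def test(self) -> None:
            output = self.instance_commands[0].json_output
            if "modelName" not in output:  # key name is illustrative
                self.result.is_skipped("Device does not report a model name")
            elif output["modelName"].startswith("cEOS"):
                self.result.is_success()
            else:
                self.result.is_failure(f"Unexpected platform: {output['modelName']}")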
    "},{"location":"api/result_manager_models/#anta.result_manager.models.ListResult","title":"ListResult","text":"

    Bases: RootModel[List[TestResult]]

    List of results for all tests on all devices.

    Attributes:

    Name Type Description __root__ list[TestResult]

    A list of TestResult objects.

    "},{"location":"api/result_manager_models/#anta.result_manager.models.ListResult.append","title":"append","text":"
    append(value: TestResult) -> None\n

    Add support for append method.

    Source code in anta/result_manager/models.py
    def append(self, value: TestResult) -> None:\n\"\"\"Add support for append method.\"\"\"\n    self.root.append(value)\n
    "},{"location":"api/result_manager_models/#anta.result_manager.models.ListResult.extend","title":"extend","text":"
    extend(values: list[TestResult]) -> None\n

    Add support for extend method.

    Source code in anta/result_manager/models.py
    def extend(self, values: list[TestResult]) -> None:\n\"\"\"Add support for extend method.\"\"\"\n    self.root.extend(values)\n
    "},{"location":"api/tests.aaa/","title":"AAA","text":""},{"location":"api/tests.aaa/#anta-catalog-for-interfaces-tests","title":"ANTA catalog for interfaces tests","text":"

    Test functions related to various EOS AAA settings

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctConsoleMethods","title":"VerifyAcctConsoleMethods","text":"

    Bases: AntaTest

    Verifies the AAA accounting console method lists for different accounting types (system, exec, commands, dot1x).

    Expected Results
    • success: The test will pass if the provided AAA accounting console method list matches the configured accounting types.
    • failure: The test will fail if the provided AAA accounting console method list does NOT match the configured accounting types.
    Source code in anta/tests/aaa.py
    class VerifyAcctConsoleMethods(AntaTest):\n\"\"\"\n    Verifies the AAA accounting console method lists for different accounting types (system, exec, commands, dot1x).\n\n    Expected Results:\n        * success: The test will pass if the provided AAA accounting console method list is matching in the configured accounting types.\n        * failure: The test will fail if the provided AAA accounting console method list is NOT matching in the configured accounting types.\n    \"\"\"\n\n    name = \"VerifyAcctConsoleMethods\"\n    description = \"Verifies the AAA accounting console method lists for different accounting types (system, exec, commands, dot1x).\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show aaa methods accounting\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        methods: List[AAAAuthMethod]\n\"\"\"List of AAA accounting console methods. Methods should be in the right order\"\"\"\n        types: Set[Literal[\"commands\", \"exec\", \"system\", \"dot1x\"]]\n\"\"\"List of accounting console types to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        not_matching = []\n        not_configured = []\n        for k, v in command_output.items():\n            acct_type = k.replace(\"AcctMethods\", \"\")\n            if acct_type not in self.inputs.types:\n                # We do not need to verify this accounting type\n                continue\n            for methods in v.values():\n                if \"consoleAction\" not in methods:\n                    not_configured.append(acct_type)\n                if methods[\"consoleMethods\"] != self.inputs.methods:\n                    not_matching.append(acct_type)\n        if not_configured:\n            self.result.is_failure(f\"AAA console accounting is not configured for {not_configured}\")\n            return\n        if not not_matching:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"AAA accounting console methods {self.inputs.methods} are not matching for {not_matching}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctConsoleMethods.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    methods: List[AAAAuthMethod]\n\"\"\"List of AAA accounting console methods. Methods should be in the right order\"\"\"\n    types: Set[Literal[\"commands\", \"exec\", \"system\", \"dot1x\"]]\n\"\"\"List of accounting console types to verify\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctConsoleMethods.Input.methods","title":"methods instance-attribute","text":"
    methods: List[AAAAuthMethod]\n

    List of AAA accounting console methods. Methods should be in the right order

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctConsoleMethods.Input.types","title":"types instance-attribute","text":"
    types: Set[Literal['commands', 'exec', 'system', 'dot1x']]\n

    List of accounting console types to verify
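    A hedged sketch of driving this test directly from Python: the inputs keyword and the method/type values are assumptions (in practice these inputs usually come from a test catalog), and device is assumed to be an already instantiated AntaDevice.

    import asyncio

    from anta.tests.aaa import VerifyAcctConsoleMethods

    test = VerifyAcctConsoleMethods(
        device=device,
        inputs={"methods": ["tacacs+", "logging"], "types": ["exec", "commands"]},
    )
    result = asyncio.run(test.test())
    print(result.result, result.messages)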

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctDefaultMethods","title":"VerifyAcctDefaultMethods","text":"

    Bases: AntaTest

    Verifies the AAA accounting default method lists for different accounting types (system, exec, commands, dot1x).

    Expected Results
    • success: The test will pass if the provided AAA accounting default method list matches the configured accounting types.
    • failure: The test will fail if the provided AAA accounting default method list does NOT match the configured accounting types.
    Source code in anta/tests/aaa.py
    class VerifyAcctDefaultMethods(AntaTest):\n\"\"\"\n    Verifies the AAA accounting default method lists for different accounting types (system, exec, commands, dot1x).\n\n    Expected Results:\n        * success: The test will pass if the provided AAA accounting default method list is matching in the configured accounting types.\n        * failure: The test will fail if the provided AAA accounting default method list is NOT matching in the configured accounting types.\n    \"\"\"\n\n    name = \"VerifyAcctDefaultMethods\"\n    description = \"Verifies the AAA accounting default method lists for different accounting types (system, exec, commands, dot1x).\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show aaa methods accounting\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        methods: List[AAAAuthMethod]\n\"\"\"List of AAA accounting methods. Methods should be in the right order\"\"\"\n        types: Set[Literal[\"commands\", \"exec\", \"system\", \"dot1x\"]]\n\"\"\"List of accounting types to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        not_matching = []\n        not_configured = []\n        for k, v in command_output.items():\n            acct_type = k.replace(\"AcctMethods\", \"\")\n            if acct_type not in self.inputs.types:\n                # We do not need to verify this accounting type\n                continue\n            for methods in v.values():\n                if \"defaultAction\" not in methods:\n                    not_configured.append(acct_type)\n                if methods[\"defaultMethods\"] != self.inputs.methods:\n                    not_matching.append(acct_type)\n        if not_configured:\n            self.result.is_failure(f\"AAA default accounting is not configured for {not_configured}\")\n            return\n        if not not_matching:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"AAA accounting default methods {self.inputs.methods} are not matching for {not_matching}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctDefaultMethods.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    methods: List[AAAAuthMethod]\n\"\"\"List of AAA accounting methods. Methods should be in the right order\"\"\"\n    types: Set[Literal[\"commands\", \"exec\", \"system\", \"dot1x\"]]\n\"\"\"List of accounting types to verify\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctDefaultMethods.Input.methods","title":"methods instance-attribute","text":"
    methods: List[AAAAuthMethod]\n

    List of AAA accounting methods. Methods should be in the right order

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctDefaultMethods.Input.types","title":"types instance-attribute","text":"
    types: Set[Literal['commands', 'exec', 'system', 'dot1x']]\n

    List of accounting types to verify

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthenMethods","title":"VerifyAuthenMethods","text":"

    Bases: AntaTest

    Verifies the AAA authentication method lists for different authentication types (login, enable, dot1x).

    Expected Results
    • success: The test will pass if the provided AAA authentication method list matches the configured authentication types.
    • failure: The test will fail if the provided AAA authentication method list does NOT match the configured authentication types.
    Source code in anta/tests/aaa.py
    class VerifyAuthenMethods(AntaTest):\n\"\"\"\n    Verifies the AAA authentication method lists for different authentication types (login, enable, dot1x).\n\n    Expected Results:\n        * success: The test will pass if the provided AAA authentication method list is matching in the configured authentication types.\n        * failure: The test will fail if the provided AAA authentication method list is NOT matching in the configured authentication types.\n    \"\"\"\n\n    name = \"VerifyAuthenMethods\"\n    description = \"Verifies the AAA authentication method lists for different authentication types (login, enable, dot1x).\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show aaa methods authentication\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        methods: List[AAAAuthMethod]\n\"\"\"List of AAA authentication methods. Methods should be in the right order\"\"\"\n        types: Set[Literal[\"login\", \"enable\", \"dot1x\"]]\n\"\"\"List of authentication types to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        not_matching = []\n        for k, v in command_output.items():\n            auth_type = k.replace(\"AuthenMethods\", \"\")\n            if auth_type not in self.inputs.types:\n                # We do not need to verify this accounting type\n                continue\n            if auth_type == \"login\":\n                if \"login\" not in v:\n                    self.result.is_failure(\"AAA authentication methods are not configured for login console\")\n                    return\n                if v[\"login\"][\"methods\"] != self.inputs.methods:\n                    self.result.is_failure(f\"AAA authentication methods {self.inputs.methods} are not matching for login console\")\n                    return\n            for methods in v.values():\n                if methods[\"methods\"] != self.inputs.methods:\n                    not_matching.append(auth_type)\n        if not not_matching:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"AAA authentication methods {self.inputs.methods} are not matching for {not_matching}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthenMethods.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    methods: List[AAAAuthMethod]\n\"\"\"List of AAA authentication methods. Methods should be in the right order\"\"\"\n    types: Set[Literal[\"login\", \"enable\", \"dot1x\"]]\n\"\"\"List of authentication types to verify\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthenMethods.Input.methods","title":"methods instance-attribute","text":"
    methods: List[AAAAuthMethod]\n

    List of AAA authentication methods. Methods should be in the right order

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthenMethods.Input.types","title":"types instance-attribute","text":"
    types: Set[Literal['login', 'enable', 'dot1x']]\n

    List of authentication types to verify

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthzMethods","title":"VerifyAuthzMethods","text":"

    Bases: AntaTest

    Verifies the AAA authorization method lists for different authorization types (commands, exec).

    Expected Results
    • success: The test will pass if the provided AAA authorization method list matches the configured authorization types.
    • failure: The test will fail if the provided AAA authorization method list does NOT match the configured authorization types.
    Source code in anta/tests/aaa.py
    class VerifyAuthzMethods(AntaTest):\n\"\"\"\n    Verifies the AAA authorization method lists for different authorization types (commands, exec).\n\n    Expected Results:\n        * success: The test will pass if the provided AAA authorization method list is matching in the configured authorization types.\n        * failure: The test will fail if the provided AAA authorization method list is NOT matching in the configured authorization types.\n    \"\"\"\n\n    name = \"VerifyAuthzMethods\"\n    description = \"Verifies the AAA authorization method lists for different authorization types (commands, exec).\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show aaa methods authorization\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        methods: List[AAAAuthMethod]\n\"\"\"List of AAA authorization methods. Methods should be in the right order\"\"\"\n        types: Set[Literal[\"commands\", \"exec\"]]\n\"\"\"List of authorization types to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        not_matching = []\n        for k, v in command_output.items():\n            authz_type = k.replace(\"AuthzMethods\", \"\")\n            if authz_type not in self.inputs.types:\n                # We do not need to verify this accounting type\n                continue\n            for methods in v.values():\n                if methods[\"methods\"] != self.inputs.methods:\n                    not_matching.append(authz_type)\n        if not not_matching:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"AAA authorization methods {self.inputs.methods} are not matching for {not_matching}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthzMethods.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    methods: List[AAAAuthMethod]\n\"\"\"List of AAA authorization methods. Methods should be in the right order\"\"\"\n    types: Set[Literal[\"commands\", \"exec\"]]\n\"\"\"List of authorization types to verify\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthzMethods.Input.methods","title":"methods instance-attribute","text":"
    methods: List[AAAAuthMethod]\n

    List of AAA authorization methods. Methods should be in the right order

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthzMethods.Input.types","title":"types instance-attribute","text":"
    types: Set[Literal['commands', 'exec']]\n

    List of authorization types to verify

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServerGroups","title":"VerifyTacacsServerGroups","text":"

    Bases: AntaTest

    Verifies if the provided TACACS server group(s) are configured.

    Expected Results
    • success: The test will pass if the provided TACACS server group(s) are configured.
    • failure: The test will fail if one or all the provided TACACS server group(s) are NOT configured.
    Source code in anta/tests/aaa.py
    class VerifyTacacsServerGroups(AntaTest):\n\"\"\"\n    Verifies if the provided TACACS server group(s) are configured.\n\n    Expected Results:\n        * success: The test will pass if the provided TACACS server group(s) are configured.\n        * failure: The test will fail if one or all the provided TACACS server group(s) are NOT configured.\n    \"\"\"\n\n    name = \"VerifyTacacsServerGroups\"\n    description = \"Verifies if the provided TACACS server group(s) are configured.\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show tacacs\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        groups: List[str]\n\"\"\"List of TACACS server group\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        tacacs_groups = command_output[\"groups\"]\n        if not tacacs_groups:\n            self.result.is_failure(\"No TACACS server group(s) are configured\")\n            return\n        not_configured = [group for group in self.inputs.groups if group not in tacacs_groups]\n        if not not_configured:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"TACACS server group(s) {not_configured} are not configured\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServerGroups.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    groups: List[str]\n\"\"\"List of TACACS server group\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServerGroups.Input.groups","title":"groups instance-attribute","text":"
    groups: List[str]\n

    List of TACACS server group

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServers","title":"VerifyTacacsServers","text":"

    Bases: AntaTest

    Verifies TACACS servers are configured for a specified VRF.

    Expected Results
    • success: The test will pass if the provided TACACS servers are configured in the specified VRF.
    • failure: The test will fail if the provided TACACS servers are NOT configured in the specified VRF.
    Source code in anta/tests/aaa.py
    class VerifyTacacsServers(AntaTest):\n\"\"\"\n    Verifies TACACS servers are configured for a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided TACACS servers are configured in the specified VRF.\n        * failure: The test will fail if the provided TACACS servers are NOT configured in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyTacacsServers\"\n    description = \"Verifies TACACS servers are configured for a specified VRF.\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show tacacs\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        servers: List[IPv4Address]\n\"\"\"List of TACACS servers\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF to transport TACACS messages\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        tacacs_servers = command_output[\"tacacsServers\"]\n        if not tacacs_servers:\n            self.result.is_failure(\"No TACACS servers are configured\")\n            return\n        not_configured = [\n            str(server)\n            for server in self.inputs.servers\n            if not any(\n                str(server) == tacacs_server[\"serverInfo\"][\"hostname\"] and self.inputs.vrf == tacacs_server[\"serverInfo\"][\"vrf\"] for tacacs_server in tacacs_servers\n            )\n        ]\n        if not not_configured:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"TACACS servers {not_configured} are not configured in VRF {self.inputs.vrf}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServers.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    servers: List[IPv4Address]\n\"\"\"List of TACACS servers\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF to transport TACACS messages\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServers.Input.servers","title":"servers instance-attribute","text":"
    servers: List[IPv4Address]\n

    List of TACACS servers

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServers.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF to transport TACACS messages

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsSourceIntf","title":"VerifyTacacsSourceIntf","text":"

    Bases: AntaTest

    Verifies TACACS source-interface for a specified VRF.

    Expected Results
    • success: The test will pass if the provided TACACS source-interface is configured in the specified VRF.
    • failure: The test will fail if the provided TACACS source-interface is NOT configured in the specified VRF.
    Source code in anta/tests/aaa.py
    class VerifyTacacsSourceIntf(AntaTest):\n\"\"\"\n    Verifies TACACS source-interface for a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided TACACS source-interface is configured in the specified VRF.\n        * failure: The test will fail if the provided TACACS source-interface is NOT configured in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyTacacsSourceIntf\"\n    description = \"Verifies TACACS source-interface for a specified VRF.\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show tacacs\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        intf: str\n\"\"\"Source-interface to use as source IP of TACACS messages\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF to transport TACACS messages\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        try:\n            if command_output[\"srcIntf\"][self.inputs.vrf] == self.inputs.intf:\n                self.result.is_success()\n            else:\n                self.result.is_failure(f\"Wrong source-interface configured in VRF {self.inputs.vrf}\")\n        except KeyError:\n            self.result.is_failure(f\"Source-interface {self.inputs.intf} is not configured in VRF {self.inputs.vrf}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsSourceIntf.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    intf: str\n\"\"\"Source-interface to use as source IP of TACACS messages\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF to transport TACACS messages\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsSourceIntf.Input.intf","title":"intf instance-attribute","text":"
    intf: str\n

    Source-interface to use as source IP of TACACS messages

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsSourceIntf.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF to transport TACACS messages

    "},{"location":"api/tests.configuration/","title":"Configuration","text":""},{"location":"api/tests.configuration/#anta-catalog-for-configuration-tests","title":"ANTA catalog for configuration tests","text":"

    Test functions related to the device configuration

    "},{"location":"api/tests.configuration/#anta.tests.configuration.VerifyRunningConfigDiffs","title":"VerifyRunningConfigDiffs","text":"

    Bases: AntaTest

    Verifies there is no difference between the running-config and the startup-config

    Source code in anta/tests/configuration.py
    class VerifyRunningConfigDiffs(AntaTest):\n\"\"\"\n    Verifies there is no difference between the running-config and the startup-config\n    \"\"\"\n\n    name = \"VerifyRunningConfigDiffs\"\n    description = \"Verifies there is no difference between the running-config and the startup-config\"\n    categories = [\"configuration\"]\n    commands = [AntaCommand(command=\"show running-config diffs\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].output\n        if command_output is None or command_output == \"\":\n            self.result.is_success()\n        else:\n            self.result.is_failure()\n            self.result.is_failure(str(command_output))\n
    "},{"location":"api/tests.configuration/#anta.tests.configuration.VerifyZeroTouch","title":"VerifyZeroTouch","text":"

    Bases: AntaTest

    Verifies ZeroTouch is disabled

    Source code in anta/tests/configuration.py
    class VerifyZeroTouch(AntaTest):\n\"\"\"\n    Verifies ZeroTouch is disabled\n    \"\"\"\n\n    name = \"VerifyZeroTouch\"\n    description = \"Verifies ZeroTouch is disabled\"\n    categories = [\"configuration\"]\n    commands = [AntaCommand(command=\"show zerotouch\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].output\n        assert isinstance(command_output, dict)\n        if command_output[\"mode\"] == \"disabled\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"ZTP is NOT disabled\")\n
    "},{"location":"api/tests.connectivity/","title":"Connectivity","text":""},{"location":"api/tests.connectivity/#anta-catalog-for-connectivity-tests","title":"ANTA catalog for connectivity tests","text":"

    Test functions related to various connectivity checks

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors","title":"VerifyLLDPNeighbors","text":"

    Bases: AntaTest

    This test verifies that the provided LLDP neighbors are present and connected with the correct configuration.

    Expected Results
    • success: The test will pass if each of the provided LLDP neighbors is present and connected to the specified port and device.
    • failure: The test will fail if any of the following conditions are met:
      • The provided LLDP neighbor is not found.
      • The system name or port of the LLDP neighbor does not match the provided information.
    Source code in anta/tests/connectivity.py
    class VerifyLLDPNeighbors(AntaTest):\n\"\"\"\n    This test verifies that the provided LLDP neighbors are present and connected with the correct configuration.\n\n    Expected Results:\n        * success: The test will pass if each of the provided LLDP neighbors is present and connected to the specified port and device.\n        * failure: The test will fail if any of the following conditions are met:\n            - The provided LLDP neighbor is not found.\n            - The system name or port of the LLDP neighbor does not match the provided information.\n    \"\"\"\n\n    name = \"VerifyLLDPNeighbors\"\n    description = \"Verifies that the provided LLDP neighbors are present and connected with the correct configuration.\"\n    categories = [\"connectivity\"]\n    commands = [AntaCommand(command=\"show lldp neighbors detail\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        neighbors: List[Neighbor]\n\"\"\"List of LLDP neighbors\"\"\"\n\n        class Neighbor(BaseModel):\n\"\"\"LLDP neighbor\"\"\"\n\n            port: Interface\n\"\"\"LLDP port\"\"\"\n            neighbor_device: str\n\"\"\"LLDP neighbor device\"\"\"\n            neighbor_port: Interface\n\"\"\"LLDP neighbor port\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n\n        self.result.is_success()\n\n        no_lldp_neighbor = []\n        wrong_lldp_neighbor = []\n\n        for neighbor in self.inputs.neighbors:\n            if len(lldp_neighbor_info := command_output[\"lldpNeighbors\"][neighbor.port][\"lldpNeighborInfo\"]) == 0:\n                no_lldp_neighbor.append(neighbor.port)\n\n            elif (\n                lldp_neighbor_info[0][\"systemName\"] != neighbor.neighbor_device\n                or lldp_neighbor_info[0][\"neighborInterfaceInfo\"][\"interfaceId_v2\"] != neighbor.neighbor_port\n            ):\n                wrong_lldp_neighbor.append(neighbor.port)\n\n        if no_lldp_neighbor:\n            self.result.is_failure(f\"The following port(s) have no LLDP neighbor: {no_lldp_neighbor}\")\n\n        if wrong_lldp_neighbor:\n            self.result.is_failure(f\"The following port(s) have the wrong LLDP neighbor: {wrong_lldp_neighbor}\")\n
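    As an illustration, the expected neighbors could be declared through the Input model shown above; a minimal sketch with placeholder port and device names, assuming the Interface custom type accepts standard EOS interface names such as "Ethernet1".
    from anta.tests.connectivity import VerifyLLDPNeighbors

    # Placeholder topology data; the nested dicts are validated into Neighbor models by pydantic
    lldp_inputs = VerifyLLDPNeighbors.Input(
        neighbors=[
            {"port": "Ethernet1", "neighbor_device": "spine1", "neighbor_port": "Ethernet1"},
        ]
    )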
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/connectivity.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    neighbors: List[Neighbor]\n\"\"\"List of LLDP neighbors\"\"\"\n\n    class Neighbor(BaseModel):\n\"\"\"LLDP neighbor\"\"\"\n\n        port: Interface\n\"\"\"LLDP port\"\"\"\n        neighbor_device: str\n\"\"\"LLDP neighbor device\"\"\"\n        neighbor_port: Interface\n\"\"\"LLDP neighbor port\"\"\"\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.neighbors","title":"neighbors instance-attribute","text":"
    neighbors: List[Neighbor]\n

    List of LLDP neighbors

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.Neighbor","title":"Neighbor","text":"

    Bases: BaseModel

    LLDP neighbor

    Source code in anta/tests/connectivity.py
    class Neighbor(BaseModel):\n\"\"\"LLDP neighbor\"\"\"\n\n    port: Interface\n\"\"\"LLDP port\"\"\"\n    neighbor_device: str\n\"\"\"LLDP neighbor device\"\"\"\n    neighbor_port: Interface\n\"\"\"LLDP neighbor port\"\"\"\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.Neighbor.neighbor_device","title":"neighbor_device instance-attribute","text":"
    neighbor_device: str\n

    LLDP neighbor device

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.Neighbor.neighbor_port","title":"neighbor_port instance-attribute","text":"
    neighbor_port: Interface\n

    LLDP neighbor port

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.Neighbor.port","title":"port instance-attribute","text":"
    port: Interface\n

    LLDP port

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability","title":"VerifyReachability","text":"

    Bases: AntaTest

    Test network reachability to one or more destination IP(s).

    Expected Results
    • success: The test will pass if all destination IP(s) are reachable.
    • failure: The test will fail if one or more destination IP(s) are unreachable.
    Source code in anta/tests/connectivity.py
    class VerifyReachability(AntaTest):\n\"\"\"\n    Test network reachability to one or many destination IP(s).\n\n    Expected Results:\n        * success: The test will pass if all destination IP(s) are reachable.\n        * failure: The test will fail if one or many destination IP(s) are unreachable.\n    \"\"\"\n\n    name = \"VerifyReachability\"\n    description = \"Test the network reachability to one or many destination IP(s).\"\n    categories = [\"connectivity\"]\n    commands = [AntaTemplate(template=\"ping vrf {vrf} {destination} source {source} repeat 2\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        hosts: List[Host]\n\"\"\"List of hosts to ping\"\"\"\n\n        class Host(BaseModel):\n\"\"\"Remote host to ping\"\"\"\n\n            destination: IPv4Address\n\"\"\"IPv4 address to ping\"\"\"\n            source: Union[IPv4Address, Interface]\n\"\"\"IPv4 address source IP or Egress interface to use\"\"\"\n            vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(destination=host.destination, source=host.source, vrf=host.vrf) for host in self.inputs.hosts]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        failures = []\n        for command in self.instance_commands:\n            if command.params and \"source\" in command.params and \"destination\" in command.params:\n                src, dst = command.params[\"source\"], command.params[\"destination\"]\n            if \"2 received\" not in command.json_output[\"messages\"][0]:\n                failures.append((str(src), str(dst)))\n        if not failures:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Connectivity test failed for the following source-destination pairs: {failures}\")\n
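    Since this test renders one ping command per host from the template above, its inputs can be sketched as follows; the addresses, interface and VRF names are placeholders, and the anta package is assumed to be installed.
    from anta.tests.connectivity import VerifyReachability

    # Placeholder hosts; one "ping vrf {vrf} {destination} source {source} repeat 2" command is rendered per entry
    ping_inputs = VerifyReachability.Input(
        hosts=[
            {"destination": "10.0.0.1", "source": "Loopback0"},  # vrf defaults to "default"
            {"destination": "10.0.0.2", "source": "10.255.0.1", "vrf": "MGMT"},
        ]
    )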
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/connectivity.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    hosts: List[Host]\n\"\"\"List of hosts to ping\"\"\"\n\n    class Host(BaseModel):\n\"\"\"Remote host to ping\"\"\"\n\n        destination: IPv4Address\n\"\"\"IPv4 address to ping\"\"\"\n        source: Union[IPv4Address, Interface]\n\"\"\"IPv4 address source IP or Egress interface to use\"\"\"\n        vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.hosts","title":"hosts instance-attribute","text":"
    hosts: List[Host]\n

    List of hosts to ping

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.Host","title":"Host","text":"

    Bases: BaseModel

    Remote host to ping

    Source code in anta/tests/connectivity.py
    class Host(BaseModel):\n\"\"\"Remote host to ping\"\"\"\n\n    destination: IPv4Address\n\"\"\"IPv4 address to ping\"\"\"\n    source: Union[IPv4Address, Interface]\n\"\"\"IPv4 address source IP or Egress interface to use\"\"\"\n    vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.Host.destination","title":"destination instance-attribute","text":"
    destination: IPv4Address\n

    IPv4 address to ping

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.Host.source","title":"source instance-attribute","text":"
    source: Union[IPv4Address, Interface]\n

    Source IPv4 address or egress interface to use

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.Host.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    VRF context

    "},{"location":"api/tests.field_notices/","title":"Field Notices","text":""},{"location":"api/tests.field_notices/#anta-catalog-for-field-notices-tests","title":"ANTA catalog for Field Notices tests","text":"

    Test functions to flag field notices

    "},{"location":"api/tests.field_notices/#anta.tests.field_notices.VerifyFieldNotice44Resolution","title":"VerifyFieldNotice44Resolution","text":"

    Bases: AntaTest

    Verifies that the device is using an Aboot version that fixes the bug discussed in Field Notice 44 (Aboot manages system settings prior to EOS initialization).

    https://www.arista.com/en/support/advisories-notices/field-notice/8756-field-notice-44

    Source code in anta/tests/field_notices.py
    class VerifyFieldNotice44Resolution(AntaTest):\n\"\"\"\n    Verifies the device is using an Aboot version that fix the bug discussed\n    in the field notice 44 (Aboot manages system settings prior to EOS initialization).\n\n    https://www.arista.com/en/support/advisories-notices/field-notice/8756-field-notice-44\n    \"\"\"\n\n    name = \"VerifyFieldNotice44Resolution\"\n    description = (\n        \"Verifies the device is using an Aboot version that fix the bug discussed in the field notice 44 (Aboot manages system settings prior to EOS initialization)\"\n    )\n    categories = [\"field notices\", \"software\"]\n    commands = [AntaCommand(command=\"show version detail\")]\n\n    # TODO maybe implement ONLY ON PLATFORMS instead\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n\n        devices = [\n            \"DCS-7010T-48\",\n            \"DCS-7010T-48-DC\",\n            \"DCS-7050TX-48\",\n            \"DCS-7050TX-64\",\n            \"DCS-7050TX-72\",\n            \"DCS-7050TX-72Q\",\n            \"DCS-7050TX-96\",\n            \"DCS-7050TX2-128\",\n            \"DCS-7050SX-64\",\n            \"DCS-7050SX-72\",\n            \"DCS-7050SX-72Q\",\n            \"DCS-7050SX2-72Q\",\n            \"DCS-7050SX-96\",\n            \"DCS-7050SX2-128\",\n            \"DCS-7050QX-32S\",\n            \"DCS-7050QX2-32S\",\n            \"DCS-7050SX3-48YC12\",\n            \"DCS-7050CX3-32S\",\n            \"DCS-7060CX-32S\",\n            \"DCS-7060CX2-32S\",\n            \"DCS-7060SX2-48YC6\",\n            \"DCS-7160-48YC6\",\n            \"DCS-7160-48TC6\",\n            \"DCS-7160-32CQ\",\n            \"DCS-7280SE-64\",\n            \"DCS-7280SE-68\",\n            \"DCS-7280SE-72\",\n            \"DCS-7150SC-24-CLD\",\n            \"DCS-7150SC-64-CLD\",\n            \"DCS-7020TR-48\",\n            \"DCS-7020TRA-48\",\n            \"DCS-7020SR-24C2\",\n            \"DCS-7020SRG-24C2\",\n            \"DCS-7280TR-48C6\",\n            \"DCS-7280TRA-48C6\",\n            \"DCS-7280SR-48C6\",\n            \"DCS-7280SRA-48C6\",\n            \"DCS-7280SRAM-48C6\",\n            \"DCS-7280SR2K-48C6-M\",\n            \"DCS-7280SR2-48YC6\",\n            \"DCS-7280SR2A-48YC6\",\n            \"DCS-7280SRM-40CX2\",\n            \"DCS-7280QR-C36\",\n            \"DCS-7280QRA-C36S\",\n        ]\n        variants = [\"-SSD-F\", \"-SSD-R\", \"-M-F\", \"-M-R\", \"-F\", \"-R\"]\n\n        model = command_output[\"modelName\"]\n        # TODO this list could be a regex\n        for variant in variants:\n            model = model.replace(variant, \"\")\n        if model not in devices:\n            self.result.is_skipped(\"device is not impacted by FN044\")\n            return\n\n        for component in command_output[\"details\"][\"components\"]:\n            if component[\"name\"] == \"Aboot\":\n                aboot_version = component[\"version\"].split(\"-\")[2]\n        self.result.is_success()\n        if aboot_version.startswith(\"4.0.\") and int(aboot_version.split(\".\")[2]) < 7:\n            self.result.is_failure(f\"device is running incorrect version of aboot ({aboot_version})\")\n        elif aboot_version.startswith(\"4.1.\") and int(aboot_version.split(\".\")[2]) < 1:\n            self.result.is_failure(f\"device is running incorrect version of aboot ({aboot_version})\")\n        elif aboot_version.startswith(\"6.0.\") and int(aboot_version.split(\".\")[2]) < 9:\n            
self.result.is_failure(f\"device is running incorrect version of aboot ({aboot_version})\")\n        elif aboot_version.startswith(\"6.1.\") and int(aboot_version.split(\".\")[2]) < 7:\n            self.result.is_failure(f\"device is running incorrect version of aboot ({aboot_version})\")\n
    "},{"location":"api/tests.field_notices/#anta.tests.field_notices.VerifyFieldNotice72Resolution","title":"VerifyFieldNotice72Resolution","text":"

    Bases: AntaTest

    Checks if the device is potentially exposed to Field Notice 72, and if the issue has been mitigated.

    https://www.arista.com/en/support/advisories-notices/field-notice/17410-field-notice-0072

    Source code in anta/tests/field_notices.py
    class VerifyFieldNotice72Resolution(AntaTest):\n\"\"\"\n    Checks if the device is potentially exposed to Field Notice 72, and if the issue has been mitigated.\n\n    https://www.arista.com/en/support/advisories-notices/field-notice/17410-field-notice-0072\n    \"\"\"\n\n    name = \"VerifyFieldNotice72Resolution\"\n    description = \"Verifies if the device has exposeure to FN72, and if the issue has been mitigated\"\n    categories = [\"field notices\", \"software\"]\n    commands = [AntaCommand(command=\"show version detail\")]\n\n    # TODO maybe implement ONLY ON PLATFORMS instead\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n\n        devices = [\"DCS-7280SR3-48YC8\", \"DCS-7280SR3K-48YC8\"]\n        variants = [\"-SSD-F\", \"-SSD-R\", \"-M-F\", \"-M-R\", \"-F\", \"-R\"]\n        model = command_output[\"modelName\"]\n\n        for variant in variants:\n            model = model.replace(variant, \"\")\n        if model not in devices:\n            self.result.is_skipped(\"Platform is not impacted by FN072\")\n            return\n\n        serial = command_output[\"serialNumber\"]\n        number = int(serial[3:7])\n\n        if \"JPE\" not in serial and \"JAS\" not in serial:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        if model == \"DCS-7280SR3-48YC8\" and \"JPE\" in serial and number >= 2131:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        if model == \"DCS-7280SR3-48YC8\" and \"JAS\" in serial and number >= 2041:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        if model == \"DCS-7280SR3K-48YC8\" and \"JPE\" in serial and number >= 2134:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        if model == \"DCS-7280SR3K-48YC8\" and \"JAS\" in serial and number >= 2041:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        # Because each of the if checks above will return if taken, we only run the long\n        # check if we get this far\n        for entry in command_output[\"details\"][\"components\"]:\n            if entry[\"name\"] == \"FixedSystemvrm1\":\n                if int(entry[\"version\"]) < 7:\n                    self.result.is_failure(\"Device is exposed to FN72\")\n                else:\n                    self.result.is_success(\"FN72 is mitigated\")\n                return\n        # We should never hit this point\n        self.result.is_error(message=\"Error in running test - FixedSystemvrm1 not found\")\n        return\n
    "},{"location":"api/tests.hardware/","title":"Hardware","text":""},{"location":"api/tests.hardware/#anta-catalog-for-hardware-tests","title":"ANTA catalog for hardware tests","text":"

    Test functions related to the hardware or environment

    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyAdverseDrops","title":"VerifyAdverseDrops","text":"

    Bases: AntaTest

    This test verifies if there are no adverse drops on DCS7280E and DCS7500E.

    Expected Results
    • success: The test will pass if there are no adverse drops.
    • failure: The test will fail if there are adverse drops.
    Source code in anta/tests/hardware.py
    class VerifyAdverseDrops(AntaTest):\n\"\"\"\n    This test verifies if there are no adverse drops on DCS7280E and DCS7500E.\n\n    Expected Results:\n      * success: The test will pass if there are no adverse drops.\n      * failure: The test will fail if there are adverse drops.\n    \"\"\"\n\n    name = \"VerifyAdverseDrops\"\n    description = \"Verifies there are no adverse drops on DCS7280E and DCS7500E\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show hardware counter drop\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        total_adverse_drop = command_output[\"totalAdverseDrops\"] if \"totalAdverseDrops\" in command_output.keys() else \"\"\n        if total_adverse_drop == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device totalAdverseDrops counter is: '{total_adverse_drop}'\")\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentCooling","title":"VerifyEnvironmentCooling","text":"

    Bases: AntaTest

    This test verifies the status of the fans.

    Expected Results
    • success: The test will pass if all fan statuses are within the accepted states list.
    • failure: The test will fail if any fan status is not within the accepted states list.
    Source code in anta/tests/hardware.py
    class VerifyEnvironmentCooling(AntaTest):\n\"\"\"\n    This test verifies the fans status.\n\n    Expected Results:\n      * success: The test will pass if the fans status are within the accepted states list.\n      * failure: The test will fail if some fans status is not within the accepted states list.\n    \"\"\"\n\n    name = \"VerifyEnvironmentCooling\"\n    description = \"Verifies if the fans status are within the accepted states list.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment cooling\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        states: List[str]\n\"\"\"Accepted states list for fan status\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        self.result.is_success()\n        # First go through power supplies fans\n        for power_supply in command_output.get(\"powerSupplySlots\", []):\n            for fan in power_supply.get(\"fans\", []):\n                if (state := fan[\"status\"]) not in self.inputs.states:\n                    self.result.is_failure(f\"Fan {fan['label']} on PowerSupply {power_supply['label']} is: '{state}'\")\n        # Then go through fan trays\n        for fan_tray in command_output.get(\"fanTraySlots\", []):\n            for fan in fan_tray.get(\"fans\", []):\n                if (state := fan[\"status\"]) not in self.inputs.states:\n                    self.result.is_failure(f\"Fan {fan['label']} on Fan Tray {fan_tray['label']} is: '{state}'\")\n
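    For illustration, the accepted fan states could be provided like this; a minimal sketch in which "ok" is only an example of a state considered healthy.
    from anta.tests.hardware import VerifyEnvironmentCooling

    # "ok" is an illustrative accepted state; extend the list with any other states considered healthy
    cooling_inputs = VerifyEnvironmentCooling.Input(states=["ok"])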
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentCooling.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/hardware.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    states: List[str]\n\"\"\"Accepted states list for fan status\"\"\"\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentCooling.Input.states","title":"states instance-attribute","text":"
    states: List[str]\n

    Accepted states list for fan status

    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentPower","title":"VerifyEnvironmentPower","text":"

    Bases: AntaTest

    This test verifies the status of the power supplies.

    Expected Results
    • success: The test will pass if all power supply statuses are within the accepted states list.
    • failure: The test will fail if any power supply status is not within the accepted states list.
    Source code in anta/tests/hardware.py
    class VerifyEnvironmentPower(AntaTest):\n\"\"\"\n    This test verifies the power supplies status.\n\n    Expected Results:\n      * success: The test will pass if the power supplies status are within the accepted states list.\n      * failure: The test will fail if some power supplies status is not within the accepted states list.\n    \"\"\"\n\n    name = \"VerifyEnvironmentPower\"\n    description = \"Verifies if the power supplies status are within the accepted states list.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment power\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        states: List[str]\n\"\"\"Accepted states list for power supplies status\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        power_supplies = command_output[\"powerSupplies\"] if \"powerSupplies\" in command_output.keys() else \"{}\"\n        wrong_power_supplies = {\n            powersupply: {\"state\": value[\"state\"]} for powersupply, value in dict(power_supplies).items() if value[\"state\"] not in self.inputs.states\n        }\n        if not wrong_power_supplies:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following power supplies status are not in the accepted states list: {wrong_power_supplies}\")\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentPower.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/hardware.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    states: List[str]\n\"\"\"Accepted states list for power supplies status\"\"\"\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentPower.Input.states","title":"states instance-attribute","text":"
    states: List[str]\n

    Accepted states list for power supplies status

    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentSystemCooling","title":"VerifyEnvironmentSystemCooling","text":"

    Bases: AntaTest

    This test verifies the device\u2019s system cooling.

    Expected Results
    • success: The test will pass if the system cooling status is OK: \u2018coolingOk\u2019.
    • failure: The test will fail if the system cooling status is NOT OK.
    Source code in anta/tests/hardware.py
    class VerifyEnvironmentSystemCooling(AntaTest):\n\"\"\"\n    This test verifies the device's system cooling.\n\n    Expected Results:\n      * success: The test will pass if the system cooling status is OK: 'coolingOk'.\n      * failure: The test will fail if the system cooling status is NOT OK.\n    \"\"\"\n\n    name = \"VerifyEnvironmentSystemCooling\"\n    description = \"Verifies the system cooling status.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment cooling\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        sys_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        self.result.is_success()\n        if sys_status != \"coolingOk\":\n            self.result.is_failure(f\"Device system cooling is not OK: '{sys_status}'\")\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTemperature","title":"VerifyTemperature","text":"

    Bases: AntaTest

    This test verifies if the device temperature is within acceptable limits.

    Expected Results
    • success: The test will pass if the device temperature is currently OK: \u2018temperatureOk\u2019.
    • failure: The test will fail if the device temperature is NOT OK.
    Source code in anta/tests/hardware.py
    class VerifyTemperature(AntaTest):\n\"\"\"\n    This test verifies if the device temperature is within acceptable limits.\n\n    Expected Results:\n      * success: The test will pass if the device temperature is currently OK: 'temperatureOk'.\n      * failure: The test will fail if the device temperature is NOT OK.\n    \"\"\"\n\n    name = \"VerifyTemperature\"\n    description = \"Verifies if the device temperature is within the acceptable range.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment temperature\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        temperature_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        if temperature_status == \"temperatureOk\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device temperature exceeds acceptable limits. Current system status: '{temperature_status}'\")\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTransceiversManufacturers","title":"VerifyTransceiversManufacturers","text":"

    Bases: AntaTest

    This test verifies if all the transceivers come from approved manufacturers.

    Expected Results
    • success: The test will pass if all transceivers are from approved manufacturers.
    • failure: The test will fail if some transceivers are from unapproved manufacturers.
    Source code in anta/tests/hardware.py
    class VerifyTransceiversManufacturers(AntaTest):\n\"\"\"\n    This test verifies if all the transceivers come from approved manufacturers.\n\n    Expected Results:\n      * success: The test will pass if all transceivers are from approved manufacturers.\n      * failure: The test will fail if some transceivers are from unapproved manufacturers.\n    \"\"\"\n\n    name = \"VerifyTransceiversManufacturers\"\n    description = \"Verifies the transceiver's manufacturer against a list of approved manufacturers.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show inventory\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        manufacturers: List[str]\n\"\"\"List of approved transceivers manufacturers\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        wrong_manufacturers = {\n            interface: value[\"mfgName\"] for interface, value in command_output[\"xcvrSlots\"].items() if value[\"mfgName\"] not in self.inputs.manufacturers\n        }\n        if not wrong_manufacturers:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Some transceivers are from unapproved manufacturers: {wrong_manufacturers}\")\n
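    As a sketch only, the approved manufacturers could be supplied as follows; the manufacturer string is a placeholder and must match the mfgName values reported by "show inventory" on the device.
    from anta.tests.hardware import VerifyTransceiversManufacturers

    # Placeholder manufacturer name; replace with the exact strings reported by "show inventory"
    xcvr_inputs = VerifyTransceiversManufacturers.Input(manufacturers=["Arista Networks"])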
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTransceiversManufacturers.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/hardware.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    manufacturers: List[str]\n\"\"\"List of approved transceivers manufacturers\"\"\"\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTransceiversManufacturers.Input.manufacturers","title":"manufacturers instance-attribute","text":"
    manufacturers: List[str]\n

    List of approved transceiver manufacturers

    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTransceiversTemperature","title":"VerifyTransceiversTemperature","text":"

    Bases: AntaTest

    This test verifies if all the transceivers are operating at an acceptable temperature.

    Expected Results
    • success: The test will pass if all transceiver statuses are OK: \u2018ok\u2019.
    • failure: The test will fail if any transceiver status is NOT OK.
    Source code in anta/tests/hardware.py
    class VerifyTransceiversTemperature(AntaTest):\n\"\"\"\n    This test verifies if all the transceivers are operating at an acceptable temperature.\n\n    Expected Results:\n          * success: The test will pass if all transceivers status are OK: 'ok'.\n          * failure: The test will fail if some transceivers are NOT OK.\n    \"\"\"\n\n    name = \"VerifyTransceiversTemperature\"\n    description = \"Verifies that all transceivers are operating within the acceptable temperature range.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment temperature transceiver\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        sensors = command_output[\"tempSensors\"] if \"tempSensors\" in command_output.keys() else \"\"\n        wrong_sensors = {\n            sensor[\"name\"]: {\n                \"hwStatus\": sensor[\"hwStatus\"],\n                \"alertCount\": sensor[\"alertCount\"],\n            }\n            for sensor in sensors\n            if sensor[\"hwStatus\"] != \"ok\" or sensor[\"alertCount\"] != 0\n        }\n        if not wrong_sensors:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following sensors are operating outside the acceptable temperature range or have raised alerts: {wrong_sensors}\")\n
    "},{"location":"api/tests.interfaces/","title":"Interfaces","text":""},{"location":"api/tests.interfaces/#anta-catalog-for-interfaces-tests","title":"ANTA catalog for interfaces tests","text":"

    Test functions related to the device interfaces

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyIPProxyARP","title":"VerifyIPProxyARP","text":"

    Bases: AntaTest

    Verifies if Proxy-ARP is enabled for the provided list of interface(s).

    Expected Results
    • success: The test will pass if Proxy-ARP is enabled on the specified interface(s).
    • failure: The test will fail if Proxy-ARP is disabled on the specified interface(s).
    Source code in anta/tests/interfaces.py
    class VerifyIPProxyARP(AntaTest):\n\"\"\"\n    Verifies if Proxy-ARP is enabled for the provided list of interface(s).\n\n    Expected Results:\n        * success: The test will pass if Proxy-ARP is enabled on the specified interface(s).\n        * failure: The test will fail if Proxy-ARP is disabled on the specified interface(s).\n    \"\"\"\n\n    name = \"VerifyIPProxyARP\"\n    description = \"Verifies if Proxy-ARP is enabled for the provided list of interface(s).\"\n    categories = [\"interfaces\"]\n    commands = [AntaTemplate(template=\"show ip interface {intf}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        interfaces: List[str]\n\"\"\"list of interfaces to be tested\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(intf=intf) for intf in self.inputs.interfaces]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        disabled_intf = []\n        for command in self.instance_commands:\n            if command.params and \"intf\" in command.params:\n                intf = command.params[\"intf\"]\n            if not command.json_output[\"interfaces\"][intf][\"proxyArp\"]:\n                disabled_intf.append(intf)\n        if disabled_intf:\n            self.result.is_failure(f\"The following interface(s) have Proxy-ARP disabled: {disabled_intf}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyIPProxyARP.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    interfaces: List[str]\n\"\"\"list of interfaces to be tested\"\"\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyIPProxyARP.Input.interfaces","title":"interfaces instance-attribute","text":"
    interfaces: List[str]\n

    list of interfaces to be tested

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyIllegalLACP","title":"VerifyIllegalLACP","text":"

    Bases: AntaTest

    Verifies that no illegal LACP packets have been received.

    Source code in anta/tests/interfaces.py
    class VerifyIllegalLACP(AntaTest):\n\"\"\"\n    Verifies there is no illegal LACP packets received.\n    \"\"\"\n\n    name = \"VerifyIllegalLACP\"\n    description = \"Verifies there is no illegal LACP packets received.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show lacp counters all-ports\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        po_with_illegal_lacp: list[dict[str, dict[str, int]]] = []\n        for portchannel, portchannel_dict in command_output[\"portChannels\"].items():\n            po_with_illegal_lacp.extend(\n                {portchannel: interface} for interface, interface_dict in portchannel_dict[\"interfaces\"].items() if interface_dict[\"illegalRxCount\"] != 0\n            )\n        if not po_with_illegal_lacp:\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"The following port-channels have recieved illegal lacp packets on the \" f\"following ports: {po_with_illegal_lacp}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfaceDiscards","title":"VerifyInterfaceDiscards","text":"

    Bases: AntaTest

    Verifies that interface packet discard counters are equal to zero.

    Source code in anta/tests/interfaces.py
    class VerifyInterfaceDiscards(AntaTest):\n\"\"\"\n    Verifies interfaces packet discard counters are equal to zero.\n    \"\"\"\n\n    name = \"VerifyInterfaceDiscards\"\n    description = \"Verifies interfaces packet discard counters are equal to zero.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces counters discards\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        wrong_interfaces: list[dict[str, dict[str, int]]] = []\n        for interface, outer_v in command_output[\"interfaces\"].items():\n            wrong_interfaces.extend({interface: outer_v} for counter, value in outer_v.items() if value > 0)\n        if not wrong_interfaces:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following interfaces have non 0 discard counter(s): {wrong_interfaces}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfaceErrDisabled","title":"VerifyInterfaceErrDisabled","text":"

    Bases: AntaTest

    Verifies that no interface is in the error-disabled state.

    Source code in anta/tests/interfaces.py
    class VerifyInterfaceErrDisabled(AntaTest):\n\"\"\"\n    Verifies there is no interface in error disable state.\n    \"\"\"\n\n    name = \"VerifyInterfaceErrDisabled\"\n    description = \"Verifies there is no interface in error disable state.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces status\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        errdisabled_interfaces = [interface for interface, value in command_output[\"interfaceStatuses\"].items() if value[\"linkStatus\"] == \"errdisabled\"]\n        if errdisabled_interfaces:\n            self.result.is_failure(f\"The following interfaces are in error disabled state: {errdisabled_interfaces}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfaceErrors","title":"VerifyInterfaceErrors","text":"

    Bases: AntaTest

    This test verifies that interface error counters are equal to zero.

    Expected Results
    • success: The test will pass if all interfaces have error counters equal to zero.
    • failure: The test will fail if one or more interfaces have non-zero error counters.
    Source code in anta/tests/interfaces.py
    class VerifyInterfaceErrors(AntaTest):\n\"\"\"\n    This test verifies that interfaces error counters are equal to zero.\n\n    Expected Results:\n        * success: The test will pass if all interfaces have error counters equal to zero.\n        * failure: The test will fail if one or more interfaces have non-zero error counters.\n    \"\"\"\n\n    name = \"VerifyInterfaceErrors\"\n    description = \"Verifies that interfaces error counters are equal to zero.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces counters errors\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        wrong_interfaces: list[dict[str, dict[str, int]]] = []\n        for interface, counters in command_output[\"interfaceErrorCounters\"].items():\n            if any(value > 0 for value in counters.values()) and all(interface not in wrong_interface for wrong_interface in wrong_interfaces):\n                wrong_interfaces.append({interface: counters})\n        if not wrong_interfaces:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following interface(s) have non-zero error counters: {wrong_interfaces}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfaceUtilization","title":"VerifyInterfaceUtilization","text":"

    Bases: AntaTest

    Verifies that interface utilization is below 75%.

    Source code in anta/tests/interfaces.py
    class VerifyInterfaceUtilization(AntaTest):\n\"\"\"\n    Verifies interfaces utilization is below 75%.\n    \"\"\"\n\n    name = \"VerifyInterfaceUtilization\"\n    description = \"Verifies interfaces utilization is below 75%.\"\n    categories = [\"interfaces\"]\n    # TODO - move from text to json if possible\n    commands = [AntaCommand(command=\"show interfaces counters rates\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n        wrong_interfaces = {}\n        for line in command_output.split(\"\\n\")[1:]:\n            if len(line) > 0:\n                if line.split()[-5] == \"-\" or line.split()[-2] == \"-\":\n                    pass\n                elif float(line.split()[-5].replace(\"%\", \"\")) > 75.0:\n                    wrong_interfaces[line.split()[0]] = line.split()[-5]\n                elif float(line.split()[-2].replace(\"%\", \"\")) > 75.0:\n                    wrong_interfaces[line.split()[0]] = line.split()[-2]\n        if not wrong_interfaces:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following interfaces have a usage > 75%: {wrong_interfaces}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfacesStatus","title":"VerifyInterfacesStatus","text":"

    Bases: AntaTest

    This test verifies that all interfaces in the provided list are in the expected state.

    Expected Results
    • success: The test will pass if the provided interfaces are all in the expected state.
    • failure: The test will fail if any interface is not in the expected state.
    Source code in anta/tests/interfaces.py
    class VerifyInterfacesStatus(AntaTest):\n\"\"\"\n    This test verifies if the provided list of interfaces are all in the expected state.\n\n    Expected Results:\n        * success: The test will pass if the provided interfaces are all in the expected state.\n        * failure: The test will fail if any interface is not in the expected state.\n    \"\"\"\n\n    name = \"VerifyInterfacesStatus\"\n    description = \"Verifies if the provided list of interfaces are all in the expected state.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces description\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        interfaces: List[InterfaceStatus]\n\"\"\"List of interfaces to validate with the expected state\"\"\"\n\n        class InterfaceStatus(BaseModel):  # pylint: disable=missing-class-docstring\n            interface: Interface\n            state: Literal[\"up\", \"adminDown\"]\n            protocol_status: Literal[\"up\", \"down\"] = \"up\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n\n        self.result.is_success()\n\n        intf_not_configured = []\n        intf_wrong_state = []\n\n        for interface_status in self.inputs.interfaces:\n            intf_status = get_value(command_output[\"interfaceDescriptions\"], interface_status.interface)\n            if intf_status is None:\n                intf_not_configured.append(interface_status.interface)\n                continue\n\n            proto = intf_status[\"lineProtocolStatus\"]\n            status = intf_status[\"interfaceStatus\"]\n\n            if interface_status.state == \"up\" and not (re.match(r\"connected|up\", proto) and re.match(r\"connected|up\", status)):\n                intf_wrong_state.append(f\"{interface_status.interface} is {proto}/{status} expected {interface_status.protocol_status}/{interface_status.state}\")\n            elif interface_status.state == \"adminDown\":\n                if interface_status.protocol_status == \"up\" and not (re.match(r\"up\", proto) and re.match(r\"adminDown\", status)):\n                    intf_wrong_state.append(f\"{interface_status.interface} is {proto}/{status} expected {interface_status.protocol_status}/{interface_status.state}\")\n                elif interface_status.protocol_status == \"down\" and not (re.match(r\"down\", proto) and re.match(r\"adminDown\", status)):\n                    intf_wrong_state.append(f\"{interface_status.interface} is {proto}/{status} expected {interface_status.protocol_status}/{interface_status.state}\")\n\n        if intf_not_configured:\n            self.result.is_failure(f\"The following interface(s) are not configured: {intf_not_configured}\")\n\n        if intf_wrong_state:\n            self.result.is_failure(f\"The following interface(s) are not in the expected state: {intf_wrong_state}\")\n
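    For illustration, the expected interface states could be declared as follows; a minimal sketch with placeholder interface names, assuming the anta package is installed.
    from anta.tests.interfaces import VerifyInterfacesStatus

    # Placeholder interfaces; protocol_status defaults to "up"
    status_inputs = VerifyInterfacesStatus.Input(
        interfaces=[
            {"interface": "Ethernet1", "state": "up"},
            {"interface": "Ethernet2", "state": "adminDown", "protocol_status": "down"},
        ]
    )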
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfacesStatus.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    interfaces: List[InterfaceStatus]\n\"\"\"List of interfaces to validate with the expected state\"\"\"\n\n    class InterfaceStatus(BaseModel):  # pylint: disable=missing-class-docstring\n        interface: Interface\n        state: Literal[\"up\", \"adminDown\"]\n        protocol_status: Literal[\"up\", \"down\"] = \"up\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfacesStatus.Input.interfaces","title":"interfaces instance-attribute","text":"
    interfaces: List[InterfaceStatus]\n

    List of interfaces to validate with the expected state

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU","title":"VerifyL2MTU","text":"

    Bases: AntaTest

    Verifies the global layer 2 Maximum Transmission Unit (MTU) for all L2 interfaces.

    Test that L2 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces. You can define a global MTU to check, an MTU per interface, and interfaces to ignore.

    Expected Results
    • success: The test will pass if all layer 2 interfaces have the proper MTU configured.
    • failure: The test will fail if one or more layer 2 interfaces have the wrong MTU configured.
    Source code in anta/tests/interfaces.py
    class VerifyL2MTU(AntaTest):\n\"\"\"\n    Verifies the global layer 2 Maximum Transfer Unit (MTU) for all L2 interfaces.\n\n    Test that L2 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces.\n    You can define a global MTU to check and also an MTU per interface and also ignored some interfaces.\n\n    Expected Results:\n        * success: The test will pass if all layer 2 interfaces have the proper MTU configured.\n        * failure: The test will fail if one or many layer 2 interfaces have the wrong MTU configured.\n    \"\"\"\n\n    name = \"VerifyL2MTU\"\n    description = \"Verifies the global layer 2 Maximum Transfer Unit (MTU) for all layer 2 interfaces.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        mtu: int = 9214\n\"\"\"Default MTU we should have configured on all non-excluded interfaces\"\"\"\n        ignored_interfaces: List[str] = [\"Management\", \"Loopback\", \"Vxlan\", \"Tunnel\"]\n\"\"\"A list of L2 interfaces to ignore\"\"\"\n        specific_mtu: List[Dict[str, int]] = []\n\"\"\"A list of dictionary of L2 interfaces with their specific MTU configured\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Parameter to save incorrect interface settings\n        wrong_l2mtu_intf: list[dict[str, int]] = []\n        command_output = self.instance_commands[0].json_output\n        # Set list of interfaces with specific settings\n        specific_interfaces: list[str] = []\n        if self.inputs.specific_mtu:\n            for d in self.inputs.specific_mtu:\n                specific_interfaces.extend(d)\n        for interface, values in command_output[\"interfaces\"].items():\n            if re.findall(r\"[a-z]+\", interface, re.IGNORECASE)[0] not in self.inputs.ignored_interfaces and values[\"forwardingModel\"] == \"bridged\":\n                if interface in specific_interfaces:\n                    wrong_l2mtu_intf.extend({interface: values[\"mtu\"]} for custom_data in self.inputs.specific_mtu if values[\"mtu\"] != custom_data[interface])\n                # Comparison with generic setting\n                elif values[\"mtu\"] != self.inputs.mtu:\n                    wrong_l2mtu_intf.append({interface: values[\"mtu\"]})\n        if wrong_l2mtu_intf:\n            self.result.is_failure(f\"Some L2 interfaces do not have correct MTU configured:\\n{wrong_l2mtu_intf}\")\n        else:\n            self.result.is_success()\n
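    As an illustration, the MTU expectations could be expressed like this; a minimal sketch in which the interface name and MTU values are placeholders.
    from anta.tests.interfaces import VerifyL2MTU

    # Check 9214 bytes globally and expect 1500 bytes on Ethernet10 only (placeholder interface)
    l2mtu_inputs = VerifyL2MTU.Input(mtu=9214, specific_mtu=[{"Ethernet10": 1500}])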
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    mtu: int = 9214\n\"\"\"Default MTU we should have configured on all non-excluded interfaces\"\"\"\n    ignored_interfaces: List[str] = [\"Management\", \"Loopback\", \"Vxlan\", \"Tunnel\"]\n\"\"\"A list of L2 interfaces to ignore\"\"\"\n    specific_mtu: List[Dict[str, int]] = []\n\"\"\"A list of dictionary of L2 interfaces with their specific MTU configured\"\"\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU.Input.ignored_interfaces","title":"ignored_interfaces class-attribute instance-attribute","text":"
    ignored_interfaces: List[str] = ['Management', 'Loopback', 'Vxlan', 'Tunnel']\n

    A list of L2 interfaces to ignore

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU.Input.mtu","title":"mtu class-attribute instance-attribute","text":"
    mtu: int = 9214\n

    Default MTU we should have configured on all non-excluded interfaces

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU.Input.specific_mtu","title":"specific_mtu class-attribute instance-attribute","text":"
    specific_mtu: List[Dict[str, int]] = []\n

    A list of dictionaries of L2 interfaces with their specific MTU configured

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU","title":"VerifyL3MTU","text":"

    Bases: AntaTest

    Verifies the global layer 3 Maximum Transmission Unit (MTU) for all L3 interfaces.

    Test that L3 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces. You can define a global MTU to check, an MTU per interface, and interfaces to ignore.

    Expected Results
    • success: The test will pass if all layer 3 interfaces have the proper MTU configured.
    • failure: The test will fail if one or more layer 3 interfaces have the wrong MTU configured.
    Source code in anta/tests/interfaces.py
    class VerifyL3MTU(AntaTest):\n\"\"\"\n    Verifies the global layer 3 Maximum Transfer Unit (MTU) for all L3 interfaces.\n\n    Test that L3 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces.\n    You can define a global MTU to check and also an MTU per interface and also ignored some interfaces.\n\n    Expected Results:\n        * success: The test will pass if all layer 3 interfaces have the proper MTU configured.\n        * failure: The test will fail if one or many layer 3 interfaces have the wrong MTU configured.\n    \"\"\"\n\n    name = \"VerifyL3MTU\"\n    description = \"Verifies the global layer 3 Maximum Transfer Unit (MTU) for all layer 3 interfaces.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        mtu: int = 1500\n\"\"\"Default MTU we should have configured on all non-excluded interfaces\"\"\"\n        ignored_interfaces: List[str] = [\"Management\", \"Loopback\", \"Vxlan\", \"Tunnel\"]\n\"\"\"A list of L3 interfaces to ignore\"\"\"\n        specific_mtu: List[Dict[str, int]] = []\n\"\"\"A list of dictionary of L3 interfaces with their specific MTU configured\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Parameter to save incorrect interface settings\n        wrong_l3mtu_intf: list[dict[str, int]] = []\n        command_output = self.instance_commands[0].json_output\n        # Set list of interfaces with specific settings\n        specific_interfaces: list[str] = []\n        if self.inputs.specific_mtu:\n            for d in self.inputs.specific_mtu:\n                specific_interfaces.extend(d)\n        for interface, values in command_output[\"interfaces\"].items():\n            if re.findall(r\"[a-z]+\", interface, re.IGNORECASE)[0] not in self.inputs.ignored_interfaces and values[\"forwardingModel\"] == \"routed\":\n                if interface in specific_interfaces:\n                    wrong_l3mtu_intf.extend({interface: values[\"mtu\"]} for custom_data in self.inputs.specific_mtu if values[\"mtu\"] != custom_data[interface])\n                # Comparison with generic setting\n                elif values[\"mtu\"] != self.inputs.mtu:\n                    wrong_l3mtu_intf.append({interface: values[\"mtu\"]})\n        if wrong_l3mtu_intf:\n            self.result.is_failure(f\"Some interfaces do not have correct MTU configured:\\n{wrong_l3mtu_intf}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    mtu: int = 1500\n\"\"\"Default MTU we should have configured on all non-excluded interfaces\"\"\"\n    ignored_interfaces: List[str] = [\"Management\", \"Loopback\", \"Vxlan\", \"Tunnel\"]\n\"\"\"A list of L3 interfaces to ignore\"\"\"\n    specific_mtu: List[Dict[str, int]] = []\n\"\"\"A list of dictionary of L3 interfaces with their specific MTU configured\"\"\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU.Input.ignored_interfaces","title":"ignored_interfaces class-attribute instance-attribute","text":"
    ignored_interfaces: List[str] = ['Management', 'Loopback', 'Vxlan', 'Tunnel']\n

    A list of L3 interfaces to ignore

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU.Input.mtu","title":"mtu class-attribute instance-attribute","text":"
    mtu: int = 1500\n

    Default MTU we should have configured on all non-excluded interfaces

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU.Input.specific_mtu","title":"specific_mtu class-attribute instance-attribute","text":"
    specific_mtu: List[Dict[str, int]] = []\n

A list of dictionaries of L3 interfaces with their specific configured MTU

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyLoopbackCount","title":"VerifyLoopbackCount","text":"

    Bases: AntaTest

Verifies that the number of loopback interfaces on the device matches the expected count and that none of the loopbacks are down.

    Source code in anta/tests/interfaces.py
    class VerifyLoopbackCount(AntaTest):\n\"\"\"\n    Verifies the number of loopback interfaces on the device is the one we expect and if none of the loopback is down.\n    \"\"\"\n\n    name = \"VerifyLoopbackCount\"\n    description = \"Verifies the number of loopback interfaces on the device is the one we expect and if none of the loopback is down.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show ip interface brief\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type: ignore\n\"\"\"Number of loopback interfaces expected to be present\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        loopback_count = 0\n        down_loopback_interfaces = []\n        for interface in command_output[\"interfaces\"]:\n            interface_dict = command_output[\"interfaces\"][interface]\n            if \"Loopback\" in interface:\n                loopback_count += 1\n                if not (interface_dict[\"lineProtocolStatus\"] == \"up\" and interface_dict[\"interfaceStatus\"] == \"connected\"):\n                    down_loopback_interfaces.append(interface)\n        if loopback_count == self.inputs.number and len(down_loopback_interfaces) == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure()\n            if loopback_count != self.inputs.number:\n                self.result.is_failure(f\"Found {loopback_count} Loopbacks when expecting {self.inputs.number}\")\n            elif len(down_loopback_interfaces) != 0:\n                self.result.is_failure(f\"The following Loopbacks are not up: {down_loopback_interfaces}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyLoopbackCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type: ignore\n\"\"\"Number of loopback interfaces expected to be present\"\"\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyLoopbackCount.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    Number of loopback interfaces expected to be present

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyPortChannels","title":"VerifyPortChannels","text":"

    Bases: AntaTest

Verifies there are no inactive ports in port channels.

    Source code in anta/tests/interfaces.py
    class VerifyPortChannels(AntaTest):\n\"\"\"\n    Verifies there is no inactive port in port channels.\n    \"\"\"\n\n    name = \"VerifyPortChannels\"\n    description = \"Verifies there is no inactive port in port channels.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show port-channel\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        po_with_invactive_ports: list[dict[str, str]] = []\n        for portchannel, portchannel_dict in command_output[\"portChannels\"].items():\n            if len(portchannel_dict[\"inactivePorts\"]) != 0:\n                po_with_invactive_ports.extend({portchannel: portchannel_dict[\"inactivePorts\"]})\n        if not po_with_invactive_ports:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following port-channels have inactive port(s): {po_with_invactive_ports}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifySVI","title":"VerifySVI","text":"

    Bases: AntaTest

Verifies that no VLAN interface (SVI) is down.

    Source code in anta/tests/interfaces.py
    class VerifySVI(AntaTest):\n\"\"\"\n    Verifies there is no interface vlan down.\n    \"\"\"\n\n    name = \"VerifySVI\"\n    description = \"Verifies there is no interface vlan down.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show ip interface brief\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        down_svis = []\n        for interface in command_output[\"interfaces\"]:\n            interface_dict = command_output[\"interfaces\"][interface]\n            if \"Vlan\" in interface:\n                if not (interface_dict[\"lineProtocolStatus\"] == \"up\" and interface_dict[\"interfaceStatus\"] == \"connected\"):\n                    down_svis.append(interface)\n        if len(down_svis) == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following SVIs are not up: {down_svis}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyStormControlDrops","title":"VerifyStormControlDrops","text":"

    Bases: AntaTest

Verifies the device did not drop packets due to its storm-control configuration.

    Source code in anta/tests/interfaces.py
    class VerifyStormControlDrops(AntaTest):\n\"\"\"\n    Verifies the device did not drop packets due its to storm-control configuration.\n    \"\"\"\n\n    name = \"VerifyStormControlDrops\"\n    description = \"Verifies the device did not drop packets due its to storm-control configuration.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show storm-control\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        storm_controlled_interfaces: dict[str, dict[str, Any]] = {}\n        for interface, interface_dict in command_output[\"interfaces\"].items():\n            for traffic_type, traffic_type_dict in interface_dict[\"trafficTypes\"].items():\n                if \"drop\" in traffic_type_dict and traffic_type_dict[\"drop\"] != 0:\n                    storm_controlled_interface_dict = storm_controlled_interfaces.setdefault(interface, {})\n                    storm_controlled_interface_dict.update({traffic_type: traffic_type_dict[\"drop\"]})\n        if not storm_controlled_interfaces:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following interfaces have none 0 storm-control drop counters {storm_controlled_interfaces}\")\n
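As an illustration, these interface tests can be referenced together in a catalog as sketched below; VerifyLoopbackCount takes the expected number of loopbacks as input, while VerifySVI, VerifyPortChannels and VerifyStormControlDrops take no input (the count is a placeholder):
anta.tests.interfaces:\n  - VerifyLoopbackCount:\n      number: 2  # placeholder count\n  - VerifySVI:\n  - VerifyPortChannels:\n  - VerifyStormControlDrops:\n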
    "},{"location":"api/tests.logging/","title":"Logging","text":""},{"location":"api/tests.logging/#anta-catalog-for-logging-tests","title":"ANTA catalog for logging tests","text":"

Test functions related to various EOS logging settings

NOTE: \u2018show logging\u2019 does not support JSON output yet

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingAccounting","title":"VerifyLoggingAccounting","text":"

    Bases: AntaTest

    Verifies if AAA accounting logs are generated.

    Expected Results
    • success: The test will pass if AAA accounting logs are generated.
    • failure: The test will fail if AAA accounting logs are NOT generated.
    Source code in anta/tests/logging.py
    class VerifyLoggingAccounting(AntaTest):\n\"\"\"\n    Verifies if AAA accounting logs are generated.\n\n    Expected Results:\n        * success: The test will pass if AAA accounting logs are generated.\n        * failure: The test will fail if AAA accounting logs are NOT generated.\n    \"\"\"\n\n    name = \"VerifyLoggingAccounting\"\n    description = \"Verifies if AAA accounting logs are generated.\"\n    categories = [\"logging\"]\n    commands = [AntaCommand(command=\"show aaa accounting logs | tail\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        pattern = r\"cmd=show aaa accounting logs\"\n        output = self.instance_commands[0].text_output\n        if re.search(pattern, output):\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"AAA accounting logs are not generated\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingErrors","title":"VerifyLoggingErrors","text":"

    Bases: AntaTest

    This test verifies there are no syslog messages with a severity of ERRORS or higher.

    Expected Results
    • success: The test will pass if there are NO syslog messages with a severity of ERRORS or higher.
    • failure: The test will fail if ERRORS or higher syslog messages are present.
    Source code in anta/tests/logging.py
    class VerifyLoggingErrors(AntaTest):\n\"\"\"\n    This test verifies there are no syslog messages with a severity of ERRORS or higher.\n\n    Expected Results:\n      * success: The test will pass if there are NO syslog messages with a severity of ERRORS or higher.\n      * failure: The test will fail if ERRORS or higher syslog messages are present.\n    \"\"\"\n\n    name = \"VerifyLoggingWarning\"\n    description = \"This test verifies there are no syslog messages with a severity of ERRORS or higher.\"\n    categories = [\"logging\"]\n    commands = [AntaCommand(command=\"show logging threshold errors\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n\"\"\"\n        Run VerifyLoggingWarning validation\n        \"\"\"\n        command_output = self.instance_commands[0].text_output\n\n        if len(command_output) == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"Device has reported syslog messages with a severity of ERRORS or higher\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingErrors.test","title":"test","text":"
    test() -> None\n

    Run VerifyLoggingWarning validation

    Source code in anta/tests/logging.py
    @AntaTest.anta_test\ndef test(self) -> None:\n\"\"\"\n    Run VerifyLoggingWarning validation\n    \"\"\"\n    command_output = self.instance_commands[0].text_output\n\n    if len(command_output) == 0:\n        self.result.is_success()\n    else:\n        self.result.is_failure(\"Device has reported syslog messages with a severity of ERRORS or higher\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHostname","title":"VerifyLoggingHostname","text":"

    Bases: AntaTest

    Verifies if logs are generated with the device FQDN.

    Expected Results
    • success: The test will pass if logs are generated with the device FQDN.
    • failure: The test will fail if logs are NOT generated with the device FQDN.
    Source code in anta/tests/logging.py
    class VerifyLoggingHostname(AntaTest):\n\"\"\"\n    Verifies if logs are generated with the device FQDN.\n\n    Expected Results:\n        * success: The test will pass if logs are generated with the device FQDN.\n        * failure: The test will fail if logs are NOT generated with the device FQDN.\n    \"\"\"\n\n    name = \"VerifyLoggingHostname\"\n    description = \"Verifies if logs are generated with the device FQDN.\"\n    categories = [\"logging\"]\n    commands = [\n        AntaCommand(command=\"show hostname\"),\n        AntaCommand(command=\"send log level informational message ANTA VerifyLoggingHostname validation\"),\n        AntaCommand(command=\"show logging informational last 30 seconds | grep ANTA\", ofmt=\"text\"),\n    ]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        output_hostname = self.instance_commands[0].json_output\n        output_logging = self.instance_commands[2].text_output\n        fqdn = output_hostname[\"fqdn\"]\n        lines = output_logging.strip().split(\"\\n\")[::-1]\n        log_pattern = r\"ANTA VerifyLoggingHostname validation\"\n        last_line_with_pattern = \"\"\n        for line in lines:\n            if re.search(log_pattern, line):\n                last_line_with_pattern = line\n                break\n        if fqdn in last_line_with_pattern:\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"Logs are not generated with the device FQDN\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHosts","title":"VerifyLoggingHosts","text":"

    Bases: AntaTest

    Verifies logging hosts (syslog servers) for a specified VRF.

    Expected Results
    • success: The test will pass if the provided syslog servers are configured in the specified VRF.
    • failure: The test will fail if the provided syslog servers are NOT configured in the specified VRF.
    Source code in anta/tests/logging.py
    class VerifyLoggingHosts(AntaTest):\n\"\"\"\n    Verifies logging hosts (syslog servers) for a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided syslog servers are configured in the specified VRF.\n        * failure: The test will fail if the provided syslog servers are NOT configured in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyLoggingHosts\"\n    description = \"Verifies logging hosts (syslog servers) for a specified VRF.\"\n    categories = [\"logging\"]\n    commands = [AntaCommand(command=\"show logging\", ofmt=\"text\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        hosts: List[IPv4Address]\n\"\"\"List of hosts (syslog servers) IP addresses\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF to transport log messages\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        output = self.instance_commands[0].text_output\n        not_configured = []\n        for host in self.inputs.hosts:\n            pattern = rf\"Logging to '{str(host)}'.*VRF {self.inputs.vrf}\"\n            if not re.search(pattern, _get_logging_states(self.logger, output)):\n                not_configured.append(str(host))\n\n        if not not_configured:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Syslog servers {not_configured} are not configured in VRF {self.inputs.vrf}\")\n
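A minimal catalog sketch for this test, with a placeholder syslog server address and VRF name:
anta.tests.logging:\n  - VerifyLoggingHosts:\n      hosts:\n        - 10.22.10.50  # placeholder syslog server\n      vrf: MGMT  # placeholder VRF\n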
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHosts.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/logging.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    hosts: List[IPv4Address]\n\"\"\"List of hosts (syslog servers) IP addresses\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF to transport log messages\"\"\"\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHosts.Input.hosts","title":"hosts instance-attribute","text":"
    hosts: List[IPv4Address]\n

    List of hosts (syslog servers) IP addresses

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHosts.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF to transport log messages

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingLogsGeneration","title":"VerifyLoggingLogsGeneration","text":"

    Bases: AntaTest

    Verifies if logs are generated.

    Expected Results
    • success: The test will pass if logs are generated.
    • failure: The test will fail if logs are NOT generated.
    Source code in anta/tests/logging.py
    class VerifyLoggingLogsGeneration(AntaTest):\n\"\"\"\n    Verifies if logs are generated.\n\n    Expected Results:\n        * success: The test will pass if logs are generated.\n        * failure: The test will fail if logs are NOT generated.\n    \"\"\"\n\n    name = \"VerifyLoggingLogsGeneration\"\n    description = \"Verifies if logs are generated.\"\n    categories = [\"logging\"]\n    commands = [\n        AntaCommand(command=\"send log level informational message ANTA VerifyLoggingLogsGeneration validation\"),\n        AntaCommand(command=\"show logging informational last 30 seconds | grep ANTA\", ofmt=\"text\"),\n    ]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        log_pattern = r\"ANTA VerifyLoggingLogsGeneration validation\"\n        output = self.instance_commands[1].text_output\n        lines = output.strip().split(\"\\n\")[::-1]\n        for line in lines:\n            if re.search(log_pattern, line):\n                self.result.is_success()\n                return\n        self.result.is_failure(\"Logs are not generated\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingPersistent","title":"VerifyLoggingPersistent","text":"

    Bases: AntaTest

    Verifies if logging persistent is enabled and logs are saved in flash.

    Expected Results
    • success: The test will pass if logging persistent is enabled and logs are in flash.
    • failure: The test will fail if logging persistent is disabled or no logs are saved in flash.
    Source code in anta/tests/logging.py
    class VerifyLoggingPersistent(AntaTest):\n\"\"\"\n    Verifies if logging persistent is enabled and logs are saved in flash.\n\n    Expected Results:\n        * success: The test will pass if logging persistent is enabled and logs are in flash.\n        * failure: The test will fail if logging persistent is disabled or no logs are saved in flash.\n    \"\"\"\n\n    name = \"VerifyLoggingPersistent\"\n    description = \"Verifies if logging persistent is enabled and logs are saved in flash.\"\n    categories = [\"logging\"]\n    commands = [\n        AntaCommand(command=\"show logging\", ofmt=\"text\"),\n        AntaCommand(command=\"dir flash:/persist/messages\", ofmt=\"text\"),\n    ]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n        log_output = self.instance_commands[0].text_output\n        dir_flash_output = self.instance_commands[1].text_output\n        if \"Persistent logging: disabled\" in _get_logging_states(self.logger, log_output):\n            self.result.is_failure(\"Persistent logging is disabled\")\n            return\n        pattern = r\"-rw-\\s+(\\d+)\"\n        persist_logs = re.search(pattern, dir_flash_output)\n        if not persist_logs or int(persist_logs.group(1)) == 0:\n            self.result.is_failure(\"No persistent logs are saved in flash\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingSourceIntf","title":"VerifyLoggingSourceIntf","text":"

    Bases: AntaTest

    Verifies logging source-interface for a specified VRF.

    Expected Results
    • success: The test will pass if the provided logging source-interface is configured in the specified VRF.
    • failure: The test will fail if the provided logging source-interface is NOT configured in the specified VRF.
    Source code in anta/tests/logging.py
    class VerifyLoggingSourceIntf(AntaTest):\n\"\"\"\n    Verifies logging source-interface for a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided logging source-interface is configured in the specified VRF.\n        * failure: The test will fail if the provided logging source-interface is NOT configured in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyLoggingSourceInt\"\n    description = \"Verifies logging source-interface for a specified VRF.\"\n    categories = [\"logging\"]\n    commands = [AntaCommand(command=\"show logging\", ofmt=\"text\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        interface: str\n\"\"\"Source-interface to use as source IP of log messages\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF to transport log messages\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        output = self.instance_commands[0].text_output\n        pattern = rf\"Logging source-interface '{self.inputs.interface}'.*VRF {self.inputs.vrf}\"\n        if re.search(pattern, _get_logging_states(self.logger, output)):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Source-interface '{self.inputs.interface}' is not configured in VRF {self.inputs.vrf}\")\n
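An illustrative catalog entry for this test, assuming a management interface and VRF that exist on your devices:
anta.tests.logging:\n  - VerifyLoggingSourceIntf:\n      interface: Management1  # placeholder interface\n      vrf: MGMT  # placeholder VRF\n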
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingSourceIntf.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/logging.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    interface: str\n\"\"\"Source-interface to use as source IP of log messages\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF to transport log messages\"\"\"\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingSourceIntf.Input.interface","title":"interface instance-attribute","text":"
    interface: str\n

    Source-interface to use as source IP of log messages

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingSourceIntf.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF to transport log messages

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingTimestamp","title":"VerifyLoggingTimestamp","text":"

    Bases: AntaTest

Verifies if logs are generated with the appropriate timestamp.

    Expected Results
• success: The test will pass if logs are generated with the appropriate timestamp.
• failure: The test will fail if logs are NOT generated with the appropriate timestamp.
    Source code in anta/tests/logging.py
    class VerifyLoggingTimestamp(AntaTest):\n\"\"\"\n    Verifies if logs are generated with the approprate timestamp.\n\n    Expected Results:\n        * success: The test will pass if logs are generated with the appropriated timestamp.\n        * failure: The test will fail if logs are NOT generated with the appropriated timestamp.\n    \"\"\"\n\n    name = \"VerifyLoggingTimestamp\"\n    description = \"Verifies if logs are generated with the appropriate timestamp.\"\n    categories = [\"logging\"]\n    commands = [\n        AntaCommand(command=\"send log level informational message ANTA VerifyLoggingTimestamp validation\"),\n        AntaCommand(command=\"show logging informational last 30 seconds | grep ANTA\", ofmt=\"text\"),\n    ]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        log_pattern = r\"ANTA VerifyLoggingTimestamp validation\"\n        timestamp_pattern = r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d{6}-\\d{2}:\\d{2}\"\n        output = self.instance_commands[1].text_output\n        lines = output.strip().split(\"\\n\")[::-1]\n        last_line_with_pattern = \"\"\n        for line in lines:\n            if re.search(log_pattern, line):\n                last_line_with_pattern = line\n                break\n        if re.search(timestamp_pattern, last_line_with_pattern):\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"Logs are not generated with the appropriate timestamp format\")\n
    "},{"location":"api/tests/","title":"Overview","text":""},{"location":"api/tests/#anta-tests-landing-page","title":"ANTA Tests landing page","text":"

This section describes all the available tests provided by the ANTA package.

    • AAA
    • Configuration
    • Connectivity
    • Field Notice
    • Hardware
    • Interfaces
    • Logging
    • MLAG
    • Multicast
    • Profiles
    • Routing Generic
    • Routing BGP
    • Routing OSPF
    • Security
    • SNMP
    • Software
    • STP
    • System
    • VXLAN

All these tests can be imported into a catalog and used by the ANTA CLI or in your own framework, as illustrated below.
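For example, a small catalog file mixing tests from several of these modules could look like the sketch below (all values are illustrative):
anta.tests.interfaces:\n  - VerifyL3MTU:\n      mtu: 1500  # illustrative MTU\nanta.tests.logging:\n  - VerifyLoggingAccounting:\nanta.tests.mlag:\n  - VerifyMlagStatus:\n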

    "},{"location":"api/tests.mlag/","title":"MLAG","text":""},{"location":"api/tests.mlag/#anta-catalog-for-mlag-tests","title":"ANTA catalog for mlag tests","text":"

    Test functions related to Multi-chassis Link Aggregation (MLAG)

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagConfigSanity","title":"VerifyMlagConfigSanity","text":"

    Bases: AntaTest

    This test verifies there are no MLAG config-sanity inconsistencies.

    Expected Results
    • success: The test will pass if there are NO MLAG config-sanity inconsistencies.
    • failure: The test will fail if there are MLAG config-sanity inconsistencies.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    • error: The test will give an error if \u2018mlagActive\u2019 is not found in the JSON response.
    Source code in anta/tests/mlag.py
    class VerifyMlagConfigSanity(AntaTest):\n\"\"\"\n    This test verifies there are no MLAG config-sanity inconsistencies.\n\n    Expected Results:\n        * success: The test will pass if there are NO MLAG config-sanity inconsistencies.\n        * failure: The test will fail if there are MLAG config-sanity inconsistencies.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n        * error: The test will give an error if 'mlagActive' is not found in the JSON response.\n    \"\"\"\n\n    name = \"VerifyMlagConfigSanity\"\n    description = \"This test verifies there are no MLAG config-sanity inconsistencies.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag config-sanity\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if (mlag_status := get_value(command_output, \"mlagActive\")) is None:\n            self.result.is_error(message=\"Incorrect JSON response - 'mlagActive' state was not found\")\n            return\n        if mlag_status is False:\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        keys_to_verify = [\"globalConfiguration\", \"interfaceConfiguration\"]\n        verified_output = {key: get_value(command_output, key) for key in keys_to_verify}\n        if not any(verified_output.values()):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"MLAG config-sanity returned inconsistencies: {verified_output}\")\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary","title":"VerifyMlagDualPrimary","text":"

    Bases: AntaTest

    This test verifies the dual-primary detection and its parameters of the MLAG configuration.

    Expected Results
    • success: The test will pass if the dual-primary detection is enabled and its parameters are configured properly.
    • failure: The test will fail if the dual-primary detection is NOT enabled or its parameters are NOT configured properly.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    Source code in anta/tests/mlag.py
    class VerifyMlagDualPrimary(AntaTest):\n\"\"\"\n    This test verifies the dual-primary detection and its parameters of the MLAG configuration.\n\n    Expected Results:\n        * success: The test will pass if the dual-primary detection is enabled and its parameters are configured properly.\n        * failure: The test will fail if the dual-primary detection is NOT enabled or its parameters are NOT configured properly.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n    \"\"\"\n\n    name = \"VerifyMlagDualPrimary\"\n    description = \"This test verifies the dual-primary detection and its parameters of the MLAG configuration.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag detail\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        detection_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay detection (seconds)\"\"\"\n        errdisabled: bool = False\n\"\"\"Errdisabled all interfaces when dual-primary is detected\"\"\"\n        recovery_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after dual-primary detection resolves until non peer-link ports that are part of an MLAG are enabled\"\"\"\n        recovery_delay_non_mlag: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after dual-primary detection resolves until ports that are not part of an MLAG are enabled\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        errdisabled_action = \"errdisableAllInterfaces\" if self.inputs.errdisabled else \"none\"\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"state\"] == \"disabled\":\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        if command_output[\"dualPrimaryDetectionState\"] == \"disabled\":\n            self.result.is_failure(\"Dual-primary detection is disabled\")\n            return\n        keys_to_verify = [\"detail.dualPrimaryDetectionDelay\", \"detail.dualPrimaryAction\", \"dualPrimaryMlagRecoveryDelay\", \"dualPrimaryNonMlagRecoveryDelay\"]\n        verified_output = {key: get_value(command_output, key) for key in keys_to_verify}\n        if (\n            verified_output[\"detail.dualPrimaryDetectionDelay\"] == self.inputs.detection_delay\n            and verified_output[\"detail.dualPrimaryAction\"] == errdisabled_action\n            and verified_output[\"dualPrimaryMlagRecoveryDelay\"] == self.inputs.recovery_delay\n            and verified_output[\"dualPrimaryNonMlagRecoveryDelay\"] == self.inputs.recovery_delay_non_mlag\n        ):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The dual-primary parameters are not configured properly: {verified_output}\")\n
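An illustrative catalog entry for this test; the delay values and errdisabled setting are placeholders to adapt to your design:
anta.tests.mlag:\n  - VerifyMlagDualPrimary:\n      detection_delay: 200  # placeholder\n      errdisabled: True\n      recovery_delay: 60  # placeholder\n      recovery_delay_non_mlag: 0  # placeholder\n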
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/mlag.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    detection_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay detection (seconds)\"\"\"\n    errdisabled: bool = False\n\"\"\"Errdisabled all interfaces when dual-primary is detected\"\"\"\n    recovery_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after dual-primary detection resolves until non peer-link ports that are part of an MLAG are enabled\"\"\"\n    recovery_delay_non_mlag: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after dual-primary detection resolves until ports that are not part of an MLAG are enabled\"\"\"\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input.detection_delay","title":"detection_delay instance-attribute","text":"
    detection_delay: conint(ge=0)\n

    Delay detection (seconds)

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input.errdisabled","title":"errdisabled class-attribute instance-attribute","text":"
    errdisabled: bool = False\n

    Errdisabled all interfaces when dual-primary is detected

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input.recovery_delay","title":"recovery_delay instance-attribute","text":"
    recovery_delay: conint(ge=0)\n

    Delay (seconds) after dual-primary detection resolves until non peer-link ports that are part of an MLAG are enabled

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input.recovery_delay_non_mlag","title":"recovery_delay_non_mlag instance-attribute","text":"
    recovery_delay_non_mlag: conint(ge=0)\n

    Delay (seconds) after dual-primary detection resolves until ports that are not part of an MLAG are enabled

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagInterfaces","title":"VerifyMlagInterfaces","text":"

    Bases: AntaTest

    This test verifies there are no inactive or active-partial MLAG ports.

    Expected Results
    • success: The test will pass if there are NO inactive or active-partial MLAG ports.
    • failure: The test will fail if there are inactive or active-partial MLAG ports.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    Source code in anta/tests/mlag.py
    class VerifyMlagInterfaces(AntaTest):\n\"\"\"\n    This test verifies there are no inactive or active-partial MLAG ports.\n\n    Expected Results:\n        * success: The test will pass if there are NO inactive or active-partial MLAG ports.\n        * failure: The test will fail if there are inactive or active-partial MLAG ports.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n    \"\"\"\n\n    name = \"VerifyMlagInterfaces\"\n    description = \"This test verifies there are no inactive or active-partial MLAG ports.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"state\"] == \"disabled\":\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        if command_output[\"mlagPorts\"][\"Inactive\"] == 0 and command_output[\"mlagPorts\"][\"Active-partial\"] == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"MLAG status is not OK: {command_output['mlagPorts']}\")\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagReloadDelay","title":"VerifyMlagReloadDelay","text":"

    Bases: AntaTest

    This test verifies the reload-delay parameters of the MLAG configuration.

    Expected Results
    • success: The test will pass if the reload-delay parameters are configured properly.
    • failure: The test will fail if the reload-delay parameters are NOT configured properly.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    Source code in anta/tests/mlag.py
    class VerifyMlagReloadDelay(AntaTest):\n\"\"\"\n    This test verifies the reload-delay parameters of the MLAG configuration.\n\n    Expected Results:\n        * success: The test will pass if the reload-delay parameters are configured properly.\n        * failure: The test will fail if the reload-delay parameters are NOT configured properly.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n    \"\"\"\n\n    name = \"VerifyMlagReloadDelay\"\n    description = \"This test verifies the reload-delay parameters of the MLAG configuration.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        reload_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after reboot until non peer-link ports that are part of an MLAG are enabled\"\"\"\n        reload_delay_non_mlag: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after reboot until ports that are not part of an MLAG are enabled\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"state\"] == \"disabled\":\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        keys_to_verify = [\"reloadDelay\", \"reloadDelayNonMlag\"]\n        verified_output = {key: get_value(command_output, key) for key in keys_to_verify}\n        if verified_output[\"reloadDelay\"] == self.inputs.reload_delay and verified_output[\"reloadDelayNonMlag\"] == self.inputs.reload_delay_non_mlag:\n            self.result.is_success()\n\n        else:\n            self.result.is_failure(f\"The reload-delay parameters are not configured properly: {verified_output}\")\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagReloadDelay.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/mlag.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    reload_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after reboot until non peer-link ports that are part of an MLAG are enabled\"\"\"\n    reload_delay_non_mlag: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after reboot until ports that are not part of an MLAG are enabled\"\"\"\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagReloadDelay.Input.reload_delay","title":"reload_delay instance-attribute","text":"
    reload_delay: conint(ge=0)\n

    Delay (seconds) after reboot until non peer-link ports that are part of an MLAG are enabled

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagReloadDelay.Input.reload_delay_non_mlag","title":"reload_delay_non_mlag instance-attribute","text":"
    reload_delay_non_mlag: conint(ge=0)\n

    Delay (seconds) after reboot until ports that are not part of an MLAG are enabled

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagStatus","title":"VerifyMlagStatus","text":"

    Bases: AntaTest

    This test verifies the health status of the MLAG configuration.

    Expected Results
    • success: The test will pass if the MLAG state is \u2018active\u2019, negotiation status is \u2018connected\u2019, peer-link status and local interface status are \u2018up\u2019.
    • failure: The test will fail if the MLAG state is not \u2018active\u2019, negotiation status is not \u2018connected\u2019, peer-link status or local interface status are not \u2018up\u2019.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    Source code in anta/tests/mlag.py
    class VerifyMlagStatus(AntaTest):\n\"\"\"\n    This test verifies the health status of the MLAG configuration.\n\n    Expected Results:\n        * success: The test will pass if the MLAG state is 'active', negotiation status is 'connected',\n                   peer-link status and local interface status are 'up'.\n        * failure: The test will fail if the MLAG state is not 'active', negotiation status is not 'connected',\n                   peer-link status or local interface status are not 'up'.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n    \"\"\"\n\n    name = \"VerifyMlagStatus\"\n    description = \"This test verifies the health status of the MLAG configuration.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"state\"] == \"disabled\":\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        keys_to_verify = [\"state\", \"negStatus\", \"localIntfStatus\", \"peerLinkStatus\"]\n        verified_output = {key: get_value(command_output, key) for key in keys_to_verify}\n        if (\n            verified_output[\"state\"] == \"active\"\n            and verified_output[\"negStatus\"] == \"connected\"\n            and verified_output[\"localIntfStatus\"] == \"up\"\n            and verified_output[\"peerLinkStatus\"] == \"up\"\n        ):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"MLAG status is not OK: {verified_output}\")\n
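As a quick reference, the MLAG tests that take no thresholds can simply be listed in a catalog, while VerifyMlagReloadDelay expects the configured delays (the values below are placeholders):
anta.tests.mlag:\n  - VerifyMlagStatus:\n  - VerifyMlagInterfaces:\n  - VerifyMlagConfigSanity:\n  - VerifyMlagReloadDelay:\n      reload_delay: 300  # placeholder\n      reload_delay_non_mlag: 330  # placeholder\n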
    "},{"location":"api/tests.multicast/","title":"Multicast","text":""},{"location":"api/tests.multicast/#anta-catalog-for-multicast-tests","title":"ANTA catalog for multicast tests","text":"

    Test functions related to multicast

    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingGlobal","title":"VerifyIGMPSnoopingGlobal","text":"

    Bases: AntaTest

    Verifies the IGMP snooping global configuration.

    Source code in anta/tests/multicast.py
    class VerifyIGMPSnoopingGlobal(AntaTest):\n\"\"\"\n    Verifies the IGMP snooping global configuration.\n    \"\"\"\n\n    name = \"VerifyIGMPSnoopingGlobal\"\n    description = \"Verifies the IGMP snooping global configuration.\"\n    categories = [\"multicast\", \"igmp\"]\n    commands = [AntaCommand(command=\"show ip igmp snooping\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        enabled: bool\n\"\"\"Expected global IGMP snooping configuration (True=enabled, False=disabled)\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        self.result.is_success()\n        igmp_state = command_output[\"igmpSnoopingState\"]\n        if igmp_state != \"enabled\" if self.inputs.enabled else igmp_state != \"disabled\":\n            self.result.is_failure(f\"IGMP state is not valid: {igmp_state}\")\n
    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingGlobal.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/multicast.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    enabled: bool\n\"\"\"Expected global IGMP snooping configuration (True=enabled, False=disabled)\"\"\"\n
    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingGlobal.Input.enabled","title":"enabled instance-attribute","text":"
    enabled: bool\n

    Expected global IGMP snooping configuration (True=enabled, False=disabled)

    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingVlans","title":"VerifyIGMPSnoopingVlans","text":"

    Bases: AntaTest

    Verifies the IGMP snooping configuration for some VLANs.

    Source code in anta/tests/multicast.py
    class VerifyIGMPSnoopingVlans(AntaTest):\n\"\"\"\n    Verifies the IGMP snooping configuration for some VLANs.\n    \"\"\"\n\n    name = \"VerifyIGMPSnoopingVlans\"\n    description = \"Verifies the IGMP snooping configuration for some VLANs.\"\n    categories = [\"multicast\", \"igmp\"]\n    commands = [AntaCommand(command=\"show ip igmp snooping\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vlans: Dict[Vlan, bool]\n\"\"\"Dictionary of VLANs with associated IGMP configuration status (True=enabled, False=disabled)\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        self.result.is_success()\n        for vlan, enabled in self.inputs.vlans.items():\n            if str(vlan) not in command_output[\"vlans\"]:\n                self.result.is_failure(f\"Supplied vlan {vlan} is not present on the device.\")\n                continue\n\n            igmp_state = command_output[\"vlans\"][str(vlan)][\"igmpSnoopingState\"]\n            if igmp_state != \"enabled\" if enabled else igmp_state != \"disabled\":\n                self.result.is_failure(f\"IGMP state for vlan {vlan} is {igmp_state}\")\n
    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingVlans.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/multicast.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vlans: Dict[Vlan, bool]\n\"\"\"Dictionary of VLANs with associated IGMP configuration status (True=enabled, False=disabled)\"\"\"\n
    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingVlans.Input.vlans","title":"vlans instance-attribute","text":"
    vlans: Dict[Vlan, bool]\n

    Dictionary of VLANs with associated IGMP configuration status (True=enabled, False=disabled)

    "},{"location":"api/tests.profiles/","title":"Profiles","text":""},{"location":"api/tests.profiles/#anta-catalog-for-profiles-tests","title":"ANTA catalog for profiles tests","text":"

    Test functions related to ASIC profiles

    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyTcamProfile","title":"VerifyTcamProfile","text":"

    Bases: AntaTest

    Verifies the device is using the configured TCAM profile.

    Source code in anta/tests/profiles.py
    class VerifyTcamProfile(AntaTest):\n\"\"\"\n    Verifies the device is using the configured TCAM profile.\n    \"\"\"\n\n    name = \"VerifyTcamProfile\"\n    description = \"Verify that the assigned TCAM profile is actually running on the device\"\n    categories = [\"profiles\"]\n    commands = [AntaCommand(command=\"show hardware tcam profile\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        profile: str\n\"\"\"Expected TCAM profile\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"pmfProfiles\"][\"FixedSystem\"][\"status\"] == command_output[\"pmfProfiles\"][\"FixedSystem\"][\"config\"] == self.inputs.profile:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Incorrect profile running on device: {command_output['pmfProfiles']['FixedSystem']['status']}\")\n
    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyTcamProfile.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/profiles.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    profile: str\n\"\"\"Expected TCAM profile\"\"\"\n
    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyTcamProfile.Input.profile","title":"profile instance-attribute","text":"
    profile: str\n

    Expected TCAM profile

    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyUnifiedForwardingTableMode","title":"VerifyUnifiedForwardingTableMode","text":"

    Bases: AntaTest

    Verifies the device is using the expected Unified Forwarding Table mode.

    Source code in anta/tests/profiles.py
    class VerifyUnifiedForwardingTableMode(AntaTest):\n\"\"\"\n    Verifies the device is using the expected Unified Forwarding Table mode.\n    \"\"\"\n\n    name = \"VerifyUnifiedForwardingTableMode\"\n    description = \"\"\n    categories = [\"profiles\"]\n    commands = [AntaCommand(command=\"show platform trident forwarding-table partition\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        mode: Literal[0, 1, 2, 3, 4, \"flexible\"]\n\"\"\"Expected UFT mode\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"uftMode\"] == str(self.inputs.mode):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device is not running correct UFT mode (expected: {self.inputs.mode} / running: {command_output['uftMode']})\")\n
    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyUnifiedForwardingTableMode.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/profiles.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    mode: Literal[0, 1, 2, 3, 4, \"flexible\"]\n\"\"\"Expected UFT mode\"\"\"\n
    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyUnifiedForwardingTableMode.Input.mode","title":"mode instance-attribute","text":"
    mode: Literal[0, 1, 2, 3, 4, 'flexible']\n

    Expected UFT mode

    "},{"location":"api/tests.routing.bgp/","title":"BGP","text":""},{"location":"api/tests.routing.bgp/#anta-catalog-for-bgp-tests","title":"ANTA catalog for BGP tests","text":"

    Deprecation Notice

    As part of our ongoing effort to improve the ANTA catalog and align it with best practices, we are announcing the deprecation of certain BGP tests along with a specific decorator. These will be removed in a future major release of ANTA.

    What is being deprecated?

    • Tests: The following BGP tests in the ANTA catalog are marked for deprecation.
    anta.tests.routing:\nbgp:\n- VerifyBGPIPv4UnicastState:\n- VerifyBGPIPv4UnicastCount:\n- VerifyBGPIPv6UnicastState:\n- VerifyBGPEVPNState:\n- VerifyBGPEVPNCount:\n- VerifyBGPRTCState:\n- VerifyBGPRTCCount:\n
    • Decorator: The check_bgp_family_enable decorator is also being deprecated as it is no longer needed with the new refactored BGP tests.

    What should you do?

We strongly recommend transitioning to the new set of BGP tests that have been introduced to replace the deprecated ones. Please refer to each test's documentation further down this page.

    anta.tests.routing:\nbgp:\n- VerifyBGPPeerCount:\n- VerifyBGPPeersHealth:\n- VerifyBGPSpecificPeers:\n

    BGP test functions

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPEVPNCount","title":"VerifyBGPEVPNCount","text":"

    Bases: AntaTest

Verifies that all EVPN BGP sessions are established and that the number of BGP EVPN neighbors matches the expected value (default VRF).

    • self.result = \u201csuccess\u201d if all EVPN BGP sessions are Established and if the actual number of BGP EVPN neighbors is the one we expect.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPEVPNCount(AntaTest):\n\"\"\"\n    Verifies all EVPN BGP sessions are established (default VRF)\n    and the actual number of BGP EVPN neighbors is the one we expect (default VRF).\n\n    * self.result = \"success\" if all EVPN BGP sessions are Established and if the actual\n                         number of BGP EVPN neighbors is the one we expect.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPEVPNCount\"\n    description = \"Verifies all EVPN BGP sessions are established (default VRF) and the actual number of BGP EVPN neighbors is the one we expect (default VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp evpn summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: int\n\"\"\"The expected number of BGP EVPN neighbors in the default VRF\"\"\"\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeerCount\", \"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"evpn\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        peers = command_output[\"vrfs\"][\"default\"][\"peers\"]\n        non_established_peers = [peer for peer, peer_dict in peers.items() if peer_dict[\"peerState\"] != \"Established\"]\n        if not non_established_peers and len(peers) == self.inputs.number:\n            self.result.is_success()\n        else:\n            self.result.is_failure()\n            if len(peers) != self.inputs.number:\n                self.result.is_failure(f\"Expecting {self.inputs.number} BGP EVPN peers and got {len(peers)}\")\n            if non_established_peers:\n                self.result.is_failure(f\"The following EVPN peers are not established: {non_established_peers}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPEVPNCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: int\n\"\"\"The expected number of BGP EVPN neighbors in the default VRF\"\"\"\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPEVPNCount.Input.number","title":"number instance-attribute","text":"
    number: int\n

    The expected number of BGP EVPN neighbors in the default VRF

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPEVPNState","title":"VerifyBGPEVPNState","text":"

    Bases: AntaTest

    Verifies all EVPN BGP sessions are established (default VRF).

    • self.result = \u201cskipped\u201d if no BGP EVPN peers are returned by the device
    • self.result = \u201csuccess\u201d if all EVPN BGP sessions are established.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPEVPNState(AntaTest):\n\"\"\"\n    Verifies all EVPN BGP sessions are established (default VRF).\n\n    * self.result = \"skipped\" if no BGP EVPN peers are returned by the device\n    * self.result = \"success\" if all EVPN BGP sessions are established.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPEVPNState\"\n    description = \"Verifies all EVPN BGP sessions are established (default VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp evpn summary\")]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"evpn\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        bgp_vrfs = command_output[\"vrfs\"]\n        peers = bgp_vrfs[\"default\"][\"peers\"]\n        non_established_peers = [peer for peer, peer_dict in peers.items() if peer_dict[\"peerState\"] != \"Established\"]\n        if not non_established_peers:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following EVPN peers are not established: {non_established_peers}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv4UnicastCount","title":"VerifyBGPIPv4UnicastCount","text":"

    Bases: AntaTest

Verifies that all IPv4 unicast BGP sessions are established, that all BGP message queues for these sessions are empty, and that the number of BGP IPv4 unicast neighbors matches the expected count in all VRFs specified as input.

• self.result = \u201csuccess\u201d if all IPv4 unicast BGP sessions are established, all BGP message queues for these sessions are empty, and the actual number of BGP IPv4 unicast neighbors matches the expected count in all VRFs specified as input.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPIPv4UnicastCount(AntaTest):\n\"\"\"\n    Verifies all IPv4 unicast BGP sessions are established\n    and all BGP messages queues for these sessions are empty\n    and the actual number of BGP IPv4 unicast neighbors is the one we expect\n    in all VRFs specified as input.\n\n    * self.result = \"success\" if all IPv4 unicast BGP sessions are established\n                         and if all BGP messages queues for these sessions are empty\n                         and if the actual number of BGP IPv4 unicast neighbors is equal to `number\n                         in all VRFs specified as input.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPIPv4UnicastCount\"\n    description = (\n        \"Verifies all IPv4 unicast BGP sessions are established and all their BGP messages queues are empty and \"\n        \" the actual number of BGP IPv4 unicast neighbors is the one we expect.\"\n    )\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaTemplate(template=\"show bgp ipv4 unicast summary vrf {vrf}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vrfs: Dict[str, int]\n\"\"\"VRFs associated with neighbors count to verify\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(vrf=vrf) for vrf in self.inputs.vrfs]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeerCount\", \"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"ipv4\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n        for command in self.instance_commands:\n            if command.params and \"vrf\" in command.params:\n                vrf = command.params[\"vrf\"]\n                count = self.inputs.vrfs[vrf]\n                if vrf not in command.json_output[\"vrfs\"]:\n                    self.result.is_failure(f\"VRF {vrf} is not configured\")\n                    return\n                peers = command.json_output[\"vrfs\"][vrf][\"peers\"]\n                state_issue = _check_bgp_vrfs(command.json_output[\"vrfs\"])\n                if len(peers) != count:\n                    self.result.is_failure(f\"Expecting {count} BGP peer(s) in vrf {vrf} but got {len(peers)} peer(s)\")\n                if state_issue:\n                    self.result.is_failure(f\"The following IPv4 peer(s) are not established: {state_issue}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv4UnicastCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vrfs: Dict[str, int]\n\"\"\"VRFs associated with neighbors count to verify\"\"\"\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv4UnicastCount.Input.vrfs","title":"vrfs instance-attribute","text":"
    vrfs: Dict[str, int]\n

    VRFs associated with neighbors count to verify

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv4UnicastState","title":"VerifyBGPIPv4UnicastState","text":"

    Bases: AntaTest

Verifies all IPv4 unicast BGP sessions are established (for all VRFs) and all BGP message queues for these sessions are empty (for all VRFs).

    • self.result = \u201cskipped\u201d if no BGP VRFs are returned by the device
    • self.result = \u201csuccess\u201d if all IPv4 unicast BGP sessions are established (for all VRFs) and all BGP message queues for these sessions are empty (for all VRFs).
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPIPv4UnicastState(AntaTest):\n\"\"\"\n    Verifies all IPv4 unicast BGP sessions are established (for all VRF)\n    and all BGP messages queues for these sessions are empty (for all VRF).\n\n    * self.result = \"skipped\" if no BGP vrf are returned by the device\n    * self.result = \"success\" if all IPv4 unicast BGP sessions are established (for all VRF)\n                         and all BGP messages queues for these sessions are empty (for all VRF).\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPIPv4UnicastState\"\n    description = \"Verifies all IPv4 unicast BGP sessions are established (for all VRF) and all BGP messages queues for these sessions are empty (for all VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp ipv4 unicast summary vrf all\")]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"ipv4\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        state_issue = _check_bgp_vrfs(command_output[\"vrfs\"])\n        if not state_issue:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Some IPv4 Unicast BGP Peer are not up: {state_issue}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv6UnicastState","title":"VerifyBGPIPv6UnicastState","text":"

    Bases: AntaTest

    Verifies all IPv6 unicast BGP sessions are established (for all VRFs) and all BGP message queues for these sessions are empty (for all VRFs).

    • self.result = \u201cskipped\u201d if no BGP VRFs are returned by the device
    • self.result = \u201csuccess\u201d if all IPv6 unicast BGP sessions are established (for all VRFs) and all BGP message queues for these sessions are empty (for all VRFs).
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPIPv6UnicastState(AntaTest):\n\"\"\"\n    Verifies all IPv6 unicast BGP sessions are established (for all VRF)\n    and all BGP messages queues for these sessions are empty (for all VRF).\n\n    * self.result = \"skipped\" if no BGP vrf are returned by the device\n    * self.result = \"success\" if all IPv6 unicast BGP sessions are established (for all VRF)\n                         and all BGP messages queues for these sessions are empty (for all VRF).\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPIPv6UnicastState\"\n    description = \"Verifies all IPv6 unicast BGP sessions are established (for all VRF) and all BGP messages queues for these sessions are empty (for all VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp ipv6 unicast summary vrf all\")]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"ipv6\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        state_issue = _check_bgp_vrfs(command_output[\"vrfs\"])\n        if not state_issue:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Some IPv6 Unicast BGP Peer are not up: {state_issue}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount","title":"VerifyBGPPeerCount","text":"

    Bases: AntaTest

    This test verifies the count of BGP peers for a given address family.

    It supports multiple types of address families (AFI) and subsequent address families (SAFI). Please refer to the Input class attributes below for details.

    Expected Results
    • success: If the count of BGP peers matches the expected count for each address family and VRF.
    • failure: If the count of BGP peers does not match the expected count, or if BGP is not configured for an expected VRF or address family.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPPeerCount(AntaTest):\n\"\"\"\n    This test verifies the count of BGP peers for a given address family.\n\n    It supports multiple types of address families (AFI) and subsequent service families (SAFI).\n    Please refer to the Input class attributes below for details.\n\n    Expected Results:\n        * success: If the count of BGP peers matches the expected count for each address family and VRF.\n        * failure: If the count of BGP peers does not match the expected count, or if BGP is not configured for an expected VRF or address family.\n    \"\"\"\n\n    name = \"VerifyBGPPeerCount\"\n    description = \"Verifies the count of BGP peers.\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [\n        AntaTemplate(template=\"show bgp {afi} {safi} summary vrf {vrf}\"),\n        AntaTemplate(template=\"show bgp {afi} summary\"),\n    ]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        address_families: List[BgpAfi]\n\"\"\"\n        List of BGP address families (BgpAfi)\n        \"\"\"\n\n        class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n            afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n            safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n            If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n            \"\"\"\n            vrf: str = \"default\"\n\"\"\"\n            Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n            If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n            \"\"\"\n            num_peers: PositiveInt\n\"\"\"Number of expected BGP peer(s)\"\"\"\n\n            @model_validator(mode=\"after\")\n            def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n                Validate the inputs provided to the BgpAfi class.\n\n                If afi is either ipv4 or ipv6, safi must be provided.\n\n                If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n                \"\"\"\n                if self.afi in [\"ipv4\", \"ipv6\"]:\n                    if self.safi is None:\n                        raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n                elif self.safi is not None:\n                    raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n                elif self.vrf != \"default\":\n                    raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n                return self\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        commands = []\n        for afi in self.inputs.address_families:\n            if template == VerifyBGPPeerCount.commands[0] and afi.afi in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, safi=afi.safi, vrf=afi.vrf, num_peers=afi.num_peers))\n            elif template == VerifyBGPPeerCount.commands[1] and afi.afi not in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, vrf=afi.vrf, num_peers=afi.num_peers))\n        return commands\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n\n        failures: dict[tuple[str, Any], dict[str, Any]] = {}\n\n        for command in self.instance_commands:\n            if command.params:\n                peer_count = 0\n                command_output = command.json_output\n\n                
afi = cast(Afi, command.params.get(\"afi\"))\n                safi = cast(Optional[Safi], command.params.get(\"safi\"))\n                afi_vrf = cast(str, command.params.get(\"vrf\"))\n                num_peers = cast(PositiveInt, command.params.get(\"num_peers\"))\n\n                if not (vrfs := command_output.get(\"vrfs\")):\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=\"Not Configured\")\n                    continue\n\n                if afi_vrf == \"all\":\n                    for vrf_data in vrfs.values():\n                        peer_count += len(vrf_data[\"peers\"])\n                else:\n                    peer_count += len(command_output[\"vrfs\"][afi_vrf][\"peers\"])\n\n                if peer_count != num_peers:\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=f\"Expected: {num_peers}, Actual: {peer_count}\")\n\n        if failures:\n            self.result.is_failure(f\"Failures: {list(failures.values())}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    address_families: List[BgpAfi]\n\"\"\"\n    List of BGP address families (BgpAfi)\n    \"\"\"\n\n    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n        afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n        safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n        If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n        \"\"\"\n        vrf: str = \"default\"\n\"\"\"\n        Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n        If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n        \"\"\"\n        num_peers: PositiveInt\n\"\"\"Number of expected BGP peer(s)\"\"\"\n\n        @model_validator(mode=\"after\")\n        def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n            Validate the inputs provided to the BgpAfi class.\n\n            If afi is either ipv4 or ipv6, safi must be provided.\n\n            If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n            \"\"\"\n            if self.afi in [\"ipv4\", \"ipv6\"]:\n                if self.safi is None:\n                    raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n            elif self.safi is not None:\n                raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n            elif self.vrf != \"default\":\n                raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n            return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.address_families","title":"address_families instance-attribute","text":"
    address_families: List[BgpAfi]\n

    List of BGP address families (BgpAfi)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi","title":"BgpAfi","text":"

    Bases: BaseModel

    Source code in anta/tests/routing/bgp.py
    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n    afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n    safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n    If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n    \"\"\"\n    vrf: str = \"default\"\n\"\"\"\n    Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n    If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n    \"\"\"\n    num_peers: PositiveInt\n\"\"\"Number of expected BGP peer(s)\"\"\"\n\n    @model_validator(mode=\"after\")\n    def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n        Validate the inputs provided to the BgpAfi class.\n\n        If afi is either ipv4 or ipv6, safi must be provided.\n\n        If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n        \"\"\"\n        if self.afi in [\"ipv4\", \"ipv6\"]:\n            if self.safi is None:\n                raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n        elif self.safi is not None:\n            raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n        elif self.vrf != \"default\":\n            raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n        return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.afi","title":"afi instance-attribute","text":"
    afi: Afi\n

    BGP address family (AFI)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.num_peers","title":"num_peers instance-attribute","text":"
    num_peers: PositiveInt\n

    Number of expected BGP peer(s)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.safi","title":"safi class-attribute instance-attribute","text":"
    safi: Optional[Safi] = None\n

    Optional BGP subsequent address family (SAFI).

    If the input afi is ipv4 or ipv6, a valid safi must be provided.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    Optional VRF for IPv4 and IPv6. If not provided, it defaults to default.

    If the input afi is not ipv4 or ipv6, e.g. evpn, vrf must be default.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.validate_inputs","title":"validate_inputs","text":"
    validate_inputs() -> BaseModel\n

    Validate the inputs provided to the BgpAfi class.

    If afi is either ipv4 or ipv6, safi must be provided.

    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.

    Source code in anta/tests/routing/bgp.py
    @model_validator(mode=\"after\")\ndef validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n    Validate the inputs provided to the BgpAfi class.\n\n    If afi is either ipv4 or ipv6, safi must be provided.\n\n    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n    \"\"\"\n    if self.afi in [\"ipv4\", \"ipv6\"]:\n        if self.safi is None:\n            raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n    elif self.safi is not None:\n        raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n    elif self.vrf != \"default\":\n        raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n    return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth","title":"VerifyBGPPeersHealth","text":"

    Bases: AntaTest

    This test verifies the health of BGP peers.

    It will validate that all BGP sessions are established and all message queues for these BGP sessions are empty for a given address family.

    It supports multiple types of address families (AFI) and subsequent address families (SAFI). Please refer to the Input class attributes below for details.

    Expected Results
    • success: If all BGP sessions are established and all message queues are empty for each address family and VRF.
    • failure: If there are issues with any of the BGP sessions, or if BGP is not configured for an expected VRF or address family.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPPeersHealth(AntaTest):\n\"\"\"\n    This test verifies the health of BGP peers.\n\n    It will validate that all BGP sessions are established and all message queues for these BGP sessions are empty for a given address family.\n\n    It supports multiple types of address families (AFI) and subsequent service families (SAFI).\n    Please refer to the Input class attributes below for details.\n\n    Expected Results:\n        * success: If all BGP sessions are established and all messages queues are empty for each address family and VRF.\n        * failure: If there are issues with any of the BGP sessions, or if BGP is not configured for an expected VRF or address family.\n    \"\"\"\n\n    name = \"VerifyBGPPeersHealth\"\n    description = \"Verifies the health of BGP peers\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [\n        AntaTemplate(template=\"show bgp {afi} {safi} summary vrf {vrf}\"),\n        AntaTemplate(template=\"show bgp {afi} summary\"),\n    ]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        address_families: List[BgpAfi]\n\"\"\"\n        List of BGP address families (BgpAfi)\n        \"\"\"\n\n        class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n            afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n            safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n            If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n            \"\"\"\n            vrf: str = \"default\"\n\"\"\"\n            Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n            If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n            \"\"\"\n\n            @model_validator(mode=\"after\")\n            def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n                Validate the inputs provided to the BgpAfi class.\n\n                If afi is either ipv4 or ipv6, safi must be provided.\n\n                If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n                \"\"\"\n                if self.afi in [\"ipv4\", \"ipv6\"]:\n                    if self.safi is None:\n                        raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n                elif self.safi is not None:\n                    raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n                elif self.vrf != \"default\":\n                    raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n                return self\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        commands = []\n        for afi in self.inputs.address_families:\n            if template == VerifyBGPPeersHealth.commands[0] and afi.afi in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, safi=afi.safi, vrf=afi.vrf))\n            elif template == VerifyBGPPeersHealth.commands[1] and afi.afi not in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, vrf=afi.vrf))\n        return commands\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n\n        failures: dict[tuple[str, Any], dict[str, Any]] = {}\n\n        for command in self.instance_commands:\n            if command.params:\n                command_output = command.json_output\n\n                afi = cast(Afi, 
command.params.get(\"afi\"))\n                safi = cast(Optional[Safi], command.params.get(\"safi\"))\n                afi_vrf = cast(str, command.params.get(\"vrf\"))\n\n                if not (vrfs := command_output.get(\"vrfs\")):\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=\"Not Configured\")\n                    continue\n\n                for vrf, vrf_data in vrfs.items():\n                    if not (peers := vrf_data.get(\"peers\")):\n                        _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=\"No Peers\")\n                        continue\n\n                    peer_issues = {}\n                    for peer, peer_data in peers.items():\n                        issues = _check_peer_issues(peer_data)\n\n                        if issues:\n                            peer_issues[peer] = issues\n\n                    if peer_issues:\n                        _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=vrf, issue=peer_issues)\n\n        if failures:\n            self.result.is_failure(f\"Failures: {list(failures.values())}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    address_families: List[BgpAfi]\n\"\"\"\n    List of BGP address families (BgpAfi)\n    \"\"\"\n\n    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n        afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n        safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n        If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n        \"\"\"\n        vrf: str = \"default\"\n\"\"\"\n        Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n        If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n        \"\"\"\n\n        @model_validator(mode=\"after\")\n        def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n            Validate the inputs provided to the BgpAfi class.\n\n            If afi is either ipv4 or ipv6, safi must be provided.\n\n            If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n            \"\"\"\n            if self.afi in [\"ipv4\", \"ipv6\"]:\n                if self.safi is None:\n                    raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n            elif self.safi is not None:\n                raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n            elif self.vrf != \"default\":\n                raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n            return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.address_families","title":"address_families instance-attribute","text":"
    address_families: List[BgpAfi]\n

    List of BGP address families (BgpAfi)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi","title":"BgpAfi","text":"

    Bases: BaseModel

    Source code in anta/tests/routing/bgp.py
    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n    afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n    safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n    If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n    \"\"\"\n    vrf: str = \"default\"\n\"\"\"\n    Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n    If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n    \"\"\"\n\n    @model_validator(mode=\"after\")\n    def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n        Validate the inputs provided to the BgpAfi class.\n\n        If afi is either ipv4 or ipv6, safi must be provided.\n\n        If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n        \"\"\"\n        if self.afi in [\"ipv4\", \"ipv6\"]:\n            if self.safi is None:\n                raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n        elif self.safi is not None:\n            raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n        elif self.vrf != \"default\":\n            raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n        return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi.afi","title":"afi instance-attribute","text":"
    afi: Afi\n

    BGP address family (AFI)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi.safi","title":"safi class-attribute instance-attribute","text":"
    safi: Optional[Safi] = None\n

    Optional BGP subsequent address family (SAFI).

    If the input afi is ipv4 or ipv6, a valid safi must be provided.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    Optional VRF for IPv4 and IPv6. If not provided, it defaults to default.

    If the input afi is not ipv4 or ipv6, e.g. evpn, vrf must be default.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi.validate_inputs","title":"validate_inputs","text":"
    validate_inputs() -> BaseModel\n

    Validate the inputs provided to the BgpAfi class.

    If afi is either ipv4 or ipv6, safi must be provided.

    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.

    Source code in anta/tests/routing/bgp.py
    @model_validator(mode=\"after\")\ndef validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n    Validate the inputs provided to the BgpAfi class.\n\n    If afi is either ipv4 or ipv6, safi must be provided.\n\n    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n    \"\"\"\n    if self.afi in [\"ipv4\", \"ipv6\"]:\n        if self.safi is None:\n            raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n    elif self.safi is not None:\n        raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n    elif self.vrf != \"default\":\n        raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n    return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPRTCCount","title":"VerifyBGPRTCCount","text":"

    Bases: AntaTest

    Verifies all RTC BGP sessions are established (default VRF) and the actual number of BGP RTC neighbors is the one we expect (default VRF).

    • self.result = \u201csuccess\u201d if all RTC BGP sessions are Established and if the actual number of BGP RTC neighbors is the one we expect.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPRTCCount(AntaTest):\n\"\"\"\n    Verifies all RTC BGP sessions are established (default VRF)\n    and the actual number of BGP RTC neighbors is the one we expect (default VRF).\n\n    * self.result = \"success\" if all RTC BGP sessions are Established and if the actual\n                         number of BGP RTC neighbors is the one we expect.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPRTCCount\"\n    description = \"Verifies all RTC BGP sessions are established (default VRF) and the actual number of BGP RTC neighbors is the one we expect (default VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp rt-membership summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: int\n\"\"\"The expected number of BGP RTC neighbors in the default VRF\"\"\"\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeerCount\", \"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"rtc\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        peers = command_output[\"vrfs\"][\"default\"][\"peers\"]\n        non_established_peers = [peer for peer, peer_dict in peers.items() if peer_dict[\"peerState\"] != \"Established\"]\n        if not non_established_peers and len(peers) == self.inputs.number:\n            self.result.is_success()\n        else:\n            self.result.is_failure()\n            if len(peers) != self.inputs.number:\n                self.result.is_failure(f\"Expecting {self.inputs.number} BGP RTC peers and got {len(peers)}\")\n            if non_established_peers:\n                self.result.is_failure(f\"The following RTC peers are not established: {non_established_peers}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPRTCCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: int\n\"\"\"The expected number of BGP RTC neighbors in the default VRF\"\"\"\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPRTCCount.Input.number","title":"number instance-attribute","text":"
    number: int\n

    The expected number of BGP RTC neighbors in the default VRF
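    The source above marks this test as deprecated in favor of VerifyBGPPeerCount and VerifyBGPPeersHealth. For completeness, a minimal input sketch follows; the count is illustrative, and the base AntaTest.Input is assumed to add no other required fields.

```python
from anta.tests.routing.bgp import VerifyBGPRTCCount

# Expect 4 RTC peers in the default VRF (hypothetical value).
inputs = VerifyBGPRTCCount.Input(number=4)
```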

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPRTCState","title":"VerifyBGPRTCState","text":"

    Bases: AntaTest

    Verifies all RTC BGP sessions are established (default VRF).

    • self.result = \u201cskipped\u201d if no BGP RTC peers are returned by the device
    • self.result = \u201csuccess\u201d if all RTC BGP sessions are established.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPRTCState(AntaTest):\n\"\"\"\n    Verifies all RTC BGP sessions are established (default VRF).\n\n    * self.result = \"skipped\" if no BGP RTC peers are returned by the device\n    * self.result = \"success\" if all RTC BGP sessions are established.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPRTCState\"\n    description = \"Verifies all RTC BGP sessions are established (default VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp rt-membership summary\")]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"rtc\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        bgp_vrfs = command_output[\"vrfs\"]\n        peers = bgp_vrfs[\"default\"][\"peers\"]\n        non_established_peers = [peer for peer, peer_dict in peers.items() if peer_dict[\"peerState\"] != \"Established\"]\n        if not non_established_peers:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following RTC peers are not established: {non_established_peers}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers","title":"VerifyBGPSpecificPeers","text":"

    Bases: AntaTest

    This test verifies the health of specific BGP peer(s).

    It will validate that the BGP session is established and all message queues for this BGP session are empty for the given peer(s).

    It supports multiple types of address families (AFI) and subsequent address families (SAFI). Please refer to the Input class attributes below for details.

    Expected Results
    • success: If the BGP session is established and all message queues are empty for each given peer.
    • failure: If the BGP session has issues or is not configured, or if BGP is not configured for an expected VRF or address family.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPSpecificPeers(AntaTest):\n\"\"\"\n    This test verifies the health of specific BGP peer(s).\n\n    It will validate that the BGP session is established and all message queues for this BGP session are empty for the given peer(s).\n\n    It supports multiple types of address families (AFI) and subsequent service families (SAFI).\n    Please refer to the Input class attributes below for details.\n\n    Expected Results:\n        * success: If the BGP session is established and all messages queues are empty for each given peer.\n        * failure: If the BGP session has issues or is not configured, or if BGP is not configured for an expected VRF or address family.\n    \"\"\"\n\n    name = \"VerifyBGPSpecificPeers\"\n    description = \"Verifies the health of specific BGP peer(s).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [\n        AntaTemplate(template=\"show bgp {afi} {safi} summary vrf {vrf}\"),\n        AntaTemplate(template=\"show bgp {afi} summary\"),\n    ]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        address_families: List[BgpAfi]\n\"\"\"\n        List of BGP address families (BgpAfi)\n        \"\"\"\n\n        class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n            afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n            safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n            If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n            \"\"\"\n            vrf: str = \"default\"\n\"\"\"\n            Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n            `all` is NOT supported.\n\n            If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n            \"\"\"\n            peers: List[Union[IPv4Address, IPv6Address]]\n\"\"\"List of BGP IPv4 or IPv6 peer\"\"\"\n\n            @model_validator(mode=\"after\")\n            def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n                Validate the inputs provided to the BgpAfi class.\n\n                If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.\n\n                If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n                \"\"\"\n                if self.afi in [\"ipv4\", \"ipv6\"]:\n                    if self.safi is None:\n                        raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n                    if self.vrf == \"all\":\n                        raise ValueError(\"'all' is not supported in this test. 
Use VerifyBGPPeersHealth test instead.\")\n                elif self.safi is not None:\n                    raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n                elif self.vrf != \"default\":\n                    raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n                return self\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        commands = []\n        for afi in self.inputs.address_families:\n            if template == VerifyBGPSpecificPeers.commands[0] and afi.afi in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, safi=afi.safi, vrf=afi.vrf, peers=afi.peers))\n            elif template == VerifyBGPSpecificPeers.commands[1] and afi.afi not in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, vrf=afi.vrf, peers=afi.peers))\n        return commands\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n\n        failures: dict[tuple[str, Any], dict[str, Any]] = {}\n\n        for command in self.instance_commands:\n            if command.params:\n                command_output = command.json_output\n\n                afi = cast(Afi, command.params.get(\"afi\"))\n                safi = cast(Optional[Safi], command.params.get(\"safi\"))\n                afi_vrf = cast(str, command.params.get(\"vrf\"))\n                afi_peers = cast(List[Union[IPv4Address, IPv6Address]], command.params.get(\"peers\", []))\n\n                if not (vrfs := command_output.get(\"vrfs\")):\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=\"Not Configured\")\n                    continue\n\n                peer_issues = {}\n                for peer in afi_peers:\n                    peer_ip = str(peer)\n                    peer_data = get_value(dictionary=vrfs, key=f\"{afi_vrf}_peers_{peer_ip}\", separator=\"_\")\n                    issues = _check_peer_issues(peer_data)\n                    if issues:\n                        peer_issues[peer_ip] = issues\n\n                if peer_issues:\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=peer_issues)\n\n        if failures:\n            self.result.is_failure(f\"Failures: {list(failures.values())}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    address_families: List[BgpAfi]\n\"\"\"\n    List of BGP address families (BgpAfi)\n    \"\"\"\n\n    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n        afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n        safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n        If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n        \"\"\"\n        vrf: str = \"default\"\n\"\"\"\n        Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n        `all` is NOT supported.\n\n        If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n        \"\"\"\n        peers: List[Union[IPv4Address, IPv6Address]]\n\"\"\"List of BGP IPv4 or IPv6 peer\"\"\"\n\n        @model_validator(mode=\"after\")\n        def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n            Validate the inputs provided to the BgpAfi class.\n\n            If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.\n\n            If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n            \"\"\"\n            if self.afi in [\"ipv4\", \"ipv6\"]:\n                if self.safi is None:\n                    raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n                if self.vrf == \"all\":\n                    raise ValueError(\"'all' is not supported in this test. Use VerifyBGPPeersHealth test instead.\")\n            elif self.safi is not None:\n                raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n            elif self.vrf != \"default\":\n                raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n            return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.address_families","title":"address_families instance-attribute","text":"
    address_families: List[BgpAfi]\n

    List of BGP address families (BgpAfi)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi","title":"BgpAfi","text":"

    Bases: BaseModel

    Source code in anta/tests/routing/bgp.py
    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n    afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n    safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n    If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n    \"\"\"\n    vrf: str = \"default\"\n\"\"\"\n    Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n    `all` is NOT supported.\n\n    If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n    \"\"\"\n    peers: List[Union[IPv4Address, IPv6Address]]\n\"\"\"List of BGP IPv4 or IPv6 peer\"\"\"\n\n    @model_validator(mode=\"after\")\n    def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n        Validate the inputs provided to the BgpAfi class.\n\n        If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.\n\n        If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n        \"\"\"\n        if self.afi in [\"ipv4\", \"ipv6\"]:\n            if self.safi is None:\n                raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n            if self.vrf == \"all\":\n                raise ValueError(\"'all' is not supported in this test. Use VerifyBGPPeersHealth test instead.\")\n        elif self.safi is not None:\n            raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n        elif self.vrf != \"default\":\n            raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n        return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.afi","title":"afi instance-attribute","text":"
    afi: Afi\n

    BGP address family (AFI)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.peers","title":"peers instance-attribute","text":"
    peers: List[Union[IPv4Address, IPv6Address]]\n

    List of BGP IPv4 or IPv6 peer

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.safi","title":"safi class-attribute instance-attribute","text":"
    safi: Optional[Safi] = None\n

    Optional BGP subsequent address family (SAFI).

    If the input afi is ipv4 or ipv6, a valid safi must be provided.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    Optional VRF for IPv4 and IPv6. If not provided, it defaults to default.

    all is NOT supported.

    If the input afi is not ipv4 or ipv6, e.g. evpn, vrf must be default.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.validate_inputs","title":"validate_inputs","text":"
    validate_inputs() -> BaseModel\n

    Validate the inputs provided to the BgpAfi class.

    If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.

    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.

    Source code in anta/tests/routing/bgp.py
    @model_validator(mode=\"after\")\ndef validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n    Validate the inputs provided to the BgpAfi class.\n\n    If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.\n\n    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n    \"\"\"\n    if self.afi in [\"ipv4\", \"ipv6\"]:\n        if self.safi is None:\n            raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n        if self.vrf == \"all\":\n            raise ValueError(\"'all' is not supported in this test. Use VerifyBGPPeersHealth test instead.\")\n    elif self.safi is not None:\n        raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n    elif self.vrf != \"default\":\n        raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n    return self\n
    "},{"location":"api/tests.routing.generic/","title":"Generic","text":""},{"location":"api/tests.routing.generic/#anta-catalog-for-routing-generic-tests","title":"ANTA catalog for routing-generic tests","text":"

    Generic routing test functions

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyBFD","title":"VerifyBFD","text":"

    Bases: AntaTest

    Verifies there is no BFD peer in down state (all VRF, IPv4 neighbors).

    Source code in anta/tests/routing/generic.py
    class VerifyBFD(AntaTest):\n\"\"\"\n    Verifies there is no BFD peer in down state (all VRF, IPv4 neighbors).\n    \"\"\"\n\n    name = \"VerifyBFD\"\n    description = \"Verifies there is no BFD peer in down state (all VRF, IPv4 neighbors).\"\n    categories = [\"routing\", \"generic\"]\n    # revision 1 as later revision introduce additional nesting for type\n    commands = [AntaCommand(command=\"show bfd peers\", revision=1)]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        self.result.is_success()\n        for _, vrf_data in command_output[\"vrfs\"].items():\n            for _, neighbor_data in vrf_data[\"ipv4Neighbors\"].items():\n                for peer, peer_data in neighbor_data[\"peerStats\"].items():\n                    if (peer_status := peer_data[\"status\"]) != \"up\":\n                        failure_message = f\"bfd state for peer '{peer}' is {peer_status} (expected up).\"\n                        if (peer_l3intf := peer_data.get(\"l3intf\")) is not None and peer_l3intf != \"\":\n                            failure_message += f\" Interface: {peer_l3intf}.\"\n                        self.result.is_failure(failure_message)\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingProtocolModel","title":"VerifyRoutingProtocolModel","text":"

    Bases: AntaTest

    Verifies that the configured routing protocol model is the expected one and that there is no mismatch between the configured and operating routing protocol models.

    Source code in anta/tests/routing/generic.py
    class VerifyRoutingProtocolModel(AntaTest):\n\"\"\"\n    Verifies the configured routing protocol model is the one we expect.\n    And if there is no mismatch between the configured and operating routing protocol model.\n    \"\"\"\n\n    name = \"VerifyRoutingProtocolModel\"\n    description = (\n        \"Verifies the configured routing protocol model is the expected one and if there is no mismatch between the configured and operating routing protocol model.\"\n    )\n    categories = [\"routing\", \"generic\"]\n    commands = [AntaCommand(command=\"show ip route summary\", revision=3)]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        model: Literal[\"multi-agent\", \"ribd\"] = \"multi-agent\"\n\"\"\"Expected routing protocol model\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        configured_model = command_output[\"protoModelStatus\"][\"configuredProtoModel\"]\n        operating_model = command_output[\"protoModelStatus\"][\"operatingProtoModel\"]\n        if configured_model == operating_model == self.inputs.model:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"routing model is misconfigured: configured: {configured_model} - operating: {operating_model} - expected: {self.inputs.model}\")\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingProtocolModel.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/generic.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    model: Literal[\"multi-agent\", \"ribd\"] = \"multi-agent\"\n\"\"\"Expected routing protocol model\"\"\"\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingProtocolModel.Input.model","title":"model class-attribute instance-attribute","text":"
    model: Literal['multi-agent', 'ribd'] = 'multi-agent'\n

    Expected routing protocol model
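    A minimal input sketch, assuming the base AntaTest.Input adds no other required fields; "multi-agent" and "ribd" are the only values accepted by the Literal annotation above.

```python
from anta.tests.routing.generic import VerifyRoutingProtocolModel

# "multi-agent" is also the default value of the "model" field.
inputs = VerifyRoutingProtocolModel.Input(model="multi-agent")
```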

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableEntry","title":"VerifyRoutingTableEntry","text":"

    Bases: AntaTest

    This test verifies that the provided routes are present in the routing table of a specified VRF.

    Expected Results
    • success: The test will pass if the provided routes are present in the routing table.
    • failure: The test will fail if one or many provided routes are missing from the routing table.
    Source code in anta/tests/routing/generic.py
    class VerifyRoutingTableEntry(AntaTest):\n\"\"\"\n    This test verifies that the provided routes are present in the routing table of a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided routes are present in the routing table.\n        * failure: The test will fail if one or many provided routes are missing from the routing table.\n    \"\"\"\n\n    name = \"VerifyRoutingTableEntry\"\n    description = \"Verifies that the provided routes are present in the routing table of a specified VRF.\"\n    categories = [\"routing\", \"generic\"]\n    commands = [AntaTemplate(template=\"show ip route vrf {vrf} {route}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n        routes: List[IPv4Address]\n\"\"\"Routes to verify\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(vrf=self.inputs.vrf, route=route) for route in self.inputs.routes]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        missing_routes = []\n\n        for command in self.instance_commands:\n            if command.params and \"vrf\" in command.params and \"route\" in command.params:\n                vrf, route = command.params[\"vrf\"], command.params[\"route\"]\n                if len(routes := command.json_output[\"vrfs\"][vrf][\"routes\"]) == 0 or route != ip_interface(list(routes)[0]).ip:\n                    missing_routes.append(str(route))\n\n        if not missing_routes:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following route(s) are missing from the routing table of VRF {self.inputs.vrf}: {missing_routes}\")\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableEntry.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/generic.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n    routes: List[IPv4Address]\n\"\"\"Routes to verify\"\"\"\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableEntry.Input.routes","title":"routes instance-attribute","text":"
    routes: List[IPv4Address]\n

    Routes to verify

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableEntry.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    VRF context
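    A minimal input sketch; the route values are illustrative and are assumed to be coerced from strings into IPv4Address objects by pydantic, with the base AntaTest.Input adding no other required fields.

```python
from anta.tests.routing.generic import VerifyRoutingTableEntry

inputs = VerifyRoutingTableEntry.Input(
    vrf="default",
    routes=["10.1.0.1", "10.1.0.2"],  # hypothetical routes to look up
)
```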

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize","title":"VerifyRoutingTableSize","text":"

    Bases: AntaTest

    Verifies the size of the IP routing table (default VRF), which should be between the two provided thresholds.

    Source code in anta/tests/routing/generic.py
    class VerifyRoutingTableSize(AntaTest):\n\"\"\"\n    Verifies the size of the IP routing table (default VRF).\n    Should be between the two provided thresholds.\n    \"\"\"\n\n    name = \"VerifyRoutingTableSize\"\n    description = \"Verifies the size of the IP routing table (default VRF). Should be between the two provided thresholds.\"\n    categories = [\"routing\", \"generic\"]\n    commands = [AntaCommand(command=\"show ip route summary\", revision=3)]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        minimum: int\n\"\"\"Expected minimum routing table (default VRF) size\"\"\"\n        maximum: int\n\"\"\"Expected maximum routing table (default VRF) size\"\"\"\n\n        @model_validator(mode=\"after\")  # type: ignore\n        def check_min_max(self) -> AntaTest.Input:\n\"\"\"Validate that maximum is greater than minimum\"\"\"\n            if self.minimum > self.maximum:\n                raise ValueError(f\"Minimum {self.minimum} is greater than maximum {self.maximum}\")\n            return self\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        total_routes = int(command_output[\"vrfs\"][\"default\"][\"totalRoutes\"])\n        if self.inputs.minimum <= total_routes <= self.inputs.maximum:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"routing-table has {total_routes} routes and not between min ({self.inputs.minimum}) and maximum ({self.inputs.maximum})\")\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/generic.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    minimum: int\n\"\"\"Expected minimum routing table (default VRF) size\"\"\"\n    maximum: int\n\"\"\"Expected maximum routing table (default VRF) size\"\"\"\n\n    @model_validator(mode=\"after\")  # type: ignore\n    def check_min_max(self) -> AntaTest.Input:\n\"\"\"Validate that maximum is greater than minimum\"\"\"\n        if self.minimum > self.maximum:\n            raise ValueError(f\"Minimum {self.minimum} is greater than maximum {self.maximum}\")\n        return self\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize.Input.maximum","title":"maximum instance-attribute","text":"
    maximum: int\n

    Expected maximum routing table (default VRF) size

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize.Input.minimum","title":"minimum instance-attribute","text":"
    minimum: int\n

    Expected minimum routing table (default VRF) size

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize.Input.check_min_max","title":"check_min_max","text":"
    check_min_max() -> AntaTest.Input\n

    Validate that maximum is greater than minimum

    Source code in anta/tests/routing/generic.py
    @model_validator(mode=\"after\")  # type: ignore\ndef check_min_max(self) -> AntaTest.Input:\n\"\"\"Validate that maximum is greater than minimum\"\"\"\n    if self.minimum > self.maximum:\n        raise ValueError(f\"Minimum {self.minimum} is greater than maximum {self.maximum}\")\n    return self\n
    "},{"location":"api/tests.routing.ospf/","title":"OSPF","text":""},{"location":"api/tests.routing.ospf/#anta-catalog-for-routing-ospf-tests","title":"ANTA catalog for routing-ospf tests","text":"

    OSPF test functions

    "},{"location":"api/tests.routing.ospf/#anta.tests.routing.ospf.VerifyOSPFNeighborCount","title":"VerifyOSPFNeighborCount","text":"

    Bases: AntaTest

    Verifies the number of OSPF neighbors in FULL state is the one we expect.

    Source code in anta/tests/routing/ospf.py
    class VerifyOSPFNeighborCount(AntaTest):\n\"\"\"\n    Verifies the number of OSPF neighbors in FULL state is the one we expect.\n    \"\"\"\n\n    name = \"VerifyOSPFNeighborCount\"\n    description = \"Verifies the number of OSPF neighbors in FULL state is the one we expect.\"\n    categories = [\"routing\", \"ospf\"]\n    commands = [AntaCommand(command=\"show ip ospf neighbor\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: int\n\"\"\"The expected number of OSPF neighbors in FULL state\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if (neighbor_count := _count_ospf_neighbor(command_output)) == 0:\n            self.result.is_skipped(\"no OSPF neighbor found\")\n            return\n        self.result.is_success()\n        if neighbor_count != self.inputs.number:\n            self.result.is_failure(f\"device has {neighbor_count} neighbors (expected {self.inputs.number})\")\n        not_full_neighbors = _get_not_full_ospf_neighbors(command_output)\n        if not_full_neighbors:\n            self.result.is_failure(f\"Some neighbors are not correctly configured: {not_full_neighbors}.\")\n
    "},{"location":"api/tests.routing.ospf/#anta.tests.routing.ospf.VerifyOSPFNeighborCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/ospf.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: int\n\"\"\"The expected number of OSPF neighbors in FULL state\"\"\"\n
    "},{"location":"api/tests.routing.ospf/#anta.tests.routing.ospf.VerifyOSPFNeighborCount.Input.number","title":"number instance-attribute","text":"
    number: int\n

    The expected number of OSPF neighbors in FULL state
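    A minimal input sketch; the neighbor count is illustrative, and the base AntaTest.Input is assumed to add no other required fields.

```python
from anta.tests.routing.ospf import VerifyOSPFNeighborCount

# Expect 3 OSPF neighbors in FULL state (hypothetical value).
inputs = VerifyOSPFNeighborCount.Input(number=3)
```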

    "},{"location":"api/tests.routing.ospf/#anta.tests.routing.ospf.VerifyOSPFNeighborState","title":"VerifyOSPFNeighborState","text":"

    Bases: AntaTest

    Verifies all OSPF neighbors are in FULL state.

    Source code in anta/tests/routing/ospf.py
    class VerifyOSPFNeighborState(AntaTest):\n\"\"\"\n    Verifies all OSPF neighbors are in FULL state.\n    \"\"\"\n\n    name = \"VerifyOSPFNeighborState\"\n    description = \"Verifies all OSPF neighbors are in FULL state.\"\n    categories = [\"routing\", \"ospf\"]\n    commands = [AntaCommand(command=\"show ip ospf neighbor\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if _count_ospf_neighbor(command_output) == 0:\n            self.result.is_skipped(\"no OSPF neighbor found\")\n            return\n        self.result.is_success()\n        not_full_neighbors = _get_not_full_ospf_neighbors(command_output)\n        if not_full_neighbors:\n            self.result.is_failure(f\"Some neighbors are not correctly configured: {not_full_neighbors}.\")\n
    "},{"location":"api/tests.security/","title":"Security","text":""},{"location":"api/tests.security/#anta-catalog-for-security-tests","title":"ANTA catalog for security tests","text":"

    Test functions related to the EOS various security settings

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIHttpStatus","title":"VerifyAPIHttpStatus","text":"

    Bases: AntaTest

    Verifies if eAPI HTTP server is disabled globally.

    Expected Results
    • success: The test will pass if eAPI HTTP server is disabled globally.
    • failure: The test will fail if eAPI HTTP server is NOT disabled globally.
    Source code in anta/tests/security.py
    class VerifyAPIHttpStatus(AntaTest):\n\"\"\"\n    Verifies if eAPI HTTP server is disabled globally.\n\n    Expected Results:\n        * success: The test will pass if eAPI HTTP server is disabled globally.\n        * failure: The test will fail if eAPI HTTP server is NOT disabled globally.\n    \"\"\"\n\n    name = \"VerifyAPIHttpStatus\"\n    description = \"Verifies if eAPI HTTP server is disabled globally.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management api http-commands\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"enabled\"] and not command_output[\"httpServer\"][\"running\"]:\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"eAPI HTTP server is enabled globally\")\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIHttpsSSL","title":"VerifyAPIHttpsSSL","text":"

    Bases: AntaTest

    Verifies if eAPI HTTPS server SSL profile is configured and valid.

    Expected results
    • success: The test will pass if the eAPI HTTPS server SSL profile is configured and valid.
    • failure: The test will fail if the eAPI HTTPS server SSL profile is NOT configured, misconfigured or invalid.
    Source code in anta/tests/security.py
    class VerifyAPIHttpsSSL(AntaTest):\n\"\"\"\n    Verifies if eAPI HTTPS server SSL profile is configured and valid.\n\n    Expected results:\n        * success: The test will pass if the eAPI HTTPS server SSL profile is configured and valid.\n        * failure: The test will fail if the eAPI HTTPS server SSL profile is NOT configured, misconfigured or invalid.\n    \"\"\"\n\n    name = \"VerifyAPIHttpsSSL\"\n    description = \"Verifies if eAPI HTTPS server SSL profile is configured and valid.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management api http-commands\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        profile: str\n\"\"\"SSL profile to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        try:\n            if command_output[\"sslProfile\"][\"name\"] == self.inputs.profile and command_output[\"sslProfile\"][\"state\"] == \"valid\":\n                self.result.is_success()\n            else:\n                self.result.is_failure(f\"eAPI HTTPS server SSL profile ({self.inputs.profile}) is misconfigured or invalid\")\n\n        except KeyError:\n            self.result.is_failure(f\"eAPI HTTPS server SSL profile ({self.inputs.profile}) is not configured\")\n
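    A hedged illustration of the test input, assuming the Input model can be built standalone; the profile name API_SSL_Profile is an arbitrary example.
    from anta.tests.security import VerifyAPIHttpsSSL\n\n# Illustrative SSL profile name\ninputs = VerifyAPIHttpsSSL.Input(profile=\"API_SSL_Profile\")\n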
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIHttpsSSL.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    profile: str\n\"\"\"SSL profile to verify\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIHttpsSSL.Input.profile","title":"profile instance-attribute","text":"
    profile: str\n

    SSL profile to verify

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv4Acl","title":"VerifyAPIIPv4Acl","text":"

    Bases: AntaTest

    Verifies if eAPI has the right number of IPv4 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if eAPI has the provided number of IPv4 ACL(s) in the specified VRF.
    • failure: The test will fail if eAPI does not have the right number of IPv4 ACL(s) in the specified VRF.
    Source code in anta/tests/security.py
    class VerifyAPIIPv4Acl(AntaTest):\n\"\"\"\n    Verifies if eAPI has the right number IPv4 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if eAPI has the provided number of IPv4 ACL(s) in the specified VRF.\n        * failure: The test will fail if eAPI has not the right number of IPv4 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyAPIIPv4Acl\"\n    description = \"Verifies if eAPI has the right number IPv4 ACL(s) configured for a specified VRF.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management api http-commands ip access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for eAPI\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv4_acl_list = command_output[\"ipAclList\"][\"aclList\"]\n        ipv4_acl_number = len(ipv4_acl_list)\n        not_configured_acl_list = []\n        if ipv4_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} eAPI IPv4 ACL(s) in vrf {self.inputs.vrf} but got {ipv4_acl_number}\")\n            return\n        for ipv4_acl in ipv4_acl_list:\n            if self.inputs.vrf not in ipv4_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv4_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv4_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"eAPI IPv4 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
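    A short input sketch (illustrative only, assuming standalone pydantic instantiation of the Input model); the ACL count and the MGMT VRF name are example values.
    from anta.tests.security import VerifyAPIIPv4Acl\n\n# Illustrative expectation: one eAPI IPv4 ACL in the MGMT VRF\ninputs = VerifyAPIIPv4Acl.Input(number=1, vrf=\"MGMT\")\n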
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv4Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for eAPI\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv4Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv4 ACL(s)

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv4Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for eAPI

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv6Acl","title":"VerifyAPIIPv6Acl","text":"

    Bases: AntaTest

    Verifies if eAPI has the right number of IPv6 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if eAPI has the provided number of IPv6 ACL(s) in the specified VRF.
    • failure: The test will fail if eAPI does not have the right number of IPv6 ACL(s) in the specified VRF.
    • skipped: The test will be skipped if the number of IPv6 ACL(s) or VRF parameter is not provided.
    Source code in anta/tests/security.py
    class VerifyAPIIPv6Acl(AntaTest):\n\"\"\"\n    Verifies if eAPI has the right number IPv6 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if eAPI has the provided number of IPv6 ACL(s) in the specified VRF.\n        * failure: The test will fail if eAPI has not the right number of IPv6 ACL(s) in the specified VRF.\n        * skipped: The test will be skipped if the number of IPv6 ACL(s) or VRF parameter is not provided.\n    \"\"\"\n\n    name = \"VerifyAPIIPv6Acl\"\n    description = \"Verifies if eAPI has the right number IPv6 ACL(s) configured for a specified VRF.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management api http-commands ipv6 access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for eAPI\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv6_acl_list = command_output[\"ipv6AclList\"][\"aclList\"]\n        ipv6_acl_number = len(ipv6_acl_list)\n        not_configured_acl_list = []\n        if ipv6_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} eAPI IPv6 ACL(s) in vrf {self.inputs.vrf} but got {ipv6_acl_number}\")\n            return\n        for ipv6_acl in ipv6_acl_list:\n            if self.inputs.vrf not in ipv6_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv6_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv6_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"eAPI IPv6 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv6Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for eAPI\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv6Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv6 ACL(s)

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv6Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for eAPI

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv4Acl","title":"VerifySSHIPv4Acl","text":"

    Bases: AntaTest

    Verifies if the SSHD agent has the right number of IPv4 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if the SSHD agent has the provided number of IPv4 ACL(s) in the specified VRF.
    • failure: The test will fail if the SSHD agent does not have the right number of IPv4 ACL(s) in the specified VRF.
    Source code in anta/tests/security.py
    class VerifySSHIPv4Acl(AntaTest):\n\"\"\"\n    Verifies if the SSHD agent has the right number IPv4 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if the SSHD agent has the provided number of IPv4 ACL(s) in the specified VRF.\n        * failure: The test will fail if the SSHD agent has not the right number of IPv4 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySSHIPv4Acl\"\n    description = \"Verifies if the SSHD agent has IPv4 ACL(s) configured.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management ssh ip access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SSHD agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv4_acl_list = command_output[\"ipAclList\"][\"aclList\"]\n        ipv4_acl_number = len(ipv4_acl_list)\n        not_configured_acl_list = []\n        if ipv4_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} SSH IPv4 ACL(s) in vrf {self.inputs.vrf} but got {ipv4_acl_number}\")\n            return\n        for ipv4_acl in ipv4_acl_list:\n            if self.inputs.vrf not in ipv4_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv4_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv4_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"SSH IPv4 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
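    The sketch below mirrors the eAPI ACL example above, applied to the SSHD agent; the ACL count and VRF name are assumptions used for illustration only.
    from anta.tests.security import VerifySSHIPv4Acl\n\n# Illustrative expectation: one SSH IPv4 ACL in the MGMT VRF\ninputs = VerifySSHIPv4Acl.Input(number=1, vrf=\"MGMT\")\n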
    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv4Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SSHD agent\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv4Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv4 ACL(s)

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv4Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SSHD agent

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv6Acl","title":"VerifySSHIPv6Acl","text":"

    Bases: AntaTest

    Verifies if the SSHD agent has the right number of IPv6 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if the SSHD agent has the provided number of IPv6 ACL(s) in the specified VRF.
    • failure: The test will fail if the SSHD agent does not have the right number of IPv6 ACL(s) in the specified VRF.
    Source code in anta/tests/security.py
    class VerifySSHIPv6Acl(AntaTest):\n\"\"\"\n    Verifies if the SSHD agent has the right number IPv6 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if the SSHD agent has the provided number of IPv6 ACL(s) in the specified VRF.\n        * failure: The test will fail if the SSHD agent has not the right number of IPv6 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySSHIPv6Acl\"\n    description = \"Verifies if the SSHD agent has IPv6 ACL(s) configured.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management ssh ipv6 access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SSHD agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv6_acl_list = command_output[\"ipv6AclList\"][\"aclList\"]\n        ipv6_acl_number = len(ipv6_acl_list)\n        not_configured_acl_list = []\n        if ipv6_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} SSH IPv6 ACL(s) in vrf {self.inputs.vrf} but got {ipv6_acl_number}\")\n            return\n        for ipv6_acl in ipv6_acl_list:\n            if self.inputs.vrf not in ipv6_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv6_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv6_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"SSH IPv6 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv6Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SSHD agent\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv6Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv6 ACL(s)

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv6Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SSHD agent

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHStatus","title":"VerifySSHStatus","text":"

    Bases: AntaTest

    Verifies if the SSHD agent is disabled in the default VRF.

    Expected Results
    • success: The test will pass if the SSHD agent is disabled in the default VRF.
    • failure: The test will fail if the SSHD agent is NOT disabled in the default VRF.
    Source code in anta/tests/security.py
    class VerifySSHStatus(AntaTest):\n\"\"\"\n    Verifies if the SSHD agent is disabled in the default VRF.\n\n    Expected Results:\n        * success: The test will pass if the SSHD agent is disabled in the default VRF.\n        * failure: The test will fail if the SSHD agent is NOT disabled in the default VRF.\n    \"\"\"\n\n    name = \"VerifySSHStatus\"\n    description = \"Verifies if the SSHD agent is disabled in the default VRF.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management ssh\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n\n        line = [line for line in command_output.split(\"\\n\") if line.startswith(\"SSHD status\")][0]\n        status = line.split(\"is \")[1]\n\n        if status == \"disabled\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(line)\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyTelnetStatus","title":"VerifyTelnetStatus","text":"

    Bases: AntaTest

    Verifies if Telnet is disabled in the default VRF.

    Expected Results
    • success: The test will pass if Telnet is disabled in the default VRF.
    • failure: The test will fail if Telnet is NOT disabled in the default VRF.
    Source code in anta/tests/security.py
    class VerifyTelnetStatus(AntaTest):\n\"\"\"\n    Verifies if Telnet is disabled in the default VRF.\n\n    Expected Results:\n        * success: The test will pass if Telnet is disabled in the default VRF.\n        * failure: The test will fail if Telnet is NOT disabled in the default VRF.\n    \"\"\"\n\n    name = \"VerifyTelnetStatus\"\n    description = \"Verifies if Telnet is disabled in the default VRF.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management telnet\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"serverState\"] == \"disabled\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"Telnet status for Default VRF is enabled\")\n
    "},{"location":"api/tests.snmp/","title":"SNMP","text":""},{"location":"api/tests.snmp/#anta-catalog-for-snmp-tests","title":"ANTA catalog for SNMP tests","text":"

    Test functions related to the EOS various SNMP settings

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv4Acl","title":"VerifySnmpIPv4Acl","text":"

    Bases: AntaTest

    Verifies if the SNMP agent has the right number of IPv4 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if the SNMP agent has the provided number of IPv4 ACL(s) in the specified VRF.
    • failure: The test will fail if the SNMP agent does not have the right number of IPv4 ACL(s) in the specified VRF.
    Source code in anta/tests/snmp.py
    class VerifySnmpIPv4Acl(AntaTest):\n\"\"\"\n    Verifies if the SNMP agent has the right number IPv4 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if the SNMP agent has the provided number of IPv4 ACL(s) in the specified VRF.\n        * failure: The test will fail if the SNMP agent has not the right number of IPv4 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySnmpIPv4Acl\"\n    description = \"Verifies if the SNMP agent has IPv4 ACL(s) configured.\"\n    categories = [\"snmp\"]\n    commands = [AntaCommand(command=\"show snmp ipv4 access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv4_acl_list = command_output[\"ipAclList\"][\"aclList\"]\n        ipv4_acl_number = len(ipv4_acl_list)\n        not_configured_acl_list = []\n        if ipv4_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} SNMP IPv4 ACL(s) in vrf {self.inputs.vrf} but got {ipv4_acl_number}\")\n            return\n        for ipv4_acl in ipv4_acl_list:\n            if self.inputs.vrf not in ipv4_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv4_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv4_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"SNMP IPv4 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
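    As with the eAPI and SSH ACL tests, a hedged input sketch with example values only (one ACL, MGMT VRF), assuming the Input model validates standalone.
    from anta.tests.snmp import VerifySnmpIPv4Acl\n\n# Illustrative expectation: one SNMP IPv4 ACL in the MGMT VRF\ninputs = VerifySnmpIPv4Acl.Input(number=1, vrf=\"MGMT\")\n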
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv4Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/snmp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv4Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv4 ACL(s)

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv4Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SNMP agent

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv6Acl","title":"VerifySnmpIPv6Acl","text":"

    Bases: AntaTest

    Verifies if the SNMP agent has the right number of IPv6 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if the SNMP agent has the provided number of IPv6 ACL(s) in the specified VRF.
    • failure: The test will fail if the SNMP agent does not have the right number of IPv6 ACL(s) in the specified VRF.
    Source code in anta/tests/snmp.py
    class VerifySnmpIPv6Acl(AntaTest):\n\"\"\"\n    Verifies if the SNMP agent has the right number IPv6 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if the SNMP agent has the provided number of IPv6 ACL(s) in the specified VRF.\n        * failure: The test will fail if the SNMP agent has not the right number of IPv6 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySnmpIPv6Acl\"\n    description = \"Verifies if the SNMP agent has IPv6 ACL(s) configured.\"\n    categories = [\"snmp\"]\n    commands = [AntaCommand(command=\"show snmp ipv6 access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv6_acl_list = command_output[\"ipv6AclList\"][\"aclList\"]\n        ipv6_acl_number = len(ipv6_acl_list)\n        not_configured_acl_list = []\n        if ipv6_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} SNMP IPv6 ACL(s) in vrf {self.inputs.vrf} but got {ipv6_acl_number}\")\n            return\n        for ipv6_acl in ipv6_acl_list:\n            if self.inputs.vrf not in ipv6_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv6_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv6_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"SNMP IPv6 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv6Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/snmp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv6Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv6 ACL(s)

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv6Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SNMP agent

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpStatus","title":"VerifySnmpStatus","text":"

    Bases: AntaTest

    Verifies whether the SNMP agent is enabled in a specified VRF.

    Expected Results
    • success: The test will pass if the SNMP agent is enabled in the specified VRF.
    • failure: The test will fail if the SNMP agent is disabled in the specified VRF.
    Source code in anta/tests/snmp.py
    class VerifySnmpStatus(AntaTest):\n\"\"\"\n    Verifies whether the SNMP agent is enabled in a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the SNMP agent is enabled in the specified VRF.\n        * failure: The test will fail if the SNMP agent is disabled in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySnmpStatus\"\n    description = \"Verifies if the SNMP agent is enabled.\"\n    categories = [\"snmp\"]\n    commands = [AntaCommand(command=\"show snmp\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"enabled\"] and self.inputs.vrf in command_output[\"vrfs\"][\"snmpVrfs\"]:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"SNMP agent disabled in vrf {self.inputs.vrf}\")\n
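    A minimal sketch of the single input this test accepts; the MGMT VRF name is an example, and standalone instantiation of the Input model is assumed.
    from anta.tests.snmp import VerifySnmpStatus\n\n# Illustrative VRF name; falls back to \"default\" when omitted\ninputs = VerifySnmpStatus.Input(vrf=\"MGMT\")\n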
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpStatus.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/snmp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpStatus.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SNMP agent

    "},{"location":"api/tests.software/","title":"Software","text":""},{"location":"api/tests.software/#anta-catalog-for-software-tests","title":"ANTA catalog for software tests","text":"

    Test functions related to the EOS software

    "},{"location":"api/tests.software/#anta.tests.software.VerifyEOSExtensions","title":"VerifyEOSExtensions","text":"

    Bases: AntaTest

    Verifies all EOS extensions installed on the device are enabled for boot persistence.

    Source code in anta/tests/software.py
    class VerifyEOSExtensions(AntaTest):\n\"\"\"\n    Verifies all EOS extensions installed on the device are enabled for boot persistence.\n    \"\"\"\n\n    name = \"VerifyEOSExtensions\"\n    description = \"Verifies all EOS extensions installed on the device are enabled for boot persistence.\"\n    categories = [\"software\"]\n    commands = [AntaCommand(command=\"show extensions\"), AntaCommand(command=\"show boot-extensions\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        boot_extensions = []\n        show_extensions_command_output = self.instance_commands[0].json_output\n        show_boot_extensions_command_output = self.instance_commands[1].json_output\n        installed_extensions = [\n            extension for extension, extension_data in show_extensions_command_output[\"extensions\"].items() if extension_data[\"status\"] == \"installed\"\n        ]\n        for extension in show_boot_extensions_command_output[\"extensions\"]:\n            extension = extension.strip(\"\\n\")\n            if extension != \"\":\n                boot_extensions.append(extension)\n        installed_extensions.sort()\n        boot_extensions.sort()\n        if installed_extensions == boot_extensions:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Missing EOS extensions: installed {installed_extensions} / configured: {boot_extensions}\")\n
    "},{"location":"api/tests.software/#anta.tests.software.VerifyEOSVersion","title":"VerifyEOSVersion","text":"

    Bases: AntaTest

    Verifies the device is running one of the allowed EOS versions.

    Source code in anta/tests/software.py
    class VerifyEOSVersion(AntaTest):\n\"\"\"\n    Verifies the device is running one of the allowed EOS version.\n    \"\"\"\n\n    name = \"VerifyEOSVersion\"\n    description = \"Verifies the device is running one of the allowed EOS version.\"\n    categories = [\"software\"]\n    commands = [AntaCommand(command=\"show version\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        versions: List[str]\n\"\"\"List of allowed EOS versions\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"version\"] in self.inputs.versions:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f'device is running version {command_output[\"version\"]} not in expected versions: {self.inputs.versions}')\n
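    A hedged example of the expected input, assuming the Input model can be instantiated directly; the version strings are placeholders, not a recommendation.
    from anta.tests.software import VerifyEOSVersion\n\n# Illustrative list of allowed EOS versions\ninputs = VerifyEOSVersion.Input(versions=[\"4.29.2F\", \"4.28.5M\"])\n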
    "},{"location":"api/tests.software/#anta.tests.software.VerifyEOSVersion.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/software.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    versions: List[str]\n\"\"\"List of allowed EOS versions\"\"\"\n
    "},{"location":"api/tests.software/#anta.tests.software.VerifyEOSVersion.Input.versions","title":"versions instance-attribute","text":"
    versions: List[str]\n

    List of allowed EOS versions

    "},{"location":"api/tests.software/#anta.tests.software.VerifyTerminAttrVersion","title":"VerifyTerminAttrVersion","text":"

    Bases: AntaTest

    Verifies the device is running one of the allowed TerminAttr versions.

    Source code in anta/tests/software.py
    class VerifyTerminAttrVersion(AntaTest):\n\"\"\"\n    Verifies the device is running one of the allowed TerminAttr version.\n    \"\"\"\n\n    name = \"VerifyTerminAttrVersion\"\n    description = \"Verifies the device is running one of the allowed TerminAttr version.\"\n    categories = [\"software\"]\n    commands = [AntaCommand(command=\"show version detail\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        versions: List[str]\n\"\"\"List of allowed TerminAttr versions\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        command_output_data = command_output[\"details\"][\"packages\"][\"TerminAttr-core\"][\"version\"]\n        if command_output_data in self.inputs.versions:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"device is running TerminAttr version {command_output_data} and is not in the allowed list: {self.inputs.versions}\")\n
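    Same pattern as VerifyEOSVersion, with placeholder TerminAttr versions (illustrative values; standalone Input instantiation assumed).
    from anta.tests.software import VerifyTerminAttrVersion\n\n# Illustrative list of allowed TerminAttr versions\ninputs = VerifyTerminAttrVersion.Input(versions=[\"v1.27.0\", \"v1.26.1\"])\n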
    "},{"location":"api/tests.software/#anta.tests.software.VerifyTerminAttrVersion.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/software.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    versions: List[str]\n\"\"\"List of allowed TerminAttr versions\"\"\"\n
    "},{"location":"api/tests.software/#anta.tests.software.VerifyTerminAttrVersion.Input.versions","title":"versions instance-attribute","text":"
    versions: List[str]\n

    List of allowed TerminAttr versions

    "},{"location":"api/tests.stp/","title":"STP","text":""},{"location":"api/tests.stp/#anta-catalog-for-stp-tests","title":"ANTA catalog for STP tests","text":"

    Test functions related to various Spanning Tree Protocol (STP) settings

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPBlockedPorts","title":"VerifySTPBlockedPorts","text":"

    Bases: AntaTest

    Verifies there are no STP blocked ports.

    Expected Results
    • success: The test will pass if there are NO ports blocked by STP.
    • failure: The test will fail if there are ports blocked by STP.
    Source code in anta/tests/stp.py
    class VerifySTPBlockedPorts(AntaTest):\n\"\"\"\n    Verifies there is no STP blocked ports.\n\n    Expected Results:\n        * success: The test will pass if there are NO ports blocked by STP.\n        * failure: The test will fail if there are ports blocked by STP.\n    \"\"\"\n\n    name = \"VerifySTPBlockedPorts\"\n    description = \"Verifies there is no STP blocked ports.\"\n    categories = [\"stp\"]\n    commands = [AntaCommand(command=\"show spanning-tree blockedports\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if not (stp_instances := command_output[\"spanningTreeInstances\"]):\n            self.result.is_success()\n        else:\n            for key, value in stp_instances.items():\n                stp_instances[key] = value.pop(\"spanningTreeBlockedPorts\")\n            self.result.is_failure(f\"The following ports are blocked by STP: {stp_instances}\")\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPCounters","title":"VerifySTPCounters","text":"

    Bases: AntaTest

    Verifies there are no errors in STP BPDU packets.

    Expected Results
    • success: The test will pass if there are NO STP BPDU packet errors under all interfaces participating in STP.
    • failure: The test will fail if there are STP BPDU packet errors on one or many interface(s).
    Source code in anta/tests/stp.py
    class VerifySTPCounters(AntaTest):\n\"\"\"\n    Verifies there is no errors in STP BPDU packets.\n\n    Expected Results:\n        * success: The test will pass if there are NO STP BPDU packet errors under all interfaces participating in STP.\n        * failure: The test will fail if there are STP BPDU packet errors on one or many interface(s).\n    \"\"\"\n\n    name = \"VerifySTPCounters\"\n    description = \"Verifies there is no errors in STP BPDU packets.\"\n    categories = [\"stp\"]\n    commands = [AntaCommand(command=\"show spanning-tree counters\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        interfaces_with_errors = [\n            interface for interface, counters in command_output[\"interfaces\"].items() if counters[\"bpduTaggedError\"] or counters[\"bpduOtherError\"] != 0\n        ]\n        if interfaces_with_errors:\n            self.result.is_failure(f\"The following interfaces have STP BPDU packet errors: {interfaces_with_errors}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPForwardingPorts","title":"VerifySTPForwardingPorts","text":"

    Bases: AntaTest

    Verifies that all interfaces are in a forwarding state for a provided list of VLAN(s).

    Expected Results
    • success: The test will pass if all interfaces are in a forwarding state for the specified VLAN(s).
    • failure: The test will fail if one or many interfaces are NOT in a forwarding state in the specified VLAN(s).
    Source code in anta/tests/stp.py
    class VerifySTPForwardingPorts(AntaTest):\n\"\"\"\n    Verifies that all interfaces are in a forwarding state for a provided list of VLAN(s).\n\n    Expected Results:\n        * success: The test will pass if all interfaces are in a forwarding state for the specified VLAN(s).\n        * failure: The test will fail if one or many interfaces are NOT in a forwarding state in the specified VLAN(s).\n    \"\"\"\n\n    name = \"VerifySTPForwardingPorts\"\n    description = \"Verifies that all interfaces are forwarding for a provided list of VLAN(s).\"\n    categories = [\"stp\"]\n    commands = [AntaTemplate(template=\"show spanning-tree topology vlan {vlan} status\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vlans: List[Vlan]\n\"\"\"List of VLAN on which to verify forwarding states\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(vlan=vlan) for vlan in self.inputs.vlans]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        not_configured = []\n        not_forwarding = []\n        for command in self.instance_commands:\n            if command.params and \"vlan\" in command.params:\n                vlan_id = command.params[\"vlan\"]\n            if not (topologies := get_value(command.json_output, \"topologies\")):\n                not_configured.append(vlan_id)\n            else:\n                for value in topologies.values():\n                    if int(vlan_id) in value[\"vlans\"]:\n                        interfaces_not_forwarding = [interface for interface, state in value[\"interfaces\"].items() if state[\"state\"] != \"forwarding\"]\n                if interfaces_not_forwarding:\n                    not_forwarding.append({f\"VLAN {vlan_id}\": interfaces_not_forwarding})\n        if not_configured:\n            self.result.is_failure(f\"STP instance is not configured for the following VLAN(s): {not_configured}\")\n        if not_forwarding:\n            self.result.is_failure(f\"The following VLAN(s) have interface(s) that are not in a fowarding state: {not_forwarding}\")\n        if not not_configured and not interfaces_not_forwarding:\n            self.result.is_success()\n
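    A brief input sketch, assuming standalone instantiation of the Input model; VLANs 10 and 20 are arbitrary example IDs.
    from anta.tests.stp import VerifySTPForwardingPorts\n\n# Illustrative VLAN IDs to check for forwarding state\ninputs = VerifySTPForwardingPorts.Input(vlans=[10, 20])\n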
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPForwardingPorts.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/stp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vlans: List[Vlan]\n\"\"\"List of VLAN on which to verify forwarding states\"\"\"\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPForwardingPorts.Input.vlans","title":"vlans instance-attribute","text":"
    vlans: List[Vlan]\n

    List of VLANs on which to verify forwarding states

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPMode","title":"VerifySTPMode","text":"

    Bases: AntaTest

    Verifies the configured STP mode for a provided list of VLAN(s).

    Expected Results
    • success: The test will pass if the STP mode is configured properly in the specified VLAN(s).
    • failure: The test will fail if the STP mode is NOT configured properly for one or more specified VLAN(s).
    Source code in anta/tests/stp.py
    class VerifySTPMode(AntaTest):\n\"\"\"\n    Verifies the configured STP mode for a provided list of VLAN(s).\n\n    Expected Results:\n        * success: The test will pass if the STP mode is configured properly in the specified VLAN(s).\n        * failure: The test will fail if the STP mode is NOT configured properly for one or more specified VLAN(s).\n    \"\"\"\n\n    name = \"VerifySTPMode\"\n    description = \"Verifies the configured STP mode for a provided list of VLAN(s).\"\n    categories = [\"stp\"]\n    commands = [AntaTemplate(template=\"show spanning-tree vlan {vlan}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        mode: Literal[\"mstp\", \"rstp\", \"rapidPvst\"] = \"mstp\"\n\"\"\"STP mode to verify\"\"\"\n        vlans: List[Vlan]\n\"\"\"List of VLAN on which to verify STP mode\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(vlan=vlan) for vlan in self.inputs.vlans]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        not_configured = []\n        wrong_stp_mode = []\n        for command in self.instance_commands:\n            if command.params and \"vlan\" in command.params:\n                vlan_id = command.params[\"vlan\"]\n            if not (stp_mode := get_value(command.json_output, f\"spanningTreeVlanInstances.{vlan_id}.spanningTreeVlanInstance.protocol\")):\n                not_configured.append(vlan_id)\n            elif stp_mode != self.inputs.mode:\n                wrong_stp_mode.append(vlan_id)\n        if not_configured:\n            self.result.is_failure(f\"STP mode '{self.inputs.mode}' not configured for the following VLAN(s): {not_configured}\")\n        if wrong_stp_mode:\n            self.result.is_failure(f\"Wrong STP mode configured for the following VLAN(s): {wrong_stp_mode}\")\n        if not not_configured and not wrong_stp_mode:\n            self.result.is_success()\n
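    A hedged input sketch with example values (rapid-PVST expected on VLANs 10 and 20), assuming the Input model validates on its own.
    from anta.tests.stp import VerifySTPMode\n\n# Illustrative expectation: rapid-PVST on VLANs 10 and 20\ninputs = VerifySTPMode.Input(mode=\"rapidPvst\", vlans=[10, 20])\n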
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPMode.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/stp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    mode: Literal[\"mstp\", \"rstp\", \"rapidPvst\"] = \"mstp\"\n\"\"\"STP mode to verify\"\"\"\n    vlans: List[Vlan]\n\"\"\"List of VLAN on which to verify STP mode\"\"\"\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPMode.Input.mode","title":"mode class-attribute instance-attribute","text":"
    mode: Literal['mstp', 'rstp', 'rapidPvst'] = 'mstp'\n

    STP mode to verify

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPMode.Input.vlans","title":"vlans instance-attribute","text":"
    vlans: List[Vlan]\n

    List of VLANs on which to verify STP mode

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPRootPriority","title":"VerifySTPRootPriority","text":"

    Bases: AntaTest

    Verifies the STP root priority for a provided list of VLAN or MST instance ID(s).

    Expected Results
    • success: The test will pass if the STP root priority is configured properly for the specified VLAN or MST instance ID(s).
    • failure: The test will fail if the STP root priority is NOT configured properly for the specified VLAN or MST instance ID(s).
    Source code in anta/tests/stp.py
    class VerifySTPRootPriority(AntaTest):\n\"\"\"\n    Verifies the STP root priority for a provided list of VLAN or MST instance ID(s).\n\n    Expected Results:\n        * success: The test will pass if the STP root priority is configured properly for the specified VLAN or MST instance ID(s).\n        * failure: The test will fail if the STP root priority is NOT configured properly for the specified VLAN or MST instance ID(s).\n    \"\"\"\n\n    name = \"VerifySTPRootPriority\"\n    description = \"Verifies the STP root priority for a provided list of VLAN or MST instance ID(s).\"\n    categories = [\"stp\"]\n    commands = [AntaCommand(command=\"show spanning-tree root detail\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        priority: int\n\"\"\"STP root priority to verify\"\"\"\n        instances: List[Vlan] = []\n\"\"\"List of VLAN or MST instance ID(s). If empty, ALL VLAN or MST instance ID(s) will be verified.\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if not (stp_instances := command_output[\"instances\"]):\n            self.result.is_failure(\"No STP instances configured\")\n            return\n        for instance in stp_instances:\n            if instance.startswith(\"MST\"):\n                prefix = \"MST\"\n                break\n            if instance.startswith(\"VL\"):\n                prefix = \"VL\"\n                break\n        check_instances = [f\"{prefix}{instance_id}\" for instance_id in self.inputs.instances] if self.inputs.instances else command_output[\"instances\"].keys()\n        wrong_priority_instances = [\n            instance for instance in check_instances if get_value(command_output, f\"instances.{instance}.rootBridge.priority\") != self.inputs.priority\n        ]\n        if wrong_priority_instances:\n            self.result.is_failure(f\"The following instance(s) have the wrong STP root priority configured: {wrong_priority_instances}\")\n        else:\n            self.result.is_success()\n
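    A short illustration of the inputs, assuming standalone instantiation of the Input model; the priority and instance IDs are example values.
    from anta.tests.stp import VerifySTPRootPriority\n\n# Illustrative expectation: root priority 32768 on VLANs 10 and 20\ninputs = VerifySTPRootPriority.Input(priority=32768, instances=[10, 20])\n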
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPRootPriority.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/stp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    priority: int\n\"\"\"STP root priority to verify\"\"\"\n    instances: List[Vlan] = []\n\"\"\"List of VLAN or MST instance ID(s). If empty, ALL VLAN or MST instance ID(s) will be verified.\"\"\"\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPRootPriority.Input.instances","title":"instances class-attribute instance-attribute","text":"
    instances: List[Vlan] = []\n

    List of VLAN or MST instance ID(s). If empty, ALL VLAN or MST instance ID(s) will be verified.

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPRootPriority.Input.priority","title":"priority instance-attribute","text":"
    priority: int\n

    STP root priority to verify

    "},{"location":"api/tests.system/","title":"System","text":""},{"location":"api/tests.system/#anta-catalog-for-system-tests","title":"ANTA catalog for system tests","text":"

    Test functions related to system-level features and protocols

    "},{"location":"api/tests.system/#anta.tests.system.VerifyAgentLogs","title":"VerifyAgentLogs","text":"

    Bases: AntaTest

    This test verifies that no agent crash reports are present on the device.

    Expected Results
    • success: The test will pass if there is NO agent crash reported.
    • failure: The test will fail if any agent crashes are reported.
    Source code in anta/tests/system.py
    class VerifyAgentLogs(AntaTest):\n\"\"\"\n    This test verifies that no agent crash reports are present on the device.\n\n    Expected Results:\n      * success: The test will pass if there is NO agent crash reported.\n      * failure: The test will fail if any agent crashes are reported.\n    \"\"\"\n\n    name = \"VerifyAgentLogs\"\n    description = \"This test verifies that no agent crash reports are present on the device.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show agent logs crash\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n        if len(command_output) == 0:\n            self.result.is_success()\n        else:\n            pattern = re.compile(r\"^===> (.*?) <===$\", re.MULTILINE)\n            agents = \"\\n * \".join(pattern.findall(command_output))\n            self.result.is_failure(f\"Device has reported agent crashes:\\n * {agents}\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyCPUUtilization","title":"VerifyCPUUtilization","text":"

    Bases: AntaTest

    This test verifies whether the CPU utilization is below 75%.

    Expected Results
    • success: The test will pass if the CPU utilization is below 75%.
    • failure: The test will fail if the CPU utilization is over 75%.
    Source code in anta/tests/system.py
    class VerifyCPUUtilization(AntaTest):\n\"\"\"\n    This test verifies whether the CPU utilization is below 75%.\n\n    Expected Results:\n      * success: The test will pass if the CPU utilization is below 75%.\n      * failure: The test will fail if the CPU utilization is over 75%.\n    \"\"\"\n\n    name = \"VerifyCPUUtilization\"\n    description = \"This test verifies whether the CPU utilization is below 75%.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show processes top once\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        command_output_data = command_output[\"cpuInfo\"][\"%Cpu(s)\"][\"idle\"]\n        if command_output_data > 25:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device has reported a high CPU utilization: {100 - command_output_data}%\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyCoredump","title":"VerifyCoredump","text":"

    Bases: AntaTest

    This test verifies if there are core dump files in the /var/core directory.

    Expected Results
    • success: The test will pass if there are NO core dump(s) in /var/core.
    • failure: The test will fail if there are core dump(s) in /var/core.
    Note
    • This test will NOT check for minidump(s) generated by certain agents in /var/core/minidump.
    Source code in anta/tests/system.py
    class VerifyCoredump(AntaTest):\n\"\"\"\n    This test verifies if there are core dump files in the /var/core directory.\n\n    Expected Results:\n      * success: The test will pass if there are NO core dump(s) in /var/core.\n      * failure: The test will fail if there are core dump(s) in /var/core.\n\n    Note:\n      * This test will NOT check for minidump(s) generated by certain agents in /var/core/minidump.\n    \"\"\"\n\n    name = \"VerifyCoredump\"\n    description = \"This test verifies if there are core dump files in the /var/core directory.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show system coredump\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        core_files = command_output[\"coreFiles\"]\n        if \"minidump\" in core_files:\n            core_files.remove(\"minidump\")\n        if not core_files:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Core dump(s) have been found: {core_files}\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyFileSystemUtilization","title":"VerifyFileSystemUtilization","text":"

    Bases: AntaTest

    This test verifies that no partition is utilizing more than 75% of its disk space.

    Expected Results
    • success: The test will pass if every partition is using less than 75% of its disk space.
    • failure: The test will fail if any partition is using more than 75% of its disk space.
    Source code in anta/tests/system.py
    class VerifyFileSystemUtilization(AntaTest):\n\"\"\"\n    This test verifies that no partition is utilizing more than 75% of its disk space.\n\n    Expected Results:\n      * success: The test will pass if all partitions are using less than 75% of its disk space.\n      * failure: The test will fail if any partitions are using more than 75% of its disk space.\n    \"\"\"\n\n    name = \"VerifyFileSystemUtilization\"\n    description = \"This test verifies that no partition is utilizing more than 75% of its disk space.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"bash timeout 10 df -h\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n        self.result.is_success()\n        for line in command_output.split(\"\\n\")[1:]:\n            if \"loop\" not in line and len(line) > 0 and (percentage := int(line.split()[4].replace(\"%\", \"\"))) > 75:\n                self.result.is_failure(f\"Mount point {line} is higher than 75%: reported {percentage}%\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyMemoryUtilization","title":"VerifyMemoryUtilization","text":"

    Bases: AntaTest

    This test verifies whether the memory utilization is below 75%.

    Expected Results
    • success: The test will pass if the memory utilization is below 75%.
    • failure: The test will fail if the memory utilization is over 75%.
    Source code in anta/tests/system.py
    class VerifyMemoryUtilization(AntaTest):\n\"\"\"\n    This test verifies whether the memory utilization is below 75%.\n\n    Expected Results:\n      * success: The test will pass if the memory utilization is below 75%.\n      * failure: The test will fail if the memory utilization is over 75%.\n    \"\"\"\n\n    name = \"VerifyMemoryUtilization\"\n    description = \"This test verifies whether the memory utilization is below 75%.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show version\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        memory_usage = command_output[\"memFree\"] / command_output[\"memTotal\"]\n        if memory_usage > 0.25:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device has reported a high memory usage: {(1 - memory_usage)*100:.2f}%\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyNTP","title":"VerifyNTP","text":"

    Bases: AntaTest

    This test verifies that the Network Time Protocol (NTP) is synchronized.

    Expected Results
    • success: The test will pass if the NTP is synchronised.
    • failure: The test will fail if the NTP is NOT synchronised.
    Source code in anta/tests/system.py
    class VerifyNTP(AntaTest):\n\"\"\"\n    This test verifies that the Network Time Protocol (NTP) is synchronized.\n\n    Expected Results:\n      * success: The test will pass if the NTP is synchronised.\n      * failure: The test will fail if the NTP is NOT synchronised.\n    \"\"\"\n\n    name = \"VerifyNTP\"\n    description = \"This test verifies if NTP is synchronised.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show ntp status\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n        if command_output.split(\"\\n\")[0].split(\" \")[0] == \"synchronised\":\n            self.result.is_success()\n        else:\n            data = command_output.split(\"\\n\")[0]\n            self.result.is_failure(f\"The device is not synchronized with the configured NTP server(s): '{data}'\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyReloadCause","title":"VerifyReloadCause","text":"

    Bases: AntaTest

    This test verifies the last reload cause of the device.

    Expected results
    • success: The test will pass if there are NO reload causes or if the last reload was caused by the user or after an FPGA upgrade.
    • failure: The test will fail if the last reload was NOT caused by the user or after an FPGA upgrade.
    • error: The test will report an error if the reload cause is NOT available.
    Source code in anta/tests/system.py
    class VerifyReloadCause(AntaTest):\n\"\"\"\n    This test verifies the last reload cause of the device.\n\n    Expected results:\n      * success: The test will pass if there are NO reload causes or if the last reload was caused by the user or after an FPGA upgrade.\n      * failure: The test will fail if the last reload was NOT caused by the user or after an FPGA upgrade.\n      * error: The test will report an error if the reload cause is NOT available.\n    \"\"\"\n\n    name = \"VerifyReloadCause\"\n    description = \"This test verifies the last reload cause of the device.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show reload cause\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if \"resetCauses\" not in command_output.keys():\n            self.result.is_error(message=\"No reload causes available\")\n            return\n        if len(command_output[\"resetCauses\"]) == 0:\n            # No reload causes\n            self.result.is_success()\n            return\n        reset_causes = command_output[\"resetCauses\"]\n        command_output_data = reset_causes[0].get(\"description\")\n        if command_output_data in [\n            \"Reload requested by the user.\",\n            \"Reload requested after FPGA upgrade\",\n        ]:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Reload cause is: '{command_output_data}'\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyUptime","title":"VerifyUptime","text":"

    Bases: AntaTest

    This test verifies if the device uptime is higher than the provided minimum uptime value.

    Expected Results
    • success: The test will pass if the device uptime is higher than the provided value.
    • failure: The test will fail if the device uptime is lower than the provided value.
    Source code in anta/tests/system.py
    class VerifyUptime(AntaTest):\n\"\"\"\n    This test verifies if the device uptime is higher than the provided minimum uptime value.\n\n    Expected Results:\n      * success: The test will pass if the device uptime is higher than the provided value.\n      * failure: The test will fail if the device uptime is lower than the provided value.\n    \"\"\"\n\n    name = \"VerifyUptime\"\n    description = \"This test verifies if the device uptime is higher than the provided minimum uptime value.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show uptime\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        minimum: conint(ge=0)  # type: ignore\n\"\"\"Minimum uptime in seconds\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"upTime\"] > self.inputs.minimum:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device uptime is {command_output['upTime']} seconds\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyUptime.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/system.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    minimum: conint(ge=0)  # type: ignore\n\"\"\"Minimum uptime in seconds\"\"\"\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyUptime.Input.minimum","title":"minimum instance-attribute","text":"
    minimum: conint(ge=0)\n

    Minimum uptime in seconds

    "},{"location":"api/tests.vxlan/","title":"VXLAN","text":""},{"location":"api/tests.vxlan/#anta-catalog-for-vxlan-tests","title":"ANTA catalog for VXLAN tests","text":"

    Test functions related to VXLAN

    "},{"location":"api/tests.vxlan/#anta.tests.vxlan.VerifyVxlan1Interface","title":"VerifyVxlan1Interface","text":"

    Bases: AntaTest

    This test verifies if the Vxlan1 interface is configured and \u2018up/up\u2019.

    Warning

    The name of this test has been updated from \u2018VerifyVxlan\u2019 for better representation.

    Expected Results
    • success: The test will pass if the Vxlan1 interface is configured with line protocol status and interface status \u2018up\u2019.
    • failure: The test will fail if the Vxlan1 interface line protocol status or interface status are not \u2018up\u2019.
    • skipped: The test will be skipped if the Vxlan1 interface is not configured.
    Source code in anta/tests/vxlan.py
    class VerifyVxlan1Interface(AntaTest):\n\"\"\"\n    This test verifies if the Vxlan1 interface is configured and 'up/up'.\n\n    !!! warning\n        The name of this test has been updated from 'VerifyVxlan' for better representation.\n\n    Expected Results:\n      * success: The test will pass if the Vxlan1 interface is configured with line protocol status and interface status 'up'.\n      * failure: The test will fail if the Vxlan1 interface line protocol status or interface status are not 'up'.\n      * skipped: The test will be skipped if the Vxlan1 interface is not configured.\n    \"\"\"\n\n    name = \"VerifyVxlan1Interface\"\n    description = \"This test verifies if the Vxlan1 interface is configured and 'up/up'.\"\n    categories = [\"vxlan\"]\n    commands = [AntaCommand(command=\"show interfaces description\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if \"Vxlan1\" not in command_output[\"interfaceDescriptions\"]:\n            self.result.is_skipped(\"Vxlan1 interface is not configured\")\n        elif (\n            command_output[\"interfaceDescriptions\"][\"Vxlan1\"][\"lineProtocolStatus\"] == \"up\"\n            and command_output[\"interfaceDescriptions\"][\"Vxlan1\"][\"interfaceStatus\"] == \"up\"\n        ):\n            self.result.is_success()\n        else:\n            self.result.is_failure(\n                f\"Vxlan1 interface is {command_output['interfaceDescriptions']['Vxlan1']['lineProtocolStatus']}\"\n                f\"/{command_output['interfaceDescriptions']['Vxlan1']['interfaceStatus']}\"\n            )\n
    "},{"location":"api/tests.vxlan/#anta.tests.vxlan.VerifyVxlanConfigSanity","title":"VerifyVxlanConfigSanity","text":"

    Bases: AntaTest

    This test verifies that no issues are detected with the VXLAN configuration.

    Expected Results
    • success: The test will pass if no issues are detected with the VXLAN configuration.
    • failure: The test will fail if issues are detected with the VXLAN configuration.
    • skipped: The test will be skipped if VXLAN is not configured on the device.
    Source code in anta/tests/vxlan.py
    class VerifyVxlanConfigSanity(AntaTest):\n\"\"\"\n    This test verifies that no issues are detected with the VXLAN configuration.\n\n    Expected Results:\n      * success: The test will pass if no issues are detected with the VXLAN configuration.\n      * failure: The test will fail if issues are detected with the VXLAN configuration.\n      * skipped: The test will be skipped if VXLAN is not configured on the device.\n    \"\"\"\n\n    name = \"VerifyVxlanConfigSanity\"\n    description = \"This test verifies that no issues are detected with the VXLAN configuration.\"\n    categories = [\"vxlan\"]\n    commands = [AntaCommand(command=\"show vxlan config-sanity\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if \"categories\" not in command_output or len(command_output[\"categories\"]) == 0:\n            self.result.is_skipped(\"VXLAN is not configured\")\n            return\n        failed_categories = {\n            category: content\n            for category, content in command_output[\"categories\"].items()\n            if category in [\"localVtep\", \"mlag\", \"pd\"] and content[\"allCheckPass\"] is not True\n        }\n        if len(failed_categories) > 0:\n            self.result.is_failure(f\"VXLAN config sanity check is not passing: {failed_categories}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/types/","title":"Input Types","text":""},{"location":"api/types/#anta.custom_types","title":"anta.custom_types","text":"

    Module that provides predefined types for AntaTest.Input instances

    "},{"location":"api/types/#anta.custom_types.AAAAuthMethod","title":"AAAAuthMethod module-attribute","text":"
    AAAAuthMethod = Annotated[str, AfterValidator(aaa_group_prefix)]\n
    "},{"location":"api/types/#anta.custom_types.Afi","title":"Afi module-attribute","text":"
    Afi = Literal['ipv4', 'ipv6', 'vpn-ipv4', 'vpn-ipv6', 'evpn', 'rt-membership']\n
    "},{"location":"api/types/#anta.custom_types.Interface","title":"Interface module-attribute","text":"
    Interface = Annotated[str, Field(pattern='^(Ethernet|Fabric|Loopback|Management|Port-Channel|Tunnel|Vlan|Vxlan)[0-9]+(\\\\/[0-9]+)*$')]\n
    "},{"location":"api/types/#anta.custom_types.Safi","title":"Safi module-attribute","text":"
    Safi = Literal['unicast', 'multicast', 'labeled-unicast']\n
    "},{"location":"api/types/#anta.custom_types.TestStatus","title":"TestStatus module-attribute","text":"
    TestStatus = Literal['unset', 'success', 'failure', 'error', 'skipped']\n
    "},{"location":"api/types/#anta.custom_types.Vlan","title":"Vlan module-attribute","text":"
    Vlan = Annotated[int, Field(ge=0, le=4094)]\n
    "},{"location":"api/types/#anta.custom_types.aaa_group_prefix","title":"aaa_group_prefix","text":"
    aaa_group_prefix(v: str) -> str\n

    Prefix the AAA method with \u2018group\u2019 if it is known

    Source code in anta/custom_types.py
    def aaa_group_prefix(v: str) -> str:\n\"\"\"Prefix the AAA method with 'group' if it is known\"\"\"\n    built_in_methods = [\"local\", \"none\", \"logging\"]\n    return f\"group {v}\" if v not in built_in_methods and not v.startswith(\"group \") else v\n
    "},{"location":"cli/debug/","title":"Helpers","text":""},{"location":"cli/debug/#anta-debug-commands","title":"ANTA debug commands","text":"

    The ANTA CLI includes a set of debugging tools, making it easier to build and test ANTA content. This functionality is accessed via the debug subcommand and offers the following capabilities:

    • Executing a command on a device from your inventory and retrieving the result.
    • Running a templated command on a device from your inventory and retrieving the result.

    These tools are especially helpful when building tests, as they give visual access to the output received from the eAPI. They also facilitate the extraction of output content for use in unit tests, as described in our contribution guide.

    Warning

    The debug tools require a device from your inventory. Thus, you MUST use a valid ANTA Inventory.

    "},{"location":"cli/debug/#executing-an-eos-command","title":"Executing an EOS command","text":"

    You can use the run-cmd entrypoint to run a command, which includes the following options:

    "},{"location":"cli/debug/#command-overview","title":"Command overview","text":"
    $ anta debug run-cmd --help\nUsage: anta debug run-cmd [OPTIONS]\n\nRun arbitrary command to an ANTA device\n\nOptions:\n  -c, --command TEXT        Command to run  [required]\n--ofmt [json|text]        EOS eAPI format to use. can be text or json\n  -v, --version [1|latest]  EOS eAPI version\n  -r, --revision INTEGER    eAPI command revision\n  -d, --device TEXT         Device from inventory to use  [required]\n--help                    Show this message and exit.\n
    "},{"location":"cli/debug/#example","title":"Example","text":"

    This example illustrates how to run the show interfaces description command with a JSON format (default):

    anta debug run-cmd --command \"show interfaces description\" --device DC1-SPINE1\nRun command show interfaces description on DC1-SPINE1\n{\n'interfaceDescriptions': {\n'Ethernet1': {'lineProtocolStatus': 'up', 'description': 'P2P_LINK_TO_DC1-LEAF1A_Ethernet1', 'interfaceStatus': 'up'},\n        'Ethernet2': {'lineProtocolStatus': 'up', 'description': 'P2P_LINK_TO_DC1-LEAF1B_Ethernet1', 'interfaceStatus': 'up'},\n        'Ethernet3': {'lineProtocolStatus': 'up', 'description': 'P2P_LINK_TO_DC1-BL1_Ethernet1', 'interfaceStatus': 'up'},\n        'Ethernet4': {'lineProtocolStatus': 'up', 'description': 'P2P_LINK_TO_DC1-BL2_Ethernet1', 'interfaceStatus': 'up'},\n        'Loopback0': {'lineProtocolStatus': 'up', 'description': 'EVPN_Overlay_Peering', 'interfaceStatus': 'up'},\n        'Management0': {'lineProtocolStatus': 'up', 'description': 'oob_management', 'interfaceStatus': 'up'}\n}\n}\n
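
    The same entrypoint can also return raw text output by switching --ofmt to text. Below is a minimal sketch reusing the DC1-SPINE1 device from the example above; the command and device name are illustrative only:

    anta debug run-cmd --command "show ntp status" --ofmt text --device DC1-SPINE1\n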
    "},{"location":"cli/debug/#executing-an-eos-command-using-templates","title":"Executing an EOS command using templates","text":"

    The run-template entrypoint allows the user to provide an f-string templated command. It is followed by a list of arguments (key-value pairs) that build a dictionary used as template parameters.

    "},{"location":"cli/debug/#command-overview_1","title":"Command overview","text":"
    $ anta debug run-template --help\nUsage: anta debug run-template [OPTIONS] PARAMS...\n\n  Run arbitrary templated command to an ANTA device.\n\n  Takes a list of arguments (keys followed by a value) to build a dictionary\n  used as template parameters. Example:\n\n  anta debug run-template -d leaf1a -t 'show vlan {vlan_id}' vlan_id 1\n\nOptions:\n  -t, --template TEXT       Command template to run. E.g. 'show vlan\n                            {vlan_id}'  [required]\n--ofmt [json|text]        EOS eAPI format to use. can be text or json\n  -v, --version [1|latest]  EOS eAPI version\n  -r, --revision INTEGER    eAPI command revision\n  -d, --device TEXT         Device from inventory to use  [required]\n--help                    Show this message and exit.\n
    "},{"location":"cli/debug/#example_1","title":"Example","text":"

    This example uses the show vlan {vlan_id} command in a JSON format:

    anta debug run-template --template \"show vlan {vlan_id}\" vlan_id 10 --device DC1-LEAF1A\nRun templated command 'show vlan {vlan_id}' with {'vlan_id': '10'} on DC1-LEAF1A\n{\n'vlans': {\n'10': {\n'name': 'VRFPROD_VLAN10',\n            'dynamic': False,\n            'status': 'active',\n            'interfaces': {\n'Cpu': {'privatePromoted': False, 'blocked': None},\n                'Port-Channel11': {'privatePromoted': False, 'blocked': None},\n                'Vxlan1': {'privatePromoted': False, 'blocked': None}\n}\n}\n},\n    'sourceDetail': ''\n}\n

    Warning

    If multiple arguments of the same key are provided, only the last argument value will be kept in the template parameters.

    "},{"location":"cli/debug/#example-of-multiple-arguments","title":"Example of multiple arguments","text":"
    anta --log DEBUG debug run-template --template \"ping {dst} source {src}\" dst \"8.8.8.8\" src Loopback0 --device DC1-SPINE1 \u00a0 \u00a0\n> {'dst': '8.8.8.8', 'src': 'Loopback0'}\n\nanta --log DEBUG debug run-template --template \"ping {dst} source {src}\" dst \"8.8.8.8\" src Loopback0 dst \"1.1.1.1\" src Loopback1 --device DC1-SPINE1 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0\n> {'dst': '1.1.1.1', 'src': 'Loopback1'}\n# Notice how `src` and `dst` keep only the latest value\n
    "},{"location":"cli/exec/","title":"Execute commands","text":""},{"location":"cli/exec/#executing-commands-on-devices","title":"Executing Commands on Devices","text":"

    The ANTA CLI provides a set of entrypoints to facilitate remote command execution on EOS devices.

    "},{"location":"cli/exec/#exec-command-overview","title":"EXEC Command overview","text":"
    anta exec --help\nUsage: anta exec [OPTIONS] COMMAND [ARGS]...\n\n  Execute commands to inventory devices\n\nOptions:\n  --help  Show this message and exit.\n\nCommands:\n  clear-counters        Clear counter statistics on EOS devices\n  collect-tech-support  Collect scheduled tech-support from EOS devices\n  snapshot              Collect commands output from devices in inventory\n
    "},{"location":"cli/exec/#clear-interfaces-counters","title":"Clear interfaces counters","text":"

    This command clears interface counters on EOS devices specified in your inventory.

    "},{"location":"cli/exec/#command-overview","title":"Command overview","text":"
    anta exec clear-counters --help\nUsage: anta exec clear-counters [OPTIONS]\n\nClear counter statistics on EOS devices\n\nOptions:\n  -t, --tags TEXT  List of tags using comma as separator: tag1,tag2,tag3\n  --help           Show this message and exit.\n
    "},{"location":"cli/exec/#example","title":"Example","text":"
    anta exec clear-counters --tags SPINE\n[20:19:13] INFO     Connecting to devices...                                                                                                                         utils.py:43\n           INFO     Clearing counters on remote devices...                                                                                                           utils.py:46\n           INFO     Cleared counters on DC1-SPINE2 (cEOSLab)                                                                                                         utils.py:41\n           INFO     Cleared counters on DC2-SPINE1 (cEOSLab)                                                                                                         utils.py:41\n           INFO     Cleared counters on DC1-SPINE1 (cEOSLab)                                                                                                         utils.py:41\n           INFO     Cleared counters on DC2-SPINE2 (cEOSLab)\n
    "},{"location":"cli/exec/#collect-a-set-of-commands","title":"Collect a set of commands","text":"

    This command collects the output of all the commands specified in a commands-list file. Each command can be collected in either json or text format.

    "},{"location":"cli/exec/#command-overview_1","title":"Command overview","text":"
    anta exec snapshot --help\nUsage: anta exec snapshot [OPTIONS]\n\nCollect commands output from devices in inventory\n\nOptions:\n  -t, --tags TEXT           List of tags using comma as separator:\n                            tag1,tag2,tag3\n  -c, --commands-list FILE  File with list of commands to collect  [env var:\n                            ANTA_EXEC_SNAPSHOT_COMMANDS_LIST; required]\n-o, --output DIRECTORY    Directory to save commands output. Will have a\n                            suffix with the format _YEAR-MONTH-DAY_HOUR-\n                            MINUTES-SECONDS'  [env var:\n                            ANTA_EXEC_SNAPSHOT_OUTPUT; default: anta_snapshot]\n--help                    Show this message and exit.\n

    The commands-list file should follow this structure:

    ---\njson_format:\n- show version\ntext_format:\n- show bfd peers\n
    "},{"location":"cli/exec/#example_1","title":"Example","text":"
    anta exec snapshot --tags SPINE --commands-list ./commands.yaml --output ./\n[20:25:15] INFO     Connecting to devices...                                                                                                                         utils.py:78\n           INFO     Collecting commands from remote devices                                                                                                          utils.py:81\n           INFO     Collected command 'show version' from device DC2-SPINE1 (cEOSLab)                                                                                utils.py:76\n           INFO     Collected command 'show version' from device DC2-SPINE2 (cEOSLab)                                                                                utils.py:76\n           INFO     Collected command 'show version' from device DC1-SPINE1 (cEOSLab)                                                                                utils.py:76\n           INFO     Collected command 'show version' from device DC1-SPINE2 (cEOSLab)                                                                                utils.py:76\n[20:25:16] INFO     Collected command 'show bfd peers' from device DC2-SPINE2 (cEOSLab)                                                                              utils.py:76\n           INFO     Collected command 'show bfd peers' from device DC2-SPINE1 (cEOSLab)                                                                              utils.py:76\n           INFO     Collected command 'show bfd peers' from device DC1-SPINE1 (cEOSLab)                                                                              utils.py:76\n           INFO     Collected command 'show bfd peers' from device DC1-SPINE2 (cEOSLab)\n

    The results of the executed commands will be stored in the output directory specified during command execution:

    tree _2023-07-14_20_25_15\n_2023-07-14_20_25_15\n\u251c\u2500\u2500 DC1-SPINE1\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 json\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 show version.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 text\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 show bfd peers.log\n\u251c\u2500\u2500 DC1-SPINE2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 json\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 show version.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 text\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 show bfd peers.log\n\u251c\u2500\u2500 DC2-SPINE1\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 json\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 show version.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 text\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 show bfd peers.log\n\u2514\u2500\u2500 DC2-SPINE2\n    \u251c\u2500\u2500 json\n    \u2502\u00a0\u00a0 \u2514\u2500\u2500 show version.json\n    \u2514\u2500\u2500 text\n        \u2514\u2500\u2500 show bfd peers.log\n\n12 directories, 8 files\n
    "},{"location":"cli/exec/#get-scheduled-tech-support","title":"Get Scheduled tech-support","text":"

    EOS offers a feature that automatically creates a tech-support archive every hour by default. These archives are stored under /mnt/flash/schedule/tech-support.

    leaf1#show schedule summary\nMaximum concurrent jobs  1\nPrepend host name to logfile: Yes\nName                 At Time       Last        Interval       Timeout        Max        Max     Logfile Location                  Status\n                                   Time         (mins)        (mins)         Log        Logs\n                                                                            Files       Size\n----------------- ------------- ----------- -------------- ------------- ----------- ---------- --------------------------------- ------\ntech-support           now         08:37          60            30           100         -      flash:schedule/tech-support/      Success\n\n\nleaf1#bash ls /mnt/flash/schedule/tech-support\nleaf1_tech-support_2023-03-09.1337.log.gz  leaf1_tech-support_2023-03-10.0837.log.gz  leaf1_tech-support_2023-03-11.0337.log.gz\n

    For Network Ready For Use (NRFU) tests and to keep a comprehensive report of the system state before going live, ANTA provides a command-line interface that efficiently retrieves these files.

    "},{"location":"cli/exec/#command-overview_2","title":"Command overview","text":"
    anta exec collect-tech-support --help\nUsage: anta exec collect-tech-support [OPTIONS]\n\nCollect scheduled tech-support from EOS devices\n\nOptions:\n  -o, --output PATH              Path for tests catalog  [default: ./tech-\n                                 support]\n--latest INTEGER               Number of scheduled show-tech to retrieve\n  --configure        Ensure devices have 'aaa authorization exec default\n                     local' configured (required for SCP on EOS). THIS WILL\n                     CHANGE THE CONFIGURATION OF YOUR NETWORK.\n  -t, --tags TEXT                List of tags using comma as separator:\n                                 tag1,tag2,tag3\n  --help                         Show this message and exit.\n

    When executed, this command fetches tech-support files and downloads them locally into a device-specific subfolder within the designated folder. You can specify the output folder with the --output option.

    ANTA uses SCP to download files from devices and will not trust unknown SSH hosts by default. Add the SSH public keys of your devices to your known_hosts file or use the anta --insecure option to ignore SSH host key validation.

    The configuration aaa authorization exec default must be present on devices to be able to use SCP. ANTA can automatically configure aaa authorization exec default local using the anta exec collect-tech-support --configure option. If you require specific AAA configuration for aaa authorization exec default, like aaa authorization exec default none or aaa authorization exec default group tacacs+, you will need to configure it manually.

    The --latest option allows retrieval of a specific number of the most recent tech-support files.
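
    For example, the following sketch (adapt it to your environment) combines the options described above to let ANTA push the required AAA authorization configuration and retrieve only the two most recent archives:

    anta --insecure exec collect-tech-support --configure --latest 2\n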

    Warning

    By default all the tech-support files present on the devices are retrieved.

    "},{"location":"cli/exec/#example_2","title":"Example","text":"
    anta --insecure exec collect-tech-support\n[15:27:19] INFO     Connecting to devices...\nINFO     Copying '/mnt/flash/schedule/tech-support/spine1_tech-support_2023-06-09.1315.log.gz' from device spine1 to 'tech-support/spine1' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/leaf3_tech-support_2023-06-09.1315.log.gz' from device leaf3 to 'tech-support/leaf3' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/leaf1_tech-support_2023-06-09.1315.log.gz' from device leaf1 to 'tech-support/leaf1' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/leaf2_tech-support_2023-06-09.1315.log.gz' from device leaf2 to 'tech-support/leaf2' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/spine2_tech-support_2023-06-09.1315.log.gz' from device spine2 to 'tech-support/spine2' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/leaf4_tech-support_2023-06-09.1315.log.gz' from device leaf4 to 'tech-support/leaf4' locally\nINFO     Collected 1 scheduled tech-support from leaf2\nINFO     Collected 1 scheduled tech-support from spine2\nINFO     Collected 1 scheduled tech-support from leaf3\nINFO     Collected 1 scheduled tech-support from spine1\nINFO     Collected 1 scheduled tech-support from leaf1\nINFO     Collected 1 scheduled tech-support from leaf4\n

    The output folder structure is as follows:

    tree tech-support/\ntech-support/\n\u251c\u2500\u2500 leaf1\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 leaf1_tech-support_2023-06-09.1315.log.gz\n\u251c\u2500\u2500 leaf2\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 leaf2_tech-support_2023-06-09.1315.log.gz\n\u251c\u2500\u2500 leaf3\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 leaf3_tech-support_2023-06-09.1315.log.gz\n\u251c\u2500\u2500 leaf4\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 leaf4_tech-support_2023-06-09.1315.log.gz\n\u251c\u2500\u2500 spine1\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 spine1_tech-support_2023-06-09.1315.log.gz\n\u2514\u2500\u2500 spine2\n    \u2514\u2500\u2500 spine2_tech-support_2023-06-09.1315.log.gz\n\n6 directories, 6 files\n

    Each device has its own subdirectory containing the collected tech-support files.

    "},{"location":"cli/get-inventory-information/","title":"Get Inventory Information","text":""},{"location":"cli/get-inventory-information/#retrieving-inventory-information","title":"Retrieving Inventory Information","text":"

    The ANTA CLI offers multiple entrypoints to access data from your local inventory.

    "},{"location":"cli/get-inventory-information/#inventory-used-of-examples","title":"Inventory used of examples","text":"

    Let\u2019s consider the following inventory:

    ---\nanta_inventory:\nhosts:\n- host: 172.20.20.101\nname: DC1-SPINE1\ntags: [\"SPINE\", \"DC1\"]\n\n- host: 172.20.20.102\nname: DC1-SPINE2\ntags: [\"SPINE\", \"DC1\"]\n\n- host: 172.20.20.111\nname: DC1-LEAF1A\ntags: [\"LEAF\", \"DC1\"]\n\n- host: 172.20.20.112\nname: DC1-LEAF1B\ntags: [\"LEAF\", \"DC1\"]\n\n- host: 172.20.20.121\nname: DC1-BL1\ntags: [\"BL\", \"DC1\"]\n\n- host: 172.20.20.122\nname: DC1-BL2\ntags: [\"BL\", \"DC1\"]\n\n- host: 172.20.20.201\nname: DC2-SPINE1\ntags: [\"SPINE\", \"DC2\"]\n\n- host: 172.20.20.202\nname: DC2-SPINE2\ntags: [\"SPINE\", \"DC2\"]\n\n- host: 172.20.20.211\nname: DC2-LEAF1A\ntags: [\"LEAF\", \"DC2\"]\n\n- host: 172.20.20.212\nname: DC2-LEAF1B\ntags: [\"LEAF\", \"DC2\"]\n\n- host: 172.20.20.221\nname: DC2-BL1\ntags: [\"BL\", \"DC2\"]\n\n- host: 172.20.20.222\nname: DC2-BL2\ntags: [\"BL\", \"DC2\"]\n
    "},{"location":"cli/get-inventory-information/#obtaining-all-configured-tags","title":"Obtaining all configured tags","text":"

    As most of ANTA\u2019s commands accommodate tag filtering, this command is useful for enumerating all tags configured in the inventory. Running the anta get tags command returns this list.

    "},{"location":"cli/get-inventory-information/#command-overview","title":"Command overview","text":"
    anta get tags --help\nUsage: anta get tags [OPTIONS]\n\nGet list of configured tags in user inventory.\n\nOptions:\n  --help  Show this message and exit.\n
    "},{"location":"cli/get-inventory-information/#example","title":"Example","text":"

    To get the list of all configured tags in the inventory, run the following command:

    anta get tags\nTags found:\n[\n\"BL\",\n  \"DC1\",\n  \"DC2\",\n  \"LEAF\",\n  \"SPINE\",\n  \"all\"\n]\n\n* note that tag all has been added by anta\n

    Note

    Even if you haven\u2019t explicitly configured the all tag in the inventory, it is automatically added. This default tag allows you to execute commands on all devices in the inventory when no tag is specified.

    "},{"location":"cli/get-inventory-information/#list-devices-in-inventory","title":"List devices in inventory","text":"

    This command will list all devices available in the inventory. Using the --tags option, you can filter this list to only include devices with specific tags. The --connected option allows you to display only the devices where a connection has been established.

    "},{"location":"cli/get-inventory-information/#command-overview_1","title":"Command overview","text":"
    anta get inventory --help\nUsage: anta get inventory [OPTIONS]\n\nShow inventory loaded in ANTA.\n\nOptions:\n  -t, --tags TEXT                List of tags using comma as separator:\n                                 tag1,tag2,tag3\n  --connected / --not-connected  Display inventory after connection has been\n                                 created\n  --help                         Show this message and exit.\n

    Tip

    In its default mode, anta get inventory provides only information that doesn\u2019t rely on a device connection. If you are interested in obtaining connection-dependent details, like the hardware model, please use the --connected option.

    "},{"location":"cli/get-inventory-information/#example_1","title":"Example","text":"

    To retrieve a comprehensive list of all devices along with their details, execute the following command. It will provide all the data loaded into the ANTA inventory from your inventory file.

    anta get inventory --tags SPINE\nCurrent inventory content is:\n{\n'DC1-SPINE1': AsyncEOSDevice(\nname='DC1-SPINE1',\n        tags=['SPINE', 'DC1', 'all'],\n        hw_model=None,\n        is_online=False,\n        established=False,\n        host='172.20.20.101',\n        eapi_port=443,\n        username='arista',\n        password='arista',\n        enable=True,\n        enable_password='arista',\n        insecure=False\n    ),\n    'DC1-SPINE2': AsyncEOSDevice(\nname='DC1-SPINE2',\n        tags=['SPINE', 'DC1', 'all'],\n        hw_model=None,\n        is_online=False,\n        established=False,\n        host='172.20.20.102',\n        eapi_port=443,\n        username='arista',\n        password='arista',\n        enable=True,\n        enable_password='arista',\n        insecure=False\n    ),\n    'DC2-SPINE1': AsyncEOSDevice(\nname='DC2-SPINE1',\n        tags=['SPINE', 'DC2', 'all'],\n        hw_model=None,\n        is_online=False,\n        established=False,\n        host='172.20.20.201',\n        eapi_port=443,\n        username='arista',\n        password='arista',\n        enable=True,\n        enable_password='arista',\n        insecure=False\n    ),\n    'DC2-SPINE2': AsyncEOSDevice(\nname='DC2-SPINE2',\n        tags=['SPINE', 'DC2', 'all'],\n        hw_model=None,\n        is_online=False,\n        established=False,\n        host='172.20.20.202',\n        eapi_port=443,\n        username='arista',\n        password='arista',\n        enable=True,\n        enable_password='arista',\n        insecure=False\n    )\n}\n
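
    To also populate connection-dependent fields such as hw_model in this output, the same command can be run with the --connected option (output omitted here):

    anta get inventory --tags SPINE --connected\n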
    "},{"location":"cli/inv-from-ansible/","title":"Inventory from Ansible","text":""},{"location":"cli/inv-from-ansible/#create-an-inventory-from-ansible-inventory","title":"Create an Inventory from Ansible inventory","text":"

    In large setups, it might be beneficial to construct your inventory based on your Ansible inventory. The from-ansible entrypoint of the get command enables the user to create an ANTA inventory from Ansible.

    "},{"location":"cli/inv-from-ansible/#command-overview","title":"Command overview","text":"
    anta get from-ansible --help\nUsage: anta get from-ansible [OPTIONS]\n\nBuild ANTA inventory from an ansible inventory YAML file\n\nOptions:\n  -g, --ansible-group TEXT        Ansible group to filter\n  -i, --ansible-inventory FILENAME\n                                  Path to your ansible inventory file to read\n-o, --output FILENAME           Path to save inventory file\n  -d, --inventory-directory PATH  Directory to save inventory file\n  --help                          Show this message and exit.\n

    The output is an ANTA inventory built from the hosts of your Ansible inventory:

    anta_inventory:\nhosts:\n- host: 10.73.252.41\nname: srv-pod01\n- host: 10.73.252.42\nname: srv-pod02\n- host: 10.73.252.43\nname: srv-pod03\n

    Warning

    The current implementation only considers devices directly attached to a specific Ansible group and does not support inheritance when using the --ansible-group option.

    The host value comes from the ansible_host key in your inventory, while name is the name you defined for your host. Below is the Ansible inventory example used to generate the previous ANTA inventory:

    ---\ntooling:\nchildren:\nendpoints:\nhosts:\nsrv-pod01:\nansible_httpapi_port: 9023\nansible_port: 9023\nansible_host: 10.73.252.41\ntype: endpoint\nsrv-pod02:\nansible_httpapi_port: 9024\nansible_port: 9024\nansible_host: 10.73.252.42\ntype: endpoint\nsrv-pod03:\nansible_httpapi_port: 9025\nansible_port: 9025\nansible_host: 10.73.252.43\ntype: endpoint\n
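
    For reference, a from-ansible invocation using this Ansible inventory might look like the following sketch; the file names and the endpoints group are taken from the example above, so adjust them to your setup:

    anta get from-ansible --ansible-inventory inventory.yml --ansible-group endpoints --output anta-inventory.yml\n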
    "},{"location":"cli/inv-from-cvp/","title":"Inventory from CVP","text":""},{"location":"cli/inv-from-cvp/#create-an-inventory-from-cloudvision","title":"Create an Inventory from CloudVision","text":"

    In large setups, it might be beneficial to construct your inventory based on CloudVision. The from-cvp entrypoint of the get command enables the user to create an ANTA inventory from CloudVision.

    "},{"location":"cli/inv-from-cvp/#command-overview","title":"Command overview","text":"
    anta get from-cvp --help\nUsage: anta get from-cvp [OPTIONS]\n\nBuild ANTA inventory from Cloudvision\n\nOptions:\n  -ip, --cvp-ip TEXT              CVP IP Address  [required]\n-u, --cvp-username TEXT         CVP Username  [required]\n-p, --cvp-password TEXT         CVP Password / token  [required]\n-c, --cvp-container TEXT        Container where devices are configured\n  -d, --inventory-directory PATH  Path to save inventory file\n  --help                          Show this message and exit.\n

    The output is an inventory where the name of the container is added as a tag for each host:

    anta_inventory:\nhosts:\n- host: 192.168.0.13\nname: leaf2\ntags:\n- pod1\n- host: 192.168.0.15\nname: leaf4\ntags:\n- pod2\n

    Warning

    The current implementation only considers devices directly attached to a specific container when using the --cvp-container option.

    "},{"location":"cli/inv-from-cvp/#creating-an-inventory-from-multiple-containers","title":"Creating an inventory from multiple containers","text":"

    If you need to create an inventory from multiple containers, you can run the command in a bash loop and then manually concatenate the generated files into a single inventory file:

    $ for container in pod01 pod02 spines; do anta get from-cvp -ip <cvp-ip> -u cvpadmin -p cvpadmin -c $container -d test-inventory; done\n\n[12:25:35] INFO     Getting auth token from cvp.as73.inetsix.net for user tom\n[12:25:36] INFO     Creating inventory folder /home/tom/Projects/arista/network-test-automation/test-inventory\n           WARNING  Using the new api_token parameter. This will override usage of the cvaas_token parameter if both are provided. This is because api_token and cvaas_token parameters\n                    are for the same use case and api_token is more generic\n           INFO     Connected to CVP cvp.as73.inetsix.net\n\n\n[12:25:37] INFO     Getting auth token from cvp.as73.inetsix.net for user tom\n[12:25:38] WARNING  Using the new api_token parameter. This will override usage of the cvaas_token parameter if both are provided. This is because api_token and cvaas_token parameters\n                    are for the same use case and api_token is more generic\n           INFO     Connected to CVP cvp.as73.inetsix.net\n\n\n[12:25:38] INFO     Getting auth token from cvp.as73.inetsix.net for user tom\n[12:25:39] WARNING  Using the new api_token parameter. This will override usage of the cvaas_token parameter if both are provided. This is because api_token and cvaas_token parameters\n                    are for the same use case and api_token is more generic\n           INFO     Connected to CVP cvp.as73.inetsix.net\n\n           INFO     Inventory file has been created in /home/tom/Projects/arista/network-test-automation/test-inventory/inventory-spines.yml\n
    "},{"location":"cli/nrfu/","title":"NRFU","text":""},{"location":"cli/nrfu/#execute-network-readiness-for-use-nrfu-testing","title":"Execute Network Readiness For Use (NRFU) Testing","text":"

    ANTA provides a set of commands for performing NRFU tests on devices. These commands are under the anta nrfu namespace and offer multiple output format options:

    • Text view
    • Table view
    • JSON view
    • Custom template view
    "},{"location":"cli/nrfu/#nrfu-command-overview","title":"NRFU Command overview","text":"
    anta nrfu --help\nUsage: anta nrfu [OPTIONS] COMMAND [ARGS]...\n\n  Run NRFU against inventory devices\n\nOptions:\n  -c, --catalog FILE  Path to the tests catalog YAML file  [env var:\n                      ANTA_NRFU_CATALOG; required]\n--help              Show this message and exit.\n\nCommands:\n  json        ANTA command to check network state with JSON result\n  table       ANTA command to check network states with table result\n  text        ANTA command to check network states with text result\n  tpl-report  ANTA command to check network state with templated report\n

    All commands under the anta nrfu namespace require a catalog yaml file specified with the --catalog option.
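
    For instance, assuming the global connection parameters are already provided (via CLI options or environment variables) and a catalog file named nrfu-catalog.yml (a placeholder name), a table report can be generated with:

    anta nrfu --catalog nrfu-catalog.yml table\n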

    "},{"location":"cli/nrfu/#performing-nrfu-with-text-rendering","title":"Performing NRFU with text rendering","text":"

    The text subcommand provides a straightforward text report for each test executed on all devices in your inventory.

    "},{"location":"cli/nrfu/#command-overview","title":"Command overview","text":"
    anta nrfu text --help\nUsage: anta nrfu text [OPTIONS]\n\nANTA command to check network states with text result\n\nOptions:\n  -t, --tags TEXT    List of tags using comma as separator: tag1,tag2,tag3\n  -s, --search TEXT  Regular expression to search in both name and test\n--skip-error       Hide tests in errors due to connectivity issue\n  --help             Show this message and exit.\n

    The --tags option allows you to target specific devices in your inventory, while the --search option permits filtering based on a regular expression pattern in both the hostname and the test name.

    The --skip-error option can be used to exclude tests that failed due to connectivity issues or unsupported commands.

    "},{"location":"cli/nrfu/#example","title":"Example","text":"

    anta nrfu text --tags LEAF --search DC1-LEAF1A\n
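
    Likewise, connectivity-related errors can be hidden from the text report with the --skip-error option, for example (a sketch reusing the LEAF tag):

    anta nrfu text --tags LEAF --skip-error\n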

    "},{"location":"cli/nrfu/#performing-nrfu-with-table-rendering","title":"Performing NRFU with table rendering","text":"

    The table command under the anta nrfu namespace offers a clear and organized table view of the test results, suitable for filtering. It also has its own set of options for better control over the output.

    "},{"location":"cli/nrfu/#command-overview_1","title":"Command overview","text":"
    anta nrfu table --help\nUsage: anta nrfu table [OPTIONS]\n\nANTA command to check network states with table result\n\nOptions:\n  --tags TEXT               List of tags using comma as separator:\n                            tag1,tag2,tag3\n  -d, --device TEXT         Show a summary for this device\n  -t, --test TEXT           Show a summary for this test\n--group-by [device|test]  Group result by test or host. default none\n  --help                    Show this message and exit.\n

    The --tags option can be used to target specific devices in your inventory.

    The --device and --test options show a summarized view of the test results for a specific host or test case, respectively.

    The --group-by option shows a summarized view of the test results per host or per test.

    "},{"location":"cli/nrfu/#examples","title":"Examples","text":"

    anta nrfu table --tags LEAF\n

    For larger setups, you can also group the results by host or test to get a summarized view:

    anta nrfu table --group-by device\n

    anta nrfu table --group-by test\n

    To get more specific information, it is possible to filter on a single device or a single test:

    anta nrfu table --device spine1\n

    anta nrfu table --test VerifyZeroTouch\n

    "},{"location":"cli/nrfu/#performing-nrfu-with-json-rendering","title":"Performing NRFU with JSON rendering","text":"

    The JSON rendering command in NRFU testing is useful for generating JSON output that can subsequently be passed to another tool for reporting purposes.

    "},{"location":"cli/nrfu/#command-overview_2","title":"Command overview","text":"
    anta nrfu json --help\nUsage: anta nrfu json [OPTIONS]\n\nANTA command to check network state with JSON result\n\nOptions:\n  -t, --tags TEXT    List of tags using comma as separator: tag1,tag2,tag3\n  -o, --output FILE  Path to save report as a file  [env var:\n                     ANTA_NRFU_JSON_OUTPUT]\n--help             Show this message and exit.\n

    The --tags option can be used to target specific devices in your inventory.

    The --output option allows you to save the JSON report as a file.

    "},{"location":"cli/nrfu/#example_1","title":"Example","text":"

    anta nrfu json --tags LEAF\n
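
    To save the report to a file instead of printing it to the terminal, add the --output option, for example (the file name is a placeholder):

    anta nrfu json --tags LEAF --output ./report.json\n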

    "},{"location":"cli/nrfu/#performing-nrfu-with-custom-reports","title":"Performing NRFU with custom reports","text":"

    ANTA offers a CLI option for creating custom reports. This leverages the Jinja2 template system, allowing you to tailor reports to your specific needs.

    "},{"location":"cli/nrfu/#command-overview_3","title":"Command overview","text":"

    anta nrfu tpl-report --help\nUsage: anta nrfu tpl-report [OPTIONS]\n\nANTA command to check network state with templated report\n\nOptions:\n  -tpl, --template FILE  Path to the template to use for the report  [env var:\n                         ANTA_NRFU_TPL_REPORT_TEMPLATE; required]\n-o, --output FILE      Path to save report as a file  [env var:\n                         ANTA_NRFU_TPL_REPORT_OUTPUT]\n-t, --tags TEXT        List of tags using comma as separator: tag1,tag2,tag3\n  --help                 Show this message and exit.\n
    The --template option is used to specify the Jinja2 template file for generating the custom report.

    The --output option allows you to choose the path where the final report will be saved.

    The --tags option can be used to target specific devices in your inventory.

    "},{"location":"cli/nrfu/#example_2","title":"Example","text":"

    anta nrfu tpl-report --tags LEAF --template ./custom_template.j2\n

    The template ./custom_template.j2 is a simple Jinja2 template:

    {% for d in data %}\n* {{ d.test }} is [green]{{ d.result | upper}}[/green] for {{ d.name }}\n{% endfor %}\n

    The Jinja2 template has access to all TestResult elements and their values, as described in this documentation.

    You can also save the report result to a file using the --output option:

    anta nrfu tpl-report --tags LEAF --template ./custom_template.j2 --output nrfu-tpl-report.txt\n

    The resulting output might look like this:

    cat nrfu-tpl-report.txt\n* VerifyMlagStatus is [green]SUCCESS[/green] for DC1-LEAF1A\n* VerifyMlagInterfaces is [green]SUCCESS[/green] for DC1-LEAF1A\n* VerifyMlagConfigSanity is [green]SUCCESS[/green] for DC1-LEAF1A\n* VerifyMlagReloadDelay is [green]SUCCESS[/green] for DC1-LEAF1A\n
    "},{"location":"cli/overview/","title":"Overview","text":""},{"location":"cli/overview/#overview-of-antas-command-line-interface-cli","title":"Overview of ANTA\u2019s Command-Line Interface (CLI)","text":"

    ANTA provides a powerful Command-Line Interface (CLI) to perform a wide range of operations. This document provides a comprehensive overview of ANTA CLI usage and its commands.

    ANTA can also be used as a Python library, allowing you to build your own tools based on it. Visit this page for more details.

    To start using the ANTA CLI, open your terminal and type anta.

    "},{"location":"cli/overview/#invoking-anta-cli","title":"Invoking ANTA CLI","text":"
    $ anta --help\nUsage: anta [OPTIONS] COMMAND [ARGS]...\n\n  Arista Network Test Automation (ANTA) CLI\n\nOptions:\n  --version                       Show the version and exit.\n  --username TEXT                 Username to connect to EOS  [env var:\n                                  ANTA_USERNAME; required]\n--password TEXT                 Password to connect to EOS that must be\n                                  provided. It can be prompted using '--\n                                  prompt' option.  [env var: ANTA_PASSWORD]\n--enable-password TEXT          Password to access EOS Privileged EXEC mode.\n                                  It can be prompted using '--prompt' option.\n                                  Requires '--enable' option.  [env var:\n                                  ANTA_ENABLE_PASSWORD]\n--enable                        Some commands may require EOS Privileged\n                                  EXEC mode. This option tries to access this\n                                  mode before sending a command to the device.\n                                  [env var: ANTA_ENABLE]\n-P, --prompt                    Prompt for passwords if they are not\n                                  provided.\n  --timeout INTEGER               Global connection timeout  [env var:\n                                  ANTA_TIMEOUT; default: 30]\n--insecure                      Disable SSH Host Key validation  [env var:\n                                  ANTA_INSECURE]\n-i, --inventory FILE            Path to the inventory YAML file  [env var:\n                                  ANTA_INVENTORY; required]\n--log-file FILE                 Send the logs to a file. If logging level is\n                                  DEBUG, only INFO or higher will be sent to\n                                  stdout.  [env var: ANTA_LOG_FILE]\n--log-level, --log [CRITICAL|ERROR|WARNING|INFO|DEBUG]\nANTA logging level  [env var:\n                                  ANTA_LOG_LEVEL; default: INFO]\n--ignore-status                 Always exit with success  [env var:\n                                  ANTA_IGNORE_STATUS]\n--ignore-error                  Only report failures and not errors  [env\n                                  var: ANTA_IGNORE_ERROR]\n--help                          Show this message and exit.\n\nCommands:\n  debug  Debug commands for building ANTA\n  exec   Execute commands to inventory devices\n  get    Get data from/to ANTA\n  nrfu   Run NRFU against inventory devices\n
    "},{"location":"cli/overview/#anta-global-parameters","title":"ANTA Global Parameters","text":"

    Certain parameters are globally required and can be either passed to the ANTA CLI or set as an environment variable (ENV VAR).

    To pass the parameters via the CLI:

    anta --username tom --password arista123 --inventory inventory.yml <anta cli>\n

    To set them as ENV VAR:

    export ANTA_USERNAME=tom\nexport ANTA_PASSWORD=arista123\nexport ANTA_INVENTORY=inventory.yml\n

    Then, run the CLI:

    anta <anta cli>\n
    "},{"location":"cli/overview/#anta-exit-codes","title":"ANTA Exit Codes","text":"

    ANTA utilizes different exit codes to indicate the status of the test runs.

    For all subcommands except nrfu, ANTA returns exit code 0, indicating a successful operation.

    For the nrfu command, ANTA uses the following exit codes:

    • Exit code 0 - All tests passed successfully.
    • Exit code 1 - Tests were run, but at least one test returned a failure.
    • Exit code 2 - Tests were run, but at least one test returned an error.
    • Exit code 3 - An internal error occurred while executing tests.

    To ignore the test status, use anta --ignore-status nrfu, and the exit code will always be 0.

    To ignore errors, use anta --ignore-error nrfu, and the exit code will be 0 if all tests succeeded or 1 if any test failed.
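
    As a sketch of how these exit codes might be consumed in a shell or CI job, assuming the connection parameters are exported as environment variables (as shown above) and that the catalog file name is a placeholder:

    anta nrfu --catalog nrfu-catalog.yml table\n# Print the exit code of the previous command\necho $?\n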

    "},{"location":"cli/overview/#shell-completion","title":"Shell Completion","text":"

    You can enable shell completion for the ANTA CLI:


    If you use ZSH shell, add the following line in your ~/.zshrc:

    eval \"$(_ANTA_COMPLETE=zsh_source anta)\" > /dev/null\n

    With bash, add the following line in your ~/.bashrc:

    eval \"$(_ANTA_COMPLETE=bash_source anta)\" > /dev/null\n
    "},{"location":"imgs/animated-svg/","title":"Animated svg","text":"

    Repository: https://github.com/marionebl/svg-term-cli
    Command: cat anta-nrfu.cast | svg-term --height 10 --window --out anta.svg

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Home","text":""},{"location":"#arista-network-test-automation-anta-framework","title":"Arista Network Test Automation (ANTA) Framework","text":"

    ANTA is a Python framework that automates tests for Arista devices.

    • ANTA provides a set of tests to validate the state of your network
    • ANTA can be used to:
      • Automate NRFU (Network Ready For Use) tests on a preproduction network
      • Automate tests on a live network (periodically or on demand)
    • ANTA can be used:
      • With the ANTA CLI
      • As a Python library in your own application

    # Install ANTA CLI\n$ pip install anta\n\n# Run ANTA CLI\n$ anta --help\nUsage: anta [OPTIONS] COMMAND [ARGS]...\n\n  Arista Network Test Automation (ANTA) CLI\n\nOptions:\n  --version                       Show the version and exit.\n  --username TEXT                 Username to connect to EOS  [env var:\n                                  ANTA_USERNAME; required]\n--password TEXT                 Password to connect to EOS that must be\n                                  provided. It can be prompted using '--\n                                  prompt' option.  [env var: ANTA_PASSWORD]\n--enable-password TEXT          Password to access EOS Privileged EXEC mode.\n                                  It can be prompted using '--prompt' option.\n                                  Requires '--enable' option.  [env var:\n                                  ANTA_ENABLE_PASSWORD]\n--enable                        Some commands may require EOS Privileged\n                                  EXEC mode. This option tries to access this\n                                  mode before sending a command to the device.\n                                  [env var: ANTA_ENABLE]\n-P, --prompt                    Prompt for passwords if they are not\n                                  provided.\n  --timeout INTEGER               Global connection timeout  [env var:\n                                  ANTA_TIMEOUT; default: 30]\n--insecure                      Disable SSH Host Key validation  [env var:\n                                  ANTA_INSECURE]\n-i, --inventory FILE            Path to the inventory YAML file  [env var:\n                                  ANTA_INVENTORY; required]\n--log-file FILE                 Send the logs to a file. If logging level is\n                                  DEBUG, only INFO or higher will be sent to\n                                  stdout.  [env var: ANTA_LOG_FILE]\n--log-level, --log [CRITICAL|ERROR|WARNING|INFO|DEBUG]\nANTA logging level  [env var:\n                                  ANTA_LOG_LEVEL; default: INFO]\n--ignore-status                 Always exit with success  [env var:\n                                  ANTA_IGNORE_STATUS]\n--ignore-error                  Only report failures and not errors  [env\n                                  var: ANTA_IGNORE_ERROR]\n--help                          Show this message and exit.\n\nCommands:\n  debug  Debug commands for building ANTA\n  exec   Execute commands to inventory devices\n  get    Get data from/to ANTA\n  nrfu   Run NRFU against inventory devices\n

    The username, password, enable-password, enable, timeout and insecure values are the same for all devices

    "},{"location":"#documentation","title":"Documentation","text":"

    The documentation is published on the ANTA package website. A demo repository is also available to facilitate your journey with ANTA.

    "},{"location":"#contribution-guide","title":"Contribution guide","text":"

    Contributions are welcome. Please refer to the contribution guide

    "},{"location":"#credits","title":"Credits","text":"

    Thank you to Ang\u00e9lique Phillipps, Colin MacGiollaE\u00e1in, Khelil Sator, Matthieu Tache, Onur Gashi, Paul Lavelle, Guillaume Mulocher and Thomas Grimonet for their contributions and guidance.

    "},{"location":"contribution/","title":"Contributions","text":""},{"location":"contribution/#how-to-contribute-to-anta","title":"How to contribute to ANTA","text":"

    The contribution model is fork-based. Do not push to arista-netdevops-community/anta directly. Always create a branch in your forked repository and open a PR.

    To help development, open your PR as soon as possible, even in draft mode. It helps others know what you are working on and avoids duplicate PRs. A typical fork workflow is sketched below.
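
    A minimal sketch of that fork workflow, assuming you have already forked the repository on GitHub (the fork URL and branch name are placeholders):

    # Clone your fork and create a feature branch\ngit clone https://github.com/<your-username>/anta.git\ncd anta\ngit checkout -b my-feature\n# After committing your changes, push the branch to your fork and open a PR against arista-netdevops-community/anta\ngit push --set-upstream origin my-feature\n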

    "},{"location":"contribution/#create-a-development-environement","title":"Create a development environement","text":"

    Run the following commands to create an ANTA development environment:

    # Clone repository\n$ git clone https://github.com/arista-netdevops-community/anta.git\n$ cd anta\n\n# Install ANTA in editable mode and its development tools\n$ pip install -e .[dev]\n\n# Verify installation\n$ pip list -e\nPackage Version Editable project location\n------- ------- -------------------------\nanta    0.7.2   /mnt/lab/projects/anta\n

    Then, tox is configured with a few environments to run CI locally:

    $ tox list -d\ndefault environments:\nclean  -> Erase previous coverage reports\nlint   -> Check the code style\ntype   -> Check typing\npy38   -> Run pytest with py38\npy39   -> Run pytest with py39\npy310  -> Run pytest with py310\npy311  -> Run pytest with py311\nreport -> Generate coverage report\n
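
    To reproduce the CI locally, you can run all the default environments at once or target a single one with -e; a sketch (the py310 environment assumes a Python 3.10 interpreter is available on your machine):

    # Run all default tox environments\ntox\n\n# Or run a single environment, e.g. the Python 3.10 tests\ntox -e py310\n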
    "},{"location":"contribution/#code-linting","title":"Code linting","text":"
    tox -e lint\n[...]\nlint: commands[0]> black --check --diff --color .\nAll done! \u2728 \ud83c\udf70 \u2728\n104 files would be left unchanged.\nlint: commands[1]> isort --check --diff --color .\nSkipped 7 files\nlint: commands[2]> flake8 --max-line-length=165 --config=/dev/null anta\nlint: commands[3]> flake8 --max-line-length=165 --config=/dev/null tests\nlint: commands[4]> pylint anta\n\n--------------------------------------------------------------------\nYour code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)\n\n.pkg: _exit> python /Users/guillaumemulocher/.pyenv/versions/3.8.13/envs/anta/lib/python3.8/site-packages/pyproject_api/_backend.py True setuptools.build_meta\n  lint: OK (19.26=setup[5.83]+cmd[1.50,0.76,1.19,1.20,8.77] seconds)\ncongratulations :) (19.56 seconds)\n
    "},{"location":"contribution/#code-typing","title":"Code Typing","text":"
    tox -e type\n\n[...]\ntype: commands[0]> mypy --config-file=pyproject.toml anta\nSuccess: no issues found in 52 source files\n.pkg: _exit> python /Users/guillaumemulocher/.pyenv/versions/3.8.13/envs/anta/lib/python3.8/site-packages/pyproject_api/_backend.py True setuptools.build_meta\n  type: OK (46.66=setup[24.20]+cmd[22.46] seconds)\ncongratulations :) (47.01 seconds)\n

    NOTE: Typing is configured quite strictly; do not hesitate to reach out if you have any questions, struggles, or nightmares.

    "},{"location":"contribution/#unit-tests","title":"Unit tests","text":"

    To keep the code quality high, we require a Pytest unit test for every test implemented in ANTA.

    Each submodule should have its own pytest section under tests/units/anta_tests/<submodule-name>.py.

    "},{"location":"contribution/#how-to-write-a-unit-test-for-an-antatest-subclass","title":"How to write a unit test for an AntaTest subclass","text":"

    The Python modules in the tests/units/anta_tests folder define test parameters for AntaTest subclass unit tests. A generic test function for all unit tests is written in the tests.lib.anta module. The pytest_generate_tests function defined in conftest.py is called during test collection and parametrizes the generic test function based on the DATA data structure defined in the tests.units.anta_tests modules. See https://docs.pytest.org/en/7.3.x/how-to/parametrize.html#basic-pytest-generate-tests-example

    The DATA structure is a list of dictionaries used to parametrize the test. The list elements have the following keys: - name (str): Test name as displayed by Pytest. - test (AntaTest): An AntaTest subclass imported in the test module - e.g. VerifyUptime. - eos_data (list[dict]): List of data mocking the data returned by EOS, to be passed to the test. - inputs (dict): Dictionary to instantiate the test inputs as defined in the class from test. - expected (dict): Expected test result structure, a dictionary containing a key result containing one of the allowed statuses (Literal['success', 'failure', 'unset', 'skipped', 'error']) and optionally a key messages which is a list[str]; each message is expected to be a substring of one of the actual messages in the TestResult object.

    In order for your unit tests to be correctly collected, you need to import the generic test function even if it is not used in the Python module.

    Test example for anta.tests.system.VerifyUptime AntaTest.

    # Import the generic test function\nfrom tests.lib.anta import test  # noqa: F401\n\n# Import your AntaTest\nfrom anta.tests.system import VerifyUptime\n\n# Define test parameters\nDATA: list[dict[str, Any]] = [\n   {\n        # Arbitrary test name\n        \"name\": \"success\",\n        # Must be an AntaTest definition\n        \"test\": VerifyUptime,\n        # Data returned by EOS on which the AntaTest is tested\n        \"eos_data\": [{\"upTime\": 1186689.15, \"loadAvg\": [0.13, 0.12, 0.09], \"users\": 1, \"currentTime\": 1683186659.139859}],\n        # Dictionary to instantiate VerifyUptime.Input\n        \"inputs\": {\"minimum\": 666},\n        # Expected test result\n        \"expected\": {\"result\": \"success\"},\n    },\n    {\n        \"name\": \"failure\",\n        \"test\": VerifyUptime,\n        \"eos_data\": [{\"upTime\": 665.15, \"loadAvg\": [0.13, 0.12, 0.09], \"users\": 1, \"currentTime\": 1683186659.139859}],\n        \"inputs\": {\"minimum\": 666},\n        # If the test returns messages, it needs to be expected otherwise test will fail.\n        # NB: expected messages only needs to be included in messages returned by the test. Exact match is not required.\n        \"expected\": {\"result\": \"failure\", \"messages\": [\"Device uptime is 665.15 seconds\"]},\n    },\n]\n
    "},{"location":"contribution/#git-pre-commit-hook","title":"Git Pre-commit hook","text":"
    pip install pre-commit\npre-commit install\n

    When running a commit or a pre-commit check:

    \u276f echo \"import foobaz\" > test.py && git add test.py\n\u276f pre-commit\npylint...................................................................Failed\n- hook id: pylint\n- exit code: 22\n\n************* Module test\ntest.py:1:0: C0114: Missing module docstring (missing-module-docstring)\ntest.py:1:0: E0401: Unable to import 'foobaz' (import-error)\ntest.py:1:0: W0611: Unused import foobaz (unused-import)\n

    NOTE: pre-commit and tox may occasionally disagree on something; in that case, please open an issue on GitHub so we can take a look. It is most probably a misconfiguration on our side.

    "},{"location":"contribution/#configure-mypypath","title":"Configure MYPYPATH","text":"

    In some cases, mypy can complain about MYPYPATH not being configured in your shell. This is especially the case when you update both an ANTA test and its unit test. You can configure this environment variable with:

    # Option 1: use local folder\nexport MYPYPATH=.\n\n# Option 2: use absolute path\nexport MYPYPATH=/path/to/your/local/anta/repository\n
    "},{"location":"contribution/#documentation","title":"Documentation","text":"

    mkdocs is used to generate the documentation. A PR should always update the documentation to avoid documentation debt.

    "},{"location":"contribution/#install-documentation-requirements","title":"Install documentation requirements","text":"

    Run pip to install the documentation requirements from the root of the repo:

    pip install -e .[doc]\n
    "},{"location":"contribution/#testing-documentation","title":"Testing documentation","text":"

    You can then check the documentation locally using the following command from the root of the repo:

    mkdocs serve\n

    By default, mkdocs listens on http://127.0.0.1:8000/. If you need to expose the documentation on another IP or port (for instance, all IPs on port 8080), use the following command:

    mkdocs serve --dev-addr=0.0.0.0:8080\n
    "},{"location":"contribution/#build-class-diagram","title":"Build class diagram","text":"

    To build a class diagram for use in the API documentation, you can use pyreverse (part of pylint) with graphviz installed for JPEG generation.

    pyreverse anta --colorized -a1 -s1 -o jpeg -m true -k --output-directory docs/imgs/uml/ -c <FQDN anta class>\n

    The image will be generated under docs/imgs/uml/ and can be inserted in your documentation.

    "},{"location":"contribution/#checking-links","title":"Checking links","text":"

    Writing documentation is crucial, but managing links can be cumbersome. To be sure there are no dead links, you can use muffet with the following command:

    muffet -c 2 --color=always http://127.0.0.1:8000 -e fonts.gstatic.com\n
    "},{"location":"contribution/#continuous-integration","title":"Continuous Integration","text":"

    GitHub Actions is used to test git pushes and pull requests. The workflows are defined in this directory. You can view the results here.

    "},{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#frequently-asked-questions-faq","title":"Frequently Asked Questions (FAQ)","text":""},{"location":"faq/#why-am-i-seeing-an-importerror-related-to-urllib3-when-running-anta","title":"Why am I seeing an ImportError related to urllib3 when running ANTA?","text":"

    When running the anta --help command, some users might encounter the following error:

    ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.0.2k-fips  26 Jan 2017'. See: https://github.com/urllib3/urllib3/issues/2168\n

    This error arises due to a compatibility issue between urllib3 v2.0 and older versions of OpenSSL.

    "},{"location":"faq/#how-can-i-resolve-this-error","title":"How can I resolve this error?","text":"
    1. Workaround: Downgrade urllib3

      If you need a quick fix, you can temporarily downgrade the urllib3 package:

      pip3 uninstall urllib3\n\npip3 install urllib3==1.26.15\n
    2. Recommended: Upgrade System or Libraries:

      As per the urllib3 v2 migration guide, the root cause of this error is an incompatibility with older OpenSSL versions. For example, users on RHEL7 might consider upgrading to RHEL8, which supports the required OpenSSL version.

    "},{"location":"faq/#why-am-i-seeing-attributeerror-module-lib-has-no-attribute-openssl_add_all_algorithms-when-running-anta","title":"Why am I seeing AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms' when running ANTA","text":"

    When running the anta commands after installation, some users might encounter the following error:

    AttributeError: module 'lib' has no attribute 'OpenSSL_add_all_algorithms'\n

    The error is the result of an incompatibility between cryptography and pyopenssl when installing asyncssh, which is a requirement of ANTA.

    "},{"location":"faq/#how-can-i-resolve-this-error_1","title":"How can I resolve this error?","text":"
    1. Upgrade pyopenssl

    If you\u2019ve tried the above solutions and continue to experience problems, please report the issue in our GitHub repository.

    "},{"location":"faq/#pip-install-u-pyopenssl220","title":"
    pip install -U pyopenssl>22.0\n
    ","text":""},{"location":"faq/#still-facing-issues","title":"Still facing issues?","text":""},{"location":"getting-started/","title":"Getting Started","text":""},{"location":"getting-started/#getting-started","title":"Getting Started","text":"

    This section shows how to use ANTA with a basic configuration. All examples are based on the Arista Test Drive (ATD) topology, which you can access by reaching out to your preferred SE.

    "},{"location":"getting-started/#installation","title":"Installation","text":"

    The easiest way to install the ANTA package is to use Python (>=3.8) and its pip package manager:

    pip install anta\n

    For more details about how to install the package, please see the requirements and installation section.

    "},{"location":"getting-started/#configure-arista-eos-devices","title":"Configure Arista EOS devices","text":"

    For ANTA to be able to connect to your target devices, you need to configure your management interface:

    vrf instance MGMT\n!\ninterface Management0\n   description oob_management\n   vrf MGMT\n   ip address 192.168.0.10/24\n!\n

    Then, configure access to eAPI:

    !\nmanagement api http-commands\n   protocol https port 443\n   no shutdown\n   vrf MGMT\n      no shutdown\n   !\n!\n
    "},{"location":"getting-started/#create-your-inventory","title":"Create your inventory","text":"

    ANTA uses an inventory to list the target devices for the tests. You can create a file manually with this format:

    anta_inventory:\nhosts:\n- host: 192.168.0.10\nname: spine01\ntags: ['fabric', 'spine']\n- host: 192.168.0.11\nname: spine02\ntags: ['fabric', 'spine']\n- host: 192.168.0.12\nname: leaf01\ntags: ['fabric', 'leaf']\n- host: 192.168.0.13\nname: leaf02\ntags: ['fabric', 'leaf']\n- host: 192.168.0.14\nname: leaf03\ntags: ['fabric', 'leaf']\n- host: 192.168.0.15\nname: leaf04\ntags: ['fabric', 'leaf']\n

    You can read more details about how to build your inventory here

    "},{"location":"getting-started/#test-catalog","title":"Test Catalog","text":"

    To test your network, ANTA relies on a test catalog to list all the tests to run against your inventory. A test catalog references Python tests in a YAML file.

    The structure to follow is:

    <anta_tests_submodule>:\n- <anta_tests_submodule function name>:\n<test function option>:\n<test function option value>\n

    You can read more details about how to build your catalog here

    Here is an example for basic tests:

    # Load anta.tests.software\nanta.tests.software:\n- VerifyEOSVersion: # Verifies the device is running one of the allowed EOS version.\nversions: # List of allowed EOS versions.\n- 4.25.4M\n- 4.26.1F\n- '4.28.3M-28837868.4283M (engineering build)'\n- VerifyTerminAttrVersion:\nversions:\n- v1.22.1\n\nanta.tests.system:\n- VerifyUptime: # Verifies the device uptime is higher than a value.\nminimum: 1\n- VerifyNTP:\n- VerifySyslog:\n\nanta.tests.mlag:\n- VerifyMlagStatus:\n- VerifyMlagInterfaces:\n- VerifyMlagConfigSanity:\n\nanta.tests.configuration:\n- VerifyZeroTouch: # Verifies ZeroTouch is disabled.\n- VerifyRunningConfigDiffs:\n
    "},{"location":"getting-started/#test-your-network","title":"Test your network","text":"

    ANTA comes with a generic CLI entrypoint to run tests in your network. It requires an inventory file as well as a test catalog.

    This entrypoint has multiple options to manage test coverage and reporting.

    # Generic ANTA options\n$ anta\nUsage: anta [OPTIONS] COMMAND [ARGS]...\n\n  Arista Network Test Automation (ANTA) CLI\n\nOptions:\n  --version                       Show the version and exit.\n  --username TEXT                 Username to connect to EOS  [env var:\n                                  ANTA_USERNAME; required]\n--password TEXT                 Password to connect to EOS that must be\n                                  provided. It can be prompted using '--\n                                  prompt' option.  [env var: ANTA_PASSWORD]\n--enable-password TEXT          Password to access EOS Privileged EXEC mode.\n                                  It can be prompted using '--prompt' option.\n                                  Requires '--enable' option.  [env var:\n                                  ANTA_ENABLE_PASSWORD]\n--enable                        Some commands may require EOS Privileged\n                                  EXEC mode. This option tries to access this\n                                  mode before sending a command to the device.\n                                  [env var: ANTA_ENABLE]\n-P, --prompt                    Prompt for passwords if they are not\n                                  provided.\n  --timeout INTEGER               Global connection timeout  [env var:\n                                  ANTA_TIMEOUT; default: 30]\n--insecure                      Disable SSH Host Key validation  [env var:\n                                  ANTA_INSECURE]\n-i, --inventory FILE            Path to the inventory YAML file  [env var:\n                                  ANTA_INVENTORY; required]\n--log-file FILE                 Send the logs to a file. If logging level is\n                                  DEBUG, only INFO or higher will be sent to\n                                  stdout.  [env var: ANTA_LOG_FILE]\n--log-level, --log [CRITICAL|ERROR|WARNING|INFO|DEBUG]\nANTA logging level  [env var:\n                                  ANTA_LOG_LEVEL; default: INFO]\n--ignore-status                 Always exit with success  [env var:\n                                  ANTA_IGNORE_STATUS]\n--ignore-error                  Only report failures and not errors  [env\n                                  var: ANTA_IGNORE_ERROR]\n--help                          Show this message and exit.\n\nCommands:\n  debug  Debug commands for building ANTA\n  exec   Execute commands to inventory devices\n  get    Get data from/to ANTA\n  nrfu   Run NRFU against inventory devices\n
    # NRFU part of ANTA\n$ anta nrfu --help\nUsage: anta nrfu [OPTIONS] COMMAND [ARGS]...\n\n  Run NRFU against inventory devices\n\nOptions:\n  -c, --catalog FILE  Path to the tests catalog YAML file  [env var:\n                      ANTA_NRFU_CATALOG; required]\n--help              Show this message and exit.\n\nCommands:\n  json        ANTA command to check network state with JSON result\n  table       ANTA command to check network states with table result\n  text        ANTA command to check network states with text result\n  tpl-report  ANTA command to check network state with templated report\n

    To run the NRFU, you need to select an output format amongst [\u201cjson\u201d, \u201ctable\u201d, \u201ctext\u201d, \u201ctpl-report\u201d]. For a first usage, table is recommended. By default, all test results for all devices are rendered, but this can be changed to a report per test case or per host.

    "},{"location":"getting-started/#default-report-using-table","title":"Default report using table","text":"
    anta \\\n--username tom \\\n--password arista123 \\\n--enable \\\n--enable-password t \\\n--inventory .personal/inventory_atd.yml \\\nnrfu --catalog .personal/tests-bases.yml table --tags leaf\n\n\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Settings \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Running ANTA tests:                                  \u2502\n\u2502 - ANTA Inventory contains 6 devices (AsyncEOSDevice) \u2502\n\u2502 - Tests catalog contains 10 tests                    \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n[10:17:24] INFO     Running ANTA tests...                                                                                                           runner.py:75\n  \u2022 Running NRFU Tests...100% \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 40/40 \u2022 0:00:02 \u2022 0:00:00\n\n                                                                       All tests results\n\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Device IP \u2503 Test Name                \u2503 Test Status \u2503 Message(s)       \u2503 Test description                                                     \u2503 Test category 
\u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 leaf01    \u2502 VerifyEOSVersion         \u2502 success     \u2502                  \u2502 Verifies the device is running one of the allowed EOS version.       \u2502 software      \u2502\n\u2502 leaf01    \u2502 VerifyTerminAttrVersion  \u2502 success     \u2502                  \u2502 Verifies the device is running one of the allowed TerminAttr         \u2502 software      \u2502\n\u2502           \u2502                          \u2502             \u2502                  \u2502 version.                                                             \u2502               \u2502\n\u2502 leaf01    \u2502 VerifyUptime             \u2502 success     \u2502                  \u2502 Verifies the device uptime is higher than a value.                   \u2502 system        \u2502\n\u2502 leaf01    \u2502 VerifyNTP                \u2502 success     \u2502                  \u2502 Verifies NTP is synchronised.                                        \u2502 system        \u2502\n\u2502 leaf01    \u2502 VerifySyslog             \u2502 success     \u2502                  \u2502 Verifies the device had no syslog message with a severity of warning \u2502 system        \u2502\n\u2502           \u2502                          \u2502             \u2502                  \u2502 (or a more severe message) during the last 7 days.                   \u2502               \u2502\n\u2502 leaf01    \u2502 VerifyMlagStatus         \u2502 skipped     \u2502 MLAG is disabled \u2502 This test verifies the health status of the MLAG configuration.      \u2502 mlag          \u2502\n\u2502 leaf01    \u2502 VerifyMlagInterfaces     \u2502 skipped     \u2502 MLAG is disabled \u2502 This test verifies there are no inactive or active-partial MLAG      \u2502 mlag          \u2502\n[...]\n\u2502 leaf04    \u2502 VerifyMlagConfigSanity   \u2502 skipped     \u2502 MLAG is disabled \u2502 This test verifies there are no MLAG config-sanity inconsistencies.  \u2502 mlag          \u2502\n\u2502 leaf04    \u2502 VerifyZeroTouch          \u2502 success     \u2502                  \u2502 Verifies ZeroTouch is disabled.                                      
\u2502 configuration \u2502\n\u2502 leaf04    \u2502 VerifyRunningConfigDiffs \u2502 success     \u2502                  \u2502                                                                      \u2502 configuration \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
    "},{"location":"getting-started/#report-in-text-mode","title":"Report in text mode","text":"
    $ anta \\\n--username tom \\\n--password arista123 \\\n--enable \\\n--enable-password t \\\n--inventory .personal/inventory_atd.yml \\\nnrfu --catalog .personal/tests-bases.yml text --tags leaf\n\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Settings \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Running ANTA tests:                                  \u2502\n\u2502 - ANTA Inventory contains 6 devices (AsyncEOSDevice) \u2502\n\u2502 - Tests catalog contains 10 tests                    \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n[10:20:47] INFO     Running ANTA tests...                                                                                                           runner.py:75\n  \u2022 Running NRFU Tests...100% \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 40/40 \u2022 0:00:01 \u2022 0:00:00\nleaf01 :: VerifyEOSVersion :: SUCCESS\nleaf01 :: VerifyTerminAttrVersion :: SUCCESS\nleaf01 :: VerifyUptime :: SUCCESS\nleaf01 :: VerifyNTP :: SUCCESS\nleaf01 :: VerifySyslog :: SUCCESS\nleaf01 :: VerifyMlagStatus :: SKIPPED (MLAG is disabled)\nleaf01 :: VerifyMlagInterfaces :: SKIPPED (MLAG is disabled)\nleaf01 :: VerifyMlagConfigSanity :: SKIPPED (MLAG is disabled)\n[...]\n
    "},{"location":"getting-started/#report-in-json-format","title":"Report in JSON format","text":"
    $ anta \\\n--username tom \\\n--password arista123 \\\n--enable \\\n--enable-password t \\\n--inventory .personal/inventory_atd.yml \\\nnrfu --catalog .personal/tests-bases.yml json --tags leaf\n\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Settings \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 Running ANTA tests:                                  \u2502\n\u2502 - ANTA Inventory contains 6 devices (AsyncEOSDevice) \u2502\n\u2502 - Tests catalog contains 10 tests                    \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n[10:21:51] INFO     Running ANTA tests...                                                                                                           runner.py:75\n  \u2022 Running NRFU Tests...100% \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 40/40 \u2022 0:00:02 \u2022 0:00:00\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 JSON results of all tests                                                                                                                                    
\u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n[\n{\n\"name\": \"leaf01\",\n    \"test\": \"VerifyEOSVersion\",\n    \"categories\": [\n\"software\"\n],\n    \"description\": \"Verifies the device is running one of the allowed EOS version.\",\n    \"result\": \"success\",\n    \"messages\": [],\n    \"custom_field\": \"None\",\n  },\n  {\n\"name\": \"leaf01\",\n    \"test\": \"VerifyTerminAttrVersion\",\n    \"categories\": [\n\"software\"\n],\n    \"description\": \"Verifies the device is running one of the allowed TerminAttr version.\",\n    \"result\": \"success\",\n    \"messages\": [],\n    \"custom_field\": \"None\",\n  },\n[...]\n]\n

    You can find more information under the usage section of the website

    "},{"location":"requirements-and-installation/","title":"Installation","text":""},{"location":"requirements-and-installation/#anta-requirements","title":"ANTA Requirements","text":""},{"location":"requirements-and-installation/#python-version","title":"Python version","text":"

    Python 3 (>=3.8) is required:

    python --version\nPython 3.9.9\n
    "},{"location":"requirements-and-installation/#install-anta-package","title":"Install ANTA package","text":"

    This installation will deploy the test collection, scripts and all their Python requirements.

    The ANTA package and the CLI require some packages that are not part of the Python standard library. They are listed in the pyproject.toml file, under dependencies.

    "},{"location":"requirements-and-installation/#install-from-pypi-server","title":"Install from Pypi server","text":"
    pip install anta\n
    "},{"location":"requirements-and-installation/#install-anta-from-github","title":"Install ANTA from github","text":"
    pip install git+https://github.com/arista-netdevops-community/anta.git\n\n# You can even specify the branch, tag or commit:\npip install git+https://github.com/arista-netdevops-community/anta.git@<cool-feature-branch>\npip install git+https://github.com/arista-netdevops-community/anta.git@<cool-tag>\npip install git+https://github.com/arista-netdevops-community/anta.git@<more-or-less-cool-hash>\n
    "},{"location":"requirements-and-installation/#check-installation","title":"Check installation","text":"

    After installing ANTA, verify the installation with the following commands:

    # Check ANTA has been installed in your python path\npip list | grep anta\n\n# Check scripts are in your $PATH\n# Path may differ but it means CLI is in your path\nwhich anta\n/home/tom/.pyenv/shims/anta\n

    Warning

    Before running the anta --version command, please be aware that some users have reported issues related to the urllib3 package. If you encounter an error at this step, please refer to our FAQ page for guidance on resolving it.

    # Check ANTA version\nanta --version\nanta, version v0.7.2\n
    "},{"location":"requirements-and-installation/#eos-requirements","title":"EOS Requirements","text":"

    To get ANTA working, the targeted Arista EOS devices must have the following configuration (assuming you connect to the device using its Management interface in the MGMT VRF):

    configure\n!\nvrf instance MGMT\n!\ninterface Management1\n   description oob_management\n   vrf MGMT\n   ip address 10.73.1.105/24\n!\nend\n

    Enable eAPI in the MGMT VRF:

    configure\n!\nmanagement api http-commands\n   protocol https port 443\n   no shutdown\n   vrf MGMT\n      no shutdown\n!\nend\n

    Now the switch accepts HTTPS requests containing a list of CLI commands on port 443 in the MGMT VRF.

    Run these EOS commands to verify:

    show management http-server\nshow management api http-commands\n
    "},{"location":"usage-inventory-catalog/","title":"Inventory & Tests catalog","text":""},{"location":"usage-inventory-catalog/#inventory-and-catalog-definition","title":"Inventory and Catalog definition","text":"

    This page describes how to create an inventory and a tests catalog.

    "},{"location":"usage-inventory-catalog/#create-an-inventory-file","title":"Create an inventory file","text":"

    The anta CLI needs an inventory file listing all the devices to test. This inventory is a YAML file with the following keys:

    anta_inventory:\nhosts:\n- host: < ip address value >\nport: < TCP port for eAPI. Default is 443 (Optional)>\nname: < name to display in report. Default is host:port (Optional) >\ntags: < list of tags to use to filter inventory during tests. Default is ['all']. (Optional) >\nnetworks:\n- network: < network using CIDR notation >\ntags: < list of tags to use to filter inventory during tests. Default is ['all']. (Optional) >\nranges:\n- start: < first ip address value of the range >\nend: < last ip address value of the range >\ntags: < list of tags to use to filter inventory during tests. Default is ['all']. (Optional) >\n

    Your inventory file can be based on any of these 3 keys and MUST start with the anta_inventory key. A full description of the inventory model is available in the API documentation

    An inventory example:

    ---\nanta_inventory:\nhosts:\n- host: 192.168.0.10\nname: spine01\ntags: ['fabric', 'spine']\n- host: 192.168.0.11\nname: spine02\ntags: ['fabric', 'spine']\nnetworks:\n- network: '192.168.110.0/24'\ntags: ['fabric', 'leaf']\nranges:\n- start: 10.0.0.9\nend: 10.0.0.11\ntags: ['fabric', 'l2leaf']\n
    "},{"location":"usage-inventory-catalog/#test-catalog","title":"Test Catalog","text":"

    In addition to your inventory file, you also have to define a catalog of tests to execute against all your devices. This catalog lists all your tests and their parameters. It is a YAML file whose keys are the Python paths of the test modules.

    "},{"location":"usage-inventory-catalog/#default-tests-catalog","title":"Default tests catalog","text":"

    All tests are located under the anta.tests module and are categorised per family (one submodule per family). So to run the software version test, you can do:

    anta.tests.software:\n- VerifyEosVersion:\n

    It will load the VerifyEosVersion test located in anta.tests.software. Since this test takes parameters, we will create a catalog with the following structure:

    anta.tests.software:\n- VerifyEosVersion:\n# List of allowed EOS versions.\nversions:\n- 4.25.4M\n- 4.26.1F\n

    To get a list of all available tests and their respective parameters, you can read the tests section of this website.

    The following example gives a very minimal tests catalog you can use in almost any situation.

    ---\n# Load anta.tests.software\nanta.tests.software:\n# Verifies the device is running one of the allowed EOS version.\n- VerifyEosVersion:\n# List of allowed EOS versions.\nversions:\n- 4.25.4M\n- 4.26.1F\n\n# Load anta.tests.system\nanta.tests.system:\n# Verifies the device uptime is higher than a value.\n- VerifyUptime:\nminimum: 1\n\n# Load anta.tests.configuration\nanta.tests.configuration:\n# Verifies ZeroTouch is disabled.\n- VerifyZeroTouch:\n- VerifyRunningConfigDiffs:\n
    "},{"location":"usage-inventory-catalog/#custom-tests-catalog","title":"Custom tests catalog","text":"

    If you want to leverage your own test collection, you can use the following syntax:

    <your package name>:\n- <your test in your package name>:\n

    So for instance, it could be:

    titom73.tests.system:\n- VerifyPlatform:\ntype: ['cEOS-LAB']\n

    How to create custom tests

    To create your custom tests, you should refer to the following documentation

    "},{"location":"usage-inventory-catalog/#customize-test-description-and-categories","title":"Customize test description and categories","text":"

    It might be useful to use your own categories and a customized test description to build a better report for your environment. ANTA comes with a handy feature to define your own categories and descriptions in the report.

    In your test catalog, use the result_overwrite dictionary with categories and description to overwrite these values in your report:

    anta.tests.configuration:\n- VerifyZeroTouch: # Verifies ZeroTouch is disabled.\nresult_overwrite:\ncategories: ['demo', 'pr296']\ndescription: A custom test\n- VerifyRunningConfigDiffs:\nanta.tests.interfaces:\n- VerifyInterfaceUtilization:\n

    Once you run anta nrfu table, you will see the following output:

    \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2503 Device IP \u2503 Test Name                  \u2503 Test Status \u2503 Message(s) \u2503 Test description                              \u2503 Test category \u2503\n\u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529\n\u2502 spine01   \u2502 VerifyZeroTouch            \u2502 success     \u2502            \u2502 A custom test                                 \u2502 demo, pr296   \u2502\n\u2502 spine01   \u2502 VerifyRunningConfigDiffs   \u2502 success     \u2502            \u2502                                               \u2502 configuration \u2502\n\u2502 spine01   \u2502 VerifyInterfaceUtilization \u2502 success     \u2502            \u2502 Verifies interfaces utilization is below 75%. \u2502 interfaces    \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n
    "},{"location":"advanced_usages/as-python-lib/","title":"ANTA as a Python Library","text":"

    ANTA is a Python library that can be used in user applications. This section describes how you can leverage ANTA Python modules to help you create your own NRFU solution.

    Tip

    If you are unfamiliar with asyncio, refer to the Python documentation relevant to your Python version - https://docs.python.org/3/library/asyncio.html

    "},{"location":"advanced_usages/as-python-lib/#antadevice-abstract-class","title":"AntaDevice Abstract Class","text":"

    A device is represented in ANTA as an instance of a subclass of the AntaDevice abstract class. There are a few abstract methods that need to be implemented by child classes:

    • The collect() coroutine is in charge of collecting outputs of AntaCommand instances.
    • The refresh() coroutine is in charge of updating attributes of the AntaDevice instance. These attributes are used by AntaInventory to filter out unreachable devices or by AntaTest to skip devices based on their hardware models.

    The copy() coroutine is used to copy files to and from the device. It does not need to be implemented if your tests do not use it.
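    A minimal, illustrative sketch of such a subclass is shown below; the module path and method signatures are assumptions based on the description above, so refer to the AntaDevice API documentation for the exact interface.

    from anta.device import AntaDevice  # module path assumed for illustration\nfrom anta.models import AntaCommand\n\n\nclass MyCustomDevice(AntaDevice):\n    \"\"\"Hypothetical AntaDevice subclass for a custom transport.\"\"\"\n\n    async def collect(self, command: AntaCommand) -> None:\n        # Sketch: gather the command output and store it in command.output\n        ...\n\n    async def refresh(self) -> None:\n        # Sketch: update attributes such as is_online, established and hw_model\n        ...\n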

    "},{"location":"advanced_usages/as-python-lib/#asynceosdevice-class","title":"AsyncEOSDevice Class","text":"

    The AsyncEOSDevice class is an implementation of AntaDevice for Arista EOS. It uses the aio-eapi eAPI client and the AsyncSSH library.

    • The collect() coroutine collects AntaCommand outputs using eAPI.
    • The refresh() coroutine tries to open a TCP connection on the eAPI port and updates the is_online attribute accordingly. If the TCP connection succeeds, it sends a show version command to gather the hardware model of the device and updates the established and hw_model attributes.
    • The copy() coroutine copies files to and from the device using the SCP protocol.
    "},{"location":"advanced_usages/as-python-lib/#antainventory-class","title":"AntaInventory Class","text":"

    The AntaInventory class is a subclass of the standard Python type dict. The keys of this dictionary are the device names, the values are AntaDevice instances.

    AntaInventory provides methods to interact with the ANTA inventory:

    • The add_device() method adds an AntaDevice instance to the inventory. Adding an entry to AntaInventory with a key different from the device name is not allowed.
    • The get_inventory() method returns a new AntaInventory instance with devices filtered out based on the method inputs.
    • The connect_inventory() coroutine will execute the refresh() coroutines of all the devices in the inventory.
    • The parse() static method creates an AntaInventory instance from a YAML file and returns it. The devices are AsyncEOSDevice instances.

    To parse a YAML inventory file and print the devices connection status:

    \"\"\"\nExample\n\"\"\"\nimport asyncio\n\nfrom anta.inventory import AntaInventory\n\n\nasync def main(inv: AntaInventory) -> None:\n\"\"\"\n    Take an AntaInventory and:\n    1. try to connect to every device in the inventory\n    2. print a message for every device connection status\n    \"\"\"\n    await inv.connect_inventory()\n\n    for device in inv.values():\n        if device.established:\n            print(f\"Device {device.name} is online\")\n        else:\n            print(f\"Could not connect to device {device.name}\")\n\nif __name__ == \"__main__\":\n    # Create the AntaInventory instance\n    inventory = AntaInventory.parse(\n        inventory_file=\"inv.yml\",\n        username=\"arista\",\n        password=\"@rista123\",\n        timeout=15,\n    )\n\n    # Run the main coroutine\n    res = asyncio.run(main(inventory))\n
    How to create your inventory file

    Please visit this dedicated section for how to use inventory and catalog files.

    To run a list of EOS commands on the reachable devices from the inventory:

    \"\"\"\nExample\n\"\"\"\n# This is needed to run the script for python < 3.10 for typing annotations\nfrom __future__ import annotations\n\nimport asyncio\nfrom pprint import pprint\n\nfrom anta.inventory import AntaInventory\nfrom anta.models import AntaCommand\n\n\nasync def main(inv: AntaInventory, commands: list[str]) -> dict[str, list[AntaCommand]]:\n\"\"\"\n    Take an AntaInventory and a list of commands as string and:\n    1. try to connect to every device in the inventory\n    2. collect the results of the commands from each device\n\n    Returns:\n      a dictionary where key is the device name and the value is the list of AntaCommand ran towards the device\n    \"\"\"\n    await inv.connect_inventory()\n\n    # Make a list of coroutine to run commands towards each connected device\n    coros = []\n    # dict to keep track of the commands per device\n    result_dict = {}\n    for name, device in inv.get_inventory(established_only=True).items():\n        anta_commands = [AntaCommand(command=command, ofmt=\"json\") for command in commands]\n        result_dict[name] = anta_commands\n        coros.append(device.collect_commands(anta_commands))\n\n    # Run the coroutines\n    await asyncio.gather(*coros)\n\n    return result_dict\n\n\nif __name__ == \"__main__\":\n    # Create the AntaInventory instance\n    inventory = AntaInventory.parse(\n        inventory_file=\"inv.yml\",\n        username=\"arista\",\n        password=\"@rista123\",\n        timeout=15,\n    )\n\n    # Create a list of commands with json output\n    commands = [\"show version\", \"show ip bgp summary\"]\n\n    # Run the main asyncio  entry point\n    res = asyncio.run(main(inventory, commands))\n\n    pprint(res)\n

    "},{"location":"advanced_usages/as-python-lib/#use-tests-from-anta","title":"Use tests from ANTA","text":"

    All the test classes inherit from the same abstract base class AntaTest. The class definition indicates which commands are required for the test, and the user should focus only on writing the test function with optional keyword arguments. Upon creation, the class instance instantiates a TestResult object that can be accessed later on to check the status of the test ([unset, skipped, success, failure, error]).

    "},{"location":"advanced_usages/as-python-lib/#test-structure","title":"Test structure","text":"

    All tests are built on a class named AntaTest which provides a complete toolset for a test:

    • Object creation
    • Test definition
    • TestResult definition
    • Abstracted method to collect data

    This approach means that each time you create a test, it will be based on this AntaTest class. In addition, you will have to provide some elements:

    • name: Name of the test
    • description: A human readable description of your test
    • categories: a list of categories to sort the test.
    • commands: a list of commands to run. This list must be a list of AntaCommand instances, which are described in the next part of this document.

    Here is an example of a hardware test related to device temperature:

    from __future__ import annotations\n\nimport logging\nfrom typing import Any, Dict, List, Optional, cast\n\nfrom anta.models import AntaTest, AntaCommand\n\n\nclass VerifyTemperature(AntaTest):\n\"\"\"\n    Verifies device temparture is currently OK.\n    \"\"\"\n\n    # The test name\n    name = \"VerifyTemperature\"\n    # A small description of the test, usually the first line of the class docstring\n    description = \"Verifies device temparture is currently OK\"\n    # The category of the test, usually the module name\n    categories = [\"hardware\"]\n    # The command(s) used for the test. Could be a template instead\n    commands = [AntaCommand(command=\"show system environment temperature\", ofmt=\"json\")]\n\n    # Decorator\n    @AntaTest.anta_test\n    # abstract method that must be defined by the child Test class\n    def test(self) -> None:\n\"\"\"Run VerifyTemperature validation\"\"\"\n        command_output = cast(Dict[str, Dict[Any, Any]], self.instance_commands[0].output)\n        temperature_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        if temperature_status == \"temperatureOk\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device temperature is not OK, systemStatus: {temperature_status }\")\n

    When you run the test, the object will automatically call its anta.models.AntaTest.collect() method to get the device output for each command if no pre-collected data was given to the test. This method loops over the anta.inventory.models.InventoryDevice.collect() methods, which are in charge of managing the device connection and how to get data.

    run test offline

    You can also pass EOS data directly to your test if you want to validate data collected in a different workflow. An example is provided below for information:

    test = VerifyTemperature(mocked_device, eos_data=test_data[\"eos_data\"])\nasyncio.run(test.test())\n

    The test function is always the same and must be defined with the @AntaTest.anta_test decorator. This function takes at least one argument, which is an anta.inventory.models.InventoryDevice object. In some cases, a test relies on additional inputs from the user, for instance the number of expected peers. All parameters must come with a default value, and the test function should validate the parameter values (at this stage, this is the only place where validation can be done, but there are future plans to improve this).

    class VerifyTemperature(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self) -> None:\n        pass\n\nclass VerifyTransceiversManufacturers(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self, manufacturers: Optional[List[str]] = None) -> None:\n        # validate the manufactures parameter\n        pass\n

    The test itself does not return any value, but the result is directly available from your AntaTest object, which exposes an anta.result_manager.models.TestResult object with the result, the name of the test and optional messages:

    • name (str): Device name where the test has run.
    • test (str): Name of the test run on the device.
    • categories (List[str]): List of categories the TestResult belongs to, by default the AntaTest categories.
    • description (str): TestResult description, by default the AntaTest description.
    • result (str): Result of the test. Can be one of [\u201cunset\u201d, \u201csuccess\u201d, \u201cfailure\u201d, \u201cerror\u201d, \u201cskipped\u201d].
    • message (str, optional): Message to report after the test if any.
    • custom_field (str, optional): Custom field to store a string for flexibility in integrating with ANTA
    from anta.tests.hardware import VerifyTemperature\n\ntest = VerifyTemperature(mocked_device, eos_data=test_data[\"eos_data\"])\nasyncio.run(test.test())\nassert test.result.result == \"success\"\n
    "},{"location":"advanced_usages/as-python-lib/#classes-for-commands","title":"Classes for commands","text":"

    To make it easier to get data, ANTA defines 2 different classes to manage commands to send to devices:

    "},{"location":"advanced_usages/as-python-lib/#antacommand-class","title":"AntaCommand Class","text":"

    Represents a command with the following information:

    • Command to run
    • Expected output format
    • eAPI version
    • Output of the command

    Usage example:

    from anta.models import AntaCommand\n\ncmd1 = AntaCommand(command=\"show zerotouch\")\ncmd2 = AntaCommand(command=\"show running-config diffs\", ofmt=\"text\")\n

    Command revision and version

    • Most EOS commands return a JSON structure according to a model (some commands may not be modeled, hence the need to sometimes use the text output format).
    • The model can change over time (when adding features, \u2026) and when the model is changed in a non-backward-compatible way, the revision number is bumped. The initial model starts with revision 1.
    • A revision applies to a particular CLI command whereas a version is global to an eAPI call. The version is internally translated to a specific revision for each CLI command in the RPC call. The currently supported version values are 1 and latest.
    • A revision takes precedence over a version (e.g. if a command is run with version=\u201dlatest\u201d and revision=1, the first revision of the model is returned)
    • By default, eAPI returns the first revision of each model to ensure that integration with existing tools is not broken when upgrading. This is done by using version=1 by default in eAPI calls.

    ANTA uses version=\"latest\" by default in AntaCommand. For some commands, you may want to run them with a different revision or version.

    For instance the VerifyRoutingTableSize test leverages the first revision of show bfd peers:

    # revision 1 as later revision introduce additional nesting for type\ncommands = [AntaCommand(command=\"show bfd peers\", revision=1)]\n
    "},{"location":"advanced_usages/as-python-lib/#antatemplate-class","title":"AntaTemplate Class","text":"

    Because some commands require more dynamic input than a static command with no parameters, ANTA supports command templates: you define a template in your test class and the user provides parameters when creating the test object.

    class RunArbitraryTemplateCommand(AntaTest):\n\"\"\"\n    Run an EOS command and return result\n    Based on AntaTest to build relevant output for pytest\n    \"\"\"\n\n    name = \"Run aributrary EOS command\"\n    description = \"To be used only with anta debug commands\"\n    template = AntaTemplate(template=\"show interfaces {ifd}\")\n    categories = [\"debug\"]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        errdisabled_interfaces = [interface for interface, value in response[\"interfaceStatuses\"].items() if value[\"linkStatus\"] == \"errdisabled\"]\n        ...\n\n\nparams = [{\"ifd\": \"Ethernet2\"}, {\"ifd\": \"Ethernet49/1\"}]\nrun_command1 = RunArbitraryTemplateCommand(device_anta, params)\n

    In this example, the test expects the interfaces to check from the user's setup and will only check the interfaces provided in params.

    "},{"location":"advanced_usages/custom-tests/","title":"Developing ANTA tests","text":"

    This documentation applies both to creating tests in ANTA and to creating your own test package.

    ANTA is not only a Python library with a CLI and a collection of built-in tests, it is also a framework you can extend by building your own tests.

    "},{"location":"advanced_usages/custom-tests/#generic-approach","title":"Generic approach","text":"

    A test is a Python class where a test function is defined and will be run by the framework.

    ANTA provides an abstract class AntaTest. This class does the heavy lifting and provides the logic to define, collect and test data. The code below is an example of a simple test in ANTA, which is an AntaTest subclass:

    from anta.models import AntaTest, AntaCommand\nfrom anta.decorators import skip_on_platforms\n\n\nclass VerifyTemperature(AntaTest):\n\"\"\"\n    This test verifies if the device temperature is within acceptable limits.\n\n    Expected Results:\n      * success: The test will pass if the device temperature is currently OK: 'temperatureOk'.\n      * failure: The test will fail if the device temperature is NOT OK.\n    \"\"\"\n\n    name = \"VerifyTemperature\"\n    description = \"Verifies if the device temperature is within the acceptable range.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment temperature\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        temperature_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        if temperature_status == \"temperatureOk\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device temperature exceeds acceptable limits. Current system status: '{temperature_status}'\")\n

    AntaTest also provides more advanced capabilities, such as AntaCommand templating using the AntaTemplate class and test input definition and validation using the AntaTest.Input pydantic model. These are discussed in the sections below.

    "},{"location":"advanced_usages/custom-tests/#antatest-structure","title":"AntaTest structure","text":""},{"location":"advanced_usages/custom-tests/#class-attributes","title":"Class Attributes","text":"
    • name (str): Name of the test. Used during reporting.
    • description (str): A human readable description of your test.
    • categories (list[str]): A list of categories in which the test belongs.
    • commands (list[Union[AntaTemplate, AntaCommand]]): A list of commands to collect from devices. This list must be a list of AntaCommand or AntaTemplate instances. Rendering AntaTemplate instances will be discussed later.

    Info

    All these class attributes are mandatory. If any attribute is missing, a NotImplementedError exception will be raised during class instantiation.

    "},{"location":"advanced_usages/custom-tests/#instance-attributes","title":"Instance Attributes","text":"

    Info

    You can access an instance attribute in your code using the self reference. E.g. you can access the test input values using self.inputs.

    Logger object

ANTA already provides comprehensive logging at every step of a test execution. The AntaTest class also provides a logger attribute that is a Python logger specific to the test instance. See the Python documentation for more information.

    AntaDevice object

    Even if device is not a private attribute, you should not need to access this object in your code.

    "},{"location":"advanced_usages/custom-tests/#test-inputs","title":"Test Inputs","text":"

AntaTest.Input is a pydantic model that allows test developers to define their test inputs. pydantic provides out-of-the-box error handling for test input validation based on the type hints defined by the test developer.

    The base definition of AntaTest.Input provides common test inputs for all AntaTest instances:

    "},{"location":"advanced_usages/custom-tests/#input-model","title":"Input model","text":""},{"location":"advanced_usages/custom-tests/#resultoverwrite-model","title":"ResultOverwrite model","text":"

    Attributes:

    Name Type Description description Optional[str]

    overwrite TestResult.description

    categories Optional[List[str]]

    overwrite TestResult.categories

    custom_field Optional[str]

    a free string that will be included in the TestResult object

    Note

The pydantic model is configured with extra=forbid, which makes input validation fail if extra fields are provided.

    "},{"location":"advanced_usages/custom-tests/#methods","title":"Methods","text":"
    • test(self) -> None: This is an abstract method that must be implemented. It contains the test logic that can access the collected command outputs using the instance_commands instance attribute, access the test inputs using the inputs instance attribute and must set the result instance attribute accordingly. It must be implemented using the AntaTest.anta_test decorator that provides logging and will collect commands before executing the test() method.
• render(self, template: AntaTemplate) -> list[AntaCommand]: This method only needs to be implemented if AntaTemplate instances are present in the commands class attribute. It will be called for every AntaTemplate occurrence and must return a list of AntaCommand using the AntaTemplate.render() method. It can access test inputs using the inputs instance attribute.
    "},{"location":"advanced_usages/custom-tests/#test-execution","title":"Test execution","text":"

    Below is a high level description of the test execution flow in ANTA:

    1. ANTA will parse the test catalog to get the list of AntaTest subclasses to instantiate and their associated input values. We consider a single AntaTest subclass in the following steps.

    2. ANTA will instantiate the AntaTest subclass and a single device will be provided to the test instance. The Input model defined in the class will also be instantiated at this moment. If any ValidationError is raised, the test execution will be stopped.

    3. If there is any AntaTemplate instance in the commands class attribute, render() will be called for every occurrence. At this moment, the instance_commands attribute has been initialized. If any rendering error occurs, the test execution will be stopped.

    4. The AntaTest.anta_test decorator will collect the commands from the device and update the instance_commands attribute with the outputs. If any collection error occurs, the test execution will be stopped.

    5. The test() method is executed.
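
When using ANTA as a Python library, this flow can be driven directly. Below is a minimal, hypothetical sketch (the device address, credentials and the choice of the built-in VerifyTemperature test are assumptions for illustration) that instantiates a test on a device and awaits its test() coroutine:

import asyncio\n\nfrom anta.device import AsyncEOSDevice\nfrom anta.tests.hardware import VerifyTemperature  # built-in test used as an example\n\n# Hypothetical device details for illustration\ndevice = AsyncEOSDevice(host=\"10.0.0.1\", username=\"admin\", password=\"arista123\", name=\"spine01\")\n\n\nasync def main() -> None:\n    await device.refresh()  # populates is_online, established and hw_model\n    test = VerifyTemperature(device, inputs=None)  # steps 1 to 3: instantiation and command rendering\n    result = await test.test()  # steps 4 and 5: collection and test execution\n    print(result.result, result.messages)\n\n\nasyncio.run(main())\n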

    "},{"location":"advanced_usages/custom-tests/#writing-an-antatest-subclass","title":"Writing an AntaTest subclass","text":"

    In this section, we will go into all the details of writing an AntaTest subclass.

    "},{"location":"advanced_usages/custom-tests/#class-definition","title":"Class definition","text":"

    Import anta.models.AntaTest and define your own class. Define the mandatory class attributes using anta.models.AntaCommand, anta.models.AntaTemplate or both.

    from anta.models import AntaTest, AntaCommand, AntaTemplate\n\n\nclass <YourTestName>(AntaTest):\n\"\"\"\n    <a docstring description of your test>\n    \"\"\"\n\n    name = \"YourTestName\"                                           # should be your class name\n    description = \"<test description in human reading format>\"\n    categories = [\"<arbitrary category>\", \"<another arbitrary category>\"]\n    commands = [\n        AntaCommand(\n            command=\"<EOS command to run>\",\n            ofmt=\"<command format output>\",\n            version=\"<eAPI version to use>\",\n            revision=\"<revision to use for the command>\",           # revision has precedence over version\n        ),\n        AntaTemplate(\n            template=\"<Python f-string to render an EOS command>\",\n            ofmt=\"<command format output>\",\n            version=\"<eAPI version to use>\",\n            revision=\"<revision to use for the command>\",           # revision has precedence over version\n        )\n    ]\n
    "},{"location":"advanced_usages/custom-tests/#inputs-definition","title":"Inputs definition","text":"

    If the user needs to provide inputs for your test, you need to define a pydantic model that defines the schema of the test inputs:

    class <YourTestName>(AntaTest):\n    ...\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        <input field name>: <input field type>\n\"\"\"<input field docstring>\"\"\"\n

    To define an input field type, refer to the pydantic documentation about types. You can also leverage anta.custom_types that provides reusable types defined in ANTA tests.

    Regarding required, optional and nullable fields, refer to this documentation on how to define them.

    Note

    All the pydantic features are supported. For instance you can define validators for complex input validation.
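
As a minimal, hypothetical sketch (the VerifyDynamicVlans class name and its minimum field are assumptions for illustration), an Input definition using a constrained integer could look like:

from pydantic import conint\n\nfrom anta.models import AntaTest\n\n\nclass VerifyDynamicVlans(AntaTest):\n    ...\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        minimum: conint(ge=0)\n        \"\"\"Minimum expected number of dynamic VLANs\"\"\"\n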

    "},{"location":"advanced_usages/custom-tests/#template-rendering","title":"Template rendering","text":"

    Define the render() method if you have AntaTemplate instances in your commands class attribute:

    class <YourTestName>(AntaTest):\n    ...\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(<template param>=input_value) for input_value in self.inputs.<input_field>]\n

    You can access test inputs and render as many AntaCommand as desired.
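
For example, a hypothetical test (the VerifyInterfacesExample class name and its interfaces input field are assumptions for illustration) that takes a list of interface names could render one command per interface:

class VerifyInterfacesExample(AntaTest):\n    ...\n    commands = [AntaTemplate(template=\"show interfaces {interface}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        interfaces: list[str]\n        \"\"\"List of interfaces to verify\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        # Render one 'show interfaces <name>' command per interface provided in the inputs\n        return [template.render(interface=interface) for interface in self.inputs.interfaces]\n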

    "},{"location":"advanced_usages/custom-tests/#test-definition","title":"Test definition","text":"

    Implement the test() method with your test logic:

    class <YourTestName>(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self) -> None:\n        pass\n

The logic usually includes the following different stages:

1. Parse the command outputs using the self.instance_commands instance attribute.

2. If needed, access the test inputs using the self.inputs instance attribute and write your conditional logic.

3. Set the result instance attribute to reflect the test result by either calling self.result.is_success() or self.result.is_failure(\"<FAILURE REASON>\"). Sometimes, setting the test result to skipped using self.result.is_skipped(\"<SKIPPED REASON>\") can make sense (e.g. testing the OSPF neighbor states but no neighbor was found). However, you should not need to catch any exception and set the test result to error since the error handling is done by the framework, see below.

    The example below is based on the VerifyTemperature test.

    class VerifyTemperature(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Grab output of the collected command\n        command_output = self.instance_commands[0].json_output\n\n        # Do your test: In this example we check a specific field of the JSON output from EOS\n        temperature_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        if temperature_status == \"temperatureOk\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device temperature exceeds acceptable limits. Current system status: '{temperature_status}'\")\n

As you can see, there is no error handling to do in your code. Everything is handled by the AntaTest.anta_test decorator; below is a simple example of an error captured when trying to access a dictionary with an incorrect key:

    class VerifyTemperature(AntaTest):\n    ...\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Grab output of the collected command\n        command_output = self.instance_commands[0].json_output\n\n        # Access the dictionary with an incorrect key\n        command_output['incorrectKey']\n
    ERROR    Exception raised for test VerifyTemperature (on device 192.168.0.10) - KeyError ('incorrectKey')\n

    Get stack trace for debugging

    If you want to access to the full exception stack, you can run ANTA in debug mode by setting the ANTA_DEBUG environment variable to true. Example:

    $ ANTA_DEBUG=true anta nrfu --catalog test_custom.yml text\n

    "},{"location":"advanced_usages/custom-tests/#test-decorators","title":"Test decorators","text":"

In addition to the required AntaTest.anta_test decorator, ANTA offers a set of optional decorators for further test customization:

    • anta.decorators.deprecated_test: Use this to log a message of WARNING severity when a test is deprecated.
    • anta.decorators.skip_on_platforms: Use this to skip tests for functionalities that are not supported on specific platforms.
    • anta.decorators.check_bgp_family_enable: Use this to skip tests when a particular BGP address family is not configured on the device.

    Warning

    The check_bgp_family_enable decorator is deprecated and will eventually be removed in a future major release of ANTA. For more details, please refer to the BGP tests section.

    from anta.decorators import skip_on_platforms\n\nclass VerifyTemperature(AntaTest):\n    ...\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        pass\n
    "},{"location":"advanced_usages/custom-tests/#access-your-custom-tests-in-the-test-catalog","title":"Access your custom tests in the test catalog","text":"

This section is required only if you are not merging your development into ANTA. Otherwise, just follow the contribution guide.

For that, you need to create your own Python package as described in this hitchhiker\u2019s guide to package Python code. We assume it is well known and we won\u2019t focus on this aspect. Thus, your package must be importable by ANTA and hence available in the module search path sys.path (you can use PYTHONPATH for example).

It is very similar to what is documented in the catalog section but you have to use your own package name.

Let\u2019s say the custom Python package is anta_titom73 and the test is defined in the anta_titom73.dc_project Python module; the test catalog would look like:

    anta_titom73.dc_project:\n- VerifyFeatureX:\nminimum: 1\n
    And now you can run your NRFU tests with the CLI:

    anta nrfu text --catalog test_custom.yml\nspine01 :: verify_dynamic_vlan :: FAILURE (Device has 0 configured, we expect at least 1)\nspine02 :: verify_dynamic_vlan :: FAILURE (Device has 0 configured, we expect at least 1)\nleaf01 :: verify_dynamic_vlan :: SUCCESS\nleaf02 :: verify_dynamic_vlan :: SUCCESS\nleaf03 :: verify_dynamic_vlan :: SUCCESS\nleaf04 :: verify_dynamic_vlan :: SUCCESS\n
    "},{"location":"api/device/","title":"Device models","text":""},{"location":"api/device/#antadevice-base-class","title":"AntaDevice base class","text":""},{"location":"api/device/#uml-representation","title":"UML representation","text":""},{"location":"api/device/#anta.device.AntaDevice","title":"AntaDevice","text":"
    AntaDevice(name: str, tags: Optional[list[str]] = None)\n

    Bases: ABC

Abstract class representing a device in ANTA. An implementation of this class must override the abstract coroutines collect() and refresh().

    Attributes:

    Name Type Description name str

    Device name

    is_online bool

    True if the device IP is reachable and a port can be open

    established bool

    True if remote command execution succeeds

    hw_model Optional[str]

    Hardware model of the device

    tags list[str]

    List of tags for this device

    Parameters:

    Name Type Description Default name str

    Device name

    required tags Optional[list[str]]

    list of tags for this device

    None Source code in anta/device.py
    def __init__(self, name: str, tags: Optional[list[str]] = None) -> None:\n\"\"\"\n    Constructor of AntaDevice\n\n    Args:\n        name: Device name\n        tags: list of tags for this device\n    \"\"\"\n    self.name: str = name\n    self.hw_model: Optional[str] = None\n    self.tags: list[str] = tags if tags is not None else []\n    self.is_online: bool = False\n    self.established: bool = False\n\n    # Ensure tag 'all' is always set\n    if DEFAULT_TAG not in self.tags:\n        self.tags.append(DEFAULT_TAG)\n
    "},{"location":"api/device/#anta.device.AntaDevice.collect","title":"collect abstractmethod async","text":"
    collect(command: AntaCommand) -> None\n

    Collect device command output. This abstract coroutine can be used to implement any command collection method for a device in ANTA.

    The collect() implementation needs to populate the output attribute of the AntaCommand object passed as argument.

    If a failure occurs, the collect() implementation is expected to catch the exception and implement proper logging, the output attribute of the AntaCommand object passed as argument would be None in this case.

    Parameters:

    Name Type Description Default command AntaCommand

    the command to collect

    required Source code in anta/device.py
    @abstractmethod\nasync def collect(self, command: AntaCommand) -> None:\n\"\"\"\n    Collect device command output.\n    This abstract coroutine can be used to implement any command collection method\n    for a device in ANTA.\n\n    The `collect()` implementation needs to populate the `output` attribute\n    of the `AntaCommand` object passed as argument.\n\n    If a failure occurs, the `collect()` implementation is expected to catch the\n    exception and implement proper logging, the `output` attribute of the\n    `AntaCommand` object passed as argument would be `None` in this case.\n\n    Args:\n        command: the command to collect\n    \"\"\"\n
    "},{"location":"api/device/#anta.device.AntaDevice.collect_commands","title":"collect_commands async","text":"
    collect_commands(commands: list[AntaCommand]) -> None\n

    Collect multiple commands.

    Parameters:

    Name Type Description Default commands list[AntaCommand]

    the commands to collect

    required Source code in anta/device.py
    async def collect_commands(self, commands: list[AntaCommand]) -> None:\n\"\"\"\n    Collect multiple commands.\n\n    Args:\n        commands: the commands to collect\n    \"\"\"\n    await asyncio.gather(*(self.collect(command=command) for command in commands))\n
    "},{"location":"api/device/#anta.device.AntaDevice.copy","title":"copy async","text":"
    copy(sources: list[Path], destination: Path, direction: Literal['to', 'from'] = 'from') -> None\n

    Copy files to and from the device, usually through SCP. It is not mandatory to implement this for a valid AntaDevice subclass.

    Parameters:

    Name Type Description Default sources list[Path]

    List of files to copy to or from the device.

    required destination Path

    Local or remote destination when copying the files. Can be a folder.

    required direction Literal['to', 'from']

    Defines if this coroutine copies files to or from the device.

    'from' Source code in anta/device.py
    async def copy(self, sources: list[Path], destination: Path, direction: Literal[\"to\", \"from\"] = \"from\") -> None:\n\"\"\"\n    Copy files to and from the device, usually through SCP.\n    It is not mandatory to implement this for a valid AntaDevice subclass.\n\n    Args:\n        sources: List of files to copy to or from the device.\n        destination: Local or remote destination when copying the files. Can be a folder.\n        direction: Defines if this coroutine copies files to or from the device.\n    \"\"\"\n    raise NotImplementedError(f\"copy() method has not been implemented in {self.__class__.__name__} definition\")\n
    "},{"location":"api/device/#anta.device.AntaDevice.refresh","title":"refresh abstractmethod async","text":"
    refresh() -> None\n

    Update attributes of an AntaDevice instance.

    This coroutine must update the following attributes of AntaDevice
    • is_online: When the device IP is reachable and a port can be open
    • established: When a command execution succeeds
    • hw_model: The hardware model of the device
    Source code in anta/device.py
    @abstractmethod\nasync def refresh(self) -> None:\n\"\"\"\n    Update attributes of an AntaDevice instance.\n\n    This coroutine must update the following attributes of AntaDevice:\n        - `is_online`: When the device IP is reachable and a port can be open\n        - `established`: When a command execution succeeds\n        - `hw_model`: The hardware model of the device\n    \"\"\"\n
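
As an illustration of this contract, below is a minimal, hypothetical sketch of an AntaDevice subclass (the DummyDevice name and its canned-output approach are assumptions, e.g. for unit testing) implementing collect() and refresh():

from typing import Any, Optional\n\nfrom anta.device import AntaDevice\nfrom anta.models import AntaCommand\n\n\nclass DummyDevice(AntaDevice):\n    \"\"\"Hypothetical AntaDevice subclass returning pre-recorded outputs (e.g. for unit tests)\"\"\"\n\n    def __init__(self, name: str, outputs: dict[str, Any], tags: Optional[list[str]] = None) -> None:\n        super().__init__(name, tags)\n        self._outputs = outputs  # mapping of CLI command string to pre-recorded output\n\n    async def collect(self, command: AntaCommand) -> None:\n        # Populate the output attribute as required by the AntaDevice contract\n        command.output = self._outputs.get(command.command)\n\n    async def refresh(self) -> None:\n        # No real connection: mark the device as reachable and usable\n        self.is_online = True\n        self.established = True\n        self.hw_model = \"dummy\"\n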
    "},{"location":"api/device/#async-eos-device-class","title":"Async EOS device class","text":""},{"location":"api/device/#uml-representation_1","title":"UML representation","text":""},{"location":"api/device/#anta.device.AsyncEOSDevice","title":"AsyncEOSDevice","text":"
    AsyncEOSDevice(host: str, username: str, password: str, name: Optional[str] = None, enable: bool = False, enable_password: Optional[str] = None, port: Optional[int] = None, ssh_port: Optional[int] = 22, tags: Optional[list[str]] = None, timeout: Optional[float] = None, insecure: bool = False, proto: Literal['http', 'https'] = 'https')\n

    Bases: AntaDevice

    Implementation of AntaDevice for EOS using aio-eapi.

    Attributes:

    Name Type Description name

    Device name

    is_online

    True if the device IP is reachable and a port can be open

    established

    True if remote command execution succeeds

    hw_model

    Hardware model of the device

    tags

    List of tags for this device

    Parameters:

    Name Type Description Default host str

    Device FQDN or IP

    required username str

    Username to connect to eAPI and SSH

    required password str

    Password to connect to eAPI and SSH

    required name Optional[str]

    Device name

    None enable bool

    Device needs privileged access

    False enable_password Optional[str]

    Password used to gain privileged access on EOS

    None port Optional[int]

eAPI port. Defaults to 80 if proto is \u2018http\u2019 or 443 if proto is \u2018https\u2019.

    None ssh_port Optional[int]

    SSH port

    22 tags Optional[list[str]]

    List of tags for this device

    None timeout Optional[float]

Timeout value in seconds for outgoing connections. Defaults to 10 seconds.

    None insecure bool

    Disable SSH Host Key validation

    False proto Literal['http', 'https']

    eAPI protocol. Value can be \u2018http\u2019 or \u2018https\u2019

    'https' Source code in anta/device.py
    def __init__(  # pylint: disable=R0913\n    self,\n    host: str,\n    username: str,\n    password: str,\n    name: Optional[str] = None,\n    enable: bool = False,\n    enable_password: Optional[str] = None,\n    port: Optional[int] = None,\n    ssh_port: Optional[int] = 22,\n    tags: Optional[list[str]] = None,\n    timeout: Optional[float] = None,\n    insecure: bool = False,\n    proto: Literal[\"http\", \"https\"] = \"https\",\n) -> None:\n\"\"\"\n    Constructor of AsyncEOSDevice\n\n    Args:\n        host: Device FQDN or IP\n        username: Username to connect to eAPI and SSH\n        password: Password to connect to eAPI and SSH\n        name: Device name\n        enable: Device needs privileged access\n        enable_password: Password used to gain privileged access on EOS\n        port: eAPI port. Defaults to 80 is proto is 'http' or 443 if proto is 'https'.\n        ssh_port: SSH port\n        tags: List of tags for this device\n        timeout: Timeout value in seconds for outgoing connections. Default to 10 secs.\n        insecure: Disable SSH Host Key validation\n        proto: eAPI protocol. Value can be 'http' or 'https'\n    \"\"\"\n    if name is None:\n        name = f\"{host}{f':{port}' if port else ''}\"\n    super().__init__(name, tags)\n    self.enable = enable\n    self._enable_password = enable_password\n    self._session: Device = Device(host=host, port=port, username=username, password=password, proto=proto, timeout=timeout)\n    ssh_params: dict[str, Any] = {}\n    if insecure:\n        ssh_params.update({\"known_hosts\": None})\n    self._ssh_opts: SSHClientConnectionOptions = SSHClientConnectionOptions(host=host, port=ssh_port, username=username, password=password, **ssh_params)\n
    "},{"location":"api/device/#anta.device.AsyncEOSDevice.collect","title":"collect async","text":"
    collect(command: AntaCommand) -> None\n

    Collect device command output from EOS using aio-eapi.

    Supports outformat json and text as output structure. Gain privileged access using the enable_password attribute of the AntaDevice instance if populated.

    Parameters:

    Name Type Description Default command AntaCommand

    the command to collect

    required Source code in anta/device.py
    async def collect(self, command: AntaCommand) -> None:\n\"\"\"\n    Collect device command output from EOS using aio-eapi.\n\n    Supports outformat `json` and `text` as output structure.\n    Gain privileged access using the `enable_password` attribute\n    of the `AntaDevice` instance if populated.\n\n    Args:\n        command: the command to collect\n    \"\"\"\n    try:\n        commands = []\n        if self.enable and self._enable_password is not None:\n            commands.append(\n                {\n                    \"cmd\": \"enable\",\n                    \"input\": str(self._enable_password),\n                }\n            )\n        elif self.enable:\n            # No password\n            commands.append({\"cmd\": \"enable\"})\n        if command.revision:\n            commands.append({\"cmd\": command.command, \"revision\": command.revision})\n        else:\n            commands.append({\"cmd\": command.command})\n        response = await self._session.cli(\n            commands=commands,\n            ofmt=command.ofmt,\n            version=command.version,\n        )\n        # remove first dict related to enable command\n        # only applicable to json output\n        if command.ofmt in [\"json\", \"text\"]:\n            # selecting only our command output\n            response = response[-1]\n        command.output = response\n        logger.debug(f\"{self.name}: {command}\")\n\n    except EapiCommandError as e:\n        message = f\"Command '{command.command}' failed on {self.name}\"\n        anta_log_exception(e, message, logger)\n        command.failed = e\n    except (HTTPError, ConnectError) as e:\n        message = f\"Cannot connect to device {self.name}\"\n        anta_log_exception(e, message, logger)\n        command.failed = e\n    except Exception as e:  # pylint: disable=broad-exception-caught\n        message = f\"Exception raised while collecting command '{command.command}' on device {self.name}\"\n        anta_log_exception(e, message, logger)\n        command.failed = e\n        logger.debug(command)\n
    "},{"location":"api/device/#anta.device.AsyncEOSDevice.copy","title":"copy async","text":"
    copy(sources: list[Path], destination: Path, direction: Literal['to', 'from'] = 'from') -> None\n

    Copy files to and from the device using asyncssh.scp().

    Parameters:

    Name Type Description Default sources list[Path]

    List of files to copy to or from the device.

    required destination Path

    Local or remote destination when copying the files. Can be a folder.

    required direction Literal['to', 'from']

    Defines if this coroutine copies files to or from the device.

    'from' Source code in anta/device.py
    async def copy(self, sources: list[Path], destination: Path, direction: Literal[\"to\", \"from\"] = \"from\") -> None:\n\"\"\"\n    Copy files to and from the device using asyncssh.scp().\n\n    Args:\n        sources: List of files to copy to or from the device.\n        destination: Local or remote destination when copying the files. Can be a folder.\n        direction: Defines if this coroutine copies files to or from the device.\n    \"\"\"\n    async with asyncssh.connect(\n        host=self._ssh_opts.host,\n        port=self._ssh_opts.port,\n        tunnel=self._ssh_opts.tunnel,\n        family=self._ssh_opts.family,\n        local_addr=self._ssh_opts.local_addr,\n        options=self._ssh_opts,\n    ) as conn:\n        src: Union[list[tuple[SSHClientConnection, Path]], list[Path]]\n        dst: Union[tuple[SSHClientConnection, Path], Path]\n        if direction == \"from\":\n            src = [(conn, file) for file in sources]\n            dst = destination\n            for file in sources:\n                logger.info(f\"Copying '{file}' from device {self.name} to '{destination}' locally\")\n        elif direction == \"to\":\n            src = sources\n            dst = (conn, destination)\n            for file in sources:\n                logger.info(f\"Copying '{file}' to device {self.name} to '{destination}' remotely\")\n        else:\n            logger.critical(f\"'direction' argument to copy() fonction is invalid: {direction}\")\n            return\n        await asyncssh.scp(src, dst)\n
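
A hypothetical usage of this coroutine (the remote file path and local folder are assumptions for illustration), run from within an event loop, could look like:

from pathlib import Path\n\n# Retrieve a log file from the device into a local folder\nawait device.copy(\n    sources=[Path(\"/var/log/messages\")],\n    destination=Path(\"./logs\"),\n    direction=\"from\",\n)\n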
    "},{"location":"api/device/#anta.device.AsyncEOSDevice.refresh","title":"refresh async","text":"
    refresh() -> None\n

    Update attributes of an AsyncEOSDevice instance.

This coroutine must update the following attributes of AsyncEOSDevice:
• is_online: When a device IP is reachable and a port can be open
• established: When a command execution succeeds
• hw_model: The hardware model of the device

    Source code in anta/device.py
    async def refresh(self) -> None:\n\"\"\"\n    Update attributes of an AsyncEOSDevice instance.\n\n    This coroutine must update the following attributes of AsyncEOSDevice:\n    - is_online: When a device IP is reachable and a port can be open\n    - established: When a command execution succeeds\n    - hw_model: The hardware model of the device\n    \"\"\"\n    # Refresh command\n    COMMAND: str = \"show version\"\n    # Hardware model definition in show version\n    HW_MODEL_KEY: str = \"modelName\"\n    logger.debug(f\"Refreshing device {self.name}\")\n    self.is_online = await self._session.check_connection()\n    if self.is_online:\n        try:\n            response = await self._session.cli(command=COMMAND)\n        except EapiCommandError as e:\n            logger.warning(f\"Cannot get hardware information from device {self.name}: {e.errmsg}\")\n        except (HTTPError, ConnectError) as e:\n            logger.warning(f\"Cannot get hardware information from device {self.name}: {exc_to_str(e)}\")\n        else:\n            if HW_MODEL_KEY in response:\n                self.hw_model = response[HW_MODEL_KEY]\n            else:\n                logger.warning(f\"Cannot get hardware information from device {self.name}: cannot parse '{COMMAND}'\")\n    else:\n        logger.warning(f\"Could not connect to device {self.name}: cannot open eAPI port\")\n    self.established = bool(self.is_online and self.hw_model)\n
    "},{"location":"api/inventory/","title":"Inventory module","text":""},{"location":"api/inventory/#anta.inventory.AntaInventory","title":"AntaInventory","text":"

    Bases: dict

    Inventory abstraction for ANTA framework.

    "},{"location":"api/inventory/#anta.inventory.AntaInventory.add_device","title":"add_device","text":"
    add_device(device: AntaDevice) -> None\n

    Add a device to final inventory.

    Parameters:

    Name Type Description Default device AntaDevice

    Device object to be added

    required Source code in anta/inventory/__init__.py
    def add_device(self, device: AntaDevice) -> None:\n\"\"\"Add a device to final inventory.\n\n    Args:\n        device: Device object to be added\n    \"\"\"\n    self[device.name] = device\n
    "},{"location":"api/inventory/#anta.inventory.AntaInventory.connect_inventory","title":"connect_inventory async","text":"
    connect_inventory() -> None\n

    Run refresh() coroutines for all AntaDevice objects in this inventory.

    Source code in anta/inventory/__init__.py
    async def connect_inventory(self) -> None:\n\"\"\"Run `refresh()` coroutines for all AntaDevice objects in this inventory.\"\"\"\n    logger.debug(\"Refreshing devices...\")\n    results = await asyncio.gather(\n        *(device.refresh() for device in self.values()),\n        return_exceptions=True,\n    )\n    for r in results:\n        if isinstance(r, Exception):\n            message = \"Error when refreshing inventory\"\n            anta_log_exception(r, message, logger)\n
    "},{"location":"api/inventory/#anta.inventory.AntaInventory.get_inventory","title":"get_inventory","text":"
    get_inventory(established_only: bool = False, tags: Optional[list[str]] = None) -> AntaInventory\n

    Returns a filtered inventory.

    Parameters:

    Name Type Description Default established_only bool

    Whether or not to include only established devices. Default False.

    False tags Optional[list[str]]

    List of tags to filter devices.

    None

    Returns:

    Name Type Description AntaInventory AntaInventory

    An inventory with filtered AntaDevice objects.

    Source code in anta/inventory/__init__.py
    def get_inventory(self, established_only: bool = False, tags: Optional[list[str]] = None) -> AntaInventory:\n\"\"\"\n    Returns a filtered inventory.\n\n    Args:\n        established_only: Whether or not to include only established devices. Default False.\n        tags: List of tags to filter devices.\n\n    Returns:\n        AntaInventory: An inventory with filtered AntaDevice objects.\n    \"\"\"\n\n    def _filter_devices(device: AntaDevice) -> bool:\n\"\"\"\n        Helper function to select the devices based on the input tags\n        and the requirement for an established connection.\n        \"\"\"\n        if tags is not None and all(tag not in tags for tag in device.tags):\n            return False\n        return bool(not established_only or device.established)\n\n    devices: list[AntaDevice] = list(filter(_filter_devices, self.values()))\n    result = AntaInventory()\n    for device in devices:\n        result.add_device(device)\n    return result\n
    "},{"location":"api/inventory/#anta.inventory.AntaInventory.parse","title":"parse staticmethod","text":"
    parse(inventory_file: str, username: str, password: str, enable: bool = False, enable_password: Optional[str] = None, timeout: Optional[float] = None, insecure: bool = False) -> AntaInventory\n

    Create an AntaInventory instance from an inventory file. The inventory devices are AsyncEOSDevice instances.

    Parameters:

    Name Type Description Default inventory_file str

    Path to inventory YAML file where user has described his inputs

    required username str

    Username to use to connect to devices

    required password str

    Password to use to connect to devices

    required enable bool

    Whether or not the commands need to be run in enable mode towards the devices

    False timeout float

    timeout in seconds for every API call.

    None

    Raises:

    Type Description InventoryRootKeyError

    Root key of inventory is missing.

    InventoryIncorrectSchema

    Inventory file is not following AntaInventory Schema.

    InventoryUnknownFormat

    Output format is not supported.

    Source code in anta/inventory/__init__.py
    @staticmethod\ndef parse(\n    inventory_file: str,\n    username: str,\n    password: str,\n    enable: bool = False,\n    enable_password: Optional[str] = None,\n    timeout: Optional[float] = None,\n    insecure: bool = False,\n) -> AntaInventory:\n    # pylint: disable=too-many-arguments\n\"\"\"\n    Create an AntaInventory instance from an inventory file.\n    The inventory devices are AsyncEOSDevice instances.\n\n    Args:\n        inventory_file (str): Path to inventory YAML file where user has described his inputs\n        username (str): Username to use to connect to devices\n        password (str): Password to use to connect to devices\n        enable (bool): Whether or not the commands need to be run in enable mode towards the devices\n        timeout (float, optional): timeout in seconds for every API call.\n\n    Raises:\n        InventoryRootKeyError: Root key of inventory is missing.\n        InventoryIncorrectSchema: Inventory file is not following AntaInventory Schema.\n        InventoryUnknownFormat: Output format is not supported.\n    \"\"\"\n\n    inventory = AntaInventory()\n    kwargs: dict[str, Any] = {\n        \"username\": username,\n        \"password\": password,\n        \"enable\": enable,\n        \"enable_password\": enable_password,\n        \"timeout\": timeout,\n        \"insecure\": insecure,\n    }\n    kwargs = {k: v for k, v in kwargs.items() if v is not None}\n\n    with open(inventory_file, \"r\", encoding=\"UTF-8\") as file:\n        data = safe_load(file)\n\n    # Load data using Pydantic\n    try:\n        inventory_input = AntaInventoryInput(**data[AntaInventory.INVENTORY_ROOT_KEY])\n    except KeyError as exc:\n        logger.error(f\"Inventory root key is missing: {AntaInventory.INVENTORY_ROOT_KEY}\")\n        raise InventoryRootKeyError(f\"Inventory root key ({AntaInventory.INVENTORY_ROOT_KEY}) is not defined in your inventory\") from exc\n    except ValidationError as exc:\n        logger.error(\"Inventory data are not compliant with inventory models\")\n        raise InventoryIncorrectSchema(f\"Inventory is not following the schema: {str(exc)}\") from exc\n\n    # Read data from input\n    AntaInventory._parse_hosts(inventory_input, inventory, **kwargs)\n    AntaInventory._parse_networks(inventory_input, inventory, **kwargs)\n    AntaInventory._parse_ranges(inventory_input, inventory, **kwargs)\n\n    return inventory\n
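
Putting these methods together, a minimal, hypothetical usage sketch (the inventory file name, the credentials and the \u2018spine\u2019 tag are assumptions for illustration) could look like:

import asyncio\n\nfrom anta.inventory import AntaInventory\n\n# Hypothetical inventory file and credentials\ninventory = AntaInventory.parse(\n    inventory_file=\"inventory.yml\",\n    username=\"admin\",\n    password=\"arista123\",\n)\n\n# Refresh all devices, then keep only reachable devices tagged 'spine'\nasyncio.run(inventory.connect_inventory())\nspines = inventory.get_inventory(established_only=True, tags=[\"spine\"])\nfor device in spines.values():\n    print(device.name, device.hw_model)\n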
    "},{"location":"api/inventory/#anta.inventory.exceptions","title":"exceptions","text":"

    Manage Exception in Inventory module.

    "},{"location":"api/inventory/#anta.inventory.exceptions.InventoryIncorrectSchema","title":"InventoryIncorrectSchema","text":"

    Bases: Exception

    Error when user data does not follow ANTA schema.

    "},{"location":"api/inventory/#anta.inventory.exceptions.InventoryRootKeyError","title":"InventoryRootKeyError","text":"

    Bases: Exception

    Error raised when inventory root key is not found.

    "},{"location":"api/inventory.models.input/","title":"Inventory models","text":""},{"location":"api/inventory.models.input/#anta.inventory.models.AntaInventoryInput","title":"AntaInventoryInput","text":"

    Bases: BaseModel

    User\u2019s inventory model.

    Attributes:

    Name Type Description networks (list[AntaInventoryNetwork], Optional)

    List of AntaInventoryNetwork objects for networks.

    hosts (list[AntaInventoryHost], Optional)

    List of AntaInventoryHost objects for hosts.

    range (list[AntaInventoryRange], Optional)

    List of AntaInventoryRange objects for ranges.

    "},{"location":"api/inventory.models.input/#anta.inventory.models.AntaInventoryHost","title":"AntaInventoryHost","text":"

    Bases: BaseModel

    Host definition for user\u2019s inventory.

    Attributes:

    Name Type Description host IPvAnyAddress

    IPv4 or IPv6 address of the device

    port int

(Optional) eAPI port to use. Default is 443.

    name str

(Optional) Name to display in the test report. Default is hostname:port

    tags list[str]

    List of attached tags read from inventory file.

    "},{"location":"api/inventory.models.input/#anta.inventory.models.AntaInventoryNetwork","title":"AntaInventoryNetwork","text":"

    Bases: BaseModel

    Network definition for user\u2019s inventory.

    Attributes:

    Name Type Description network IPvAnyNetwork

    Subnet to use for testing.

    tags list[str]

    List of attached tags read from inventory file.

    "},{"location":"api/inventory.models.input/#anta.inventory.models.AntaInventoryRange","title":"AntaInventoryRange","text":"

    Bases: BaseModel

    IP Range definition for user\u2019s inventory.

    Attributes:

    Name Type Description start IPvAnyAddress

IPv4 or IPv6 address for the beginning of the range.

    stop IPvAnyAddress

    IPv4 or IPv6 address for the end of the range.

    tags list[str]

    List of attached tags read from inventory file.

    "},{"location":"api/models/","title":"Test models","text":""},{"location":"api/models/#test-definition","title":"Test definition","text":""},{"location":"api/models/#uml-diagram","title":"UML Diagram","text":""},{"location":"api/models/#anta.models.AntaTest","title":"AntaTest","text":"
    AntaTest(device: AntaDevice, inputs: Optional[dict[str, Any]], eos_data: Optional[list[dict[Any, Any] | str]] = None)\n

    Bases: ABC

    Abstract class defining a test in ANTA

    The goal of this class is to handle the heavy lifting and make writing a test as simple as possible.

    Examples:

    The following is an example of an AntaTest subclass implementation:

        class VerifyReachability(AntaTest):\n        name = \"VerifyReachability\"\n        description = \"Test the network reachability to one or many destination IP(s).\"\n        categories = [\"connectivity\"]\n        commands = [AntaTemplate(template=\"ping vrf {vrf} {dst} source {src} repeat 2\")]\n\n        class Input(AntaTest.Input):\n            hosts: list[Host]\n            class Host(BaseModel):\n                dst: IPv4Address\n                src: IPv4Address\n                vrf: str = \"default\"\n\n        def render(self, template: AntaTemplate) -> list[AntaCommand]:\n            return [template.render({\"dst\": host.dst, \"src\": host.src, \"vrf\": host.vrf}) for host in self.inputs.hosts]\n\n        @AntaTest.anta_test\n        def test(self) -> None:\n            failures = []\n            for command in self.instance_commands:\n                if command.params and (\"src\" and \"dst\") in command.params:\n                    src, dst = command.params[\"src\"], command.params[\"dst\"]\n                if \"2 received\" not in command.json_output[\"messages\"][0]:\n                    failures.append((str(src), str(dst)))\n            if not failures:\n                self.result.is_success()\n            else:\n                self.result.is_failure(f\"Connectivity test failed for the following source-destination pairs: {failures}\")\n
Attributes:
• device: AntaDevice instance on which this test is run
• inputs: AntaTest.Input instance carrying the test inputs
• instance_commands: List of AntaCommand instances of this test
• result: TestResult instance representing the result of this test
• logger: Python logger for this test instance

    Parameters:

    Name Type Description Default device AntaDevice

    AntaDevice instance on which the test will be run

    required inputs Optional[dict[str, Any]]

    dictionary of attributes used to instantiate the AntaTest.Input instance

    required eos_data Optional[list[dict[Any, Any] | str]]

Populate outputs of the test commands instead of collecting from devices. This list must have the same length and order as the instance_commands instance attribute.

    None Source code in anta/models.py
    def __init__(\n    self,\n    device: AntaDevice,\n    inputs: Optional[dict[str, Any]],\n    eos_data: Optional[list[dict[Any, Any] | str]] = None,\n):\n\"\"\"AntaTest Constructor\n\n    Args:\n        device: AntaDevice instance on which the test will be run\n        inputs: dictionary of attributes used to instantiate the AntaTest.Input instance\n        eos_data: Populate outputs of the test commands instead of collecting from devices.\n                  This list must have the same length and order than the `instance_commands` instance attribute.\n    \"\"\"\n    self.logger: logging.Logger = logging.getLogger(f\"{self.__module__}.{self.__class__.__name__}\")\n    self.device: AntaDevice = device\n    self.inputs: AntaTest.Input\n    self.instance_commands: list[AntaCommand] = []\n    self.result: TestResult = TestResult(name=device.name, test=self.name, categories=self.categories, description=self.description)\n    self._init_inputs(inputs)\n    if self.result.result == \"unset\":\n        self._init_commands(eos_data)\n
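
The eos_data parameter makes it possible to run a test without collecting anything from a device, which is convenient for offline or unit testing. A minimal, hypothetical sketch (the device object and the canned command output are assumptions for illustration):

import asyncio\n\nfrom anta.tests.hardware import VerifyTemperature  # built-in test used as an example\n\n# 'device' is any AntaDevice instance; the canned output below replaces command collection\ntest = VerifyTemperature(device, inputs=None, eos_data=[{\"systemStatus\": \"temperatureOk\"}])\nresult = asyncio.run(test.test())\nprint(result.result)  # 'success' with this canned output, given the test logic\n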
    "},{"location":"api/models/#anta.models.AntaTest.collected","title":"collected property","text":"
    collected: bool\n

    Returns True if all commands for this test have been collected.

    "},{"location":"api/models/#anta.models.AntaTest.failed_commands","title":"failed_commands property","text":"
    failed_commands: list[AntaCommand]\n

    Returns a list of all the commands that have failed.

    "},{"location":"api/models/#anta.models.AntaTest.Input","title":"Input","text":"

    Bases: BaseModel

    Class defining inputs for a test in ANTA.

    Examples:

    A valid test catalog will look like the following:

    <Python module>:\n- <AntaTest subclass>:\nresult_overwrite:\ncategories:\n- \"Overwritten category 1\"\ndescription: \"Test with overwritten description\"\ncustom_field: \"Test run by John Doe\"\n
    Attributes: result_overwrite: Define fields to overwrite in the TestResult object
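
When instantiating a test directly in Python rather than through a catalog, the same inputs are passed as a dictionary. A minimal sketch mirroring the catalog example above (the device object is an assumption for illustration):

inputs = {\n    \"result_overwrite\": {\n        \"categories\": [\"Overwritten category 1\"],\n        \"description\": \"Test with overwritten description\",\n        \"custom_field\": \"Test run by John Doe\",\n    }\n}\n\n# 'device' is any AntaDevice instance\ntest = VerifyTemperature(device, inputs=inputs)\n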

    "},{"location":"api/models/#anta.models.AntaTest.Input.ResultOverwrite","title":"ResultOverwrite","text":"

    Bases: BaseModel

    Test inputs model to overwrite result fields

    Attributes:

    Name Type Description description Optional[str]

    overwrite TestResult.description

    categories Optional[List[str]]

    overwrite TestResult.categories

    custom_field Optional[str]

    a free string that will be included in the TestResult object

    "},{"location":"api/models/#anta.models.AntaTest.anta_test","title":"anta_test staticmethod","text":"
    anta_test(function: F) -> Callable[..., Coroutine[Any, Any, TestResult]]\n

    Decorator for the test() method.

    This decorator implements (in this order):

    1. Instantiate the command outputs if eos_data is provided to the test() method
    2. Collect the commands from the device
    3. Run the test() method
    4. Catches any exception in test() user code and set the result instance attribute
    Source code in anta/models.py
    @staticmethod\ndef anta_test(function: F) -> Callable[..., Coroutine[Any, Any, TestResult]]:\n\"\"\"\n    Decorator for the `test()` method.\n\n    This decorator implements (in this order):\n\n    1. Instantiate the command outputs if `eos_data` is provided to the `test()` method\n    2. Collect the commands from the device\n    3. Run the `test()` method\n    4. Catches any exception in `test()` user code and set the `result` instance attribute\n    \"\"\"\n\n    @wraps(function)\n    async def wrapper(\n        self: AntaTest,\n        eos_data: Optional[list[dict[Any, Any] | str]] = None,\n        **kwargs: Any,\n    ) -> TestResult:\n\"\"\"\n        Args:\n            eos_data: Populate outputs of the test commands instead of collecting from devices.\n                      This list must have the same length and order than the `instance_commands` instance attribute.\n\n        Returns:\n            result: TestResult instance attribute populated with error status if any\n        \"\"\"\n\n        def format_td(seconds: float, digits: int = 3) -> str:\n            isec, fsec = divmod(round(seconds * 10**digits), 10**digits)\n            return f\"{timedelta(seconds=isec)}.{fsec:0{digits}.0f}\"\n\n        start_time = time.time()\n        if self.result.result != \"unset\":\n            return self.result\n\n        # TODO maybe_skip decorators\n\n        # Data\n        if eos_data is not None:\n            self.save_commands_data(eos_data)\n            self.logger.debug(f\"Test {self.name} initialized with input data {eos_data}\")\n\n        # If some data is missing, try to collect\n        if not self.collected:\n            await self.collect()\n            if self.result.result != \"unset\":\n                return self.result\n\n        try:\n            if self.failed_commands:\n                self.result.is_error(\n                    message=\"\\n\".join(\n                        [f\"{cmd.command} has failed: {exc_to_str(cmd.failed)}\" if cmd.failed else f\"{cmd.command} has failed\" for cmd in self.failed_commands]\n                    )\n                )\n                return self.result\n            function(self, **kwargs)\n        except Exception as e:  # pylint: disable=broad-exception-caught\n            message = f\"Exception raised for test {self.name} (on device {self.device.name})\"\n            anta_log_exception(e, message, self.logger)\n            self.result.is_error(message=exc_to_str(e))\n\n        test_duration = time.time() - start_time\n        self.logger.debug(f\"Executing test {self.name} on device {self.device.name} took {format_td(test_duration)}\")\n\n        AntaTest.update_progress()\n        return self.result\n\n    return wrapper\n
    "},{"location":"api/models/#anta.models.AntaTest.collect","title":"collect async","text":"
    collect() -> None\n

    Method used to collect outputs of all commands of this test class from the device of this test instance.

    Source code in anta/models.py
    async def collect(self) -> None:\n\"\"\"\n    Method used to collect outputs of all commands of this test class from the device of this test instance.\n    \"\"\"\n    try:\n        await self.device.collect_commands(self.instance_commands)\n    except Exception as e:  # pylint: disable=broad-exception-caught\n        message = f\"Exception raised while collecting commands for test {self.name} (on device {self.device.name})\"\n        anta_log_exception(e, message, self.logger)\n        self.result.is_error(message=exc_to_str(e))\n
    "},{"location":"api/models/#anta.models.AntaTest.render","title":"render","text":"
    render(template: AntaTemplate) -> list[AntaCommand]\n

    Render an AntaTemplate instance of this AntaTest using the provided AntaTest.Input instance at self.inputs.

    This is not an abstract method because it does not need to be implemented if there is no AntaTemplate for this test.

    Source code in anta/models.py
    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n\"\"\"Render an AntaTemplate instance of this AntaTest using the provided\n       AntaTest.Input instance at self.inputs.\n\n    This is not an abstract method because it does not need to be implemented if there is\n    no AntaTemplate for this test.\"\"\"\n    raise NotImplementedError(f\"AntaTemplate are provided but render() method has not been implemented for {self.__module__}.{self.name}\")\n
    "},{"location":"api/models/#anta.models.AntaTest.save_commands_data","title":"save_commands_data","text":"
    save_commands_data(eos_data: list[dict[str, Any] | str]) -> None\n

    Populate output of all AntaCommand instances in instance_commands

    Source code in anta/models.py
    def save_commands_data(self, eos_data: list[dict[str, Any] | str]) -> None:\n\"\"\"Populate output of all AntaCommand instances in `instance_commands`\"\"\"\n    if len(eos_data) != len(self.instance_commands):\n        self.result.is_error(message=\"Test initialization error: Trying to save more data than there are commands for the test\")\n        return\n    for index, data in enumerate(eos_data or []):\n        self.instance_commands[index].output = data\n
    "},{"location":"api/models/#anta.models.AntaTest.test","title":"test abstractmethod","text":"
    test() -> Coroutine[Any, Any, TestResult]\n

    This abstract method is the core of the test logic. It must set the correct status of the result instance attribute with the appropriate outcome of the test.

    Examples:

    It must be implemented using the AntaTest.anta_test decorator:

@AntaTest.anta_test\ndef test(self) -> None:\n    self.result.is_success()\n    for command in self.instance_commands:\n        if not self._test_command(command): # _test_command() is an arbitrary test logic\n            self.result.is_failure(\"Failure reason\")\n

    Source code in anta/models.py
    @abstractmethod\ndef test(self) -> Coroutine[Any, Any, TestResult]:\n\"\"\"\n    This abstract method is the core of the test logic.\n    It must set the correct status of the `result` instance attribute\n    with the appropriate outcome of the test.\n\n    Examples:\n    It must be implemented using the `AntaTest.anta_test` decorator:\n        ```python\n        @AntaTest.anta_test\n        def test(self) -> None:\n            self.result.is_success()\n            for command in self.instance_commands:\n                if not self._test_command(command): # _test_command() is an arbitrary test logic\n                    self.result.is_failure(\"Failure reson\")\n        ```\n    \"\"\"\n
    "},{"location":"api/models/#command-definition","title":"Command definition","text":""},{"location":"api/models/#uml-diagram_1","title":"UML Diagram","text":""},{"location":"api/models/#anta.models.AntaCommand","title":"AntaCommand","text":"

    Bases: BaseModel

    Class to define a command.

    Info

    eAPI models are revisioned, this means that if a model is modified in a non-backwards compatible way, then its revision will be bumped up (revisions are numbers, default value is 1).

    By default an eAPI request will return revision 1 of the model instance, this ensures that older management software will not suddenly stop working when a switch is upgraded. A revision applies to a particular CLI command whereas a version is global and is internally translated to a specific revision for each CLI command in the RPC.

    Revision has precedence over version.

    Attributes:

    Name Type Description command str

    Device command

    version Literal[1, 'latest']

    eAPI version - valid values are 1 or \u201clatest\u201d - default is \u201clatest\u201d

    revision Optional[conint(ge=1, le=99)]

    eAPI revision of the command. Valid values are 1 to 99. Revision has precedence over version.

    ofmt Literal['json', 'text']

    eAPI output - json or text - default is json

    template Optional[AntaTemplate]

    AntaTemplate object used to render this command

    params Optional[Dict[str, Any]]

    dictionary of variables with string values to render the template

    failed Optional[Exception]

    If the command execution fails, the Exception object is stored in this field

    "},{"location":"api/models/#anta.models.AntaCommand.collected","title":"collected property","text":"
    collected: bool\n

    Return True if the command has been collected

    "},{"location":"api/models/#anta.models.AntaCommand.json_output","title":"json_output property","text":"
    json_output: dict[str, Any]\n

    Get the command output as JSON

    "},{"location":"api/models/#anta.models.AntaCommand.text_output","title":"text_output property","text":"
    text_output: str\n

    Get the command output as a string

    "},{"location":"api/models/#template-definition","title":"Template definition","text":""},{"location":"api/models/#uml-diagram_2","title":"UML Diagram","text":""},{"location":"api/models/#anta.models.AntaTemplate","title":"AntaTemplate","text":"

    Bases: BaseModel

    Class to define a command template as Python f-string. Can render a command from parameters.

    Attributes:

    Name Type Description template str

    Python f-string. Example: \u2018show vlan {vlan_id}\u2019

    version Literal[1, 'latest']

    eAPI version - valid values are 1 or \u201clatest\u201d - default is \u201clatest\u201d

    revision Optional[conint(ge=1, le=99)]

    Revision of the command. Valid values are 1 to 99. Revision has precedence over version.

    ofmt Literal['json', 'text']

    eAPI output - json or text - default is json

    "},{"location":"api/models/#anta.models.AntaTemplate.render","title":"render","text":"
    render(**params: dict[str, Any]) -> AntaCommand\n

    Render an AntaCommand from an AntaTemplate instance. Keep the parameters used in the AntaTemplate instance.

    Parameters:

    Name Type Description Default params dict[str, Any]

    dictionary of variables with string values to render the Python f-string

    {}

    Returns:

    Name Type Description command AntaCommand

The rendered AntaCommand. This AntaCommand instance has a template attribute that references this AntaTemplate instance.

    Source code in anta/models.py
    def render(self, **params: dict[str, Any]) -> AntaCommand:\n\"\"\"Render an AntaCommand from an AntaTemplate instance.\n    Keep the parameters used in the AntaTemplate instance.\n\n    Args:\n        params: dictionary of variables with string values to render the Python f-string\n\n    Returns:\n        command: The rendered AntaCommand.\n                 This AntaCommand instance have a template attribute that references this\n                 AntaTemplate instance.\n    \"\"\"\n    try:\n        return AntaCommand(command=self.template.format(**params), ofmt=self.ofmt, version=self.version, revision=self.revision, template=self, params=params)\n    except KeyError as e:\n        raise AntaTemplateRenderError(self, e.args[0]) from e\n
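
A minimal usage sketch of this method, using the \u2018show vlan {vlan_id}\u2019 template mentioned above:

from anta.models import AntaTemplate\n\ntemplate = AntaTemplate(template=\"show vlan {vlan_id}\")\ncommand = template.render(vlan_id=10)\nprint(command.command)  # show vlan 10\nprint(command.template is template)  # True: the rendered command keeps a reference to its template\n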
    "},{"location":"api/report_manager/","title":"Report Manager module","text":""},{"location":"api/report_manager/#anta.reporter.ReportTable","title":"ReportTable","text":"
    ReportTable()\n

TableReport: generates a Table based on TestResult.

    Source code in anta/reporter/__init__.py
    def __init__(self) -> None:\n\"\"\"\n    __init__ Class constructor\n    \"\"\"\n    self.colors = []\n    self.colors.append(ColorManager(level=\"success\", color=RICH_COLOR_PALETTE.SUCCESS))\n    self.colors.append(ColorManager(level=\"failure\", color=RICH_COLOR_PALETTE.FAILURE))\n    self.colors.append(ColorManager(level=\"error\", color=RICH_COLOR_PALETTE.ERROR))\n    self.colors.append(ColorManager(level=\"skipped\", color=RICH_COLOR_PALETTE.SKIPPED))\n
    "},{"location":"api/report_manager/#anta.reporter.ReportTable.report_all","title":"report_all","text":"
    report_all(result_manager: ResultManager, host: Optional[str] = None, testcase: Optional[str] = None, title: str = 'All tests results') -> Table\n

    Create a table report with all tests for one or all devices.

    Create table with full output: Host / Test / Status / Message

    Parameters:

    Name Type Description Default result_manager ResultManager

    A manager with a list of tests.

    required host str

    IP Address of a host to search for. Defaults to None.

    None testcase str

    A test name to search for. Defaults to None.

    None title str

    Title for the report. Defaults to \u2018All tests results\u2019.

    'All tests results'

    Returns:

    Name Type Description Table Table

    A fully populated rich Table

    Source code in anta/reporter/__init__.py
    def report_all(\n    self,\n    result_manager: ResultManager,\n    host: Optional[str] = None,\n    testcase: Optional[str] = None,\n    title: str = \"All tests results\",\n) -> Table:\n\"\"\"\n    Create a table report with all tests for one or all devices.\n\n    Create table with full output: Host / Test / Status / Message\n\n    Args:\n        result_manager (ResultManager): A manager with a list of tests.\n        host (str, optional): IP Address of a host to search for. Defaults to None.\n        testcase (str, optional): A test name to search for. Defaults to None.\n        title (str, optional): Title for the report. Defaults to 'All tests results'.\n\n    Returns:\n        Table: A fully populated rich Table\n    \"\"\"\n    table = Table(title=title)\n    headers = [\"Device\", \"Test Name\", \"Test Status\", \"Message(s)\", \"Test description\", \"Test category\"]\n    table = self._build_headers(headers=headers, table=table)\n\n    for result in result_manager.get_results(output_format=\"list\"):\n        # pylint: disable=R0916\n        if (host is None and testcase is None) or (host is not None and str(result.name) == host) or (testcase is not None and testcase == str(result.test)):\n            state = self._color_result(status=str(result.result), output_type=\"str\")\n            message = self._split_list_to_txt_list(result.messages) if len(result.messages) > 0 else \"\"\n            categories = \", \".join(result.categories)\n            table.add_row(str(result.name), result.test, state, message, result.description, categories)\n    return table\n
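
A minimal, hypothetical usage sketch (result_manager is assumed to be an already populated ResultManager instance):

from rich.console import Console\n\nfrom anta.reporter import ReportTable\n\n# 'result_manager' is assumed to be an already populated ResultManager instance\nreport = ReportTable()\ntable = report.report_all(result_manager=result_manager, title=\"All tests results\")\nConsole().print(table)\n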
    "},{"location":"api/report_manager/#anta.reporter.ReportTable.report_summary_hosts","title":"report_summary_hosts","text":"
    report_summary_hosts(result_manager: ResultManager, host: Optional[str] = None, title: str = 'Summary per host') -> Table\n

    Create a table report with results aggregated per host.

    Create table with full output: Host / Number of success / Number of failure / Number of error / List of nodes in error or failure

    Parameters:

    Name Type Description Default result_manager ResultManager

    A manager with a list of tests.

    required host str

    IP Address of a host to search for. Defaults to None.

    None title str

    Title for the report. Defaults to 'Summary per host'.

    'Summary per host'

    Returns:

    Name Type Description Table Table

    A fully populated rich Table

    Source code in anta/reporter/__init__.py
    def report_summary_hosts(\n    self,\n    result_manager: ResultManager,\n    host: Optional[str] = None,\n    title: str = \"Summary per host\",\n) -> Table:\n\"\"\"\n    Create a table report with result agregated per host.\n\n    Create table with full output: Host / Number of success / Number of failure / Number of error / List of nodes in error or failure\n\n    Args:\n        result_manager (ResultManager): A manager with a list of tests.\n        host (str, optional): IP Address of a host to search for. Defaults to None.\n        title (str, optional): Title for the report. Defaults to 'All tests results'.\n\n    Returns:\n        Table: A fully populated rich Table\n    \"\"\"\n    table = Table(title=title)\n    headers = [\n        \"Device\",\n        \"# of success\",\n        \"# of skipped\",\n        \"# of failure\",\n        \"# of errors\",\n        \"List of failed or error test cases\",\n    ]\n    table = self._build_headers(headers=headers, table=table)\n    for host_read in result_manager.get_hosts():\n        if host is None or str(host_read) == host:\n            results = result_manager.get_result_by_host(host_read)\n            logger.debug(\"data to use for computation\")\n            logger.debug(f\"{host}: {results}\")\n            nb_failure = len([result for result in results if result.result == \"failure\"])\n            nb_error = len([result for result in results if result.result == \"error\"])\n            list_failure = [str(result.test) for result in results if result.result in [\"failure\", \"error\"]]\n            nb_success = len([result for result in results if result.result == \"success\"])\n            nb_skipped = len([result for result in results if result.result == \"skipped\"])\n            table.add_row(\n                str(host_read),\n                str(nb_success),\n                str(nb_skipped),\n                str(nb_failure),\n                str(nb_error),\n                str(list_failure),\n            )\n    return table\n
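    Building on the same assumed reporter and manager objects from the previous sketch, a per-host summary could be rendered as follows (sketch only; the host value is illustrative):

        summary_hosts = reporter.report_summary_hosts(result_manager=manager)
        # Restrict the summary to a single device by passing its IP address as host
        single_host = reporter.report_summary_hosts(result_manager=manager, host="192.168.0.10")
        Console().print(summary_hosts)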
    "},{"location":"api/report_manager/#anta.reporter.ReportTable.report_summary_tests","title":"report_summary_tests","text":"
    report_summary_tests(result_manager: ResultManager, testcase: Optional[str] = None, title: str = 'Summary per test case') -> Table\n

    Create a table report with results aggregated per test.

    Create table with full output: Test / Number of success / Number of failure / Number of error / List of nodes in error or failure

    Parameters:

    Name Type Description Default result_manager ResultManager

    A manager with a list of tests.

    required testcase str

    A test name to search for. Defaults to None.

    None title str

    Title for the report. Defaults to 'Summary per test case'.

    'Summary per test case'

    Returns:

    Name Type Description Table Table

    A fully populated rich Table

    Source code in anta/reporter/__init__.py
    def report_summary_tests(\n    self,\n    result_manager: ResultManager,\n    testcase: Optional[str] = None,\n    title: str = \"Summary per test case\",\n) -> Table:\n\"\"\"\n    Create a table report with result agregated per test.\n\n    Create table with full output: Test / Number of success / Number of failure / Number of error / List of nodes in error or failure\n\n    Args:\n        result_manager (ResultManager): A manager with a list of tests.\n        testcase (str, optional): A test name to search for. Defaults to None.\n        title (str, optional): Title for the report. Defaults to 'All tests results'.\n\n    Returns:\n        Table: A fully populated rich Table\n    \"\"\"\n    # sourcery skip: class-extract-method\n    table = Table(title=title)\n    headers = [\n        \"Test Case\",\n        \"# of success\",\n        \"# of skipped\",\n        \"# of failure\",\n        \"# of errors\",\n        \"List of failed or error nodes\",\n    ]\n    table = self._build_headers(headers=headers, table=table)\n    for testcase_read in result_manager.get_testcases():\n        if testcase is None or str(testcase_read) == testcase:\n            results = result_manager.get_result_by_test(testcase_read)\n            nb_failure = len([result for result in results if result.result == \"failure\"])\n            nb_error = len([result for result in results if result.result == \"error\"])\n            list_failure = [str(result.name) for result in results if result.result in [\"failure\", \"error\"]]\n            nb_success = len([result for result in results if result.result == \"success\"])\n            nb_skipped = len([result for result in results if result.result == \"skipped\"])\n            table.add_row(\n                testcase_read,\n                str(nb_success),\n                str(nb_skipped),\n                str(nb_failure),\n                str(nb_error),\n                str(list_failure),\n            )\n    return table\n
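    Similarly, a per-test summary (again assuming the reporter and manager objects from the sketches above; the test name is illustrative):

        summary_tests = reporter.report_summary_tests(result_manager=manager)
        # Or focus on one test case by name
        ntp_summary = reporter.report_summary_tests(result_manager=manager, testcase="VerifyNTP")
        Console().print(summary_tests)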
    "},{"location":"api/report_manager_models/","title":"Report Manager models","text":""},{"location":"api/report_manager_models/#anta.reporter.models.ColorManager","title":"ColorManager","text":"

    Bases: BaseModel

    Color management for status report.

    Attributes:

    Name Type Description level str

    Test result value.

    color str

    Associated color.

    "},{"location":"api/report_manager_models/#anta.reporter.models.ColorManager.string","title":"string","text":"
    string() -> str\n

    Build an str with color code

    Returns:

    Name Type Description str str

    String with level and its associated color

    Source code in anta/reporter/models.py
    def string(self) -> str:\n\"\"\"\n    Build an str with color code\n\n    Returns:\n        str: String with level and its associated color\n    \"\"\"\n    return f\"[{self.color}]{self.level}\"\n
    "},{"location":"api/report_manager_models/#anta.reporter.models.ColorManager.style_rich","title":"style_rich","text":"
    style_rich() -> Text\n

    Build a rich Text syntax with color

    Returns:

    Name Type Description Text Text

    object with level string and its associated color.

    Source code in anta/reporter/models.py
    def style_rich(self) -> Text:\n\"\"\"\n    Build a rich Text syntax with color\n\n    Returns:\n        Text: object with level string and its associated color.\n    \"\"\"\n    return Text(self.level, style=self.color)\n
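    A small sketch of ColorManager usage (the color value below is an assumed rich color name; ReportTable normally uses the values from RICH_COLOR_PALETTE):

        from anta.reporter.models import ColorManager
        from rich.console import Console

        cm = ColorManager(level="success", color="green4")  # "green4" is illustrative
        print(cm.string())                # "[green4]success" - rich console markup
        Console().print(cm.style_rich())  # rich Text object rendered in the associated color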
    "},{"location":"api/result_manager/","title":"Result Manager module","text":""},{"location":"api/result_manager/#result-manager-definition","title":"Result Manager definition","text":""},{"location":"api/result_manager/#uml-diagram","title":"UML Diagram","text":""},{"location":"api/result_manager/#anta.result_manager.ResultManager","title":"ResultManager","text":"
    ResultManager()\n

    Helper to manage Test Results and generate reports.

    Examples:

    Create Inventory:\n\n    inventory_anta = AntaInventory.parse(\n        inventory_file='examples/inventory.yml',\n        username='ansible',\n        password='ansible',\n        timeout=0.5\n    )\n\nCreate Result Manager:\n\n    manager = ResultManager()\n\nRun tests for all connected devices:\n\n    for device in inventory_anta.get_inventory():\n        manager.add_test_result(\n            VerifyNTP(device=device).test()\n        )\n        manager.add_test_result(\n            VerifyEOSVersion(device=device).test(version='4.28.3M')\n        )\n\nPrint result in native format:\n\n    manager.get_results()\n    [\n        TestResult(\n            host=IPv4Address('192.168.0.10'),\n            test='VerifyNTP',\n            result='failure',\n            message=\"device is not running NTP correctly\"\n        ),\n        TestResult(\n            host=IPv4Address('192.168.0.10'),\n            test='VerifyEOSVersion',\n            result='success',\n            message=None\n        ),\n    ]\n

    The status of the class is initialized to \u201cunset\u201d

    Then when adding a test with a status that is NOT \u2018error\u2019 the following table shows the updated status:

    | Current Status | Added test Status                | Updated Status |
    | -------------- | -------------------------------- | -------------- |
    | unset          | Any                              | Any            |
    | skipped        | unset, skipped                   | skipped        |
    | skipped        | success                          | success        |
    | skipped        | failure                          | failure        |
    | success        | unset, skipped, success          | success        |
    | success        | failure                          | failure        |
    | failure        | unset, skipped, success, failure | failure        |

    If the status of the added test is error, the status is untouched and the error_status is set to True.

    Source code in anta/result_manager/__init__.py
    def __init__(self) -> None:\n\"\"\"\n    Class constructor.\n\n    The status of the class is initialized to \"unset\"\n\n    Then when adding a test with a status that is NOT 'error' the following\n    table shows the updated status:\n\n    | Current Status |         Added test Status       | Updated Status |\n    | -------------- | ------------------------------- | -------------- |\n    |      unset     |              Any                |       Any      |\n    |     skipped    |         unset, skipped          |     skipped    |\n    |     skipped    |            success              |     success    |\n    |     skipped    |            failure              |     failure    |\n    |     success    |     unset, skipped, success     |     success    |\n    |     success    |            failure              |     failure    |\n    |     failure    | unset, skipped success, failure |     failure    |\n\n    If the status of the added test is error, the status is untouched and the\n    error_status is set to True.\n    \"\"\"\n    self._result_entries = ListResult()\n    # Initialize status\n    self.status: TestStatus = \"unset\"\n    self.error_status = False\n
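    A hedged sketch of the status logic described above (the TestResult field values are illustrative and may need to be adapted to the exact model requirements):

        from anta.result_manager import ResultManager
        from anta.result_manager.models import TestResult

        manager = ResultManager()
        print(manager.get_status())  # "unset"

        ok = TestResult(name="spine1", test="VerifyZeroTouch", categories=["configuration"], description="Verifies ZeroTouch is disabled")
        ok.is_success()
        ko = TestResult(name="leaf1", test="VerifyZeroTouch", categories=["configuration"], description="Verifies ZeroTouch is disabled")
        ko.is_failure("ZTP is NOT disabled")

        manager.add_test_results([ok, ko])
        print(manager.get_status())  # "failure" (success then failure -> failure, per the table above)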
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.add_test_result","title":"add_test_result","text":"
    add_test_result(entry: TestResult) -> None\n

    Add a result to the list

    Parameters:

    Name Type Description Default entry TestResult

    TestResult data to add to the report

    required Source code in anta/result_manager/__init__.py
    def add_test_result(self, entry: TestResult) -> None:\n\"\"\"Add a result to the list\n\n    Args:\n        entry (TestResult): TestResult data to add to the report\n    \"\"\"\n    self._result_entries.append(entry)\n    self._update_status(entry.result)\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.add_test_results","title":"add_test_results","text":"
    add_test_results(entries: list[TestResult]) -> None\n

    Add a list of results to the list

    Parameters:

    Name Type Description Default entries list[TestResult]

    List of TestResult data to add to the report

    required Source code in anta/result_manager/__init__.py
    def add_test_results(self, entries: list[TestResult]) -> None:\n\"\"\"Add a list of results to the list\n\n    Args:\n        entries (list[TestResult]): List of TestResult data to add to the report\n    \"\"\"\n    for e in entries:\n        self.add_test_result(e)\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_hosts","title":"get_hosts","text":"
    get_hosts() -> list[str]\n

    Get the list of IP addresses in the current manager.

    Returns:

    Type Description list[str]

    list[str]: List of IP addresses.

    Source code in anta/result_manager/__init__.py
    def get_hosts(self) -> list[str]:\n\"\"\"\n    Get list of IP addresses in current manager.\n\n    Returns:\n        list[str]: List of IP addresses.\n    \"\"\"\n    result_list = []\n    for testcase in self._result_entries:\n        if str(testcase.name) not in result_list:\n            result_list.append(str(testcase.name))\n    return result_list\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_result_by_host","title":"get_result_by_host","text":"
    get_result_by_host(host_ip: str, output_format: str = 'native') -> Any\n

    Get list of test result for a given host.

    Parameters:

    Name Type Description Default host_ip str

    IP Address of the host to use to filter results.

    required output_format str

    format selector. Can be either native/list. Defaults to \u2018native\u2019.

    'native'

    Returns:

    Name Type Description Any Any

    List of results related to the host.

    Source code in anta/result_manager/__init__.py
    def get_result_by_host(self, host_ip: str, output_format: str = \"native\") -> Any:\n\"\"\"\n    Get list of test result for a given host.\n\n    Args:\n        host_ip (str): IP Address of the host to use to filter results.\n        output_format (str, optional): format selector. Can be either native/list. Defaults to 'native'.\n\n    Returns:\n        Any: List of results related to the host.\n    \"\"\"\n    if output_format == \"list\":\n        return [result for result in self._result_entries if str(result.name) == host_ip]\n\n    result_manager_filtered = ListResult()\n    for result in self._result_entries:\n        if str(result.name) == host_ip:\n            result_manager_filtered.append(result)\n    return result_manager_filtered\n
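    Continuing the sketch above, the results for a single device can be extracted either as the native ListResult or as a plain list:

        leaf1_results = manager.get_result_by_host("leaf1", output_format="list")
        print([str(r.test) for r in leaf1_results])  # e.g. ["VerifyZeroTouch"]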
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_result_by_test","title":"get_result_by_test","text":"
    get_result_by_test(test_name: str, output_format: str = 'native') -> Any\n

    Get the list of test results for a given test.

    Parameters:

    Name Type Description Default test_name str

    Test name to use to filter results

    required output_format str

    format selector. Can be either native/list. Defaults to \u2018native\u2019.

    'native'

    Returns:

    Type Description Any

    list[TestResult]: List of results related to the test.

    Source code in anta/result_manager/__init__.py
    def get_result_by_test(self, test_name: str, output_format: str = \"native\") -> Any:\n\"\"\"\n    Get list of test result for a given test.\n\n    Args:\n        test_name (str): Test name to use to filter results\n        output_format (str, optional): format selector. Can be either native/list. Defaults to 'native'.\n\n    Returns:\n        list[TestResult]: List of results related to the test.\n    \"\"\"\n    if output_format == \"list\":\n        return [result for result in self._result_entries if str(result.test) == test_name]\n\n    result_manager_filtered = ListResult()\n    for result in self._result_entries:\n        if result.test == test_name:\n            result_manager_filtered.append(result)\n    return result_manager_filtered\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_results","title":"get_results","text":"
    get_results(output_format: str = 'native') -> Any\n

    Expose the list of all test results in different formats.

    Supports multiple formats:
    • native: ListResults format
    • list: a list of TestResult
    • json: a native JSON format

    Parameters:

    Name Type Description Default output_format str

    format selector. Can be either native/list/json. Defaults to \u2018native\u2019.

    'native'

    Returns:

    Name Type Description any Any

    List of results.

    Source code in anta/result_manager/__init__.py
    def get_results(self, output_format: str = \"native\") -> Any:\n\"\"\"\n    Expose list of all test results in different format\n\n    Support multiple format:\n      - native: ListResults format\n      - list: a list of TestResult\n      - json: a native JSON format\n\n    Args:\n        output_format (str, optional): format selector. Can be either native/list/json. Defaults to 'native'.\n\n    Returns:\n        any: List of results.\n    \"\"\"\n    if output_format == \"list\":\n        return list(self._result_entries)\n\n    if output_format == \"json\":\n        return json.dumps(pydantic_to_dict(self._result_entries), indent=4)\n\n    if output_format == \"native\":\n        # Default return for native format.\n        return self._result_entries\n    raise ValueError(f\"{output_format} is not a valid value ['list', 'json', 'native']\")\n
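    The output_format selector can be exercised as follows (manager is assumed to be populated as in the earlier sketch):

        native_results = manager.get_results()                    # ListResult (default "native" format)
        list_results = manager.get_results(output_format="list")  # list[TestResult]
        json_results = manager.get_results(output_format="json")  # JSON string, indented
        # Any other value raises ValueError, e.g. manager.get_results(output_format="csv")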
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_status","title":"get_status","text":"
    get_status(ignore_error: bool = False) -> str\n

    Returns the current status including error_status if ignore_error is False

    Source code in anta/result_manager/__init__.py
    def get_status(self, ignore_error: bool = False) -> str:\n\"\"\"\n    Returns the current status including error_status if ignore_error is False\n    \"\"\"\n    return \"error\" if self.error_status and not ignore_error else self.status\n
    "},{"location":"api/result_manager/#anta.result_manager.ResultManager.get_testcases","title":"get_testcases","text":"
    get_testcases() -> list[str]\n

    Get the list of names of all test cases in the current manager.

    Returns:

    Type Description list[str]

    list[str]: List of names for all tests.

    Source code in anta/result_manager/__init__.py
    def get_testcases(self) -> list[str]:\n\"\"\"\n    Get list of name of all test cases in current manager.\n\n    Returns:\n        list[str]: List of names for all tests.\n    \"\"\"\n    result_list = []\n    for testcase in self._result_entries:\n        if str(testcase.test) not in result_list:\n            result_list.append(str(testcase.test))\n    return result_list\n
    "},{"location":"api/result_manager_models/","title":"Result Manager models","text":""},{"location":"api/result_manager_models/#test-result-model","title":"Test Result model","text":""},{"location":"api/result_manager_models/#uml-diagram","title":"UML Diagram","text":""},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult","title":"TestResult","text":"

    Bases: BaseModel

    Describe the result of a test from a single device.

    Attributes:

    Name Type Description name str

    Device name where the test has run.

    test str

    Name of the test run on the device.

    categories List[str]

    List of categories the TestResult belongs to, by default the AntaTest categories.

    description str

    TestResult description, by default the AntaTest description.

    results str

    Result of the test. Can be one of [\u201cunset\u201d, \u201csuccess\u201d, \u201cfailure\u201d, \u201cerror\u201d, \u201cskipped\u201d].

    message str

    Message to report after the test if any.

    error Optional[Exception]

    Exception object if the test result is "error" and an Exception occurred

    custom_field Optional[str]

    Custom field to store a string for flexibility in integrating with ANTA

    "},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult.is_error","title":"is_error","text":"
    is_error(message: str | None = None, exception: Exception | None = None) -> None\n

    Helper to set status to error

    Parameters:

    Name Type Description Default exception Exception | None

    Optional Exception object related to the error

    None Source code in anta/result_manager/models.py
    def is_error(self, message: str | None = None, exception: Exception | None = None) -> None:\n\"\"\"\n    Helper to set status to error\n\n    Args:\n        exception: Optional Exception object related to the error\n    \"\"\"\n    self._set_status(\"error\", message)\n    self.error = exception\n
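    A brief sketch of this helper, attaching an exception to an error result (the TestResult field values and the raised exception are illustrative):

        result = TestResult(name="leaf1", test="VerifyNTP", categories=["system"], description="Verifies NTP synchronisation")
        try:
            raise ValueError("unexpected eAPI output")  # stand-in for a real collection failure
        except ValueError as exc:
            result.is_error(message="command collection failed", exception=exc)
        print(result.result)  # "error"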
    "},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult.is_failure","title":"is_failure","text":"
    is_failure(message: str | None = None) -> None\n

    Helper to set status to failure

    Parameters:

    Name Type Description Default message str | None

    Optional message related to the test

    None Source code in anta/result_manager/models.py
    def is_failure(self, message: str | None = None) -> None:\n\"\"\"\n    Helper to set status to failure\n\n    Args:\n        message: Optional message related to the test\n    \"\"\"\n    self._set_status(\"failure\", message)\n
    "},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult.is_skipped","title":"is_skipped","text":"
    is_skipped(message: str | None = None) -> None\n

    Helper to set status to skipped

    Parameters:

    Name Type Description Default message str | None

    Optional message related to the test

    None Source code in anta/result_manager/models.py
    def is_skipped(self, message: str | None = None) -> None:\n\"\"\"\n    Helper to set status to skipped\n\n    Args:\n        message: Optional message related to the test\n    \"\"\"\n    self._set_status(\"skipped\", message)\n
    "},{"location":"api/result_manager_models/#anta.result_manager.models.TestResult.is_success","title":"is_success","text":"
    is_success(message: str | None = None) -> None\n

    Helper to set status to success

    Parameters:

    Name Type Description Default message str | None

    Optional message related to the test

    None Source code in anta/result_manager/models.py
    def is_success(self, message: str | None = None) -> None:\n\"\"\"\n    Helper to set status to success\n\n    Args:\n        message: Optional message related to the test\n    \"\"\"\n    self._set_status(\"success\", message)\n
    "},{"location":"api/result_manager_models/#anta.result_manager.models.ListResult","title":"ListResult","text":"

    Bases: RootModel[List[TestResult]]

    List of results for all tests on all devices.

    Attributes:

    Name Type Description __root__ list[TestResult]

    A list of TestResult objects.

    "},{"location":"api/result_manager_models/#anta.result_manager.models.ListResult.append","title":"append","text":"
    append(value: TestResult) -> None\n

    Add support for append method.

    Source code in anta/result_manager/models.py
    def append(self, value: TestResult) -> None:\n\"\"\"Add support for append method.\"\"\"\n    self.root.append(value)\n
    "},{"location":"api/result_manager_models/#anta.result_manager.models.ListResult.extend","title":"extend","text":"
    extend(values: list[TestResult]) -> None\n

    Add support for extend method.

    Source code in anta/result_manager/models.py
    def extend(self, values: list[TestResult]) -> None:\n\"\"\"Add support for extend method.\"\"\"\n    self.root.extend(values)\n
    "},{"location":"api/tests.aaa/","title":"AAA","text":""},{"location":"api/tests.aaa/#anta-catalog-for-interfaces-tests","title":"ANTA catalog for interfaces tests","text":"

    Test functions related to various EOS AAA settings

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctConsoleMethods","title":"VerifyAcctConsoleMethods","text":"

    Bases: AntaTest

    Verifies the AAA accounting console method lists for different accounting types (system, exec, commands, dot1x).

    Expected Results
    • success: The test will pass if the provided AAA accounting console method list matches the configured accounting types.
    • failure: The test will fail if the provided AAA accounting console method list does NOT match the configured accounting types.
    Source code in anta/tests/aaa.py
    class VerifyAcctConsoleMethods(AntaTest):\n\"\"\"\n    Verifies the AAA accounting console method lists for different accounting types (system, exec, commands, dot1x).\n\n    Expected Results:\n        * success: The test will pass if the provided AAA accounting console method list is matching in the configured accounting types.\n        * failure: The test will fail if the provided AAA accounting console method list is NOT matching in the configured accounting types.\n    \"\"\"\n\n    name = \"VerifyAcctConsoleMethods\"\n    description = \"Verifies the AAA accounting console method lists for different accounting types (system, exec, commands, dot1x).\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show aaa methods accounting\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        methods: List[AAAAuthMethod]\n\"\"\"List of AAA accounting console methods. Methods should be in the right order\"\"\"\n        types: Set[Literal[\"commands\", \"exec\", \"system\", \"dot1x\"]]\n\"\"\"List of accounting console types to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        not_matching = []\n        not_configured = []\n        for k, v in command_output.items():\n            acct_type = k.replace(\"AcctMethods\", \"\")\n            if acct_type not in self.inputs.types:\n                # We do not need to verify this accounting type\n                continue\n            for methods in v.values():\n                if \"consoleAction\" not in methods:\n                    not_configured.append(acct_type)\n                if methods[\"consoleMethods\"] != self.inputs.methods:\n                    not_matching.append(acct_type)\n        if not_configured:\n            self.result.is_failure(f\"AAA console accounting is not configured for {not_configured}\")\n            return\n        if not not_matching:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"AAA accounting console methods {self.inputs.methods} are not matching for {not_matching}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctConsoleMethods.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    methods: List[AAAAuthMethod]\n\"\"\"List of AAA accounting console methods. Methods should be in the right order\"\"\"\n    types: Set[Literal[\"commands\", \"exec\", \"system\", \"dot1x\"]]\n\"\"\"List of accounting console types to verify\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctConsoleMethods.Input.methods","title":"methods instance-attribute","text":"
    methods: List[AAAAuthMethod]\n

    List of AAA accounting console methods. Methods should be in the right order

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctConsoleMethods.Input.types","title":"types instance-attribute","text":"
    types: Set[Literal['commands', 'exec', 'system', 'dot1x']]\n

    List of accounting console types to verify

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctDefaultMethods","title":"VerifyAcctDefaultMethods","text":"

    Bases: AntaTest

    Verifies the AAA accounting default method lists for different accounting types (system, exec, commands, dot1x).

    Expected Results
    • success: The test will pass if the provided AAA accounting default method list matches the configured accounting types.
    • failure: The test will fail if the provided AAA accounting default method list does NOT match the configured accounting types.
    Source code in anta/tests/aaa.py
    class VerifyAcctDefaultMethods(AntaTest):\n\"\"\"\n    Verifies the AAA accounting default method lists for different accounting types (system, exec, commands, dot1x).\n\n    Expected Results:\n        * success: The test will pass if the provided AAA accounting default method list is matching in the configured accounting types.\n        * failure: The test will fail if the provided AAA accounting default method list is NOT matching in the configured accounting types.\n    \"\"\"\n\n    name = \"VerifyAcctDefaultMethods\"\n    description = \"Verifies the AAA accounting default method lists for different accounting types (system, exec, commands, dot1x).\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show aaa methods accounting\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        methods: List[AAAAuthMethod]\n\"\"\"List of AAA accounting methods. Methods should be in the right order\"\"\"\n        types: Set[Literal[\"commands\", \"exec\", \"system\", \"dot1x\"]]\n\"\"\"List of accounting types to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        not_matching = []\n        not_configured = []\n        for k, v in command_output.items():\n            acct_type = k.replace(\"AcctMethods\", \"\")\n            if acct_type not in self.inputs.types:\n                # We do not need to verify this accounting type\n                continue\n            for methods in v.values():\n                if \"defaultAction\" not in methods:\n                    not_configured.append(acct_type)\n                if methods[\"defaultMethods\"] != self.inputs.methods:\n                    not_matching.append(acct_type)\n        if not_configured:\n            self.result.is_failure(f\"AAA default accounting is not configured for {not_configured}\")\n            return\n        if not not_matching:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"AAA accounting default methods {self.inputs.methods} are not matching for {not_matching}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctDefaultMethods.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    methods: List[AAAAuthMethod]\n\"\"\"List of AAA accounting methods. Methods should be in the right order\"\"\"\n    types: Set[Literal[\"commands\", \"exec\", \"system\", \"dot1x\"]]\n\"\"\"List of accounting types to verify\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctDefaultMethods.Input.methods","title":"methods instance-attribute","text":"
    methods: List[AAAAuthMethod]\n

    List of AAA accounting methods. Methods should be in the right order

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAcctDefaultMethods.Input.types","title":"types instance-attribute","text":"
    types: Set[Literal['commands', 'exec', 'system', 'dot1x']]\n

    List of accounting types to verify

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthenMethods","title":"VerifyAuthenMethods","text":"

    Bases: AntaTest

    Verifies the AAA authentication method lists for different authentication types (login, enable, dot1x).

    Expected Results
    • success: The test will pass if the provided AAA authentication method list matches the configured authentication types.
    • failure: The test will fail if the provided AAA authentication method list does NOT match the configured authentication types.
    Source code in anta/tests/aaa.py
    class VerifyAuthenMethods(AntaTest):\n\"\"\"\n    Verifies the AAA authentication method lists for different authentication types (login, enable, dot1x).\n\n    Expected Results:\n        * success: The test will pass if the provided AAA authentication method list is matching in the configured authentication types.\n        * failure: The test will fail if the provided AAA authentication method list is NOT matching in the configured authentication types.\n    \"\"\"\n\n    name = \"VerifyAuthenMethods\"\n    description = \"Verifies the AAA authentication method lists for different authentication types (login, enable, dot1x).\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show aaa methods authentication\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        methods: List[AAAAuthMethod]\n\"\"\"List of AAA authentication methods. Methods should be in the right order\"\"\"\n        types: Set[Literal[\"login\", \"enable\", \"dot1x\"]]\n\"\"\"List of authentication types to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        not_matching = []\n        for k, v in command_output.items():\n            auth_type = k.replace(\"AuthenMethods\", \"\")\n            if auth_type not in self.inputs.types:\n                # We do not need to verify this accounting type\n                continue\n            if auth_type == \"login\":\n                if \"login\" not in v:\n                    self.result.is_failure(\"AAA authentication methods are not configured for login console\")\n                    return\n                if v[\"login\"][\"methods\"] != self.inputs.methods:\n                    self.result.is_failure(f\"AAA authentication methods {self.inputs.methods} are not matching for login console\")\n                    return\n            for methods in v.values():\n                if methods[\"methods\"] != self.inputs.methods:\n                    not_matching.append(auth_type)\n        if not not_matching:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"AAA authentication methods {self.inputs.methods} are not matching for {not_matching}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthenMethods.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    methods: List[AAAAuthMethod]\n\"\"\"List of AAA authentication methods. Methods should be in the right order\"\"\"\n    types: Set[Literal[\"login\", \"enable\", \"dot1x\"]]\n\"\"\"List of authentication types to verify\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthenMethods.Input.methods","title":"methods instance-attribute","text":"
    methods: List[AAAAuthMethod]\n

    List of AAA authentication methods. Methods should be in the right order

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthenMethods.Input.types","title":"types instance-attribute","text":"
    types: Set[Literal['login', 'enable', 'dot1x']]\n

    List of authentication types to verify

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthzMethods","title":"VerifyAuthzMethods","text":"

    Bases: AntaTest

    Verifies the AAA authorization method lists for different authorization types (commands, exec).

    Expected Results
    • success: The test will pass if the provided AAA authorization method list matches the configured authorization types.
    • failure: The test will fail if the provided AAA authorization method list does NOT match the configured authorization types.
    Source code in anta/tests/aaa.py
    class VerifyAuthzMethods(AntaTest):\n\"\"\"\n    Verifies the AAA authorization method lists for different authorization types (commands, exec).\n\n    Expected Results:\n        * success: The test will pass if the provided AAA authorization method list is matching in the configured authorization types.\n        * failure: The test will fail if the provided AAA authorization method list is NOT matching in the configured authorization types.\n    \"\"\"\n\n    name = \"VerifyAuthzMethods\"\n    description = \"Verifies the AAA authorization method lists for different authorization types (commands, exec).\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show aaa methods authorization\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        methods: List[AAAAuthMethod]\n\"\"\"List of AAA authorization methods. Methods should be in the right order\"\"\"\n        types: Set[Literal[\"commands\", \"exec\"]]\n\"\"\"List of authorization types to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        not_matching = []\n        for k, v in command_output.items():\n            authz_type = k.replace(\"AuthzMethods\", \"\")\n            if authz_type not in self.inputs.types:\n                # We do not need to verify this accounting type\n                continue\n            for methods in v.values():\n                if methods[\"methods\"] != self.inputs.methods:\n                    not_matching.append(authz_type)\n        if not not_matching:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"AAA authorization methods {self.inputs.methods} are not matching for {not_matching}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthzMethods.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    methods: List[AAAAuthMethod]\n\"\"\"List of AAA authorization methods. Methods should be in the right order\"\"\"\n    types: Set[Literal[\"commands\", \"exec\"]]\n\"\"\"List of authorization types to verify\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthzMethods.Input.methods","title":"methods instance-attribute","text":"
    methods: List[AAAAuthMethod]\n

    List of AAA authorization methods. Methods should be in the right order

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyAuthzMethods.Input.types","title":"types instance-attribute","text":"
    types: Set[Literal['commands', 'exec']]\n

    List of authorization types to verify

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServerGroups","title":"VerifyTacacsServerGroups","text":"

    Bases: AntaTest

    Verifies if the provided TACACS server group(s) are configured.

    Expected Results
    • success: The test will pass if the provided TACACS server group(s) are configured.
    • failure: The test will fail if one or all the provided TACACS server group(s) are NOT configured.
    Source code in anta/tests/aaa.py
    class VerifyTacacsServerGroups(AntaTest):\n\"\"\"\n    Verifies if the provided TACACS server group(s) are configured.\n\n    Expected Results:\n        * success: The test will pass if the provided TACACS server group(s) are configured.\n        * failure: The test will fail if one or all the provided TACACS server group(s) are NOT configured.\n    \"\"\"\n\n    name = \"VerifyTacacsServerGroups\"\n    description = \"Verifies if the provided TACACS server group(s) are configured.\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show tacacs\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        groups: List[str]\n\"\"\"List of TACACS server group\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        tacacs_groups = command_output[\"groups\"]\n        if not tacacs_groups:\n            self.result.is_failure(\"No TACACS server group(s) are configured\")\n            return\n        not_configured = [group for group in self.inputs.groups if group not in tacacs_groups]\n        if not not_configured:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"TACACS server group(s) {not_configured} are not configured\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServerGroups.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    groups: List[str]\n\"\"\"List of TACACS server group\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServerGroups.Input.groups","title":"groups instance-attribute","text":"
    groups: List[str]\n

    List of TACACS server group

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServers","title":"VerifyTacacsServers","text":"

    Bases: AntaTest

    Verifies TACACS servers are configured for a specified VRF.

    Expected Results
    • success: The test will pass if the provided TACACS servers are configured in the specified VRF.
    • failure: The test will fail if the provided TACACS servers are NOT configured in the specified VRF.
    Source code in anta/tests/aaa.py
    class VerifyTacacsServers(AntaTest):\n\"\"\"\n    Verifies TACACS servers are configured for a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided TACACS servers are configured in the specified VRF.\n        * failure: The test will fail if the provided TACACS servers are NOT configured in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyTacacsServers\"\n    description = \"Verifies TACACS servers are configured for a specified VRF.\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show tacacs\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        servers: List[IPv4Address]\n\"\"\"List of TACACS servers\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF to transport TACACS messages\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        tacacs_servers = command_output[\"tacacsServers\"]\n        if not tacacs_servers:\n            self.result.is_failure(\"No TACACS servers are configured\")\n            return\n        not_configured = [\n            str(server)\n            for server in self.inputs.servers\n            if not any(\n                str(server) == tacacs_server[\"serverInfo\"][\"hostname\"] and self.inputs.vrf == tacacs_server[\"serverInfo\"][\"vrf\"] for tacacs_server in tacacs_servers\n            )\n        ]\n        if not not_configured:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"TACACS servers {not_configured} are not configured in VRF {self.inputs.vrf}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServers.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    servers: List[IPv4Address]\n\"\"\"List of TACACS servers\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF to transport TACACS messages\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServers.Input.servers","title":"servers instance-attribute","text":"
    servers: List[IPv4Address]\n

    List of TACACS servers

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsServers.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF to transport TACACS messages
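    As a sketch of the expected inputs, the Input model can be validated directly (the server addresses and VRF name below are illustrative):

        from anta.tests.aaa import VerifyTacacsServers

        inputs = VerifyTacacsServers.Input(servers=["10.10.10.21", "10.10.10.22"], vrf="MGMT")
        print([str(server) for server in inputs.servers])  # IPv4Address values after validation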

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsSourceIntf","title":"VerifyTacacsSourceIntf","text":"

    Bases: AntaTest

    Verifies TACACS source-interface for a specified VRF.

    Expected Results
    • success: The test will pass if the provided TACACS source-interface is configured in the specified VRF.
    • failure: The test will fail if the provided TACACS source-interface is NOT configured in the specified VRF.
    Source code in anta/tests/aaa.py
    class VerifyTacacsSourceIntf(AntaTest):\n\"\"\"\n    Verifies TACACS source-interface for a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided TACACS source-interface is configured in the specified VRF.\n        * failure: The test will fail if the provided TACACS source-interface is NOT configured in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyTacacsSourceIntf\"\n    description = \"Verifies TACACS source-interface for a specified VRF.\"\n    categories = [\"aaa\"]\n    commands = [AntaCommand(command=\"show tacacs\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        intf: str\n\"\"\"Source-interface to use as source IP of TACACS messages\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF to transport TACACS messages\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        try:\n            if command_output[\"srcIntf\"][self.inputs.vrf] == self.inputs.intf:\n                self.result.is_success()\n            else:\n                self.result.is_failure(f\"Wrong source-interface configured in VRF {self.inputs.vrf}\")\n        except KeyError:\n            self.result.is_failure(f\"Source-interface {self.inputs.intf} is not configured in VRF {self.inputs.vrf}\")\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsSourceIntf.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/aaa.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    intf: str\n\"\"\"Source-interface to use as source IP of TACACS messages\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF to transport TACACS messages\"\"\"\n
    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsSourceIntf.Input.intf","title":"intf instance-attribute","text":"
    intf: str\n

    Source-interface to use as source IP of TACACS messages

    "},{"location":"api/tests.aaa/#anta.tests.aaa.VerifyTacacsSourceIntf.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF to transport TACACS messages

    "},{"location":"api/tests.configuration/","title":"Configuration","text":""},{"location":"api/tests.configuration/#anta-catalog-for-configuration-tests","title":"ANTA catalog for configuration tests","text":"

    Test functions related to the device configuration

    "},{"location":"api/tests.configuration/#anta.tests.configuration.VerifyRunningConfigDiffs","title":"VerifyRunningConfigDiffs","text":"

    Bases: AntaTest

    Verifies there is no difference between the running-config and the startup-config

    Source code in anta/tests/configuration.py
    class VerifyRunningConfigDiffs(AntaTest):\n\"\"\"\n    Verifies there is no difference between the running-config and the startup-config\n    \"\"\"\n\n    name = \"VerifyRunningConfigDiffs\"\n    description = \"Verifies there is no difference between the running-config and the startup-config\"\n    categories = [\"configuration\"]\n    commands = [AntaCommand(command=\"show running-config diffs\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].output\n        if command_output is None or command_output == \"\":\n            self.result.is_success()\n        else:\n            self.result.is_failure()\n            self.result.is_failure(str(command_output))\n
    "},{"location":"api/tests.configuration/#anta.tests.configuration.VerifyZeroTouch","title":"VerifyZeroTouch","text":"

    Bases: AntaTest

    Verifies ZeroTouch is disabled

    Source code in anta/tests/configuration.py
    class VerifyZeroTouch(AntaTest):\n\"\"\"\n    Verifies ZeroTouch is disabled\n    \"\"\"\n\n    name = \"VerifyZeroTouch\"\n    description = \"Verifies ZeroTouch is disabled\"\n    categories = [\"configuration\"]\n    commands = [AntaCommand(command=\"show zerotouch\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].output\n        assert isinstance(command_output, dict)\n        if command_output[\"mode\"] == \"disabled\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"ZTP is NOT disabled\")\n
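    Mirroring the usage pattern shown in the ResultManager example earlier in this documentation, these configuration tests could be collected like this (inventory_anta and manager are assumed to exist, and the exact invocation may differ between ANTA versions):

        from anta.tests.configuration import VerifyRunningConfigDiffs, VerifyZeroTouch

        for device in inventory_anta.get_inventory():
            manager.add_test_result(VerifyZeroTouch(device=device).test())
            manager.add_test_result(VerifyRunningConfigDiffs(device=device).test())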
    "},{"location":"api/tests.connectivity/","title":"Connectivity","text":""},{"location":"api/tests.connectivity/#anta-catalog-for-connectivity-tests","title":"ANTA catalog for connectivity tests","text":"

    Test functions related to various connectivity checks

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors","title":"VerifyLLDPNeighbors","text":"

    Bases: AntaTest

    This test verifies that the provided LLDP neighbors are present and connected with the correct configuration.

    Expected Results
    • success: The test will pass if each of the provided LLDP neighbors is present and connected to the specified port and device.
    • failure: The test will fail if any of the following conditions are met:
      • The provided LLDP neighbor is not found.
      • The system name or port of the LLDP neighbor does not match the provided information.
    Source code in anta/tests/connectivity.py
    class VerifyLLDPNeighbors(AntaTest):\n\"\"\"\n    This test verifies that the provided LLDP neighbors are present and connected with the correct configuration.\n\n    Expected Results:\n        * success: The test will pass if each of the provided LLDP neighbors is present and connected to the specified port and device.\n        * failure: The test will fail if any of the following conditions are met:\n            - The provided LLDP neighbor is not found.\n            - The system name or port of the LLDP neighbor does not match the provided information.\n    \"\"\"\n\n    name = \"VerifyLLDPNeighbors\"\n    description = \"Verifies that the provided LLDP neighbors are present and connected with the correct configuration.\"\n    categories = [\"connectivity\"]\n    commands = [AntaCommand(command=\"show lldp neighbors detail\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        neighbors: List[Neighbor]\n\"\"\"List of LLDP neighbors\"\"\"\n\n        class Neighbor(BaseModel):\n\"\"\"LLDP neighbor\"\"\"\n\n            port: Interface\n\"\"\"LLDP port\"\"\"\n            neighbor_device: str\n\"\"\"LLDP neighbor device\"\"\"\n            neighbor_port: Interface\n\"\"\"LLDP neighbor port\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n\n        self.result.is_success()\n\n        no_lldp_neighbor = []\n        wrong_lldp_neighbor = []\n\n        for neighbor in self.inputs.neighbors:\n            if len(lldp_neighbor_info := command_output[\"lldpNeighbors\"][neighbor.port][\"lldpNeighborInfo\"]) == 0:\n                no_lldp_neighbor.append(neighbor.port)\n\n            elif (\n                lldp_neighbor_info[0][\"systemName\"] != neighbor.neighbor_device\n                or lldp_neighbor_info[0][\"neighborInterfaceInfo\"][\"interfaceId_v2\"] != neighbor.neighbor_port\n            ):\n                wrong_lldp_neighbor.append(neighbor.port)\n\n        if no_lldp_neighbor:\n            self.result.is_failure(f\"The following port(s) have no LLDP neighbor: {no_lldp_neighbor}\")\n\n        if wrong_lldp_neighbor:\n            self.result.is_failure(f\"The following port(s) have the wrong LLDP neighbor: {wrong_lldp_neighbor}\")\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/connectivity.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    neighbors: List[Neighbor]\n\"\"\"List of LLDP neighbors\"\"\"\n\n    class Neighbor(BaseModel):\n\"\"\"LLDP neighbor\"\"\"\n\n        port: Interface\n\"\"\"LLDP port\"\"\"\n        neighbor_device: str\n\"\"\"LLDP neighbor device\"\"\"\n        neighbor_port: Interface\n\"\"\"LLDP neighbor port\"\"\"\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.neighbors","title":"neighbors instance-attribute","text":"
    neighbors: List[Neighbor]\n

    List of LLDP neighbors

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.Neighbor","title":"Neighbor","text":"

    Bases: BaseModel

    LLDP neighbor

    Source code in anta/tests/connectivity.py
    class Neighbor(BaseModel):\n\"\"\"LLDP neighbor\"\"\"\n\n    port: Interface\n\"\"\"LLDP port\"\"\"\n    neighbor_device: str\n\"\"\"LLDP neighbor device\"\"\"\n    neighbor_port: Interface\n\"\"\"LLDP neighbor port\"\"\"\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.Neighbor.neighbor_device","title":"neighbor_device instance-attribute","text":"
    neighbor_device: str\n

    LLDP neighbor device

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.Neighbor.neighbor_port","title":"neighbor_port instance-attribute","text":"
    neighbor_port: Interface\n

    LLDP neighbor port

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyLLDPNeighbors.Input.Neighbor.port","title":"port instance-attribute","text":"
    port: Interface\n

    LLDP port

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability","title":"VerifyReachability","text":"

    Bases: AntaTest

    Test network reachability to one or many destination IP(s).

    Expected Results
    • success: The test will pass if all destination IP(s) are reachable.
    • failure: The test will fail if one or many destination IP(s) are unreachable.
    Source code in anta/tests/connectivity.py
    class VerifyReachability(AntaTest):\n\"\"\"\n    Test network reachability to one or many destination IP(s).\n\n    Expected Results:\n        * success: The test will pass if all destination IP(s) are reachable.\n        * failure: The test will fail if one or many destination IP(s) are unreachable.\n    \"\"\"\n\n    name = \"VerifyReachability\"\n    description = \"Test the network reachability to one or many destination IP(s).\"\n    categories = [\"connectivity\"]\n    commands = [AntaTemplate(template=\"ping vrf {vrf} {destination} source {source} repeat 2\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        hosts: List[Host]\n\"\"\"List of hosts to ping\"\"\"\n\n        class Host(BaseModel):\n\"\"\"Remote host to ping\"\"\"\n\n            destination: IPv4Address\n\"\"\"IPv4 address to ping\"\"\"\n            source: Union[IPv4Address, Interface]\n\"\"\"IPv4 address source IP or Egress interface to use\"\"\"\n            vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(destination=host.destination, source=host.source, vrf=host.vrf) for host in self.inputs.hosts]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        failures = []\n        for command in self.instance_commands:\n            if command.params and \"source\" in command.params and \"destination\" in command.params:\n                src, dst = command.params[\"source\"], command.params[\"destination\"]\n            if \"2 received\" not in command.json_output[\"messages\"][0]:\n                failures.append((str(src), str(dst)))\n        if not failures:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Connectivity test failed for the following source-destination pairs: {failures}\")\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/connectivity.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    hosts: List[Host]\n\"\"\"List of hosts to ping\"\"\"\n\n    class Host(BaseModel):\n\"\"\"Remote host to ping\"\"\"\n\n        destination: IPv4Address\n\"\"\"IPv4 address to ping\"\"\"\n        source: Union[IPv4Address, Interface]\n\"\"\"IPv4 address source IP or Egress interface to use\"\"\"\n        vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.hosts","title":"hosts instance-attribute","text":"
    hosts: List[Host]\n

    List of hosts to ping

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.Host","title":"Host","text":"

    Bases: BaseModel

    Remote host to ping

    Source code in anta/tests/connectivity.py
    class Host(BaseModel):\n\"\"\"Remote host to ping\"\"\"\n\n    destination: IPv4Address\n\"\"\"IPv4 address to ping\"\"\"\n    source: Union[IPv4Address, Interface]\n\"\"\"IPv4 address source IP or Egress interface to use\"\"\"\n    vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n
    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.Host.destination","title":"destination instance-attribute","text":"
    destination: IPv4Address\n

    IPv4 address to ping

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.Host.source","title":"source instance-attribute","text":"
    source: Union[IPv4Address, Interface]\n

    Source IPv4 address or egress interface to use

    "},{"location":"api/tests.connectivity/#anta.tests.connectivity.VerifyReachability.Input.Host.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    VRF context

    "},{"location":"api/tests.field_notices/","title":"Field Notices","text":""},{"location":"api/tests.field_notices/#anta-catalog-for-field-notices-tests","title":"ANTA catalog for Field Notices tests","text":"

    Test functions to flag field notices

    "},{"location":"api/tests.field_notices/#anta.tests.field_notices.VerifyFieldNotice44Resolution","title":"VerifyFieldNotice44Resolution","text":"

    Bases: AntaTest

    Verifies that the device is using an Aboot version that fixes the bug discussed in field notice 44 (Aboot manages system settings prior to EOS initialization).

    https://www.arista.com/en/support/advisories-notices/field-notice/8756-field-notice-44

    Source code in anta/tests/field_notices.py
    class VerifyFieldNotice44Resolution(AntaTest):\n\"\"\"\n    Verifies the device is using an Aboot version that fix the bug discussed\n    in the field notice 44 (Aboot manages system settings prior to EOS initialization).\n\n    https://www.arista.com/en/support/advisories-notices/field-notice/8756-field-notice-44\n    \"\"\"\n\n    name = \"VerifyFieldNotice44Resolution\"\n    description = (\n        \"Verifies the device is using an Aboot version that fix the bug discussed in the field notice 44 (Aboot manages system settings prior to EOS initialization)\"\n    )\n    categories = [\"field notices\", \"software\"]\n    commands = [AntaCommand(command=\"show version detail\")]\n\n    # TODO maybe implement ONLY ON PLATFORMS instead\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n\n        devices = [\n            \"DCS-7010T-48\",\n            \"DCS-7010T-48-DC\",\n            \"DCS-7050TX-48\",\n            \"DCS-7050TX-64\",\n            \"DCS-7050TX-72\",\n            \"DCS-7050TX-72Q\",\n            \"DCS-7050TX-96\",\n            \"DCS-7050TX2-128\",\n            \"DCS-7050SX-64\",\n            \"DCS-7050SX-72\",\n            \"DCS-7050SX-72Q\",\n            \"DCS-7050SX2-72Q\",\n            \"DCS-7050SX-96\",\n            \"DCS-7050SX2-128\",\n            \"DCS-7050QX-32S\",\n            \"DCS-7050QX2-32S\",\n            \"DCS-7050SX3-48YC12\",\n            \"DCS-7050CX3-32S\",\n            \"DCS-7060CX-32S\",\n            \"DCS-7060CX2-32S\",\n            \"DCS-7060SX2-48YC6\",\n            \"DCS-7160-48YC6\",\n            \"DCS-7160-48TC6\",\n            \"DCS-7160-32CQ\",\n            \"DCS-7280SE-64\",\n            \"DCS-7280SE-68\",\n            \"DCS-7280SE-72\",\n            \"DCS-7150SC-24-CLD\",\n            \"DCS-7150SC-64-CLD\",\n            \"DCS-7020TR-48\",\n            \"DCS-7020TRA-48\",\n            \"DCS-7020SR-24C2\",\n            \"DCS-7020SRG-24C2\",\n            \"DCS-7280TR-48C6\",\n            \"DCS-7280TRA-48C6\",\n            \"DCS-7280SR-48C6\",\n            \"DCS-7280SRA-48C6\",\n            \"DCS-7280SRAM-48C6\",\n            \"DCS-7280SR2K-48C6-M\",\n            \"DCS-7280SR2-48YC6\",\n            \"DCS-7280SR2A-48YC6\",\n            \"DCS-7280SRM-40CX2\",\n            \"DCS-7280QR-C36\",\n            \"DCS-7280QRA-C36S\",\n        ]\n        variants = [\"-SSD-F\", \"-SSD-R\", \"-M-F\", \"-M-R\", \"-F\", \"-R\"]\n\n        model = command_output[\"modelName\"]\n        # TODO this list could be a regex\n        for variant in variants:\n            model = model.replace(variant, \"\")\n        if model not in devices:\n            self.result.is_skipped(\"device is not impacted by FN044\")\n            return\n\n        for component in command_output[\"details\"][\"components\"]:\n            if component[\"name\"] == \"Aboot\":\n                aboot_version = component[\"version\"].split(\"-\")[2]\n        self.result.is_success()\n        if aboot_version.startswith(\"4.0.\") and int(aboot_version.split(\".\")[2]) < 7:\n            self.result.is_failure(f\"device is running incorrect version of aboot ({aboot_version})\")\n        elif aboot_version.startswith(\"4.1.\") and int(aboot_version.split(\".\")[2]) < 1:\n            self.result.is_failure(f\"device is running incorrect version of aboot ({aboot_version})\")\n        elif aboot_version.startswith(\"6.0.\") and int(aboot_version.split(\".\")[2]) < 9:\n            
self.result.is_failure(f\"device is running incorrect version of aboot ({aboot_version})\")\n        elif aboot_version.startswith(\"6.1.\") and int(aboot_version.split(\".\")[2]) < 7:\n            self.result.is_failure(f\"device is running incorrect version of aboot ({aboot_version})\")\n
    "},{"location":"api/tests.field_notices/#anta.tests.field_notices.VerifyFieldNotice72Resolution","title":"VerifyFieldNotice72Resolution","text":"

    Bases: AntaTest

    Checks if the device is potentially exposed to Field Notice 72, and if the issue has been mitigated.

    https://www.arista.com/en/support/advisories-notices/field-notice/17410-field-notice-0072

    Source code in anta/tests/field_notices.py
    class VerifyFieldNotice72Resolution(AntaTest):\n\"\"\"\n    Checks if the device is potentially exposed to Field Notice 72, and if the issue has been mitigated.\n\n    https://www.arista.com/en/support/advisories-notices/field-notice/17410-field-notice-0072\n    \"\"\"\n\n    name = \"VerifyFieldNotice72Resolution\"\n    description = \"Verifies if the device has exposeure to FN72, and if the issue has been mitigated\"\n    categories = [\"field notices\", \"software\"]\n    commands = [AntaCommand(command=\"show version detail\")]\n\n    # TODO maybe implement ONLY ON PLATFORMS instead\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n\n        devices = [\"DCS-7280SR3-48YC8\", \"DCS-7280SR3K-48YC8\"]\n        variants = [\"-SSD-F\", \"-SSD-R\", \"-M-F\", \"-M-R\", \"-F\", \"-R\"]\n        model = command_output[\"modelName\"]\n\n        for variant in variants:\n            model = model.replace(variant, \"\")\n        if model not in devices:\n            self.result.is_skipped(\"Platform is not impacted by FN072\")\n            return\n\n        serial = command_output[\"serialNumber\"]\n        number = int(serial[3:7])\n\n        if \"JPE\" not in serial and \"JAS\" not in serial:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        if model == \"DCS-7280SR3-48YC8\" and \"JPE\" in serial and number >= 2131:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        if model == \"DCS-7280SR3-48YC8\" and \"JAS\" in serial and number >= 2041:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        if model == \"DCS-7280SR3K-48YC8\" and \"JPE\" in serial and number >= 2134:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        if model == \"DCS-7280SR3K-48YC8\" and \"JAS\" in serial and number >= 2041:\n            self.result.is_skipped(\"Device not exposed\")\n            return\n\n        # Because each of the if checks above will return if taken, we only run the long\n        # check if we get this far\n        for entry in command_output[\"details\"][\"components\"]:\n            if entry[\"name\"] == \"FixedSystemvrm1\":\n                if int(entry[\"version\"]) < 7:\n                    self.result.is_failure(\"Device is exposed to FN72\")\n                else:\n                    self.result.is_success(\"FN72 is mitigated\")\n                return\n        # We should never hit this point\n        self.result.is_error(message=\"Error in running test - FixedSystemvrm1 not found\")\n        return\n
    "},{"location":"api/tests.hardware/","title":"Hardware","text":""},{"location":"api/tests.hardware/#anta-catalog-for-hardware-tests","title":"ANTA catalog for hardware tests","text":"

    Test functions related to the hardware or environment

    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyAdverseDrops","title":"VerifyAdverseDrops","text":"

    Bases: AntaTest

    This test verifies if there are no adverse drops on DCS7280E and DCS7500E.

    Expected Results
    • success: The test will pass if there are no adverse drops.
    • failure: The test will fail if there are adverse drops.
    Source code in anta/tests/hardware.py
    class VerifyAdverseDrops(AntaTest):\n\"\"\"\n    This test verifies if there are no adverse drops on DCS7280E and DCS7500E.\n\n    Expected Results:\n      * success: The test will pass if there are no adverse drops.\n      * failure: The test will fail if there are adverse drops.\n    \"\"\"\n\n    name = \"VerifyAdverseDrops\"\n    description = \"Verifies there are no adverse drops on DCS7280E and DCS7500E\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show hardware counter drop\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        total_adverse_drop = command_output[\"totalAdverseDrops\"] if \"totalAdverseDrops\" in command_output.keys() else \"\"\n        if total_adverse_drop == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device totalAdverseDrops counter is: '{total_adverse_drop}'\")\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentCooling","title":"VerifyEnvironmentCooling","text":"

    Bases: AntaTest

    This test verifies the status of the fans.

    Expected Results
    • success: The test will pass if all fan statuses are within the accepted states list.
    • failure: The test will fail if any fan status is not within the accepted states list.
    Source code in anta/tests/hardware.py
    class VerifyEnvironmentCooling(AntaTest):\n\"\"\"\n    This test verifies the fans status.\n\n    Expected Results:\n      * success: The test will pass if the fans status are within the accepted states list.\n      * failure: The test will fail if some fans status is not within the accepted states list.\n    \"\"\"\n\n    name = \"VerifyEnvironmentCooling\"\n    description = \"Verifies if the fans status are within the accepted states list.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment cooling\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        states: List[str]\n\"\"\"Accepted states list for fan status\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        self.result.is_success()\n        # First go through power supplies fans\n        for power_supply in command_output.get(\"powerSupplySlots\", []):\n            for fan in power_supply.get(\"fans\", []):\n                if (state := fan[\"status\"]) not in self.inputs.states:\n                    self.result.is_failure(f\"Fan {fan['label']} on PowerSupply {power_supply['label']} is: '{state}'\")\n        # Then go through fan trays\n        for fan_tray in command_output.get(\"fanTraySlots\", []):\n            for fan in fan_tray.get(\"fans\", []):\n                if (state := fan[\"status\"]) not in self.inputs.states:\n                    self.result.is_failure(f\"Fan {fan['label']} on Fan Tray {fan_tray['label']} is: '{state}'\")\n
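    A minimal usage sketch for the input model (the accepted state value is a hypothetical example; assumes anta is installed):

        from anta.tests.hardware import VerifyEnvironmentCooling

        # Accept only fans reporting the hypothetical state "ok"
        inputs = VerifyEnvironmentCooling.Input(states=["ok"])
        print(inputs.states)  # ['ok']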
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentCooling.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/hardware.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    states: List[str]\n\"\"\"Accepted states list for fan status\"\"\"\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentCooling.Input.states","title":"states instance-attribute","text":"
    states: List[str]\n

    Accepted states list for fan status

    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentPower","title":"VerifyEnvironmentPower","text":"

    Bases: AntaTest

    This test verifies the status of the power supplies.

    Expected Results
    • success: The test will pass if all power supply statuses are within the accepted states list.
    • failure: The test will fail if any power supply status is not within the accepted states list.
    Source code in anta/tests/hardware.py
    class VerifyEnvironmentPower(AntaTest):\n\"\"\"\n    This test verifies the power supplies status.\n\n    Expected Results:\n      * success: The test will pass if the power supplies status are within the accepted states list.\n      * failure: The test will fail if some power supplies status is not within the accepted states list.\n    \"\"\"\n\n    name = \"VerifyEnvironmentPower\"\n    description = \"Verifies if the power supplies status are within the accepted states list.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment power\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        states: List[str]\n\"\"\"Accepted states list for power supplies status\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        power_supplies = command_output[\"powerSupplies\"] if \"powerSupplies\" in command_output.keys() else \"{}\"\n        wrong_power_supplies = {\n            powersupply: {\"state\": value[\"state\"]} for powersupply, value in dict(power_supplies).items() if value[\"state\"] not in self.inputs.states\n        }\n        if not wrong_power_supplies:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following power supplies status are not in the accepted states list: {wrong_power_supplies}\")\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentPower.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/hardware.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    states: List[str]\n\"\"\"Accepted states list for power supplies status\"\"\"\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentPower.Input.states","title":"states instance-attribute","text":"
    states: List[str]\n

    Accepted states list for power supplies status

    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyEnvironmentSystemCooling","title":"VerifyEnvironmentSystemCooling","text":"

    Bases: AntaTest

    This test verifies the device\u2019s system cooling.

    Expected Results
    • success: The test will pass if the system cooling status is OK: \u2018coolingOk\u2019.
    • failure: The test will fail if the system cooling status is NOT OK.
    Source code in anta/tests/hardware.py
    class VerifyEnvironmentSystemCooling(AntaTest):\n\"\"\"\n    This test verifies the device's system cooling.\n\n    Expected Results:\n      * success: The test will pass if the system cooling status is OK: 'coolingOk'.\n      * failure: The test will fail if the system cooling status is NOT OK.\n    \"\"\"\n\n    name = \"VerifyEnvironmentSystemCooling\"\n    description = \"Verifies the system cooling status.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment cooling\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        sys_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        self.result.is_success()\n        if sys_status != \"coolingOk\":\n            self.result.is_failure(f\"Device system cooling is not OK: '{sys_status}'\")\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTemperature","title":"VerifyTemperature","text":"

    Bases: AntaTest

    This test verifies if the device temperature is within acceptable limits.

    Expected Results
    • success: The test will pass if the device temperature is currently OK: \u2018temperatureOk\u2019.
    • failure: The test will fail if the device temperature is NOT OK.
    Source code in anta/tests/hardware.py
    class VerifyTemperature(AntaTest):\n\"\"\"\n    This test verifies if the device temperature is within acceptable limits.\n\n    Expected Results:\n      * success: The test will pass if the device temperature is currently OK: 'temperatureOk'.\n      * failure: The test will fail if the device temperature is NOT OK.\n    \"\"\"\n\n    name = \"VerifyTemperature\"\n    description = \"Verifies if the device temperature is within the acceptable range.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment temperature\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        temperature_status = command_output[\"systemStatus\"] if \"systemStatus\" in command_output.keys() else \"\"\n        if temperature_status == \"temperatureOk\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device temperature exceeds acceptable limits. Current system status: '{temperature_status}'\")\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTransceiversManufacturers","title":"VerifyTransceiversManufacturers","text":"

    Bases: AntaTest

    This test verifies if all the transceivers come from approved manufacturers.

    Expected Results
    • success: The test will pass if all transceivers are from approved manufacturers.
    • failure: The test will fail if some transceivers are from unapproved manufacturers.
    Source code in anta/tests/hardware.py
    class VerifyTransceiversManufacturers(AntaTest):\n\"\"\"\n    This test verifies if all the transceivers come from approved manufacturers.\n\n    Expected Results:\n      * success: The test will pass if all transceivers are from approved manufacturers.\n      * failure: The test will fail if some transceivers are from unapproved manufacturers.\n    \"\"\"\n\n    name = \"VerifyTransceiversManufacturers\"\n    description = \"Verifies the transceiver's manufacturer against a list of approved manufacturers.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show inventory\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        manufacturers: List[str]\n\"\"\"List of approved transceivers manufacturers\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        wrong_manufacturers = {\n            interface: value[\"mfgName\"] for interface, value in command_output[\"xcvrSlots\"].items() if value[\"mfgName\"] not in self.inputs.manufacturers\n        }\n        if not wrong_manufacturers:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Some transceivers are from unapproved manufacturers: {wrong_manufacturers}\")\n
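    A minimal usage sketch for the input model (manufacturer names are hypothetical examples; assumes anta is installed):

        from anta.tests.hardware import VerifyTransceiversManufacturers

        # Hypothetical approved-manufacturer list
        inputs = VerifyTransceiversManufacturers.Input(manufacturers=["Arista Networks", "Arastra, Inc."])
        print(inputs.manufacturers)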
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTransceiversManufacturers.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/hardware.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    manufacturers: List[str]\n\"\"\"List of approved transceivers manufacturers\"\"\"\n
    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTransceiversManufacturers.Input.manufacturers","title":"manufacturers instance-attribute","text":"
    manufacturers: List[str]\n

    List of approved transceivers manufacturers

    "},{"location":"api/tests.hardware/#anta.tests.hardware.VerifyTransceiversTemperature","title":"VerifyTransceiversTemperature","text":"

    Bases: AntaTest

    This test verifies if all the transceivers are operating at an acceptable temperature.

    Expected Results
    • success: The test will pass if all transceivers status are OK: \u2018ok\u2019.
    • failure: The test will fail if some transceivers are NOT OK.
    Source code in anta/tests/hardware.py
    class VerifyTransceiversTemperature(AntaTest):\n\"\"\"\n    This test verifies if all the transceivers are operating at an acceptable temperature.\n\n    Expected Results:\n          * success: The test will pass if all transceivers status are OK: 'ok'.\n          * failure: The test will fail if some transceivers are NOT OK.\n    \"\"\"\n\n    name = \"VerifyTransceiversTemperature\"\n    description = \"Verifies that all transceivers are operating within the acceptable temperature range.\"\n    categories = [\"hardware\"]\n    commands = [AntaCommand(command=\"show system environment temperature transceiver\", ofmt=\"json\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        sensors = command_output[\"tempSensors\"] if \"tempSensors\" in command_output.keys() else \"\"\n        wrong_sensors = {\n            sensor[\"name\"]: {\n                \"hwStatus\": sensor[\"hwStatus\"],\n                \"alertCount\": sensor[\"alertCount\"],\n            }\n            for sensor in sensors\n            if sensor[\"hwStatus\"] != \"ok\" or sensor[\"alertCount\"] != 0\n        }\n        if not wrong_sensors:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following sensors are operating outside the acceptable temperature range or have raised alerts: {wrong_sensors}\")\n
    "},{"location":"api/tests.interfaces/","title":"Interfaces","text":""},{"location":"api/tests.interfaces/#anta-catalog-for-interfaces-tests","title":"ANTA catalog for interfaces tests","text":"

    Test functions related to the device interfaces

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyIPProxyARP","title":"VerifyIPProxyARP","text":"

    Bases: AntaTest

    Verifies if Proxy-ARP is enabled for the provided list of interface(s).

    Expected Results
    • success: The test will pass if Proxy-ARP is enabled on the specified interface(s).
    • failure: The test will fail if Proxy-ARP is disabled on the specified interface(s).
    Source code in anta/tests/interfaces.py
    class VerifyIPProxyARP(AntaTest):\n\"\"\"\n    Verifies if Proxy-ARP is enabled for the provided list of interface(s).\n\n    Expected Results:\n        * success: The test will pass if Proxy-ARP is enabled on the specified interface(s).\n        * failure: The test will fail if Proxy-ARP is disabled on the specified interface(s).\n    \"\"\"\n\n    name = \"VerifyIPProxyARP\"\n    description = \"Verifies if Proxy-ARP is enabled for the provided list of interface(s).\"\n    categories = [\"interfaces\"]\n    commands = [AntaTemplate(template=\"show ip interface {intf}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        interfaces: List[str]\n\"\"\"list of interfaces to be tested\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(intf=intf) for intf in self.inputs.interfaces]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        disabled_intf = []\n        for command in self.instance_commands:\n            if command.params and \"intf\" in command.params:\n                intf = command.params[\"intf\"]\n            if not command.json_output[\"interfaces\"][intf][\"proxyArp\"]:\n                disabled_intf.append(intf)\n        if disabled_intf:\n            self.result.is_failure(f\"The following interface(s) have Proxy-ARP disabled: {disabled_intf}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyIPProxyARP.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    interfaces: List[str]\n\"\"\"list of interfaces to be tested\"\"\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyIPProxyARP.Input.interfaces","title":"interfaces instance-attribute","text":"
    interfaces: List[str]\n

    list of interfaces to be tested

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyIllegalLACP","title":"VerifyIllegalLACP","text":"

    Bases: AntaTest

    Verifies that no illegal LACP packets are received.

    Source code in anta/tests/interfaces.py
    class VerifyIllegalLACP(AntaTest):\n\"\"\"\n    Verifies there is no illegal LACP packets received.\n    \"\"\"\n\n    name = \"VerifyIllegalLACP\"\n    description = \"Verifies there is no illegal LACP packets received.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show lacp counters all-ports\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        po_with_illegal_lacp: list[dict[str, dict[str, int]]] = []\n        for portchannel, portchannel_dict in command_output[\"portChannels\"].items():\n            po_with_illegal_lacp.extend(\n                {portchannel: interface} for interface, interface_dict in portchannel_dict[\"interfaces\"].items() if interface_dict[\"illegalRxCount\"] != 0\n            )\n        if not po_with_illegal_lacp:\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"The following port-channels have recieved illegal lacp packets on the \" f\"following ports: {po_with_illegal_lacp}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfaceDiscards","title":"VerifyInterfaceDiscards","text":"

    Bases: AntaTest

    Verifies that interface packet discard counters are equal to zero.

    Source code in anta/tests/interfaces.py
    class VerifyInterfaceDiscards(AntaTest):\n\"\"\"\n    Verifies interfaces packet discard counters are equal to zero.\n    \"\"\"\n\n    name = \"VerifyInterfaceDiscards\"\n    description = \"Verifies interfaces packet discard counters are equal to zero.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces counters discards\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        wrong_interfaces: list[dict[str, dict[str, int]]] = []\n        for interface, outer_v in command_output[\"interfaces\"].items():\n            wrong_interfaces.extend({interface: outer_v} for counter, value in outer_v.items() if value > 0)\n        if not wrong_interfaces:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following interfaces have non 0 discard counter(s): {wrong_interfaces}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfaceErrDisabled","title":"VerifyInterfaceErrDisabled","text":"

    Bases: AntaTest

    Verifies that no interface is in the error-disabled state.

    Source code in anta/tests/interfaces.py
    class VerifyInterfaceErrDisabled(AntaTest):\n\"\"\"\n    Verifies there is no interface in error disable state.\n    \"\"\"\n\n    name = \"VerifyInterfaceErrDisabled\"\n    description = \"Verifies there is no interface in error disable state.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces status\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        errdisabled_interfaces = [interface for interface, value in command_output[\"interfaceStatuses\"].items() if value[\"linkStatus\"] == \"errdisabled\"]\n        if errdisabled_interfaces:\n            self.result.is_failure(f\"The following interfaces are in error disabled state: {errdisabled_interfaces}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfaceErrors","title":"VerifyInterfaceErrors","text":"

    Bases: AntaTest

    This test verifies that interface error counters are equal to zero.

    Expected Results
    • success: The test will pass if all interfaces have error counters equal to zero.
    • failure: The test will fail if one or more interfaces have non-zero error counters.
    Source code in anta/tests/interfaces.py
    class VerifyInterfaceErrors(AntaTest):\n\"\"\"\n    This test verifies that interfaces error counters are equal to zero.\n\n    Expected Results:\n        * success: The test will pass if all interfaces have error counters equal to zero.\n        * failure: The test will fail if one or more interfaces have non-zero error counters.\n    \"\"\"\n\n    name = \"VerifyInterfaceErrors\"\n    description = \"Verifies that interfaces error counters are equal to zero.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces counters errors\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        wrong_interfaces: list[dict[str, dict[str, int]]] = []\n        for interface, counters in command_output[\"interfaceErrorCounters\"].items():\n            if any(value > 0 for value in counters.values()) and all(interface not in wrong_interface for wrong_interface in wrong_interfaces):\n                wrong_interfaces.append({interface: counters})\n        if not wrong_interfaces:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following interface(s) have non-zero error counters: {wrong_interfaces}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfaceUtilization","title":"VerifyInterfaceUtilization","text":"

    Bases: AntaTest

    Verifies that interface utilization is below 75%.

    Source code in anta/tests/interfaces.py
    class VerifyInterfaceUtilization(AntaTest):\n\"\"\"\n    Verifies interfaces utilization is below 75%.\n    \"\"\"\n\n    name = \"VerifyInterfaceUtilization\"\n    description = \"Verifies interfaces utilization is below 75%.\"\n    categories = [\"interfaces\"]\n    # TODO - move from text to json if possible\n    commands = [AntaCommand(command=\"show interfaces counters rates\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n        wrong_interfaces = {}\n        for line in command_output.split(\"\\n\")[1:]:\n            if len(line) > 0:\n                if line.split()[-5] == \"-\" or line.split()[-2] == \"-\":\n                    pass\n                elif float(line.split()[-5].replace(\"%\", \"\")) > 75.0:\n                    wrong_interfaces[line.split()[0]] = line.split()[-5]\n                elif float(line.split()[-2].replace(\"%\", \"\")) > 75.0:\n                    wrong_interfaces[line.split()[0]] = line.split()[-2]\n        if not wrong_interfaces:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following interfaces have a usage > 75%: {wrong_interfaces}\")\n
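    As a standalone illustration (not ANTA code, using a hypothetical row with the same column positions the test inspects), the 75% threshold check works as follows:

        # Hypothetical row shaped like a line of the text output parsed above:
        # the 5th-from-last field is the input %, the 2nd-from-last is the output %.
        line = "Et1  uplink  0:05  800.0  80.0%  120  100.0  10.0%  15"
        fields = line.split()
        in_pct, out_pct = fields[-5], fields[-2]
        over_threshold = [p for p in (in_pct, out_pct) if p != "-" and float(p.replace("%", "")) > 75.0]
        print(over_threshold)  # ['80.0%']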
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfacesStatus","title":"VerifyInterfacesStatus","text":"

    Bases: AntaTest

    This test verifies that all interfaces in the provided list are in the expected state.

    Expected Results
    • success: The test will pass if the provided interfaces are all in the expected state.
    • failure: The test will fail if any interface is not in the expected state.
    Source code in anta/tests/interfaces.py
    class VerifyInterfacesStatus(AntaTest):\n\"\"\"\n    This test verifies if the provided list of interfaces are all in the expected state.\n\n    Expected Results:\n        * success: The test will pass if the provided interfaces are all in the expected state.\n        * failure: The test will fail if any interface is not in the expected state.\n    \"\"\"\n\n    name = \"VerifyInterfacesStatus\"\n    description = \"Verifies if the provided list of interfaces are all in the expected state.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces description\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        interfaces: List[InterfaceStatus]\n\"\"\"List of interfaces to validate with the expected state\"\"\"\n\n        class InterfaceStatus(BaseModel):  # pylint: disable=missing-class-docstring\n            interface: Interface\n            state: Literal[\"up\", \"adminDown\"]\n            protocol_status: Literal[\"up\", \"down\"] = \"up\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n\n        self.result.is_success()\n\n        intf_not_configured = []\n        intf_wrong_state = []\n\n        for interface_status in self.inputs.interfaces:\n            intf_status = get_value(command_output[\"interfaceDescriptions\"], interface_status.interface)\n            if intf_status is None:\n                intf_not_configured.append(interface_status.interface)\n                continue\n\n            proto = intf_status[\"lineProtocolStatus\"]\n            status = intf_status[\"interfaceStatus\"]\n\n            if interface_status.state == \"up\" and not (re.match(r\"connected|up\", proto) and re.match(r\"connected|up\", status)):\n                intf_wrong_state.append(f\"{interface_status.interface} is {proto}/{status} expected {interface_status.protocol_status}/{interface_status.state}\")\n            elif interface_status.state == \"adminDown\":\n                if interface_status.protocol_status == \"up\" and not (re.match(r\"up\", proto) and re.match(r\"adminDown\", status)):\n                    intf_wrong_state.append(f\"{interface_status.interface} is {proto}/{status} expected {interface_status.protocol_status}/{interface_status.state}\")\n                elif interface_status.protocol_status == \"down\" and not (re.match(r\"down\", proto) and re.match(r\"adminDown\", status)):\n                    intf_wrong_state.append(f\"{interface_status.interface} is {proto}/{status} expected {interface_status.protocol_status}/{interface_status.state}\")\n\n        if intf_not_configured:\n            self.result.is_failure(f\"The following interface(s) are not configured: {intf_not_configured}\")\n\n        if intf_wrong_state:\n            self.result.is_failure(f\"The following interface(s) are not in the expected state: {intf_wrong_state}\")\n
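    A minimal usage sketch for the input model (interface names and states are hypothetical; pydantic coerces the dictionaries into InterfaceStatus models):

        from anta.tests.interfaces import VerifyInterfacesStatus

        inputs = VerifyInterfacesStatus.Input(
            interfaces=[
                {"interface": "Ethernet1", "state": "up"},
                {"interface": "Ethernet2", "state": "adminDown", "protocol_status": "down"},
            ]
        )
        print(len(inputs.interfaces))  # 2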
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfacesStatus.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    interfaces: List[InterfaceStatus]\n\"\"\"List of interfaces to validate with the expected state\"\"\"\n\n    class InterfaceStatus(BaseModel):  # pylint: disable=missing-class-docstring\n        interface: Interface\n        state: Literal[\"up\", \"adminDown\"]\n        protocol_status: Literal[\"up\", \"down\"] = \"up\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyInterfacesStatus.Input.interfaces","title":"interfaces instance-attribute","text":"
    interfaces: List[InterfaceStatus]\n

    List of interfaces to validate with the expected state

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU","title":"VerifyL2MTU","text":"

    Bases: AntaTest

    Verifies the global layer 2 Maximum Transmission Unit (MTU) for all L2 interfaces.

    Tests that L2 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces. You can define a global MTU to check, an MTU per interface, and a list of interfaces to ignore.

    Expected Results
    • success: The test will pass if all layer 2 interfaces have the proper MTU configured.
    • failure: The test will fail if one or many layer 2 interfaces have the wrong MTU configured.
    Source code in anta/tests/interfaces.py
    class VerifyL2MTU(AntaTest):\n\"\"\"\n    Verifies the global layer 2 Maximum Transfer Unit (MTU) for all L2 interfaces.\n\n    Test that L2 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces.\n    You can define a global MTU to check and also an MTU per interface and also ignored some interfaces.\n\n    Expected Results:\n        * success: The test will pass if all layer 2 interfaces have the proper MTU configured.\n        * failure: The test will fail if one or many layer 2 interfaces have the wrong MTU configured.\n    \"\"\"\n\n    name = \"VerifyL2MTU\"\n    description = \"Verifies the global layer 2 Maximum Transfer Unit (MTU) for all layer 2 interfaces.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        mtu: int = 9214\n\"\"\"Default MTU we should have configured on all non-excluded interfaces\"\"\"\n        ignored_interfaces: List[str] = [\"Management\", \"Loopback\", \"Vxlan\", \"Tunnel\"]\n\"\"\"A list of L2 interfaces to ignore\"\"\"\n        specific_mtu: List[Dict[str, int]] = []\n\"\"\"A list of dictionary of L2 interfaces with their specific MTU configured\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Parameter to save incorrect interface settings\n        wrong_l2mtu_intf: list[dict[str, int]] = []\n        command_output = self.instance_commands[0].json_output\n        # Set list of interfaces with specific settings\n        specific_interfaces: list[str] = []\n        if self.inputs.specific_mtu:\n            for d in self.inputs.specific_mtu:\n                specific_interfaces.extend(d)\n        for interface, values in command_output[\"interfaces\"].items():\n            if re.findall(r\"[a-z]+\", interface, re.IGNORECASE)[0] not in self.inputs.ignored_interfaces and values[\"forwardingModel\"] == \"bridged\":\n                if interface in specific_interfaces:\n                    wrong_l2mtu_intf.extend({interface: values[\"mtu\"]} for custom_data in self.inputs.specific_mtu if values[\"mtu\"] != custom_data[interface])\n                # Comparison with generic setting\n                elif values[\"mtu\"] != self.inputs.mtu:\n                    wrong_l2mtu_intf.append({interface: values[\"mtu\"]})\n        if wrong_l2mtu_intf:\n            self.result.is_failure(f\"Some L2 interfaces do not have correct MTU configured:\\n{wrong_l2mtu_intf}\")\n        else:\n            self.result.is_success()\n
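    A minimal usage sketch for the input model (interface name and MTU values are hypothetical; assumes anta is installed):

        from anta.tests.interfaces import VerifyL2MTU

        # Expect 9214 everywhere except Ethernet10, which should be 1500
        inputs = VerifyL2MTU.Input(mtu=9214, specific_mtu=[{"Ethernet10": 1500}])
        print(inputs.mtu, inputs.specific_mtu)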
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    mtu: int = 9214\n\"\"\"Default MTU we should have configured on all non-excluded interfaces\"\"\"\n    ignored_interfaces: List[str] = [\"Management\", \"Loopback\", \"Vxlan\", \"Tunnel\"]\n\"\"\"A list of L2 interfaces to ignore\"\"\"\n    specific_mtu: List[Dict[str, int]] = []\n\"\"\"A list of dictionary of L2 interfaces with their specific MTU configured\"\"\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU.Input.ignored_interfaces","title":"ignored_interfaces class-attribute instance-attribute","text":"
    ignored_interfaces: List[str] = ['Management', 'Loopback', 'Vxlan', 'Tunnel']\n

    A list of L2 interfaces to ignore

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU.Input.mtu","title":"mtu class-attribute instance-attribute","text":"
    mtu: int = 9214\n

    Default MTU we should have configured on all non-excluded interfaces

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL2MTU.Input.specific_mtu","title":"specific_mtu class-attribute instance-attribute","text":"
    specific_mtu: List[Dict[str, int]] = []\n

    A list of dictionaries of L2 interfaces with their specific MTU configured

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU","title":"VerifyL3MTU","text":"

    Bases: AntaTest

    Verifies the global layer 3 Maximum Transmission Unit (MTU) for all L3 interfaces.

    Tests that L3 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces. You can define a global MTU to check, an MTU per interface, and a list of interfaces to ignore.

    Expected Results
    • success: The test will pass if all layer 3 interfaces have the proper MTU configured.
    • failure: The test will fail if one or many layer 3 interfaces have the wrong MTU configured.
    Source code in anta/tests/interfaces.py
    class VerifyL3MTU(AntaTest):\n\"\"\"\n    Verifies the global layer 3 Maximum Transfer Unit (MTU) for all L3 interfaces.\n\n    Test that L3 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces.\n    You can define a global MTU to check and also an MTU per interface and also ignored some interfaces.\n\n    Expected Results:\n        * success: The test will pass if all layer 3 interfaces have the proper MTU configured.\n        * failure: The test will fail if one or many layer 3 interfaces have the wrong MTU configured.\n    \"\"\"\n\n    name = \"VerifyL3MTU\"\n    description = \"Verifies the global layer 3 Maximum Transfer Unit (MTU) for all layer 3 interfaces.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show interfaces\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        mtu: int = 1500\n\"\"\"Default MTU we should have configured on all non-excluded interfaces\"\"\"\n        ignored_interfaces: List[str] = [\"Management\", \"Loopback\", \"Vxlan\", \"Tunnel\"]\n\"\"\"A list of L3 interfaces to ignore\"\"\"\n        specific_mtu: List[Dict[str, int]] = []\n\"\"\"A list of dictionary of L3 interfaces with their specific MTU configured\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        # Parameter to save incorrect interface settings\n        wrong_l3mtu_intf: list[dict[str, int]] = []\n        command_output = self.instance_commands[0].json_output\n        # Set list of interfaces with specific settings\n        specific_interfaces: list[str] = []\n        if self.inputs.specific_mtu:\n            for d in self.inputs.specific_mtu:\n                specific_interfaces.extend(d)\n        for interface, values in command_output[\"interfaces\"].items():\n            if re.findall(r\"[a-z]+\", interface, re.IGNORECASE)[0] not in self.inputs.ignored_interfaces and values[\"forwardingModel\"] == \"routed\":\n                if interface in specific_interfaces:\n                    wrong_l3mtu_intf.extend({interface: values[\"mtu\"]} for custom_data in self.inputs.specific_mtu if values[\"mtu\"] != custom_data[interface])\n                # Comparison with generic setting\n                elif values[\"mtu\"] != self.inputs.mtu:\n                    wrong_l3mtu_intf.append({interface: values[\"mtu\"]})\n        if wrong_l3mtu_intf:\n            self.result.is_failure(f\"Some interfaces do not have correct MTU configured:\\n{wrong_l3mtu_intf}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    mtu: int = 1500\n\"\"\"Default MTU we should have configured on all non-excluded interfaces\"\"\"\n    ignored_interfaces: List[str] = [\"Management\", \"Loopback\", \"Vxlan\", \"Tunnel\"]\n\"\"\"A list of L3 interfaces to ignore\"\"\"\n    specific_mtu: List[Dict[str, int]] = []\n\"\"\"A list of dictionary of L3 interfaces with their specific MTU configured\"\"\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU.Input.ignored_interfaces","title":"ignored_interfaces class-attribute instance-attribute","text":"
    ignored_interfaces: List[str] = ['Management', 'Loopback', 'Vxlan', 'Tunnel']\n

    A list of L3 interfaces to ignore

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU.Input.mtu","title":"mtu class-attribute instance-attribute","text":"
    mtu: int = 1500\n

    Default MTU we should have configured on all non-excluded interfaces

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyL3MTU.Input.specific_mtu","title":"specific_mtu class-attribute instance-attribute","text":"
    specific_mtu: List[Dict[str, int]] = []\n

    A list of dictionaries of L3 interfaces with their specific MTU configured

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyLoopbackCount","title":"VerifyLoopbackCount","text":"

    Bases: AntaTest

    Verifies that the device has the expected number of loopback interfaces and that none of them are down.

    Source code in anta/tests/interfaces.py
    class VerifyLoopbackCount(AntaTest):\n\"\"\"\n    Verifies the number of loopback interfaces on the device is the one we expect and if none of the loopback is down.\n    \"\"\"\n\n    name = \"VerifyLoopbackCount\"\n    description = \"Verifies the number of loopback interfaces on the device is the one we expect and if none of the loopback is down.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show ip interface brief\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type: ignore\n\"\"\"Number of loopback interfaces expected to be present\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        loopback_count = 0\n        down_loopback_interfaces = []\n        for interface in command_output[\"interfaces\"]:\n            interface_dict = command_output[\"interfaces\"][interface]\n            if \"Loopback\" in interface:\n                loopback_count += 1\n                if not (interface_dict[\"lineProtocolStatus\"] == \"up\" and interface_dict[\"interfaceStatus\"] == \"connected\"):\n                    down_loopback_interfaces.append(interface)\n        if loopback_count == self.inputs.number and len(down_loopback_interfaces) == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure()\n            if loopback_count != self.inputs.number:\n                self.result.is_failure(f\"Found {loopback_count} Loopbacks when expecting {self.inputs.number}\")\n            elif len(down_loopback_interfaces) != 0:\n                self.result.is_failure(f\"The following Loopbacks are not up: {down_loopback_interfaces}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyLoopbackCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/interfaces.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type: ignore\n\"\"\"Number of loopback interfaces expected to be present\"\"\"\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyLoopbackCount.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    Number of loopback interfaces expected to be present

    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyPortChannels","title":"VerifyPortChannels","text":"

    Bases: AntaTest

    Verifies that there are no inactive ports in port channels.

    Source code in anta/tests/interfaces.py
    class VerifyPortChannels(AntaTest):\n\"\"\"\n    Verifies there is no inactive port in port channels.\n    \"\"\"\n\n    name = \"VerifyPortChannels\"\n    description = \"Verifies there is no inactive port in port channels.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show port-channel\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        po_with_invactive_ports: list[dict[str, str]] = []\n        for portchannel, portchannel_dict in command_output[\"portChannels\"].items():\n            if len(portchannel_dict[\"inactivePorts\"]) != 0:\n                po_with_invactive_ports.extend({portchannel: portchannel_dict[\"inactivePorts\"]})\n        if not po_with_invactive_ports:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following port-channels have inactive port(s): {po_with_invactive_ports}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifySVI","title":"VerifySVI","text":"

    Bases: AntaTest

    Verifies that no VLAN interface (SVI) is down.

    Source code in anta/tests/interfaces.py
    class VerifySVI(AntaTest):\n\"\"\"\n    Verifies there is no interface vlan down.\n    \"\"\"\n\n    name = \"VerifySVI\"\n    description = \"Verifies there is no interface vlan down.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show ip interface brief\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        down_svis = []\n        for interface in command_output[\"interfaces\"]:\n            interface_dict = command_output[\"interfaces\"][interface]\n            if \"Vlan\" in interface:\n                if not (interface_dict[\"lineProtocolStatus\"] == \"up\" and interface_dict[\"interfaceStatus\"] == \"connected\"):\n                    down_svis.append(interface)\n        if len(down_svis) == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following SVIs are not up: {down_svis}\")\n
    "},{"location":"api/tests.interfaces/#anta.tests.interfaces.VerifyStormControlDrops","title":"VerifyStormControlDrops","text":"

    Bases: AntaTest

    Verifies that the device did not drop packets due to its storm-control configuration.

    Source code in anta/tests/interfaces.py
    class VerifyStormControlDrops(AntaTest):\n\"\"\"\n    Verifies the device did not drop packets due its to storm-control configuration.\n    \"\"\"\n\n    name = \"VerifyStormControlDrops\"\n    description = \"Verifies the device did not drop packets due its to storm-control configuration.\"\n    categories = [\"interfaces\"]\n    commands = [AntaCommand(command=\"show storm-control\")]\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        storm_controlled_interfaces: dict[str, dict[str, Any]] = {}\n        for interface, interface_dict in command_output[\"interfaces\"].items():\n            for traffic_type, traffic_type_dict in interface_dict[\"trafficTypes\"].items():\n                if \"drop\" in traffic_type_dict and traffic_type_dict[\"drop\"] != 0:\n                    storm_controlled_interface_dict = storm_controlled_interfaces.setdefault(interface, {})\n                    storm_controlled_interface_dict.update({traffic_type: traffic_type_dict[\"drop\"]})\n        if not storm_controlled_interfaces:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following interfaces have none 0 storm-control drop counters {storm_controlled_interfaces}\")\n
    "},{"location":"api/tests.logging/","title":"Logging","text":""},{"location":"api/tests.logging/#anta-catalog-for-logging-tests","title":"ANTA catalog for logging tests","text":"

    Test functions related to the various EOS logging settings

    NOTE: \u2018show logging\u2019 does not support json output yet

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingAccounting","title":"VerifyLoggingAccounting","text":"

    Bases: AntaTest

    Verifies if AAA accounting logs are generated.

    Expected Results
    • success: The test will pass if AAA accounting logs are generated.
    • failure: The test will fail if AAA accounting logs are NOT generated.
    Source code in anta/tests/logging.py
    class VerifyLoggingAccounting(AntaTest):\n\"\"\"\n    Verifies if AAA accounting logs are generated.\n\n    Expected Results:\n        * success: The test will pass if AAA accounting logs are generated.\n        * failure: The test will fail if AAA accounting logs are NOT generated.\n    \"\"\"\n\n    name = \"VerifyLoggingAccounting\"\n    description = \"Verifies if AAA accounting logs are generated.\"\n    categories = [\"logging\"]\n    commands = [AntaCommand(command=\"show aaa accounting logs | tail\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        pattern = r\"cmd=show aaa accounting logs\"\n        output = self.instance_commands[0].text_output\n        if re.search(pattern, output):\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"AAA accounting logs are not generated\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingErrors","title":"VerifyLoggingErrors","text":"

    Bases: AntaTest

    This test verifies there are no syslog messages with a severity of ERRORS or higher.

    Expected Results
    • success: The test will pass if there are NO syslog messages with a severity of ERRORS or higher.
    • failure: The test will fail if ERRORS or higher syslog messages are present.
    Source code in anta/tests/logging.py
    class VerifyLoggingErrors(AntaTest):\n\"\"\"\n    This test verifies there are no syslog messages with a severity of ERRORS or higher.\n\n    Expected Results:\n      * success: The test will pass if there are NO syslog messages with a severity of ERRORS or higher.\n      * failure: The test will fail if ERRORS or higher syslog messages are present.\n    \"\"\"\n\n    name = \"VerifyLoggingWarning\"\n    description = \"This test verifies there are no syslog messages with a severity of ERRORS or higher.\"\n    categories = [\"logging\"]\n    commands = [AntaCommand(command=\"show logging threshold errors\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n\"\"\"\n        Run VerifyLoggingWarning validation\n        \"\"\"\n        command_output = self.instance_commands[0].text_output\n\n        if len(command_output) == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"Device has reported syslog messages with a severity of ERRORS or higher\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingErrors.test","title":"test","text":"
    test() -> None\n

    Run VerifyLoggingWarning validation

    Source code in anta/tests/logging.py
    @AntaTest.anta_test\ndef test(self) -> None:\n\"\"\"\n    Run VerifyLoggingWarning validation\n    \"\"\"\n    command_output = self.instance_commands[0].text_output\n\n    if len(command_output) == 0:\n        self.result.is_success()\n    else:\n        self.result.is_failure(\"Device has reported syslog messages with a severity of ERRORS or higher\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHostname","title":"VerifyLoggingHostname","text":"

    Bases: AntaTest

    Verifies if logs are generated with the device FQDN.

    Expected Results
    • success: The test will pass if logs are generated with the device FQDN.
    • failure: The test will fail if logs are NOT generated with the device FQDN.
    Source code in anta/tests/logging.py
    class VerifyLoggingHostname(AntaTest):\n\"\"\"\n    Verifies if logs are generated with the device FQDN.\n\n    Expected Results:\n        * success: The test will pass if logs are generated with the device FQDN.\n        * failure: The test will fail if logs are NOT generated with the device FQDN.\n    \"\"\"\n\n    name = \"VerifyLoggingHostname\"\n    description = \"Verifies if logs are generated with the device FQDN.\"\n    categories = [\"logging\"]\n    commands = [\n        AntaCommand(command=\"show hostname\"),\n        AntaCommand(command=\"send log level informational message ANTA VerifyLoggingHostname validation\"),\n        AntaCommand(command=\"show logging informational last 30 seconds | grep ANTA\", ofmt=\"text\"),\n    ]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        output_hostname = self.instance_commands[0].json_output\n        output_logging = self.instance_commands[2].text_output\n        fqdn = output_hostname[\"fqdn\"]\n        lines = output_logging.strip().split(\"\\n\")[::-1]\n        log_pattern = r\"ANTA VerifyLoggingHostname validation\"\n        last_line_with_pattern = \"\"\n        for line in lines:\n            if re.search(log_pattern, line):\n                last_line_with_pattern = line\n                break\n        if fqdn in last_line_with_pattern:\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"Logs are not generated with the device FQDN\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHosts","title":"VerifyLoggingHosts","text":"

    Bases: AntaTest

    Verifies logging hosts (syslog servers) for a specified VRF.

    Expected Results
    • success: The test will pass if the provided syslog servers are configured in the specified VRF.
    • failure: The test will fail if the provided syslog servers are NOT configured in the specified VRF.
    Source code in anta/tests/logging.py
    class VerifyLoggingHosts(AntaTest):\n\"\"\"\n    Verifies logging hosts (syslog servers) for a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided syslog servers are configured in the specified VRF.\n        * failure: The test will fail if the provided syslog servers are NOT configured in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyLoggingHosts\"\n    description = \"Verifies logging hosts (syslog servers) for a specified VRF.\"\n    categories = [\"logging\"]\n    commands = [AntaCommand(command=\"show logging\", ofmt=\"text\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        hosts: List[IPv4Address]\n\"\"\"List of hosts (syslog servers) IP addresses\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF to transport log messages\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        output = self.instance_commands[0].text_output\n        not_configured = []\n        for host in self.inputs.hosts:\n            pattern = rf\"Logging to '{str(host)}'.*VRF {self.inputs.vrf}\"\n            if not re.search(pattern, _get_logging_states(self.logger, output)):\n                not_configured.append(str(host))\n\n        if not not_configured:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Syslog servers {not_configured} are not configured in VRF {self.inputs.vrf}\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHosts.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/logging.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    hosts: List[IPv4Address]\n\"\"\"List of hosts (syslog servers) IP addresses\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF to transport log messages\"\"\"\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHosts.Input.hosts","title":"hosts instance-attribute","text":"
    hosts: List[IPv4Address]\n

    List of hosts (syslog servers) IP addresses

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingHosts.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF to transport log messages
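A hypothetical catalog entry for this test could look like the sketch below; the syslog server addresses and VRF name are placeholders, and the layout assumes the module-keyed YAML catalog format used elsewhere in this documentation.
# Hypothetical example - host addresses and VRF are placeholders\nanta.tests.logging:\n- VerifyLoggingHosts:\n    hosts:\n      - 10.10.10.1\n      - 10.10.10.2\n    vrf: MGMT\n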

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingLogsGeneration","title":"VerifyLoggingLogsGeneration","text":"

    Bases: AntaTest

    Verifies if logs are generated.

    Expected Results
    • success: The test will pass if logs are generated.
    • failure: The test will fail if logs are NOT generated.
    Source code in anta/tests/logging.py
    class VerifyLoggingLogsGeneration(AntaTest):\n\"\"\"\n    Verifies if logs are generated.\n\n    Expected Results:\n        * success: The test will pass if logs are generated.\n        * failure: The test will fail if logs are NOT generated.\n    \"\"\"\n\n    name = \"VerifyLoggingLogsGeneration\"\n    description = \"Verifies if logs are generated.\"\n    categories = [\"logging\"]\n    commands = [\n        AntaCommand(command=\"send log level informational message ANTA VerifyLoggingLogsGeneration validation\"),\n        AntaCommand(command=\"show logging informational last 30 seconds | grep ANTA\", ofmt=\"text\"),\n    ]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        log_pattern = r\"ANTA VerifyLoggingLogsGeneration validation\"\n        output = self.instance_commands[1].text_output\n        lines = output.strip().split(\"\\n\")[::-1]\n        for line in lines:\n            if re.search(log_pattern, line):\n                self.result.is_success()\n                return\n        self.result.is_failure(\"Logs are not generated\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingPersistent","title":"VerifyLoggingPersistent","text":"

    Bases: AntaTest

    Verifies if logging persistent is enabled and logs are saved in flash.

    Expected Results
    • success: The test will pass if logging persistent is enabled and logs are in flash.
    • failure: The test will fail if logging persistent is disabled or no logs are saved in flash.
    Source code in anta/tests/logging.py
    class VerifyLoggingPersistent(AntaTest):\n\"\"\"\n    Verifies if logging persistent is enabled and logs are saved in flash.\n\n    Expected Results:\n        * success: The test will pass if logging persistent is enabled and logs are in flash.\n        * failure: The test will fail if logging persistent is disabled or no logs are saved in flash.\n    \"\"\"\n\n    name = \"VerifyLoggingPersistent\"\n    description = \"Verifies if logging persistent is enabled and logs are saved in flash.\"\n    categories = [\"logging\"]\n    commands = [\n        AntaCommand(command=\"show logging\", ofmt=\"text\"),\n        AntaCommand(command=\"dir flash:/persist/messages\", ofmt=\"text\"),\n    ]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n        log_output = self.instance_commands[0].text_output\n        dir_flash_output = self.instance_commands[1].text_output\n        if \"Persistent logging: disabled\" in _get_logging_states(self.logger, log_output):\n            self.result.is_failure(\"Persistent logging is disabled\")\n            return\n        pattern = r\"-rw-\\s+(\\d+)\"\n        persist_logs = re.search(pattern, dir_flash_output)\n        if not persist_logs or int(persist_logs.group(1)) == 0:\n            self.result.is_failure(\"No persistent logs are saved in flash\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingSourceIntf","title":"VerifyLoggingSourceIntf","text":"

    Bases: AntaTest

    Verifies logging source-interface for a specified VRF.

    Expected Results
    • success: The test will pass if the provided logging source-interface is configured in the specified VRF.
    • failure: The test will fail if the provided logging source-interface is NOT configured in the specified VRF.
    Source code in anta/tests/logging.py
    class VerifyLoggingSourceIntf(AntaTest):\n\"\"\"\n    Verifies logging source-interface for a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided logging source-interface is configured in the specified VRF.\n        * failure: The test will fail if the provided logging source-interface is NOT configured in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyLoggingSourceInt\"\n    description = \"Verifies logging source-interface for a specified VRF.\"\n    categories = [\"logging\"]\n    commands = [AntaCommand(command=\"show logging\", ofmt=\"text\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        interface: str\n\"\"\"Source-interface to use as source IP of log messages\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF to transport log messages\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        output = self.instance_commands[0].text_output\n        pattern = rf\"Logging source-interface '{self.inputs.interface}'.*VRF {self.inputs.vrf}\"\n        if re.search(pattern, _get_logging_states(self.logger, output)):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Source-interface '{self.inputs.interface}' is not configured in VRF {self.inputs.vrf}\")\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingSourceIntf.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/logging.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    interface: str\n\"\"\"Source-interface to use as source IP of log messages\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF to transport log messages\"\"\"\n
    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingSourceIntf.Input.interface","title":"interface instance-attribute","text":"
    interface: str\n

    Source-interface to use as source IP of log messages

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingSourceIntf.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF to transport log messages
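A hypothetical catalog entry, assuming the same module-keyed YAML catalog layout; the interface and VRF names are placeholders.
# Hypothetical example - interface and VRF are placeholders\nanta.tests.logging:\n- VerifyLoggingSourceIntf:\n    interface: Management1\n    vrf: MGMT\n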

    "},{"location":"api/tests.logging/#anta.tests.logging.VerifyLoggingTimestamp","title":"VerifyLoggingTimestamp","text":"

    Bases: AntaTest

Verifies if logs are generated with the appropriate timestamp.

    Expected Results
• success: The test will pass if logs are generated with the appropriate timestamp.
• failure: The test will fail if logs are NOT generated with the appropriate timestamp.
    Source code in anta/tests/logging.py
    class VerifyLoggingTimestamp(AntaTest):\n\"\"\"\n    Verifies if logs are generated with the approprate timestamp.\n\n    Expected Results:\n        * success: The test will pass if logs are generated with the appropriated timestamp.\n        * failure: The test will fail if logs are NOT generated with the appropriated timestamp.\n    \"\"\"\n\n    name = \"VerifyLoggingTimestamp\"\n    description = \"Verifies if logs are generated with the appropriate timestamp.\"\n    categories = [\"logging\"]\n    commands = [\n        AntaCommand(command=\"send log level informational message ANTA VerifyLoggingTimestamp validation\"),\n        AntaCommand(command=\"show logging informational last 30 seconds | grep ANTA\", ofmt=\"text\"),\n    ]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        log_pattern = r\"ANTA VerifyLoggingTimestamp validation\"\n        timestamp_pattern = r\"\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d{6}-\\d{2}:\\d{2}\"\n        output = self.instance_commands[1].text_output\n        lines = output.strip().split(\"\\n\")[::-1]\n        last_line_with_pattern = \"\"\n        for line in lines:\n            if re.search(log_pattern, line):\n                last_line_with_pattern = line\n                break\n        if re.search(timestamp_pattern, last_line_with_pattern):\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"Logs are not generated with the appropriate timestamp format\")\n
    "},{"location":"api/tests/","title":"Overview","text":""},{"location":"api/tests/#anta-tests-landing-page","title":"ANTA Tests landing page","text":"

This section describes all the available tests provided by the ANTA package.

    • AAA
    • Configuration
    • Connectivity
    • Field Notice
    • Hardware
    • Interfaces
    • Logging
    • MLAG
    • Multicast
    • Profiles
    • Routing Generic
    • Routing BGP
    • Routing OSPF
    • Security
    • SNMP
    • Software
    • STP
    • System
    • VXLAN

All these tests can be imported into a catalog to be used by the anta CLI or in your own framework.
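As a minimal sketch of that usage, a catalog is a YAML file keyed by the test module paths, with the tests to run listed underneath; the selection of tests below is only illustrative.
# Hypothetical minimal catalog - test selection is illustrative\nanta.tests.logging:\n- VerifyLoggingErrors:\nanta.tests.mlag:\n- VerifyMlagStatus:\n- VerifyMlagInterfaces:\n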

    "},{"location":"api/tests.mlag/","title":"MLAG","text":""},{"location":"api/tests.mlag/#anta-catalog-for-mlag-tests","title":"ANTA catalog for mlag tests","text":"

    Test functions related to Multi-chassis Link Aggregation (MLAG)

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagConfigSanity","title":"VerifyMlagConfigSanity","text":"

    Bases: AntaTest

    This test verifies there are no MLAG config-sanity inconsistencies.

    Expected Results
    • success: The test will pass if there are NO MLAG config-sanity inconsistencies.
    • failure: The test will fail if there are MLAG config-sanity inconsistencies.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    • error: The test will give an error if \u2018mlagActive\u2019 is not found in the JSON response.
    Source code in anta/tests/mlag.py
    class VerifyMlagConfigSanity(AntaTest):\n\"\"\"\n    This test verifies there are no MLAG config-sanity inconsistencies.\n\n    Expected Results:\n        * success: The test will pass if there are NO MLAG config-sanity inconsistencies.\n        * failure: The test will fail if there are MLAG config-sanity inconsistencies.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n        * error: The test will give an error if 'mlagActive' is not found in the JSON response.\n    \"\"\"\n\n    name = \"VerifyMlagConfigSanity\"\n    description = \"This test verifies there are no MLAG config-sanity inconsistencies.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag config-sanity\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if (mlag_status := get_value(command_output, \"mlagActive\")) is None:\n            self.result.is_error(message=\"Incorrect JSON response - 'mlagActive' state was not found\")\n            return\n        if mlag_status is False:\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        keys_to_verify = [\"globalConfiguration\", \"interfaceConfiguration\"]\n        verified_output = {key: get_value(command_output, key) for key in keys_to_verify}\n        if not any(verified_output.values()):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"MLAG config-sanity returned inconsistencies: {verified_output}\")\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary","title":"VerifyMlagDualPrimary","text":"

    Bases: AntaTest

    This test verifies the dual-primary detection and its parameters of the MLAG configuration.

    Expected Results
    • success: The test will pass if the dual-primary detection is enabled and its parameters are configured properly.
    • failure: The test will fail if the dual-primary detection is NOT enabled or its parameters are NOT configured properly.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    Source code in anta/tests/mlag.py
    class VerifyMlagDualPrimary(AntaTest):\n\"\"\"\n    This test verifies the dual-primary detection and its parameters of the MLAG configuration.\n\n    Expected Results:\n        * success: The test will pass if the dual-primary detection is enabled and its parameters are configured properly.\n        * failure: The test will fail if the dual-primary detection is NOT enabled or its parameters are NOT configured properly.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n    \"\"\"\n\n    name = \"VerifyMlagDualPrimary\"\n    description = \"This test verifies the dual-primary detection and its parameters of the MLAG configuration.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag detail\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        detection_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay detection (seconds)\"\"\"\n        errdisabled: bool = False\n\"\"\"Errdisabled all interfaces when dual-primary is detected\"\"\"\n        recovery_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after dual-primary detection resolves until non peer-link ports that are part of an MLAG are enabled\"\"\"\n        recovery_delay_non_mlag: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after dual-primary detection resolves until ports that are not part of an MLAG are enabled\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        errdisabled_action = \"errdisableAllInterfaces\" if self.inputs.errdisabled else \"none\"\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"state\"] == \"disabled\":\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        if command_output[\"dualPrimaryDetectionState\"] == \"disabled\":\n            self.result.is_failure(\"Dual-primary detection is disabled\")\n            return\n        keys_to_verify = [\"detail.dualPrimaryDetectionDelay\", \"detail.dualPrimaryAction\", \"dualPrimaryMlagRecoveryDelay\", \"dualPrimaryNonMlagRecoveryDelay\"]\n        verified_output = {key: get_value(command_output, key) for key in keys_to_verify}\n        if (\n            verified_output[\"detail.dualPrimaryDetectionDelay\"] == self.inputs.detection_delay\n            and verified_output[\"detail.dualPrimaryAction\"] == errdisabled_action\n            and verified_output[\"dualPrimaryMlagRecoveryDelay\"] == self.inputs.recovery_delay\n            and verified_output[\"dualPrimaryNonMlagRecoveryDelay\"] == self.inputs.recovery_delay_non_mlag\n        ):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The dual-primary parameters are not configured properly: {verified_output}\")\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/mlag.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    detection_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay detection (seconds)\"\"\"\n    errdisabled: bool = False\n\"\"\"Errdisabled all interfaces when dual-primary is detected\"\"\"\n    recovery_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after dual-primary detection resolves until non peer-link ports that are part of an MLAG are enabled\"\"\"\n    recovery_delay_non_mlag: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after dual-primary detection resolves until ports that are not part of an MLAG are enabled\"\"\"\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input.detection_delay","title":"detection_delay instance-attribute","text":"
    detection_delay: conint(ge=0)\n

    Delay detection (seconds)

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input.errdisabled","title":"errdisabled class-attribute instance-attribute","text":"
    errdisabled: bool = False\n

    Errdisabled all interfaces when dual-primary is detected

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input.recovery_delay","title":"recovery_delay instance-attribute","text":"
    recovery_delay: conint(ge=0)\n

    Delay (seconds) after dual-primary detection resolves until non peer-link ports that are part of an MLAG are enabled

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagDualPrimary.Input.recovery_delay_non_mlag","title":"recovery_delay_non_mlag instance-attribute","text":"
    recovery_delay_non_mlag: conint(ge=0)\n

    Delay (seconds) after dual-primary detection resolves until ports that are not part of an MLAG are enabled
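A hypothetical catalog entry combining the four inputs above; all timer values are placeholders and should be aligned with the intended MLAG design.
# Hypothetical example - timer values are placeholders\nanta.tests.mlag:\n- VerifyMlagDualPrimary:\n    detection_delay: 200\n    errdisabled: True\n    recovery_delay: 60\n    recovery_delay_non_mlag: 0\n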

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagInterfaces","title":"VerifyMlagInterfaces","text":"

    Bases: AntaTest

    This test verifies there are no inactive or active-partial MLAG ports.

    Expected Results
    • success: The test will pass if there are NO inactive or active-partial MLAG ports.
    • failure: The test will fail if there are inactive or active-partial MLAG ports.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    Source code in anta/tests/mlag.py
    class VerifyMlagInterfaces(AntaTest):\n\"\"\"\n    This test verifies there are no inactive or active-partial MLAG ports.\n\n    Expected Results:\n        * success: The test will pass if there are NO inactive or active-partial MLAG ports.\n        * failure: The test will fail if there are inactive or active-partial MLAG ports.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n    \"\"\"\n\n    name = \"VerifyMlagInterfaces\"\n    description = \"This test verifies there are no inactive or active-partial MLAG ports.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"state\"] == \"disabled\":\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        if command_output[\"mlagPorts\"][\"Inactive\"] == 0 and command_output[\"mlagPorts\"][\"Active-partial\"] == 0:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"MLAG status is not OK: {command_output['mlagPorts']}\")\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagReloadDelay","title":"VerifyMlagReloadDelay","text":"

    Bases: AntaTest

    This test verifies the reload-delay parameters of the MLAG configuration.

    Expected Results
    • success: The test will pass if the reload-delay parameters are configured properly.
    • failure: The test will fail if the reload-delay parameters are NOT configured properly.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    Source code in anta/tests/mlag.py
    class VerifyMlagReloadDelay(AntaTest):\n\"\"\"\n    This test verifies the reload-delay parameters of the MLAG configuration.\n\n    Expected Results:\n        * success: The test will pass if the reload-delay parameters are configured properly.\n        * failure: The test will fail if the reload-delay parameters are NOT configured properly.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n    \"\"\"\n\n    name = \"VerifyMlagReloadDelay\"\n    description = \"This test verifies the reload-delay parameters of the MLAG configuration.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        reload_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after reboot until non peer-link ports that are part of an MLAG are enabled\"\"\"\n        reload_delay_non_mlag: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after reboot until ports that are not part of an MLAG are enabled\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"state\"] == \"disabled\":\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        keys_to_verify = [\"reloadDelay\", \"reloadDelayNonMlag\"]\n        verified_output = {key: get_value(command_output, key) for key in keys_to_verify}\n        if verified_output[\"reloadDelay\"] == self.inputs.reload_delay and verified_output[\"reloadDelayNonMlag\"] == self.inputs.reload_delay_non_mlag:\n            self.result.is_success()\n\n        else:\n            self.result.is_failure(f\"The reload-delay parameters are not configured properly: {verified_output}\")\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagReloadDelay.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/mlag.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    reload_delay: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after reboot until non peer-link ports that are part of an MLAG are enabled\"\"\"\n    reload_delay_non_mlag: conint(ge=0)  # type: ignore\n\"\"\"Delay (seconds) after reboot until ports that are not part of an MLAG are enabled\"\"\"\n
    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagReloadDelay.Input.reload_delay","title":"reload_delay instance-attribute","text":"
    reload_delay: conint(ge=0)\n

    Delay (seconds) after reboot until non peer-link ports that are part of an MLAG are enabled

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagReloadDelay.Input.reload_delay_non_mlag","title":"reload_delay_non_mlag instance-attribute","text":"
    reload_delay_non_mlag: conint(ge=0)\n

    Delay (seconds) after reboot until ports that are not part of an MLAG are enabled
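A hypothetical catalog entry; the delay values are placeholders.
# Hypothetical example - delay values are placeholders\nanta.tests.mlag:\n- VerifyMlagReloadDelay:\n    reload_delay: 300\n    reload_delay_non_mlag: 330\n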

    "},{"location":"api/tests.mlag/#anta.tests.mlag.VerifyMlagStatus","title":"VerifyMlagStatus","text":"

    Bases: AntaTest

    This test verifies the health status of the MLAG configuration.

    Expected Results
    • success: The test will pass if the MLAG state is \u2018active\u2019, negotiation status is \u2018connected\u2019, peer-link status and local interface status are \u2018up\u2019.
    • failure: The test will fail if the MLAG state is not \u2018active\u2019, negotiation status is not \u2018connected\u2019, peer-link status or local interface status are not \u2018up\u2019.
    • skipped: The test will be skipped if MLAG is \u2018disabled\u2019.
    Source code in anta/tests/mlag.py
    class VerifyMlagStatus(AntaTest):\n\"\"\"\n    This test verifies the health status of the MLAG configuration.\n\n    Expected Results:\n        * success: The test will pass if the MLAG state is 'active', negotiation status is 'connected',\n                   peer-link status and local interface status are 'up'.\n        * failure: The test will fail if the MLAG state is not 'active', negotiation status is not 'connected',\n                   peer-link status or local interface status are not 'up'.\n        * skipped: The test will be skipped if MLAG is 'disabled'.\n    \"\"\"\n\n    name = \"VerifyMlagStatus\"\n    description = \"This test verifies the health status of the MLAG configuration.\"\n    categories = [\"mlag\"]\n    commands = [AntaCommand(command=\"show mlag\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"state\"] == \"disabled\":\n            self.result.is_skipped(\"MLAG is disabled\")\n            return\n        keys_to_verify = [\"state\", \"negStatus\", \"localIntfStatus\", \"peerLinkStatus\"]\n        verified_output = {key: get_value(command_output, key) for key in keys_to_verify}\n        if (\n            verified_output[\"state\"] == \"active\"\n            and verified_output[\"negStatus\"] == \"connected\"\n            and verified_output[\"localIntfStatus\"] == \"up\"\n            and verified_output[\"peerLinkStatus\"] == \"up\"\n        ):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"MLAG status is not OK: {verified_output}\")\n
    "},{"location":"api/tests.multicast/","title":"Multicast","text":""},{"location":"api/tests.multicast/#anta-catalog-for-multicast-tests","title":"ANTA catalog for multicast tests","text":"

    Test functions related to multicast

    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingGlobal","title":"VerifyIGMPSnoopingGlobal","text":"

    Bases: AntaTest

    Verifies the IGMP snooping global configuration.

    Source code in anta/tests/multicast.py
    class VerifyIGMPSnoopingGlobal(AntaTest):\n\"\"\"\n    Verifies the IGMP snooping global configuration.\n    \"\"\"\n\n    name = \"VerifyIGMPSnoopingGlobal\"\n    description = \"Verifies the IGMP snooping global configuration.\"\n    categories = [\"multicast\", \"igmp\"]\n    commands = [AntaCommand(command=\"show ip igmp snooping\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        enabled: bool\n\"\"\"Expected global IGMP snooping configuration (True=enabled, False=disabled)\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        self.result.is_success()\n        igmp_state = command_output[\"igmpSnoopingState\"]\n        if igmp_state != \"enabled\" if self.inputs.enabled else igmp_state != \"disabled\":\n            self.result.is_failure(f\"IGMP state is not valid: {igmp_state}\")\n
    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingGlobal.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/multicast.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    enabled: bool\n\"\"\"Expected global IGMP snooping configuration (True=enabled, False=disabled)\"\"\"\n
    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingGlobal.Input.enabled","title":"enabled instance-attribute","text":"
    enabled: bool\n

    Expected global IGMP snooping configuration (True=enabled, False=disabled)
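A hypothetical catalog entry checking that IGMP snooping is globally enabled; adjust the boolean to the expected state.
# Hypothetical example - expected state is a placeholder\nanta.tests.multicast:\n- VerifyIGMPSnoopingGlobal:\n    enabled: True\n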

    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingVlans","title":"VerifyIGMPSnoopingVlans","text":"

    Bases: AntaTest

    Verifies the IGMP snooping configuration for some VLANs.

    Source code in anta/tests/multicast.py
    class VerifyIGMPSnoopingVlans(AntaTest):\n\"\"\"\n    Verifies the IGMP snooping configuration for some VLANs.\n    \"\"\"\n\n    name = \"VerifyIGMPSnoopingVlans\"\n    description = \"Verifies the IGMP snooping configuration for some VLANs.\"\n    categories = [\"multicast\", \"igmp\"]\n    commands = [AntaCommand(command=\"show ip igmp snooping\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vlans: Dict[Vlan, bool]\n\"\"\"Dictionary of VLANs with associated IGMP configuration status (True=enabled, False=disabled)\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        self.result.is_success()\n        for vlan, enabled in self.inputs.vlans.items():\n            if str(vlan) not in command_output[\"vlans\"]:\n                self.result.is_failure(f\"Supplied vlan {vlan} is not present on the device.\")\n                continue\n\n            igmp_state = command_output[\"vlans\"][str(vlan)][\"igmpSnoopingState\"]\n            if igmp_state != \"enabled\" if enabled else igmp_state != \"disabled\":\n                self.result.is_failure(f\"IGMP state for vlan {vlan} is {igmp_state}\")\n
    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingVlans.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/multicast.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vlans: Dict[Vlan, bool]\n\"\"\"Dictionary of VLANs with associated IGMP configuration status (True=enabled, False=disabled)\"\"\"\n
    "},{"location":"api/tests.multicast/#anta.tests.multicast.VerifyIGMPSnoopingVlans.Input.vlans","title":"vlans instance-attribute","text":"
    vlans: Dict[Vlan, bool]\n

    Dictionary of VLANs with associated IGMP configuration status (True=enabled, False=disabled)
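A hypothetical catalog entry; the VLAN IDs and expected states are placeholders.
# Hypothetical example - VLAN IDs and states are placeholders\nanta.tests.multicast:\n- VerifyIGMPSnoopingVlans:\n    vlans:\n      10: False\n      12: False\n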

    "},{"location":"api/tests.profiles/","title":"Profiles","text":""},{"location":"api/tests.profiles/#anta-catalog-for-profiles-tests","title":"ANTA catalog for profiles tests","text":"

    Test functions related to ASIC profiles

    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyTcamProfile","title":"VerifyTcamProfile","text":"

    Bases: AntaTest

    Verifies the device is using the configured TCAM profile.

    Source code in anta/tests/profiles.py
    class VerifyTcamProfile(AntaTest):\n\"\"\"\n    Verifies the device is using the configured TCAM profile.\n    \"\"\"\n\n    name = \"VerifyTcamProfile\"\n    description = \"Verify that the assigned TCAM profile is actually running on the device\"\n    categories = [\"profiles\"]\n    commands = [AntaCommand(command=\"show hardware tcam profile\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        profile: str\n\"\"\"Expected TCAM profile\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"pmfProfiles\"][\"FixedSystem\"][\"status\"] == command_output[\"pmfProfiles\"][\"FixedSystem\"][\"config\"] == self.inputs.profile:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Incorrect profile running on device: {command_output['pmfProfiles']['FixedSystem']['status']}\")\n
    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyTcamProfile.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/profiles.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    profile: str\n\"\"\"Expected TCAM profile\"\"\"\n
    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyTcamProfile.Input.profile","title":"profile instance-attribute","text":"
    profile: str\n

    Expected TCAM profile
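A hypothetical catalog entry; the profile name is a placeholder and should match the TCAM profile expected on the platform.
# Hypothetical example - profile name is a placeholder\nanta.tests.profiles:\n- VerifyTcamProfile:\n    profile: vxlan-routing\n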

    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyUnifiedForwardingTableMode","title":"VerifyUnifiedForwardingTableMode","text":"

    Bases: AntaTest

    Verifies the device is using the expected Unified Forwarding Table mode.

    Source code in anta/tests/profiles.py
    class VerifyUnifiedForwardingTableMode(AntaTest):\n\"\"\"\n    Verifies the device is using the expected Unified Forwarding Table mode.\n    \"\"\"\n\n    name = \"VerifyUnifiedForwardingTableMode\"\n    description = \"\"\n    categories = [\"profiles\"]\n    commands = [AntaCommand(command=\"show platform trident forwarding-table partition\", ofmt=\"json\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        mode: Literal[0, 1, 2, 3, 4, \"flexible\"]\n\"\"\"Expected UFT mode\"\"\"\n\n    @skip_on_platforms([\"cEOSLab\", \"vEOS-lab\"])\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"uftMode\"] == str(self.inputs.mode):\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device is not running correct UFT mode (expected: {self.inputs.mode} / running: {command_output['uftMode']})\")\n
    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyUnifiedForwardingTableMode.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/profiles.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    mode: Literal[0, 1, 2, 3, 4, \"flexible\"]\n\"\"\"Expected UFT mode\"\"\"\n
    "},{"location":"api/tests.profiles/#anta.tests.profiles.VerifyUnifiedForwardingTableMode.Input.mode","title":"mode instance-attribute","text":"
    mode: Literal[0, 1, 2, 3, 4, 'flexible']\n

    Expected UFT mode
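A hypothetical catalog entry; the mode value is a placeholder chosen from the Literal values listed above.
# Hypothetical example - mode value is a placeholder\nanta.tests.profiles:\n- VerifyUnifiedForwardingTableMode:\n    mode: 3\n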

    "},{"location":"api/tests.routing.bgp/","title":"BGP","text":""},{"location":"api/tests.routing.bgp/#anta-catalog-for-bgp-tests","title":"ANTA catalog for BGP tests","text":"

    Deprecation Notice

    As part of our ongoing effort to improve the ANTA catalog and align it with best practices, we are announcing the deprecation of certain BGP tests along with a specific decorator. These will be removed in a future major release of ANTA.

    What is being deprecated?

    • Tests: The following BGP tests in the ANTA catalog are marked for deprecation.
    anta.tests.routing:\nbgp:\n- VerifyBGPIPv4UnicastState:\n- VerifyBGPIPv4UnicastCount:\n- VerifyBGPIPv6UnicastState:\n- VerifyBGPEVPNState:\n- VerifyBGPEVPNCount:\n- VerifyBGPRTCState:\n- VerifyBGPRTCCount:\n
    • Decorator: The check_bgp_family_enable decorator is also being deprecated as it is no longer needed with the new refactored BGP tests.

    What should you do?

We strongly recommend transitioning to the new set of BGP tests that have been introduced to replace the deprecated ones. Please refer to each test's documentation below on this page.

    anta.tests.routing:\nbgp:\n- VerifyBGPPeerCount:\n- VerifyBGPPeersHealth:\n- VerifyBGPSpecificPeers:\n

    BGP test functions

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPEVPNCount","title":"VerifyBGPEVPNCount","text":"

    Bases: AntaTest

    Verifies all EVPN BGP sessions are established (default VRF) and the actual number of BGP EVPN neighbors is the one we expect (default VRF).

    • self.result = \u201csuccess\u201d if all EVPN BGP sessions are Established and if the actual number of BGP EVPN neighbors is the one we expect.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPEVPNCount(AntaTest):\n\"\"\"\n    Verifies all EVPN BGP sessions are established (default VRF)\n    and the actual number of BGP EVPN neighbors is the one we expect (default VRF).\n\n    * self.result = \"success\" if all EVPN BGP sessions are Established and if the actual\n                         number of BGP EVPN neighbors is the one we expect.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPEVPNCount\"\n    description = \"Verifies all EVPN BGP sessions are established (default VRF) and the actual number of BGP EVPN neighbors is the one we expect (default VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp evpn summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: int\n\"\"\"The expected number of BGP EVPN neighbors in the default VRF\"\"\"\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeerCount\", \"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"evpn\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        peers = command_output[\"vrfs\"][\"default\"][\"peers\"]\n        non_established_peers = [peer for peer, peer_dict in peers.items() if peer_dict[\"peerState\"] != \"Established\"]\n        if not non_established_peers and len(peers) == self.inputs.number:\n            self.result.is_success()\n        else:\n            self.result.is_failure()\n            if len(peers) != self.inputs.number:\n                self.result.is_failure(f\"Expecting {self.inputs.number} BGP EVPN peers and got {len(peers)}\")\n            if non_established_peers:\n                self.result.is_failure(f\"The following EVPN peers are not established: {non_established_peers}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPEVPNCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: int\n\"\"\"The expected number of BGP EVPN neighbors in the default VRF\"\"\"\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPEVPNCount.Input.number","title":"number instance-attribute","text":"
    number: int\n

    The expected number of BGP EVPN neighbors in the default VRF

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPEVPNState","title":"VerifyBGPEVPNState","text":"

    Bases: AntaTest

    Verifies all EVPN BGP sessions are established (default VRF).

    • self.result = \u201cskipped\u201d if no BGP EVPN peers are returned by the device
    • self.result = \u201csuccess\u201d if all EVPN BGP sessions are established.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPEVPNState(AntaTest):\n\"\"\"\n    Verifies all EVPN BGP sessions are established (default VRF).\n\n    * self.result = \"skipped\" if no BGP EVPN peers are returned by the device\n    * self.result = \"success\" if all EVPN BGP sessions are established.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPEVPNState\"\n    description = \"Verifies all EVPN BGP sessions are established (default VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp evpn summary\")]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"evpn\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        bgp_vrfs = command_output[\"vrfs\"]\n        peers = bgp_vrfs[\"default\"][\"peers\"]\n        non_established_peers = [peer for peer, peer_dict in peers.items() if peer_dict[\"peerState\"] != \"Established\"]\n        if not non_established_peers:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following EVPN peers are not established: {non_established_peers}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv4UnicastCount","title":"VerifyBGPIPv4UnicastCount","text":"

    Bases: AntaTest

Verifies all IPv4 unicast BGP sessions are established, all BGP message queues for these sessions are empty, and the actual number of BGP IPv4 unicast neighbors is the one we expect in all VRFs specified as input.

• self.result = \u201csuccess\u201d if all IPv4 unicast BGP sessions are established, if all BGP message queues for these sessions are empty, and if the actual number of BGP IPv4 unicast neighbors is equal to `number` in all VRFs specified as input.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPIPv4UnicastCount(AntaTest):\n\"\"\"\n    Verifies all IPv4 unicast BGP sessions are established\n    and all BGP messages queues for these sessions are empty\n    and the actual number of BGP IPv4 unicast neighbors is the one we expect\n    in all VRFs specified as input.\n\n    * self.result = \"success\" if all IPv4 unicast BGP sessions are established\n                         and if all BGP messages queues for these sessions are empty\n                         and if the actual number of BGP IPv4 unicast neighbors is equal to `number\n                         in all VRFs specified as input.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPIPv4UnicastCount\"\n    description = (\n        \"Verifies all IPv4 unicast BGP sessions are established and all their BGP messages queues are empty and \"\n        \" the actual number of BGP IPv4 unicast neighbors is the one we expect.\"\n    )\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaTemplate(template=\"show bgp ipv4 unicast summary vrf {vrf}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vrfs: Dict[str, int]\n\"\"\"VRFs associated with neighbors count to verify\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(vrf=vrf) for vrf in self.inputs.vrfs]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeerCount\", \"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"ipv4\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n        for command in self.instance_commands:\n            if command.params and \"vrf\" in command.params:\n                vrf = command.params[\"vrf\"]\n                count = self.inputs.vrfs[vrf]\n                if vrf not in command.json_output[\"vrfs\"]:\n                    self.result.is_failure(f\"VRF {vrf} is not configured\")\n                    return\n                peers = command.json_output[\"vrfs\"][vrf][\"peers\"]\n                state_issue = _check_bgp_vrfs(command.json_output[\"vrfs\"])\n                if len(peers) != count:\n                    self.result.is_failure(f\"Expecting {count} BGP peer(s) in vrf {vrf} but got {len(peers)} peer(s)\")\n                if state_issue:\n                    self.result.is_failure(f\"The following IPv4 peer(s) are not established: {state_issue}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv4UnicastCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vrfs: Dict[str, int]\n\"\"\"VRFs associated with neighbors count to verify\"\"\"\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv4UnicastCount.Input.vrfs","title":"vrfs instance-attribute","text":"
    vrfs: Dict[str, int]\n

    VRFs associated with neighbors count to verify

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv4UnicastState","title":"VerifyBGPIPv4UnicastState","text":"

    Bases: AntaTest

Verifies all IPv4 unicast BGP sessions are established (for all VRFs) and all BGP message queues for these sessions are empty (for all VRFs).

• self.result = \u201cskipped\u201d if no BGP VRFs are returned by the device
• self.result = \u201csuccess\u201d if all IPv4 unicast BGP sessions are established (for all VRFs) and all BGP message queues for these sessions are empty (for all VRFs).
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPIPv4UnicastState(AntaTest):\n\"\"\"\n    Verifies all IPv4 unicast BGP sessions are established (for all VRF)\n    and all BGP messages queues for these sessions are empty (for all VRF).\n\n    * self.result = \"skipped\" if no BGP vrf are returned by the device\n    * self.result = \"success\" if all IPv4 unicast BGP sessions are established (for all VRF)\n                         and all BGP messages queues for these sessions are empty (for all VRF).\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPIPv4UnicastState\"\n    description = \"Verifies all IPv4 unicast BGP sessions are established (for all VRF) and all BGP messages queues for these sessions are empty (for all VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp ipv4 unicast summary vrf all\")]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"ipv4\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        state_issue = _check_bgp_vrfs(command_output[\"vrfs\"])\n        if not state_issue:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Some IPv4 Unicast BGP Peer are not up: {state_issue}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPIPv6UnicastState","title":"VerifyBGPIPv6UnicastState","text":"

    Bases: AntaTest

Verifies all IPv6 unicast BGP sessions are established (for all VRFs) and all BGP message queues for these sessions are empty (for all VRFs).

• self.result = \u201cskipped\u201d if no BGP VRFs are returned by the device
• self.result = \u201csuccess\u201d if all IPv6 unicast BGP sessions are established (for all VRFs) and all BGP message queues for these sessions are empty (for all VRFs).
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPIPv6UnicastState(AntaTest):\n\"\"\"\n    Verifies all IPv6 unicast BGP sessions are established (for all VRF)\n    and all BGP messages queues for these sessions are empty (for all VRF).\n\n    * self.result = \"skipped\" if no BGP vrf are returned by the device\n    * self.result = \"success\" if all IPv6 unicast BGP sessions are established (for all VRF)\n                         and all BGP messages queues for these sessions are empty (for all VRF).\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPIPv6UnicastState\"\n    description = \"Verifies all IPv6 unicast BGP sessions are established (for all VRF) and all BGP messages queues for these sessions are empty (for all VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp ipv6 unicast summary vrf all\")]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"ipv6\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        state_issue = _check_bgp_vrfs(command_output[\"vrfs\"])\n        if not state_issue:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Some IPv4 Unicast BGP Peer are not up: {state_issue}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount","title":"VerifyBGPPeerCount","text":"

    Bases: AntaTest

    This test verifies the count of BGP peers for a given address family.

    It supports multiple types of address families (AFI) and subsequent service families (SAFI). Please refer to the Input class attributes below for details.

    Expected Results
    • success: If the count of BGP peers matches the expected count for each address family and VRF.
    • failure: If the count of BGP peers does not match the expected count, or if BGP is not configured for an expected VRF or address family.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPPeerCount(AntaTest):\n\"\"\"\n    This test verifies the count of BGP peers for a given address family.\n\n    It supports multiple types of address families (AFI) and subsequent service families (SAFI).\n    Please refer to the Input class attributes below for details.\n\n    Expected Results:\n        * success: If the count of BGP peers matches the expected count for each address family and VRF.\n        * failure: If the count of BGP peers does not match the expected count, or if BGP is not configured for an expected VRF or address family.\n    \"\"\"\n\n    name = \"VerifyBGPPeerCount\"\n    description = \"Verifies the count of BGP peers.\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [\n        AntaTemplate(template=\"show bgp {afi} {safi} summary vrf {vrf}\"),\n        AntaTemplate(template=\"show bgp {afi} summary\"),\n    ]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        address_families: List[BgpAfi]\n\"\"\"\n        List of BGP address families (BgpAfi)\n        \"\"\"\n\n        class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n            afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n            safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n            If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n            \"\"\"\n            vrf: str = \"default\"\n\"\"\"\n            Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n            If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n            \"\"\"\n            num_peers: PositiveInt\n\"\"\"Number of expected BGP peer(s)\"\"\"\n\n            @model_validator(mode=\"after\")\n            def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n                Validate the inputs provided to the BgpAfi class.\n\n                If afi is either ipv4 or ipv6, safi must be provided.\n\n                If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n                \"\"\"\n                if self.afi in [\"ipv4\", \"ipv6\"]:\n                    if self.safi is None:\n                        raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n                elif self.safi is not None:\n                    raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n                elif self.vrf != \"default\":\n                    raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n                return self\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        commands = []\n        for afi in self.inputs.address_families:\n            if template == VerifyBGPPeerCount.commands[0] and afi.afi in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, safi=afi.safi, vrf=afi.vrf, num_peers=afi.num_peers))\n            elif template == VerifyBGPPeerCount.commands[1] and afi.afi not in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, vrf=afi.vrf, num_peers=afi.num_peers))\n        return commands\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n\n        failures: dict[tuple[str, Any], dict[str, Any]] = {}\n\n        for command in self.instance_commands:\n            if command.params:\n                peer_count = 0\n                command_output = command.json_output\n\n                
afi = cast(Afi, command.params.get(\"afi\"))\n                safi = cast(Optional[Safi], command.params.get(\"safi\"))\n                afi_vrf = cast(str, command.params.get(\"vrf\"))\n                num_peers = cast(PositiveInt, command.params.get(\"num_peers\"))\n\n                if not (vrfs := command_output.get(\"vrfs\")):\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=\"Not Configured\")\n                    continue\n\n                if afi_vrf == \"all\":\n                    for vrf_data in vrfs.values():\n                        peer_count += len(vrf_data[\"peers\"])\n                else:\n                    peer_count += len(command_output[\"vrfs\"][afi_vrf][\"peers\"])\n\n                if peer_count != num_peers:\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=f\"Expected: {num_peers}, Actual: {peer_count}\")\n\n        if failures:\n            self.result.is_failure(f\"Failures: {list(failures.values())}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    address_families: List[BgpAfi]\n\"\"\"\n    List of BGP address families (BgpAfi)\n    \"\"\"\n\n    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n        afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n        safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n        If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n        \"\"\"\n        vrf: str = \"default\"\n\"\"\"\n        Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n        If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n        \"\"\"\n        num_peers: PositiveInt\n\"\"\"Number of expected BGP peer(s)\"\"\"\n\n        @model_validator(mode=\"after\")\n        def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n            Validate the inputs provided to the BgpAfi class.\n\n            If afi is either ipv4 or ipv6, safi must be provided.\n\n            If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n            \"\"\"\n            if self.afi in [\"ipv4\", \"ipv6\"]:\n                if self.safi is None:\n                    raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n            elif self.safi is not None:\n                raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n            elif self.vrf != \"default\":\n                raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n            return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.address_families","title":"address_families instance-attribute","text":"
    address_families: List[BgpAfi]\n

    List of BGP address families (BgpAfi)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi","title":"BgpAfi","text":"

    Bases: BaseModel

    Source code in anta/tests/routing/bgp.py
    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n    afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n    safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n    If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n    \"\"\"\n    vrf: str = \"default\"\n\"\"\"\n    Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n    If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n    \"\"\"\n    num_peers: PositiveInt\n\"\"\"Number of expected BGP peer(s)\"\"\"\n\n    @model_validator(mode=\"after\")\n    def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n        Validate the inputs provided to the BgpAfi class.\n\n        If afi is either ipv4 or ipv6, safi must be provided.\n\n        If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n        \"\"\"\n        if self.afi in [\"ipv4\", \"ipv6\"]:\n            if self.safi is None:\n                raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n        elif self.safi is not None:\n            raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n        elif self.vrf != \"default\":\n            raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n        return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.afi","title":"afi instance-attribute","text":"
    afi: Afi\n

    BGP address family (AFI)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.num_peers","title":"num_peers instance-attribute","text":"
    num_peers: PositiveInt\n

    Number of expected BGP peer(s)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.safi","title":"safi class-attribute instance-attribute","text":"
    safi: Optional[Safi] = None\n

    Optional BGP subsequent service family (SAFI).

    If the input afi is ipv4 or ipv6, a valid safi must be provided.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    Optional VRF for IPv4 and IPv6. If not provided, it defaults to default.

    If the input afi is not ipv4 or ipv6, e.g. evpn, vrf must be default.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeerCount.Input.BgpAfi.validate_inputs","title":"validate_inputs","text":"
    validate_inputs() -> BaseModel\n

    Validate the inputs provided to the BgpAfi class.

    If afi is either ipv4 or ipv6, safi must be provided.

    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.

    Source code in anta/tests/routing/bgp.py
    @model_validator(mode=\"after\")\ndef validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n    Validate the inputs provided to the BgpAfi class.\n\n    If afi is either ipv4 or ipv6, safi must be provided.\n\n    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n    \"\"\"\n    if self.afi in [\"ipv4\", \"ipv6\"]:\n        if self.safi is None:\n            raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n    elif self.safi is not None:\n        raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n    elif self.vrf != \"default\":\n        raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n    return self\n
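    The validation rules above can be exercised without a device. The snippet below is a minimal, standalone sketch that reproduces the same afi/safi/vrf constraints with plain Pydantic v2; the Literal values used for the AFI and SAFI fields are illustrative assumptions, not the exact ANTA Afi/Safi type aliases.
```python
# Standalone sketch of the BgpAfi validation rules shown above (Pydantic v2).
# The Literal values for afi/safi are illustrative assumptions, not the exact
# ANTA `Afi`/`Safi` type aliases.
from typing import Literal, Optional

from pydantic import BaseModel, PositiveInt, ValidationError, model_validator


class BgpAfiSketch(BaseModel):
    afi: Literal["ipv4", "ipv6", "evpn", "rt-membership"]
    safi: Optional[Literal["unicast", "multicast", "labeled-unicast"]] = None
    vrf: str = "default"
    num_peers: PositiveInt

    @model_validator(mode="after")
    def validate_inputs(self) -> "BgpAfiSketch":
        # ipv4/ipv6 require a SAFI; other AFIs forbid a SAFI and a non-default VRF
        if self.afi in ("ipv4", "ipv6"):
            if self.safi is None:
                raise ValueError("'safi' must be provided when afi is ipv4 or ipv6")
        elif self.safi is not None:
            raise ValueError("'safi' must not be provided when afi is not ipv4 or ipv6")
        elif self.vrf != "default":
            raise ValueError("'vrf' must be default when afi is not ipv4 or ipv6")
        return self


BgpAfiSketch(afi="ipv4", safi="unicast", num_peers=2)  # accepted
BgpAfiSketch(afi="evpn", num_peers=2)                  # accepted
try:
    BgpAfiSketch(afi="evpn", vrf="PROD", num_peers=2)  # rejected: vrf must stay "default"
except ValidationError as exc:
    print(exc)
```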
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth","title":"VerifyBGPPeersHealth","text":"

    Bases: AntaTest

    This test verifies the health of BGP peers.

    It will validate that all BGP sessions are established and all message queues for these BGP sessions are empty for a given address family.

    It supports multiple types of address families (AFI) and subsequent service families (SAFI). Please refer to the Input class attributes below for details.

    Expected Results
    • success: If all BGP sessions are established and all messages queues are empty for each address family and VRF.
    • failure: If there are issues with any of the BGP sessions, or if BGP is not configured for an expected VRF or address family.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPPeersHealth(AntaTest):\n\"\"\"\n    This test verifies the health of BGP peers.\n\n    It will validate that all BGP sessions are established and all message queues for these BGP sessions are empty for a given address family.\n\n    It supports multiple types of address families (AFI) and subsequent service families (SAFI).\n    Please refer to the Input class attributes below for details.\n\n    Expected Results:\n        * success: If all BGP sessions are established and all messages queues are empty for each address family and VRF.\n        * failure: If there are issues with any of the BGP sessions, or if BGP is not configured for an expected VRF or address family.\n    \"\"\"\n\n    name = \"VerifyBGPPeersHealth\"\n    description = \"Verifies the health of BGP peers\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [\n        AntaTemplate(template=\"show bgp {afi} {safi} summary vrf {vrf}\"),\n        AntaTemplate(template=\"show bgp {afi} summary\"),\n    ]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        address_families: List[BgpAfi]\n\"\"\"\n        List of BGP address families (BgpAfi)\n        \"\"\"\n\n        class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n            afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n            safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n            If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n            \"\"\"\n            vrf: str = \"default\"\n\"\"\"\n            Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n            If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n            \"\"\"\n\n            @model_validator(mode=\"after\")\n            def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n                Validate the inputs provided to the BgpAfi class.\n\n                If afi is either ipv4 or ipv6, safi must be provided.\n\n                If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n                \"\"\"\n                if self.afi in [\"ipv4\", \"ipv6\"]:\n                    if self.safi is None:\n                        raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n                elif self.safi is not None:\n                    raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n                elif self.vrf != \"default\":\n                    raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n                return self\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        commands = []\n        for afi in self.inputs.address_families:\n            if template == VerifyBGPPeersHealth.commands[0] and afi.afi in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, safi=afi.safi, vrf=afi.vrf))\n            elif template == VerifyBGPPeersHealth.commands[1] and afi.afi not in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, vrf=afi.vrf))\n        return commands\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n\n        failures: dict[tuple[str, Any], dict[str, Any]] = {}\n\n        for command in self.instance_commands:\n            if command.params:\n                command_output = command.json_output\n\n                afi = cast(Afi, 
command.params.get(\"afi\"))\n                safi = cast(Optional[Safi], command.params.get(\"safi\"))\n                afi_vrf = cast(str, command.params.get(\"vrf\"))\n\n                if not (vrfs := command_output.get(\"vrfs\")):\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=\"Not Configured\")\n                    continue\n\n                for vrf, vrf_data in vrfs.items():\n                    if not (peers := vrf_data.get(\"peers\")):\n                        _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=\"No Peers\")\n                        continue\n\n                    peer_issues = {}\n                    for peer, peer_data in peers.items():\n                        issues = _check_peer_issues(peer_data)\n\n                        if issues:\n                            peer_issues[peer] = issues\n\n                    if peer_issues:\n                        _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=vrf, issue=peer_issues)\n\n        if failures:\n            self.result.is_failure(f\"Failures: {list(failures.values())}\")\n
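    As an illustration only (not taken from the ANTA documentation), the inputs of this test could be built programmatically roughly as follows; the sketch assumes that Pydantic coerces the nested dictionaries into BgpAfi models and that AntaTest.Input has no other mandatory fields. The Input model itself is detailed below.
```python
# Hypothetical sketch: building VerifyBGPPeersHealth inputs programmatically.
# Assumes the nested dicts are coerced into BgpAfi models and that
# AntaTest.Input has no other mandatory fields.
from anta.tests.routing.bgp import VerifyBGPPeersHealth

inputs = VerifyBGPPeersHealth.Input(
    address_families=[
        {"afi": "ipv4", "safi": "unicast", "vrf": "default"},  # all IPv4 unicast peers in VRF default
        {"afi": "evpn"},                                       # all EVPN peers (vrf must stay "default")
    ]
)
```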
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    address_families: List[BgpAfi]\n\"\"\"\n    List of BGP address families (BgpAfi)\n    \"\"\"\n\n    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n        afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n        safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n        If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n        \"\"\"\n        vrf: str = \"default\"\n\"\"\"\n        Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n        If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n        \"\"\"\n\n        @model_validator(mode=\"after\")\n        def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n            Validate the inputs provided to the BgpAfi class.\n\n            If afi is either ipv4 or ipv6, safi must be provided.\n\n            If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n            \"\"\"\n            if self.afi in [\"ipv4\", \"ipv6\"]:\n                if self.safi is None:\n                    raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n            elif self.safi is not None:\n                raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n            elif self.vrf != \"default\":\n                raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n            return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.address_families","title":"address_families instance-attribute","text":"
    address_families: List[BgpAfi]\n

    List of BGP address families (BgpAfi)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi","title":"BgpAfi","text":"

    Bases: BaseModel

    Source code in anta/tests/routing/bgp.py
    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n    afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n    safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n    If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n    \"\"\"\n    vrf: str = \"default\"\n\"\"\"\n    Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n    If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n    \"\"\"\n\n    @model_validator(mode=\"after\")\n    def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n        Validate the inputs provided to the BgpAfi class.\n\n        If afi is either ipv4 or ipv6, safi must be provided.\n\n        If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n        \"\"\"\n        if self.afi in [\"ipv4\", \"ipv6\"]:\n            if self.safi is None:\n                raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n        elif self.safi is not None:\n            raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n        elif self.vrf != \"default\":\n            raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n        return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi.afi","title":"afi instance-attribute","text":"
    afi: Afi\n

    BGP address family (AFI)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi.safi","title":"safi class-attribute instance-attribute","text":"
    safi: Optional[Safi] = None\n

    Optional BGP subsequent service family (SAFI).

    If the input afi is ipv4 or ipv6, a valid safi must be provided.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    Optional VRF for IPv4 and IPv6. If not provided, it defaults to default.

    If the input afi is not ipv4 or ipv6, e.g. evpn, vrf must be default.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPPeersHealth.Input.BgpAfi.validate_inputs","title":"validate_inputs","text":"
    validate_inputs() -> BaseModel\n

    Validate the inputs provided to the BgpAfi class.

    If afi is either ipv4 or ipv6, safi must be provided.

    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.

    Source code in anta/tests/routing/bgp.py
    @model_validator(mode=\"after\")\ndef validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n    Validate the inputs provided to the BgpAfi class.\n\n    If afi is either ipv4 or ipv6, safi must be provided.\n\n    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n    \"\"\"\n    if self.afi in [\"ipv4\", \"ipv6\"]:\n        if self.safi is None:\n            raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n    elif self.safi is not None:\n        raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n    elif self.vrf != \"default\":\n        raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n    return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPRTCCount","title":"VerifyBGPRTCCount","text":"

    Bases: AntaTest

    Verifies all RTC BGP sessions are established (default VRF) and the actual number of BGP RTC neighbors is the one we expect (default VRF).

    • self.result = \u201csuccess\u201d if all RTC BGP sessions are Established and if the actual number of BGP RTC neighbors is the one we expect.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPRTCCount(AntaTest):\n\"\"\"\n    Verifies all RTC BGP sessions are established (default VRF)\n    and the actual number of BGP RTC neighbors is the one we expect (default VRF).\n\n    * self.result = \"success\" if all RTC BGP sessions are Established and if the actual\n                         number of BGP RTC neighbors is the one we expect.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPRTCCount\"\n    description = \"Verifies all RTC BGP sessions are established (default VRF) and the actual number of BGP RTC neighbors is the one we expect (default VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp rt-membership summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: int\n\"\"\"The expected number of BGP RTC neighbors in the default VRF\"\"\"\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeerCount\", \"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"rtc\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        peers = command_output[\"vrfs\"][\"default\"][\"peers\"]\n        non_established_peers = [peer for peer, peer_dict in peers.items() if peer_dict[\"peerState\"] != \"Established\"]\n        if not non_established_peers and len(peers) == self.inputs.number:\n            self.result.is_success()\n        else:\n            self.result.is_failure()\n            if len(peers) != self.inputs.number:\n                self.result.is_failure(f\"Expecting {self.inputs.number} BGP RTC peers and got {len(peers)}\")\n            if non_established_peers:\n                self.result.is_failure(f\"The following RTC peers are not established: {non_established_peers}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPRTCCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: int\n\"\"\"The expected number of BGP RTC neighbors in the default VRF\"\"\"\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPRTCCount.Input.number","title":"number instance-attribute","text":"
    number: int\n

    The expected number of BGP RTC neighbors in the default VRF

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPRTCState","title":"VerifyBGPRTCState","text":"

    Bases: AntaTest

    Verifies all RTC BGP sessions are established (default VRF).

    • self.result = \u201cskipped\u201d if no BGP RTC peers are returned by the device
    • self.result = \u201csuccess\u201d if all RTC BGP sessions are established.
    • self.result = \u201cfailure\u201d otherwise.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPRTCState(AntaTest):\n\"\"\"\n    Verifies all RTC BGP sessions are established (default VRF).\n\n    * self.result = \"skipped\" if no BGP RTC peers are returned by the device\n    * self.result = \"success\" if all RTC BGP sessions are established.\n    * self.result = \"failure\" otherwise.\n    \"\"\"\n\n    name = \"VerifyBGPRTCState\"\n    description = \"Verifies all RTC BGP sessions are established (default VRF).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [AntaCommand(command=\"show bgp rt-membership summary\")]\n\n    @deprecated_test(new_tests=[\"VerifyBGPPeersHealth\"])\n    @check_bgp_family_enable(\"rtc\")\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        bgp_vrfs = command_output[\"vrfs\"]\n        peers = bgp_vrfs[\"default\"][\"peers\"]\n        non_established_peers = [peer for peer, peer_dict in peers.items() if peer_dict[\"peerState\"] != \"Established\"]\n        if not non_established_peers:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following RTC peers are not established: {non_established_peers}\")\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers","title":"VerifyBGPSpecificPeers","text":"

    Bases: AntaTest

    This test verifies the health of specific BGP peer(s).

    It will validate that the BGP session is established and all message queues for this BGP session are empty for the given peer(s).

    It supports multiple types of address families (AFI) and subsequent service families (SAFI). Please refer to the Input class attributes below for details.

    Expected Results
    • success: If the BGP session is established and all messages queues are empty for each given peer.
    • failure: If the BGP session has issues or is not configured, or if BGP is not configured for an expected VRF or address family.
    Source code in anta/tests/routing/bgp.py
    class VerifyBGPSpecificPeers(AntaTest):\n\"\"\"\n    This test verifies the health of specific BGP peer(s).\n\n    It will validate that the BGP session is established and all message queues for this BGP session are empty for the given peer(s).\n\n    It supports multiple types of address families (AFI) and subsequent service families (SAFI).\n    Please refer to the Input class attributes below for details.\n\n    Expected Results:\n        * success: If the BGP session is established and all messages queues are empty for each given peer.\n        * failure: If the BGP session has issues or is not configured, or if BGP is not configured for an expected VRF or address family.\n    \"\"\"\n\n    name = \"VerifyBGPSpecificPeers\"\n    description = \"Verifies the health of specific BGP peer(s).\"\n    categories = [\"routing\", \"bgp\"]\n    commands = [\n        AntaTemplate(template=\"show bgp {afi} {safi} summary vrf {vrf}\"),\n        AntaTemplate(template=\"show bgp {afi} summary\"),\n    ]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        address_families: List[BgpAfi]\n\"\"\"\n        List of BGP address families (BgpAfi)\n        \"\"\"\n\n        class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n            afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n            safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n            If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n            \"\"\"\n            vrf: str = \"default\"\n\"\"\"\n            Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n            `all` is NOT supported.\n\n            If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n            \"\"\"\n            peers: List[Union[IPv4Address, IPv6Address]]\n\"\"\"List of BGP IPv4 or IPv6 peer\"\"\"\n\n            @model_validator(mode=\"after\")\n            def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n                Validate the inputs provided to the BgpAfi class.\n\n                If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.\n\n                If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n                \"\"\"\n                if self.afi in [\"ipv4\", \"ipv6\"]:\n                    if self.safi is None:\n                        raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n                    if self.vrf == \"all\":\n                        raise ValueError(\"'all' is not supported in this test. 
Use VerifyBGPPeersHealth test instead.\")\n                elif self.safi is not None:\n                    raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n                elif self.vrf != \"default\":\n                    raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n                return self\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        commands = []\n        for afi in self.inputs.address_families:\n            if template == VerifyBGPSpecificPeers.commands[0] and afi.afi in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, safi=afi.safi, vrf=afi.vrf, peers=afi.peers))\n            elif template == VerifyBGPSpecificPeers.commands[1] and afi.afi not in [\"ipv4\", \"ipv6\"]:\n                commands.append(template.render(afi=afi.afi, vrf=afi.vrf, peers=afi.peers))\n        return commands\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        self.result.is_success()\n\n        failures: dict[tuple[str, Any], dict[str, Any]] = {}\n\n        for command in self.instance_commands:\n            if command.params:\n                command_output = command.json_output\n\n                afi = cast(Afi, command.params.get(\"afi\"))\n                safi = cast(Optional[Safi], command.params.get(\"safi\"))\n                afi_vrf = cast(str, command.params.get(\"vrf\"))\n                afi_peers = cast(List[Union[IPv4Address, IPv6Address]], command.params.get(\"peers\", []))\n\n                if not (vrfs := command_output.get(\"vrfs\")):\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=\"Not Configured\")\n                    continue\n\n                peer_issues = {}\n                for peer in afi_peers:\n                    peer_ip = str(peer)\n                    peer_data = get_value(dictionary=vrfs, key=f\"{afi_vrf}_peers_{peer_ip}\", separator=\"_\")\n                    issues = _check_peer_issues(peer_data)\n                    if issues:\n                        peer_issues[peer_ip] = issues\n\n                if peer_issues:\n                    _add_bgp_failures(failures=failures, afi=afi, safi=safi, vrf=afi_vrf, issue=peer_issues)\n\n        if failures:\n            self.result.is_failure(f\"Failures: {list(failures.values())}\")\n
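    As a hypothetical illustration of the validator above, the sketch below shows a valid per-peer input and the rejection of vrf set to all; it assumes the nested dictionaries are coerced into BgpAfi models and that AntaTest.Input has no other mandatory fields.
```python
# Hypothetical sketch: VerifyBGPSpecificPeers inputs and the vrf="all" guard.
from pydantic import ValidationError

from anta.tests.routing.bgp import VerifyBGPSpecificPeers

VerifyBGPSpecificPeers.Input(
    address_families=[
        {"afi": "ipv4", "safi": "unicast", "vrf": "PROD", "peers": ["10.1.0.1", "10.1.0.2"]},
    ]
)

try:
    VerifyBGPSpecificPeers.Input(
        address_families=[{"afi": "ipv4", "safi": "unicast", "vrf": "all", "peers": ["10.1.0.1"]}]
    )
except ValidationError as exc:
    print(exc)  # 'all' is not supported in this test. Use VerifyBGPPeersHealth test instead.
```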
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/bgp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    address_families: List[BgpAfi]\n\"\"\"\n    List of BGP address families (BgpAfi)\n    \"\"\"\n\n    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n        afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n        safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n        If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n        \"\"\"\n        vrf: str = \"default\"\n\"\"\"\n        Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n        `all` is NOT supported.\n\n        If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n        \"\"\"\n        peers: List[Union[IPv4Address, IPv6Address]]\n\"\"\"List of BGP IPv4 or IPv6 peer\"\"\"\n\n        @model_validator(mode=\"after\")\n        def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n            Validate the inputs provided to the BgpAfi class.\n\n            If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.\n\n            If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n            \"\"\"\n            if self.afi in [\"ipv4\", \"ipv6\"]:\n                if self.safi is None:\n                    raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n                if self.vrf == \"all\":\n                    raise ValueError(\"'all' is not supported in this test. Use VerifyBGPPeersHealth test instead.\")\n            elif self.safi is not None:\n                raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n            elif self.vrf != \"default\":\n                raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n            return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.address_families","title":"address_families instance-attribute","text":"
    address_families: List[BgpAfi]\n

    List of BGP address families (BgpAfi)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi","title":"BgpAfi","text":"

    Bases: BaseModel

    Source code in anta/tests/routing/bgp.py
    class BgpAfi(BaseModel):  # pylint: disable=missing-class-docstring\n    afi: Afi\n\"\"\"BGP address family (AFI)\"\"\"\n    safi: Optional[Safi] = None\n\"\"\"Optional BGP subsequent service family (SAFI).\n\n    If the input `afi` is `ipv4` or `ipv6`, a valid `safi` must be provided.\n    \"\"\"\n    vrf: str = \"default\"\n\"\"\"\n    Optional VRF for IPv4 and IPv6. If not provided, it defaults to `default`.\n\n    `all` is NOT supported.\n\n    If the input `afi` is not `ipv4` or `ipv6`, e.g. `evpn`, `vrf` must be `default`.\n    \"\"\"\n    peers: List[Union[IPv4Address, IPv6Address]]\n\"\"\"List of BGP IPv4 or IPv6 peer\"\"\"\n\n    @model_validator(mode=\"after\")\n    def validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n        Validate the inputs provided to the BgpAfi class.\n\n        If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.\n\n        If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n        \"\"\"\n        if self.afi in [\"ipv4\", \"ipv6\"]:\n            if self.safi is None:\n                raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n            if self.vrf == \"all\":\n                raise ValueError(\"'all' is not supported in this test. Use VerifyBGPPeersHealth test instead.\")\n        elif self.safi is not None:\n            raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n        elif self.vrf != \"default\":\n            raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n        return self\n
    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.afi","title":"afi instance-attribute","text":"
    afi: Afi\n

    BGP address family (AFI)

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.peers","title":"peers instance-attribute","text":"
    peers: List[Union[IPv4Address, IPv6Address]]\n

    List of BGP IPv4 or IPv6 peers

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.safi","title":"safi class-attribute instance-attribute","text":"
    safi: Optional[Safi] = None\n

    Optional BGP subsequent service family (SAFI).

    If the input afi is ipv4 or ipv6, a valid safi must be provided.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    Optional VRF for IPv4 and IPv6. If not provided, it defaults to default.

    all is NOT supported.

    If the input afi is not ipv4 or ipv6, e.g. evpn, vrf must be default.

    "},{"location":"api/tests.routing.bgp/#anta.tests.routing.bgp.VerifyBGPSpecificPeers.Input.BgpAfi.validate_inputs","title":"validate_inputs","text":"
    validate_inputs() -> BaseModel\n

    Validate the inputs provided to the BgpAfi class.

    If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.

    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.

    Source code in anta/tests/routing/bgp.py
    @model_validator(mode=\"after\")\ndef validate_inputs(self: BaseModel) -> BaseModel:\n\"\"\"\n    Validate the inputs provided to the BgpAfi class.\n\n    If afi is either ipv4 or ipv6, safi must be provided and vrf must NOT be all.\n\n    If afi is not ipv4 or ipv6, safi must not be provided and vrf must be default.\n    \"\"\"\n    if self.afi in [\"ipv4\", \"ipv6\"]:\n        if self.safi is None:\n            raise ValueError(\"'safi' must be provided when afi is ipv4 or ipv6\")\n        if self.vrf == \"all\":\n            raise ValueError(\"'all' is not supported in this test. Use VerifyBGPPeersHealth test instead.\")\n    elif self.safi is not None:\n        raise ValueError(\"'safi' must not be provided when afi is not ipv4 or ipv6\")\n    elif self.vrf != \"default\":\n        raise ValueError(\"'vrf' must be default when afi is not ipv4 or ipv6\")\n    return self\n
    "},{"location":"api/tests.routing.generic/","title":"Generic","text":""},{"location":"api/tests.routing.generic/#anta-catalog-for-routing-generic-tests","title":"ANTA catalog for routing-generic tests","text":"

    Generic routing test functions

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyBFD","title":"VerifyBFD","text":"

    Bases: AntaTest

    Verifies there is no BFD peer in down state (all VRF, IPv4 neighbors).

    Source code in anta/tests/routing/generic.py
    class VerifyBFD(AntaTest):\n\"\"\"\n    Verifies there is no BFD peer in down state (all VRF, IPv4 neighbors).\n    \"\"\"\n\n    name = \"VerifyBFD\"\n    description = \"Verifies there is no BFD peer in down state (all VRF, IPv4 neighbors).\"\n    categories = [\"routing\", \"generic\"]\n    # revision 1 as later revision introduce additional nesting for type\n    commands = [AntaCommand(command=\"show bfd peers\", revision=1)]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        self.result.is_success()\n        for _, vrf_data in command_output[\"vrfs\"].items():\n            for _, neighbor_data in vrf_data[\"ipv4Neighbors\"].items():\n                for peer, peer_data in neighbor_data[\"peerStats\"].items():\n                    if (peer_status := peer_data[\"status\"]) != \"up\":\n                        failure_message = f\"bfd state for peer '{peer}' is {peer_status} (expected up).\"\n                        if (peer_l3intf := peer_data.get(\"l3intf\")) is not None and peer_l3intf != \"\":\n                            failure_message += f\" Interface: {peer_l3intf}.\"\n                        self.result.is_failure(failure_message)\n
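    To make the nesting this test walks explicit (vrfs → ipv4Neighbors → peerStats → status), here is a self-contained sketch of the same traversal applied to an invented payload; the sample keys under peerStats and all values are placeholders, not real show bfd peers output.
```python
# Self-contained sketch of the traversal used by VerifyBFD, applied to an
# invented payload. Only the keys referenced by the test are mirrored here;
# the values and the peerStats keys are placeholders.
sample_output = {
    "vrfs": {
        "default": {
            "ipv4Neighbors": {
                "10.0.0.1": {"peerStats": {"": {"status": "up", "l3intf": "Ethernet1"}}},
                "10.0.0.2": {"peerStats": {"": {"status": "down", "l3intf": "Ethernet2"}}},
            }
        }
    }
}

failures = []
for vrf_data in sample_output["vrfs"].values():
    for neighbor_data in vrf_data["ipv4Neighbors"].values():
        for peer, peer_data in neighbor_data["peerStats"].items():
            if (status := peer_data["status"]) != "up":
                message = f"bfd state for peer '{peer}' is {status} (expected up)."
                if (l3intf := peer_data.get("l3intf")) is not None and l3intf != "":
                    message += f" Interface: {l3intf}."
                failures.append(message)

print(failures)  # one failure for 10.0.0.2 with this invented payload
```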
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingProtocolModel","title":"VerifyRoutingProtocolModel","text":"

    Bases: AntaTest

    Verifies that the configured routing protocol model is the one we expect and that there is no mismatch between the configured and operating routing protocol model.

    Source code in anta/tests/routing/generic.py
    class VerifyRoutingProtocolModel(AntaTest):\n\"\"\"\n    Verifies the configured routing protocol model is the one we expect.\n    And if there is no mismatch between the configured and operating routing protocol model.\n    \"\"\"\n\n    name = \"VerifyRoutingProtocolModel\"\n    description = (\n        \"Verifies the configured routing protocol model is the expected one and if there is no mismatch between the configured and operating routing protocol model.\"\n    )\n    categories = [\"routing\", \"generic\"]\n    commands = [AntaCommand(command=\"show ip route summary\", revision=3)]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        model: Literal[\"multi-agent\", \"ribd\"] = \"multi-agent\"\n\"\"\"Expected routing protocol model\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        configured_model = command_output[\"protoModelStatus\"][\"configuredProtoModel\"]\n        operating_model = command_output[\"protoModelStatus\"][\"operatingProtoModel\"]\n        if configured_model == operating_model == self.inputs.model:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"routing model is misconfigured: configured: {configured_model} - operating: {operating_model} - expected: {self.inputs.model}\")\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingProtocolModel.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/generic.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    model: Literal[\"multi-agent\", \"ribd\"] = \"multi-agent\"\n\"\"\"Expected routing protocol model\"\"\"\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingProtocolModel.Input.model","title":"model class-attribute instance-attribute","text":"
    model: Literal['multi-agent', 'ribd'] = 'multi-agent'\n

    Expected routing protocol model

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableEntry","title":"VerifyRoutingTableEntry","text":"

    Bases: AntaTest

    This test verifies that the provided routes are present in the routing table of a specified VRF.

    Expected Results
    • success: The test will pass if the provided routes are present in the routing table.
    • failure: The test will fail if one or many provided routes are missing from the routing table.
    Source code in anta/tests/routing/generic.py
    class VerifyRoutingTableEntry(AntaTest):\n\"\"\"\n    This test verifies that the provided routes are present in the routing table of a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the provided routes are present in the routing table.\n        * failure: The test will fail if one or many provided routes are missing from the routing table.\n    \"\"\"\n\n    name = \"VerifyRoutingTableEntry\"\n    description = \"Verifies that the provided routes are present in the routing table of a specified VRF.\"\n    categories = [\"routing\", \"generic\"]\n    commands = [AntaTemplate(template=\"show ip route vrf {vrf} {route}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n        routes: List[IPv4Address]\n\"\"\"Routes to verify\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(vrf=self.inputs.vrf, route=route) for route in self.inputs.routes]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        missing_routes = []\n\n        for command in self.instance_commands:\n            if command.params and \"vrf\" in command.params and \"route\" in command.params:\n                vrf, route = command.params[\"vrf\"], command.params[\"route\"]\n                if len(routes := command.json_output[\"vrfs\"][vrf][\"routes\"]) == 0 or route != ip_interface(list(routes)[0]).ip:\n                    missing_routes.append(str(route))\n\n        if not missing_routes:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"The following route(s) are missing from the routing table of VRF {self.inputs.vrf}: {missing_routes}\")\n
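    The core comparison in this test is between each requested route and the first entry returned under vrfs[&lt;vrf&gt;].routes. The following standalone snippet replays that comparison with the ipaddress module on an invented output fragment.
```python
# Standalone sketch of the route comparison used by VerifyRoutingTableEntry,
# applied to an invented output fragment (keys mirror those used in the test).
from ipaddress import IPv4Address, ip_interface

requested = IPv4Address("10.1.0.1")

# Invented fragment of `show ip route vrf default 10.1.0.1`
output = {"vrfs": {"default": {"routes": {"10.1.0.1/32": {}}}}}

routes = output["vrfs"]["default"]["routes"]
missing = len(routes) == 0 or requested != ip_interface(list(routes)[0]).ip
print(missing)  # False: the returned entry's address matches the requested route
```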
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableEntry.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/generic.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vrf: str = \"default\"\n\"\"\"VRF context\"\"\"\n    routes: List[IPv4Address]\n\"\"\"Routes to verify\"\"\"\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableEntry.Input.routes","title":"routes instance-attribute","text":"
    routes: List[IPv4Address]\n

    Routes to verify

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableEntry.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    VRF context

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize","title":"VerifyRoutingTableSize","text":"

    Bases: AntaTest

    Verifies the size of the IP routing table (default VRF). The size should be between the two provided thresholds.

    Source code in anta/tests/routing/generic.py
    class VerifyRoutingTableSize(AntaTest):\n\"\"\"\n    Verifies the size of the IP routing table (default VRF).\n    Should be between the two provided thresholds.\n    \"\"\"\n\n    name = \"VerifyRoutingTableSize\"\n    description = \"Verifies the size of the IP routing table (default VRF). Should be between the two provided thresholds.\"\n    categories = [\"routing\", \"generic\"]\n    commands = [AntaCommand(command=\"show ip route summary\", revision=3)]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        minimum: int\n\"\"\"Expected minimum routing table (default VRF) size\"\"\"\n        maximum: int\n\"\"\"Expected maximum routing table (default VRF) size\"\"\"\n\n        @model_validator(mode=\"after\")  # type: ignore\n        def check_min_max(self) -> AntaTest.Input:\n\"\"\"Validate that maximum is greater than minimum\"\"\"\n            if self.minimum > self.maximum:\n                raise ValueError(f\"Minimum {self.minimum} is greater than maximum {self.maximum}\")\n            return self\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        total_routes = int(command_output[\"vrfs\"][\"default\"][\"totalRoutes\"])\n        if self.inputs.minimum <= total_routes <= self.inputs.maximum:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"routing-table has {total_routes} routes and not between min ({self.inputs.minimum}) and maximum ({self.inputs.maximum})\")\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/generic.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    minimum: int\n\"\"\"Expected minimum routing table (default VRF) size\"\"\"\n    maximum: int\n\"\"\"Expected maximum routing table (default VRF) size\"\"\"\n\n    @model_validator(mode=\"after\")  # type: ignore\n    def check_min_max(self) -> AntaTest.Input:\n\"\"\"Validate that maximum is greater than minimum\"\"\"\n        if self.minimum > self.maximum:\n            raise ValueError(f\"Minimum {self.minimum} is greater than maximum {self.maximum}\")\n        return self\n
    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize.Input.maximum","title":"maximum instance-attribute","text":"
    maximum: int\n

    Expected maximum routing table (default VRF) size

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize.Input.minimum","title":"minimum instance-attribute","text":"
    minimum: int\n

    Expected minimum routing table (default VRF) size

    "},{"location":"api/tests.routing.generic/#anta.tests.routing.generic.VerifyRoutingTableSize.Input.check_min_max","title":"check_min_max","text":"
    check_min_max() -> AntaTest.Input\n

    Validate that maximum is greater than minimum

    Source code in anta/tests/routing/generic.py
    @model_validator(mode=\"after\")  # type: ignore\ndef check_min_max(self) -> AntaTest.Input:\n\"\"\"Validate that maximum is greater than minimum\"\"\"\n    if self.minimum > self.maximum:\n        raise ValueError(f\"Minimum {self.minimum} is greater than maximum {self.maximum}\")\n    return self\n
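    As a hypothetical illustration, the check_min_max validator above rejects inputs where the minimum exceeds the maximum; this sketch assumes AntaTest.Input has no other mandatory fields.
```python
# Hypothetical sketch: exercising the check_min_max validator above.
from pydantic import ValidationError

from anta.tests.routing.generic import VerifyRoutingTableSize

VerifyRoutingTableSize.Input(minimum=2, maximum=20)        # accepted

try:
    VerifyRoutingTableSize.Input(minimum=200, maximum=20)  # rejected
except ValidationError as exc:
    print(exc)  # Minimum 200 is greater than maximum 20
```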
    "},{"location":"api/tests.routing.ospf/","title":"OSPF","text":""},{"location":"api/tests.routing.ospf/#anta-catalog-for-routing-ospf-tests","title":"ANTA catalog for routing-ospf tests","text":"

    OSPF test functions

    "},{"location":"api/tests.routing.ospf/#anta.tests.routing.ospf.VerifyOSPFNeighborCount","title":"VerifyOSPFNeighborCount","text":"

    Bases: AntaTest

    Verifies the number of OSPF neighbors in FULL state is the one we expect.

    Source code in anta/tests/routing/ospf.py
    class VerifyOSPFNeighborCount(AntaTest):\n\"\"\"\n    Verifies the number of OSPF neighbors in FULL state is the one we expect.\n    \"\"\"\n\n    name = \"VerifyOSPFNeighborCount\"\n    description = \"Verifies the number of OSPF neighbors in FULL state is the one we expect.\"\n    categories = [\"routing\", \"ospf\"]\n    commands = [AntaCommand(command=\"show ip ospf neighbor\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: int\n\"\"\"The expected number of OSPF neighbors in FULL state\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if (neighbor_count := _count_ospf_neighbor(command_output)) == 0:\n            self.result.is_skipped(\"no OSPF neighbor found\")\n            return\n        self.result.is_success()\n        if neighbor_count != self.inputs.number:\n            self.result.is_failure(f\"device has {neighbor_count} neighbors (expected {self.inputs.number})\")\n        not_full_neighbors = _get_not_full_ospf_neighbors(command_output)\n        print(not_full_neighbors)\n        if not_full_neighbors:\n            self.result.is_failure(f\"Some neighbors are not correctly configured: {not_full_neighbors}.\")\n
    "},{"location":"api/tests.routing.ospf/#anta.tests.routing.ospf.VerifyOSPFNeighborCount.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/routing/ospf.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: int\n\"\"\"The expected number of OSPF neighbors in FULL state\"\"\"\n
    "},{"location":"api/tests.routing.ospf/#anta.tests.routing.ospf.VerifyOSPFNeighborCount.Input.number","title":"number instance-attribute","text":"
    number: int\n

    The expected number of OSPF neighbors in FULL state

    "},{"location":"api/tests.routing.ospf/#anta.tests.routing.ospf.VerifyOSPFNeighborState","title":"VerifyOSPFNeighborState","text":"

    Bases: AntaTest

    Verifies all OSPF neighbors are in FULL state.

    Source code in anta/tests/routing/ospf.py
    class VerifyOSPFNeighborState(AntaTest):\n\"\"\"\n    Verifies all OSPF neighbors are in FULL state.\n    \"\"\"\n\n    name = \"VerifyOSPFNeighborState\"\n    description = \"Verifies all OSPF neighbors are in FULL state.\"\n    categories = [\"routing\", \"ospf\"]\n    commands = [AntaCommand(command=\"show ip ospf neighbor\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if _count_ospf_neighbor(command_output) == 0:\n            self.result.is_skipped(\"no OSPF neighbor found\")\n            return\n        self.result.is_success()\n        not_full_neighbors = _get_not_full_ospf_neighbors(command_output)\n        if not_full_neighbors:\n            self.result.is_failure(f\"Some neighbors are not correctly configured: {not_full_neighbors}.\")\n
    "},{"location":"api/tests.security/","title":"Security","text":""},{"location":"api/tests.security/#anta-catalog-for-security-tests","title":"ANTA catalog for security tests","text":"

    Test functions related to various EOS security settings

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIHttpStatus","title":"VerifyAPIHttpStatus","text":"

    Bases: AntaTest

    Verifies if eAPI HTTP server is disabled globally.

    Expected Results
    • success: The test will pass if eAPI HTTP server is disabled globally.
    • failure: The test will fail if eAPI HTTP server is NOT disabled globally.
    Source code in anta/tests/security.py
    class VerifyAPIHttpStatus(AntaTest):\n\"\"\"\n    Verifies if eAPI HTTP server is disabled globally.\n\n    Expected Results:\n        * success: The test will pass if eAPI HTTP server is disabled globally.\n        * failure: The test will fail if eAPI HTTP server is NOT disabled globally.\n    \"\"\"\n\n    name = \"VerifyAPIHttpStatus\"\n    description = \"Verifies if eAPI HTTP server is disabled globally.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management api http-commands\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"enabled\"] and not command_output[\"httpServer\"][\"running\"]:\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"eAPI HTTP server is enabled globally\")\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIHttpsSSL","title":"VerifyAPIHttpsSSL","text":"

    Bases: AntaTest

    Verifies if eAPI HTTPS server SSL profile is configured and valid.

    Expected results
    • success: The test will pass if the eAPI HTTPS server SSL profile is configured and valid.
    • failure: The test will fail if the eAPI HTTPS server SSL profile is NOT configured, misconfigured or invalid.
    Source code in anta/tests/security.py
    class VerifyAPIHttpsSSL(AntaTest):\n\"\"\"\n    Verifies if eAPI HTTPS server SSL profile is configured and valid.\n\n    Expected results:\n        * success: The test will pass if the eAPI HTTPS server SSL profile is configured and valid.\n        * failure: The test will fail if the eAPI HTTPS server SSL profile is NOT configured, misconfigured or invalid.\n    \"\"\"\n\n    name = \"VerifyAPIHttpsSSL\"\n    description = \"Verifies if eAPI HTTPS server SSL profile is configured and valid.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management api http-commands\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        profile: str\n\"\"\"SSL profile to verify\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        try:\n            if command_output[\"sslProfile\"][\"name\"] == self.inputs.profile and command_output[\"sslProfile\"][\"state\"] == \"valid\":\n                self.result.is_success()\n            else:\n                self.result.is_failure(f\"eAPI HTTPS server SSL profile ({self.inputs.profile}) is misconfigured or invalid\")\n\n        except KeyError:\n            self.result.is_failure(f\"eAPI HTTPS server SSL profile ({self.inputs.profile}) is not configured\")\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIHttpsSSL.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    profile: str\n\"\"\"SSL profile to verify\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIHttpsSSL.Input.profile","title":"profile instance-attribute","text":"
    profile: str\n

    SSL profile to verify

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv4Acl","title":"VerifyAPIIPv4Acl","text":"

    Bases: AntaTest

    Verifies if eAPI has the right number of IPv4 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if eAPI has the provided number of IPv4 ACL(s) in the specified VRF.
    • failure: The test will fail if eAPI does not have the right number of IPv4 ACL(s) in the specified VRF.
    Source code in anta/tests/security.py
    class VerifyAPIIPv4Acl(AntaTest):\n\"\"\"\n    Verifies if eAPI has the right number IPv4 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if eAPI has the provided number of IPv4 ACL(s) in the specified VRF.\n        * failure: The test will fail if eAPI has not the right number of IPv4 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifyAPIIPv4Acl\"\n    description = \"Verifies if eAPI has the right number IPv4 ACL(s) configured for a specified VRF.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management api http-commands ip access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for eAPI\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv4_acl_list = command_output[\"ipAclList\"][\"aclList\"]\n        ipv4_acl_number = len(ipv4_acl_list)\n        not_configured_acl_list = []\n        if ipv4_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} eAPI IPv4 ACL(s) in vrf {self.inputs.vrf} but got {ipv4_acl_number}\")\n            return\n        for ipv4_acl in ipv4_acl_list:\n            if self.inputs.vrf not in ipv4_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv4_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv4_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"eAPI IPv4 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
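    The test performs two checks: the ACL count must match the expected number, and every ACL must be both configured and active in the requested VRF. Here is a self-contained sketch of those checks against an invented output fragment (keys mirror those used in the test; the ACL name and VRF are placeholders).
```python
# Self-contained sketch of the two checks performed by VerifyAPIIPv4Acl,
# applied to an invented output fragment.
expected_number = 1
vrf = "MGMT"

# Invented fragment of `show management api http-commands ip access-list summary`
output = {
    "ipAclList": {
        "aclList": [
            {"name": "ACL-API", "configuredVrfs": ["MGMT"], "activeVrfs": ["MGMT"]},
        ]
    }
}

acl_list = output["ipAclList"]["aclList"]
if len(acl_list) != expected_number:
    print(f"Expected {expected_number} eAPI IPv4 ACL(s) in vrf {vrf} but got {len(acl_list)}")
else:
    not_configured = [
        acl["name"]
        for acl in acl_list
        if vrf not in acl["configuredVrfs"] or vrf not in acl["activeVrfs"]
    ]
    print(not_configured or "success")  # -> success for this invented payload
```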
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv4Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for eAPI\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv4Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv4 ACL(s)

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv4Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for eAPI

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv6Acl","title":"VerifyAPIIPv6Acl","text":"

    Bases: AntaTest

    Verifies if eAPI has the right number of IPv6 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if eAPI has the provided number of IPv6 ACL(s) in the specified VRF.
    • failure: The test will fail if eAPI does not have the right number of IPv6 ACL(s) in the specified VRF.
    • skipped: The test will be skipped if the number of IPv6 ACL(s) or VRF parameter is not provided.
    Source code in anta/tests/security.py
    class VerifyAPIIPv6Acl(AntaTest):\n\"\"\"\n    Verifies if eAPI has the right number IPv6 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if eAPI has the provided number of IPv6 ACL(s) in the specified VRF.\n        * failure: The test will fail if eAPI has not the right number of IPv6 ACL(s) in the specified VRF.\n        * skipped: The test will be skipped if the number of IPv6 ACL(s) or VRF parameter is not provided.\n    \"\"\"\n\n    name = \"VerifyAPIIPv6Acl\"\n    description = \"Verifies if eAPI has the right number IPv6 ACL(s) configured for a specified VRF.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management api http-commands ipv6 access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for eAPI\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv6_acl_list = command_output[\"ipv6AclList\"][\"aclList\"]\n        ipv6_acl_number = len(ipv6_acl_list)\n        not_configured_acl_list = []\n        if ipv6_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} eAPI IPv6 ACL(s) in vrf {self.inputs.vrf} but got {ipv6_acl_number}\")\n            return\n        for ipv6_acl in ipv6_acl_list:\n            if self.inputs.vrf not in ipv6_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv6_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv6_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"eAPI IPv6 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv6Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for eAPI\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv6Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv6 ACL(s)

    "},{"location":"api/tests.security/#anta.tests.security.VerifyAPIIPv6Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for eAPI

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv4Acl","title":"VerifySSHIPv4Acl","text":"

    Bases: AntaTest

    Verifies if the SSHD agent has the right number of IPv4 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if the SSHD agent has the provided number of IPv4 ACL(s) in the specified VRF.
    • failure: The test will fail if the SSHD agent does not have the right number of IPv4 ACL(s) in the specified VRF.
    Source code in anta/tests/security.py
    class VerifySSHIPv4Acl(AntaTest):\n\"\"\"\n    Verifies if the SSHD agent has the right number IPv4 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if the SSHD agent has the provided number of IPv4 ACL(s) in the specified VRF.\n        * failure: The test will fail if the SSHD agent has not the right number of IPv4 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySSHIPv4Acl\"\n    description = \"Verifies if the SSHD agent has IPv4 ACL(s) configured.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management ssh ip access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SSHD agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv4_acl_list = command_output[\"ipAclList\"][\"aclList\"]\n        ipv4_acl_number = len(ipv4_acl_list)\n        not_configured_acl_list = []\n        if ipv4_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} SSH IPv4 ACL(s) in vrf {self.inputs.vrf} but got {ipv4_acl_number}\")\n            return\n        for ipv4_acl in ipv4_acl_list:\n            if self.inputs.vrf not in ipv4_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv4_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv4_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"SSH IPv4 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv4Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SSHD agent\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv4Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv4 ACL(s)

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv4Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SSHD agent

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv6Acl","title":"VerifySSHIPv6Acl","text":"

    Bases: AntaTest

    Verifies if the SSHD agent has the right number of IPv6 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if the SSHD agent has the provided number of IPv6 ACL(s) in the specified VRF.
    • failure: The test will fail if the SSHD agent does not have the right number of IPv6 ACL(s) in the specified VRF.
    Source code in anta/tests/security.py
    class VerifySSHIPv6Acl(AntaTest):\n\"\"\"\n    Verifies if the SSHD agent has the right number IPv6 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if the SSHD agent has the provided number of IPv6 ACL(s) in the specified VRF.\n        * failure: The test will fail if the SSHD agent has not the right number of IPv6 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySSHIPv6Acl\"\n    description = \"Verifies if the SSHD agent has IPv6 ACL(s) configured.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management ssh ipv6 access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SSHD agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv6_acl_list = command_output[\"ipv6AclList\"][\"aclList\"]\n        ipv6_acl_number = len(ipv6_acl_list)\n        not_configured_acl_list = []\n        if ipv6_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} SSH IPv6 ACL(s) in vrf {self.inputs.vrf} but got {ipv6_acl_number}\")\n            return\n        for ipv6_acl in ipv6_acl_list:\n            if self.inputs.vrf not in ipv6_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv6_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv6_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"SSH IPv6 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv6Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/security.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SSHD agent\"\"\"\n
    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv6Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv6 ACL(s)

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHIPv6Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SSHD agent

    "},{"location":"api/tests.security/#anta.tests.security.VerifySSHStatus","title":"VerifySSHStatus","text":"

    Bases: AntaTest

    Verifies if the SSHD agent is disabled in the default VRF.

    Expected Results
    • success: The test will pass if the SSHD agent is disabled in the default VRF.
    • failure: The test will fail if the SSHD agent is NOT disabled in the default VRF.
    Source code in anta/tests/security.py
    class VerifySSHStatus(AntaTest):\n\"\"\"\n    Verifies if the SSHD agent is disabled in the default VRF.\n\n    Expected Results:\n        * success: The test will pass if the SSHD agent is disabled in the default VRF.\n        * failure: The test will fail if the SSHD agent is NOT disabled in the default VRF.\n    \"\"\"\n\n    name = \"VerifySSHStatus\"\n    description = \"Verifies if the SSHD agent is disabled in the default VRF.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management ssh\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n\n        line = [line for line in command_output.split(\"\\n\") if line.startswith(\"SSHD status\")][0]\n        status = line.split(\"is \")[1]\n\n        if status == \"disabled\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(line)\n
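
    As an illustration only, the sketch below replays the same line parsing on a made-up 'show management ssh' text output; the sample wording is hypothetical.

    # Sketch only: VerifySSHStatus parsing replayed on a hypothetical text output\ncommand_output = 'SSHD status for Default VRF is disabled\\nSSH connection limit is 50\\n'\nline = [line for line in command_output.split('\\n') if line.startswith('SSHD status')][0]\nstatus = line.split('is ')[1]\nprint('success' if status == 'disabled' else line)\n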
    "},{"location":"api/tests.security/#anta.tests.security.VerifyTelnetStatus","title":"VerifyTelnetStatus","text":"

    Bases: AntaTest

    Verifies if Telnet is disabled in the default VRF.

    Expected Results
    • success: The test will pass if Telnet is disabled in the default VRF.
    • failure: The test will fail if Telnet is NOT disabled in the default VRF.
    Source code in anta/tests/security.py
    class VerifyTelnetStatus(AntaTest):\n\"\"\"\n    Verifies if Telnet is disabled in the default VRF.\n\n    Expected Results:\n        * success: The test will pass if Telnet is disabled in the default VRF.\n        * failure: The test will fail if Telnet is NOT disabled in the default VRF.\n    \"\"\"\n\n    name = \"VerifyTelnetStatus\"\n    description = \"Verifies if Telnet is disabled in the default VRF.\"\n    categories = [\"security\"]\n    commands = [AntaCommand(command=\"show management telnet\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"serverState\"] == \"disabled\":\n            self.result.is_success()\n        else:\n            self.result.is_failure(\"Telnet status for Default VRF is enabled\")\n
    "},{"location":"api/tests.snmp/","title":"SNMP","text":""},{"location":"api/tests.snmp/#anta-catalog-for-snmp-tests","title":"ANTA catalog for SNMP tests","text":"

    Test functions related to the EOS various SNMP settings

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv4Acl","title":"VerifySnmpIPv4Acl","text":"

    Bases: AntaTest

    Verifies if the SNMP agent has the right number of IPv4 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if the SNMP agent has the provided number of IPv4 ACL(s) in the specified VRF.
    • failure: The test will fail if the SNMP agent does not have the right number of IPv4 ACL(s) in the specified VRF.
    Source code in anta/tests/snmp.py
    class VerifySnmpIPv4Acl(AntaTest):\n\"\"\"\n    Verifies if the SNMP agent has the right number IPv4 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if the SNMP agent has the provided number of IPv4 ACL(s) in the specified VRF.\n        * failure: The test will fail if the SNMP agent has not the right number of IPv4 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySnmpIPv4Acl\"\n    description = \"Verifies if the SNMP agent has IPv4 ACL(s) configured.\"\n    categories = [\"snmp\"]\n    commands = [AntaCommand(command=\"show snmp ipv4 access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv4_acl_list = command_output[\"ipAclList\"][\"aclList\"]\n        ipv4_acl_number = len(ipv4_acl_list)\n        not_configured_acl_list = []\n        if ipv4_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} SNMP IPv4 ACL(s) in vrf {self.inputs.vrf} but got {ipv4_acl_number}\")\n            return\n        for ipv4_acl in ipv4_acl_list:\n            if self.inputs.vrf not in ipv4_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv4_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv4_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"SNMP IPv4 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv4Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/snmp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv4 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv4Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv4 ACL(s)

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv4Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SNMP agent

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv6Acl","title":"VerifySnmpIPv6Acl","text":"

    Bases: AntaTest

    Verifies if the SNMP agent has the right number of IPv6 ACL(s) configured for a specified VRF.

    Expected results
    • success: The test will pass if the SNMP agent has the provided number of IPv6 ACL(s) in the specified VRF.
    • failure: The test will fail if the SNMP agent does not have the right number of IPv6 ACL(s) in the specified VRF.
    Source code in anta/tests/snmp.py
    class VerifySnmpIPv6Acl(AntaTest):\n\"\"\"\n    Verifies if the SNMP agent has the right number IPv6 ACL(s) configured for a specified VRF.\n\n    Expected results:\n        * success: The test will pass if the SNMP agent has the provided number of IPv6 ACL(s) in the specified VRF.\n        * failure: The test will fail if the SNMP agent has not the right number of IPv6 ACL(s) in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySnmpIPv6Acl\"\n    description = \"Verifies if the SNMP agent has IPv6 ACL(s) configured.\"\n    categories = [\"snmp\"]\n    commands = [AntaCommand(command=\"show snmp ipv6 access-list summary\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        ipv6_acl_list = command_output[\"ipv6AclList\"][\"aclList\"]\n        ipv6_acl_number = len(ipv6_acl_list)\n        not_configured_acl_list = []\n        if ipv6_acl_number != self.inputs.number:\n            self.result.is_failure(f\"Expected {self.inputs.number} SNMP IPv6 ACL(s) in vrf {self.inputs.vrf} but got {ipv6_acl_number}\")\n            return\n        for ipv6_acl in ipv6_acl_list:\n            if self.inputs.vrf not in ipv6_acl[\"configuredVrfs\"] or self.inputs.vrf not in ipv6_acl[\"activeVrfs\"]:\n                not_configured_acl_list.append(ipv6_acl[\"name\"])\n        if not_configured_acl_list:\n            self.result.is_failure(f\"SNMP IPv6 ACL(s) not configured or active in vrf {self.inputs.vrf}: {not_configured_acl_list}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv6Acl.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/snmp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    number: conint(ge=0)  # type:ignore\n\"\"\"The number of expected IPv6 ACL(s)\"\"\"\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv6Acl.Input.number","title":"number instance-attribute","text":"
    number: conint(ge=0)\n

    The number of expected IPv6 ACL(s)

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpIPv6Acl.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SNMP agent

    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpStatus","title":"VerifySnmpStatus","text":"

    Bases: AntaTest

    Verifies whether the SNMP agent is enabled in a specified VRF.

    Expected Results
    • success: The test will pass if the SNMP agent is enabled in the specified VRF.
    • failure: The test will fail if the SNMP agent is disabled in the specified VRF.
    Source code in anta/tests/snmp.py
    class VerifySnmpStatus(AntaTest):\n\"\"\"\n    Verifies whether the SNMP agent is enabled in a specified VRF.\n\n    Expected Results:\n        * success: The test will pass if the SNMP agent is enabled in the specified VRF.\n        * failure: The test will fail if the SNMP agent is disabled in the specified VRF.\n    \"\"\"\n\n    name = \"VerifySnmpStatus\"\n    description = \"Verifies if the SNMP agent is enabled.\"\n    categories = [\"snmp\"]\n    commands = [AntaCommand(command=\"show snmp\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"enabled\"] and self.inputs.vrf in command_output[\"vrfs\"][\"snmpVrfs\"]:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"SNMP agent disabled in vrf {self.inputs.vrf}\")\n
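
    A minimal sketch (not from the ANTA code base) of the same check against a hypothetical 'show snmp' payload:

    # Sketch only: VerifySnmpStatus check replayed on a hypothetical payload\nvrf = 'default'\noutput = {'enabled': True, 'vrfs': {'snmpVrfs': ['default', 'MGMT']}}\nif output['enabled'] and vrf in output['vrfs']['snmpVrfs']:\n    print('success')\nelse:\n    print(f'SNMP agent disabled in vrf {vrf}')\n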
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpStatus.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/snmp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vrf: str = \"default\"\n\"\"\"The name of the VRF in which to check for the SNMP agent\"\"\"\n
    "},{"location":"api/tests.snmp/#anta.tests.snmp.VerifySnmpStatus.Input.vrf","title":"vrf class-attribute instance-attribute","text":"
    vrf: str = 'default'\n

    The name of the VRF in which to check for the SNMP agent

    "},{"location":"api/tests.software/","title":"Software","text":""},{"location":"api/tests.software/#anta-catalog-for-software-tests","title":"ANTA catalog for software tests","text":"

    Test functions related to the EOS software

    "},{"location":"api/tests.software/#anta.tests.software.VerifyEOSExtensions","title":"VerifyEOSExtensions","text":"

    Bases: AntaTest

    Verifies all EOS extensions installed on the device are enabled for boot persistence.

    Source code in anta/tests/software.py
    class VerifyEOSExtensions(AntaTest):\n\"\"\"\n    Verifies all EOS extensions installed on the device are enabled for boot persistence.\n    \"\"\"\n\n    name = \"VerifyEOSExtensions\"\n    description = \"Verifies all EOS extensions installed on the device are enabled for boot persistence.\"\n    categories = [\"software\"]\n    commands = [AntaCommand(command=\"show extensions\"), AntaCommand(command=\"show boot-extensions\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        boot_extensions = []\n        show_extensions_command_output = self.instance_commands[0].json_output\n        show_boot_extensions_command_output = self.instance_commands[1].json_output\n        installed_extensions = [\n            extension for extension, extension_data in show_extensions_command_output[\"extensions\"].items() if extension_data[\"status\"] == \"installed\"\n        ]\n        for extension in show_boot_extensions_command_output[\"extensions\"]:\n            extension = extension.strip(\"\\n\")\n            if extension != \"\":\n                boot_extensions.append(extension)\n        installed_extensions.sort()\n        boot_extensions.sort()\n        if installed_extensions == boot_extensions:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Missing EOS extensions: installed {installed_extensions} / configured: {boot_extensions}\")\n
    "},{"location":"api/tests.software/#anta.tests.software.VerifyEOSVersion","title":"VerifyEOSVersion","text":"

    Bases: AntaTest

    Verifies the device is running one of the allowed EOS versions.

    Source code in anta/tests/software.py
    class VerifyEOSVersion(AntaTest):\n\"\"\"\n    Verifies the device is running one of the allowed EOS version.\n    \"\"\"\n\n    name = \"VerifyEOSVersion\"\n    description = \"Verifies the device is running one of the allowed EOS version.\"\n    categories = [\"software\"]\n    commands = [AntaCommand(command=\"show version\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        versions: List[str]\n\"\"\"List of allowed EOS versions\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"version\"] in self.inputs.versions:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f'device is running version {command_output[\"version\"]} not in expected versions: {self.inputs.versions}')\n
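
    For illustration, the version check boils down to a simple membership test; the version strings below are hypothetical.

    # Sketch only: VerifyEOSVersion comparison with hypothetical version strings\nallowed_versions = ['4.28.3M', '4.29.2F']\nrunning_version = '4.28.3M'  # would come from the 'show version' output\nif running_version in allowed_versions:\n    print('success')\nelse:\n    print(f'device is running version {running_version} not in expected versions: {allowed_versions}')\n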
    "},{"location":"api/tests.software/#anta.tests.software.VerifyEOSVersion.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/software.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    versions: List[str]\n\"\"\"List of allowed EOS versions\"\"\"\n
    "},{"location":"api/tests.software/#anta.tests.software.VerifyEOSVersion.Input.versions","title":"versions instance-attribute","text":"
    versions: List[str]\n

    List of allowed EOS versions

    "},{"location":"api/tests.software/#anta.tests.software.VerifyTerminAttrVersion","title":"VerifyTerminAttrVersion","text":"

    Bases: AntaTest

    Verifies the device is running one of the allowed TerminAttr versions.

    Source code in anta/tests/software.py
    class VerifyTerminAttrVersion(AntaTest):\n\"\"\"\n    Verifies the device is running one of the allowed TerminAttr version.\n    \"\"\"\n\n    name = \"VerifyTerminAttrVersion\"\n    description = \"Verifies the device is running one of the allowed TerminAttr version.\"\n    categories = [\"software\"]\n    commands = [AntaCommand(command=\"show version detail\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        versions: List[str]\n\"\"\"List of allowed TerminAttr versions\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        command_output_data = command_output[\"details\"][\"packages\"][\"TerminAttr-core\"][\"version\"]\n        if command_output_data in self.inputs.versions:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"device is running TerminAttr version {command_output_data} and is not in the allowed list: {self.inputs.versions}\")\n
    "},{"location":"api/tests.software/#anta.tests.software.VerifyTerminAttrVersion.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/software.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    versions: List[str]\n\"\"\"List of allowed TerminAttr versions\"\"\"\n
    "},{"location":"api/tests.software/#anta.tests.software.VerifyTerminAttrVersion.Input.versions","title":"versions instance-attribute","text":"
    versions: List[str]\n

    List of allowed TerminAttr versions

    "},{"location":"api/tests.stp/","title":"STP","text":""},{"location":"api/tests.stp/#anta-catalog-for-stp-tests","title":"ANTA catalog for STP tests","text":"

    Test functions related to various Spanning Tree Protocol (STP) settings

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPBlockedPorts","title":"VerifySTPBlockedPorts","text":"

    Bases: AntaTest

    Verifies there are no STP blocked ports.

    Expected Results
    • success: The test will pass if there are NO ports blocked by STP.
    • failure: The test will fail if there are ports blocked by STP.
    Source code in anta/tests/stp.py
    class VerifySTPBlockedPorts(AntaTest):\n\"\"\"\n    Verifies there is no STP blocked ports.\n\n    Expected Results:\n        * success: The test will pass if there are NO ports blocked by STP.\n        * failure: The test will fail if there are ports blocked by STP.\n    \"\"\"\n\n    name = \"VerifySTPBlockedPorts\"\n    description = \"Verifies there is no STP blocked ports.\"\n    categories = [\"stp\"]\n    commands = [AntaCommand(command=\"show spanning-tree blockedports\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if not (stp_instances := command_output[\"spanningTreeInstances\"]):\n            self.result.is_success()\n        else:\n            for key, value in stp_instances.items():\n                stp_instances[key] = value.pop(\"spanningTreeBlockedPorts\")\n            self.result.is_failure(f\"The following ports are blocked by STP: {stp_instances}\")\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPCounters","title":"VerifySTPCounters","text":"

    Bases: AntaTest

    Verifies there are no errors in STP BPDU packets.

    Expected Results
    • success: The test will pass if there are NO STP BPDU packet errors under all interfaces participating in STP.
    • failure: The test will fail if there are STP BPDU packet errors on one or more interface(s).
    Source code in anta/tests/stp.py
    class VerifySTPCounters(AntaTest):\n\"\"\"\n    Verifies there is no errors in STP BPDU packets.\n\n    Expected Results:\n        * success: The test will pass if there are NO STP BPDU packet errors under all interfaces participating in STP.\n        * failure: The test will fail if there are STP BPDU packet errors on one or many interface(s).\n    \"\"\"\n\n    name = \"VerifySTPCounters\"\n    description = \"Verifies there is no errors in STP BPDU packets.\"\n    categories = [\"stp\"]\n    commands = [AntaCommand(command=\"show spanning-tree counters\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        interfaces_with_errors = [\n            interface for interface, counters in command_output[\"interfaces\"].items() if counters[\"bpduTaggedError\"] or counters[\"bpduOtherError\"] != 0\n        ]\n        if interfaces_with_errors:\n            self.result.is_failure(f\"The following interfaces have STP BPDU packet errors: {interfaces_with_errors}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPForwardingPorts","title":"VerifySTPForwardingPorts","text":"

    Bases: AntaTest

    Verifies that all interfaces are in a forwarding state for a provided list of VLAN(s).

    Expected Results
    • success: The test will pass if all interfaces are in a forwarding state for the specified VLAN(s).
    • failure: The test will fail if one or more interfaces are NOT in a forwarding state in the specified VLAN(s).
    Source code in anta/tests/stp.py
    class VerifySTPForwardingPorts(AntaTest):\n\"\"\"\n    Verifies that all interfaces are in a forwarding state for a provided list of VLAN(s).\n\n    Expected Results:\n        * success: The test will pass if all interfaces are in a forwarding state for the specified VLAN(s).\n        * failure: The test will fail if one or many interfaces are NOT in a forwarding state in the specified VLAN(s).\n    \"\"\"\n\n    name = \"VerifySTPForwardingPorts\"\n    description = \"Verifies that all interfaces are forwarding for a provided list of VLAN(s).\"\n    categories = [\"stp\"]\n    commands = [AntaTemplate(template=\"show spanning-tree topology vlan {vlan} status\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        vlans: List[Vlan]\n\"\"\"List of VLAN on which to verify forwarding states\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(vlan=vlan) for vlan in self.inputs.vlans]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        not_configured = []\n        not_forwarding = []\n        for command in self.instance_commands:\n            if command.params and \"vlan\" in command.params:\n                vlan_id = command.params[\"vlan\"]\n            if not (topologies := get_value(command.json_output, \"topologies\")):\n                not_configured.append(vlan_id)\n            else:\n                for value in topologies.values():\n                    if int(vlan_id) in value[\"vlans\"]:\n                        interfaces_not_forwarding = [interface for interface, state in value[\"interfaces\"].items() if state[\"state\"] != \"forwarding\"]\n                if interfaces_not_forwarding:\n                    not_forwarding.append({f\"VLAN {vlan_id}\": interfaces_not_forwarding})\n        if not_configured:\n            self.result.is_failure(f\"STP instance is not configured for the following VLAN(s): {not_configured}\")\n        if not_forwarding:\n            self.result.is_failure(f\"The following VLAN(s) have interface(s) that are not in a fowarding state: {not_forwarding}\")\n        if not not_configured and not interfaces_not_forwarding:\n            self.result.is_success()\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPForwardingPorts.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/stp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    vlans: List[Vlan]\n\"\"\"List of VLAN on which to verify forwarding states\"\"\"\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPForwardingPorts.Input.vlans","title":"vlans instance-attribute","text":"
    vlans: List[Vlan]\n

    List of VLAN on which to verify forwarding states

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPMode","title":"VerifySTPMode","text":"

    Bases: AntaTest

    Verifies the configured STP mode for a provided list of VLAN(s).

    Expected Results
    • success: The test will pass if the STP mode is configured properly in the specified VLAN(s).
    • failure: The test will fail if the STP mode is NOT configured properly for one or more specified VLAN(s).
    Source code in anta/tests/stp.py
    class VerifySTPMode(AntaTest):\n\"\"\"\n    Verifies the configured STP mode for a provided list of VLAN(s).\n\n    Expected Results:\n        * success: The test will pass if the STP mode is configured properly in the specified VLAN(s).\n        * failure: The test will fail if the STP mode is NOT configured properly for one or more specified VLAN(s).\n    \"\"\"\n\n    name = \"VerifySTPMode\"\n    description = \"Verifies the configured STP mode for a provided list of VLAN(s).\"\n    categories = [\"stp\"]\n    commands = [AntaTemplate(template=\"show spanning-tree vlan {vlan}\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        mode: Literal[\"mstp\", \"rstp\", \"rapidPvst\"] = \"mstp\"\n\"\"\"STP mode to verify\"\"\"\n        vlans: List[Vlan]\n\"\"\"List of VLAN on which to verify STP mode\"\"\"\n\n    def render(self, template: AntaTemplate) -> list[AntaCommand]:\n        return [template.render(vlan=vlan) for vlan in self.inputs.vlans]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        not_configured = []\n        wrong_stp_mode = []\n        for command in self.instance_commands:\n            if command.params and \"vlan\" in command.params:\n                vlan_id = command.params[\"vlan\"]\n            if not (stp_mode := get_value(command.json_output, f\"spanningTreeVlanInstances.{vlan_id}.spanningTreeVlanInstance.protocol\")):\n                not_configured.append(vlan_id)\n            elif stp_mode != self.inputs.mode:\n                wrong_stp_mode.append(vlan_id)\n        if not_configured:\n            self.result.is_failure(f\"STP mode '{self.inputs.mode}' not configured for the following VLAN(s): {not_configured}\")\n        if wrong_stp_mode:\n            self.result.is_failure(f\"Wrong STP mode configured for the following VLAN(s): {wrong_stp_mode}\")\n        if not not_configured and not wrong_stp_mode:\n            self.result.is_success()\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPMode.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/stp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    mode: Literal[\"mstp\", \"rstp\", \"rapidPvst\"] = \"mstp\"\n\"\"\"STP mode to verify\"\"\"\n    vlans: List[Vlan]\n\"\"\"List of VLAN on which to verify STP mode\"\"\"\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPMode.Input.mode","title":"mode class-attribute instance-attribute","text":"
    mode: Literal['mstp', 'rstp', 'rapidPvst'] = 'mstp'\n

    STP mode to verify

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPMode.Input.vlans","title":"vlans instance-attribute","text":"
    vlans: List[Vlan]\n

    List of VLAN on which to verify STP mode

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPRootPriority","title":"VerifySTPRootPriority","text":"

    Bases: AntaTest

    Verifies the STP root priority for a provided list of VLAN or MST instance ID(s).

    Expected Results
    • success: The test will pass if the STP root priority is configured properly for the specified VLAN or MST instance ID(s).
    • failure: The test will fail if the STP root priority is NOT configured properly for the specified VLAN or MST instance ID(s).
    Source code in anta/tests/stp.py
    class VerifySTPRootPriority(AntaTest):\n\"\"\"\n    Verifies the STP root priority for a provided list of VLAN or MST instance ID(s).\n\n    Expected Results:\n        * success: The test will pass if the STP root priority is configured properly for the specified VLAN or MST instance ID(s).\n        * failure: The test will fail if the STP root priority is NOT configured properly for the specified VLAN or MST instance ID(s).\n    \"\"\"\n\n    name = \"VerifySTPRootPriority\"\n    description = \"Verifies the STP root priority for a provided list of VLAN or MST instance ID(s).\"\n    categories = [\"stp\"]\n    commands = [AntaCommand(command=\"show spanning-tree root detail\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        priority: int\n\"\"\"STP root priority to verify\"\"\"\n        instances: List[Vlan] = []\n\"\"\"List of VLAN or MST instance ID(s). If empty, ALL VLAN or MST instance ID(s) will be verified.\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if not (stp_instances := command_output[\"instances\"]):\n            self.result.is_failure(\"No STP instances configured\")\n            return\n        for instance in stp_instances:\n            if instance.startswith(\"MST\"):\n                prefix = \"MST\"\n                break\n            if instance.startswith(\"VL\"):\n                prefix = \"VL\"\n                break\n        check_instances = [f\"{prefix}{instance_id}\" for instance_id in self.inputs.instances] if self.inputs.instances else command_output[\"instances\"].keys()\n        wrong_priority_instances = [\n            instance for instance in check_instances if get_value(command_output, f\"instances.{instance}.rootBridge.priority\") != self.inputs.priority\n        ]\n        if wrong_priority_instances:\n            self.result.is_failure(f\"The following instance(s) have the wrong STP root priority configured: {wrong_priority_instances}\")\n        else:\n            self.result.is_success()\n
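
    The sketch below (not part of ANTA) replays the prefix detection and priority comparison on a hypothetical 'show spanning-tree root detail' payload.

    # Sketch only: VerifySTPRootPriority logic replayed on a hypothetical payload\nexpected_priority, instances = 32768, [10, 20]\noutput = {'instances': {'VL10': {'rootBridge': {'priority': 32768}}, 'VL20': {'rootBridge': {'priority': 8192}}}}\nprefix = 'MST' if next(iter(output['instances'])).startswith('MST') else 'VL'\ncheck_instances = [f'{prefix}{instance_id}' for instance_id in instances] if instances else list(output['instances'])\nwrong = [i for i in check_instances if output['instances'][i]['rootBridge']['priority'] != expected_priority]\nprint('success' if not wrong else f'The following instance(s) have the wrong STP root priority configured: {wrong}')\n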
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPRootPriority.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/stp.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    priority: int\n\"\"\"STP root priority to verify\"\"\"\n    instances: List[Vlan] = []\n\"\"\"List of VLAN or MST instance ID(s). If empty, ALL VLAN or MST instance ID(s) will be verified.\"\"\"\n
    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPRootPriority.Input.instances","title":"instances class-attribute instance-attribute","text":"
    instances: List[Vlan] = []\n

    List of VLAN or MST instance ID(s). If empty, ALL VLAN or MST instance ID(s) will be verified.

    "},{"location":"api/tests.stp/#anta.tests.stp.VerifySTPRootPriority.Input.priority","title":"priority instance-attribute","text":"
    priority: int\n

    STP root priority to verify

    "},{"location":"api/tests.system/","title":"System","text":""},{"location":"api/tests.system/#anta-catalog-for-system-tests","title":"ANTA catalog for system tests","text":"

    Test functions related to system-level features and protocols

    "},{"location":"api/tests.system/#anta.tests.system.VerifyAgentLogs","title":"VerifyAgentLogs","text":"

    Bases: AntaTest

    This test verifies that no agent crash reports are present on the device.

    Expected Results
    • success: The test will pass if there is NO agent crash reported.
    • failure: The test will fail if any agent crashes are reported.
    Source code in anta/tests/system.py
    class VerifyAgentLogs(AntaTest):\n\"\"\"\n    This test verifies that no agent crash reports are present on the device.\n\n    Expected Results:\n      * success: The test will pass if there is NO agent crash reported.\n      * failure: The test will fail if any agent crashes are reported.\n    \"\"\"\n\n    name = \"VerifyAgentLogs\"\n    description = \"This test verifies that no agent crash reports are present on the device.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show agent logs crash\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n        if len(command_output) == 0:\n            self.result.is_success()\n        else:\n            pattern = re.compile(r\"^===> (.*?) <===$\", re.MULTILINE)\n            agents = \"\\n * \".join(pattern.findall(command_output))\n            self.result.is_failure(f\"Device has reported agent crashes:\\n * {agents}\")\n
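
    For illustration, the sketch below applies the same regular expression to a hypothetical (made-up) crash log listing to extract the reporting agents.

    # Sketch only: VerifyAgentLogs crash extraction on a hypothetical text output\nimport re\n\ncommand_output = '===> /var/log/agents/Rib-123 Thu Sep 21 10:00:00 2023 <===\\ntraceback...\\n'\nif len(command_output) == 0:\n    print('success')\nelse:\n    pattern = re.compile(r'^===> (.*?) <===$', re.MULTILINE)\n    agents = '\\n * '.join(pattern.findall(command_output))\n    print(f'Device has reported agent crashes:\\n * {agents}')\n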
    "},{"location":"api/tests.system/#anta.tests.system.VerifyCPUUtilization","title":"VerifyCPUUtilization","text":"

    Bases: AntaTest

    This test verifies whether the CPU utilization is below 75%.

    Expected Results
    • success: The test will pass if the CPU utilization is below 75%.
    • failure: The test will fail if the CPU utilization is over 75%.
    Source code in anta/tests/system.py
    class VerifyCPUUtilization(AntaTest):\n\"\"\"\n    This test verifies whether the CPU utilization is below 75%.\n\n    Expected Results:\n      * success: The test will pass if the CPU utilization is below 75%.\n      * failure: The test will fail if the CPU utilization is over 75%.\n    \"\"\"\n\n    name = \"VerifyCPUUtilization\"\n    description = \"This test verifies whether the CPU utilization is below 75%.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show processes top once\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        command_output_data = command_output[\"cpuInfo\"][\"%Cpu(s)\"][\"idle\"]\n        if command_output_data > 25:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device has reported a high CPU utilization: {100 - command_output_data}%\")\n
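
    In other words, the test passes when the reported idle percentage stays above 25% (i.e. utilization below 75%); a quick standalone sketch with a hypothetical value:

    # Sketch only: VerifyCPUUtilization threshold with a hypothetical idle value\nidle = 71.2  # '%Cpu(s)' idle figure from 'show processes top once'\nif idle > 25:\n    print('success')\nelse:\n    print(f'Device has reported a high CPU utilization: {100 - idle}%')\n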
    "},{"location":"api/tests.system/#anta.tests.system.VerifyCoredump","title":"VerifyCoredump","text":"

    Bases: AntaTest

    This test verifies if there are core dump files in the /var/core directory.

    Expected Results
    • success: The test will pass if there are NO core dump(s) in /var/core.
    • failure: The test will fail if there are core dump(s) in /var/core.
    Note
    • This test will NOT check for minidump(s) generated by certain agents in /var/core/minidump.
    Source code in anta/tests/system.py
    class VerifyCoredump(AntaTest):\n\"\"\"\n    This test verifies if there are core dump files in the /var/core directory.\n\n    Expected Results:\n      * success: The test will pass if there are NO core dump(s) in /var/core.\n      * failure: The test will fail if there are core dump(s) in /var/core.\n\n    Note:\n      * This test will NOT check for minidump(s) generated by certain agents in /var/core/minidump.\n    \"\"\"\n\n    name = \"VerifyCoredump\"\n    description = \"This test verifies if there are core dump files in the /var/core directory.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show system coredump\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        core_files = command_output[\"coreFiles\"]\n        if \"minidump\" in core_files:\n            core_files.remove(\"minidump\")\n        if not core_files:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Core dump(s) have been found: {core_files}\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyFileSystemUtilization","title":"VerifyFileSystemUtilization","text":"

    Bases: AntaTest

    This test verifies that no partition is utilizing more than 75% of its disk space.

    Expected Results
    • success: The test will pass if all partitions are using less than 75% of their disk space.
    • failure: The test will fail if any partition is using more than 75% of its disk space.
    Source code in anta/tests/system.py
    class VerifyFileSystemUtilization(AntaTest):\n\"\"\"\n    This test verifies that no partition is utilizing more than 75% of its disk space.\n\n    Expected Results:\n      * success: The test will pass if all partitions are using less than 75% of its disk space.\n      * failure: The test will fail if any partitions are using more than 75% of its disk space.\n    \"\"\"\n\n    name = \"VerifyFileSystemUtilization\"\n    description = \"This test verifies that no partition is utilizing more than 75% of its disk space.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"bash timeout 10 df -h\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n        self.result.is_success()\n        for line in command_output.split(\"\\n\")[1:]:\n            if \"loop\" not in line and len(line) > 0 and (percentage := int(line.split()[4].replace(\"%\", \"\"))) > 75:\n                self.result.is_failure(f\"Mount point {line} is higher than 75%: reported {percentage}%\")\n
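
    The sketch below (illustration only) applies the same line parsing to a hypothetical 'df -h' output: the fifth column is read as a percentage and any non-loop mount above 75% is reported.

    # Sketch only: VerifyFileSystemUtilization parsing on a hypothetical 'df -h' output\ntext = 'Filesystem Size Used Avail Use% Mounted on\\n/dev/sda1 3.8G 3.2G 0.4G 85% /mnt/flash\\nnone 1.5G 0.1G 1.4G 7% /var/core\\n'\nfor line in text.split('\\n')[1:]:\n    if 'loop' not in line and len(line) > 0 and (percentage := int(line.split()[4].replace('%', ''))) > 75:\n        print(f'Mount point {line} is higher than 75%: reported {percentage}%')\n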
    "},{"location":"api/tests.system/#anta.tests.system.VerifyMemoryUtilization","title":"VerifyMemoryUtilization","text":"

    Bases: AntaTest

    This test verifies whether the memory utilization is below 75%.

    Expected Results
    • success: The test will pass if the memory utilization is below 75%.
    • failure: The test will fail if the memory utilization is over 75%.
    Source code in anta/tests/system.py
    class VerifyMemoryUtilization(AntaTest):\n\"\"\"\n    This test verifies whether the memory utilization is below 75%.\n\n    Expected Results:\n      * success: The test will pass if the memory utilization is below 75%.\n      * failure: The test will fail if the memory utilization is over 75%.\n    \"\"\"\n\n    name = \"VerifyMemoryUtilization\"\n    description = \"This test verifies whether the memory utilization is below 75%.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show version\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        memory_usage = command_output[\"memFree\"] / command_output[\"memTotal\"]\n        if memory_usage > 0.25:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device has reported a high memory usage: {(1 - memory_usage)*100:.2f}%\")\n
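
    As a worked example, the check passes as long as free memory is more than 25% of total memory; the figures below are hypothetical.

    # Sketch only: VerifyMemoryUtilization ratio check with hypothetical figures\nmem_free, mem_total = 2458112, 8099732\nmemory_usage = mem_free / mem_total\nif memory_usage > 0.25:\n    print('success')\nelse:\n    print(f'Device has reported a high memory usage: {(1 - memory_usage) * 100:.2f}%')\n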
    "},{"location":"api/tests.system/#anta.tests.system.VerifyNTP","title":"VerifyNTP","text":"

    Bases: AntaTest

    This test verifies that the Network Time Protocol (NTP) is synchronized.

    Expected Results
    • success: The test will pass if NTP is synchronized.
    • failure: The test will fail if NTP is NOT synchronized.
    Source code in anta/tests/system.py
    class VerifyNTP(AntaTest):\n\"\"\"\n    This test verifies that the Network Time Protocol (NTP) is synchronized.\n\n    Expected Results:\n      * success: The test will pass if the NTP is synchronised.\n      * failure: The test will fail if the NTP is NOT synchronised.\n    \"\"\"\n\n    name = \"VerifyNTP\"\n    description = \"This test verifies if NTP is synchronised.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show ntp status\", ofmt=\"text\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].text_output\n        if command_output.split(\"\\n\")[0].split(\" \")[0] == \"synchronised\":\n            self.result.is_success()\n        else:\n            data = command_output.split(\"\\n\")[0]\n            self.result.is_failure(f\"The device is not synchronized with the configured NTP server(s): '{data}'\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyReloadCause","title":"VerifyReloadCause","text":"

    Bases: AntaTest

    This test verifies the last reload cause of the device.

    Expected results
    • success: The test will pass if there are NO reload causes or if the last reload was caused by the user or after an FPGA upgrade.
    • failure: The test will fail if the last reload was NOT caused by the user or after an FPGA upgrade.
    • error: The test will report an error if the reload cause is NOT available.
    Source code in anta/tests/system.py
    class VerifyReloadCause(AntaTest):\n\"\"\"\n    This test verifies the last reload cause of the device.\n\n    Expected results:\n      * success: The test will pass if there are NO reload causes or if the last reload was caused by the user or after an FPGA upgrade.\n      * failure: The test will fail if the last reload was NOT caused by the user or after an FPGA upgrade.\n      * error: The test will report an error if the reload cause is NOT available.\n    \"\"\"\n\n    name = \"VerifyReloadCause\"\n    description = \"This test verifies the last reload cause of the device.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show reload cause\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if \"resetCauses\" not in command_output.keys():\n            self.result.is_error(message=\"No reload causes available\")\n            return\n        if len(command_output[\"resetCauses\"]) == 0:\n            # No reload causes\n            self.result.is_success()\n            return\n        reset_causes = command_output[\"resetCauses\"]\n        command_output_data = reset_causes[0].get(\"description\")\n        if command_output_data in [\n            \"Reload requested by the user.\",\n            \"Reload requested after FPGA upgrade\",\n        ]:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Reload cause is: '{command_output_data}'\")\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyUptime","title":"VerifyUptime","text":"

    Bases: AntaTest

    This test verifies if the device uptime is higher than the provided minimum uptime value.

    Expected Results
    • success: The test will pass if the device uptime is higher than the provided value.
    • failure: The test will fail if the device uptime is lower than the provided value.
    Source code in anta/tests/system.py
    class VerifyUptime(AntaTest):\n\"\"\"\n    This test verifies if the device uptime is higher than the provided minimum uptime value.\n\n    Expected Results:\n      * success: The test will pass if the device uptime is higher than the provided value.\n      * failure: The test will fail if the device uptime is lower than the provided value.\n    \"\"\"\n\n    name = \"VerifyUptime\"\n    description = \"This test verifies if the device uptime is higher than the provided minimum uptime value.\"\n    categories = [\"system\"]\n    commands = [AntaCommand(command=\"show uptime\")]\n\n    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n        minimum: conint(ge=0)  # type: ignore\n\"\"\"Minimum uptime in seconds\"\"\"\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if command_output[\"upTime\"] > self.inputs.minimum:\n            self.result.is_success()\n        else:\n            self.result.is_failure(f\"Device uptime is {command_output['upTime']} seconds\")\n
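
    A minimal standalone sketch of the comparison, using hypothetical values:

    # Sketch only: VerifyUptime comparison with hypothetical values\nminimum = 86400  # expected minimum uptime in seconds\nup_time = 123456.78  # 'upTime' field from 'show uptime'\nprint('success' if up_time > minimum else f'Device uptime is {up_time} seconds')\n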
    "},{"location":"api/tests.system/#anta.tests.system.VerifyUptime.Input","title":"Input","text":"

    Bases: Input

    Source code in anta/tests/system.py
    class Input(AntaTest.Input):  # pylint: disable=missing-class-docstring\n    minimum: conint(ge=0)  # type: ignore\n\"\"\"Minimum uptime in seconds\"\"\"\n
    "},{"location":"api/tests.system/#anta.tests.system.VerifyUptime.Input.minimum","title":"minimum instance-attribute","text":"
    minimum: conint(ge=0)\n

    Minimum uptime in seconds

    "},{"location":"api/tests.vxlan/","title":"VXLAN","text":""},{"location":"api/tests.vxlan/#anta-catalog-for-vxlan-tests","title":"ANTA catalog for VXLAN tests","text":"

    Test functions related to VXLAN

    "},{"location":"api/tests.vxlan/#anta.tests.vxlan.VerifyVxlan1Interface","title":"VerifyVxlan1Interface","text":"

    Bases: AntaTest

    This test verifies if the Vxlan1 interface is configured and \u2018up/up\u2019.

    Warning

    The name of this test has been updated from \u2018VerifyVxlan\u2019 for better representation.

    Expected Results
    • success: The test will pass if the Vxlan1 interface is configured with line protocol status and interface status \u2018up\u2019.
    • failure: The test will fail if the Vxlan1 interface line protocol status or interface status are not \u2018up\u2019.
    • skipped: The test will be skipped if the Vxlan1 interface is not configured.
    Source code in anta/tests/vxlan.py
    class VerifyVxlan1Interface(AntaTest):\n\"\"\"\n    This test verifies if the Vxlan1 interface is configured and 'up/up'.\n\n    !!! warning\n        The name of this test has been updated from 'VerifyVxlan' for better representation.\n\n    Expected Results:\n      * success: The test will pass if the Vxlan1 interface is configured with line protocol status and interface status 'up'.\n      * failure: The test will fail if the Vxlan1 interface line protocol status or interface status are not 'up'.\n      * skipped: The test will be skipped if the Vxlan1 interface is not configured.\n    \"\"\"\n\n    name = \"VerifyVxlan1Interface\"\n    description = \"This test verifies if the Vxlan1 interface is configured and 'up/up'.\"\n    categories = [\"vxlan\"]\n    commands = [AntaCommand(command=\"show interfaces description\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if \"Vxlan1\" not in command_output[\"interfaceDescriptions\"]:\n            self.result.is_skipped(\"Vxlan1 interface is not configured\")\n        elif (\n            command_output[\"interfaceDescriptions\"][\"Vxlan1\"][\"lineProtocolStatus\"] == \"up\"\n            and command_output[\"interfaceDescriptions\"][\"Vxlan1\"][\"interfaceStatus\"] == \"up\"\n        ):\n            self.result.is_success()\n        else:\n            self.result.is_failure(\n                f\"Vxlan1 interface is {command_output['interfaceDescriptions']['Vxlan1']['lineProtocolStatus']}\"\n                f\"/{command_output['interfaceDescriptions']['Vxlan1']['interfaceStatus']}\"\n            )\n
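
    A standalone sketch (not part of ANTA) of the same decision tree against a hypothetical 'show interfaces description' payload:

    # Sketch only: VerifyVxlan1Interface check replayed on a hypothetical payload\ndescriptions = {'Vxlan1': {'lineProtocolStatus': 'up', 'interfaceStatus': 'up'}}\nif 'Vxlan1' not in descriptions:\n    print('skipped: Vxlan1 interface is not configured')\nelse:\n    state = descriptions['Vxlan1']['lineProtocolStatus'] + '/' + descriptions['Vxlan1']['interfaceStatus']\n    print('success' if state == 'up/up' else 'Vxlan1 interface is ' + state)\n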
    "},{"location":"api/tests.vxlan/#anta.tests.vxlan.VerifyVxlanConfigSanity","title":"VerifyVxlanConfigSanity","text":"

    Bases: AntaTest

    This test verifies that no issues are detected with the VXLAN configuration.

    Expected Results
    • success: The test will pass if no issues are detected with the VXLAN configuration.
    • failure: The test will fail if issues are detected with the VXLAN configuration.
    • skipped: The test will be skipped if VXLAN is not configured on the device.
    Source code in anta/tests/vxlan.py
    class VerifyVxlanConfigSanity(AntaTest):\n\"\"\"\n    This test verifies that no issues are detected with the VXLAN configuration.\n\n    Expected Results:\n      * success: The test will pass if no issues are detected with the VXLAN configuration.\n      * failure: The test will fail if issues are detected with the VXLAN configuration.\n      * skipped: The test will be skipped if VXLAN is not configured on the device.\n    \"\"\"\n\n    name = \"VerifyVxlanConfigSanity\"\n    description = \"This test verifies that no issues are detected with the VXLAN configuration.\"\n    categories = [\"vxlan\"]\n    commands = [AntaCommand(command=\"show vxlan config-sanity\", ofmt=\"json\")]\n\n    @AntaTest.anta_test\n    def test(self) -> None:\n        command_output = self.instance_commands[0].json_output\n        if \"categories\" not in command_output or len(command_output[\"categories\"]) == 0:\n            self.result.is_skipped(\"VXLAN is not configured\")\n            return\n        failed_categories = {\n            category: content\n            for category, content in command_output[\"categories\"].items()\n            if category in [\"localVtep\", \"mlag\", \"pd\"] and content[\"allCheckPass\"] is not True\n        }\n        if len(failed_categories) > 0:\n            self.result.is_failure(f\"VXLAN config sanity check is not passing: {failed_categories}\")\n        else:\n            self.result.is_success()\n
    "},{"location":"api/types/","title":"Input Types","text":""},{"location":"api/types/#anta.custom_types","title":"anta.custom_types","text":"

    Module that provides predefined types for AntaTest.Input instances

    "},{"location":"api/types/#anta.custom_types.AAAAuthMethod","title":"AAAAuthMethod module-attribute","text":"
    AAAAuthMethod = Annotated[str, AfterValidator(aaa_group_prefix)]\n
    "},{"location":"api/types/#anta.custom_types.Afi","title":"Afi module-attribute","text":"
    Afi = Literal['ipv4', 'ipv6', 'vpn-ipv4', 'vpn-ipv6', 'evpn', 'rt-membership']\n
    "},{"location":"api/types/#anta.custom_types.Interface","title":"Interface module-attribute","text":"
    Interface = Annotated[str, Field(pattern='^(Ethernet|Fabric|Loopback|Management|Port-Channel|Tunnel|Vlan|Vxlan)[0-9]+(\\\\/[0-9]+)*$')]\n
    "},{"location":"api/types/#anta.custom_types.Safi","title":"Safi module-attribute","text":"
    Safi = Literal['unicast', 'multicast', 'labeled-unicast']\n
    "},{"location":"api/types/#anta.custom_types.TestStatus","title":"TestStatus module-attribute","text":"
    TestStatus = Literal['unset', 'success', 'failure', 'error', 'skipped']\n
    "},{"location":"api/types/#anta.custom_types.Vlan","title":"Vlan module-attribute","text":"
    Vlan = Annotated[int, Field(ge=0, le=4094)]\n
    "},{"location":"api/types/#anta.custom_types.aaa_group_prefix","title":"aaa_group_prefix","text":"
    aaa_group_prefix(v: str) -> str\n

    Prefix the AAA method with \u2018group\u2019 if it is known

    Source code in anta/custom_types.py
    def aaa_group_prefix(v: str) -> str:\n\"\"\"Prefix the AAA method with 'group' if it is known\"\"\"\n    built_in_methods = [\"local\", \"none\", \"logging\"]\n    return f\"group {v}\" if v not in built_in_methods and not v.startswith(\"group \") else v\n
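
    As a quick illustration (not from the ANTA documentation), these aliases can be exercised directly with pydantic's TypeAdapter, assuming ANTA and its pydantic v2 dependency are installed; out-of-range or mis-formatted values raise a ValidationError.

    # Sketch only: exercising the custom types with pydantic v2 (assumes ANTA is installed)\nfrom pydantic import TypeAdapter, ValidationError\n\nfrom anta.custom_types import Interface, Vlan, aaa_group_prefix\n\nprint(TypeAdapter(Vlan).validate_python(10))  # 10 is within 0-4094\nprint(TypeAdapter(Interface).validate_python('Ethernet1/1'))  # matches the interface pattern\nprint(aaa_group_prefix('radius'))  # -> 'group radius'\ntry:\n    TypeAdapter(Vlan).validate_python(5000)  # outside 0-4094\nexcept ValidationError as exc:\n    print(exc.errors()[0]['msg'])\n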
    "},{"location":"cli/debug/","title":"Helpers","text":""},{"location":"cli/debug/#anta-debug-commands","title":"ANTA debug commands","text":"

    The ANTA CLI includes a set of debugging tools, making it easier to build and test ANTA content. This functionality is accessed via the debug subcommand and offers the following options:

    • Executing a command on a device from your inventory and retrieving the result.
    • Running a templated command on a device from your inventory and retrieving the result.

    These tools are especially helpful when building tests, as they give direct visibility into the output received from the eAPI. They also make it easy to extract output content for use in unit tests, as described in our contribution guide.

    Warning

    The debug tools require a device from your inventory. Thus, you MUST use a valid ANTA Inventory.

    "},{"location":"cli/debug/#executing-an-eos-command","title":"Executing an EOS command","text":"

    You can use the run-cmd entrypoint to run a command; it supports the following options:

    "},{"location":"cli/debug/#command-overview","title":"Command overview","text":"
    $ anta debug run-cmd --help\nUsage: anta debug run-cmd [OPTIONS]\n\nRun arbitrary command to an ANTA device\n\nOptions:\n  -c, --command TEXT        Command to run  [required]\n--ofmt [json|text]        EOS eAPI format to use. can be text or json\n  -v, --version [1|latest]  EOS eAPI version\n  -r, --revision INTEGER    eAPI command revision\n  -d, --device TEXT         Device from inventory to use  [required]\n--help                    Show this message and exit.\n
    "},{"location":"cli/debug/#example","title":"Example","text":"

    This example illustrates how to run the show interfaces description command with JSON output (the default format):

    anta debug run-cmd --command \"show interfaces description\" --device DC1-SPINE1\nRun command show interfaces description on DC1-SPINE1\n{\n'interfaceDescriptions': {\n'Ethernet1': {'lineProtocolStatus': 'up', 'description': 'P2P_LINK_TO_DC1-LEAF1A_Ethernet1', 'interfaceStatus': 'up'},\n        'Ethernet2': {'lineProtocolStatus': 'up', 'description': 'P2P_LINK_TO_DC1-LEAF1B_Ethernet1', 'interfaceStatus': 'up'},\n        'Ethernet3': {'lineProtocolStatus': 'up', 'description': 'P2P_LINK_TO_DC1-BL1_Ethernet1', 'interfaceStatus': 'up'},\n        'Ethernet4': {'lineProtocolStatus': 'up', 'description': 'P2P_LINK_TO_DC1-BL2_Ethernet1', 'interfaceStatus': 'up'},\n        'Loopback0': {'lineProtocolStatus': 'up', 'description': 'EVPN_Overlay_Peering', 'interfaceStatus': 'up'},\n        'Management0': {'lineProtocolStatus': 'up', 'description': 'oob_management', 'interfaceStatus': 'up'}\n}\n}\n
    "},{"location":"cli/debug/#executing-an-eos-command-using-templates","title":"Executing an EOS command using templates","text":"

    The run-template entrypoint allows the user to provide an f-string templated command. It is followed by a list of arguments (key-value pairs) that build a dictionary used as template parameters.

    "},{"location":"cli/debug/#command-overview_1","title":"Command overview","text":"
    $ anta debug run-template --help\nUsage: anta debug run-template [OPTIONS] PARAMS...\n\n  Run arbitrary templated command to an ANTA device.\n\n  Takes a list of arguments (keys followed by a value) to build a dictionary\n  used as template parameters. Example:\n\n  anta debug run-template -d leaf1a -t 'show vlan {vlan_id}' vlan_id 1\n\nOptions:\n  -t, --template TEXT       Command template to run. E.g. 'show vlan\n                            {vlan_id}'  [required]\n--ofmt [json|text]        EOS eAPI format to use. can be text or json\n  -v, --version [1|latest]  EOS eAPI version\n  -r, --revision INTEGER    eAPI command revision\n  -d, --device TEXT         Device from inventory to use  [required]\n--help                    Show this message and exit.\n
    "},{"location":"cli/debug/#example_1","title":"Example","text":"

    This example uses the show vlan {vlan_id} command in JSON format:

    anta debug run-template --template \"show vlan {vlan_id}\" vlan_id 10 --device DC1-LEAF1A\nRun templated command 'show vlan {vlan_id}' with {'vlan_id': '10'} on DC1-LEAF1A\n{\n'vlans': {\n'10': {\n'name': 'VRFPROD_VLAN10',\n            'dynamic': False,\n            'status': 'active',\n            'interfaces': {\n'Cpu': {'privatePromoted': False, 'blocked': None},\n                'Port-Channel11': {'privatePromoted': False, 'blocked': None},\n                'Vxlan1': {'privatePromoted': False, 'blocked': None}\n}\n}\n},\n    'sourceDetail': ''\n}\n

    Warning

    If the same key is provided multiple times, only the last value is kept in the template parameters.

    "},{"location":"cli/debug/#example-of-multiple-arguments","title":"Example of multiple arguments","text":"
    anta --log DEBUG debug run-template --template \"ping {dst} source {src}\" dst \"8.8.8.8\" src Loopback0 --device DC1-SPINE1 \u00a0 \u00a0\n> {'dst': '8.8.8.8', 'src': 'Loopback0'}\n\nanta --log DEBUG debug run-template --template \"ping {dst} source {src}\" dst \"8.8.8.8\" src Loopback0 dst \"1.1.1.1\" src Loopback1 --device DC1-SPINE1 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0\n> {'dst': '1.1.1.1', 'src': 'Loopback1'}\n# Notice how `src` and `dst` keep only the latest value\n
    "},{"location":"cli/exec/","title":"Execute commands","text":""},{"location":"cli/exec/#executing-commands-on-devices","title":"Executing Commands on Devices","text":"

    The ANTA CLI provides a set of entrypoints to facilitate remote command execution on EOS devices.

    "},{"location":"cli/exec/#exec-command-overview","title":"EXEC Command overview","text":"
    anta exec --help\nUsage: anta exec [OPTIONS] COMMAND [ARGS]...\n\n  Execute commands to inventory devices\n\nOptions:\n  --help  Show this message and exit.\n\nCommands:\n  clear-counters        Clear counter statistics on EOS devices\n  collect-tech-support  Collect scheduled tech-support from EOS devices\n  snapshot              Collect commands output from devices in inventory\n
    "},{"location":"cli/exec/#clear-interfaces-counters","title":"Clear interfaces counters","text":"

    This command clears interface counters on EOS devices specified in your inventory.

    "},{"location":"cli/exec/#command-overview","title":"Command overview","text":"
    anta exec clear-counters --help\nUsage: anta exec clear-counters [OPTIONS]\n\nClear counter statistics on EOS devices\n\nOptions:\n  -t, --tags TEXT  List of tags using comma as separator: tag1,tag2,tag3\n  --help           Show this message and exit.\n
    "},{"location":"cli/exec/#example","title":"Example","text":"
    anta exec clear-counters --tags SPINE\n[20:19:13] INFO     Connecting to devices...                                                                                                                         utils.py:43\n           INFO     Clearing counters on remote devices...                                                                                                           utils.py:46\n           INFO     Cleared counters on DC1-SPINE2 (cEOSLab)                                                                                                         utils.py:41\n           INFO     Cleared counters on DC2-SPINE1 (cEOSLab)                                                                                                         utils.py:41\n           INFO     Cleared counters on DC1-SPINE1 (cEOSLab)                                                                                                         utils.py:41\n           INFO     Cleared counters on DC2-SPINE2 (cEOSLab)\n
    "},{"location":"cli/exec/#collect-a-set-of-commands","title":"Collect a set of commands","text":"

    This command collects the output of all the commands specified in a commands-list file. Each command can be collected in either json or text format.

    "},{"location":"cli/exec/#command-overview_1","title":"Command overview","text":"
    anta exec snapshot --help\nUsage: anta exec snapshot [OPTIONS]\n\nCollect commands output from devices in inventory\n\nOptions:\n  -t, --tags TEXT           List of tags using comma as separator:\n                            tag1,tag2,tag3\n  -c, --commands-list FILE  File with list of commands to collect  [env var:\n                            ANTA_EXEC_SNAPSHOT_COMMANDS_LIST; required]\n-o, --output DIRECTORY    Directory to save commands output. Will have a\n                            suffix with the format _YEAR-MONTH-DAY_HOUR-\n                            MINUTES-SECONDS'  [env var:\n                            ANTA_EXEC_SNAPSHOT_OUTPUT; default: anta_snapshot]\n--help                    Show this message and exit.\n

    The commands-list file should follow this structure:

    ---\njson_format:\n- show version\ntext_format:\n- show bfd peers\n
    "},{"location":"cli/exec/#example_1","title":"Example","text":"
    anta exec snapshot --tags SPINE --commands-list ./commands.yaml --output ./\n[20:25:15] INFO     Connecting to devices...                                                                                                                         utils.py:78\n           INFO     Collecting commands from remote devices                                                                                                          utils.py:81\n           INFO     Collected command 'show version' from device DC2-SPINE1 (cEOSLab)                                                                                utils.py:76\n           INFO     Collected command 'show version' from device DC2-SPINE2 (cEOSLab)                                                                                utils.py:76\n           INFO     Collected command 'show version' from device DC1-SPINE1 (cEOSLab)                                                                                utils.py:76\n           INFO     Collected command 'show version' from device DC1-SPINE2 (cEOSLab)                                                                                utils.py:76\n[20:25:16] INFO     Collected command 'show bfd peers' from device DC2-SPINE2 (cEOSLab)                                                                              utils.py:76\n           INFO     Collected command 'show bfd peers' from device DC2-SPINE1 (cEOSLab)                                                                              utils.py:76\n           INFO     Collected command 'show bfd peers' from device DC1-SPINE1 (cEOSLab)                                                                              utils.py:76\n           INFO     Collected command 'show bfd peers' from device DC1-SPINE2 (cEOSLab)\n

    The results of the executed commands will be stored in the output directory specified during command execution:

    tree _2023-07-14_20_25_15\n_2023-07-14_20_25_15\n\u251c\u2500\u2500 DC1-SPINE1\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 json\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 show version.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 text\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 show bfd peers.log\n\u251c\u2500\u2500 DC1-SPINE2\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 json\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 show version.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 text\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 show bfd peers.log\n\u251c\u2500\u2500 DC2-SPINE1\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 json\n\u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 show version.json\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 text\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 show bfd peers.log\n\u2514\u2500\u2500 DC2-SPINE2\n    \u251c\u2500\u2500 json\n    \u2502\u00a0\u00a0 \u2514\u2500\u2500 show version.json\n    \u2514\u2500\u2500 text\n        \u2514\u2500\u2500 show bfd peers.log\n\n12 directories, 8 files\n
    "},{"location":"cli/exec/#get-scheduled-tech-support","title":"Get Scheduled tech-support","text":"

    EOS offers a feature that automatically creates a tech-support archive every hour by default. These archives are stored under /mnt/flash/schedule/tech-support.

    leaf1#show schedule summary\nMaximum concurrent jobs  1\nPrepend host name to logfile: Yes\nName                 At Time       Last        Interval       Timeout        Max        Max     Logfile Location                  Status\n                                   Time         (mins)        (mins)         Log        Logs\n                                                                            Files       Size\n----------------- ------------- ----------- -------------- ------------- ----------- ---------- --------------------------------- ------\ntech-support           now         08:37          60            30           100         -      flash:schedule/tech-support/      Success\n\n\nleaf1#bash ls /mnt/flash/schedule/tech-support\nleaf1_tech-support_2023-03-09.1337.log.gz  leaf1_tech-support_2023-03-10.0837.log.gz  leaf1_tech-support_2023-03-11.0337.log.gz\n

    For Network Readiness For Use (NRFU) testing, and to keep a comprehensive record of the system state before going live, ANTA provides a command to retrieve these files efficiently.

    "},{"location":"cli/exec/#command-overview_2","title":"Command overview","text":"
    anta exec collect-tech-support --help\nUsage: anta exec collect-tech-support [OPTIONS]\n\nCollect scheduled tech-support from EOS devices\n\nOptions:\n  -o, --output PATH              Path for tests catalog  [default: ./tech-\n                                 support]\n--latest INTEGER               Number of scheduled show-tech to retrieve\n  --configure        Ensure devices have 'aaa authorization exec default\n                     local' configured (required for SCP on EOS). THIS WILL\n                     CHANGE THE CONFIGURATION OF YOUR NETWORK.\n  -t, --tags TEXT                List of tags using comma as separator:\n                                 tag1,tag2,tag3\n  --help                         Show this message and exit.\n

    When executed, this command downloads the tech-support files into a device-specific subfolder within the designated output folder. You can specify this folder with the --output option.

    ANTA uses SCP to download files from devices and does not trust unknown SSH hosts by default. Add the SSH public keys of your devices to your known_hosts file, or use the anta --insecure option to skip SSH host key validation.
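    As a quick sketch, the known_hosts file can be pre-populated with the standard ssh-keyscan utility before collecting the files (the IP addresses below are illustrative):

    ssh-keyscan -H 172.20.20.101 172.20.20.102 >> ~/.ssh/known_hosts   # append hashed host keys for two inventory devices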

    The configuration aaa authorization exec default must be present on devices to be able to use SCP. ANTA can automatically configure aaa authorization exec default local using the anta exec collect-tech-support --configure option. If you require specific AAA configuration for aaa authorization exec default, like aaa authorization exec default none or aaa authorization exec default group tacacs+, you will need to configure it manually.
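    If you prefer to apply the configuration manually instead of using --configure, a minimal EOS configuration sketch looks like this (adapt it to your own AAA policy):

    configure
    aaa authorization exec default local
    end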

    The --latest option allows retrieval of a specific number of the most recent tech-support files.
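    For instance, to retrieve only the two most recent tech-support archives from each device, a command along these lines could be used:

    anta exec collect-tech-support --latest 2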

    Warning

    By default, all the tech-support files present on the devices are retrieved.

    "},{"location":"cli/exec/#example_2","title":"Example","text":"
    anta --insecure exec collect-tech-support\n[15:27:19] INFO     Connecting to devices...\nINFO     Copying '/mnt/flash/schedule/tech-support/spine1_tech-support_2023-06-09.1315.log.gz' from device spine1 to 'tech-support/spine1' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/leaf3_tech-support_2023-06-09.1315.log.gz' from device leaf3 to 'tech-support/leaf3' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/leaf1_tech-support_2023-06-09.1315.log.gz' from device leaf1 to 'tech-support/leaf1' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/leaf2_tech-support_2023-06-09.1315.log.gz' from device leaf2 to 'tech-support/leaf2' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/spine2_tech-support_2023-06-09.1315.log.gz' from device spine2 to 'tech-support/spine2' locally\nINFO     Copying '/mnt/flash/schedule/tech-support/leaf4_tech-support_2023-06-09.1315.log.gz' from device leaf4 to 'tech-support/leaf4' locally\nINFO     Collected 1 scheduled tech-support from leaf2\nINFO     Collected 1 scheduled tech-support from spine2\nINFO     Collected 1 scheduled tech-support from leaf3\nINFO     Collected 1 scheduled tech-support from spine1\nINFO     Collected 1 scheduled tech-support from leaf1\nINFO     Collected 1 scheduled tech-support from leaf4\n

    The output folder structure is as follows:

    tree tech-support/\ntech-support/\n\u251c\u2500\u2500 leaf1\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 leaf1_tech-support_2023-06-09.1315.log.gz\n\u251c\u2500\u2500 leaf2\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 leaf2_tech-support_2023-06-09.1315.log.gz\n\u251c\u2500\u2500 leaf3\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 leaf3_tech-support_2023-06-09.1315.log.gz\n\u251c\u2500\u2500 leaf4\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 leaf4_tech-support_2023-06-09.1315.log.gz\n\u251c\u2500\u2500 spine1\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 spine1_tech-support_2023-06-09.1315.log.gz\n\u2514\u2500\u2500 spine2\n    \u2514\u2500\u2500 spine2_tech-support_2023-06-09.1315.log.gz\n\n6 directories, 6 files\n

    Each device has its own subdirectory containing the collected tech-support files.

    "},{"location":"cli/get-inventory-information/","title":"Get Inventory Information","text":""},{"location":"cli/get-inventory-information/#retrieving-inventory-information","title":"Retrieving Inventory Information","text":"

    The ANTA CLI offers multiple entrypoints to access data from your local inventory.

    "},{"location":"cli/get-inventory-information/#inventory-used-of-examples","title":"Inventory used of examples","text":"

    Let\u2019s consider the following inventory:

    ---\nanta_inventory:\nhosts:\n- host: 172.20.20.101\nname: DC1-SPINE1\ntags: [\"SPINE\", \"DC1\"]\n\n- host: 172.20.20.102\nname: DC1-SPINE2\ntags: [\"SPINE\", \"DC1\"]\n\n- host: 172.20.20.111\nname: DC1-LEAF1A\ntags: [\"LEAF\", \"DC1\"]\n\n- host: 172.20.20.112\nname: DC1-LEAF1B\ntags: [\"LEAF\", \"DC1\"]\n\n- host: 172.20.20.121\nname: DC1-BL1\ntags: [\"BL\", \"DC1\"]\n\n- host: 172.20.20.122\nname: DC1-BL2\ntags: [\"BL\", \"DC1\"]\n\n- host: 172.20.20.201\nname: DC2-SPINE1\ntags: [\"SPINE\", \"DC2\"]\n\n- host: 172.20.20.202\nname: DC2-SPINE2\ntags: [\"SPINE\", \"DC2\"]\n\n- host: 172.20.20.211\nname: DC2-LEAF1A\ntags: [\"LEAF\", \"DC2\"]\n\n- host: 172.20.20.212\nname: DC2-LEAF1B\ntags: [\"LEAF\", \"DC2\"]\n\n- host: 172.20.20.221\nname: DC2-BL1\ntags: [\"BL\", \"DC2\"]\n\n- host: 172.20.20.222\nname: DC2-BL2\ntags: [\"BL\", \"DC2\"]\n
    "},{"location":"cli/get-inventory-information/#obtaining-all-configured-tags","title":"Obtaining all configured tags","text":"

    As most ANTA commands accommodate tag filtering, this particular command is useful for enumerating all the tags configured in the inventory. Running anta get tags returns the complete list.

    "},{"location":"cli/get-inventory-information/#command-overview","title":"Command overview","text":"
    anta get tags --help\nUsage: anta get tags [OPTIONS]\n\nGet list of configured tags in user inventory.\n\nOptions:\n  --help  Show this message and exit.\n
    "},{"location":"cli/get-inventory-information/#example","title":"Example","text":"

    To get the list of all configured tags in the inventory, run the following command:

    anta get tags\nTags found:\n[\n\"BL\",\n  \"DC1\",\n  \"DC2\",\n  \"LEAF\",\n  \"SPINE\",\n  \"all\"\n]\n\n* note that tag all has been added by anta\n

    Note

    Even if you have not explicitly configured the all tag in the inventory, it is automatically added. This default tag allows commands to be executed on all devices in the inventory when no tag is specified.

    "},{"location":"cli/get-inventory-information/#list-devices-in-inventory","title":"List devices in inventory","text":"

    This command lists all devices available in the inventory. Using the --tags option, you can filter this list to only include devices with specific tags. The --connected option allows you to display only the devices where a connection has been established.

    "},{"location":"cli/get-inventory-information/#command-overview_1","title":"Command overview","text":"
    anta get inventory --help\nUsage: anta get inventory [OPTIONS]\n\nShow inventory loaded in ANTA.\n\nOptions:\n  -t, --tags TEXT                List of tags using comma as separator:\n                                 tag1,tag2,tag3\n  --connected / --not-connected  Display inventory after connection has been\n                                 created\n  --help                         Show this message and exit.\n

    Tip

    In its default mode, anta get inventory provides only information that doesn\u2019t rely on a device connection. If you are interested in obtaining connection-dependent details, like the hardware model, please use the --connected option.

    "},{"location":"cli/get-inventory-information/#example_1","title":"Example","text":"

    To retrieve a comprehensive list of all devices along with their details, execute the following command. It will provide all the data loaded into the ANTA inventory from your inventory file.

    anta get inventory --tags SPINE\nCurrent inventory content is:\n{\n'DC1-SPINE1': AsyncEOSDevice(\nname='DC1-SPINE1',\n        tags=['SPINE', 'DC1', 'all'],\n        hw_model=None,\n        is_online=False,\n        established=False,\n        host='172.20.20.101',\n        eapi_port=443,\n        username='arista',\n        password='arista',\n        enable=True,\n        enable_password='arista',\n        insecure=False\n    ),\n    'DC1-SPINE2': AsyncEOSDevice(\nname='DC1-SPINE2',\n        tags=['SPINE', 'DC1', 'all'],\n        hw_model=None,\n        is_online=False,\n        established=False,\n        host='172.20.20.102',\n        eapi_port=443,\n        username='arista',\n        password='arista',\n        enable=True,\n        enable_password='arista',\n        insecure=False\n    ),\n    'DC2-SPINE1': AsyncEOSDevice(\nname='DC2-SPINE1',\n        tags=['SPINE', 'DC2', 'all'],\n        hw_model=None,\n        is_online=False,\n        established=False,\n        host='172.20.20.201',\n        eapi_port=443,\n        username='arista',\n        password='arista',\n        enable=True,\n        enable_password='arista',\n        insecure=False\n    ),\n    'DC2-SPINE2': AsyncEOSDevice(\nname='DC2-SPINE2',\n        tags=['SPINE', 'DC2', 'all'],\n        hw_model=None,\n        is_online=False,\n        established=False,\n        host='172.20.20.202',\n        eapi_port=443,\n        username='arista',\n        password='arista',\n        enable=True,\n        enable_password='arista',\n        insecure=False\n    )\n}\n
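    Note that hw_model is None in the output above because no connection was attempted. As a sketch, the connection-dependent details mentioned in the tip could be displayed by adding the --connected option (this assumes valid credentials and reachable devices):

    anta get inventory --tags SPINE --connected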
    "},{"location":"cli/inv-from-ansible/","title":"Inventory from Ansible","text":""},{"location":"cli/inv-from-ansible/#create-an-inventory-from-ansible-inventory","title":"Create an Inventory from Ansible inventory","text":"

    In large setups, it might be beneficial to construct your inventory based on your Ansible inventory. The from-ansible entrypoint of the get command enables the user to create an ANTA inventory from Ansible.

    "},{"location":"cli/inv-from-ansible/#command-overview","title":"Command overview","text":"
    anta get from-ansible --help\nUsage: anta get from-ansible [OPTIONS]\n\nBuild ANTA inventory from an ansible inventory YAML file\n\nOptions:\n  -g, --ansible-group TEXT        Ansible group to filter\n  -i, --ansible-inventory FILENAME\n                                  Path to your ansible inventory file to read\n-o, --output FILENAME           Path to save inventory file\n  -d, --inventory-directory PATH  Directory to save inventory file\n  --help                          Show this message and exit.\n
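    As an illustrative invocation (the file names and group below are placeholders), the command could be run as follows:

    anta get from-ansible --ansible-inventory ansible-inventory.yml --ansible-group endpoints --output anta-inventory.yml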

    The output is an ANTA inventory built from the hosts found in the Ansible inventory:

    anta_inventory:\nhosts:\n- host: 10.73.252.41\nname: srv-pod01\n- host: 10.73.252.42\nname: srv-pod02\n- host: 10.73.252.43\nname: srv-pod03\n

    Warning

    The current implementation only considers devices directly attached to a specific Ansible group and does not support inheritance when using the --ansible-group option.

    The host value comes from the ansible_host key in your inventory, while name is the name you defined for your host. Below is the Ansible inventory example used to generate the previous ANTA inventory:

    ---\ntooling:\nchildren:\nendpoints:\nhosts:\nsrv-pod01:\nansible_httpapi_port: 9023\nansible_port: 9023\nansible_host: 10.73.252.41\ntype: endpoint\nsrv-pod02:\nansible_httpapi_port: 9024\nansible_port: 9024\nansible_host: 10.73.252.42\ntype: endpoint\nsrv-pod03:\nansible_httpapi_port: 9025\nansible_port: 9025\nansible_host: 10.73.252.43\ntype: endpoint\n
    "},{"location":"cli/inv-from-cvp/","title":"Inventory from CVP","text":""},{"location":"cli/inv-from-cvp/#create-an-inventory-from-cloudvision","title":"Create an Inventory from CloudVision","text":"

    In large setups, it might be beneficial to construct your inventory based on CloudVision. The from-cvp entrypoint of the get command enables the user to create an ANTA inventory from CloudVision.

    "},{"location":"cli/inv-from-cvp/#command-overview","title":"Command overview","text":"
    anta get from-cvp --help\nUsage: anta get from-cvp [OPTIONS]\n\nBuild ANTA inventory from Cloudvision\n\nOptions:\n  -ip, --cvp-ip TEXT              CVP IP Address  [required]\n-u, --cvp-username TEXT         CVP Username  [required]\n-p, --cvp-password TEXT         CVP Password / token  [required]\n-c, --cvp-container TEXT        Container where devices are configured\n  -d, --inventory-directory PATH  Path to save inventory file\n  --help                          Show this message and exit.\n

    The output is an inventory where the name of the container is added as a tag for each host:

    anta_inventory:\nhosts:\n- host: 192.168.0.13\nname: leaf2\ntags:\n- pod1\n- host: 192.168.0.15\nname: leaf4\ntags:\n- pod2\n

    Warning

    The current implementation only considers devices directly attached to a specific container when using the --cvp-container option.

    "},{"location":"cli/inv-from-cvp/#creating-an-inventory-from-multiple-containers","title":"Creating an inventory from multiple containers","text":"

    If you need to create an inventory from multiple containers, you can run the command in a bash loop and then manually concatenate the resulting files into a single inventory file:

    $ for container in pod01 pod02 spines; do anta get from-cvp -ip <cvp-ip> -u cvpadmin -p cvpadmin -c $container -d test-inventory; done\n\n[12:25:35] INFO     Getting auth token from cvp.as73.inetsix.net for user tom\n[12:25:36] INFO     Creating inventory folder /home/tom/Projects/arista/network-test-automation/test-inventory\n           WARNING  Using the new api_token parameter. This will override usage of the cvaas_token parameter if both are provided. This is because api_token and cvaas_token parameters\n                    are for the same use case and api_token is more generic\n           INFO     Connected to CVP cvp.as73.inetsix.net\n\n\n[12:25:37] INFO     Getting auth token from cvp.as73.inetsix.net for user tom\n[12:25:38] WARNING  Using the new api_token parameter. This will override usage of the cvaas_token parameter if both are provided. This is because api_token and cvaas_token parameters\n                    are for the same use case and api_token is more generic\n           INFO     Connected to CVP cvp.as73.inetsix.net\n\n\n[12:25:38] INFO     Getting auth token from cvp.as73.inetsix.net for user tom\n[12:25:39] WARNING  Using the new api_token parameter. This will override usage of the cvaas_token parameter if both are provided. This is because api_token and cvaas_token parameters\n                    are for the same use case and api_token is more generic\n           INFO     Connected to CVP cvp.as73.inetsix.net\n\n           INFO     Inventory file has been created in /home/tom/Projects/arista/network-test-automation/test-inventory/inventory-spines.yml\n
    "},{"location":"cli/nrfu/","title":"NRFU","text":""},{"location":"cli/nrfu/#execute-network-readiness-for-use-nrfu-testing","title":"Execute Network Readiness For Use (NRFU) Testing","text":"

    ANTA provides a set of commands for performing NRFU tests on devices. These commands are under the anta nrfu namespace and offer multiple output format options:

    • Text view
    • Table view
    • JSON view
    • Custom template view
    "},{"location":"cli/nrfu/#nrfu-command-overview","title":"NRFU Command overview","text":"
    anta nrfu --help\nUsage: anta nrfu [OPTIONS] COMMAND [ARGS]...\n\n  Run NRFU against inventory devices\n\nOptions:\n  -c, --catalog FILE  Path to the tests catalog YAML file  [env var:\n                      ANTA_NRFU_CATALOG; required]\n--help              Show this message and exit.\n\nCommands:\n  json        ANTA command to check network state with JSON result\n  table       ANTA command to check network states with table result\n  text        ANTA command to check network states with text result\n  tpl-report  ANTA command to check network state with templated report\n

    All commands under the anta nrfu namespace require a test catalog YAML file, specified with the --catalog option.
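    As a minimal sketch of such a catalog (the module path, test name and version values are purely illustrative), the file maps test modules to lists of tests and their inputs:

    ---
    anta.tests.software:
      - VerifyEOSVersion:
          versions:
            - 4.25.4M
            - 4.26.1F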

    "},{"location":"cli/nrfu/#performing-nrfu-with-text-rendering","title":"Performing NRFU with text rendering","text":"

    The text subcommand provides a straightforward text report for each test executed on all devices in your inventory.

    "},{"location":"cli/nrfu/#command-overview","title":"Command overview","text":"
    anta nrfu text --help\nUsage: anta nrfu text [OPTIONS]\n\nANTA command to check network states with text result\n\nOptions:\n  -t, --tags TEXT    List of tags using comma as separator: tag1,tag2,tag3\n  -s, --search TEXT  Regular expression to search in both name and test\n--skip-error       Hide tests in errors due to connectivity issue\n  --help             Show this message and exit.\n

    The --tags option allows you to target specific devices in your inventory, while the --search option permits filtering based on a regular expression pattern matched against both the hostname and the test name.

    The --skip-error option can be used to exclude tests that failed due to connectivity issues or unsupported commands.

    "},{"location":"cli/nrfu/#example","title":"Example","text":"

    anta nrfu text --tags LEAF --search DC1-LEAF1A\n

    "},{"location":"cli/nrfu/#performing-nrfu-with-table-rendering","title":"Performing NRFU with table rendering","text":"

    The table command under the anta nrfu namespace offers a clear and organized table view of the test results, suitable for filtering. It also has its own set of options for better control over the output.

    "},{"location":"cli/nrfu/#command-overview_1","title":"Command overview","text":"
    anta nrfu table --help\nUsage: anta nrfu table [OPTIONS]\n\nANTA command to check network states with table result\n\nOptions:\n  --tags TEXT               List of tags using comma as separator:\n                            tag1,tag2,tag3\n  -d, --device TEXT         Show a summary for this device\n  -t, --test TEXT           Show a summary for this test\n--group-by [device|test]  Group result by test or host. default none\n  --help                    Show this message and exit.\n

    The --tags option can be used to target specific devices in your inventory.

    The --device and --test options show a summarized view of the test results for a specific host or test case, respectively.

    The --group-by option shows a summarized view of the test results per host or per test.

    "},{"location":"cli/nrfu/#examples","title":"Examples","text":"

    anta nrfu table --tags LEAF\n

    For larger setups, you can also group the results by host or test to get a summarized view:

    anta nrfu table --group-by device\n

    anta nrfu table --group-by test\n

    To get more specific information, it is possible to filter on a single device or a single test:

    anta nrfu table --device spine1\n

    anta nrfu table --test VerifyZeroTouch\n

    "},{"location":"cli/nrfu/#performing-nrfu-with-json-rendering","title":"Performing NRFU with JSON rendering","text":"

    The json command generates JSON output that can subsequently be passed to another tool for reporting purposes.

    "},{"location":"cli/nrfu/#command-overview_2","title":"Command overview","text":"
    anta nrfu json --help\nUsage: anta nrfu json [OPTIONS]\n\nANTA command to check network state with JSON result\n\nOptions:\n  -t, --tags TEXT    List of tags using comma as separator: tag1,tag2,tag3\n  -o, --output FILE  Path to save report as a file  [env var:\n                     ANTA_NRFU_JSON_OUTPUT]\n--help             Show this message and exit.\n

    The --tags option can be used to target specific devices in your inventory.

    The --output option allows you to save the JSON report as a file.

    "},{"location":"cli/nrfu/#example_1","title":"Example","text":"

    anta nrfu json --tags LEAF\n
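    To also save the report to disk, the options shown above can be combined, for example (report.json is an illustrative path):

    anta nrfu json --tags LEAF --output report.json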

    "},{"location":"cli/nrfu/#performing-nrfu-with-custom-reports","title":"Performing NRFU with custom reports","text":"

    ANTA offers a CLI option for creating custom reports. This leverages the Jinja2 template system, allowing you to tailor reports to your specific needs.

    "},{"location":"cli/nrfu/#command-overview_3","title":"Command overview","text":"

    anta nrfu tpl-report --help\nUsage: anta nrfu tpl-report [OPTIONS]\n\nANTA command to check network state with templated report\n\nOptions:\n  -tpl, --template FILE  Path to the template to use for the report  [env var:\n                         ANTA_NRFU_TPL_REPORT_TEMPLATE; required]\n-o, --output FILE      Path to save report as a file  [env var:\n                         ANTA_NRFU_TPL_REPORT_OUTPUT]\n-t, --tags TEXT        List of tags using comma as separator: tag1,tag2,tag3\n  --help                 Show this message and exit.\n
    The --template option is used to specify the Jinja2 template file for generating the custom report.

    The --output option allows you to choose the path where the final report will be saved.

    The --tags option can be used to target specific devices in your inventory.

    "},{"location":"cli/nrfu/#example_2","title":"Example","text":"

    anta nrfu tpl-report --tags LEAF --template ./custom_template.j2\n

    The template ./custom_template.j2 is a simple Jinja2 template:

    {% for d in data %}\n* {{ d.test }} is [green]{{ d.result | upper}}[/green] for {{ d.name }}\n{% endfor %}\n

    The Jinja2 template has access to all TestResult elements and their values, as described in this documentation.

    You can also save the report result to a file using the --output option:

    anta nrfu tpl-report --tags LEAF --template ./custom_template.j2 --output nrfu-tpl-report.txt\n

    The resulting output might look like this:

    cat nrfu-tpl-report.txt\n* VerifyMlagStatus is [green]SUCCESS[/green] for DC1-LEAF1A\n* VerifyMlagInterfaces is [green]SUCCESS[/green] for DC1-LEAF1A\n* VerifyMlagConfigSanity is [green]SUCCESS[/green] for DC1-LEAF1A\n* VerifyMlagReloadDelay is [green]SUCCESS[/green] for DC1-LEAF1A\n
    "},{"location":"cli/overview/","title":"Overview","text":""},{"location":"cli/overview/#overview-of-antas-command-line-interface-cli","title":"Overview of ANTA\u2019s Command-Line Interface (CLI)","text":"

    ANTA provides a powerful Command-Line Interface (CLI) to perform a wide range of operations. This document provides a comprehensive overview of ANTA CLI usage and its commands.

    ANTA can also be used as a Python library, allowing you to build your own tools based on it. Visit this page for more details.

    To start using the ANTA CLI, open your terminal and type anta.

    "},{"location":"cli/overview/#invoking-anta-cli","title":"Invoking ANTA CLI","text":"
    $ anta --help\nUsage: anta [OPTIONS] COMMAND [ARGS]...\n\n  Arista Network Test Automation (ANTA) CLI\n\nOptions:\n  --version                       Show the version and exit.\n  --username TEXT                 Username to connect to EOS  [env var:\n                                  ANTA_USERNAME; required]\n--password TEXT                 Password to connect to EOS that must be\n                                  provided. It can be prompted using '--\n                                  prompt' option.  [env var: ANTA_PASSWORD]\n--enable-password TEXT          Password to access EOS Privileged EXEC mode.\n                                  It can be prompted using '--prompt' option.\n                                  Requires '--enable' option.  [env var:\n                                  ANTA_ENABLE_PASSWORD]\n--enable                        Some commands may require EOS Privileged\n                                  EXEC mode. This option tries to access this\n                                  mode before sending a command to the device.\n                                  [env var: ANTA_ENABLE]\n-P, --prompt                    Prompt for passwords if they are not\n                                  provided.\n  --timeout INTEGER               Global connection timeout  [env var:\n                                  ANTA_TIMEOUT; default: 30]\n--insecure                      Disable SSH Host Key validation  [env var:\n                                  ANTA_INSECURE]\n-i, --inventory FILE            Path to the inventory YAML file  [env var:\n                                  ANTA_INVENTORY; required]\n--log-file FILE                 Send the logs to a file. If logging level is\n                                  DEBUG, only INFO or higher will be sent to\n                                  stdout.  [env var: ANTA_LOG_FILE]\n--log-level, --log [CRITICAL|ERROR|WARNING|INFO|DEBUG]\nANTA logging level  [env var:\n                                  ANTA_LOG_LEVEL; default: INFO]\n--ignore-status                 Always exit with success  [env var:\n                                  ANTA_IGNORE_STATUS]\n--ignore-error                  Only report failures and not errors  [env\n                                  var: ANTA_IGNORE_ERROR]\n--help                          Show this message and exit.\n\nCommands:\n  debug  Debug commands for building ANTA\n  exec   Execute commands to inventory devices\n  get    Get data from/to ANTA\n  nrfu   Run NRFU against inventory devices\n
    "},{"location":"cli/overview/#anta-global-parameters","title":"ANTA Global Parameters","text":"

    Certain parameters are globally required and can either be passed to the ANTA CLI or set as environment variables (ENV VAR).

    To pass the parameters via the CLI:

    anta --username tom --password arista123 --inventory inventory.yml <anta cli>\n

    To set them as ENV VAR:

    export ANTA_USERNAME=tom\nexport ANTA_PASSWORD=arista123\nexport ANTA_INVENTORY=inventory.yml\n

    Then, run the CLI:

    anta <anta cli>\n
    "},{"location":"cli/overview/#anta-exit-codes","title":"ANTA Exit Codes","text":"

    ANTA utilizes different exit codes to indicate the status of the test runs.

    For all subcommands except nrfu, ANTA returns exit code 0, indicating a successful operation.

    For the nrfu command, ANTA uses the following exit codes:

    • Exit code 0 - All tests passed successfully.
    • Exit code 1 - Tests were run, but at least one test returned a failure.
    • Exit code 2 - Tests were run, but at least one test returned an error.
    • Exit code 3 - An internal error occurred while executing tests.

    To ignore the test status, use anta --ignore-status nrfu, and the exit code will always be 0.

    To ignore errors, use anta --ignore-error nrfu, and the exit code will be 0 if all tests succeeded or 1 if any test failed.
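    A small shell sketch illustrates how these exit codes can drive automation, assuming the global parameters are already exported as environment variables:

    anta nrfu table --tags LEAF
    rc=$?
    # 0 = all tests passed, 1 = at least one failure, 2 = at least one error, 3 = internal error
    if [ $rc -ne 0 ]; then
      echo NRFU reported problems, exit code $rc
      exit $rc
    fi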

    "},{"location":"cli/overview/#shell-completion","title":"Shell Completion","text":"

    You can enable shell completion for the ANTA CLI:

    ZSHBASH

    If you use ZSH shell, add the following line in your ~/.zshrc:

    eval \"$(_ANTA_COMPLETE=zsh_source anta)\" > /dev/null\n

    With bash, add the following line in your ~/.bashrc:

    eval \"$(_ANTA_COMPLETE=bash_source anta)\" > /dev/null\n
    "},{"location":"imgs/animated-svg/","title":"Animated svg","text":"

    Repository: https://github.com/marionebl/svg-term-cli Command: cat anta-nrfu.cast | svg-term --height 10 --window --out anta.svg

    "}]} \ No newline at end of file diff --git a/main/sitemap.xml.gz b/main/sitemap.xml.gz index 519b2ad2c..eb7cbee04 100644 Binary files a/main/sitemap.xml.gz and b/main/sitemap.xml.gz differ