Adding upstream version 1.4.0.
Signed-off-by: Daniel Baumann <daniel@debian.org>
parent dc7df702ea
commit 7996c81031
166 changed files with 13787 additions and 11959 deletions
The first change deletes the Arista secret-scanner allow list (the file name is not shown in this extract):

```diff
@@ -1,10 +0,0 @@
-# Arista Secret Scanner allow list
-version: v1.0
-allowed_secrets:
-  - secret_pattern: "https://ansible:ansible@192.168.0.2"
-    category: FALSE_POSITIVE
-    reason: Used as example in documentation
-  - secret_pattern: "https://ansible:ansible@192.168.0.17"
-    category: FALSE_POSITIVE
-    reason: Used as example in documentation
```
.github/release.md (vendored): 16 changes
````diff
@@ -14,11 +14,12 @@ Also, [Github CLI](https://cli.github.com/) can be helpful and is recommended
 
 In a branch specific for this, use the `bumpver` tool.
 It is configured to update:
-* pyproject.toml
-* docs/contribution.md
-* docs/requirements-and-installation.md
+- pyproject.toml
+- docs/contribution.md
+- docs/requirements-and-installation.md
 
 For instance to bump a patch version:
 
 ```
 bumpver update --patch
 ```
````
````diff
@@ -54,35 +55,41 @@ This is to be executed at the top of the repo
 ```bash
 git switch -c rel/vx.x.x
 ```
 
 3. [Optional] Clean dist if required
 4. Build the package locally
 
 ```bash
 python -m build
 ```
 
 5. Check the package with `twine` (replace with your version)
 
 ```bash
 twine check dist/*
 ```
 
 6. Upload the package to test.pypi
 
 ```bash
 twine upload -r testpypi dist/anta-x.x.x.*
 ```
 
 7. Verify the package by installing it in a local venv and checking it installs
    and runs correctly (run the tests)
 
 ```bash
 # In a brand new venv
-pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple --no-cache anta
+pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple --no-cache anta[cli]
 ```
 
 8. Push to anta repository and create a Pull Request
 
 ```bash
 git push origin HEAD
 gh pr create --title 'bump: ANTA vx.x.x'
 ```
 
 9. Merge PR after review and wait for [workflow](https://github.com/aristanetworks/anta/actions/workflows/release.yml) to be executed.
 
 ```bash
@@ -101,3 +108,4 @@ This is to be executed at the top of the repo
 ```bash
 anta --version
 ```
+
````
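The `bumpver update --patch` step above rewrites a `MAJOR.MINOR.PATCH` string in the files listed earlier. The arithmetic it performs amounts to a semver patch bump, sketched here (the `bump_patch` helper is hypothetical, not part of `bumpver` or ANTA):

```python
def bump_patch(version: str) -> str:
    # Split a MAJOR.MINOR.PATCH string and increment the patch component.
    major, minor, patch = (int(part) for part in version.split("."))
    return f"{major}.{minor}.{patch + 1}"

print(bump_patch("1.4.0"))  # 1.4.1
```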
.github/workflows/code-testing.yml (vendored): 28 changes
```diff
@@ -1,5 +1,5 @@
 ---
-name: Linting and Testing Anta
+name: Linting and Testing ANTA
 on:
   push:
     branches:
```
```diff
@@ -59,24 +59,10 @@ jobs:
           pip install .
       - name: install dev requirements
         run: pip install .[dev]
-  # @gmuloc: commenting this out for now
-  #missing-documentation:
-  #  name: "Warning documentation is missing"
-  #  runs-on: ubuntu-latest
-  #  needs: [file-changes]
-  #  if: needs.file-changes.outputs.cli == 'true' && needs.file-changes.outputs.docs == 'false'
-  #  steps:
-  #    - name: Documentation is missing
-  #      uses: GrantBirki/comment@v2.0.10
-  #      with:
-  #        body: |
-  #          Please consider that documentation is missing under `docs/` folder.
-  #          You should update documentation to reflect your change, or maybe not :)
   lint-python:
     name: Check the code style
     runs-on: ubuntu-latest
     needs: file-changes
-    if: needs.file-changes.outputs.code == 'true'
     steps:
       - uses: actions/checkout@v4
       - name: Setup Python
```
```diff
@@ -91,7 +77,6 @@ jobs:
     name: Check typing
     runs-on: ubuntu-latest
     needs: file-changes
-    if: needs.file-changes.outputs.code == 'true'
     steps:
       - uses: actions/checkout@v4
       - name: Setup Python
```
```diff
@@ -119,10 +104,20 @@ jobs:
         run: pip install tox tox-gh-actions
       - name: "Run pytest via tox for ${{ matrix.python }}"
         run: tox
+      - name: Upload coverage from pytest
+        # Coverage only runs as part of 3.11.
+        if: |
+          matrix.python == '3.11'
+        uses: actions/upload-artifact@v4
+        with:
+          name: pytest-coverage
+          include-hidden-files: true
+          path: .coverage.xml
   test-python-windows:
     name: Pytest on 3.12 for windows
     runs-on: windows-2022
     needs: [lint-python, type-python]
+    if: needs.file-changes.outputs.code == 'true'
     env:
       # Required to prevent asyncssh to fail.
       USERNAME: WindowsUser
@@ -154,6 +149,7 @@ jobs:
     name: Benchmark ANTA for Python 3.12
     runs-on: ubuntu-latest
     needs: [test-python]
+    if: needs.file-changes.outputs.code == 'true'
     steps:
       - uses: actions/checkout@v4
       - name: Setup Python
```
.github/workflows/secret-scanner.yml (vendored): 15 changes
```diff
@@ -1,15 +0,0 @@
-# Secret-scanner workflow from Arista Networks.
-on:
-  pull_request:
-    types: [synchronize]
-  push:
-    branches:
-      - main
-name: Secret Scanner (go/secret-scanner)
-jobs:
-  scan_secret:
-    name: Scan incoming changes
-    runs-on: ubuntu-latest
-    steps:
-      - name: Run scanner
-        uses: aristanetworks/secret-scanner-service-public@main
```
.github/workflows/sonar.yml (vendored): 64 changes
```diff
@@ -1,15 +1,9 @@
 ---
 name: Analysis with Sonarlint and publish to SonarCloud
 on:
-  push:
-    branches:
-      - main
-  # Need to do this to be able to have coverage on PR across forks.
-  pull_request_target:
-
-  # TODO this can be made better by running only coverage, it happens that today
-  # in tox gh-actions we have configured 3.11 to run the report side in
-  # pyproject.toml
-
+  workflow_run:
+    workflows: ["Linting and Testing ANTA"]
+    types: [completed]
 jobs:
   sonarcloud:
```
```diff
@@ -19,26 +13,50 @@ jobs:
     steps:
       - uses: actions/checkout@v4
         with:
-          ref: ${{ github.event.pull_request.head.sha }}
+          ref: ${{ github.event.workflow_run.head_sha }}
           fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
-      - name: Setup Python
-        uses: actions/setup-python@v5
-        with:
-          python-version: 3.11
-      - name: Install dependencies
-        run: pip install tox tox-gh-actions
-      - name: "Run pytest via tox for ${{ matrix.python }}"
-        run: tox
-      - name: SonarCloud Scan
-        uses: SonarSource/sonarqube-scan-action@v5.0.0
+      - name: Download coverage from unit tests
+        continue-on-error: true
+        uses: actions/download-artifact@v4
+        with:
+          name: pytest-coverage
+          github-token: ${{ secrets.GITHUB_TOKEN }}
+          run-id: ${{ github.event.workflow_run.id }}
+          merge-multiple: true
+      - name: Get PR context
+        # Source: https://github.com/orgs/community/discussions/25220#discussioncomment-11316244
+        id: pr-context
+        if: github.event.workflow_run.event == 'pull_request'
+        env:
+          # Token required for GH CLI:
+          GH_TOKEN: ${{ github.token }}
+          # Best practice for scripts is to reference via ENV at runtime. Avoid using the expression syntax in the script content directly:
+          PR_TARGET_REPO: ${{ github.repository }}
+          # If the PR is from a fork, prefix it with `<owner-login>:`, otherwise only the PR branch name is relevant:
+          PR_BRANCH: |-
+            ${{
+              (github.event.workflow_run.head_repository.owner.login != github.event.workflow_run.repository.owner.login)
+              && format('{0}:{1}', github.event.workflow_run.head_repository.owner.login, github.event.workflow_run.head_branch)
+              || github.event.workflow_run.head_branch
+            }}
+        # Query the PR number by repo + branch, then assign to step output:
+        run: |
+          gh pr view --repo "${PR_TARGET_REPO}" "${PR_BRANCH}" \
+            --json 'number,baseRefName' --jq '"number=\(.number)\nbase_ref=\(.baseRefName)"' \
+            >> "${GITHUB_OUTPUT}"
+          echo "pr_branch=${PR_BRANCH}" >> "${GITHUB_OUTPUT}"
+
+      - name: SonarQube Scan
+        uses: SonarSource/sonarqube-scan-action@v5.2.0
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
         with:
           # Using ACTION_STEP_DEBUG to trigger verbose when debugging in Github Action
           args: >
-            -Dsonar.scm.revision=${{ github.event.pull_request.head.sha }}
-            -Dsonar.pullrequest.key=${{ github.event.number }}
-            -Dsonar.pullrequest.branch=${{ github.event.pull_request.head.ref }}
-            -Dsonar.pullrequest.base=${{ github.event.pull_request.base.ref }}
+            -Dsonar.scm.revision=${{ github.event.workflow_run.head_sha }}
+            -Dsonar.pullrequest.key=${{ steps.pr-context.outputs.number }}
+            -Dsonar.pullrequest.branch=${{ steps.pr-context.outputs.pr_branch }}
+            -Dsonar.pullrequest.base=${{ steps.pr-context.outputs.base_ref }}
             -Dsonar.verbose=${{ secrets.ACTIONS_STEP_DEBUG }}
```
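The `--jq` expression in the `Get PR context` step above flattens the PR JSON returned by `gh pr view` into `key=value` lines for `$GITHUB_OUTPUT`. The transformation is equivalent to this sketch (the sample values are hypothetical):

```python
import json

# Sample of what `gh pr view --json number,baseRefName` returns (hypothetical values).
pr_json = '{"number": 123, "baseRefName": "main"}'

pr = json.loads(pr_json)
# Mirror the jq template "number=\(.number)\nbase_ref=\(.baseRefName)".
github_output = f"number={pr['number']}\nbase_ref={pr['baseRefName']}"
print(github_output)
```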
Pre-commit hook updates:

```diff
@@ -47,7 +47,7 @@ repos:
         - "<!--| ~| -->"
 
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.10.0
+    rev: v0.11.9
     hooks:
       - id: ruff
         name: Run Ruff linter
@@ -56,7 +56,7 @@ repos:
         name: Run Ruff formatter
 
   - repo: https://github.com/pycqa/pylint
-    rev: "v3.3.5"
+    rev: "v3.3.7"
     hooks:
       - id: pylint
         name: Check code style with pylint
@@ -75,6 +75,7 @@ repos:
           - pytest
           - pytest-codspeed
           - respx
+          - pydantic-settings
 
   - repo: https://github.com/codespell-project/codespell
     rev: v2.4.1
@@ -124,6 +125,7 @@ repos:
         pass_filenames: false
         additional_dependencies:
           - anta[cli]
+          - pydantic-settings
       - id: doc-snippets
         name: Generate doc snippets
         entry: >-
@@ -135,3 +137,4 @@ repos:
         pass_filenames: false
         additional_dependencies:
           - anta[cli]
+          - pydantic-settings
```
Dockerfile label update:

```diff
@@ -38,7 +38,7 @@ LABEL "org.opencontainers.image.title"="anta" \
       "org.opencontainers.artifact.description"="network-test-automation in a Python package and Python scripts to test Arista devices." \
       "org.opencontainers.image.description"="network-test-automation in a Python package and Python scripts to test Arista devices." \
       "org.opencontainers.image.source"="https://github.com/aristanetworks/anta" \
-      "org.opencontainers.image.url"="https://www.anta.ninja" \
+      "org.opencontainers.image.url"="https://anta.arista.com" \
       "org.opencontainers.image.documentation"="https://anta.arista.com" \
       "org.opencontainers.image.licenses"="Apache-2.0" \
       "org.opencontainers.image.vendor"="Arista Networks" \
```
anta/_runner.py (new file, 491 lines added; this extract ends mid-file):

````python
# Copyright (c) 2023-2025 Arista Networks, Inc.
# Use of this source code is governed by the Apache License 2.0
# that can be found in the LICENSE file.
"""ANTA runner classes."""

from __future__ import annotations

import logging
from asyncio import Semaphore, gather
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from inspect import getcoroutinelocals
from typing import TYPE_CHECKING, Any

from pydantic import BaseModel, ConfigDict

from anta import GITHUB_SUGGESTION
from anta.cli.console import console
from anta.inventory import AntaInventory
from anta.logger import anta_log_exception
from anta.models import AntaTest
from anta.result_manager import ResultManager
from anta.settings import AntaRunnerSettings
from anta.tools import Catchtime

if TYPE_CHECKING:
    from collections.abc import Coroutine

    from anta.catalog import AntaCatalog, AntaTestDefinition
    from anta.device import AntaDevice
    from anta.result_manager.models import TestResult

logger = logging.getLogger(__name__)


class AntaRunFilters(BaseModel):
    """Define filters for an ANTA run.

    Filters determine which devices and tests to include in a run, and how to
    filter them with tags. This class is used by the `AntaRunner.run()` method.

    Attributes
    ----------
    devices : set[str] | None, optional
        Set of device names to run tests on. If `None`, includes all devices in
        the inventory. Commonly set via the NRFU CLI `--device/-d` option.
    tests : set[str] | None, optional
        Set of test names to run. If `None`, runs all available tests in the
        catalog. Commonly set via the NRFU CLI `--test/-t` option.
    tags : set[str] | None, optional
        Set of tags used to filter both devices and tests. A device or test
        must match any of the provided tags to be included. Commonly set via
        the NRFU CLI `--tags` option.
    established_only : bool, default=True
        When `True`, only includes devices with established connections in the
        test run.
    """

    model_config = ConfigDict(frozen=True, extra="forbid")
    devices: set[str] | None = None
    tests: set[str] | None = None
    tags: set[str] | None = None
    established_only: bool = True


@dataclass
class AntaRunContext:
    """Store the complete context and results of an ANTA run.

    A unique context is created and returned per ANTA run.

    Attributes
    ----------
    inventory: AntaInventory
        Initial inventory of devices provided to the run.
    catalog: AntaCatalog
        Initial catalog of tests provided to the run.
    manager: ResultManager
        Manager with the final test results.
    filters: AntaRunFilters
        Provided filters to the run.
    selected_inventory: AntaInventory
        The final inventory of devices selected for testing.
    selected_tests: defaultdict[AntaDevice, set[AntaTestDefinition]]
        A mapping containing the final tests to be run per device.
    devices_filtered_at_setup: list[str]
        List of device names that were filtered during the inventory setup phase.
    devices_unreachable_at_setup: list[str]
        List of device names that were found unreachable during the inventory setup phase.
    warnings_at_setup: list[str]
        List of warnings caught during the setup phase.
    start_time: datetime | None
        Start time of the run. None if not set yet.
    end_time: datetime | None
        End time of the run. None if not set yet.
    """

    inventory: AntaInventory
    catalog: AntaCatalog
    manager: ResultManager
    filters: AntaRunFilters
    dry_run: bool = False

    # State populated during the run
    selected_inventory: AntaInventory = field(default_factory=AntaInventory)
    selected_tests: defaultdict[AntaDevice, set[AntaTestDefinition]] = field(default_factory=lambda: defaultdict(set))
    devices_filtered_at_setup: list[str] = field(default_factory=list)
    devices_unreachable_at_setup: list[str] = field(default_factory=list)
    warnings_at_setup: list[str] = field(default_factory=list)
    start_time: datetime | None = None
    end_time: datetime | None = None

    @property
    def total_devices_in_inventory(self) -> int:
        """Total devices in the initial inventory provided to the run."""
        return len(self.inventory)

    @property
    def total_devices_filtered_by_tags(self) -> int:
        """Total devices filtered by tags at inventory setup."""
        return len(self.devices_filtered_at_setup)

    @property
    def total_devices_unreachable(self) -> int:
        """Total devices unreachable at inventory setup."""
        return len(self.devices_unreachable_at_setup)

    @property
    def total_devices_selected_for_testing(self) -> int:
        """Total devices selected for testing."""
        return len(self.selected_inventory)

    @property
    def total_tests_scheduled(self) -> int:
        """Total tests scheduled to run across all selected devices."""
        return sum(len(tests) for tests in self.selected_tests.values())

    @property
    def duration(self) -> timedelta | None:
        """Calculate the duration of the run. Returns None if start or end time is not set."""
        if self.start_time and self.end_time:
            return self.end_time - self.start_time
        return None


# pylint: disable=too-few-public-methods
class AntaRunner:
    """Run and manage ANTA test execution.

    This class orchestrates the execution of ANTA tests across network devices. It handles
    inventory filtering, test selection, concurrent test execution, and result collection.
    An `AntaRunner` instance is stateless between runs. All necessary inputs like inventory
    and catalog are provided to the `run()` method.

    Attributes
    ----------
    _settings : AntaRunnerSettings
        Settings container for the runner. This can be provided during initialization;
        otherwise, it is loaded from environment variables by default. See the
        `AntaRunnerSettings` class definition in the `anta.settings` module for details.

    Notes
    -----
    After initializing an `AntaRunner` instance, tests should only be executed through
    the `run()` method. This method manages the complete test lifecycle including setup,
    execution, and cleanup.

    Examples
    --------
    ```python
    import asyncio

    from anta._runner import AntaRunner, AntaRunFilters
    from anta.catalog import AntaCatalog
    from anta.inventory import AntaInventory

    inventory = AntaInventory.parse(
        filename="anta_inventory.yml",
        username="arista",
        password="arista",
    )
    catalog = AntaCatalog.parse(filename="anta_catalog.yml")

    # Create an ANTA runner
    runner = AntaRunner()

    # Run all tests
    first_run_results = asyncio.run(runner.run(inventory, catalog))

    # Run with filters
    second_run_results = asyncio.run(runner.run(inventory, catalog, filters=AntaRunFilters(tags={"leaf"})))
    ```
    """

    def __init__(self, settings: AntaRunnerSettings | None = None) -> None:
        """Initialize AntaRunner."""
        self._settings = settings if settings is not None else AntaRunnerSettings()
        logger.debug("AntaRunner initialized with settings: %s", self._settings.model_dump())

    async def run(
        self,
        inventory: AntaInventory,
        catalog: AntaCatalog,
        result_manager: ResultManager | None = None,
        filters: AntaRunFilters | None = None,
        *,
        dry_run: bool = False,
    ) -> AntaRunContext:
        """Run ANTA.

        Run workflow:

        1. Build the context object for the run.
        2. Set up the selected inventory, removing filtered/unreachable devices.
        3. Set up the selected tests, removing filtered tests.
        4. Prepare the `AntaTest` coroutines from the selected inventory and tests.
        5. Run the test coroutines if it is not a dry run.

        Parameters
        ----------
        inventory
            Inventory of network devices to test.
        catalog
            Catalog of tests to run.
        result_manager
            Manager for collecting and storing test results. If `None`, a new manager
            is returned for each run, otherwise the provided manager is used
            and results from subsequent runs are appended to it.
        filters
            Filters for the ANTA run. If `None`, run all tests on all devices.
        dry_run
            Dry-run mode flag. If `True`, run all setup steps but do not execute tests.

        Returns
        -------
        AntaRunContext
            The complete context and results of this ANTA run.
        """
        start_time = datetime.now(tz=timezone.utc)
        logger.info("ANTA run starting ...")

        ctx = AntaRunContext(
            inventory=inventory,
            catalog=catalog,
            manager=result_manager if result_manager is not None else ResultManager(),
            filters=filters if filters is not None else AntaRunFilters(),
            dry_run=dry_run,
            start_time=start_time,
        )

        if len(ctx.manager) > 0:
            msg = (
                f"Appending new results to the provided ResultManager which already holds {len(ctx.manager)} results. "
                "Statistics in this run context are for the current execution only."
            )
            self._log_warning_msg(msg=msg, ctx=ctx)

        if not ctx.catalog.tests:
            self._log_warning_msg(msg="The list of tests is empty. Exiting ...", ctx=ctx)
            ctx.end_time = datetime.now(tz=timezone.utc)
            return ctx

        with Catchtime(logger=logger, message="Preparing ANTA NRFU Run"):
            # Set up inventory
            setup_inventory_ok = await self._setup_inventory(ctx)
            if not setup_inventory_ok:
                ctx.end_time = datetime.now(tz=timezone.utc)
                return ctx

            # Set up tests
            with Catchtime(logger=logger, message="Preparing Tests"):
                setup_tests_ok = self._setup_tests(ctx)
                if not setup_tests_ok:
                    ctx.end_time = datetime.now(tz=timezone.utc)
                    return ctx

            # Get test coroutines
            test_coroutines = self._get_test_coroutines(ctx)

        self._log_run_information(ctx)

        if ctx.dry_run:
            logger.info("Dry-run mode, exiting before running the tests.")
            self._close_test_coroutines(test_coroutines, ctx)
            ctx.end_time = datetime.now(tz=timezone.utc)
            return ctx

        if AntaTest.progress is not None:
            AntaTest.nrfu_task = AntaTest.progress.add_task("Running NRFU Tests...", total=ctx.total_tests_scheduled)

        with Catchtime(logger=logger, message="Running Tests"):
            sem = Semaphore(self._settings.max_concurrency)

            async def run_with_sem(test_coro: Coroutine[Any, Any, TestResult]) -> TestResult:
                """Wrap the test coroutine with semaphore control."""
                async with sem:
                    return await test_coro

            results = await gather(*[run_with_sem(coro) for coro in test_coroutines])
            for res in results:
                ctx.manager.add(res)

        self._log_cache_statistics(ctx)

        ctx.end_time = datetime.now(tz=timezone.utc)
        return ctx

    async def _setup_inventory(self, ctx: AntaRunContext) -> bool:
        """Set up the inventory for the ANTA run.

        Returns True if the inventory setup was successful, otherwise False.
        """
        initial_device_names = set(ctx.inventory.keys())

        if not initial_device_names:
            self._log_warning_msg(msg="The initial inventory is empty. Exiting ...", ctx=ctx)
            return False

        # Filter the inventory based on the provided filters if any
        filtered_inventory = (
            ctx.inventory.get_inventory(tags=ctx.filters.tags, devices=ctx.filters.devices) if ctx.filters.tags or ctx.filters.devices else ctx.inventory
        )
        filtered_device_names = set(filtered_inventory.keys())
        ctx.devices_filtered_at_setup = sorted(initial_device_names - filtered_device_names)

        if not filtered_device_names:
            msg_parts = ["The inventory is empty after filtering by tags/devices."]
            if ctx.filters.devices:
                msg_parts.append(f"Devices filter: {', '.join(sorted(ctx.filters.devices))}.")
            if ctx.filters.tags:
                msg_parts.append(f"Tags filter: {', '.join(sorted(ctx.filters.tags))}.")
            msg_parts.append("Exiting ...")
            self._log_warning_msg(msg=" ".join(msg_parts), ctx=ctx)
            return False

        # In dry-run mode, set the selected inventory to the filtered inventory
        if ctx.dry_run:
            ctx.selected_inventory = filtered_inventory
            return True

        # Attempt to connect to devices that passed filters
        with Catchtime(logger=logger, message="Connecting to devices"):
            await filtered_inventory.connect_inventory()

        # Remove devices that are unreachable if required
        ctx.selected_inventory = filtered_inventory.get_inventory(established_only=True) if ctx.filters.established_only else filtered_inventory
        selected_device_names = set(ctx.selected_inventory.keys())
        ctx.devices_unreachable_at_setup = sorted(filtered_device_names - selected_device_names)

        if not selected_device_names:
            msg = "No reachable devices found for testing after connectivity checks. Exiting ..."
            self._log_warning_msg(msg=msg, ctx=ctx)
            return False

        return True

    def _setup_tests(self, ctx: AntaRunContext) -> bool:
        """Set up tests for the ANTA run.

        Returns True if the test setup was successful, otherwise False.
        """
        # Build indexes for the catalog. If `ctx.filters.tests` is set, filter the indexes based on these tests
        ctx.catalog.build_indexes(filtered_tests=ctx.filters.tests)

        # Create the device to tests mapping from the tags
        for device in ctx.selected_inventory.devices:
            if ctx.filters.tags:
                # If there are CLI tags, execute tests with matching tags for this device
                if not (matching_tags := ctx.filters.tags.intersection(device.tags)):
                    # The device does not have any selected tag, skipping.
                    # This should never happen because the device is already filtered by `_setup_inventory`.
                    continue
                ctx.selected_tests[device].update(ctx.catalog.get_tests_by_tags(matching_tags))
            else:
                # If there are no CLI tags, execute all tests that do not have any tags
                ctx.selected_tests[device].update(ctx.catalog.tag_to_tests[None])

                # Then add the tests with matching tags from device tags
                ctx.selected_tests[device].update(ctx.catalog.get_tests_by_tags(device.tags))

        if ctx.total_tests_scheduled == 0:
            msg_parts = ["No tests scheduled to run after filtering by tags/tests."]
            if ctx.filters.tests:
                msg_parts.append(f"Tests filter: {', '.join(sorted(ctx.filters.tests))}.")
            if ctx.filters.tags:
                msg_parts.append(f"Tags filter: {', '.join(sorted(ctx.filters.tags))}.")
            msg_parts.append("Exiting ...")
            self._log_warning_msg(msg=" ".join(msg_parts), ctx=ctx)
            return False

        return True

    def _get_test_coroutines(self, ctx: AntaRunContext) -> list[Coroutine[Any, Any, TestResult]]:
        """Get the test coroutines for the ANTA run."""
        coros = []
        for device, test_definitions in ctx.selected_tests.items():
            for test_def in test_definitions:
                try:
                    coros.append(test_def.test(device=device, inputs=test_def.inputs).test())
                except Exception as exc:  # noqa: BLE001, PERF203
                    # An AntaTest instance is potentially user-defined code.
                    # We need to catch everything and exit gracefully with an error message.
                    msg = "\n".join(
                        [
                            f"There is an error when creating test {test_def.test.__module__}.{test_def.test.__name__}.",
                            f"If this is not a custom test implementation: {GITHUB_SUGGESTION}",
                        ],
                    )
                    anta_log_exception(exc, msg, logger)
        return coros

    def _close_test_coroutines(self, coros: list[Coroutine[Any, Any, TestResult]], ctx: AntaRunContext) -> None:
        """Close the test coroutines. Used in dry-run."""
        for coro in coros:
            # Get the AntaTest instance from the coroutine locals, can be in `args` when decorated
            coro_locals = getcoroutinelocals(coro)
````
|
||||||
|
test = coro_locals.get("self") or coro_locals.get("args", (None))[0]
|
||||||
|
if isinstance(test, AntaTest):
|
||||||
|
ctx.manager.add(test.result)
|
||||||
|
else:
|
||||||
|
logger.error("Coroutine %s does not have an AntaTest instance.", coro)
|
||||||
|
coro.close()
|
||||||
|
|
||||||
|
def _log_run_information(self, ctx: AntaRunContext) -> None:
|
||||||
|
"""Log ANTA run information and potential resource limit warnings."""
|
||||||
|
# 34 is an estimate of the combined length of timestamp, log level name, filename and spacing added by the Rich logger
|
||||||
|
width = min(int(console.width) - 34, len(" Potential connections needed: 100000000\n"))
|
||||||
|
|
||||||
|
# Build device information
|
||||||
|
device_lines = [
|
||||||
|
"Devices:",
|
||||||
|
f" Total in initial inventory: {ctx.total_devices_in_inventory}",
|
||||||
|
]
|
||||||
|
if ctx.total_devices_filtered_by_tags > 0:
|
||||||
|
device_lines.append(f" Excluded by tags: {ctx.total_devices_filtered_by_tags}")
|
||||||
|
if ctx.total_devices_unreachable > 0:
|
||||||
|
device_lines.append(f" Failed to connect: {ctx.total_devices_unreachable}")
|
||||||
|
device_lines.append(f" Selected for testing: {ctx.total_devices_selected_for_testing}")
|
||||||
|
joined_device_lines = "\n".join(device_lines)
|
||||||
|
|
||||||
|
# Build title
|
||||||
|
title = " ANTA NRFU Dry Run Information " if ctx.dry_run else " ANTA NRFU Run Information "
|
||||||
|
formatted_title_line = f"{title:-^{width}}"
|
||||||
|
|
||||||
|
# Log run information
|
||||||
|
run_info = "\n".join(
|
||||||
|
[
|
||||||
|
f"{formatted_title_line}",
|
||||||
|
f"{joined_device_lines}",
|
||||||
|
f"Total number of selected tests: {ctx.total_tests_scheduled}",
|
||||||
|
f"{'':-^{width}}",
|
||||||
|
]
|
||||||
|
)
|
||||||
|
logger.info(run_info)
|
||||||
|
logger.debug("Max concurrent tests: %d", self._settings.max_concurrency)
|
||||||
|
if potential_connections := ctx.selected_inventory.max_potential_connections:
|
||||||
|
logger.debug("Potential connections needed: %d", potential_connections)
|
||||||
|
logger.debug("File descriptors limit: %d", self._settings.file_descriptor_limit)
|
||||||
|
|
||||||
|
# Log warnings for potential resource limits
|
||||||
|
if ctx.total_tests_scheduled > self._settings.max_concurrency:
|
||||||
|
msg = (
|
||||||
|
f"Tests count ({ctx.total_tests_scheduled}) exceeds concurrent limit ({self._settings.max_concurrency}). "
|
||||||
|
"Tests will be throttled. Please consult the ANTA FAQ."
|
||||||
|
)
|
||||||
|
self._log_warning_msg(msg=msg, ctx=ctx)
|
||||||
|
if potential_connections is not None and potential_connections > self._settings.file_descriptor_limit:
|
||||||
|
msg = (
|
||||||
|
f"Potential connections ({potential_connections}) exceeds file descriptor limit ({self._settings.file_descriptor_limit}). "
|
||||||
|
"Connection errors may occur. Please consult the ANTA FAQ."
|
||||||
|
)
|
||||||
|
self._log_warning_msg(msg=msg, ctx=ctx)
|
||||||
|
|
||||||
|
def _log_cache_statistics(self, ctx: AntaRunContext) -> None:
|
||||||
|
"""Log cache statistics for each device in the inventory."""
|
||||||
|
for device in ctx.selected_inventory.devices:
|
||||||
|
if device.cache_statistics is not None:
|
||||||
|
msg = (
|
||||||
|
f"Cache statistics for '{device.name}': "
|
||||||
|
f"{device.cache_statistics['cache_hits']} hits / {device.cache_statistics['total_commands_sent']} "
|
||||||
|
f"command(s) ({device.cache_statistics['cache_hit_ratio']})"
|
||||||
|
)
|
||||||
|
logger.debug(msg)
|
||||||
|
else:
|
||||||
|
logger.debug("Caching is not enabled on %s", device.name)
|
||||||
|
|
||||||
|
def _log_warning_msg(self, msg: str, ctx: AntaRunContext) -> None:
|
||||||
|
"""Log the provided message at WARNING level and add it to the context warnings_at_setup list."""
|
||||||
|
logger.warning(msg)
|
||||||
|
ctx.warnings_at_setup.append(msg)
|
|
@@ -16,11 +16,14 @@ try:

 except ImportError as exc:

-    def build_cli(exception: Exception) -> Callable[[], None]:
+    def build_cli(exception: ImportError) -> Callable[[], None]:
         """Build CLI function using the caught exception."""

         def wrap() -> None:
             """Error message if any CLI dependency is missing."""
+            if not exception.name or "click" not in exception.name:
+                raise exception
+
             print(
                 "The ANTA command line client could not run because the required "
                 "dependencies were not installed.\nMake sure you've installed "
@@ -22,7 +22,7 @@ logger = logging.getLogger(__name__)


 @click.command
-@catalog_options
+@catalog_options()
 def catalog(catalog: AntaCatalog) -> None:
     """Check that the catalog is valid."""
     console.print(f"[bold][green]Catalog is valid: {catalog.filename}")
@@ -18,3 +18,4 @@ get.add_command(commands.from_ansible)
 get.add_command(commands.inventory)
 get.add_command(commands.tags)
 get.add_command(commands.tests)
+get.add_command(commands.commands)
@@ -10,7 +10,7 @@ import asyncio
 import json
 import logging
 from pathlib import Path
-from typing import TYPE_CHECKING, Any
+from typing import TYPE_CHECKING, Any, Literal

 import click
 import requests
@@ -20,11 +20,21 @@ from rich.pretty import pretty_repr

 from anta.cli.console import console
 from anta.cli.get.utils import inventory_output_options
-from anta.cli.utils import ExitCode, inventory_options
+from anta.cli.utils import ExitCode, catalog_options, inventory_options

-from .utils import create_inventory_from_ansible, create_inventory_from_cvp, explore_package, get_cv_token
+from .utils import (
+    _explore_package,
+    _filter_tests_via_catalog,
+    _get_unique_commands,
+    _print_commands,
+    create_inventory_from_ansible,
+    create_inventory_from_cvp,
+    get_cv_token,
+    print_tests,
+)

 if TYPE_CHECKING:
+    from anta.catalog import AntaCatalog
     from anta.inventory import AntaInventory

 logger = logging.getLogger(__name__)
@@ -147,14 +157,53 @@ def tags(inventory: AntaInventory, **kwargs: Any) -> None:
 def tests(ctx: click.Context, module: str, test: str | None, *, short: bool, count: bool) -> None:
     """Show all builtin ANTA tests with an example output retrieved from each test documentation."""
     try:
-        tests_found = explore_package(module, test_name=test, short=short, count=count)
-        if tests_found == 0:
+        tests_found = _explore_package(module, test_name=test, short=short, count=count)
+        if len(tests_found) == 0:
             console.print(f"""No test {f"'{test}' " if test else ""}found in '{module}'.""")
         elif count:
-            if tests_found == 1:
+            if len(tests_found) == 1:
                 console.print(f"There is 1 test available in '{module}'.")
             else:
-                console.print(f"There are {tests_found} tests available in '{module}'.")
+                console.print(f"There are {len(tests_found)} tests available in '{module}'.")
+        else:
+            print_tests(tests_found, short=short)
+    except ValueError as e:
+        logger.error(str(e))
+        ctx.exit(ExitCode.USAGE_ERROR)
+
+
+@click.command
+@click.pass_context
+@click.option("--module", help="Filter commands by module name.", default="anta.tests", show_default=True)
+@click.option("--test", help="Filter by specific test name. If module is specified, searches only within that module.", type=str)
+@catalog_options(required=False)
+@click.option("--unique", help="Print only the unique commands.", is_flag=True, default=False)
+def commands(
+    ctx: click.Context,
+    module: str,
+    test: str | None,
+    catalog: AntaCatalog,
+    catalog_format: Literal["yaml", "json"] = "yaml",
+    *,
+    unique: bool,
+) -> None:
+    """Print all EOS commands used by the selected ANTA tests.
+
+    It can be filtered by module, test or using a catalog.
+    If no filter is given, all built-in ANTA tests commands are retrieved.
+    """
+    # TODO: implement catalog and catalog format
+    try:
+        tests_found = _explore_package(module, test_name=test)
+        if catalog:
+            tests_found = _filter_tests_via_catalog(tests_found, catalog)
+        if len(tests_found) == 0:
+            console.print(f"""No test {f"'{test}' " if test else ""}found in '{module}'{f" for catalog '{catalog.filename}'" if catalog else ""}.""")
+        if unique:
+            for command in _get_unique_commands(tests_found):
+                console.print(command)
+        else:
+            _print_commands(tests_found)
     except ValueError as e:
         logger.error(str(e))
         ctx.exit(ExitCode.USAGE_ERROR)
@@ -16,21 +16,25 @@ import sys
 import textwrap
 from pathlib import Path
 from sys import stdin
-from typing import Any, Callable
+from typing import TYPE_CHECKING, Any, Callable

 import click
 import requests
 import urllib3
 import yaml
+from typing_extensions import deprecated

 from anta.cli.console import console
 from anta.cli.utils import ExitCode
 from anta.inventory import AntaInventory
 from anta.inventory.models import AntaInventoryHost, AntaInventoryInput
-from anta.models import AntaTest
+from anta.models import AntaCommand, AntaTest

 urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

+if TYPE_CHECKING:
+    from anta.catalog import AntaCatalog
+
 logger = logging.getLogger(__name__)
@@ -231,7 +235,240 @@ def create_inventory_from_ansible(inventory: Path, output: Path, ansible_group:
     write_inventory_to_file(ansible_hosts, output)


-def explore_package(module_name: str, test_name: str | None = None, *, short: bool = False, count: bool = False) -> int:
+def _explore_package(module_name: str, test_name: str | None = None, *, short: bool = False, count: bool = False) -> list[type[AntaTest]]:
+    """Parse ANTA test submodules recursively and return a list of the found AntaTest.
+
+    Parameters
+    ----------
+    module_name
+        Name of the module to explore (e.g., 'anta.tests.routing.bgp').
+    test_name
+        If provided, only show tests starting with this name.
+    short
+        If True, only print test names without their inputs.
+    count
+        If True, only count the tests.
+
+    Returns
+    -------
+    list[type[AntaTest]]:
+        A list of the AntaTest found.
+    """
+    result: list[type[AntaTest]] = []
+    try:
+        module_spec = importlib.util.find_spec(module_name)
+    except ModuleNotFoundError:
+        # Relying on module_spec check below.
+        module_spec = None
+    except ImportError as e:
+        msg = "`--module <module>` option does not support relative imports"
+        raise ValueError(msg) from e
+
+    # Giving a second chance adding CWD to PYTHONPATH
+    if module_spec is None:
+        try:
+            logger.info("Could not find module `%s`, injecting CWD in PYTHONPATH and retrying...", module_name)
+            sys.path = [str(Path.cwd()), *sys.path]
+            module_spec = importlib.util.find_spec(module_name)
+        except ImportError:
+            module_spec = None
+
+    if module_spec is None or module_spec.origin is None:
+        msg = f"Module `{module_name}` was not found!"
+        raise ValueError(msg)
+
+    if module_spec.submodule_search_locations:
+        for _, sub_module_name, ispkg in pkgutil.walk_packages(module_spec.submodule_search_locations):
+            qname = f"{module_name}.{sub_module_name}"
+            if ispkg:
+                result.extend(_explore_package(qname, test_name=test_name, short=short, count=count))
+                continue
+            result.extend(find_tests_in_module(qname, test_name))
+    else:
+        result.extend(find_tests_in_module(module_spec.name, test_name))
+
+    return result
+
+
+def find_tests_in_module(qname: str, test_name: str | None) -> list[type[AntaTest]]:
+    """Return the list of AntaTest in the passed module qname, potentially filtering on test_name.
+
+    Parameters
+    ----------
+    qname
+        Name of the module to explore (e.g., 'anta.tests.routing.bgp').
+    test_name
+        If provided, only show tests starting with this name.
+
+    Returns
+    -------
+    list[type[AntaTest]]:
+        A list of the AntaTest found in the module.
+    """
+    results: list[type[AntaTest]] = []
+    try:
+        qname_module = importlib.import_module(qname)
+    except (AssertionError, ImportError) as e:
+        msg = f"Error when importing `{qname}` using importlib!"
+        raise ValueError(msg) from e
+
+    for _name, obj in inspect.getmembers(qname_module):
+        # Only retrieves the subclasses of AntaTest
+        if not inspect.isclass(obj) or not issubclass(obj, AntaTest) or obj == AntaTest:
+            continue
+        if test_name and not obj.name.startswith(test_name):
+            continue
+        results.append(obj)
+
+    return results
+
+
+def _filter_tests_via_catalog(tests: list[type[AntaTest]], catalog: AntaCatalog) -> list[type[AntaTest]]:
+    """Return the filtered list of tests present in the catalog.
+
+    Parameters
+    ----------
+    tests:
+        List of tests.
+    catalog:
+        The AntaCatalog to use as filtering.
+
+    Returns
+    -------
+    list[type[AntaTest]]:
+        The filtered list of tests containing uniquely the tests found in the catalog.
+    """
+    catalog_test_names = {test.test.name for test in catalog.tests}
+    return [test for test in tests if test.name in catalog_test_names]
+
+
+def print_tests(tests: list[type[AntaTest]], *, short: bool = False) -> None:
+    """Print a list of AntaTest.
+
+    Parameters
+    ----------
+    tests
+        A list of AntaTest subclasses.
+    short
+        If True, only print test names without their inputs.
+    """
+
+    def module_name(test: type[AntaTest]) -> str:
+        """Return the module name for the input test.
+
+        Used to group the test by module.
+        """
+        return test.__module__
+
+    from itertools import groupby
+
+    for module, module_tests in groupby(tests, module_name):
+        console.print(f"{module}:")
+        for test in module_tests:
+            print_test(test, short=short)
+
+
+def print_test(test: type[AntaTest], *, short: bool = False) -> None:
+    """Print a single test.
+
+    Parameters
+    ----------
+    test
+        the representation of the AntaTest as returned by inspect.getmembers
+    short
+        If True, only print test names without their inputs.
+    """
+    if not test.__doc__ or (example := extract_examples(test.__doc__)) is None:
+        msg = f"Test {test.name} in module {test.__module__} is missing an Example"
+        raise LookupError(msg)
+    # Picking up only the inputs in the examples
+    # Need to handle the fact that we nest the routing modules in Examples.
+    # This is a bit fragile.
+    inputs = example.split("\n")
+    test_name_lines = [i for i, input_entry in enumerate(inputs) if test.name in input_entry]
+    if not test_name_lines:
+        msg = f"Could not find the name of the test '{test.name}' in the Example section in the docstring."
+        raise ValueError(msg)
+    for list_index, line_index in enumerate(test_name_lines):
+        end = test_name_lines[list_index + 1] if list_index + 1 < len(test_name_lines) else -1
+        console.print(f" {inputs[line_index].strip()}")
+        # Injecting the description for the first example
+        if list_index == 0:
+            console.print(f" # {test.description}", soft_wrap=True)
+        if not short and len(inputs) > line_index + 2:  # There are params
+            console.print(textwrap.indent(textwrap.dedent("\n".join(inputs[line_index + 1 : end])), " " * 6))
+
+
+def extract_examples(docstring: str) -> str | None:
+    """Extract the content of the Example section in a Numpy docstring.
+
+    Returns
+    -------
+    str | None
+        The content of the section if present, None if the section is absent or empty.
+    """
+    pattern = r"Examples\s*--------\s*(.*)(?:\n\s*\n|\Z)"
+    match = re.search(pattern, docstring, flags=re.DOTALL)
+    return match[1].strip() if match and match[1].strip() != "" else None
+
+
+def _print_commands(tests: list[type[AntaTest]]) -> None:
+    """Print a list of commands per module and per test.
+
+    Parameters
+    ----------
+    tests
+        A list of AntaTest subclasses.
+    """
+
+    def module_name(test: type[AntaTest]) -> str:
+        """Return the module name for the input test.
+
+        Used to group the test by module.
+        """
+        return test.__module__
+
+    from itertools import groupby
+
+    for module, module_tests in groupby(tests, module_name):
+        console.print(f"{module}:")
+        for test in module_tests:
+            console.print(f" - {test.name}:")
+            for command in test.commands:
+                if isinstance(command, AntaCommand):
+                    console.print(f"   - {command.command}")
+                else:  # isinstance(command, AntaTemplate):
+                    console.print(f"   - {command.template}")
+
+
+def _get_unique_commands(tests: list[type[AntaTest]]) -> set[str]:
+    """Return a set of unique commands used by the tests.
+
+    Parameters
+    ----------
+    tests
+        A list of AntaTest subclasses.
+
+    Returns
+    -------
+    set[str]
+        A set of commands or templates used by each test.
+    """
+    result: set[str] = set()

+    for test in tests:
+        for command in test.commands:
+            if isinstance(command, AntaCommand):
+                result.add(command.command)
+            else:  # isinstance(command, AntaTemplate):
+                result.add(command.template)
+
+    return result
+
+
+@deprecated("This function is deprecated, use `_explore_package`. This will be removed in ANTA v2.0.0.", category=DeprecationWarning)
+def explore_package(module_name: str, test_name: str | None = None, *, short: bool = False, count: bool = False) -> int:  # pragma: no cover
     """Parse ANTA test submodules recursively and print AntaTest examples.

     Parameters
@@ -287,7 +524,8 @@ def explore_package(module_name: str, test_name: str | None = None, *, short: bo
     return tests_found


-def find_tests_examples(qname: str, test_name: str | None, *, short: bool = False, count: bool = False) -> int:
+@deprecated("This function is deprecated, use `find_tests_in_module`. This will be removed in ANTA v2.0.0.", category=DeprecationWarning)
+def find_tests_examples(qname: str, test_name: str | None, *, short: bool = False, count: bool = False) -> int:  # pragma: no cover
     """Print tests from `qname`, filtered by `test_name` if provided.

     Parameters
@@ -331,47 +569,3 @@ def find_tests_examples(qname: str, test_name: str | None, *, short: bool = Fals
             print_test(obj, short=short)

     return tests_found
-
-
-def print_test(test: type[AntaTest], *, short: bool = False) -> None:
-    """Print a single test.
-
-    Parameters
-    ----------
-    test
-        the representation of the AntaTest as returned by inspect.getmembers
-    short
-        If True, only print test names without their inputs.
-    """
-    if not test.__doc__ or (example := extract_examples(test.__doc__)) is None:
-        msg = f"Test {test.name} in module {test.__module__} is missing an Example"
-        raise LookupError(msg)
-    # Picking up only the inputs in the examples
-    # Need to handle the fact that we nest the routing modules in Examples.
-    # This is a bit fragile.
-    inputs = example.split("\n")
-    test_name_lines = [i for i, input_entry in enumerate(inputs) if test.name in input_entry]
-    if not test_name_lines:
-        msg = f"Could not find the name of the test '{test.name}' in the Example section in the docstring."
-        raise ValueError(msg)
-    for list_index, line_index in enumerate(test_name_lines):
-        end = test_name_lines[list_index + 1] if list_index + 1 < len(test_name_lines) else -1
-        console.print(f" {inputs[line_index].strip()}")
-        # Injecting the description for the first example
-        if list_index == 0:
-            console.print(f" # {test.description}", soft_wrap=True)
-        if not short and len(inputs) > line_index + 2:  # There are params
-            console.print(textwrap.indent(textwrap.dedent("\n".join(inputs[line_index + 1 : end])), " " * 6))
-
-
-def extract_examples(docstring: str) -> str | None:
-    """Extract the content of the Example section in a Numpy docstring.
-
-    Returns
-    -------
-    str | None
-        The content of the section if present, None if the section is absent or empty.
-    """
-    pattern = r"Examples\s*--------\s*(.*)(?:\n\s*\n|\Z)"
-    match = re.search(pattern, docstring, flags=re.DOTALL)
-    return match[1].strip() if match and match[1].strip() != "" else None
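The `extract_examples` regex above anchors on the NumPy-style `Examples` header and captures everything after it. A small sketch applying that exact pattern to an invented docstring (the YAML content is illustrative, not a real ANTA example):

```python
import re

# Same pattern as `extract_examples` in the diff above
pattern = r"Examples\s*--------\s*(.*)(?:\n\s*\n|\Z)"

# Invented NumPy-style docstring for demonstration
docstring = (
    "Verify device uptime.\n"
    "\n"
    "Examples\n"
    "--------\n"
    "anta.tests.system:\n"
    "  - VerifyUptime:\n"
    "      minimum: 86400\n"
)

# DOTALL lets `.` cross newlines so the whole section body is captured
match = re.search(pattern, docstring, flags=re.DOTALL)
example = match[1].strip() if match and match[1].strip() != "" else None
print(example)
```

The greedy `(.*)` grabs the section body up to the end of the docstring (`\Z`) or a blank line, and `.strip()` drops the trailing newline, leaving just the YAML snippet.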
@@ -57,7 +57,7 @@ HIDE_STATUS.remove("unset")
 @click.group(invoke_without_command=True, cls=IgnoreRequiredWithHelp)
 @click.pass_context
 @inventory_options
-@catalog_options
+@catalog_options()
 @click.option(
     "--device",
     "-d",
@@ -50,6 +50,7 @@ def run_tests(ctx: click.Context) -> None:

     print_settings(inventory, catalog)
     with anta_progress_bar() as AntaTest.progress:
+        # TODO: Use AntaRunner directly in ANTA v2.0.0
         asyncio.run(
             main(
                 ctx.obj["result_manager"],
@@ -65,9 +66,11 @@
     ctx.exit()


-def _get_result_manager(ctx: click.Context) -> ResultManager:
+def _get_result_manager(ctx: click.Context, *, apply_hide_filter: bool = True) -> ResultManager:
     """Get a ResultManager instance based on Click context."""
-    return ctx.obj["result_manager"].filter(ctx.obj.get("hide")) if ctx.obj.get("hide") is not None else ctx.obj["result_manager"]
+    if apply_hide_filter:
+        return ctx.obj["result_manager"].filter(ctx.obj.get("hide")) if ctx.obj.get("hide") is not None else ctx.obj["result_manager"]
+    return ctx.obj["result_manager"]


 def print_settings(
@@ -157,7 +160,10 @@ def save_markdown_report(ctx: click.Context, md_output: pathlib.Path) -> None:
         Path to save the markdown report.
     """
     try:
-        MDReportGenerator.generate(results=_get_result_manager(ctx).sort(["name", "categories", "test"]), md_filename=md_output)
+        manager = _get_result_manager(ctx, apply_hide_filter=False).sort(["name", "categories", "test"])
+        filtered_manager = _get_result_manager(ctx, apply_hide_filter=True).sort(["name", "categories", "test"])
+        sections = [(section, filtered_manager) if section.__name__ == "TestResults" else (section, manager) for section in MDReportGenerator.DEFAULT_SECTIONS]
+        MDReportGenerator.generate_sections(md_filename=md_output, sections=sections)
         console.print(f"Markdown report saved to {md_output} ✅", style="cyan")
     except OSError:
         console.print(f"Failed to save Markdown report to {md_output} ❌", style="cyan")
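The list comprehension in `save_markdown_report` pairs each report section class with a result manager, giving only the section named `TestResults` the hide-filtered results. A minimal sketch of that pairing, with stand-in classes and plain strings in place of real `ResultManager` instances:

```python
# Stand-ins for MDReportGenerator.DEFAULT_SECTIONS entries (not the real classes)
class Summary: ...
class TestResults: ...


DEFAULT_SECTIONS = [Summary, TestResults]

# Stand-ins for the unfiltered and hide-filtered ResultManager instances
manager = "all-results"
filtered_manager = "visible-results"

# Same shape as the comprehension in the diff: dispatch on the class name
sections = [
    (section, filtered_manager) if section.__name__ == "TestResults" else (section, manager)
    for section in DEFAULT_SECTIONS
]

print([(section.__name__, mgr) for section, mgr in sections])
# [('Summary', 'all-results'), ('TestResults', 'visible-results')]
```

Summary-style sections therefore keep counting hidden results, while the detailed results table respects the `--hide` filter.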
@@ -163,6 +163,7 @@ def core_options(f: Callable[..., Any]) -> Callable[..., Any]:
         show_envvar=True,
         envvar="ANTA_TIMEOUT",
         show_default=True,
+        type=float,
     )
     @click.option(
         "--insecure",
@@ -290,50 +291,57 @@ def inventory_options(f: Callable[..., Any]) -> Callable[..., Any]:
     return wrapper


-def catalog_options(f: Callable[..., Any]) -> Callable[..., Any]:
+def catalog_options(*, required: bool = True) -> Callable[..., Callable[..., Any]]:
     """Click common options when requiring a test catalog to execute ANTA tests."""

-    @click.option(
-        "--catalog",
-        "-c",
-        envvar="ANTA_CATALOG",
-        show_envvar=True,
-        help="Path to the test catalog file",
-        type=click.Path(
-            file_okay=True,
-            dir_okay=False,
-            exists=True,
-            readable=True,
-            path_type=Path,
-        ),
-        required=True,
-    )
-    @click.option(
-        "--catalog-format",
-        envvar="ANTA_CATALOG_FORMAT",
-        show_envvar=True,
-        help="Format of the catalog file, either 'yaml' or 'json'",
-        default="yaml",
-        type=click.Choice(["yaml", "json"], case_sensitive=False),
-    )
-    @click.pass_context
-    @functools.wraps(f)
-    def wrapper(
-        ctx: click.Context,
-        *args: tuple[Any],
-        catalog: Path,
-        catalog_format: str,
-        **kwargs: dict[str, Any],
-    ) -> Any:
-        # If help is invoke somewhere, do not parse catalog
-        if ctx.obj.get("_anta_help"):
-            return f(*args, catalog=None, **kwargs)
-        try:
-            file_format = catalog_format.lower()
-            c = AntaCatalog.parse(catalog, file_format=file_format)  # type: ignore[arg-type]
-        except (TypeError, ValueError, YAMLError, OSError) as e:
-            anta_log_exception(e, f"Failed to parse the catalog: {catalog}", logger)
-            ctx.exit(ExitCode.USAGE_ERROR)
-        return f(*args, catalog=c, **kwargs)
+    def wrapper(f: Callable[..., Any]) -> Callable[..., Any]:
+        """Click common options when requiring a test catalog to execute ANTA tests."""
+
+        @click.option(
+            "--catalog",
+            "-c",
+            envvar="ANTA_CATALOG",
+            show_envvar=True,
+            help="Path to the test catalog file",
+            type=click.Path(
+                file_okay=True,
+                dir_okay=False,
+                exists=True,
+                readable=True,
+                path_type=Path,
+            ),
+            required=required,
+        )
+        @click.option(
+            "--catalog-format",
+            envvar="ANTA_CATALOG_FORMAT",
+            show_envvar=True,
+            help="Format of the catalog file, either 'yaml' or 'json'",
+            default="yaml",
+            type=click.Choice(["yaml", "json"], case_sensitive=False),
+        )
+        @click.pass_context
+        @functools.wraps(f)
+        def wrapper(
+            ctx: click.Context,
+            *args: tuple[Any],
+            catalog: Path | None,
+            catalog_format: Literal["yaml", "json"],
+            **kwargs: dict[str, Any],
+        ) -> Any:
+            # If help is invoked somewhere, do not parse catalog
+            if ctx.obj.get("_anta_help"):
+                return f(*args, catalog=None, **kwargs)
+            if not catalog and not required:
+                return f(*args, catalog=None, **kwargs)
+            try:
+                file_format = catalog_format.lower()
+                c = AntaCatalog.parse(catalog, file_format=file_format)  # type: ignore[arg-type]
+            except (TypeError, ValueError, YAMLError, OSError) as e:
+                anta_log_exception(e, f"Failed to parse the catalog: {catalog}", logger)
+                ctx.exit(ExitCode.USAGE_ERROR)
+            return f(*args, catalog=c, **kwargs)
+
+        return wrapper

     return wrapper
@@ -14,12 +14,14 @@ REGEXP_PATH_MARKERS = r"[\\\/\s]"
 """Match directory path from string."""
 REGEXP_INTERFACE_ID = r"\d+(\/\d+)*(\.\d+)?"
 """Match Interface ID lilke 1/1.1."""
-REGEXP_TYPE_EOS_INTERFACE = r"^(Dps|Ethernet|Fabric|Loopback|Management|Port-Channel|Tunnel|Vlan|Vxlan)[0-9]+(\/[0-9]+)*(\.[0-9]+)?$"
+REGEXP_TYPE_EOS_INTERFACE = r"^(Dps|Ethernet|Fabric|Loopback|Management|Port-Channel|Recirc-Channel|Tunnel|Vlan|Vxlan)[0-9]+(\/[0-9]+)*(\.[0-9]+)?$"
 """Match EOS interface types like Ethernet1/1, Vlan1, Loopback1, etc."""
 REGEXP_TYPE_VXLAN_SRC_INTERFACE = r"^(Loopback)([0-9]|[1-9][0-9]{1,2}|[1-7][0-9]{3}|8[01][0-9]{2}|819[01])$"
 """Match Vxlan source interface like Loopback10."""
 REGEX_TYPE_PORTCHANNEL = r"^Port-Channel[0-9]{1,6}$"
 """Match Port Channel interface like Port-Channel5."""
+REGEXP_EOS_INTERFACE_TYPE = r"^(Dps|Ethernet|Fabric|Loopback|Management|Port-Channel|Recirc-Channel|Tunnel|Vlan|Vxlan)$"
+"""Match an EOS interface type like Ethernet or Loopback."""
 REGEXP_TYPE_HOSTNAME = r"^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$"
 """Match hostname like `my-hostname`, `my-hostname-1`, `my-hostname-1-2`."""

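The interface-type regex gains a `Recirc-Channel` alternative in this release. A quick self-check of the new pattern, copied from the `+` line of the hunk above:

```python
import re

# The "+" side of the hunk above, with Recirc-Channel in the alternation.
REGEXP_TYPE_EOS_INTERFACE = (
    r"^(Dps|Ethernet|Fabric|Loopback|Management|Port-Channel|Recirc-Channel|Tunnel|Vlan|Vxlan)"
    r"[0-9]+(\/[0-9]+)*(\.[0-9]+)?$"
)

# Recirc-Channel interfaces now validate; modular and sub-interfaces still do too.
assert re.match(REGEXP_TYPE_EOS_INTERFACE, "Recirc-Channel5")
assert re.match(REGEXP_TYPE_EOS_INTERFACE, "Ethernet1/1.100")
# A bare type without digits, or an unknown spelling, still fails.
assert re.match(REGEXP_TYPE_EOS_INTERFACE, "Recirc-Channel") is None
assert re.match(REGEXP_TYPE_EOS_INTERFACE, "PortChannel1") is None
```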
@@ -187,9 +189,26 @@ def update_bgp_redistributed_proto_user(value: str) -> str:
     return value


+def convert_reload_cause(value: str) -> str:
+    """Convert a reload cause abbreviation into its full descriptive string.
+
+    Examples
+    --------
+    ```python
+    >>> convert_reload_cause("ZTP")
+    'System reloaded due to Zero Touch Provisioning'
+    ```
+    """
+    reload_causes = {"ZTP": "System reloaded due to Zero Touch Provisioning", "USER": "Reload requested by the user.", "FPGA": "Reload requested after FPGA upgrade"}
+    if not reload_causes.get(value.upper()):
+        msg = f"Invalid reload cause: '{value}' - expected causes are {list(reload_causes)}"
+        raise ValueError(msg)
+    return reload_causes[value.upper()]
+
+
 # AntaTest.Input types
 AAAAuthMethod = Annotated[str, AfterValidator(aaa_group_prefix)]
-Vlan = Annotated[int, Field(ge=0, le=4094)]
+VlanId = Annotated[int, Field(ge=0, le=4094)]
 MlagPriority = Annotated[int, Field(ge=1, le=32767)]
 Vni = Annotated[int, Field(ge=1, le=16777215)]
 Interface = Annotated[
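The new `convert_reload_cause` helper added above normalizes reload-cause abbreviations before pydantic validation. Reproduced standalone from the hunk so its behavior can be checked in isolation:

```python
def convert_reload_cause(value: str) -> str:
    """Convert a reload cause abbreviation into its full descriptive string (as added in the hunk above)."""
    reload_causes = {
        "ZTP": "System reloaded due to Zero Touch Provisioning",
        "USER": "Reload requested by the user.",
        "FPGA": "Reload requested after FPGA upgrade",
    }
    # Case-insensitive lookup: reject anything outside the known abbreviations.
    if not reload_causes.get(value.upper()):
        msg = f"Invalid reload cause: '{value}' - expected causes are {list(reload_causes)}"
        raise ValueError(msg)
    return reload_causes[value.upper()]
```

Because the lookup goes through `value.upper()`, `"ztp"` and `"ZTP"` both resolve to the full sentence; anything else raises `ValueError`.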
@@ -216,6 +235,11 @@ PortChannelInterface = Annotated[
     BeforeValidator(interface_autocomplete),
     BeforeValidator(interface_case_sensitivity),
 ]
+InterfaceType = Annotated[
+    str,
+    Field(pattern=REGEXP_EOS_INTERFACE_TYPE),
+    BeforeValidator(interface_case_sensitivity),
+]
 Afi = Literal["ipv4", "ipv6", "vpn-ipv4", "vpn-ipv6", "evpn", "rt-membership", "path-selection", "link-state"]
 Safi = Literal["unicast", "multicast", "labeled-unicast", "sr-te"]
 EncryptionAlgorithm = Literal["RSA", "ECDSA"]
@@ -396,3 +420,10 @@ RedistributedProtocol = Annotated[
 ]
 RedistributedAfiSafi = Annotated[Literal["v4u", "v4m", "v6u", "v6m"], BeforeValidator(bgp_redistributed_route_proto_abbreviations)]
 NTPStratumLevel = Annotated[int, Field(ge=0, le=16)]
+PowerSupplyFanStatus = Literal["failed", "ok", "unknownHwStatus", "powerLoss", "unsupported"]
+PowerSupplyStatus = Literal["ok", "unknown", "powerLoss", "failed"]
+ReloadCause = Annotated[
+    Literal["System reloaded due to Zero Touch Provisioning", "Reload requested by the user.", "Reload requested after FPGA upgrade", "USER", "FPGA", "ZTP"],
+    BeforeValidator(convert_reload_cause),
+]
+BgpCommunity = Literal["standard", "extended", "large"]
@@ -161,7 +161,7 @@ def skip_on_platforms(platforms: list[str]) -> Callable[[F], F]:
                 return anta_test.result

             if anta_test.device.hw_model in platforms:
-                anta_test.result.is_skipped(f"{anta_test.__class__.__name__} test is not supported on {anta_test.device.hw_model}.")
+                anta_test.result.is_skipped(f"{anta_test.__class__.__name__} test is not supported on {anta_test.device.hw_model}")
                 AntaTest.update_progress()
                 return anta_test.result

@@ -114,16 +114,19 @@ class AntaDevice(ABC):
        True if the device IP is reachable and a port can be open.
    established : bool
        True if remote command execution succeeds.
-    hw_model : str
+    hw_model : str | None
        Hardware model of the device.
    tags : set[str]
        Tags for this device.
    cache : AntaCache | None
        In-memory cache for this device (None if cache is disabled).
-    cache_locks : dict
+    cache_locks : defaultdict[str, asyncio.Lock] | None
        Dictionary mapping keys to asyncio locks to guarantee exclusive access to the cache if not disabled.
        Deprecated, will be removed in ANTA v2.0.0, use self.cache.locks instead.
+    max_connections : int | None
+        For informational/logging purposes only. Can be used by the runner to verify that
+        the total potential connections of a run do not exceed the system file descriptor limit.
+        This does **not** affect the actual device configuration. None if not available.
    """

    def __init__(self, name: str, tags: set[str] | None = None, *, disable_cache: bool = False) -> None:
@@ -159,6 +162,11 @@ class AntaDevice(ABC):
     def _keys(self) -> tuple[Any, ...]:
         """Read-only property to implement hashing and equality for AntaDevice classes."""

+    @property
+    def max_connections(self) -> int | None:
+        """Maximum number of concurrent connections allowed by the device. Can be overridden by subclasses, returns None if not available."""
+        return None
+
     def __eq__(self, other: object) -> bool:
         """Implement equality for AntaDevice objects."""
         return self._keys == other._keys if isinstance(other, self.__class__) else False
@@ -302,9 +310,8 @@ class AntaDevice(ABC):
         raise NotImplementedError(msg)


-# pylint: disable=too-many-instance-attributes
 class AsyncEOSDevice(AntaDevice):
-    """Implementation of AntaDevice for EOS using aio-eapi.
+    """Implementation of AntaDevice for EOS using the `asynceapi` library, which is built on HTTPX.

     Attributes
     ----------
@@ -318,7 +325,6 @@ class AsyncEOSDevice(AntaDevice):
        Hardware model of the device.
    tags : set[str]
        Tags for this device.
-
    """

    def __init__(  # noqa: PLR0913
@@ -329,7 +335,7 @@ class AsyncEOSDevice(AntaDevice):
         name: str | None = None,
         enable_password: str | None = None,
         port: int | None = None,
-        ssh_port: int | None = 22,
+        ssh_port: int = 22,
         tags: set[str] | None = None,
         timeout: float | None = None,
         proto: Literal["http", "https"] = "https",
@@ -350,8 +356,6 @@ class AsyncEOSDevice(AntaDevice):
            Password to connect to eAPI and SSH.
        name
            Device name.
-        enable
-            Collect commands using privileged mode.
        enable_password
            Password used to gain privileged access on EOS.
        port
@@ -361,14 +365,15 @@ class AsyncEOSDevice(AntaDevice):
        tags
            Tags for this device.
        timeout
-            Timeout value in seconds for outgoing API calls.
-        insecure
-            Disable SSH Host Key validation.
+            Global timeout value in seconds for outgoing eAPI calls. None means no timeout.
        proto
            eAPI protocol. Value can be 'http' or 'https'.
+        enable
+            Collect commands using privileged mode.
+        insecure
+            Disable SSH Host Key validation.
        disable_cache
            Disable caching for all commands for this device.

        """
        if host is None:
            message = "'host' is required to create an AsyncEOSDevice"
@@ -417,6 +422,7 @@ class AsyncEOSDevice(AntaDevice):
             _ssh_opts["kwargs"]["password"] = removed_pw
         yield ("_session", vars(self._session))
         yield ("_ssh_opts", _ssh_opts)
+        yield ("max_connections", self.max_connections) if self.max_connections is not None else ("max_connections", "N/A")

     def __repr__(self) -> str:
         """Return a printable representation of an AsyncEOSDevice."""
@@ -442,6 +448,14 @@ class AsyncEOSDevice(AntaDevice):
         """
         return (self._session.host, self._session.port)

+    @property
+    def max_connections(self) -> int | None:
+        """Maximum number of concurrent connections allowed by the device. Returns None if not available."""
+        try:
+            return self._session._transport._pool._max_connections  # type: ignore[attr-defined]  # noqa: SLF001
+        except AttributeError:
+            return None
+
     async def _get_semaphore(self) -> asyncio.Semaphore:
         """Return the semaphore, initializing it if needed.

@@ -539,29 +553,36 @@ class AsyncEOSDevice(AntaDevice):
        """Update attributes of an AsyncEOSDevice instance.

        This coroutine must update the following attributes of AsyncEOSDevice:
-        - is_online: When a device IP is reachable and a port can be open
+        - is_online: When a device eAPI HTTP endpoint is accessible
        - established: When a command execution succeeds
        - hw_model: The hardware model of the device
        """
        logger.debug("Refreshing device %s", self.name)
-        self.is_online = await self._session.check_connection()
-        if self.is_online:
-            show_version = AntaCommand(command="show version")
-            await self._collect(show_version)
-            if not show_version.collected:
-                logger.warning("Cannot get hardware information from device %s", self.name)
-            else:
-                self.hw_model = show_version.json_output.get("modelName", None)
-                if self.hw_model is None:
-                    logger.critical("Cannot parse 'show version' returned by device %s", self.name)
-                # in some cases it is possible that 'modelName' comes back empty
-                # and it is nice to get a meaninfule error message
-                elif self.hw_model == "":
-                    logger.critical("Got an empty 'modelName' in the 'show version' returned by device %s", self.name)
-        else:
-            logger.warning("Could not connect to device %s: cannot open eAPI port", self.name)
-
-        self.established = bool(self.is_online and self.hw_model)
+        try:
+            self.is_online = await self._session.check_api_endpoint()
+        except HTTPError as e:
+            self.is_online = False
+            self.established = False
+            logger.warning("Could not connect to device %s: %s", self.name, e)
+            return
+
+        show_version = AntaCommand(command="show version")
+        await self._collect(show_version)
+        if not show_version.collected:
+            self.established = False
+            logger.warning("Cannot get hardware information from device %s", self.name)
+            return
+
+        self.hw_model = show_version.json_output.get("modelName", None)
+        if self.hw_model is None:
+            self.established = False
+            logger.critical("Cannot parse 'show version' returned by device %s", self.name)
+        # in some cases it is possible that 'modelName' comes back empty
+        elif self.hw_model == "":
+            self.established = False
+            logger.critical("Got an empty 'modelName' in the 'show version' returned by device %s", self.name)
+        else:
+            self.established = True

    async def copy(self, sources: list[Path], destination: Path, direction: Literal["to", "from"] = "from") -> None:
        """Copy files to and from the device using asyncssh.scp().
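The rewritten `refresh()` above replaces one final `bool(...)` assignment with early returns: each failure path sets `established = False` itself and bails out. A synchronous sketch of that control flow, using stand-in names (`FakeSession`, `ConnectionError` instead of the real eAPI session and `HTTPError`):

```python
from __future__ import annotations


class FakeSession:
    """Toy stand-in for the eAPI session used by refresh() (illustrative only)."""

    def __init__(self, reachable: bool, model: str | None) -> None:
        self.reachable = reachable
        self.model = model

    def check_api_endpoint(self) -> bool:
        if not self.reachable:
            raise ConnectionError("endpoint unreachable")
        return True

    def show_version(self) -> dict:
        return {"modelName": self.model}


def refresh(session: FakeSession) -> tuple[bool, bool, str | None]:
    """Sketch of the new flow, returning (is_online, established, hw_model)."""
    try:
        is_online = session.check_api_endpoint()
    except ConnectionError:
        return (False, False, None)  # offline: nothing else to probe
    hw_model = session.show_version().get("modelName")
    if not hw_model:  # None or empty string: online but not established
        return (is_online, False, hw_model)
    return (is_online, True, hw_model)
```

The early returns make each failure mode terminal and explicit, which is why the diff can also distinguish a missing `modelName` from an empty one without re-deriving `established` at the end.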
@@ -20,8 +20,8 @@ class Host(BaseModel):
     model_config = ConfigDict(extra="forbid")
     destination: IPv4Address | IPv6Address
     """Destination address to ping."""
-    source: IPv4Address | IPv6Address | Interface
-    """Source address IP or egress interface to use."""
+    source: IPv4Address | IPv6Address | Interface | None = None
+    """Source address IP or egress interface to use. Can be provided in the `VerifyReachability` test."""
     vrf: str = "default"
     """VRF context."""
     repeat: int = 2
@@ -38,10 +38,15 @@ class Host(BaseModel):

         Examples
         --------
-        Host: 10.1.1.1 Source: 10.2.2.2 VRF: mgmt
+        - Host: 10.1.1.1 Source: 10.2.2.2 VRF: mgmt
+        - Host: 10.1.1.1 VRF: mgmt

         """
-        return f"Host: {self.destination} Source: {self.source} VRF: {self.vrf}"
+        base_string = f"Host: {self.destination}"
+        if self.source:
+            base_string += f" Source: {self.source}"
+        base_string += f" VRF: {self.vrf}"
+        return base_string


 class LLDPNeighbor(BaseModel):
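Since `source` is now optional, `Host.__str__` above builds the report string piecewise instead of interpolating a possibly-`None` source. A minimal stand-in (a plain dataclass, not the real pydantic model) showing the same logic:

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class HostSketch:
    """Minimal stand-in for the pydantic Host model, enough to exercise __str__."""

    destination: str
    source: str | None = None
    vrf: str = "default"

    def __str__(self) -> str:
        # Same logic as the "+" lines above: only mention the source when set.
        base_string = f"Host: {self.destination}"
        if self.source:
            base_string += f" Source: {self.source}"
        base_string += f" VRF: {self.vrf}"
        return base_string
```

This keeps `"Source: None"` out of test reports when no source address or interface was given.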
@@ -16,4 +16,6 @@ class CVXPeers(BaseModel):
     """Model for a CVX Cluster Peer."""

     peer_name: Hostname
+    """The CVX Peer used communicate with a CVX server."""
     registration_state: Literal["Connecting", "Connected", "Registration error", "Registration complete", "Unexpected peer state"] = "Registration complete"
+    """The CVX registration state."""
new file: anta/input_models/evpn.py (65 lines)
@@ -0,0 +1,65 @@
+# Copyright (c) 2023-2025 Arista Networks, Inc.
+# Use of this source code is governed by the Apache License 2.0
+# that can be found in the LICENSE file.
+"""Module containing input models for EVPN tests."""
+
+from __future__ import annotations
+
+from ipaddress import IPv4Interface, IPv6Interface
+from typing import Literal
+
+from pydantic import BaseModel, ConfigDict
+
+from anta.custom_types import Vni
+
+
+class EVPNType5Prefix(BaseModel):
+    """Model for an EVPN Type-5 prefix."""
+
+    model_config = ConfigDict(extra="forbid")
+    address: IPv4Interface | IPv6Interface
+    """IPv4 or IPv6 prefix address to verify."""
+    vni: Vni
+    """VNI associated with the prefix."""
+    routes: list[EVPNRoute] | None = None
+    """Specific EVPN routes to verify for this prefix."""
+
+    def __str__(self) -> str:
+        """Return a human-readable string representation of the EVPNType5Prefix for reporting."""
+        return f"Prefix: {self.address} VNI: {self.vni}"
+
+
+class EVPNRoute(BaseModel):
+    """Model for an EVPN Type-5 route for a prefix."""
+
+    model_config = ConfigDict(extra="forbid")
+    rd: str
+    """Expected route distinguisher `<admin>:<local assignment>` of the route."""
+    domain: Literal["local", "remote"] = "local"
+    """EVPN domain. Can be remote on gateway nodes in a multi-domain EVPN VXLAN fabric."""
+    paths: list[EVPNPath] | None = None
+    """Specific paths to verify for this route."""
+
+    def __str__(self) -> str:
+        """Return a human-readable string representation of the EVPNRoute for reporting."""
+        value = f"RD: {self.rd}"
+        if self.domain == "remote":
+            value += " Domain: remote"
+        return value
+
+
+class EVPNPath(BaseModel):
+    """Model for an EVPN Type-5 path for a route."""
+
+    model_config = ConfigDict(extra="forbid")
+    nexthop: str
+    """Expected next-hop IPv4 or IPv6 address. Can be an empty string for local paths."""
+    route_targets: list[str] | None = None
+    """List of expected RTs following the `ASN(asplain):nn` or `ASN(asdot):nn` or `IP-address:nn` format."""
+
+    def __str__(self) -> str:
+        """Return a human-readable string representation of the EVPNPath for reporting."""
+        value = f"Nexthop: {self.nexthop}"
+        if self.route_targets:
+            value += f" RTs: {', '.join(self.route_targets)}"
+        return value
@@ -5,14 +5,14 @@

 from __future__ import annotations

-from ipaddress import IPv4Address, IPv4Network, IPv6Address
+from ipaddress import IPv4Address, IPv4Network, IPv6Address, IPv6Network
 from typing import TYPE_CHECKING, Any, Literal
 from warnings import warn

 from pydantic import BaseModel, ConfigDict, Field, PositiveInt, model_validator
 from pydantic_extra_types.mac_address import MacAddress

-from anta.custom_types import Afi, BgpDropStats, BgpUpdateError, MultiProtocolCaps, RedistributedAfiSafi, RedistributedProtocol, Safi, Vni
+from anta.custom_types import Afi, BgpCommunity, BgpDropStats, BgpUpdateError, Interface, MultiProtocolCaps, RedistributedAfiSafi, RedistributedProtocol, Safi, Vni

 if TYPE_CHECKING:
     import sys
@@ -150,26 +150,30 @@ class BgpAfi(BgpAddressFamily):  # pragma: no cover
 class BgpPeer(BaseModel):
     """Model for a BGP peer.

-    Only IPv4 peers are supported for now.
+    Supports IPv4, IPv6 and IPv6 link-local neighbors.
+
+    Also supports RFC5549 by providing the interface to be used for session establishment.
     """

     model_config = ConfigDict(extra="forbid")
-    peer_address: IPv4Address
-    """IPv4 address of the BGP peer."""
+    peer_address: IPv4Address | IPv6Address | None = None
+    """IP address of the BGP peer. Optional only if using `interface` for BGP RFC5549."""
+    interface: Interface | None = None
+    """Interface to be used for BGP RFC5549 session establishment."""
     vrf: str = "default"
-    """Optional VRF for the BGP peer. Defaults to `default`."""
+    """VRF for the BGP peer."""
     peer_group: str | None = None
     """Peer group of the BGP peer. Required field in the `VerifyBGPPeerGroup` test."""
-    advertised_routes: list[IPv4Network] | None = None
+    advertised_routes: list[IPv4Network | IPv6Network] | None = None
     """List of advertised routes in CIDR format. Required field in the `VerifyBGPExchangedRoutes` test."""
-    received_routes: list[IPv4Network] | None = None
+    received_routes: list[IPv4Network | IPv6Network] | None = None
     """List of received routes in CIDR format. Required field in the `VerifyBGPExchangedRoutes` test."""
     capabilities: list[MultiProtocolCaps] | None = None
     """List of BGP multiprotocol capabilities. Required field in the `VerifyBGPPeerMPCaps`, `VerifyBGPNlriAcceptance` tests."""
     strict: bool = False
     """If True, requires exact match of the provided BGP multiprotocol capabilities.

-    Optional field in the `VerifyBGPPeerMPCaps` test. Defaults to False."""
+    Optional field in the `VerifyBGPPeerMPCaps` test."""
     hold_time: int | None = Field(default=None, ge=3, le=7200)
     """BGP hold time in seconds. Required field in the `VerifyBGPTimers` test."""
     keep_alive_time: int | None = Field(default=None, ge=0, le=3600)
@@ -183,11 +187,11 @@ class BgpPeer(BaseModel):

     Optional field in the `VerifyBGPPeerUpdateErrors` test. If not provided, the test will verifies all the update error counters."""
     inbound_route_map: str | None = None
-    """Inbound route map applied, defaults to None. Required field in the `VerifyBgpRouteMaps` test."""
+    """Inbound route map applied to the peer. Optional field in the `VerifyBgpRouteMaps` test. If not provided, `outbound_route_map` must be provided."""
     outbound_route_map: str | None = None
-    """Outbound route map applied, defaults to None. Required field in the `VerifyBgpRouteMaps` test."""
+    """Outbound route map applied to the peer. Optional field in the `VerifyBgpRouteMaps` test. If not provided, `inbound_route_map` must be provided."""
     maximum_routes: int | None = Field(default=None, ge=0, le=4294967294)
-    """The maximum allowable number of BGP routes. `0` means unlimited. Required field in the `VerifyBGPPeerRouteLimit` test"""
+    """The maximum allowable number of BGP routes. `0` means unlimited. Required field in the `VerifyBGPPeerRouteLimit` test."""
     warning_limit: int | None = Field(default=None, ge=0, le=4294967294)
     """The warning limit for the maximum routes. `0` means no warning.

@@ -196,10 +200,26 @@ class BgpPeer(BaseModel):
     """The Time-To-Live (TTL). Required field in the `VerifyBGPPeerTtlMultiHops` test."""
     max_ttl_hops: int | None = Field(default=None, ge=1, le=255)
     """The Max TTL hops. Required field in the `VerifyBGPPeerTtlMultiHops` test."""
+    advertised_communities: list[BgpCommunity] = Field(default=["standard", "extended", "large"])
+    """List of advertised communities to be verified.
+
+    Optional field in the `VerifyBGPAdvCommunities` test. If not provided, the test will verify that all communities are advertised."""
+
+    @model_validator(mode="after")
+    def validate_inputs(self) -> Self:
+        """Validate the inputs provided to the BgpPeer class.
+
+        Either `peer_address` or `interface` must be provided, not both.
+        """
+        if (self.peer_address is None) == (self.interface is None):
+            msg = "Exactly one of 'peer_address' or 'interface' must be provided"
+            raise ValueError(msg)
+        return self

     def __str__(self) -> str:
         """Return a human-readable string representation of the BgpPeer for reporting."""
-        return f"Peer: {self.peer_address} VRF: {self.vrf}"
+        identifier = f"Peer: {self.peer_address}" if self.peer_address is not None else f"Interface: {self.interface}"
+        return f"{identifier} VRF: {self.vrf}"


 class BgpNeighbor(BgpPeer):  # pragma: no cover
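The new `BgpPeer` validator above enforces that exactly one of `peer_address` or `interface` is set, and `__str__` then reports whichever identifier was chosen. The same exclusive-or check works in plain Python, shown here without pydantic (the function name is illustrative):

```python
from __future__ import annotations


def validate_peer_inputs(peer_address: str | None, interface: str | None) -> str:
    """Plain-python version of the validator: exactly one of the two
    identifiers must be set; the chosen one drives the report string."""
    # Both None, or both set, compare equal under `is None` and are rejected.
    if (peer_address is None) == (interface is None):
        msg = "Exactly one of 'peer_address' or 'interface' must be provided"
        raise ValueError(msg)
    return f"Peer: {peer_address}" if peer_address is not None else f"Interface: {interface}"
```

Comparing the two `is None` results is a compact exclusive-or: it rejects the all-empty case and the over-specified case with one condition.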
@@ -344,7 +364,7 @@ class AddressFamilyConfig(BaseModel):
     Following table shows the supported redistributed routes for each address family.

     | IPv4 Unicast            | IPv6 Unicast            | IPv4 Multicast         | IPv6 Multicast         |
-    | ------------------------|-------------------------|------------------------|------------------------|
+    |-------------------------|-------------------------|------------------------|------------------------|
     | AttachedHost            | AttachedHost            | AttachedHost           | Connected              |
     | Bgp                     | Bgp                     | Connected              | IS-IS                  |
     | Connected               | Connected               | IS-IS                  | OSPF Internal          |
@@ -26,6 +26,10 @@ class ISISInstance(BaseModel):
     """Configured SR data-plane for the IS-IS instance."""
     segments: list[Segment] | None = None
     """List of IS-IS SR segments associated with the instance. Required field in the `VerifyISISSegmentRoutingAdjacencySegments` test."""
+    graceful_restart: bool = False
+    """Graceful restart status."""
+    graceful_restart_helper: bool = True
+    """Graceful restart helper status."""

     def __str__(self) -> str:
         """Return a human-readable string representation of the ISISInstance for reporting."""
26 anta/input_models/vlan.py Normal file
@@ -0,0 +1,26 @@
+# Copyright (c) 2023-2025 Arista Networks, Inc.
+# Use of this source code is governed by the Apache License 2.0
+# that can be found in the LICENSE file.
+"""Module containing input models for VLAN tests."""
+
+from __future__ import annotations
+
+from typing import Literal
+
+from pydantic import BaseModel, ConfigDict
+
+from anta.custom_types import VlanId
+
+
+class Vlan(BaseModel):
+    """Model for a VLAN."""
+
+    model_config = ConfigDict(extra="forbid")
+    vlan_id: VlanId
+    """The VLAN ID."""
+    status: Literal["active", "suspended", "inactive"]
+    """The VLAN administrative status."""
+
+    def __str__(self) -> str:
+        """Representation of the VLAN model."""
+        return f"VLAN: Vlan{self.vlan_id}"
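As a rough illustration of what the new `Vlan` input model enforces, here is a dependency-free sketch using a plain dataclass instead of pydantic. The 1-4094 range for the VLAN ID is an assumption standing in for `VlanId`, and manual `__post_init__` checks stand in for pydantic validation:

```python
from dataclasses import dataclass

_ALLOWED_STATUSES = {"active", "suspended", "inactive"}


@dataclass(frozen=True)
class Vlan:
    """Sketch of a VLAN input model: an ID in the usual 802.1Q range plus an admin status."""

    vlan_id: int
    status: str

    def __post_init__(self) -> None:
        if not 1 <= self.vlan_id <= 4094:
            raise ValueError(f"vlan_id must be in 1-4094, got {self.vlan_id}")
        if self.status not in _ALLOWED_STATUSES:
            raise ValueError(f"status must be one of {sorted(_ALLOWED_STATUSES)}, got {self.status!r}")

    def __str__(self) -> str:
        # Mirrors the EOS-style interface name, e.g. "VLAN: Vlan10"
        return f"VLAN: Vlan{self.vlan_id}"
```

In the real model, `extra="forbid"` additionally rejects unknown keys in the input catalog, which this sketch does not reproduce.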
@@ -200,7 +200,7 @@ class AntaInventory(dict[str, AntaDevice]):
         enable_password
             Enable password to use if required.
         timeout
-            Timeout value in seconds for outgoing API calls.
+            Global timeout value in seconds for outgoing eAPI calls. None means no timeout.
         file_format
             Whether the inventory file is in JSON or YAML.
         enable
@@ -265,6 +265,11 @@ class AntaInventory(dict[str, AntaDevice]):
         """List of AntaDevice in this inventory."""
         return list(self.values())

+    @property
+    def max_potential_connections(self) -> int | None:
+        """Max potential connections of this inventory."""
+        return self._get_potential_connections()
+
     ###########################################################################
     # Public methods
     ###########################################################################
@@ -305,6 +310,29 @@ class AntaInventory(dict[str, AntaDevice]):
             result.add_device(device)
         return result

+    def _get_potential_connections(self) -> int | None:
+        """Calculate the total potential concurrent connections for the current inventory.
+
+        This method sums the maximum concurrent connections allowed for each
+        AntaDevice in the inventory.
+
+        Returns
+        -------
+        int | None
+            The total sum of the `max_connections` attribute for all AntaDevice objects
+            in the inventory. Returns None if any AntaDevice does not have a `max_connections`
+            attribute or if its value is None, as the total count cannot be determined.
+        """
+        potential_connections = 0
+        all_have_connections = True
+        for device in self.devices:
+            if device.max_connections is None:
+                all_have_connections = False
+                logger.debug("Device %s 'max_connections' is not available", device.name)
+                break
+            potential_connections += device.max_connections
+        return None if not all_have_connections else potential_connections
+
     ###########################################################################
     # SET methods
     ###########################################################################
@@ -29,7 +29,7 @@ class ReportTable:
     """TableReport Generate a Table based on TestResult."""

     @dataclass
-    class Headers:  # pylint: disable=too-many-instance-attributes
+    class Headers:
         """Headers for the table report."""

         device: str = "Device"
@@ -21,49 +21,10 @@ if TYPE_CHECKING:

     from anta.result_manager import ResultManager


 logger = logging.getLogger(__name__)


-# pylint: disable=too-few-public-methods
-class MDReportGenerator:
-    """Class responsible for generating a Markdown report based on the provided `ResultManager` object.
-
-    It aggregates different report sections, each represented by a subclass of `MDReportBase`,
-    and sequentially generates their content into a markdown file.
-
-    The `generate` class method will loop over all the section subclasses and call their `generate_section` method.
-    The final report will be generated in the same order as the `sections` list of the method.
-    """
-
-    @classmethod
-    def generate(cls, results: ResultManager, md_filename: Path) -> None:
-        """Generate and write the various sections of the markdown report.
-
-        Parameters
-        ----------
-        results
-            The ResultsManager instance containing all test results.
-        md_filename
-            The path to the markdown file to write the report into.
-        """
-        try:
-            with md_filename.open("w", encoding="utf-8") as mdfile:
-                sections: list[MDReportBase] = [
-                    ANTAReport(mdfile, results),
-                    TestResultsSummary(mdfile, results),
-                    SummaryTotals(mdfile, results),
-                    SummaryTotalsDeviceUnderTest(mdfile, results),
-                    SummaryTotalsPerCategory(mdfile, results),
-                    TestResults(mdfile, results),
-                ]
-                for section in sections:
-                    section.generate_section()
-        except OSError as exc:
-            message = f"OSError caught while writing the Markdown file '{md_filename.resolve()}'."
-            anta_log_exception(exc, message, logger)
-            raise
-
-
 class MDReportBase(ABC):
     """Base class for all sections subclasses.
@@ -178,10 +139,7 @@ class MDReportBase(ABC):
             return ""

         # Replace newlines with <br> to preserve line breaks in HTML
-        text = text.replace("\n", "<br>")
-
-        # Replace backticks with single quotes
-        return text.replace("`", "'")
+        return text.replace("\n", "<br>")


 class ANTAReport(MDReportBase):
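The hunk above drops the backtick-to-quote substitution and keeps only the newline conversion, so inline code markers now survive into the Markdown output while multi-line messages still render inside a single table cell. The retained behavior is simply:

```python
def preserve_line_breaks(text: str) -> str:
    """Replace newlines with <br> so multi-line messages render as line breaks in a Markdown table cell."""
    return text.replace("\n", "<br>")
```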
@@ -297,3 +255,68 @@ class TestResults(MDReportBase):
         """Generate the `## Test Results` section of the markdown report."""
         self.write_heading(heading_level=2)
         self.write_table(table_heading=self.TABLE_HEADING, last_table=True)
+
+
+# pylint: disable=too-few-public-methods
+class MDReportGenerator:
+    """Class responsible for generating a Markdown report based on the provided `ResultManager` object.
+
+    It aggregates different report sections, each represented by a subclass of `MDReportBase`,
+    and sequentially generates their content into a markdown file.
+
+    This class provides two methods for generating the report:
+
+    - `generate`: Uses a single result manager instance to generate all sections defined in the `DEFAULT_SECTIONS` class variable list.
+
+    - `generate_sections`: A custom list of sections is provided. Each section uses its own dedicated result manager instance,
+    allowing greater flexibility or isolation between section generations.
+    """
+
+    DEFAULT_SECTIONS: ClassVar[list[type[MDReportBase]]] = [
+        ANTAReport,
+        TestResultsSummary,
+        SummaryTotals,
+        SummaryTotalsDeviceUnderTest,
+        SummaryTotalsPerCategory,
+        TestResults,
+    ]
+
+    @classmethod
+    def generate(cls, results: ResultManager, md_filename: Path) -> None:
+        """Generate the sections of the markdown report defined in DEFAULT_SECTIONS using a single result manager instance for all sections.
+
+        Parameters
+        ----------
+        results
+            The ResultsManager instance containing all test results.
+        md_filename
+            The path to the markdown file to write the report into.
+        """
+        try:
+            with md_filename.open("w", encoding="utf-8") as mdfile:
+                for section in cls.DEFAULT_SECTIONS:
+                    section(mdfile, results).generate_section()
+        except OSError as exc:
+            message = f"OSError caught while writing the Markdown file '{md_filename.resolve()}'."
+            anta_log_exception(exc, message, logger)
+            raise
+
+    @classmethod
+    def generate_sections(cls, sections: list[tuple[type[MDReportBase], ResultManager]], md_filename: Path) -> None:
+        """Generate the different sections of the markdown report provided in the sections argument with each section using its own result manager instance.
+
+        Parameters
+        ----------
+        sections
+            A list of tuples, where each tuple contains a subclass of `MDReportBase` and an instance of `ResultManager`.
+        md_filename
+            The path to the markdown file to write the report into.
+        """
+        try:
+            with md_filename.open("w", encoding="utf-8") as md_file:
+                for section, rm in sections:
+                    section(md_file, rm).generate_section()
+        except OSError as exc:
+            message = f"OSError caught while writing the Markdown file '{md_filename.resolve()}'."
+            anta_log_exception(exc, message, logger)
+            raise
@@ -21,7 +21,6 @@ from .models import CategoryStats, DeviceStats, TestStats
 logger = logging.getLogger(__name__)


-# pylint: disable=too-many-instance-attributes
 class ResultManager:
     """Manager of ANTA Results.
@@ -253,7 +252,7 @@ class ResultManager:
         if not set(sort_by).issubset(set(accepted_fields)):
             msg = f"Invalid sort_by fields: {sort_by}. Accepted fields are: {list(accepted_fields)}"
             raise ValueError(msg)
-        results = sorted(results, key=lambda result: [getattr(result, field) for field in sort_by])
+        results = sorted(results, key=lambda result: [getattr(result, field) or "" for field in sort_by])

         return results
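The `or ""` added to the sort key coalesces `None` field values to an empty string, because Python 3 refuses to order `None` against `str` (`TypeError: '<' not supported`). A minimal sketch of the pattern (dicts stand in for result objects; note `or ""` also maps other falsy values such as `""` to itself, which is harmless here):

```python
def sort_by_fields(items: list[dict], fields: list[str]) -> list[dict]:
    """Sort dicts by several fields, treating missing or None values as empty strings."""
    # Without the `or ""`, a None value raises TypeError when compared to a str.
    return sorted(items, key=lambda item: [item.get(field) or "" for field in fields])
```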
@@ -295,7 +294,7 @@ class ResultManager:
         if not set(sort_by).issubset(set(accepted_fields)):
             msg = f"Invalid sort_by fields: {sort_by}. Accepted fields are: {list(accepted_fields)}"
             raise ValueError(msg)
-        self._result_entries.sort(key=lambda result: [getattr(result, field) for field in sort_by])
+        self._result_entries.sort(key=lambda result: [getattr(result, field) or "" for field in sort_by])
         return self

     def filter(self, hide: set[AntaTestStatus]) -> ResultManager:
@@ -316,6 +315,25 @@ class ResultManager:
         manager.results = self.get_results(possible_statuses - hide)
         return manager

+    @classmethod
+    def merge_results(cls, results_managers: list[ResultManager]) -> ResultManager:
+        """Merge multiple ResultManager instances.
+
+        Parameters
+        ----------
+        results_managers
+            A list of ResultManager instances to merge.
+
+        Returns
+        -------
+        ResultManager
+            A new ResultManager instance containing the results of all the input ResultManagers.
+        """
+        combined_results = list(chain(*(rm.results for rm in results_managers)))
+        merged_manager = cls()
+        merged_manager.results = combined_results
+        return merged_manager
+
     @deprecated("This method is deprecated. This will be removed in ANTA v2.0.0.", category=DeprecationWarning)
     def filter_by_tests(self, tests: set[str]) -> ResultManager:
         """Get a filtered ResultManager that only contains specific tests.
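`merge_results` is essentially a flatten: it chains each manager's result list into one new manager, leaving the inputs untouched. The operation reduced to a toy stand-in class (a sketch, not the ANTA API):

```python
from itertools import chain


class Manager:
    """Toy stand-in for a result manager holding a flat list of results."""

    def __init__(self) -> None:
        self.results: list[str] = []

    @classmethod
    def merge(cls, managers: list["Manager"]) -> "Manager":
        merged = cls()
        # chain(*) flattens the per-manager result lists in order, without
        # mutating the source managers.
        merged.results = list(chain(*(m.results for m in managers)))
        return merged
```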
@@ -122,8 +122,6 @@ class TestResult(BaseModel):
         return f"Test '{self.test}' (on '{self.name}'): Result '{self.result}'\nMessages: {self.messages}"


-# Pylint does not treat dataclasses differently: https://github.com/pylint-dev/pylint/issues/9058
-# pylint: disable=too-many-instance-attributes
 @dataclass
 class DeviceStats:
     """Device statistics for a run of tests."""
@@ -5,16 +5,16 @@

 from __future__ import annotations

-import asyncio
 import logging
 import os
-import sys
 from collections import defaultdict
 from typing import TYPE_CHECKING, Any

+from typing_extensions import deprecated
+
 from anta import GITHUB_SUGGESTION
+from anta._runner import AntaRunFilters, AntaRunner
 from anta.logger import anta_log_exception, exc_to_str
-from anta.models import AntaTest
 from anta.tools import Catchtime, cprofile

 if TYPE_CHECKING:
@@ -31,6 +31,7 @@ if os.name == "posix":

     DEFAULT_NOFILE = 16384

+    @deprecated("This function is deprecated and will be removed in ANTA v2.0.0. Use AntaRunner class instead.", category=DeprecationWarning)
     def adjust_rlimit_nofile() -> tuple[int, int]:
         """Adjust the maximum number of open file descriptors for the ANTA process.
@@ -53,13 +54,17 @@ if os.name == "posix":
         logger.debug("Initial limit numbers for open file descriptors for the current ANTA process: Soft Limit: %s | Hard Limit: %s", limits[0], limits[1])
         nofile = min(limits[1], nofile)
         logger.debug("Setting soft limit for open file descriptors for the current ANTA process to %s", nofile)
-        resource.setrlimit(resource.RLIMIT_NOFILE, (nofile, limits[1]))
+        try:
+            resource.setrlimit(resource.RLIMIT_NOFILE, (nofile, limits[1]))
+        except ValueError as exception:
+            logger.warning("Failed to set soft limit for open file descriptors for the current ANTA process: %s", exc_to_str(exception))
         return resource.getrlimit(resource.RLIMIT_NOFILE)


 logger = logging.getLogger(__name__)


+@deprecated("This function is deprecated and will be removed in ANTA v2.0.0. Use AntaRunner class instead.", category=DeprecationWarning)
 def log_cache_statistics(devices: list[AntaDevice]) -> None:
     """Log cache statistics for each device in the inventory.
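The try/except added around `resource.setrlimit` guards against environments (for example some containers or restricted shells) where raising the soft limit is not permitted; the process then keeps its current limit instead of crashing. A hedged sketch of the clamp step that precedes the call, shown in isolation (POSIX only; the helper name is invented for illustration):

```python
import os


def clamped_soft_limit(requested: int) -> int:
    """Clamp a requested RLIMIT_NOFILE soft limit to the process hard limit (POSIX only)."""
    if os.name != "posix":
        return requested
    import resource

    _soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # The soft limit may never exceed the hard limit, so clamp before calling
    # setrlimit(). Caveat: an unlimited hard limit is RLIM_INFINITY (-1 in
    # Python), which min() would pick; real code should special-case it.
    return min(hard, requested)
```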
@@ -80,6 +85,7 @@ def log_cache_statistics(devices: list[AntaDevice]) -> None:
             logger.info("Caching is not enabled on %s", device.name)


+@deprecated("This function is deprecated and will be removed in ANTA v2.0.0. Use AntaRunner class instead.", category=DeprecationWarning)
 async def setup_inventory(inventory: AntaInventory, tags: set[str] | None, devices: set[str] | None, *, established_only: bool) -> AntaInventory | None:
     """Set up the inventory for the ANTA run.
@@ -122,6 +128,7 @@ async def setup_inventory(inventory: AntaInventory, tags: set[str] | None, devic
     return selected_inventory


+@deprecated("This function is deprecated and will be removed in ANTA v2.0.0. Use AntaRunner class instead.", category=DeprecationWarning)
 def prepare_tests(
     inventory: AntaInventory, catalog: AntaCatalog, tests: set[str] | None, tags: set[str] | None
 ) -> defaultdict[AntaDevice, set[AntaTestDefinition]] | None:
@@ -178,6 +185,7 @@ def prepare_tests(
     return device_to_tests


+@deprecated("This function is deprecated and will be removed in ANTA v2.0.0. Use AntaRunner class instead.", category=DeprecationWarning)
 def get_coroutines(selected_tests: defaultdict[AntaDevice, set[AntaTestDefinition]], manager: ResultManager | None = None) -> list[Coroutine[Any, Any, TestResult]]:
     """Get the coroutines for the ANTA run.
@@ -250,62 +258,11 @@ async def main(
     dry_run
         Build the list of coroutine to run and stop before test execution.
     """
-    if not catalog.tests:
-        logger.info("The list of tests is empty, exiting")
-        return
-
-    with Catchtime(logger=logger, message="Preparing ANTA NRFU Run"):
-        # Setup the inventory
-        selected_inventory = inventory if dry_run else await setup_inventory(inventory, tags, devices, established_only=established_only)
-        if selected_inventory is None:
-            return
-
-        with Catchtime(logger=logger, message="Preparing the tests"):
-            selected_tests = prepare_tests(selected_inventory, catalog, tests, tags)
-            if selected_tests is None:
-                return
-            final_tests_count = sum(len(tests) for tests in selected_tests.values())
-
-        run_info = (
-            "--- ANTA NRFU Run Information ---\n"
-            f"Number of devices: {len(inventory)} ({len(selected_inventory)} established)\n"
-            f"Total number of selected tests: {final_tests_count}\n"
-        )
-
-        if os.name == "posix":
-            # Adjust the maximum number of open file descriptors for the ANTA process
-            limits = adjust_rlimit_nofile()
-            run_info += f"Maximum number of open file descriptors for the current ANTA process: {limits[0]}\n"
-        else:
-            # Running on non-Posix system, cannot manage the resource.
-            limits = (sys.maxsize, sys.maxsize)
-            run_info += "Running on a non-POSIX system, cannot adjust the maximum number of file descriptors.\n"
-
-        run_info += "---------------------------------"
-
-        logger.info(run_info)
-
-        if final_tests_count > limits[0]:
-            logger.warning(
-                "The number of concurrent tests is higher than the open file descriptors limit for this ANTA process.\n"
-                "Errors may occur while running the tests.\n"
-                "Please consult the ANTA FAQ."
-            )
-
-        coroutines = get_coroutines(selected_tests, manager if dry_run else None)
-
-        if dry_run:
-            logger.info("Dry-run mode, exiting before running the tests.")
-            for coro in coroutines:
-                coro.close()
-            return
-
-        if AntaTest.progress is not None:
-            AntaTest.nrfu_task = AntaTest.progress.add_task("Running NRFU Tests...", total=len(coroutines))
-
-        with Catchtime(logger=logger, message="Running ANTA tests"):
-            results = await asyncio.gather(*coroutines)
-            for result in results:
-                manager.add(result)
-
-        log_cache_statistics(selected_inventory.devices)
+    runner = AntaRunner()
+    filters = AntaRunFilters(
+        devices=devices,
+        tests=tests,
+        tags=tags,
+        established_only=established_only,
+    )
+    await runner.run(inventory, catalog, manager, filters, dry_run=dry_run)
86 anta/settings.py Normal file
@@ -0,0 +1,86 @@
+# Copyright (c) 2023-2025 Arista Networks, Inc.
+# Use of this source code is governed by the Apache License 2.0
+# that can be found in the LICENSE file.
+"""Settings for ANTA."""
+
+from __future__ import annotations
+
+import logging
+import os
+import sys
+from typing import Any
+
+from pydantic import Field, PositiveInt
+from pydantic_settings import BaseSettings, SettingsConfigDict
+
+from anta.logger import exc_to_str
+
+logger = logging.getLogger(__name__)
+
+DEFAULT_MAX_CONCURRENCY = 50000
+"""Default value for the maximum number of concurrent tests in the event loop."""
+
+DEFAULT_NOFILE = 16384
+"""Default value for the maximum number of open file descriptors for the ANTA process."""
+
+
+class AntaRunnerSettings(BaseSettings):
+    """Environment variables for configuring the ANTA runner.
+
+    When initialized, relevant environment variables are loaded. If not set, default values are used.
+
+    On POSIX systems, also adjusts the process soft limit based on the `ANTA_NOFILE` environment variable
+    while respecting the system hard limit, meaning the new soft limit cannot exceed the system's hard limit.
+
+    On non-POSIX systems (Windows), sets the limit to `sys.maxsize`.
+
+    The adjusted limit is available with the `file_descriptor_limit` property after initialization.
+
+    Attributes
+    ----------
+    nofile : PositiveInt
+        Environment variable: ANTA_NOFILE
+
+        The maximum number of open file descriptors for the ANTA process. Defaults to 16384.
+
+    max_concurrency : PositiveInt
+        Environment variable: ANTA_MAX_CONCURRENCY
+
+        The maximum number of concurrent tests that can run in the event loop. Defaults to 50000.
+    """
+
+    model_config = SettingsConfigDict(env_prefix="ANTA_")
+
+    nofile: PositiveInt = Field(default=DEFAULT_NOFILE)
+    max_concurrency: PositiveInt = Field(default=DEFAULT_MAX_CONCURRENCY)
+
+    # Computed in post-init
+    _file_descriptor_limit: PositiveInt
+
+    # pylint: disable=arguments-differ
+    def model_post_init(self, _context: Any) -> None:  # noqa: ANN401
+        """Post-initialization method to set the file descriptor limit for the current ANTA process."""
+        if os.name != "posix":
+            logger.warning("Running on a non-POSIX system, cannot adjust the maximum number of file descriptors.")
+            self._file_descriptor_limit = sys.maxsize
+            return
+
+        import resource
+
+        limits = resource.getrlimit(resource.RLIMIT_NOFILE)
+        logger.debug("Initial file descriptor limits for the current ANTA process: Soft Limit: %s | Hard Limit: %s", limits[0], limits[1])
+
+        # Set new soft limit to minimum of requested and hard limit
+        new_soft_limit = min(limits[1], self.nofile)
+        logger.debug("Setting file descriptor soft limit to %s", new_soft_limit)
+        try:
+            resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft_limit, limits[1]))
+        except ValueError as exception:
+            logger.warning("Failed to set file descriptor soft limit for the current ANTA process: %s", exc_to_str(exception))
+
+        self._file_descriptor_limit = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
+
+    @property
+    def file_descriptor_limit(self) -> PositiveInt:
+        """The maximum number of file descriptors available to the process."""
+        return self._file_descriptor_limit
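The new `AntaRunnerSettings` reads `ANTA_`-prefixed environment variables through pydantic-settings and falls back to defaults when a variable is unset. The lookup-with-default-and-positivity-check behavior can be sketched without that dependency (the helper name and prefix handling are illustrative, not the ANTA API):

```python
import os


def env_setting(name: str, default: int, prefix: str = "ANTA_") -> int:
    """Read a positive integer setting from a prefixed environment variable, falling back to a default."""
    raw = os.environ.get(f"{prefix}{name}")
    if raw is None:
        return default
    value = int(raw)
    # Mirrors PositiveInt: values <= 0 are rejected rather than silently accepted.
    if value <= 0:
        raise ValueError(f"{prefix}{name} must be a positive integer, got {value}")
    return value
```

For example, exporting `ANTA_NOFILE=1024` before starting ANTA would override the 16384 default.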
@@ -38,8 +38,7 @@ class VerifyReachability(AntaTest):
               df_bit: True
               size: 100
               reachable: true
-            - source: Management0
-              destination: 8.8.8.8
+            - destination: 8.8.8.8
               vrf: MGMT
               df_bit: True
               size: 100
@@ -55,7 +54,7 @@ class VerifyReachability(AntaTest):
     categories: ClassVar[list[str]] = ["connectivity"]
     # Template uses '{size}{df_bit}' without space since df_bit includes leading space when enabled
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [
-        AntaTemplate(template="ping vrf {vrf} {destination} source {source} size {size}{df_bit} repeat {repeat}", revision=1)
+        AntaTemplate(template="ping vrf {vrf} {destination}{source} size {size}{df_bit} repeat {repeat}", revision=1)
     ]

     class Input(AntaTest.Input):
@@ -71,7 +70,7 @@ class VerifyReachability(AntaTest):
     def validate_hosts(cls, hosts: list[T]) -> list[T]:
         """Validate the 'destination' and 'source' IP address family in each host."""
        for host in hosts:
-            if not isinstance(host.source, str) and host.destination.version != host.source.version:
+            if host.source and not isinstance(host.source, str) and host.destination.version != host.source.version:
                 msg = f"{host} IP address family for destination does not match source"
                 raise ValueError(msg)
         return hosts
@@ -80,7 +79,12 @@ class VerifyReachability(AntaTest):
         """Render the template for each host in the input list."""
         return [
             template.render(
-                destination=host.destination, source=host.source, vrf=host.vrf, repeat=host.repeat, size=host.size, df_bit=" df-bit" if host.df_bit else ""
+                destination=host.destination,
+                source=f" source {host.source}" if host.source else "",
+                vrf=host.vrf,
+                repeat=host.repeat,
+                size=host.size,
+                df_bit=" df-bit" if host.df_bit else "",
             )
             for host in self.inputs.hosts
         ]
@@ -149,7 +149,7 @@ class VerifyMcsServerMounts(AntaTest):
         active_count = 0

         if not (connections := command_output.get("connections")):
-            self.result.is_failure("CVX connections are not available.")
+            self.result.is_failure("CVX connections are not available")
             return

         for connection in connections:
185 anta/tests/evpn.py Normal file
@@ -0,0 +1,185 @@
+# Copyright (c) 2023-2025 Arista Networks, Inc.
+# Use of this source code is governed by the Apache License 2.0
+# that can be found in the LICENSE file.
+"""Module related to EVPN tests."""
+
+# mypy: disable-error-code=attr-defined
+from __future__ import annotations
+
+from typing import Any, ClassVar
+
+from anta.input_models.evpn import EVPNPath, EVPNRoute, EVPNType5Prefix
+from anta.models import AntaCommand, AntaTemplate, AntaTest
+
+
+class VerifyEVPNType5Routes(AntaTest):
+    """Verifies EVPN Type-5 routes for given IP prefixes and VNIs.
+
+    It supports multiple levels of verification based on the provided input:
+
+    1. **Prefix/VNI only:** Verifies there is at least one 'active' and 'valid' path across all
+       Route Distinguishers (RDs) learning the given prefix and VNI.
+    2. **Specific Routes (RD/Domain):** Verifies that routes matching the specified RDs and domains
+       exist for the prefix/VNI. For each specified route, it checks if at least one of its paths
+       is 'active' and 'valid'.
+    3. **Specific Paths (Nexthop/Route Targets):** Verifies that specific paths exist within a
+       specified route (RD/Domain). For each specified path criteria (nexthop and optional route targets),
+       it finds all matching paths received from the peer and checks if at least one of these
+       matching paths is 'active' and 'valid'. The route targets check ensures all specified RTs
+       are present in the path's extended communities (subset check).
+
+    Expected Results
+    ----------------
+    * Success:
+        - If only prefix/VNI is provided: The prefix/VNI exists in the EVPN table
+          and has at least one active and valid path across all RDs.
+        - If specific routes are provided: All specified routes (by RD/Domain) are found,
+          and each has at least one active and valid path (if paths are not specified for the route).
+        - If specific paths are provided: All specified routes are found, and for each specified path criteria (nexthop/RTs),
+          at least one matching path exists and is active and valid.
+    * Failure:
+        - No EVPN Type-5 routes are found for the given prefix/VNI.
+        - A specified route (RD/Domain) is not found.
+        - No active and valid path is found when required (either globally for the prefix, per specified route, or per specified path criteria).
+        - A specified path criteria (nexthop/RTs) does not match any received paths for the route.
+
+    Examples
+    --------
+    ```yaml
+    anta.tests.evpn:
+      - VerifyEVPNType5Routes:
+          prefixes:
+            # At least one active/valid path across all RDs
+            - address: 192.168.10.0/24
+              vni: 10
+            # Specific routes each has at least one active/valid path
+            - address: 192.168.20.0/24
+              vni: 20
+              routes:
+                - rd: "10.0.0.1:20"
+                  domain: local
+                - rd: "10.0.0.2:20"
+                  domain: remote
+            # At least one active/valid path matching the nexthop
+            - address: 192.168.30.0/24
+              vni: 30
+              routes:
+                - rd: "10.0.0.1:30"
+                  domain: local
+                  paths:
+                    - nexthop: 10.1.1.1
+            # At least one active/valid path matching nexthop and specific RTs
+            - address: 192.168.40.0/24
+              vni: 40
+              routes:
+                - rd: "10.0.0.1:40"
+                  domain: local
+                  paths:
+                    - nexthop: 10.1.1.1
+                      route_targets:
+                        - "40:40"
+    ```
+    """
+
+    categories: ClassVar[list[str]] = ["bgp"]
+    commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaTemplate(template="show bgp evpn route-type ip-prefix {address} vni {vni}", revision=2)]
+
+    class Input(AntaTest.Input):
+        """Input model for the VerifyEVPNType5Routes test."""
+
+        prefixes: list[EVPNType5Prefix]
+        """List of EVPN Type-5 prefixes to verify."""
+
+    def render(self, template: AntaTemplate) -> list[AntaCommand]:
+        """Render the template for each EVPN Type-5 prefix in the input list."""
+        return [template.render(address=str(prefix.address), vni=prefix.vni) for prefix in self.inputs.prefixes]
+
+    # NOTE: The following static methods can be moved at the module level if needed for other EVPN tests
+    @staticmethod
+    def _get_all_paths(evpn_routes_data: dict[str, Any]) -> list[dict[str, Any]]:
+        """Extract all 'evpnRoutePaths' from the entire 'evpnRoutes' dictionary."""
+        all_paths = []
+        for route_data in evpn_routes_data.values():
+            all_paths.extend(route_data["evpnRoutePaths"])
+        return all_paths
+
+    @staticmethod
+    def _find_route(evpn_routes_data: dict[str, Any], rd_to_find: str, domain_to_find: str) -> dict[str, Any] | None:
+        """Find the specific route block for a given RD and domain."""
+        for route_data in evpn_routes_data.values():
+            if route_data["routeKeyDetail"].get("rd") == rd_to_find and route_data["routeKeyDetail"].get("domain") == domain_to_find:
+                return route_data
+        return None
+
+    @staticmethod
+    def _find_paths(paths: list[dict[str, Any]], nexthop: str, route_targets: list[str] | None = None) -> list[dict[str, Any]]:
+        """Find all matching paths for a given nexthop and RTs."""
+        route_targets = [f"Route-Target-AS:{rt}" for rt in route_targets] if route_targets is not None else []
+        return [path for path in paths if path["nextHop"] == nexthop and set(route_targets).issubset(set(path["routeDetail"]["extCommunities"]))]
+
+    @staticmethod
+    def _has_active_valid_path(paths: list[dict[str, Any]]) -> bool:
+        """Check if any path in the list is active and valid."""
+        return any(path["routeType"]["active"] and path["routeType"]["valid"] for path in paths)
+
+    @AntaTest.anta_test
+    def test(self) -> None:
+        """Main test function for VerifyEVPNType5Routes."""
+        self.result.is_success()
+
+        for command, prefix_input in zip(self.instance_commands, self.inputs.prefixes):
+            # Verify that the prefix is in the BGP EVPN table
+            evpn_routes_data = command.json_output.get("evpnRoutes")
+            if not evpn_routes_data:
+                self.result.is_failure(f"{prefix_input} - No EVPN Type-5 routes found")
+                continue
+
+            # Delegate verification logic for this prefix
+            self._verify_routes_for_prefix(prefix_input, evpn_routes_data)
+
+    def _verify_routes_for_prefix(self, prefix_input: EVPNType5Prefix, evpn_routes_data: dict[str, Any]) -> None:
+        """Verify EVPN routes for an input prefix."""
+        # Case: routes not provided for the prefix, check that at least one EVPN Type-5 route
+        # has at least one active and valid path across all learned routes from all RDs combined
+        if prefix_input.routes is None:
+            all_paths = self._get_all_paths(evpn_routes_data)
+            if not self._has_active_valid_path(all_paths):
+                self.result.is_failure(f"{prefix_input} - No active and valid path found across all RDs")
+            return
+
+        # Case: routes *is* provided, check each specified route
+        for route_input in prefix_input.routes:
+            # Try to find a route with matching RD and domain
+            route_data = self._find_route(evpn_routes_data, route_input.rd, route_input.domain)
+            if route_data is None:
+                self.result.is_failure(f"{prefix_input} {route_input} - Route not found")
+                continue
+
+            # Route found, now check its paths based on route_input criteria
+            self._verify_paths_for_route(prefix_input, route_input, route_data)
+
+    def _verify_paths_for_route(self, prefix_input: EVPNType5Prefix, route_input: EVPNRoute, route_data: dict[str, Any]) -> None:
+        """Verify paths for a specific EVPN route (route_data) based on route_input criteria."""
+        route_paths = route_data["evpnRoutePaths"]
+
+        # Case: paths not provided for the route, check that at least one path is active/valid
+        if route_input.paths is None:
+            if not self._has_active_valid_path(route_paths):
+                self.result.is_failure(f"{prefix_input} {route_input} - No active and valid path found")
+            return
+
+        # Case: paths *is* provided, check each specified path criteria
+        for path_input in route_input.paths:
+            self._verify_single_path(prefix_input, route_input, path_input, route_paths)
+
+    def _verify_single_path(self, prefix_input: EVPNType5Prefix, route_input: EVPNRoute, path_input: EVPNPath, available_paths: list[dict[str, Any]]) -> None:
+        """Verify if at least one active/valid path exists among available_paths matching the path_input criteria."""
+        # Try to find all paths matching nexthop and RTs criteria from the available paths for this route
+        matching_paths = self._find_paths(available_paths, path_input.nexthop, path_input.route_targets)
+        if not matching_paths:
+            self.result.is_failure(f"{prefix_input} {route_input} {path_input} - Path not found")
+            return
+
+        # Check that at least one matching path is active/valid
+        if not self._has_active_valid_path(matching_paths):
+            self.result.is_failure(f"{prefix_input} {route_input} {path_input} - No active and valid path found")
@@ -38,7 +38,7 @@ class VerifyFieldNotice44Resolution(AntaTest):
     categories: ClassVar[list[str]] = ["field notices"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show version detail", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyFieldNotice44Resolution."""
@@ -142,7 +142,7 @@ class VerifyFieldNotice72Resolution(AntaTest):
     categories: ClassVar[list[str]] = ["field notices"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show version detail", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyFieldNotice72Resolution."""
@@ -109,7 +109,7 @@ class VerifyHardwareFlowTrackerStatus(AntaTest):
         command_output = self.instance_commands[0].json_output
         # Check if hardware flow tracking is configured
         if not command_output.get("running"):
-            self.result.is_failure("Hardware flow tracking is not running.")
+            self.result.is_failure("Hardware flow tracking is not running")
             return

         for tracker in self.inputs.trackers:
@@ -9,6 +9,7 @@ from __future__ import annotations

 from typing import TYPE_CHECKING, ClassVar

+from anta.custom_types import PowerSupplyFanStatus, PowerSupplyStatus
 from anta.decorators import skip_on_platforms
 from anta.models import AntaCommand, AntaTest

@@ -45,7 +46,7 @@ class VerifyTransceiversManufacturers(AntaTest):
         manufacturers: list[str]
         """List of approved transceivers manufacturers."""

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyTransceiversManufacturers."""
@@ -78,7 +79,7 @@ class VerifyTemperature(AntaTest):
     categories: ClassVar[list[str]] = ["hardware"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show system environment temperature", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyTemperature."""
@@ -108,7 +109,7 @@ class VerifyTransceiversTemperature(AntaTest):
     categories: ClassVar[list[str]] = ["hardware"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show system environment temperature transceiver", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyTransceiversTemperature."""
@@ -141,7 +142,7 @@ class VerifyEnvironmentSystemCooling(AntaTest):
     categories: ClassVar[list[str]] = ["hardware"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show system environment cooling", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyEnvironmentSystemCooling."""
@@ -176,10 +177,10 @@ class VerifyEnvironmentCooling(AntaTest):
     class Input(AntaTest.Input):
         """Input model for the VerifyEnvironmentCooling test."""

-        states: list[str]
+        states: list[PowerSupplyFanStatus]
         """List of accepted states of fan status."""

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyEnvironmentCooling."""
@@ -225,10 +226,10 @@ class VerifyEnvironmentPower(AntaTest):
     class Input(AntaTest.Input):
         """Input model for the VerifyEnvironmentPower test."""

-        states: list[str]
+        states: list[PowerSupplyStatus]
         """List of accepted states list of power supplies status."""

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyEnvironmentPower."""
@@ -259,7 +260,7 @@ class VerifyAdverseDrops(AntaTest):
     categories: ClassVar[list[str]] = ["hardware"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show hardware counter drop", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyAdverseDrops."""
@@ -13,7 +13,7 @@ from typing import ClassVar, TypeVar
 from pydantic import Field, field_validator
 from pydantic_extra_types.mac_address import MacAddress

-from anta.custom_types import Interface, Percent, PositiveInteger
+from anta.custom_types import Interface, InterfaceType, Percent, PortChannelInterface, PositiveInteger
 from anta.decorators import skip_on_platforms
 from anta.input_models.interfaces import InterfaceDetail, InterfaceState
 from anta.models import AntaCommand, AntaTemplate, AntaTest
@@ -25,17 +25,62 @@ BPS_GBPS_CONVERSIONS = 1000000000
 T = TypeVar("T", bound=InterfaceState)


+def _is_interface_ignored(interface: str, ignored_interfaces: list[str] | None = None) -> bool | None:
+    """Verify if an interface is present in the ignored interfaces list.
+
+    Parameters
+    ----------
+    interface
+        This is a string containing the interface name.
+    ignored_interfaces
+        A list containing the interfaces or interface types to ignore.
+
+    Returns
+    -------
+    bool
+        True if the interface is in the list of ignored interfaces, false otherwise.
+
+    Example
+    -------
+    ```python
+    >>> _is_interface_ignored(interface="Ethernet1", ignored_interfaces=["Ethernet", "Port-Channel1"])
+    True
+    >>> _is_interface_ignored(interface="Ethernet2", ignored_interfaces=["Ethernet1", "Port-Channel"])
+    False
+    >>> _is_interface_ignored(interface="Port-Channel1", ignored_interfaces=["Ethernet1", "Port-Channel"])
+    True
+    >>> _is_interface_ignored(interface="Ethernet1/1", ignored_interfaces=["Ethernet1/1", "Port-Channel"])
+    True
+    >>> _is_interface_ignored(interface="Ethernet1/1", ignored_interfaces=["Ethernet1", "Port-Channel"])
+    False
+    >>> _is_interface_ignored(interface="Ethernet1.100", ignored_interfaces=["Ethernet1.100", "Port-Channel"])
+    True
+    ```
+    """
+    interface_prefix = re.findall(r"^[a-zA-Z-]+", interface, re.IGNORECASE)[0]
+    interface_exact_match = False
+    if ignored_interfaces:
+        for ignored_interface in ignored_interfaces:
+            if interface == ignored_interface:
+                interface_exact_match = True
+                break
+        return bool(any([interface_exact_match, interface_prefix in ignored_interfaces]))
+    return None
+
+
 class VerifyInterfaceUtilization(AntaTest):
     """Verifies that the utilization of interfaces is below a certain threshold.

     Load interval (default to 5 minutes) is defined in device configuration.
-    This test has been implemented for full-duplex interfaces only.
+
+    !!! warning
+        This test has been implemented for full-duplex interfaces only.

     Expected Results
     ----------------
     * Success: The test will pass if all interfaces have a usage below the threshold.
-    * Failure: The test will fail if one or more interfaces have a usage above the threshold.
-    * Error: The test will error out if the device has at least one non full-duplex interface.
+    * Failure: If any of the following occur:
+        - One or more interfaces have a usage above the threshold.
+        - The device has at least one non full-duplex interface.

     Examples
     --------
@@ -43,6 +88,9 @@ class VerifyInterfaceUtilization(AntaTest):
     anta.tests.interfaces:
       - VerifyInterfaceUtilization:
           threshold: 70.0
+          ignored_interfaces:
+            - Ethernet1
+            - Port-Channel1
     ```
     """
@@ -56,7 +104,9 @@ class VerifyInterfaceUtilization(AntaTest):
         """Input model for the VerifyInterfaceUtilization test."""

         threshold: Percent = 75.0
-        """Interface utilization threshold above which the test will fail. Defaults to 75%."""
+        """Interface utilization threshold above which the test will fail."""
+        ignored_interfaces: list[InterfaceType | Interface] | None = None
+        """A list of interfaces or interface types like Management which will ignore all Management interfaces."""

     @AntaTest.anta_test
     def test(self) -> None:
@@ -67,12 +117,23 @@ class VerifyInterfaceUtilization(AntaTest):
         interfaces = self.instance_commands[1].json_output

         for intf, rate in rates["interfaces"].items():
+            interface_data = []
+            # Verification is skipped if the interface is in the ignored interfaces list.
+            if _is_interface_ignored(intf, self.inputs.ignored_interfaces):
+                continue
+
             # The utilization logic has been implemented for full-duplex interfaces only
-            if ((duplex := (interface := interfaces["interfaces"][intf]).get("duplex", None)) is not None and duplex != duplex_full) or (
-                (members := interface.get("memberInterfaces", None)) is not None and any(stats["duplex"] != duplex_full for stats in members.values())
-            ):
-                self.result.is_failure(f"Interface {intf} or one of its member interfaces is not Full-Duplex. VerifyInterfaceUtilization has not been implemented.")
-                return
+            if not all([duplex := (interface := interfaces["interfaces"][intf]).get("duplex", None), duplex == duplex_full]):
+                if (members := interface.get("memberInterfaces", None)) is None:
+                    self.result.is_failure(f"Interface: {intf} - Test not implemented for non-full-duplex interfaces - Expected: {duplex_full} Actual: {duplex}")
+                    continue
+                interface_data = [(member_interface, state) for member_interface, stats in members.items() if (state := stats["duplex"]) != duplex_full]
+
+            for member_interface in interface_data:
+                self.result.is_failure(
+                    f"Interface: {intf} Member Interface: {member_interface[0]} - Test not implemented for non-full-duplex interfaces - Expected: {duplex_full}"
+                    f" Actual: {member_interface[1]}"
+                )

             if (bandwidth := interfaces["interfaces"][intf]["bandwidth"]) == 0:
                 self.logger.debug("Interface %s has been ignored due to null bandwidth value", intf)
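The reworked duplex check above no longer aborts the whole test on the first non-full-duplex interface; for Port-Channels it collects every member that is not full-duplex so each can be reported individually. A small sketch of that member filter with assumed sample data:

```python
# Sketch (sample data assumed) of the member-interface duplex filter from
# VerifyInterfaceUtilization: keep every member whose duplex is not full.
duplex_full = "duplexFull"
members = {
    "Ethernet1": {"duplex": "duplexFull"},
    "Ethernet2": {"duplex": "duplexHalf"},
}
# The walrus expression binds each member's duplex state so it can be
# carried into the failure message alongside the member name.
interface_data = [(member, state) for member, stats in members.items() if (state := stats["duplex"]) != duplex_full]
print(interface_data)
```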
@@ -106,12 +167,21 @@ class VerifyInterfaceErrors(AntaTest):
     categories: ClassVar[list[str]] = ["interfaces"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show interfaces counters errors", revision=1)]

+    class Input(AntaTest.Input):
+        """Input model for the VerifyInterfaceErrors test."""
+
+        ignored_interfaces: list[InterfaceType | Interface] | None = None
+        """A list of interfaces or interface types like Management which will ignore all Management interfaces."""
+
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyInterfaceErrors."""
         self.result.is_success()
         command_output = self.instance_commands[0].json_output
         for interface, counters in command_output["interfaceErrorCounters"].items():
+            # Verification is skipped if the interface is in the ignored interfaces list.
+            if _is_interface_ignored(interface, self.inputs.ignored_interfaces):
+                continue
             counters_data = [f"{counter}: {value}" for counter, value in counters.items() if value > 0]
             if counters_data:
                 self.result.is_failure(f"Interface: {interface} - Non-zero error counter(s) - {', '.join(counters_data)}")
@@ -130,18 +200,30 @@ class VerifyInterfaceDiscards(AntaTest):
     ```yaml
     anta.tests.interfaces:
       - VerifyInterfaceDiscards:
+          ignored_interfaces:
+            - Ethernet
+            - Port-Channel1
     ```
     """

     categories: ClassVar[list[str]] = ["interfaces"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show interfaces counters discards", revision=1)]

+    class Input(AntaTest.Input):
+        """Input model for the VerifyInterfaceDiscards test."""
+
+        ignored_interfaces: list[InterfaceType | Interface] | None = None
+        """A list of interfaces or interface types like Management which will ignore all Management interfaces."""
+
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyInterfaceDiscards."""
         self.result.is_success()
         command_output = self.instance_commands[0].json_output
         for interface, interface_data in command_output["interfaces"].items():
+            # Verification is skipped if the interface is in the ignored interfaces list.
+            if _is_interface_ignored(interface, self.inputs.ignored_interfaces):
+                continue
             counters_data = [f"{counter}: {value}" for counter, value in interface_data.items() if value > 0]
             if counters_data:
                 self.result.is_failure(f"Interface: {interface} - Non-zero discard counter(s): {', '.join(counters_data)}")
@ -164,16 +246,22 @@ class VerifyInterfaceErrDisabled(AntaTest):
|
||||||
"""
|
"""
|
||||||
|
|
||||||
categories: ClassVar[list[str]] = ["interfaces"]
|
categories: ClassVar[list[str]] = ["interfaces"]
|
||||||
commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show interfaces status", revision=1)]
|
commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show interfaces status errdisabled", revision=1)]
|
||||||
|
|
||||||
@AntaTest.anta_test
|
@AntaTest.anta_test
|
||||||
def test(self) -> None:
|
def test(self) -> None:
|
||||||
"""Main test function for VerifyInterfaceErrDisabled."""
|
"""Main test function for VerifyInterfaceErrDisabled."""
|
||||||
self.result.is_success()
|
self.result.is_success()
|
||||||
command_output = self.instance_commands[0].json_output
|
command_output = self.instance_commands[0].json_output
|
||||||
for interface, value in command_output["interfaceStatuses"].items():
|
if not (interface_details := get_value(command_output, "interfaceStatuses")):
|
||||||
if value["linkStatus"] == "errdisabled":
|
return
|
||||||
self.result.is_failure(f"Interface: {interface} - Link status Error disabled")
|
|
||||||
|
for interface, value in interface_details.items():
|
||||||
|
if causes := value.get("causes"):
|
||||||
|
msg = f"Interface: {interface} - Error disabled - Causes: {', '.join(causes)}"
|
||||||
|
self.result.is_failure(msg)
|
||||||
|
continue
|
||||||
|
self.result.is_failure(f"Interface: {interface} - Error disabled")
|
||||||
|
|
||||||
|
|
||||||
class VerifyInterfacesStatus(AntaTest):
|
class VerifyInterfacesStatus(AntaTest):
|
||||||
@@ -276,7 +364,7 @@ class VerifyStormControlDrops(AntaTest):
     categories: ClassVar[list[str]] = ["interfaces"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show storm-control", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyStormControlDrops."""
@@ -305,18 +393,31 @@ class VerifyPortChannels(AntaTest):
     ```yaml
     anta.tests.interfaces:
       - VerifyPortChannels:
+          ignored_interfaces:
+            - Port-Channel1
+            - Port-Channel2
     ```
     """

     categories: ClassVar[list[str]] = ["interfaces"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show port-channel", revision=1)]

+    class Input(AntaTest.Input):
+        """Input model for the VerifyPortChannels test."""
+
+        ignored_interfaces: list[PortChannelInterface] | None = None
+        """A list of port-channel interfaces to ignore."""
+
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyPortChannels."""
         self.result.is_success()
         command_output = self.instance_commands[0].json_output
         for port_channel, port_channel_details in command_output["portChannels"].items():
+            # Verification is skipped if the interface is in the ignored interfaces list.
+            if _is_interface_ignored(port_channel, self.inputs.ignored_interfaces):
+                continue
             # Verify that there are no inactive ports in any port channel.
             if inactive_ports := port_channel_details["inactivePorts"]:
                 self.result.is_failure(f"{port_channel} - Inactive port(s) - {', '.join(inactive_ports.keys())}")
@@ -335,18 +436,30 @@ class VerifyIllegalLACP(AntaTest):
     ```yaml
     anta.tests.interfaces:
       - VerifyIllegalLACP:
+          ignored_interfaces:
+            - Port-Channel1
+            - Port-Channel2
     ```
     """

     categories: ClassVar[list[str]] = ["interfaces"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show lacp counters all-ports", revision=1)]

+    class Input(AntaTest.Input):
+        """Input model for the VerifyIllegalLACP test."""
+
+        ignored_interfaces: list[PortChannelInterface] | None = None
+        """A list of port-channel interfaces to ignore."""
+
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyIllegalLACP."""
         self.result.is_success()
         command_output = self.instance_commands[0].json_output
         for port_channel, port_channel_dict in command_output["portChannels"].items():
+            # Verification is skipped if the interface is in the ignored interfaces list.
+            if _is_interface_ignored(port_channel, self.inputs.ignored_interfaces):
+                continue
             for interface, interface_details in port_channel_dict["interfaces"].items():
                 # Verify that there are no illegal LACP packets in any port channel.
                 if interface_details["illegalRxCount"] != 0:
@@ -431,11 +544,9 @@ class VerifySVI(AntaTest):


 class VerifyL3MTU(AntaTest):
-    """Verifies the global layer 3 Maximum Transfer Unit (MTU) for all L3 interfaces.
+    """Verifies the L3 MTU of routed interfaces.

-    Test that L3 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces.
-
-    You can define a global MTU to check, or an MTU per interface and you can also ignored some interfaces.
+    Test that layer 3 (routed) interfaces are configured with the correct MTU.

     Expected Results
     ----------------
@@ -449,9 +560,11 @@ class VerifyL3MTU(AntaTest):
       - VerifyL3MTU:
           mtu: 1500
           ignored_interfaces:
-            - Vxlan1
+            - Management  # Ignore all Management interfaces
+            - Ethernet2.100
+            - Ethernet1/1
           specific_mtu:
-            - Ethernet1: 2500
+            - Ethernet10: 9200
     ```
     """

@@ -463,33 +576,31 @@ class VerifyL3MTU(AntaTest):
         """Input model for the VerifyL3MTU test."""

         mtu: int = 1500
-        """Default MTU we should have configured on all non-excluded interfaces. Defaults to 1500."""
-        ignored_interfaces: list[str] = Field(default=["Management", "Loopback", "Vxlan", "Tunnel"])
-        """A list of L3 interfaces to ignore"""
-        specific_mtu: list[dict[str, int]] = Field(default=[])
-        """A list of dictionary of L3 interfaces with their specific MTU configured"""
+        """Expected L3 MTU configured on all non-excluded interfaces."""
+        ignored_interfaces: list[InterfaceType | Interface] = Field(default=["Dps", "Fabric", "Loopback", "Management", "Recirc-Channel", "Tunnel", "Vxlan"])
+        """A list of L3 interfaces or interface types like Loopback, Tunnel which will ignore all Loopback and Tunnel interfaces.
+
+        Takes precedence over the `specific_mtu` field."""
+        specific_mtu: list[dict[Interface, int]] = Field(default=[])
+        """A list of dictionaries of L3 interfaces with their expected L3 MTU configured."""

     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyL3MTU."""
         self.result.is_success()
         command_output = self.instance_commands[0].json_output
-        # Set list of interfaces with specific settings
-        specific_interfaces: list[str] = []
-        if self.inputs.specific_mtu:
-            for d in self.inputs.specific_mtu:
-                specific_interfaces.extend(d)
-        for interface, values in command_output["interfaces"].items():
-            if re.findall(r"[a-z]+", interface, re.IGNORECASE)[0] not in self.inputs.ignored_interfaces and values["forwardingModel"] == "routed":
-                if interface in specific_interfaces:
-                    invalid_mtu = next(
-                        (values["mtu"] for custom_data in self.inputs.specific_mtu if values["mtu"] != (expected_mtu := custom_data[interface])), None
-                    )
-                    if invalid_mtu:
-                        self.result.is_failure(f"Interface: {interface} - Incorrect MTU - Expected: {expected_mtu} Actual: {invalid_mtu}")
-                # Comparison with generic setting
-                elif values["mtu"] != self.inputs.mtu:
-                    self.result.is_failure(f"Interface: {interface} - Incorrect MTU - Expected: {self.inputs.mtu} Actual: {values['mtu']}")
+        specific_interfaces = {intf: mtu for intf_mtu in self.inputs.specific_mtu for intf, mtu in intf_mtu.items()}
+
+        for interface, details in command_output["interfaces"].items():
+            # Verification is skipped if the interface is in the ignored interfaces list
+            if _is_interface_ignored(interface, self.inputs.ignored_interfaces) or details["forwardingModel"] != "routed":
+                continue
+
+            actual_mtu = details["mtu"]
+            expected_mtu = specific_interfaces.get(interface, self.inputs.mtu)
+
+            if actual_mtu != expected_mtu:
+                self.result.is_failure(f"Interface: {interface} - Incorrect MTU - Expected: {expected_mtu} Actual: {actual_mtu}")


 class VerifyIPProxyARP(AntaTest):
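The rewritten MTU tests flatten the `specific_mtu` input, a list of single-entry dicts as parsed from YAML, into one lookup table, then fall back to the global `mtu` via `dict.get`. The pattern in isolation (interface names and values are illustrative only):

```python
# As parsed from a YAML list of "- Interface: mtu" entries.
specific_mtu = [{"Ethernet10": 9200}, {"Ethernet20": 1600}]
default_mtu = 1500

# Flatten the list of single-entry dicts into one lookup table.
specific_interfaces = {intf: mtu for intf_mtu in specific_mtu for intf, mtu in intf_mtu.items()}

# Per-interface expectation, falling back to the global default.
expected = {intf: specific_interfaces.get(intf, default_mtu) for intf in ("Ethernet10", "Ethernet30")}
```

This replaces the earlier `next(...)` generator gymnastics with a single dictionary lookup per interface.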
@@ -536,10 +647,9 @@ class VerifyIPProxyARP(AntaTest):


 class VerifyL2MTU(AntaTest):
-    """Verifies the global layer 2 Maximum Transfer Unit (MTU) for all L2 interfaces.
+    """Verifies the L2 MTU of bridged interfaces.

-    Test that L2 interfaces are configured with the correct MTU. It supports Ethernet, Port Channel and VLAN interfaces.
-    You can define a global MTU to check and also an MTU per interface and also ignored some interfaces.
+    Test that layer 2 (bridged) interfaces are configured with the correct MTU.

     Expected Results
     ----------------
@@ -551,10 +661,10 @@ class VerifyL2MTU(AntaTest):
     ```yaml
     anta.tests.interfaces:
       - VerifyL2MTU:
-          mtu: 1500
+          mtu: 9214
           ignored_interfaces:
-            - Management1
-            - Vxlan1
+            - Ethernet2/1
+            - Port-Channel  # Ignore all Port-Channel interfaces
           specific_mtu:
             - Ethernet1/1: 1500
     ```
@@ -568,28 +678,31 @@ class VerifyL2MTU(AntaTest):
         """Input model for the VerifyL2MTU test."""

         mtu: int = 9214
-        """Default MTU we should have configured on all non-excluded interfaces. Defaults to 9214."""
-        ignored_interfaces: list[str] = Field(default=["Management", "Loopback", "Vxlan", "Tunnel"])
-        """A list of L2 interfaces to ignore. Defaults to ["Management", "Loopback", "Vxlan", "Tunnel"]"""
+        """Expected L2 MTU configured on all non-excluded interfaces."""
+        ignored_interfaces: list[InterfaceType | Interface] = Field(default=["Dps", "Fabric", "Loopback", "Management", "Recirc-Channel", "Tunnel", "Vlan", "Vxlan"])
+        """A list of L2 interfaces or interface types like Ethernet, Port-Channel which will ignore all Ethernet and Port-Channel interfaces.
+
+        Takes precedence over the `specific_mtu` field."""
         specific_mtu: list[dict[Interface, int]] = Field(default=[])
-        """A list of dictionary of L2 interfaces with their specific MTU configured"""
+        """A list of dictionaries of L2 interfaces with their expected L2 MTU configured."""

     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyL2MTU."""
         self.result.is_success()
         interface_output = self.instance_commands[0].json_output["interfaces"]
-        specific_interfaces = {key: value for details in self.inputs.specific_mtu for key, value in details.items()}
+        specific_interfaces = {intf: mtu for intf_mtu in self.inputs.specific_mtu for intf, mtu in intf_mtu.items()}

         for interface, details in interface_output.items():
-            catch_interface = re.findall(r"^[e,p][a-zA-Z]+[-,a-zA-Z]*\d+\/*\d*", interface, re.IGNORECASE)
-            if catch_interface and catch_interface not in self.inputs.ignored_interfaces and details["forwardingModel"] == "bridged":
-                if interface in specific_interfaces:
-                    if (mtu := specific_interfaces[interface]) != (act_mtu := details["mtu"]):
-                        self.result.is_failure(f"Interface: {interface} - Incorrect MTU configured - Expected: {mtu} Actual: {act_mtu}")
-
-                elif (act_mtu := details["mtu"]) != self.inputs.mtu:
-                    self.result.is_failure(f"Interface: {interface} - Incorrect MTU configured - Expected: {self.inputs.mtu} Actual: {act_mtu}")
+            # Verification is skipped if the interface is in the ignored interfaces list
+            if _is_interface_ignored(interface, self.inputs.ignored_interfaces) or details["forwardingModel"] != "bridged":
+                continue
+
+            actual_mtu = details["mtu"]
+            expected_mtu = specific_interfaces.get(interface, self.inputs.mtu)
+
+            if actual_mtu != expected_mtu:
+                self.result.is_failure(f"Interface: {interface} - Incorrect MTU - Expected: {expected_mtu} Actual: {actual_mtu}")


 class VerifyInterfaceIPv4(AntaTest):

@@ -34,7 +34,7 @@ class VerifyLANZ(AntaTest):
     categories: ClassVar[list[str]] = ["lanz"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show queue-monitor length status", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyLANZ."""
@ -433,7 +433,7 @@ class VerifyLoggingErrors(AntaTest):
|
||||||
if len(command_output) == 0:
|
if len(command_output) == 0:
|
||||||
self.result.is_success()
|
self.result.is_success()
|
||||||
else:
|
else:
|
||||||
self.result.is_failure("Device has reported syslog messages with a severity of ERRORS or higher")
|
self.result.is_failure(f"Device has reported syslog messages with a severity of ERRORS or higher:\n{command_output}")
|
||||||
|
|
||||||
|
|
||||||
class VerifyLoggingEntries(AntaTest):
|
class VerifyLoggingEntries(AntaTest):
|
||||||
|
@ -450,10 +450,10 @@ class VerifyLoggingEntries(AntaTest):
|
||||||
anta.tests.logging:
|
anta.tests.logging:
|
||||||
- VerifyLoggingEntries:
|
- VerifyLoggingEntries:
|
||||||
logging_entries:
|
logging_entries:
|
||||||
- regex_match: ".ACCOUNTING-5-EXEC: cvpadmin ssh."
|
- regex_match: ".*ACCOUNTING-5-EXEC: cvpadmin ssh.*"
|
||||||
last_number_messages: 30
|
last_number_messages: 30
|
||||||
severity_level: alerts
|
severity_level: alerts
|
||||||
- regex_match: ".SPANTREE-6-INTERFACE_ADD:."
|
- regex_match: ".*SPANTREE-6-INTERFACE_ADD:.*"
|
||||||
last_number_messages: 10
|
last_number_messages: 10
|
||||||
severity_level: critical
|
severity_level: critical
|
||||||
```
|
```
|
||||||
|
@ -482,5 +482,5 @@ class VerifyLoggingEntries(AntaTest):
|
||||||
output = command_output.text_output
|
output = command_output.text_output
|
||||||
if not re.search(logging_entry.regex_match, output):
|
if not re.search(logging_entry.regex_match, output):
|
||||||
self.result.is_failure(
|
self.result.is_failure(
|
||||||
f"Pattern: {logging_entry.regex_match} - Not found in last {logging_entry.last_number_messages} {logging_entry.severity_level} log entries"
|
f"Pattern: `{logging_entry.regex_match}` - Not found in last {logging_entry.last_number_messages} {logging_entry.severity_level} log entries"
|
||||||
)
|
)
|
||||||

@@ -9,7 +9,7 @@ from __future__ import annotations

 from typing import TYPE_CHECKING, ClassVar

-from anta.custom_types import Vlan
+from anta.custom_types import VlanId
 from anta.models import AntaCommand, AntaTest

 if TYPE_CHECKING:
@@ -41,7 +41,7 @@ class VerifyIGMPSnoopingVlans(AntaTest):
     class Input(AntaTest.Input):
         """Input model for the VerifyIGMPSnoopingVlans test."""

-        vlans: dict[Vlan, bool]
+        vlans: dict[VlanId, bool]
         """Dictionary with VLAN ID and whether IGMP snooping must be enabled (True) or disabled (False)."""

     @AntaTest.anta_test

@@ -43,7 +43,7 @@ class VerifyUnifiedForwardingTableMode(AntaTest):
     mode: Literal[0, 1, 2, 3, 4, "flexible"]
     """Expected UFT mode. Valid values are 0, 1, 2, 3, 4, or "flexible"."""

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyUnifiedForwardingTableMode."""
@@ -81,7 +81,7 @@ class VerifyTcamProfile(AntaTest):
     profile: str
     """Expected TCAM profile."""

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyTcamProfile."""

@@ -36,7 +36,7 @@ class VerifyPtpModeStatus(AntaTest):
     categories: ClassVar[list[str]] = ["ptp"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show ptp", revision=2)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyPtpModeStatus."""
@@ -81,7 +81,7 @@ class VerifyPtpGMStatus(AntaTest):
     categories: ClassVar[list[str]] = ["ptp"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show ptp", revision=2)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyPtpGMStatus."""
@@ -116,7 +116,7 @@ class VerifyPtpLockStatus(AntaTest):
     categories: ClassVar[list[str]] = ["ptp"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show ptp", revision=2)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyPtpLockStatus."""
@@ -155,7 +155,7 @@ class VerifyPtpOffset(AntaTest):
     categories: ClassVar[list[str]] = ["ptp"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show ptp monitor", revision=1)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyPtpOffset."""
@@ -196,7 +196,7 @@ class VerifyPtpPortModeStatus(AntaTest):
     categories: ClassVar[list[str]] = ["ptp"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show ptp", revision=2)]

-    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab"])
+    @skip_on_platforms(["cEOSLab", "vEOS-lab", "cEOSCloudLab", "vEOS"])
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyPtpPortModeStatus."""

@@ -42,6 +42,34 @@ def _check_bgp_neighbor_capability(capability_status: dict[str, bool]) -> bool:
     return all(capability_status.get(state, False) for state in ("advertised", "received", "enabled"))


+def _get_bgp_peer_data(peer: BgpPeer, command_output: dict[str, Any]) -> dict[str, Any] | None:
+    """Retrieve BGP peer data for the given peer from the command output.
+
+    Parameters
+    ----------
+    peer
+        The BgpPeer object to look up.
+    command_output
+        Parsed output of the command.
+
+    Returns
+    -------
+    dict | None
+        The peer data dictionary if found, otherwise None.
+    """
+    if peer.interface is not None:
+        # RFC5549
+        identity = peer.interface
+        lookup_key = "ifName"
+    else:
+        identity = str(peer.peer_address)
+        lookup_key = "peerAddress"
+
+    peer_list = get_value(command_output, f"vrfs.{peer.vrf}.peerList", default=[])
+
+    return get_item(peer_list, lookup_key, identity)
+
+
 class VerifyBGPPeerCount(AntaTest):
     """Verifies the count of BGP peers for given address families.

@@ -348,14 +376,14 @@ class VerifyBGPSpecificPeers(AntaTest):


 class VerifyBGPPeerSession(AntaTest):
-    """Verifies the session state of BGP IPv4 peer(s).
+    """Verifies the session state of BGP peers.

     This test performs the following checks for each specified peer:

     1. Verifies that the peer is found in its VRF in the BGP configuration.
     2. Verifies that the BGP session is `Established` and, if specified, has remained established for at least the duration given by `minimum_established_time`.
     3. Ensures that both input and output TCP message queues are empty.
-       Can be disabled by setting `check_tcp_queues` global flag to `False`.
+       Can be disabled by setting `check_tcp_queues` input flag to `False`.

     Expected Results
     ----------------
@@ -387,6 +415,13 @@ class VerifyBGPPeerSession(AntaTest):
             vrf: DEV
           - peer_address: 10.1.255.4
             vrf: DEV
+          - peer_address: fd00:dc:1::1
+            vrf: default
+          # RFC5549
+          - interface: Ethernet1
+            vrf: default
+          - interface: Vlan3499
+            vrf: PROD
     ```
     """

@@ -397,11 +432,11 @@ class VerifyBGPPeerSession(AntaTest):
         """Input model for the VerifyBGPPeerSession test."""

         minimum_established_time: PositiveInt | None = None
-        """Minimum established time (seconds) for all the BGP sessions."""
+        """Minimum established time (seconds) for all BGP sessions."""
         check_tcp_queues: bool = True
-        """Flag to check if the TCP session queues are empty for all BGP peers. Defaults to `True`."""
+        """Flag to check if the TCP session queues are empty for all BGP peers."""
         bgp_peers: list[BgpPeer]
-        """List of BGP IPv4 peers."""
+        """List of BGP peers."""

     @AntaTest.anta_test
     def test(self) -> None:
@@ -411,11 +446,8 @@ class VerifyBGPPeerSession(AntaTest):
         output = self.instance_commands[0].json_output

         for peer in self.inputs.bgp_peers:
-            peer_ip = str(peer.peer_address)
-            peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
-
             # Check if the peer is found
-            if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
+            if (peer_data := _get_bgp_peer_data(peer, output)) is None:
                 self.result.is_failure(f"{peer} - Not found")
                 continue

@@ -440,20 +472,20 @@ class VerifyBGPPeerSession(AntaTest):
 class VerifyBGPExchangedRoutes(AntaTest):
     """Verifies the advertised and received routes of BGP IPv4 peer(s).

-    This test performs the following checks for each specified peer:
+    This test performs the following checks for each advertised and received route for each peer:

-    For each advertised and received route:
-      - Confirms that the route exists in the BGP route table.
-      - Verifies that the route is in an 'active' and 'valid' state.
+    - Confirms that the route exists in the BGP route table.
+    - If the `check_active` input flag is `True`, verifies that the route is 'valid' and 'active'.
+    - If the `check_active` input flag is `False`, verifies that the route is 'valid'.

     Expected Results
     ----------------
     * Success: If all of the following conditions are met:
         - All specified advertised/received routes are found in the BGP route table.
-        - All routes are in both 'active' and 'valid' states.
+        - All routes are 'active' and 'valid', or 'valid' only, per the `check_active` input flag.
     * Failure: If any of the following occur:
         - An advertised/received route is not found in the BGP route table.
-        - Any route is not in an 'active' or 'valid' state.
+        - Any route is not 'active' and 'valid', or 'valid' only, per the `check_active` input flag.

     Examples
     --------
@@ -461,6 +493,7 @@ class VerifyBGPExchangedRoutes(AntaTest):
     anta.tests.routing:
       bgp:
         - VerifyBGPExchangedRoutes:
+            check_active: True
             bgp_peers:
               - peer_address: 172.30.255.5
                 vrf: default
@@ -485,8 +518,10 @@ class VerifyBGPExchangedRoutes(AntaTest):
     class Input(AntaTest.Input):
         """Input model for the VerifyBGPExchangedRoutes test."""

+        check_active: bool = True
+        """Flag to check if the provided prefixes must be active and valid. If False, checks if the prefixes are valid only."""
         bgp_peers: list[BgpPeer]
-        """List of BGP IPv4 peers."""
+        """List of BGP peers."""
         BgpNeighbor: ClassVar[type[BgpNeighbor]] = BgpNeighbor

     @field_validator("bgp_peers")
@@ -503,7 +538,7 @@ class VerifyBGPExchangedRoutes(AntaTest):
         """Render the template for each BGP peer in the input list."""
         return [template.render(peer=str(bgp_peer.peer_address), vrf=bgp_peer.vrf) for bgp_peer in self.inputs.bgp_peers]

-    def _validate_bgp_route_paths(self, peer: str, route_type: str, route: str, entries: dict[str, Any]) -> str | None:
+    def _validate_bgp_route_paths(self, peer: str, route_type: str, route: str, entries: dict[str, Any], *, active_flag: bool = True) -> str | None:
         """Validate the BGP route paths."""
         # Check if the route is found
         if route in entries:
@@ -511,8 +546,11 @@ class VerifyBGPExchangedRoutes(AntaTest):
             route_paths = entries[route]["bgpRoutePaths"][0]["routeType"]
             is_active = route_paths["active"]
             is_valid = route_paths["valid"]
-            if not is_active or not is_valid:
-                return f"{peer} {route_type} route: {route} - Valid: {is_valid} Active: {is_active}"
+            if active_flag:
+                if not is_active or not is_valid:
+                    return f"{peer} {route_type} route: {route} - Valid: {is_valid} Active: {is_active}"
+            elif not is_valid:
+                return f"{peer} {route_type} route: {route} - Valid: {is_valid}"
             return None

         return f"{peer} {route_type} route: {route} - Not found"
|
@ -544,14 +582,14 @@ class VerifyBGPExchangedRoutes(AntaTest):
|
||||||
|
|
||||||
entries = command_output[route_type]
|
entries = command_output[route_type]
|
||||||
for route in routes:
|
for route in routes:
|
||||||
# Check if the route is found. If yes then checks the route is active and valid
|
# Check if the route is found. If yes then checks the route is active/valid
|
||||||
failure_msg = self._validate_bgp_route_paths(str(peer), route_type, str(route), entries)
|
failure_msg = self._validate_bgp_route_paths(str(peer), route_type, str(route), entries, active_flag=self.inputs.check_active)
|
||||||
if failure_msg:
|
if failure_msg:
|
||||||
self.result.is_failure(failure_msg)
|
self.result.is_failure(failure_msg)
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerMPCaps(AntaTest):
|
class VerifyBGPPeerMPCaps(AntaTest):
|
||||||
"""Verifies the multiprotocol capabilities of BGP IPv4 peer(s).
|
"""Verifies the multiprotocol capabilities of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -588,6 +626,19 @@ class VerifyBGPPeerMPCaps(AntaTest):
|
||||||
capabilities:
|
capabilities:
|
||||||
- ipv4 labeled-Unicast
|
- ipv4 labeled-Unicast
|
||||||
- ipv4MplsVpn
|
- ipv4MplsVpn
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
strict: False
|
||||||
|
capabilities:
|
||||||
|
- ipv4 labeled-Unicast
|
||||||
|
- ipv4MplsVpn
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: default
|
||||||
|
strict: False
|
||||||
|
capabilities:
|
||||||
|
- ipv4 labeled-Unicast
|
||||||
|
- ipv4MplsVpn
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -598,7 +649,7 @@ class VerifyBGPPeerMPCaps(AntaTest):
|
||||||
"""Input model for the VerifyBGPPeerMPCaps test."""
|
"""Input model for the VerifyBGPPeerMPCaps test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@field_validator("bgp_peers")
|
@field_validator("bgp_peers")
|
||||||
|
@ -619,16 +670,15 @@ class VerifyBGPPeerMPCaps(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
# Fetching the multiprotocol capabilities
|
# Check if the multiprotocol capabilities are found
|
||||||
act_mp_caps = get_value(peer_data, "neighborCapabilities.multiprotocolCaps")
|
if (act_mp_caps := get_value(peer_data, "neighborCapabilities.multiprotocolCaps")) is None:
|
||||||
|
self.result.is_failure(f"{peer} - Multiprotocol capabilities not found")
|
||||||
|
continue
|
||||||
|
|
||||||
# If strict is True, check if only the specified capabilities are configured
|
# If strict is True, check if only the specified capabilities are configured
|
||||||
if peer.strict and sorted(peer.capabilities) != sorted(act_mp_caps):
|
if peer.strict and sorted(peer.capabilities) != sorted(act_mp_caps):
|
||||||
|
@ -647,7 +697,7 @@ class VerifyBGPPeerMPCaps(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerASNCap(AntaTest):
|
class VerifyBGPPeerASNCap(AntaTest):
|
||||||
"""Verifies the four octet ASN capability of BGP IPv4 peer(s).
|
"""Verifies the four octet ASN capability of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -675,6 +725,11 @@ class VerifyBGPPeerASNCap(AntaTest):
|
||||||
bgp_peers:
|
bgp_peers:
|
||||||
- peer_address: 172.30.11.1
|
- peer_address: 172.30.11.1
|
||||||
vrf: default
|
vrf: default
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: MGMT
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -685,7 +740,7 @@ class VerifyBGPPeerASNCap(AntaTest):
|
||||||
"""Input model for the VerifyBGPPeerASNCap test."""
|
"""Input model for the VerifyBGPPeerASNCap test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@AntaTest.anta_test
|
@AntaTest.anta_test
|
||||||
|
@ -696,11 +751,8 @@ class VerifyBGPPeerASNCap(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -715,7 +767,7 @@ class VerifyBGPPeerASNCap(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerRouteRefreshCap(AntaTest):
|
class VerifyBGPPeerRouteRefreshCap(AntaTest):
|
||||||
"""Verifies the route refresh capabilities of IPv4 BGP peer(s) in a specified VRF.
|
"""Verifies the route refresh capabilities of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -743,6 +795,11 @@ class VerifyBGPPeerRouteRefreshCap(AntaTest):
|
||||||
bgp_peers:
|
bgp_peers:
|
||||||
- peer_address: 172.30.11.1
|
- peer_address: 172.30.11.1
|
||||||
vrf: default
|
vrf: default
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: MGMT
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -753,7 +810,7 @@ class VerifyBGPPeerRouteRefreshCap(AntaTest):
|
||||||
"""Input model for the VerifyBGPPeerRouteRefreshCap test."""
|
"""Input model for the VerifyBGPPeerRouteRefreshCap test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@AntaTest.anta_test
|
@AntaTest.anta_test
|
||||||
|
@ -764,11 +821,8 @@ class VerifyBGPPeerRouteRefreshCap(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -783,7 +837,7 @@ class VerifyBGPPeerRouteRefreshCap(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerMD5Auth(AntaTest):
|
class VerifyBGPPeerMD5Auth(AntaTest):
|
||||||
"""Verifies the MD5 authentication and state of IPv4 BGP peer(s) in a specified VRF.
|
"""Verifies the MD5 authentication and state of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -813,6 +867,11 @@ class VerifyBGPPeerMD5Auth(AntaTest):
|
||||||
vrf: default
|
vrf: default
|
||||||
- peer_address: 172.30.11.5
|
- peer_address: 172.30.11.5
|
||||||
vrf: default
|
vrf: default
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: default
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -823,7 +882,7 @@ class VerifyBGPPeerMD5Auth(AntaTest):
|
||||||
"""Input model for the VerifyBGPPeerMD5Auth test."""
|
"""Input model for the VerifyBGPPeerMD5Auth test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of IPv4 BGP peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@AntaTest.anta_test
|
@AntaTest.anta_test
|
||||||
|
@ -834,11 +893,8 @@ class VerifyBGPPeerMD5Auth(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -921,24 +977,21 @@ class VerifyEVPNType2Route(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPAdvCommunities(AntaTest):
|
class VerifyBGPAdvCommunities(AntaTest):
|
||||||
"""Verifies that advertised communities are standard, extended and large for BGP IPv4 peer(s).
|
"""Verifies the advertised communities of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
1. Verifies that the peer is found in its VRF in the BGP configuration.
|
1. Verifies that the peer is found in its VRF in the BGP configuration.
|
||||||
2. Validates that all required community types are advertised:
|
2. Validates that given community types are advertised. If not provided, validates that all communities (standard, extended, large) are advertised.
|
||||||
- Standard communities
|
|
||||||
- Extended communities
|
|
||||||
- Large communities
|
|
||||||
|
|
||||||
Expected Results
|
Expected Results
|
||||||
----------------
|
----------------
|
||||||
* Success: If all of the following conditions are met:
|
* Success: If all of the following conditions are met:
|
||||||
- All specified peers are found in the BGP configuration.
|
- All specified peers are found in the BGP configuration.
|
||||||
- Each peer advertises standard, extended and large communities.
|
- Each peer advertises the given community types.
|
||||||
* Failure: If any of the following occur:
|
* Failure: If any of the following occur:
|
||||||
- A specified peer is not found in the BGP configuration.
|
- A specified peer is not found in the BGP configuration.
|
||||||
- A peer does not advertise standard, extended or large communities.
|
- A peer does not advertise any of the given community types.
|
||||||
|
|
||||||
Examples
|
Examples
|
||||||
--------
|
--------
|
||||||
|
@ -950,7 +1003,14 @@ class VerifyBGPAdvCommunities(AntaTest):
|
||||||
- peer_address: 172.30.11.17
|
- peer_address: 172.30.11.17
|
||||||
vrf: default
|
vrf: default
|
||||||
- peer_address: 172.30.11.21
|
- peer_address: 172.30.11.21
|
||||||
|
vrf: MGMT
|
||||||
|
advertised_communities: ["standard", "extended"]
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
vrf: default
|
vrf: default
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: default
|
||||||
|
advertised_communities: ["standard", "extended"]
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -961,7 +1021,7 @@ class VerifyBGPAdvCommunities(AntaTest):
|
||||||
"""Input model for the VerifyBGPAdvCommunities test."""
|
"""Input model for the VerifyBGPAdvCommunities test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@AntaTest.anta_test
|
@AntaTest.anta_test
|
||||||
|
@ -972,21 +1032,18 @@ class VerifyBGPAdvCommunities(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
# Check BGP peer advertised communities
|
# Check BGP peer advertised communities
|
||||||
if not all(get_value(peer_data, f"advertisedCommunities.{community}") is True for community in ["standard", "extended", "large"]):
|
if not all(get_value(peer_data, f"advertisedCommunities.{community}") is True for community in peer.advertised_communities):
|
||||||
self.result.is_failure(f"{peer} - {format_data(peer_data['advertisedCommunities'])}")
|
self.result.is_failure(f"{peer} - {format_data(peer_data['advertisedCommunities'])}")
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPTimers(AntaTest):
|
class VerifyBGPTimers(AntaTest):
|
||||||
"""Verifies the timers of BGP IPv4 peer(s).
|
"""Verifies the timers of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -1017,6 +1074,15 @@ class VerifyBGPTimers(AntaTest):
|
||||||
vrf: default
|
vrf: default
|
||||||
hold_time: 180
|
hold_time: 180
|
||||||
keep_alive_time: 60
|
keep_alive_time: 60
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
hold_time: 180
|
||||||
|
keep_alive_time: 60
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: MGMT
|
||||||
|
hold_time: 180
|
||||||
|
keep_alive_time: 60
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -1027,7 +1093,7 @@ class VerifyBGPTimers(AntaTest):
|
||||||
"""Input model for the VerifyBGPTimers test."""
|
"""Input model for the VerifyBGPTimers test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@field_validator("bgp_peers")
|
@field_validator("bgp_peers")
|
||||||
|
@ -1048,11 +1114,8 @@ class VerifyBGPTimers(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -1064,7 +1127,7 @@ class VerifyBGPTimers(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerDropStats(AntaTest):
|
class VerifyBGPPeerDropStats(AntaTest):
|
||||||
"""Verifies BGP NLRI drop statistics for the provided BGP IPv4 peer(s).
|
"""Verifies BGP NLRI drop statistics of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -1096,6 +1159,17 @@ class VerifyBGPPeerDropStats(AntaTest):
|
||||||
drop_stats:
|
drop_stats:
|
||||||
- inDropAsloop
|
- inDropAsloop
|
||||||
- prefixEvpnDroppedUnsupportedRouteType
|
- prefixEvpnDroppedUnsupportedRouteType
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
drop_stats:
|
||||||
|
- inDropAsloop
|
||||||
|
- prefixEvpnDroppedUnsupportedRouteType
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: MGMT
|
||||||
|
drop_stats:
|
||||||
|
- inDropAsloop
|
||||||
|
- prefixEvpnDroppedUnsupportedRouteType
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -1106,7 +1180,7 @@ class VerifyBGPPeerDropStats(AntaTest):
|
||||||
"""Input model for the VerifyBGPPeerDropStats test."""
|
"""Input model for the VerifyBGPPeerDropStats test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@AntaTest.anta_test
|
@AntaTest.anta_test
|
||||||
|
@ -1117,12 +1191,9 @@ class VerifyBGPPeerDropStats(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
drop_stats_input = peer.drop_stats
|
drop_stats_input = peer.drop_stats
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -1140,7 +1211,7 @@ class VerifyBGPPeerDropStats(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerUpdateErrors(AntaTest):
|
class VerifyBGPPeerUpdateErrors(AntaTest):
|
||||||
"""Verifies BGP update error counters for the provided BGP IPv4 peer(s).
|
"""Verifies BGP update error counters of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -1173,6 +1244,15 @@ class VerifyBGPPeerUpdateErrors(AntaTest):
|
||||||
vrf: default
|
vrf: default
|
||||||
update_errors:
|
update_errors:
|
||||||
- inUpdErrWithdraw
|
- inUpdErrWithdraw
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
update_errors:
|
||||||
|
- inUpdErrWithdraw
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: MGMT
|
||||||
|
update_errors:
|
||||||
|
- inUpdErrWithdraw
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -1183,7 +1263,7 @@ class VerifyBGPPeerUpdateErrors(AntaTest):
|
||||||
"""Input model for the VerifyBGPPeerUpdateErrors test."""
|
"""Input model for the VerifyBGPPeerUpdateErrors test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@AntaTest.anta_test
|
@AntaTest.anta_test
|
||||||
|
@ -1194,12 +1274,9 @@ class VerifyBGPPeerUpdateErrors(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
update_errors_input = peer.update_errors
|
update_errors_input = peer.update_errors
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -1217,7 +1294,7 @@ class VerifyBGPPeerUpdateErrors(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBgpRouteMaps(AntaTest):
|
class VerifyBgpRouteMaps(AntaTest):
|
||||||
"""Verifies BGP inbound and outbound route-maps of BGP IPv4 peer(s).
|
"""Verifies BGP inbound and outbound route-maps of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -1244,6 +1321,15 @@ class VerifyBgpRouteMaps(AntaTest):
|
||||||
vrf: default
|
vrf: default
|
||||||
inbound_route_map: RM-MLAG-PEER-IN
|
inbound_route_map: RM-MLAG-PEER-IN
|
||||||
outbound_route_map: RM-MLAG-PEER-OUT
|
outbound_route_map: RM-MLAG-PEER-OUT
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
inbound_route_map: RM-MLAG-PEER-IN
|
||||||
|
outbound_route_map: RM-MLAG-PEER-OUT
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: MGMT
|
||||||
|
inbound_route_map: RM-MLAG-PEER-IN
|
||||||
|
outbound_route_map: RM-MLAG-PEER-OUT
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -1254,7 +1340,7 @@ class VerifyBgpRouteMaps(AntaTest):
|
||||||
"""Input model for the VerifyBgpRouteMaps test."""
|
"""Input model for the VerifyBgpRouteMaps test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@field_validator("bgp_peers")
|
@field_validator("bgp_peers")
|
||||||
|
@ -1275,13 +1361,11 @@ class VerifyBgpRouteMaps(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
inbound_route_map = peer.inbound_route_map
|
inbound_route_map = peer.inbound_route_map
|
||||||
outbound_route_map = peer.outbound_route_map
|
outbound_route_map = peer.outbound_route_map
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -1295,7 +1379,7 @@ class VerifyBgpRouteMaps(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerRouteLimit(AntaTest):
|
class VerifyBGPPeerRouteLimit(AntaTest):
|
||||||
"""Verifies maximum routes and warning limit for BGP IPv4 peer(s).
|
"""Verifies maximum routes and warning limit for BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -1322,6 +1406,15 @@ class VerifyBGPPeerRouteLimit(AntaTest):
|
||||||
vrf: default
|
vrf: default
|
||||||
maximum_routes: 12000
|
maximum_routes: 12000
|
||||||
warning_limit: 10000
|
warning_limit: 10000
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
maximum_routes: 12000
|
||||||
|
warning_limit: 10000
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: MGMT
|
||||||
|
maximum_routes: 12000
|
||||||
|
warning_limit: 10000
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -1332,7 +1425,7 @@ class VerifyBGPPeerRouteLimit(AntaTest):
|
||||||
"""Input model for the VerifyBGPPeerRouteLimit test."""
|
"""Input model for the VerifyBGPPeerRouteLimit test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
BgpPeer: ClassVar[type[BgpPeer]] = BgpPeer
|
||||||
|
|
||||||
@field_validator("bgp_peers")
|
@field_validator("bgp_peers")
|
||||||
|
@ -1353,13 +1446,11 @@ class VerifyBGPPeerRouteLimit(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
maximum_routes = peer.maximum_routes
|
maximum_routes = peer.maximum_routes
|
||||||
warning_limit = peer.warning_limit
|
warning_limit = peer.warning_limit
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -1373,7 +1464,7 @@ class VerifyBGPPeerRouteLimit(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerGroup(AntaTest):
|
class VerifyBGPPeerGroup(AntaTest):
|
||||||
"""Verifies BGP peer group of BGP IPv4 peer(s).
|
"""Verifies BGP peer group of BGP peers.
|
||||||
|
|
||||||
This test performs the following checks for each specified peer:
|
This test performs the following checks for each specified peer:
|
||||||
|
|
||||||
|
@ -1399,6 +1490,13 @@ class VerifyBGPPeerGroup(AntaTest):
|
||||||
- peer_address: 172.30.11.1
|
- peer_address: 172.30.11.1
|
||||||
vrf: default
|
vrf: default
|
||||||
peer_group: IPv4-UNDERLAY-PEERS
|
peer_group: IPv4-UNDERLAY-PEERS
|
||||||
|
- peer_address: fd00:dc:1::1
|
||||||
|
vrf: default
|
||||||
|
peer_group: IPv4-UNDERLAY-PEERS
|
||||||
|
# RFC5549
|
||||||
|
- interface: Ethernet1
|
||||||
|
vrf: MGMT
|
||||||
|
peer_group: IPv4-UNDERLAY-PEERS
|
||||||
```
|
```
|
||||||
"""
|
"""
|
||||||
|
|
||||||
|
@ -1409,7 +1507,7 @@ class VerifyBGPPeerGroup(AntaTest):
|
||||||
"""Input model for the VerifyBGPPeerGroup test."""
|
"""Input model for the VerifyBGPPeerGroup test."""
|
||||||
|
|
||||||
bgp_peers: list[BgpPeer]
|
bgp_peers: list[BgpPeer]
|
||||||
"""List of BGP IPv4 peers."""
|
"""List of BGP peers."""
|
||||||
|
|
||||||
@field_validator("bgp_peers")
|
@field_validator("bgp_peers")
|
||||||
@classmethod
|
@classmethod
|
||||||
|
@ -1429,11 +1527,8 @@ class VerifyBGPPeerGroup(AntaTest):
|
||||||
output = self.instance_commands[0].json_output
|
output = self.instance_commands[0].json_output
|
||||||
|
|
||||||
for peer in self.inputs.bgp_peers:
|
for peer in self.inputs.bgp_peers:
|
||||||
peer_ip = str(peer.peer_address)
|
|
||||||
peer_list = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
|
|
||||||
|
|
||||||
# Check if the peer is found
|
# Check if the peer is found
|
||||||
if (peer_data := get_item(peer_list, "peerAddress", peer_ip)) is None:
|
if (peer_data := _get_bgp_peer_data(peer, output)) is None:
|
||||||
self.result.is_failure(f"{peer} - Not found")
|
self.result.is_failure(f"{peer} - Not found")
|
||||||
continue
|
continue
|
||||||
|
|
||||||
|
@ -1442,7 +1537,7 @@ class VerifyBGPPeerGroup(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyBGPPeerSessionRibd(AntaTest):
|
class VerifyBGPPeerSessionRibd(AntaTest):
|
||||||
"""Verifies the session state of BGP IPv4 peer(s).
|
"""Verifies the session state of BGP peers.
|
||||||
|
|
||||||
Compatible with EOS operating in `ribd` routing protocol model.
|
Compatible with EOS operating in `ribd` routing protocol model.
|
||||||
|
|
||||||
|
@@ -1451,7 +1546,7 @@ class VerifyBGPPeerSessionRibd(AntaTest):
     1. Verifies that the peer is found in its VRF in the BGP configuration.
     2. Verifies that the BGP session is `Established` and, if specified, has remained established for at least the duration given by `minimum_established_time`.
     3. Ensures that both input and output TCP message queues are empty.
-       Can be disabled by setting `check_tcp_queues` global flag to `False`.
+       Can be disabled by setting `check_tcp_queues` input flag to `False`.

     Expected Results
     ----------------
@@ -1477,12 +1572,13 @@ class VerifyBGPPeerSessionRibd(AntaTest):
             bgp_peers:
               - peer_address: 10.1.0.1
                 vrf: default
-              - peer_address: 10.1.0.2
-                vrf: default
-              - peer_address: 10.1.255.2
-                vrf: DEV
               - peer_address: 10.1.255.4
                 vrf: DEV
+              - peer_address: fd00:dc:1::1
+                vrf: default
+              # RFC5549
+              - interface: Ethernet1
+                vrf: MGMT
     ```
     """

@@ -1497,7 +1593,7 @@ class VerifyBGPPeerSessionRibd(AntaTest):
         check_tcp_queues: bool = True
         """Flag to check if the TCP session queues are empty for all BGP peers. Defaults to `True`."""
         bgp_peers: list[BgpPeer]
-        """List of BGP IPv4 peers."""
+        """List of BGP peers."""

     @AntaTest.anta_test
     def test(self) -> None:
@@ -1507,11 +1603,8 @@ class VerifyBGPPeerSessionRibd(AntaTest):
         output = self.instance_commands[0].json_output

         for peer in self.inputs.bgp_peers:
-            peer_address = str(peer.peer_address)
-            peers = get_value(output, f"vrfs.{peer.vrf}.peerList", default=[])
-
             # Check if the peer is found
-            if (peer_data := get_item(peers, "peerAddress", peer_address)) is None:
+            if (peer_data := _get_bgp_peer_data(peer, output)) is None:
                 self.result.is_failure(f"{peer} - Not found")
                 continue

@@ -1534,7 +1627,7 @@ class VerifyBGPPeerSessionRibd(AntaTest):


 class VerifyBGPPeersHealthRibd(AntaTest):
-    """Verifies the health of all the BGP IPv4 peer(s).
+    """Verifies the health of all the BGP peers.

     Compatible with EOS operating in `ribd` routing protocol model.

@@ -1542,7 +1635,7 @@ class VerifyBGPPeersHealthRibd(AntaTest):

     1. Verifies that the BGP session is in the `Established` state.
     2. Checks that both input and output TCP message queues are empty.
-       Can be disabled by setting `check_tcp_queues` global flag to `False`.
+       Can be disabled by setting `check_tcp_queues` input flag to `False`.

     Expected Results
     ----------------
@@ -1594,7 +1687,7 @@ class VerifyBGPPeersHealthRibd(AntaTest):


 class VerifyBGPNlriAcceptance(AntaTest):
-    """Verifies that all received NLRI are accepted for all AFI/SAFI configured for BGP IPv4 peer(s).
+    """Verifies that all received NLRI are accepted for all AFI/SAFI configured for BGP peers.

     This test performs the following checks for each specified peer:

@@ -1619,11 +1712,27 @@ class VerifyBGPNlriAcceptance(AntaTest):
                 vrf: default
                 capabilities:
                   - ipv4Unicast
+              - peer_address: 2001:db8:1::2
+                vrf: default
+                capabilities:
+                  - ipv6Unicast
+              - peer_address: fe80::2%Et1
+                vrf: default
+                capabilities:
+                  - ipv6Unicast
+              # RFC 5549
+              - peer_address: fe80::2%Et1
+                vrf: default
+                capabilities:
+                  - ipv6Unicast
     ```
     """

     categories: ClassVar[list[str]] = ["bgp"]
-    commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show bgp summary vrf all", revision=1)]
+    commands: ClassVar[list[AntaCommand | AntaTemplate]] = [
+        AntaCommand(command="show bgp summary vrf all", revision=1),
+        AntaCommand(command="show bgp neighbors vrf all", revision=3),
+    ]

     class Input(AntaTest.Input):
         """Input model for the VerifyBGPNlriAcceptance test."""
@@ -1641,16 +1750,50 @@ class VerifyBGPNlriAcceptance(AntaTest):
                 raise ValueError(msg)
             return bgp_peers

+    @staticmethod
+    def _get_peer_address(peer: BgpPeer, command_output: dict[str, Any]) -> str | None:
+        """Retrieve the peer address for the given BGP peer data.
+
+        If an interface is specified, the address is extracted from the command output;
+        otherwise, it is retrieved directly from the peer object.
+
+        Parameters
+        ----------
+        peer
+            The BGP peer object to look up.
+        command_output
+            Parsed output from the relevant command.
+
+        Returns
+        -------
+        str | None
+            The peer address if found, otherwise None.
+        """
+        if peer.interface is not None:
+            # RFC5549
+            interface = str(peer.interface)
+            lookup_key = "ifName"
+
+            peer_list = get_value(command_output, f"vrfs.{peer.vrf}.peerList", default=[])
+            # Check if the peer is found
+            if (peer_details := get_item(peer_list, lookup_key, interface)) is not None:
+                return str(peer_details.get("peerAddress"))
+            return None
+
+        return str(peer.peer_address)
+
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyBGPNlriAcceptance."""
         self.result.is_success()

         output = self.instance_commands[0].json_output
+        peer_output = self.instance_commands[1].json_output

         for peer in self.inputs.bgp_peers:
+            identity = self._get_peer_address(peer, peer_output)
             # Check if the peer is found
-            if not (peer_data := get_value(output, f"vrfs..{peer.vrf}..peers..{peer.peer_address}", separator="..")):
+            if not (peer_data := get_value(output, f"vrfs..{peer.vrf}..peers..{identity}", separator="..")):
                 self.result.is_failure(f"{peer} - Not found")
                 continue

@@ -1931,7 +2074,7 @@ class VerifyBGPRedistribution(AntaTest):


 class VerifyBGPPeerTtlMultiHops(AntaTest):
-    """Verifies BGP TTL and max-ttl-hops count for BGP IPv4 peer(s).
+    """Verifies BGP TTL and max-ttl-hops count for BGP peers.

     This test performs the following checks for each specified BGP peer:

@@ -1960,6 +2103,15 @@ class VerifyBGPPeerTtlMultiHops(AntaTest):
                 vrf: test
                 ttl: 30
                 max_ttl_hops: 30
+              - peer_address: fd00:dc:1::1
+                vrf: default
+                ttl: 30
+                max_ttl_hops: 30
+              # RFC5549
+              - interface: Ethernet1
+                vrf: MGMT
+                ttl: 30
+                max_ttl_hops: 30
     ```
     """

@@ -1970,7 +2122,7 @@ class VerifyBGPPeerTtlMultiHops(AntaTest):
         """Input model for the VerifyBGPPeerTtlMultiHops test."""

         bgp_peers: list[BgpPeer]
-        """List of IPv4 peer(s)."""
+        """List of peer(s)."""

         @field_validator("bgp_peers")
         @classmethod
@@ -1993,11 +2145,8 @@ class VerifyBGPPeerTtlMultiHops(AntaTest):
         command_output = self.instance_commands[0].json_output

         for peer in self.inputs.bgp_peers:
-            peer_ip = str(peer.peer_address)
-            peer_list = get_value(command_output, f"vrfs.{peer.vrf}.peerList", default=[])
-
             # Check if the peer is found
-            if (peer_details := get_item(peer_list, "peerAddress", peer_ip)) is None:
+            if (peer_details := _get_bgp_peer_data(peer, command_output)) is None:
                 self.result.is_failure(f"{peer} - Not found")
                 continue

@@ -9,7 +9,7 @@ from __future__ import annotations

 from functools import cache
 from ipaddress import IPv4Address, IPv4Interface
-from typing import TYPE_CHECKING, ClassVar, Literal
+from typing import TYPE_CHECKING, Any, ClassVar, Literal

 from pydantic import field_validator, model_validator

@@ -349,3 +349,58 @@ class VerifyIPv4RouteNextHops(AntaTest):
             for nexthop in entry.nexthops:
                 if not get_item(route_data["vias"], "nexthopAddr", str(nexthop)):
                     self.result.is_failure(f"{entry} Nexthop: {nexthop} - Route not found")
+
+
+class VerifyRoutingStatus(AntaTest):
+    """Verifies the routing status for IPv4/IPv6 unicast, multicast, and IPv6 interfaces (RFC5549).
+
+    Expected Results
+    ----------------
+    * Success: The test will pass if the routing status is correct.
+    * Failure: The test will fail if the routing status doesn't match the expected configuration.
+
+    Examples
+    --------
+    ```yaml
+    anta.tests.routing:
+      generic:
+        - VerifyRoutingStatus:
+            ipv4_unicast: True
+            ipv6_unicast: True
+    ```
+    """
+
+    categories: ClassVar[list[str]] = ["routing"]
+    commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show ip", revision=1)]
+
+    class Input(AntaTest.Input):
+        """Input model for the VerifyRoutingStatus test."""
+
+        ipv4_unicast: bool = False
+        """IPv4 unicast routing status."""
+        ipv6_unicast: bool = False
+        """IPv6 unicast routing status."""
+        ipv4_multicast: bool = False
+        """IPv4 multicast routing status."""
+        ipv6_multicast: bool = False
+        """IPv6 multicast routing status."""
+        ipv6_interfaces: bool = False
+        """IPv6 interface forwarding status."""
+
+    @AntaTest.anta_test
+    def test(self) -> None:
+        """Main test function for VerifyRoutingStatus."""
+        self.result.is_success()
+        command_output = self.instance_commands[0].json_output
+        actual_routing_status: dict[str, Any] = {
+            "ipv4_unicast": command_output["v4RoutingEnabled"],
+            "ipv6_unicast": command_output["v6RoutingEnabled"],
+            "ipv4_multicast": command_output["multicastRouting"]["ipMulticastEnabled"],
+            "ipv6_multicast": command_output["multicastRouting"]["ip6MulticastEnabled"],
+            "ipv6_interfaces": command_output.get("v6IntfForwarding", False),
+        }
+
+        for input_key, value in self.inputs:
+            if input_key in actual_routing_status and value != actual_routing_status[input_key]:
+                route_type = " ".join([{"ipv4": "IPv4", "ipv6": "IPv6"}.get(part, part) for part in input_key.split("_")])
+                self.result.is_failure(f"{route_type} routing enabled status mismatch - Expected: {value} Actual: {actual_routing_status[input_key]}")
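The failure message in the new `VerifyRoutingStatus` builds a human-readable label from each input field name; the expression can be exercised on its own (same logic as the `route_type` line in the hunk above):

```python
def route_type_label(input_key: str) -> str:
    # "ipv4_unicast" -> "IPv4 unicast"; parts without a mapping pass through unchanged.
    return " ".join([{"ipv4": "IPv4", "ipv6": "IPv6"}.get(part, part) for part in input_key.split("_")])


print(route_type_label("ipv4_unicast"))     # IPv4 unicast
print(route_type_label("ipv6_interfaces"))  # IPv6 interfaces
```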
@@ -442,3 +442,78 @@ class VerifyISISSegmentRoutingTunnels(AntaTest):
                 and (via_input.interface is None or via_input.interface == eos_via.get("interface"))
                 and (via_input.tunnel_id is None or via_input.tunnel_id.upper() == get_value(eos_via, "tunnelId.type", default="").upper())
             )
+
+
+class VerifyISISGracefulRestart(AntaTest):
+    """Verifies the IS-IS graceful restart feature.
+
+    This test performs the following checks for each IS-IS instance:
+
+    1. Verifies that the specified IS-IS instance is configured on the device.
+    2. Verifies the statuses of the graceful restart and graceful restart helper functionalities.
+
+    Expected Results
+    ----------------
+    * Success: The test will pass if all of the following conditions are met:
+        - The specified IS-IS instance is configured on the device.
+        - Expected and actual IS-IS graceful restart and graceful restart helper values match.
+    * Failure: The test will fail if any of the following conditions is met:
+        - The specified IS-IS instance is not configured on the device.
+        - Expected and actual IS-IS graceful restart and graceful restart helper values do not match.
+    * Skipped: The test will skip if IS-IS is not configured on the device.
+
+    Examples
+    --------
+    ```yaml
+    anta.tests.routing:
+      isis:
+        - VerifyISISGracefulRestart:
+            instances:
+              - name: '1'
+                vrf: default
+                graceful_restart: True
+                graceful_restart_helper: False
+              - name: '2'
+                vrf: default
+              - name: '11'
+                vrf: test
+                graceful_restart: True
+              - name: '12'
+                vrf: test
+                graceful_restart_helper: False
+    ```
+    """
+
+    categories: ClassVar[list[str]] = ["isis"]
+    commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show isis graceful-restart vrf all", revision=1)]
+
+    class Input(AntaTest.Input):
+        """Input model for the VerifyISISGracefulRestart test."""
+
+        instances: list[ISISInstance]
+        """List of IS-IS instance entries."""
+
+    @AntaTest.anta_test
+    def test(self) -> None:
+        """Main test function for VerifyISISGracefulRestart."""
+        self.result.is_success()
+
+        # Verify if IS-IS is configured
+        if not (command_output := self.instance_commands[0].json_output["vrfs"]):
+            self.result.is_skipped("IS-IS not configured")
+            return
+
+        # If IS-IS instance is not found or GR and GR helpers are not matching with the expected values, test fails.
+        for instance in self.inputs.instances:
+            graceful_restart = "enabled" if instance.graceful_restart else "disabled"
+            graceful_restart_helper = "enabled" if instance.graceful_restart_helper else "disabled"
+
+            if (instance_details := get_value(command_output, f"{instance.vrf}..isisInstances..{instance.name}", separator="..")) is None:
+                self.result.is_failure(f"{instance} - Not configured")
+                continue
+
+            if (act_state := instance_details.get("gracefulRestart")) != graceful_restart:
+                self.result.is_failure(f"{instance} - Incorrect graceful restart state - Expected: {graceful_restart} Actual: {act_state}")
+
+            if (act_helper_state := instance_details.get("gracefulRestartHelper")) != graceful_restart_helper:
+                self.result.is_failure(f"{instance} - Incorrect graceful restart helper state - Expected: {graceful_restart_helper} Actual: {act_helper_state}")
@@ -320,7 +320,7 @@ class VerifySnmpErrorCounters(AntaTest):

         # Verify SNMP PDU counters.
         if not (snmp_counters := get_value(command_output, "counters")):
-            self.result.is_failure("SNMP counters not found.")
+            self.result.is_failure("SNMP counters not found")
             return

         # In case SNMP error counters not provided, It will check all the error counters.
@@ -11,7 +11,7 @@ from typing import ClassVar, Literal

 from pydantic import Field

-from anta.custom_types import Vlan
+from anta.custom_types import VlanId
 from anta.models import AntaCommand, AntaTemplate, AntaTest
 from anta.tools import get_value

@@ -44,7 +44,7 @@ class VerifySTPMode(AntaTest):

         mode: Literal["mstp", "rstp", "rapidPvst"] = "mstp"
         """STP mode to verify. Supported values: mstp, rstp, rapidPvst. Defaults to mstp."""
-        vlans: list[Vlan]
+        vlans: list[VlanId]
         """List of VLAN on which to verify STP mode."""

     def render(self, template: AntaTemplate) -> list[AntaCommand]:
@@ -157,7 +157,7 @@ class VerifySTPForwardingPorts(AntaTest):
     class Input(AntaTest.Input):
         """Input model for the VerifySTPForwardingPorts test."""

-        vlans: list[Vlan]
+        vlans: list[VlanId]
         """List of VLAN on which to verify forwarding states."""

     def render(self, template: AntaTemplate) -> list[AntaCommand]:
@@ -213,7 +213,7 @@ class VerifySTPRootPriority(AntaTest):

         priority: int
         """STP root priority to verify."""
-        instances: list[Vlan] = Field(default=[])
+        instances: list[VlanId] = Field(default=[])
         """List of VLAN or MST instance ID(s). If empty, ALL VLAN or MST instance ID(s) will be verified."""

     @AntaTest.anta_test
@@ -299,9 +299,9 @@ class VerifySTPDisabledVlans(AntaTest):

     This test performs the following checks:

     1. Verifies that the STP is configured.
     2. Verifies that the specified VLAN(s) exist on the device.
     3. Verifies that the STP is disabled for the specified VLAN(s).

     Expected Results
     ----------------
@@ -331,7 +331,7 @@ class VerifySTPDisabledVlans(AntaTest):
     class Input(AntaTest.Input):
         """Input model for the VerifySTPDisabledVlans test."""

-        vlans: list[Vlan]
+        vlans: list[VlanId]
         """List of STP disabled VLAN(s)."""

     @AntaTest.anta_test
@@ -10,9 +10,9 @@ from __future__ import annotations

 import re
 from typing import TYPE_CHECKING, Any, ClassVar

-from pydantic import model_validator
+from pydantic import Field, model_validator

-from anta.custom_types import Hostname, PositiveInteger
+from anta.custom_types import Hostname, PositiveInteger, ReloadCause
 from anta.input_models.system import NTPPool, NTPServer
 from anta.models import AntaCommand, AntaTest
 from anta.tools import get_value
@@ -73,8 +73,8 @@ class VerifyReloadCause(AntaTest):

     Expected Results
     ----------------
-    * Success: The test will pass if there are NO reload causes or if the last reload was caused by the user or after an FPGA upgrade.
-    * Failure: The test will fail if the last reload was NOT caused by the user or after an FPGA upgrade.
+    * Success: The test passes if there is no reload cause, or if the last reload cause was one of the provided inputs.
+    * Failure: The test will fail if the last reload cause was NOT one of the provided inputs.
     * Error: The test will report an error if the reload cause is NOT available.

     Examples
@@ -82,12 +82,22 @@ class VerifyReloadCause(AntaTest):
     ```yaml
     anta.tests.system:
       - VerifyReloadCause:
+          allowed_causes:
+            - USER
+            - FPGA
+            - ZTP
     ```
     """

     categories: ClassVar[list[str]] = ["system"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show reload cause", revision=1)]

+    class Input(AntaTest.Input):
+        """Input model for the VerifyReloadCause test."""
+
+        allowed_causes: list[ReloadCause] = Field(default=["USER", "FPGA"], validate_default=True)
+        """A list of allowed system reload causes."""
+
     @AntaTest.anta_test
     def test(self) -> None:
         """Main test function for VerifyReloadCause."""
@@ -96,15 +106,14 @@ class VerifyReloadCause(AntaTest):
             # No reload causes
             self.result.is_success()
             return

         reset_causes = command_output["resetCauses"]
         command_output_data = reset_causes[0].get("description")
-        if command_output_data in [
-            "Reload requested by the user.",
-            "Reload requested after FPGA upgrade",
-        ]:
+        if command_output_data in self.inputs.allowed_causes:
             self.result.is_success()
         else:
-            self.result.is_failure(f"Reload cause is: {command_output_data}")
+            causes = ", ".join(f"'{c}'" for c in self.inputs.allowed_causes)
+            self.result.is_failure(f"Invalid reload cause - Expected: {causes} Actual: '{command_output_data}'")


 class VerifyCoredump(AntaTest):
@@ -451,7 +460,7 @@ class VerifyMaintenance(AntaTest):
     ```
     """

-    categories: ClassVar[list[str]] = ["Maintenance"]
+    categories: ClassVar[list[str]] = ["system"]
     commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show maintenance", revision=1)]

     @AntaTest.anta_test
@@ -476,8 +485,8 @@ class VerifyMaintenance(AntaTest):

         # Building the error message.
         if units_under_maintenance:
-            self.result.is_failure(f"Units under maintenance: '{', '.join(units_under_maintenance)}'.")
+            self.result.is_failure(f"Units under maintenance: '{', '.join(units_under_maintenance)}'")
         if units_entering_maintenance:
-            self.result.is_failure(f"Units entering maintenance: '{', '.join(units_entering_maintenance)}'.")
+            self.result.is_failure(f"Units entering maintenance: '{', '.join(units_entering_maintenance)}'")
         if causes:
-            self.result.is_failure(f"Possible causes: '{', '.join(sorted(causes))}'.")
+            self.result.is_failure(f"Possible causes: '{', '.join(sorted(causes))}'")
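The reworked `VerifyReloadCause` replaces the old hard-coded pair of acceptable descriptions with an `allowed_causes` input and reports mismatches with an Expected/Actual message. The branching, reduced to a plain function (a sketch only; in the real test the `ReloadCause` input type constrains the allowed values, and the comparison runs against the device's reported cause):

```python
def check_reload_cause(description: str, allowed_causes: list[str]) -> str:
    """Mirror the success/failure branching of the updated test."""
    if description in allowed_causes:
        return "success"
    causes = ", ".join(f"'{c}'" for c in allowed_causes)
    return f"Invalid reload cause - Expected: {causes} Actual: '{description}'"


print(check_reload_cause("USER", ["USER", "FPGA"]))  # success
print(check_reload_cause("ZTP", ["USER", "FPGA"]))
# Invalid reload cause - Expected: 'USER', 'FPGA' Actual: 'ZTP'
```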
@@ -9,7 +9,8 @@ from __future__ import annotations

 from typing import TYPE_CHECKING, ClassVar, Literal

-from anta.custom_types import DynamicVlanSource, Vlan
+from anta.custom_types import DynamicVlanSource, VlanId
+from anta.input_models.vlan import Vlan
 from anta.models import AntaCommand, AntaTest
 from anta.tools import get_value

@@ -47,9 +48,9 @@ class VerifyVlanInternalPolicy(AntaTest):

         policy: Literal["ascending", "descending"]
         """The VLAN internal allocation policy. Supported values: ascending, descending."""
-        start_vlan_id: Vlan
+        start_vlan_id: VlanId
         """The starting VLAN ID in the range."""
-        end_vlan_id: Vlan
+        end_vlan_id: VlanId
         """The ending VLAN ID in the range."""

     @AntaTest.anta_test
@@ -145,3 +146,48 @@ class VerifyDynamicVlanSource(AntaTest):
         unexpected_sources = sources_with_vlans - expected_sources
         if unexpected_sources:
             self.result.is_failure(f"Strict mode enabled: Unexpected sources have VLANs allocated: {', '.join(sorted(unexpected_sources))}")
+
+
+class VerifyVlanStatus(AntaTest):
+    """Verifies the administrative status of specified VLANs.
+
+    Expected Results
+    ----------------
+    * Success: The test will pass if all specified VLANs exist in the configuration and their administrative status is correct.
+    * Failure: The test will fail if any of the specified VLANs is not found in the configuration or if its administrative status is incorrect.
+
+    Examples
+    --------
+    ```yaml
+    anta.tests.vlan:
+      - VerifyVlanStatus:
+          vlans:
+            - vlan_id: 10
+              status: suspended
+            - vlan_id: 4094
+              status: active
+    ```
+    """
+
+    categories: ClassVar[list[str]] = ["vlan"]
+    commands: ClassVar[list[AntaCommand | AntaTemplate]] = [AntaCommand(command="show vlan", revision=1)]
+
+    class Input(AntaTest.Input):
+        """Input model for the VerifyVlanStatus test."""
+
+        vlans: list[Vlan]
+        """List of VLAN details."""
+
+    @AntaTest.anta_test
+    def test(self) -> None:
+        """Main test function for VerifyVlanStatus."""
+        self.result.is_success()
+        command_output = self.instance_commands[0].json_output
+
+        for vlan in self.inputs.vlans:
+            if (vlan_detail := get_value(command_output, f"vlans.{vlan.vlan_id}")) is None:
+                self.result.is_failure(f"{vlan} - Not configured")
+                continue
+
+            if (act_status := vlan_detail["status"]) != vlan.status:
+                self.result.is_failure(f"{vlan} - Incorrect administrative status - Expected: {vlan.status} Actual: {act_status}")
@@ -12,7 +12,7 @@ from typing import TYPE_CHECKING, ClassVar

 from pydantic import Field

-from anta.custom_types import Vlan, Vni, VxlanSrcIntf
+from anta.custom_types import VlanId, Vni, VxlanSrcIntf
 from anta.models import AntaCommand, AntaTest
 from anta.tools import get_value

@@ -102,11 +102,11 @@ class VerifyVxlanConfigSanity(AntaTest):


 class VerifyVxlanVniBinding(AntaTest):
-    """Verifies the VNI-VLAN bindings of the Vxlan1 interface.
+    """Verifies the VNI-VLAN, VNI-VRF bindings of the Vxlan1 interface.

     Expected Results
     ----------------
-    * Success: The test will pass if the VNI-VLAN bindings provided are properly configured.
+    * Success: The test will pass if the VNI-VLAN and VNI-VRF bindings provided are properly configured.
     * Failure: The test will fail if any VNI lacks bindings or if any bindings are incorrect.
     * Skipped: The test will be skipped if the Vxlan1 interface is not configured.

@@ -118,6 +118,7 @@ class VerifyVxlanVniBinding(AntaTest):
           bindings:
             10010: 10
             10020: 20
+            500: PROD
     ```
     """

@@ -127,8 +128,8 @@ class VerifyVxlanVniBinding(AntaTest):
     class Input(AntaTest.Input):
         """Input model for the VerifyVxlanVniBinding test."""

-        bindings: dict[Vni, Vlan]
-        """VNI to VLAN bindings to verify."""
+        bindings: dict[Vni, VlanId | str]
+        """VNI-VLAN or VNI-VRF bindings to verify."""

     @AntaTest.anta_test
     def test(self) -> None:
@@ -136,26 +137,32 @@ class VerifyVxlanVniBinding(AntaTest):
         self.result.is_success()

         if (vxlan1 := get_value(self.instance_commands[0].json_output, "vxlanIntfs.Vxlan1")) is None:
-            self.result.is_skipped("Vxlan1 interface is not configured")
+            self.result.is_skipped("Interface: Vxlan1 - Not configured")
             return

-        for vni, vlan in self.inputs.bindings.items():
+        for vni, vlan_vrf in self.inputs.bindings.items():
             str_vni = str(vni)
             retrieved_vlan = ""
-            if str_vni in vxlan1["vniBindings"]:
+            retrieved_vrf = ""
+            if all([str_vni in vxlan1["vniBindings"], isinstance(vlan_vrf, int)]):
                 retrieved_vlan = get_value(vxlan1, f"vniBindings..{str_vni}..vlan", separator="..")
             elif str_vni in vxlan1["vniBindingsToVrf"]:
-                retrieved_vlan = get_value(vxlan1, f"vniBindingsToVrf..{str_vni}..vlan", separator="..")
-            if not retrieved_vlan:
+                if isinstance(vlan_vrf, int):
+                    retrieved_vlan = get_value(vxlan1, f"vniBindingsToVrf..{str_vni}..vlan", separator="..")
+                else:
+                    retrieved_vrf = get_value(vxlan1, f"vniBindingsToVrf..{str_vni}..vrfName", separator="..")
+            if not any([retrieved_vlan, retrieved_vrf]):
                 self.result.is_failure(f"Interface: Vxlan1 VNI: {str_vni} - Binding not found")

-            elif vlan != retrieved_vlan:
-                self.result.is_failure(f"Interface: Vxlan1 VNI: {str_vni} VLAN: {vlan} - Wrong VLAN binding - Actual: {retrieved_vlan}")
+            elif retrieved_vlan and vlan_vrf != retrieved_vlan:
+                self.result.is_failure(f"Interface: Vxlan1 VNI: {str_vni} - Wrong VLAN binding - Expected: {vlan_vrf} Actual: {retrieved_vlan}")
+
+            elif retrieved_vrf and vlan_vrf != retrieved_vrf:
+                self.result.is_failure(f"Interface: Vxlan1 VNI: {str_vni} - Wrong VRF binding - Expected: {vlan_vrf} Actual: {retrieved_vrf}")


 class VerifyVxlanVtep(AntaTest):
-    """Verifies the VTEP peers of the Vxlan1 interface.
+    """Verifies Vxlan1 VTEP peers.

     Expected Results
     ----------------
@@ -191,7 +198,7 @@ class VerifyVxlanVtep(AntaTest):
         inputs_vteps = [str(input_vtep) for input_vtep in self.inputs.vteps]

         if (vxlan1 := get_value(self.instance_commands[0].json_output, "interfaces.Vxlan1")) is None:
|
||||||
self.result.is_skipped("Vxlan1 interface is not configured")
|
self.result.is_skipped("Interface: Vxlan1 - Not configured")
|
||||||
return
|
return
|
||||||
|
|
||||||
difference1 = set(inputs_vteps).difference(set(vxlan1["vteps"]))
|
difference1 = set(inputs_vteps).difference(set(vxlan1["vteps"]))
|
||||||
|
@ -205,7 +212,7 @@ class VerifyVxlanVtep(AntaTest):
|
||||||
|
|
||||||
|
|
||||||
class VerifyVxlan1ConnSettings(AntaTest):
|
class VerifyVxlan1ConnSettings(AntaTest):
|
||||||
"""Verifies the interface vxlan1 source interface and UDP port.
|
"""Verifies Vxlan1 source interface and UDP port.
|
||||||
|
|
||||||
Expected Results
|
Expected Results
|
||||||
----------------
|
----------------
|
||||||
|
@ -243,7 +250,7 @@ class VerifyVxlan1ConnSettings(AntaTest):
|
||||||
# Skip the test case if vxlan1 interface is not configured
|
# Skip the test case if vxlan1 interface is not configured
|
||||||
vxlan_output = get_value(command_output, "interfaces.Vxlan1")
|
vxlan_output = get_value(command_output, "interfaces.Vxlan1")
|
||||||
if not vxlan_output:
|
if not vxlan_output:
|
||||||
self.result.is_skipped("Vxlan1 interface is not configured.")
|
self.result.is_skipped("Interface: Vxlan1 - Not configured")
|
||||||
return
|
return
|
||||||
|
|
||||||
src_intf = vxlan_output.get("srcIpIntf")
|
src_intf = vxlan_output.get("srcIpIntf")
|
||||||
|
|
|
@@ -290,7 +290,7 @@ class Catchtime:
         """__enter__ method."""
         self.start = perf_counter()
         if self.logger and self.message:
-            self.logger.info("%s ...", self.message)
+            self.logger.debug("%s ...", self.message)
         return self
 
     def __exit__(self, exc_type: type[BaseException] | None, exc_val: BaseException | None, exc_tb: TracebackType | None) -> None:
@@ -298,7 +298,7 @@ class Catchtime:
         self.raw_time = perf_counter() - self.start
         self.time = format_td(self.raw_time, 3)
         if self.logger and self.message:
-            self.logger.info("%s completed in: %s.", self.message, self.time)
+            self.logger.debug("%s completed in: %s.", self.message, self.time)
 
 
 def cprofile(sort_by: str = "cumtime") -> Callable[[F], F]:
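The `Catchtime` change only demotes the two log messages from INFO to DEBUG, so normal runs stay quiet. The context-manager pattern itself can be sketched as follows; this is a minimal sketch in which `format_td` is replaced by a simple fixed-precision format.

```python
import logging
from time import perf_counter


class Catchtime:
    """Measure wall-clock time of a `with` block, optionally logging at DEBUG."""

    def __init__(self, logger=None, message=None):
        self.logger, self.message = logger, message

    def __enter__(self):
        self.start = perf_counter()
        if self.logger and self.message:
            self.logger.debug("%s ...", self.message)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Elapsed time is kept raw (float) and pre-formatted (str).
        self.raw_time = perf_counter() - self.start
        self.time = f"{self.raw_time:.3f} seconds"
        if self.logger and self.message:
            self.logger.debug("%s completed in: %s.", self.message, self.time)


with Catchtime(logging.getLogger(__name__), "Running checks") as t:
    sum(range(1000))

print(t.time)
```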
@@ -21,7 +21,6 @@ if TYPE_CHECKING:
 LOGGER = getLogger(__name__)
 
 
-# pylint: disable=too-many-instance-attributes
 @dataclass(frozen=True)
 class EapiRequest:
     """Model for an eAPI request.
@@ -16,6 +16,7 @@ from typing import TYPE_CHECKING, Any, Literal, overload
 # Public Imports
 # -----------------------------------------------------------------------------
 import httpx
+from typing_extensions import deprecated
 
 # -----------------------------------------------------------------------------
 # Private Imports
@@ -51,6 +52,7 @@ class Device(httpx.AsyncClient):
     """
 
     auth = None
+    EAPI_COMMAND_API_URL = "/command-api"
     EAPI_OFMT_OPTIONS = ("json", "text")
     EAPI_DEFAULT_OFMT = "json"
 
@@ -109,6 +111,7 @@ class Device(httpx.AsyncClient):
         super().__init__(**kwargs)
         self.headers["Content-Type"] = "application/json-rpc"
 
+    @deprecated("This method is deprecated, use `Device.check_api_endpoint` method instead. This will be removed in ANTA v2.0.0.", category=DeprecationWarning)
     async def check_connection(self) -> bool:
         """Check the target device to ensure that the eAPI port is open and accepting connections.
 
@@ -122,6 +125,22 @@ class Device(httpx.AsyncClient):
         """
         return await port_check_url(self.base_url)
 
+    async def check_api_endpoint(self) -> bool:
+        """Check the target device eAPI HTTP endpoint with a HEAD request.
+
+        It is recommended that a caller checks the connection before invoking CLI commands,
+        but this step is not required.
+
+        Returns
+        -------
+        bool
+            True when the device eAPI HTTP endpoint is accessible (2xx status code),
+            otherwise an HTTPStatusError exception is raised.
+        """
+        response = await self.head(self.EAPI_COMMAND_API_URL, timeout=5)
+        response.raise_for_status()
+        return True
+
     # Single command, JSON output, no suppression
     @overload
     async def cli(
@@ -416,7 +435,7 @@ class Device(httpx.AsyncClient):
         The list of command results; either dict or text depending on the
         JSON-RPC format parameter.
         """
-        res = await self.post("/command-api", json=jsonrpc)
+        res = await self.post(self.EAPI_COMMAND_API_URL, json=jsonrpc)
         res.raise_for_status()
         body = res.json()
@@ -7,7 +7,7 @@
 ANTA is a Python library that can be used in user applications. This section describes how you can leverage ANTA Python modules to help you create your own NRFU solution.
 
 > [!TIP]
-> If you are unfamiliar with asyncio, refer to the Python documentation relevant to your Python version - https://docs.python.org/3/library/asyncio.html
+> If you are unfamiliar with asyncio, refer to the Python documentation relevant to your Python version - <https://docs.python.org/3/library/asyncio.html>
 
 ## [AntaDevice](../api/device.md#anta.device.AntaDevice) Abstract Class
 
@@ -24,7 +24,7 @@ The [copy()](../api/device.md#anta.device.AntaDevice.copy) coroutine is used to
 The [AsyncEOSDevice](../api/device.md#anta.device.AsyncEOSDevice) class is an implementation of [AntaDevice](../api/device.md#anta.device.AntaDevice) for Arista EOS.
 It uses the [aio-eapi](https://github.com/jeremyschulman/aio-eapi) eAPI client and the [AsyncSSH](https://github.com/ronf/asyncssh) library.
 
-- The [\_collect()](../api/device.md#anta.device.AsyncEOSDevice._collect) coroutine collects [AntaCommand](../api/models.md#anta.models.AntaCommand) outputs using eAPI.
+- The [\_collect()](../api/device.md#anta.device.AsyncEOSDevice._collect) coroutine collects [AntaCommand](../api/commands.md#anta.models.AntaCommand) outputs using eAPI.
 - The [refresh()](../api/device.md#anta.device.AsyncEOSDevice.refresh) coroutine tries to open a TCP connection on the eAPI port and update the `is_online` attribute accordingly. If the TCP connection succeeds, it sends a `show version` command to gather the hardware model of the device and updates the `established` and `hw_model` attributes.
 - The [copy()](../api/device.md#anta.device.AsyncEOSDevice.copy) coroutine copies files to and from the device using the SCP protocol.
13 docs/api/settings.md Normal file
@@ -0,0 +1,13 @@
+---
+anta_title: ANTA Settings
+---
+<!--
+  ~ Copyright (c) 2023-2025 Arista Networks, Inc.
+  ~ Use of this source code is governed by the Apache License 2.0
+  ~ that can be found in the LICENSE file.
+-->
+
+### ::: anta.settings
+
+    options:
+      show_root_full_path: true
@@ -19,6 +19,7 @@ Here are the tests that we currently provide:
 - [Configuration](tests/configuration.md)
 - [Connectivity](tests/connectivity.md)
 - [CVX](tests/cvx.md)
+- [EVPN](tests/evpn.md)
 - [Field Notices](tests/field_notices.md)
 - [Flow Tracking](tests/flow_tracking.md)
 - [GreenT](tests/greent.md)
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for AAA tests
|
anta_title: ANTA Tests for AAA
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -11,7 +11,8 @@ anta_title: ANTA catalog for AAA tests
|
||||||
::: anta.tests.aaa
|
::: anta.tests.aaa
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
|
|
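This same options change repeats across the docs pages that follow: the custom `anta_hide_test_module_description` flag moves under an `extra:` mapping, which is where newer mkdocstrings releases expect options the handler does not itself define (our reading of the change; the flag and its value are untouched). After the edit, a page's mkdocstrings block reads roughly:

```markdown
::: anta.tests.aaa

    options:
      extra:
        anta_hide_test_module_description: true
      filters:
      - "!test"
      - "!render"
```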
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for Adaptive Virtual Topology (AVT) tests
+anta_title: ANTA Tests for Adaptive Virtual Topology (AVT)
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for Adaptive Virtual Topology (AVT) tests
 ::: anta.tests.avt
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -30,7 +31,8 @@ anta_title: ANTA catalog for Adaptive Virtual Topology (AVT) tests
 ::: anta.input_models.avt
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__init__"
       - "!^__str__"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for BFD tests
+anta_title: ANTA Tests for BFD
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for BFD tests
 ::: anta.tests.bfd
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -30,7 +31,8 @@ anta_title: ANTA catalog for BFD tests
 ::: anta.input_models.bfd
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__str__"
       merge_init_into_class: false
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for device configuration tests
+anta_title: ANTA Tests for device configuration
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for device configuration tests
 ::: anta.tests.configuration
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for connectivity tests
+anta_title: ANTA Tests for connectivity
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for connectivity tests
 ::: anta.tests.connectivity
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -30,9 +31,11 @@ anta_title: ANTA catalog for connectivity tests
 ::: anta.input_models.connectivity
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__str__"
+      - "!^__init__"
       merge_init_into_class: false
       show_bases: false
       show_labels: true
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for CVX tests
+anta_title: ANTA Tests for CVX
 ---
 
 <!--
@@ -8,10 +8,31 @@ anta_title: ANTA catalog for CVX tests
 ~ that can be found in the LICENSE file.
 -->
 
+# Tests
+
 ::: anta.tests.cvx
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
+      filters:
+      - "!test"
+      - "!render"
+      merge_init_into_class: false
+      show_bases: false
+      show_labels: true
+      show_root_heading: false
+      show_root_toc_entry: false
+      show_symbol_type_heading: false
+      show_symbol_type_toc: false
+
+# Input models
+
+::: anta.input_models.cvx
+
+    options:
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
45 docs/api/tests/evpn.md Normal file
@@ -0,0 +1,45 @@
+---
+anta_title: ANTA Tests for EVPN
+---
+
+<!--
+  ~ Copyright (c) 2023-2025 Arista Networks, Inc.
+  ~ Use of this source code is governed by the Apache License 2.0
+  ~ that can be found in the LICENSE file.
+-->
+
+# Tests
+
+::: anta.tests.evpn
+
+    options:
+      extra:
+        anta_hide_test_module_description: true
+      filters:
+      - "!test"
+      - "!render"
+      - "!^_[^_]"
+      merge_init_into_class: false
+      show_bases: false
+      show_labels: true
+      show_root_heading: false
+      show_root_toc_entry: false
+      show_symbol_type_heading: false
+      show_symbol_type_toc: false
+
+# Input models
+
+::: anta.input_models.evpn
+
+    options:
+      extra:
+        anta_hide_test_module_description: true
+      filters:
+      - "!^__str__"
+      merge_init_into_class: false
+      show_bases: false
+      show_labels: true
+      show_root_heading: false
+      show_root_toc_entry: false
+      show_symbol_type_heading: false
+      show_symbol_type_toc: false
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for Field Notices tests
+anta_title: ANTA Tests for Field Notices
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for Field Notices tests
 ::: anta.tests.field_notices
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for flow tracking tests
+anta_title: ANTA Tests for flow tracking
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for flow tracking tests
 ::: anta.tests.flow_tracking
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -31,7 +32,8 @@ anta_title: ANTA catalog for flow tracking tests
 ::: anta.input_models.flow_tracking
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__init__"
       - "!^__str__"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for GreenT tests
+anta_title: ANTA Tests for GreenT
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for GreenT tests
 ::: anta.tests.greent
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for hardware tests
+anta_title: ANTA Tests for hardware
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for hardware tests
 ::: anta.tests.hardware
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for interfaces tests
+anta_title: ANTA Tests for interfaces
 ---
 
 <!--
@@ -13,10 +13,12 @@ anta_title: ANTA catalog for interfaces tests
 ::: anta.tests.interfaces
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
+      - "!_.*"
       merge_init_into_class: false
       show_bases: false
       show_labels: true
@@ -30,9 +32,11 @@ anta_title: ANTA catalog for interfaces tests
 ::: anta.input_models.interfaces
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__str__"
+      - "!^__init__"
       merge_init_into_class: false
       show_bases: false
       show_labels: true
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for LANZ tests
+anta_title: ANTA Tests for LANZ
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for LANZ tests
 ::: anta.tests.lanz
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for logging tests
+anta_title: ANTA Tests for logging
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for logging tests
 ::: anta.tests.logging
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -30,7 +31,8 @@ anta_title: ANTA catalog for logging tests
 ::: anta.input_models.logging
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for MLAG tests
+anta_title: ANTA Tests for MLAG
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for MLAG tests
 ::: anta.tests.mlag
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for multicast and IGMP tests
+anta_title: ANTA Tests for multicast and IGMP
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for multicast and IGMP tests
 ::: anta.tests.multicast
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for Router path-selection tests
+anta_title: ANTA Tests for Router path-selection
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for Router path-selection tests
 ::: anta.tests.path_selection
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -30,7 +31,8 @@ anta_title: ANTA catalog for Router path-selection tests
 ::: anta.input_models.path_selection
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__str__"
       merge_init_into_class: false
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for profiles tests
+anta_title: ANTA Tests for profiles
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for profiles tests
 ::: anta.tests.profiles
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for PTP tests
+anta_title: ANTA Tests for PTP
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for PTP tests
 ::: anta.tests.ptp
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for BGP tests
+anta_title: ANTA Tests for BGP
 ---
 
 <!--
@@ -21,7 +21,8 @@ anta_title: ANTA catalog for BGP tests
 ::: anta.tests.routing.bgp
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -39,7 +40,8 @@ anta_title: ANTA catalog for BGP tests
 ::: anta.input_models.routing.bgp
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__init__"
       - "!^__str__"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for generic routing tests
+anta_title: ANTA Tests for generic routing
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for generic routing tests
 ::: anta.tests.routing.generic
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -30,7 +31,8 @@ anta_title: ANTA catalog for generic routing tests
 ::: anta.input_models.routing.generic
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__str__"
       merge_init_into_class: false
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for IS-IS tests
+anta_title: ANTA Tests for IS-IS
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for IS-IS tests
 ::: anta.tests.routing.isis
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -31,7 +32,8 @@ anta_title: ANTA catalog for IS-IS tests
 ::: anta.input_models.routing.isis
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__init__"
       - "!^__str__"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for OSPF tests
+anta_title: ANTA Tests for OSPF
 ---
 
 <!--
@@ -11,7 +11,8 @@ anta_title: ANTA catalog for OSPF tests
 ::: anta.tests.routing.ospf
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -1,5 +1,5 @@
 ---
-anta_title: ANTA catalog for security tests
+anta_title: ANTA Tests for security
 ---
 
 <!--
@@ -13,7 +13,8 @@ anta_title: ANTA catalog for security tests
 ::: anta.tests.security
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!test"
       - "!render"
@@ -30,7 +31,8 @@ anta_title: ANTA catalog for security tests
 ::: anta.input_models.security
 
     options:
-      anta_hide_test_module_description: true
+      extra:
+        anta_hide_test_module_description: true
       filters:
       - "!^__init__"
       - "!^__str__"
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for services tests
|
anta_title: ANTA Tests for services
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -13,7 +13,8 @@ anta_title: ANTA catalog for services tests
|
||||||
::: anta.tests.services
|
::: anta.tests.services
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
@ -30,7 +31,8 @@ anta_title: ANTA catalog for services tests
|
||||||
::: anta.input_models.services
|
::: anta.input_models.services
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!^__init__"
|
- "!^__init__"
|
||||||
- "!^__str__"
|
- "!^__str__"
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for SNMP tests
|
anta_title: ANTA Tests for SNMP
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -13,7 +13,8 @@ anta_title: ANTA catalog for SNMP tests
|
||||||
::: anta.tests.snmp
|
::: anta.tests.snmp
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
@ -30,7 +31,8 @@ anta_title: ANTA catalog for SNMP tests
|
||||||
::: anta.input_models.snmp
|
::: anta.input_models.snmp
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!^__str__"
|
- "!^__str__"
|
||||||
merge_init_into_class: false
|
merge_init_into_class: false
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for Software tests
|
anta_title: ANTA Tests for Software
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -11,7 +11,8 @@ anta_title: ANTA catalog for Software tests
|
||||||
::: anta.tests.software
|
::: anta.tests.software
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for STP tests
|
anta_title: ANTA Tests for STP
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -11,7 +11,8 @@ anta_title: ANTA catalog for STP tests
|
||||||
::: anta.tests.stp
|
::: anta.tests.stp
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for STUN tests
|
anta_title: ANTA Tests for STUN
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -13,7 +13,8 @@ anta_title: ANTA catalog for STUN tests
|
||||||
::: anta.tests.stun
|
::: anta.tests.stun
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
@ -30,7 +31,8 @@ anta_title: ANTA catalog for STUN tests
|
||||||
::: anta.input_models.stun
|
::: anta.input_models.stun
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!^__init__"
|
- "!^__init__"
|
||||||
- "!^__str__"
|
- "!^__str__"
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for System tests
|
anta_title: ANTA Tests for System
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -13,7 +13,8 @@ anta_title: ANTA catalog for System tests
|
||||||
::: anta.tests.system
|
::: anta.tests.system
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
@ -30,7 +31,8 @@ anta_title: ANTA catalog for System tests
|
||||||
::: anta.input_models.system
|
::: anta.input_models.system
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!^__str__"
|
- "!^__str__"
|
||||||
merge_init_into_class: false
|
merge_init_into_class: false
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for VLAN tests
|
anta_title: ANTA Tests for VLAN
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -8,10 +8,13 @@ anta_title: ANTA catalog for VLAN tests
|
||||||
~ that can be found in the LICENSE file.
|
~ that can be found in the LICENSE file.
|
||||||
-->
|
-->
|
||||||
|
|
||||||
|
# Tests
|
||||||
|
|
||||||
::: anta.tests.vlan
|
::: anta.tests.vlan
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
@ -22,3 +25,20 @@ anta_title: ANTA catalog for VLAN tests
|
||||||
show_root_toc_entry: false
|
show_root_toc_entry: false
|
||||||
show_symbol_type_heading: false
|
show_symbol_type_heading: false
|
||||||
show_symbol_type_toc: false
|
show_symbol_type_toc: false
|
||||||
|
|
||||||
|
# Input models
|
||||||
|
|
||||||
|
::: anta.input_models.vlan
|
||||||
|
|
||||||
|
options:
|
||||||
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
|
filters:
|
||||||
|
- "!^__str__"
|
||||||
|
merge_init_into_class: false
|
||||||
|
show_bases: false
|
||||||
|
show_labels: true
|
||||||
|
show_root_heading: false
|
||||||
|
show_root_toc_entry: false
|
||||||
|
show_symbol_type_heading: false
|
||||||
|
show_symbol_type_toc: false
|
||||||
|
|
|
@ -1,5 +1,5 @@
|
||||||
---
|
---
|
||||||
anta_title: ANTA catalog for VXLAN tests
|
anta_title: ANTA Tests for VXLAN
|
||||||
---
|
---
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
|
@ -11,7 +11,8 @@ anta_title: ANTA catalog for VXLAN tests
|
||||||
::: anta.tests.vxlan
|
::: anta.tests.vxlan
|
||||||
|
|
||||||
options:
|
options:
|
||||||
anta_hide_test_module_description: true
|
extra:
|
||||||
|
anta_hide_test_module_description: true
|
||||||
filters:
|
filters:
|
||||||
- "!test"
|
- "!test"
|
||||||
- "!render"
|
- "!render"
|
||||||
|
|
|
@ -16,40 +16,7 @@ This command will list all devices available in the inventory. Using the `--tags
|
||||||
### Command overview
|
### Command overview
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
Usage: anta get inventory [OPTIONS]
|
--8<-- "anta_get_inventory_help.txt"
|
||||||
|
|
||||||
Show inventory loaded in ANTA.
|
|
||||||
|
|
||||||
Options:
|
|
||||||
-u, --username TEXT Username to connect to EOS [env var:
|
|
||||||
ANTA_USERNAME; required]
|
|
||||||
-p, --password TEXT Password to connect to EOS that must be
|
|
||||||
provided. It can be prompted using '--prompt'
|
|
||||||
option. [env var: ANTA_PASSWORD]
|
|
||||||
--enable-password TEXT Password to access EOS Privileged EXEC mode.
|
|
||||||
It can be prompted using '--prompt' option.
|
|
||||||
Requires '--enable' option. [env var:
|
|
||||||
ANTA_ENABLE_PASSWORD]
|
|
||||||
--enable Some commands may require EOS Privileged EXEC
|
|
||||||
mode. This option tries to access this mode
|
|
||||||
before sending a command to the device. [env
|
|
||||||
var: ANTA_ENABLE]
|
|
||||||
-P, --prompt Prompt for passwords if they are not
|
|
||||||
provided. [env var: ANTA_PROMPT]
|
|
||||||
--timeout FLOAT Global API timeout. This value will be used
|
|
||||||
for all devices. [env var: ANTA_TIMEOUT;
|
|
||||||
default: 30.0]
|
|
||||||
--insecure Disable SSH Host Key validation. [env var:
|
|
||||||
ANTA_INSECURE]
|
|
||||||
--disable-cache Disable cache globally. [env var:
|
|
||||||
ANTA_DISABLE_CACHE]
|
|
||||||
-i, --inventory FILE Path to the inventory YAML file. [env var:
|
|
||||||
ANTA_INVENTORY; required]
|
|
||||||
--tags TEXT List of tags using comma as separator:
|
|
||||||
tag1,tag2,tag3. [env var: ANTA_TAGS]
|
|
||||||
--connected / --not-connected Display inventory after connection has been
|
|
||||||
created
|
|
||||||
--help Show this message and exit.
|
|
||||||
```
|
```
|
||||||
|
|
||||||
> [!TIP]
|
> [!TIP]
|
||||||
|
|
|
@ -7,27 +7,18 @@ anta_title: Retrieving Tests information
|
||||||
~ that can be found in the LICENSE file.
|
~ that can be found in the LICENSE file.
|
||||||
-->
|
-->
|
||||||
|
|
||||||
`anta get tests` commands help you discover available tests in ANTA.
|
## `anta get tests`
|
||||||
|
|
||||||
|
The `anta get tests` command helps you discover the available tests in ANTA.
|
||||||
|
|
||||||
### Command overview
|
### Command overview
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
Usage: anta get tests [OPTIONS]
|
--8<-- "anta_get_tests_help.txt"
|
||||||
|
|
||||||
Show all builtin ANTA tests with an example output retrieved from each test
|
|
||||||
documentation.
|
|
||||||
|
|
||||||
Options:
|
|
||||||
--module TEXT Filter tests by module name. [default: anta.tests]
|
|
||||||
--test TEXT Filter by specific test name. If module is specified,
|
|
||||||
searches only within that module.
|
|
||||||
--short Display test names without their inputs.
|
|
||||||
--count Print only the number of tests found.
|
|
||||||
--help Show this message and exit.
|
|
||||||
```
|
```
|
||||||
|
|
||||||
> [!TIP]
|
> [!TIP]
|
||||||
> By default, `anta get tests` will retrieve all tests available in ANTA.
|
> By default, `anta get tests` retrieves all the tests available in ANTA.
|
||||||
|
|
||||||
### Examples
|
### Examples
|
||||||
|
|
||||||
|
@ -60,7 +51,7 @@ anta.tests.aaa:
|
||||||
[...]
|
[...]
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Module usage
|
#### Filtering using `--module`
|
||||||
|
|
||||||
To retrieve all the tests from `anta.tests.stun`.
|
To retrieve all the tests from `anta.tests.stun`.
|
||||||
|
|
||||||
|
@ -81,7 +72,7 @@ anta.tests.stun:
|
||||||
# Verifies the STUN server status is enabled and running.
|
# Verifies the STUN server status is enabled and running.
|
||||||
```
|
```
|
||||||
|
|
||||||
#### Test usage
|
#### Filtering using `--test`
|
||||||
|
|
||||||
``` yaml title="anta get tests --test VerifyTacacsSourceIntf"
|
``` yaml title="anta get tests --test VerifyTacacsSourceIntf"
|
||||||
anta.tests.aaa:
|
anta.tests.aaa:
|
||||||
|
@ -118,3 +109,132 @@ anta.tests.aaa:
|
||||||
```bash title="anta get tests --count"
|
```bash title="anta get tests --count"
|
||||||
There are 155 tests available in `anta.tests`.
|
There are 155 tests available in `anta.tests`.
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## `anta get commands`
|
||||||
|
|
||||||
|
`anta get commands` returns the EOS commands used by the targeted tests. If no filter is provided, all the built-in ANTA tests are targeted.
|
||||||
|
|
||||||
|
### Command overview
|
||||||
|
|
||||||
|
```bash
|
||||||
|
--8<-- "anta_get_commands_help.txt"
|
||||||
|
```
|
||||||
|
|
||||||
|
> [!TIP]
|
||||||
|
> By default, `anta get commands` returns the commands from every built-in ANTA test.
|
||||||
|
|
||||||
|
### Examples
|
||||||
|
|
||||||
|
#### Default usage
|
||||||
|
|
||||||
|
``` yaml title="anta get commands"
|
||||||
|
anta.tests.aaa:
|
||||||
|
- VerifyAcctConsoleMethods:
|
||||||
|
- show aaa methods accounting
|
||||||
|
- VerifyAcctDefaultMethods:
|
||||||
|
- show aaa methods accounting
|
||||||
|
- VerifyAuthenMethods:
|
||||||
|
- show aaa methods authentication
|
||||||
|
- VerifyAuthzMethods:
|
||||||
|
- show aaa methods authorization
|
||||||
|
- VerifyTacacsServerGroups:
|
||||||
|
- show tacacs
|
||||||
|
- VerifyTacacsServers:
|
||||||
|
- show tacacs
|
||||||
|
- VerifyTacacsSourceIntf:
|
||||||
|
- show tacacs
|
||||||
|
anta.tests.avt:
|
||||||
|
- VerifyAVTPathHealth:
|
||||||
|
- show adaptive-virtual-topology path
|
||||||
|
- VerifyAVTRole:
|
||||||
|
- show adaptive-virtual-topology path
|
||||||
|
- VerifyAVTSpecificPath:
|
||||||
|
- show adaptive-virtual-topology path
|
||||||
|
[...]
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Filtering using `--module`
|
||||||
|
|
||||||
|
To retrieve all the commands from the tests in `anta.tests.stun`.
|
||||||
|
|
||||||
|
``` yaml title="anta get commands --module anta.tests.stun"
|
||||||
|
anta.tests.stun:
|
||||||
|
- VerifyStunClient:
|
||||||
|
- show stun client translations {source_address} {source_port}
|
||||||
|
- VerifyStunClientTranslation:
|
||||||
|
- show stun client translations {source_address} {source_port}
|
||||||
|
- VerifyStunServer:
|
||||||
|
- show stun server status
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Filtering using `--test`
|
||||||
|
|
||||||
|
``` yaml title="anta get commands --test VerifyBGPExchangedRoutes"
|
||||||
|
anta.tests.routing.bgp:
|
||||||
|
- VerifyBGPExchangedRoutes:
|
||||||
|
- show bgp neighbors {peer} advertised-routes vrf {vrf}
|
||||||
|
- show bgp neighbors {peer} routes vrf {vrf}
|
||||||
|
vrf: MGMT
|
||||||
|
```
|
||||||
|
|
||||||
|
> [!TIP]
|
||||||
|
> You can filter tests by providing a prefix: ANTA will return all tests whose names start with the specified string.
|
||||||
|
|
||||||
|
```yaml title="anta get tests --test VerifyTacacs"
|
||||||
|
anta.tests.aaa:
|
||||||
|
- VerifyTacacsServerGroups:
|
||||||
|
- show tacacs
|
||||||
|
- VerifyTacacsServers:
|
||||||
|
- show tacacs
|
||||||
|
- VerifyTacacsSourceIntf:
|
||||||
|
- show tacacs
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Filtering using `--catalog`
|
||||||
|
|
||||||
|
To retrieve all the commands from the tests in a catalog:
|
||||||
|
|
||||||
|
``` yaml title="anta get commands --catalog my-catalog.yml"
|
||||||
|
anta.tests.interfaces:
|
||||||
|
- VerifyL3MTU:
|
||||||
|
- show interfaces
|
||||||
|
anta.tests.mlag:
|
||||||
|
- VerifyMlagStatus:
|
||||||
|
- show mlag
|
||||||
|
anta.tests.system:
|
||||||
|
- VerifyAgentLogs:
|
||||||
|
- show agent logs crash
|
||||||
|
- VerifyCPUUtilization:
|
||||||
|
- show processes top once
|
||||||
|
- VerifyCoredump:
|
||||||
|
- show system coredump
|
||||||
|
- VerifyFileSystemUtilization:
|
||||||
|
- bash timeout 10 df -h
|
||||||
|
- VerifyMemoryUtilization:
|
||||||
|
- show version
|
||||||
|
- VerifyNTP:
|
||||||
|
- show ntp status
|
||||||
|
- VerifyReloadCause:
|
||||||
|
- show reload cause
|
||||||
|
- VerifyUptime:
|
||||||
|
- show uptime
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Output using `--unique`
|
||||||
|
|
||||||
|
Using the `--unique` flag outputs only the list of unique commands that will be run, which can be useful when configuring an AAA system.
|
||||||
|
|
||||||
|
For instance with the previous catalog, the output would be:
|
||||||
|
|
||||||
|
``` yaml title="anta get commands --catalog my-catalog.yml --unique"
|
||||||
|
show processes top once
|
||||||
|
bash timeout 10 df -h
|
||||||
|
show system coredump
|
||||||
|
show agent logs crash
|
||||||
|
show interfaces
|
||||||
|
show uptime
|
||||||
|
show ntp status
|
||||||
|
show version
|
||||||
|
show reload cause
|
||||||
|
show mlag
|
||||||
|
```
|
||||||
|
|
|
@ -12,23 +12,7 @@ In large setups, it might be beneficial to construct your inventory based on you
|
||||||
## Command overview
|
## Command overview
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ anta get from-ansible --help
|
--8<-- "anta_get_fromansible_help.txt"
|
||||||
Usage: anta get from-ansible [OPTIONS]
|
|
||||||
|
|
||||||
Build ANTA inventory from an ansible inventory YAML file.
|
|
||||||
|
|
||||||
NOTE: This command does not support inline vaulted variables. Make sure to
|
|
||||||
comment them out.
|
|
||||||
|
|
||||||
Options:
|
|
||||||
-o, --output FILE Path to save inventory file [env var:
|
|
||||||
ANTA_INVENTORY; required]
|
|
||||||
--overwrite Do not prompt when overriding current inventory
|
|
||||||
[env var: ANTA_GET_FROM_ANSIBLE_OVERWRITE]
|
|
||||||
-g, --ansible-group TEXT Ansible group to filter
|
|
||||||
--ansible-inventory FILE Path to your ansible inventory file to read
|
|
||||||
[required]
|
|
||||||
--help Show this message and exit.
|
|
||||||
```
|
```
|
||||||
|
|
||||||
> [!WARNING]
|
> [!WARNING]
|
||||||
|
|
|
@ -15,26 +15,7 @@ In large setups, it might be beneficial to construct your inventory based on Clo
|
||||||
## Command overview
|
## Command overview
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
Usage: anta get from-cvp [OPTIONS]
|
--8<-- "anta_get_fromcvp_help.txt"
|
||||||
|
|
||||||
Build ANTA inventory from CloudVision.
|
|
||||||
|
|
||||||
NOTE: Only username/password authentication is supported for on-premises CloudVision instances.
|
|
||||||
Token authentication for both on-premises and CloudVision as a Service (CVaaS) is not supported.
|
|
||||||
|
|
||||||
Options:
|
|
||||||
-o, --output FILE Path to save inventory file [env var: ANTA_INVENTORY;
|
|
||||||
required]
|
|
||||||
--overwrite Do not prompt when overriding current inventory [env
|
|
||||||
var: ANTA_GET_FROM_CVP_OVERWRITE]
|
|
||||||
-host, --host TEXT CloudVision instance FQDN or IP [required]
|
|
||||||
-u, --username TEXT CloudVision username [required]
|
|
||||||
-p, --password TEXT CloudVision password [required]
|
|
||||||
-c, --container TEXT CloudVision container where devices are configured
|
|
||||||
--ignore-cert By default connection to CV will use HTTPS
|
|
||||||
certificate, set this flag to disable it [env var:
|
|
||||||
ANTA_GET_FROM_CVP_IGNORE_CERT]
|
|
||||||
--help Show this message and exit.
|
|
||||||
```
|
```
|
||||||
|
|
||||||
The output is an inventory where the name of the container is added as a tag for each host:
|
The output is an inventory where the name of the container is added as a tag for each host:
|
||||||
|
|
|
@ -29,9 +29,22 @@ $ pip install -e .[dev,cli]
|
||||||
$ pip list -e
|
$ pip list -e
|
||||||
Package Version Editable project location
|
Package Version Editable project location
|
||||||
------- ------- -------------------------
|
------- ------- -------------------------
|
||||||
anta 1.3.0 /mnt/lab/projects/anta
|
anta 1.4.0 /mnt/lab/projects/anta
|
||||||
```
|
```
|
||||||
|
|
||||||
|
!!! info "Installation Note"
|
||||||
|
1. If you are using a shell such as zsh, ensure that extras specifiers in editable installs (like the development dependencies) are enclosed in double quotes so the shell does not expand the square brackets. For example: `pip install -e ."[dev]"`
|
||||||
|
2. If you do not see any output when running the verification command (`pip list -e`), it is likely because the command needs to be executed from within the inner `anta` directory. Navigate to this directory and then verify the installation:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ cd anta/anta
|
||||||
|
# Verify installation
|
||||||
|
$ pip list -e
|
||||||
|
Package Version Editable project location
|
||||||
|
------- ------- --------------------------
|
||||||
|
anta 1.4.0 /mnt/lab/projects/anta
|
||||||
|
```
|
||||||
|
|
||||||
Then, [`tox`](https://tox.wiki/) is configured with few environments to run CI locally:
|
Then, [`tox`](https://tox.wiki/) is configured with a few environments to run CI locally:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
@ -103,14 +116,19 @@ The `pytest_generate_tests` function will parametrize the generic test function
|
||||||
|
|
||||||
See https://docs.pytest.org/en/7.3.x/how-to/parametrize.html#basic-pytest-generate-tests-example
|
See https://docs.pytest.org/en/7.3.x/how-to/parametrize.html#basic-pytest-generate-tests-example
|
||||||
|
|
||||||
The `DATA` structure is a list of dictionaries used to parametrize the test. The list elements have the following keys:
|
The `DATA` structure is a dictionary where:
|
||||||
|
|
||||||
|
- Each key is a tuple of size 2 containing:
|
||||||
|
- An AntaTest subclass imported in the test module as the first element, e.g. VerifyUptime.
|
||||||
|
- A string used as the test name displayed by pytest as the second element.
|
||||||
|
- Each value is an instance of AntaUnitTest, which is a Python TypedDict.
|
||||||
|
|
||||||
|
An AntaUnitTest has the following keys:
|
||||||
|
|
||||||
- `name` (str): Test name as displayed by Pytest.
|
|
||||||
- `test` (AntaTest): An AntaTest subclass imported in the test module - e.g. VerifyUptime.
|
|
||||||
- `eos_data` (list[dict]): List of data mocking EOS returned data to be passed to the test.
|
- `eos_data` (list[dict]): List of data mocking EOS returned data to be passed to the test.
|
||||||
- `inputs` (dict): Dictionary to instantiate the `test` inputs as defined in the class from `test`.
|
- `inputs` (dict): Dictionary to instantiate the `test` inputs as defined in the class from `test`.
|
||||||
- `expected` (dict): Expected test result structure, a dictionary containing a key
|
- `expected` (dict): Expected test result structure, a dictionary containing a key
|
||||||
`result` containing one of the allowed status (`Literal['success', 'failure', 'unset', 'skipped', 'error']`) and optionally a key `messages` which is a list(str) and each message is expected to be a substring of one of the actual messages in the TestResult object.
|
`result` containing one of the allowed statuses (`Literal[AntaTestStatus.SUCCESS, AntaTestStatus.FAILURE, AntaTestStatus.SKIPPED]`) and optionally a key `messages`, a list of strings where each message is expected to be a substring of one of the actual messages in the TestResult object.
|
||||||
|
|
||||||
In order for your unit tests to be correctly collected, you need to import the generic test function even if not used in the Python module.
|
In order for your unit tests to be correctly collected, you need to import the generic test function even if not used in the Python module.
|
||||||
|
|
||||||
|
@ -124,29 +142,24 @@ from tests.units.anta_tests import test
|
||||||
from anta.tests.system import VerifyUptime
|
from anta.tests.system import VerifyUptime
|
||||||
|
|
||||||
# Define test parameters
|
# Define test parameters
|
||||||
DATA: list[dict[str, Any]] = [
|
DATA: dict[tuple[type[AntaTest], str], AntaUnitTest] = {
|
||||||
{
|
(VerifyUptime, "success"): {
|
||||||
# Arbitrary test name
|
# Data returned by EOS on which the AntaTest is tested
|
||||||
"name": "success",
|
"eos_data": [{"upTime": 1186689.15, "loadAvg": [0.13, 0.12, 0.09], "users": 1, "currentTime": 1683186659.139859}],
|
||||||
# Must be an AntaTest definition
|
# Dictionary to instantiate VerifyUptime.Input
|
||||||
"test": VerifyUptime,
|
"inputs": {"minimum": 666},
|
||||||
# Data returned by EOS on which the AntaTest is tested
|
# Expected test result
|
||||||
"eos_data": [{"upTime": 1186689.15, "loadAvg": [0.13, 0.12, 0.09], "users": 1, "currentTime": 1683186659.139859}],
|
"expected": {"result": AntaTestStatus.SUCCESS},
|
||||||
# Dictionary to instantiate VerifyUptime.Input
|
},
|
||||||
"inputs": {"minimum": 666},
|
(VerifyUptime, "failure"): {
|
||||||
# Expected test result
|
# Data returned by EOS on which the AntaTest is tested
|
||||||
"expected": {"result": "success"},
|
"eos_data": [{"upTime": 665.15, "loadAvg": [0.13, 0.12, 0.09], "users": 1, "currentTime": 1683186659.139859}],
|
||||||
},
|
"inputs": {"minimum": 666},
|
||||||
{
|
# If the test returns messages, they need to be expected, otherwise the test will fail.
|
||||||
"name": "failure",
|
# NB: each expected message only needs to be a substring of one of the messages returned by the test. Exact match is not required.
|
||||||
"test": VerifyUptime,
|
"expected": {"result": AntaTestStatus.FAILURE, "messages": ["Device uptime is 665.15 seconds"]},
|
||||||
"eos_data": [{"upTime": 665.15, "loadAvg": [0.13, 0.12, 0.09], "users": 1, "currentTime": 1683186659.139859}],
|
}
|
||||||
"inputs": {"minimum": 666},
|
}
|
||||||
# If the test returns messages, it needs to be expected otherwise test will fail.
|
|
||||||
# NB: expected messages only needs to be included in messages returned by the test. Exact match is not required.
|
|
||||||
"expected": {"result": "failure", "messages": ["Device uptime is 665.15 seconds"]},
|
|
||||||
},
|
|
||||||
]
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## Git Pre-commit hook
|
## Git Pre-commit hook
|
||||||
|
|
33
docs/faq.md
33
docs/faq.md
|
@ -30,7 +30,7 @@ anta_title: Frequently Asked Questions (FAQ)
|
||||||
|
|
||||||
This usually means that the operating system refused to open a new file descriptor (or socket) for the ANTA process. This might be due to the hard limit for open file descriptors currently set for the ANTA process.
|
This usually means that the operating system refused to open a new file descriptor (or socket) for the ANTA process. This might be due to the hard limit for open file descriptors currently set for the ANTA process.
|
||||||
|
|
||||||
At startup, ANTA sets the soft limit of its process to the hard limit up to 16384. This is because the soft limit is usually 1024 and the hard limit is usually higher (depends on the system). If the hard limit of the ANTA process is still lower than the number of selected tests in ANTA, the ANTA process may request to the operating system too many file descriptors and get an error, a WARNING is displayed at startup if this is the case.
|
At startup, ANTA sets the soft limit of its process to the hard limit, up to 16384. This is because the soft limit is usually 1024 and the hard limit is usually higher (depending on the system). If the hard limit of the ANTA process is still lower than the potential connections of all devices, the ANTA process may request too many file descriptors from the operating system and get an error. A WARNING is displayed at startup if this is the case.
|
||||||
|
|
||||||
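To see the limits currently applied to your shell (and thus inherited by an ANTA process started from it), you can query `ulimit`:

```shell
# Soft limit for open file descriptors in the current shell
ulimit -Sn
# Hard limit for open file descriptors in the current shell
ulimit -Hn
```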
### Solution
|
### Solution
|
||||||
|
|
||||||
|
@ -43,11 +43,35 @@ anta_title: Frequently Asked Questions (FAQ)
|
||||||
The `user` is the one with which the ANTA process is started.
|
The `user` is the one with which the ANTA process is started.
|
||||||
The `value` is the new hard limit. The maximum value depends on the system. A hard limit of 16384 should be sufficient for ANTA to run in most high scale scenarios. After creating this file, log out the current session and log in again.
|
The `value` is the new hard limit. The maximum value depends on the system. A hard limit of 16384 should be sufficient for ANTA to run in most high-scale scenarios. After creating this file, log out of the current session and log in again.
|
||||||
|
|
||||||
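As a sketch (assuming a Linux system using `pam_limits`; the username and file location are hypothetical examples, not values from this documentation), an entry raising the hard limit could look like:

```
# Hypothetical entry in a limits configuration file
# <user>   <type>  <item>   <value>
anta-user   hard    nofile   16384
```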
|
## Tests throttling WARNING in the logs
|
||||||
|
|
||||||
|
???+ faq "Tests throttling `WARNING` in the logs"
|
||||||
|
|
||||||
|
ANTA is designed to execute many tests concurrently while ensuring system stability. If the total test count exceeds the maximum concurrency limit, tests are throttled to avoid overwhelming the asyncio event loop and exhausting system resources. A `WARNING` message is logged at startup when this occurs.
|
||||||
|
|
||||||
|
By default, ANTA schedules up to **50000** tests concurrently. This should be sufficient for most use cases, but it may not be optimal for every system. If the number of tests exceeds this value, ANTA executes the first 50000 tests and waits for some tests to complete before executing more.
|
||||||
|
|
||||||
|
### Solution
|
||||||
|
|
||||||
|
You can adjust the maximum concurrency limit using the `ANTA_MAX_CONCURRENCY` environment variable. The optimal value depends on your system CPU usage, memory consumption, and file descriptor limits.
|
||||||
|
|
||||||
|
!!! warning
|
||||||
|
|
||||||
|
Increasing the maximum concurrency limit can lead to system instability if the system is not able to handle the increased load. Monitor system resources and adjust the limit accordingly.
|
||||||
|
|
||||||
|
!!! info "Device Connection Limits"
|
||||||
|
|
||||||
|
Each EOS device is limited to a maximum of **100** concurrent connections. This means that, even if ANTA schedules a high number of tests, it will only attempt to open up to 100 connections at a time towards each device.
|
||||||
|
|
||||||
|
!!! tip
|
||||||
|
If you run ANTA on a large fabric or encounter issues related to resource limits, consider tuning `ANTA_MAX_CONCURRENCY`.
|
||||||
|
Test different values to find the optimal setting for your environment.
|
||||||
|
|
||||||
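For example, the limit can be tuned for a session via the environment variable before running ANTA (the value 20000 below is illustrative, not a recommendation):

```shell
# Override the default scheduling limit of 50000 for this shell session (illustrative value)
export ANTA_MAX_CONCURRENCY=20000
# Subsequent anta nrfu invocations in this session will use the new limit
echo "$ANTA_MAX_CONCURRENCY"
```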
## `Timeout` error in the logs
|
## `Timeout` error in the logs
|
||||||
|
|
||||||
???+ faq "`Timeout` error in the logs"
|
???+ faq "`Timeout` error in the logs"
|
||||||
|
|
||||||
When running ANTA, you can receive `<Foo>Timeout` errors in the logs (could be ReadTimeout, WriteTimeout, ConnectTimeout or PoolTimeout). More details on the timeouts of the underlying library are available here: https://www.python-httpx.org/advanced/timeouts.
|
When running ANTA, you can receive `<Foo>Timeout` errors in the logs (could be `ReadTimeout`, `WriteTimeout`, `ConnectTimeout` or `PoolTimeout`). More details on the timeouts of the underlying library are available here: https://www.python-httpx.org/advanced/timeouts.
|
||||||
|
|
||||||
This might be due to the time the host on which ANTA is run takes to reach the target devices (for instance if going through firewalls, NATs, ...) or when a lot of tests are being run at the same time on a device (eAPI has a queue mechanism to avoid exhausting EOS resources because of a high number of simultaneous eAPI requests).
|
This might be due to the time the host on which ANTA is run takes to reach the target devices (for instance if going through firewalls, NATs, ...) or when a lot of tests are being run at the same time on a device (eAPI has a queue mechanism to avoid exhausting EOS resources because of a high number of simultaneous eAPI requests).
|
||||||
|
|
||||||
|
@ -59,8 +83,7 @@ anta_title: Frequently Asked Questions (FAQ)
|
||||||
anta nrfu --enable --username username --password arista --inventory inventory.yml -c nrfu.yml --timeout 50 text
|
anta nrfu --enable --username username --password arista --inventory inventory.yml -c nrfu.yml --timeout 50 text
|
||||||
```
|
```
|
||||||
|
|
||||||
The previous command set a couple of options for ANTA NRFU, one them being the `timeout` command, by default, when running ANTA from CLI, it is set to 30s.
|
In this command, ANTA NRFU is configured with several options. Notably, the `--timeout` parameter is set to 50 seconds (instead of the default 30 seconds) to allow extra time for API calls to complete.
|
||||||
The timeout is increased to 50s to allow ANTA to wait for API calls a little longer.
|
|
||||||
|
|
||||||
## `ImportError` related to `urllib3`
|
## `ImportError` related to `urllib3`
|
||||||
|
|
||||||
|
@ -154,6 +177,8 @@ anta_title: Frequently Asked Questions (FAQ)
|
||||||
```
|
```
|
||||||
|
|
||||||
You can then add other commands if they are required for your test catalog (`ping` for example) and then tighten down the show commands to only those required for your tests.
+To figure out the full list of commands used by your catalog or ANTA in general, you can use [`anta get commands`](./cli/get-tests.md#anta-get-commands)
2. Configure the following authorization (You may need to adapt depending on your AAA setup).
@@ -84,7 +84,7 @@ which anta
```bash
# Check ANTA version
anta --version
-anta, version v1.3.0
+anta, version v1.4.0
```
## EOS Requirements
@@ -22,6 +22,11 @@ COMMANDS = [
     "anta nrfu tpl-report --help",
     "anta nrfu md-report --help",
     "anta get tags --help",
+    "anta get inventory --help",
+    "anta get tests --help",
+    "anta get from-cvp --help",
+    "anta get from-ansible --help",
+    "anta get commands --help",
 ]

 for command in COMMANDS:
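The `COMMANDS` list above feeds a loop that regenerates the `docs/snippets/*.txt` help files. A hedged sketch of that pattern (not the project's actual script; `python --version` stands in for the `anta` commands, which may not be installed where this runs):

```python
import subprocess
import sys

# Stand-in command; the real script iterates "anta ... --help" invocations
# and writes each captured help text into a docs snippet file.
COMMANDS = [[sys.executable, "--version"]]

for command in COMMANDS:
    # Run the CLI and capture its output, as a snippet generator would.
    result = subprocess.run(command, capture_output=True, text=True)
    print(result.stdout.strip())
```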
19 docs/snippets/anta_get_commands_help.txt Normal file
@@ -0,0 +1,19 @@
$ anta get commands --help
Usage: anta get commands [OPTIONS]

  Print all EOS commands used by the selected ANTA tests.

  It can be filtered by module, test or using a catalog. If no filter is
  given, all built-in ANTA tests commands are retrieved.

Options:
  --module TEXT                 Filter commands by module name.  [default:
                                anta.tests]
  --test TEXT                   Filter by specific test name. If module is
                                specified, searches only within that module.
  -c, --catalog FILE            Path to the test catalog file  [env var:
                                ANTA_CATALOG]
  --catalog-format [yaml|json]  Format of the catalog file, either 'yaml' or
                                'json'  [env var: ANTA_CATALOG_FORMAT]
  --unique                      Print only the unique commands.
  --help                        Show this message and exit.
17 docs/snippets/anta_get_fromansible_help.txt Normal file
@@ -0,0 +1,17 @@
$ anta get from-ansible --help
Usage: anta get from-ansible [OPTIONS]

  Build ANTA inventory from an ansible inventory YAML file.

  NOTE: This command does not support inline vaulted variables. Make sure to
  comment them out.

Options:
  -o, --output FILE         Path to save inventory file  [env var:
                            ANTA_INVENTORY; required]
  --overwrite               Do not prompt when overriding current inventory
                            [env var: ANTA_GET_FROM_ANSIBLE_OVERWRITE]
  -g, --ansible-group TEXT  Ansible group to filter
  --ansible-inventory FILE  Path to your ansible inventory file to read
                            [required]
  --help                    Show this message and exit.
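For context, `anta get from-ansible` reads a standard Ansible YAML inventory. A minimal hypothetical example of such a file (group and host names are invented; the IP matches the documentation examples used elsewhere in this repo):

```yaml
# Hypothetical Ansible inventory; group/host names are examples only.
all:
  children:
    spines:
      hosts:
        spine1:
          ansible_host: 192.168.0.2
```

Passing `-g spines` would then restrict the generated ANTA inventory to hosts in that group.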
Some files were not shown because too many files have changed in this diff