
E2E tests: Add an initial validation stage #3175

Merged
merged 33 commits into 2872-tests-e2e from 3142-validation-stage on Aug 25, 2022

Conversation

mauromalara
Contributor

@mauromalara mauromalara commented Aug 15, 2022

Related issue
#3142

Description

The validation phase runs a general validation before all tests (according to the tests selected to run) and then runs the specific validation tasks for each test.

This phase is divided into four steps:

  1. Collect the data related to the selected tests that will be executed.
  2. Generate a playbook containing cross-checks for the selected tests (a sketch of such a playbook is shown after this list).
  3. Run the generated playbook.
  4. Generate a test-specific playbook to validate the environment required by each test, then execute it. This runs one validation for each selected test set.
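
As a rough illustration, the cross-check playbook generated in step 2 could look like the sketch below. It simply applies the host_checker role to the target hosts; the play name, host pattern, and comments are assumptions for illustration only, since the real file is rendered from the general_validation.j2 template:

# Illustrative sketch only: the actual playbook is generated by
# generate_general_play.yaml from general_validation.j2, so names may differ.
- name: General validation for the selected E2E tests
  hosts: all
  roles:
    # Runs the general checks (OS, Python, Wazuh components, controller-indexer
    # and Filebeat-indexer connections) defined in the host_checker role.
    - role: host_checker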

Add specific validation tasks (for a test module)

To add specific validation tasks to a test, it is necessary to add a new playbook inside the test module's playbook folder, following the default play structure (a concrete example is shown after this skeleton):

- name: <ANY-NAME>
  hosts: "{{ target_hosts }}"
  tasks:
    <ANY-TASK-YOU-WANT>
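
For instance, a minimal validation playbook for a hypothetical test module could be written as follows; the module name and the check itself are only illustrative, and any Ansible tasks can be used instead:

- name: Validation for my_test
  hosts: "{{ target_hosts }}"
  tasks:
    # Illustrative check: this hypothetical test requires auditd on the target hosts.
    - name: Check that auditctl is available
      ansible.builtin.command: which auditctl
      changed_when: false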

Added

  • env_requirements.json: contains information about the requirements of each E2E test.
  • generate_general_play.yaml: generates the general validation playbook.
  • general_validation.j2: Jinja2 template used by the generator playbook.

The following files were added within the host_checker and service_controller roles:

tests/end_to_end/roles/
├── host_checker
│   └── tasks
│       ├── check_controller_indexer.yaml | Checks the connection between the controller node and the indexer.
│       ├── check_filebeat_indexer.yaml | Checks the connection between Filebeat and the indexer.
│       ├── check_os.yaml | Checks whether the OS distribution of each host is supported.
│       ├── check_python.yaml | Checks whether the required Python version is available on each host by default (sketched below).
│       ├── check_wazuh_components.yaml | Checks whether the Wazuh components are up and running.
│       └── main.yaml | Unifies the general checks and searches for failures.
└── service_controller
    └── tasks
        └── get_installation_type.yaml | Checks the Wazuh installation type (Linux hosts).
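
To give an idea of how these checks work, a task file such as check_python.yaml can be sketched roughly as follows; the task, variable, and message names are illustrative and do not necessarily match the real file contents:

# Illustrative sketch of a host check: it records a failure instead of aborting,
# so that main.yaml can later gather every failure and report them together.
- name: Get the default Python version
  command: python3 --version
  register: python_version_output
  changed_when: false
  failed_when: false

- name: Record a failure if the required Python version is not available
  set_fact:
    check_failures: "{{ check_failures | default([]) + ['Python version is less than 3.'] }}"
  when: python_version_output.rc != 0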

Updated

  • conftest.py: added the initial validation phase fixture.
  • test_fim: modified so that the test runs successfully (these changes were needed to test the validation phase).

Testing performed

It is highly recommended (for debugging) to enable the following pytest arguments:

  • --tb=short: shorter traceback format.
  • -s: disable all output capturing (to show the Ansible output).

Otherwise, the output will be very noisy, but the tests will still work anyway.
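
For example, a local run with those arguments enabled could look like this (the test path is just an example, and <PATH-TO-INVENTORY> is a placeholder for your inventory file):

python -m pytest --tb=short -s tests/end_to_end/basic_cases/test_audit/ --inventory_path <PATH-TO-INVENTORY>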


The controller node can't establish a connection with the `wazuh-indexer` (general: failed, specific: not run) 🟢
TASK [host_checker : Verify if any check have failed] **************************
fatal: [wazuh-manager]: FAILED! => {"changed": false, "msg": "Some validations were fail:\n'Ansible Controller node cannot connect correctly with Wazuh Indexer.\n'"}
skipping: [wazuh-agent]
skipping: [wazuh-windows]

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
wazuh-agent                : ok=8    changed=3    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
wazuh-manager              : ok=10   changed=5    unreachable=0    failed=1    skipped=9    rescued=0    ignored=1   
wazuh-windows              : ok=4    changed=1    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
E

============================================================= ERRORS =============================================================
___________________________________________ ERROR at setup of test_audit[ping_google] ____________________________________________
tests/end_to_end/conftest.py:101: in validate_environments
    raise Exception(f"The general validations have failed. Please check that the environments meet the expected "
E   Exception: The general validations have failed. Please check that the environments meet the expected requirements. Result:
E   Ansible Controller node cannot connect correctly with Wazuh Indexer.
==================================================== short test summary info =====================================================
ERROR tests/end_to_end/basic_cases/test_audit/test_audit.py::test_audit[ping_google] - Exception: The general validations have ...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================================= 1 error in 18.34s ========================================================
The connection between Filebeat and `wazuh-indexer` is not established (general: failed, specific: not run) 🟢
TASK [host_checker : Verify if any check have failed] **************************
fatal: [wazuh-manager]: FAILED! => {"changed": false, "msg": "Some validations were fail:\n'Filebeat cannot connect correctly with Wazuh Indexer.\n'"}
skipping: [wazuh-agent]
skipping: [wazuh-windows]

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
wazuh-agent                : ok=8    changed=3    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
wazuh-manager              : ok=10   changed=4    unreachable=0    failed=1    skipped=9    rescued=0    ignored=2   
wazuh-windows              : ok=4    changed=1    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
E

=========================================================================== ERRORS ============================================================================
__________________________________________________________ ERROR at setup of test_audit[ping_google] __________________________________________________________
tests/end_to_end/conftest.py:101: in validate_environments
    raise Exception(f"The general validations have failed. Please check that the environments meet the expected "
E   Exception: The general validations have failed. Please check that the environments meet the expected requirements. Result:
E   Filebeat cannot connect correctly with Wazuh Indexer.
=================================================================== short test summary info ===================================================================
ERROR tests/end_to_end/basic_cases/test_audit/test_audit.py::test_audit[ping_google] - Exception: The general validations have failed. Please check that the...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
====================================================================== 1 error in 19.77s ======================================================================
The OS distribution of one of the hosts is not supported by the current tests (general: failed, specific: not run) 🟢
TASK [host_checker : Verify if any check have failed] **************************
skipping: [centos-manager]
skipping: [windows-agent]
skipping: [centos-agent]
skipping: [ubuntu-agent]
fatal: [ubuntu-manager]: FAILED! => {"changed": false, "msg": "Some validations were fail:\n'The Debian distro isn't supported for the selected tests currently.\n'"}

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
centos-agent               : ok=8    changed=3    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
centos-manager             : ok=9    changed=4    unreachable=0    failed=0    skipped=11   rescued=0    ignored=1   
ubuntu-agent               : ok=8    changed=3    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
ubuntu-manager             : ok=12   changed=5    unreachable=0    failed=1    skipped=7    rescued=0    ignored=2   
windows-agent              : ok=4    changed=1    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
E

=========================================================================== ERRORS ============================================================================
__________________________________________________________ ERROR at setup of test_audit[ping_google] __________________________________________________________
tests/end_to_end/conftest.py:110: in validate_environments
    raise Exception(f"The general validations have failed. Please check that the environments meet the expected "
E   Exception: The general validations have failed. Please check that the environments meet the expected requirements. Result:
E   The Debian distro isn't supported for the selected tests currently.
=================================================================== short test summary info ===================================================================
ERROR tests/end_to_end/basic_cases/test_audit/test_audit.py::test_audit[ping_google] - Exception: The general validations have failed. Please check that the...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
====================================================================== 1 error in 51.80s ======================================================================
The Python version on some of the hosts is not supported (general: failed, specific: not run) 🟢
TASK [host_checker : Verify if any check have failed] **************************
skipping: [windows-agent]
fatal: [centos-manager]: FAILED! => {"changed": false, "msg": "Some validations were fail:\n'Python version is less than 3. Current version: 2.7.5\n'"}
fatal: [centos-agent]: FAILED! => {"changed": false, "msg": "Some validations were fail:\n'Python version is less than 3. Current version: 2.7.5\n'"}
skipping: [ubuntu-manager]
skipping: [ubuntu-agent]

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
centos-agent               : ok=9    changed=3    unreachable=0    failed=1    skipped=10   rescued=0    ignored=0   
centos-manager             : ok=10   changed=4    unreachable=0    failed=1    skipped=9    rescued=0    ignored=1   
ubuntu-agent               : ok=8    changed=3    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
ubuntu-manager             : ok=9    changed=5    unreachable=0    failed=0    skipped=11   rescued=0    ignored=1   
windows-agent              : ok=4    changed=1    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
E

============================================================= ERRORS =============================================================
___________________________________________ ERROR at setup of test_audit[ping_google] ____________________________________________
tests/end_to_end/conftest.py:110: in validate_environments
    raise Exception(f"The general validations have failed. Please check that the environments meet the expected "
E   Exception: The general validations have failed. Please check that the environments meet the expected requirements. Result:
E   Python version is less than 3. Current version: 2.7.5
E   Python version is less than 3. Current version: 2.7.5
==================================================== short test summary info =====================================================
ERROR tests/end_to_end/basic_cases/test_audit/test_audit.py::test_audit[ping_google] - Exception: The general validations have ...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================================= 1 error in 51.08s ========================================================
One or more Wazuh components are not up and running (general: failed, specific: not run) 🟢
TASK [host_checker : Verify if any check have failed] **************************
fatal: [wazuh-manager]: FAILED! => {"changed": false, "msg": "Some validations were fail:\n'filebeat.service is not running.\n'"}
skipping: [wazuh-agent]
skipping: [wazuh-windows]

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
wazuh-agent                : ok=8    changed=3    unreachable=0    failed=0    skipped=12   rescued=0    ignored=0   
wazuh-manager              : ok=10   changed=4    unreachable=0    failed=1    skipped=9    rescued=0    ignored=1   
wazuh-windows              : ok=4    changed=1    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   
E

============================================================= ERRORS =============================================================
___________________________________________ ERROR at setup of test_audit[ping_google] ____________________________________________
tests/end_to_end/conftest.py:101: in validate_environments
    raise Exception(f"The general validations have failed. Please check that the environments meet the expected "
E   Exception: The general validations have failed. Please check that the environments meet the expected requirements. Result:
E   filebeat.service is not running.
==================================================== short test summary info =====================================================
ERROR tests/end_to_end/basic_cases/test_audit/test_audit.py::test_audit[ping_google] - Exception: The general validations have ...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
======================================================= 1 error in 26.42s ========================================================

Run only one test that has a specific validation task that must be executed (general: passed, specific: passed) 🟢

python -m pytest --tb=short -xs tests/end_to_end/basic_cases/test_brute_force/test_brute_force_rdp/ --inventory_path /home/mauro/inventory.yaml

TASK [host_checker : Verify if any check have failed] **************************
skipping: [centos-manager]
skipping: [windows-agent]

PLAY RECAP *********************************************************************
centos-manager             : ok=9    changed=4    unreachable=0    failed=0    skipped=11   rescued=0    ignored=1   
windows-agent              : ok=4    changed=1    unreachable=0    failed=0    skipped=16   rescued=0    ignored=0   

PLAY [Executing validation for Brute Force RDP] ********************************

TASK [Gathering Facts] *********************************************************
ok: [centos-manager]
ok: [windows-agent]

TASK [debug] *******************************************************************
ok: [centos-manager] => {
    "msg": "This task will be executed OK."
}
ok: [windows-agent] => {
    "msg": "This task will be executed OK."
}
Run only one test with a validation task that must fail (general: passed, specific: failed) 🟢
PLAY [Executing validation for Brute Force RDP] ********************************

TASK [Gathering Facts] *********************************************************
ok: [centos-manager]
ok: [windows-agent]

TASK [Executing validation for Brute Force RDP] ********************************
fatal: [centos-manager]: FAILED! => {"changed": false, "msg": "This validation task will fail."}
fatal: [windows-agent]: FAILED! => {"changed": false, "msg": "This validation task will fail."}

PLAY RECAP *********************************************************************
centos-manager             : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   
windows-agent              : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   
E

=========================================================================== ERRORS ============================================================================
___________________________________________________ ERROR at setup of test_brute_force_rdp[rdp_brute_force] ___________________________________________________
tests/end_to_end/conftest.py:138: in validate_environments
    raise Exception(f"The validation phase of {test_suite_name} has failed. Please check that the "
E   Exception: The validation phase of test_brute_force_rdp has failed. Please check that the environments meet the expected requirements.
=================================================================== short test summary info ===================================================================
ERROR tests/end_to_end/basic_cases/test_brute_force/test_brute_force_rdp/test_brute_force_rdp.py::test_brute_force_rdp[rdp_brute_force] - Exception: The val...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
====================================================================== 1 error in 35.27s ======================================================================

Brute Force

Tester Test path Jenkins Local OS Commit Notes
@mauromalara (Developer) N/A 🟢 🟢 🟢 Nothing to highlight
@user (Reviewer) ⚫⚫⚫ 🚫 🚫 🚫 Nothing to highlight

Emotet

Tester Test path Jenkins Local OS Commit Notes
@mauromalara (Developer) N/A 🟢 🟢 🟢 Nothing to highlight
@user (Reviewer) ⚫⚫⚫ 🚫 🚫 🚫 Nothing to highlight

FIM

Tester Test path Jenkins Local OS Commit Notes
@mauromalara (Developer) N/A 🟢 🟢 🟢 Nothing to highlight
@user (Reviewer) ⚫⚫⚫ 🚫 🚫 🚫 Nothing to highlight

Suricata

Tester Test path Jenkins Local OS Commit Notes
@mauromalara (Developer) N/A 🟢 🟢 🟢 Nothing to highlight
@user (Reviewer) ⚫⚫⚫ 🚫 🚫 🚫 Nothing to highlight

@mauromalara mauromalara changed the title from "3142 validation stage" to "E2E tests: Add an initial validation stage" on Aug 15, 2022
Tasks to check filebeat-indexer and controller-indexer connections.
New role related to Wazuh services added.
Some changes related to linter corrections.
The alert timestamp was corrected to allow for negative and positive offsets.
Timeout for a task has been removed because it already had an implicit timeout.
Now this phase deletes the generated file at the end of the execution.
Debug tasks removed.
New task to check Wazuh components added.
The error when setting a variable used to search for failures was fixed.
@mauromalara mauromalara linked an issue Aug 19, 2022 that may be closed by this pull request
@mauromalara mauromalara self-assigned this Aug 19, 2022
Member

@juliamagan juliamagan left a comment


GJ, but some changes are required

@mauromalara
Contributor Author

Last changes are in 3142-validation-stage-temp

Member

@juliamagan juliamagan left a comment


GJ, but some changes are required

Review comment on tests/end_to_end/README.md (outdated, resolved).
@jmv74211 jmv74211 merged commit d60edde into 2872-tests-e2e Aug 25, 2022
@jmv74211 jmv74211 deleted the 3142-validation-stage branch August 25, 2022 17:13