
E2E tests: Add an initial validation stage #3142

Closed
Tracked by #2830
mauromalara opened this issue Aug 2, 2022 · 4 comments · Fixed by #3175
Labels
refactor Redesign and implement an existing development/feature
mauromalara commented Aug 2, 2022

Description

The E2E tests were developed in #2872 with the goal of being run manually. However, after that development, we have identified a number of necessary changes that are common to, or impact, all E2E tests.

This task aims to add an initial validation stage to E2E tests.

Initial validation stage

The initial validation stage will check via Ansible the following requirements on the managed nodes:

  • Python version
  • Linux distribution
  • ...
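
Such checks could be expressed as Ansible assert tasks along these lines (a sketch only; the minimum Python version and the list of supported distributions shown here are placeholders, not the final requirements):

```yaml
# Hypothetical validation tasks; versions and distribution names are illustrative.
- name: Check the Python version on the managed node
  assert:
    that:
      - ansible_python_version is version('3.6', '>=')
    fail_msg: "Unsupported Python version: {{ ansible_python_version }}"

- name: Check the Linux distribution
  assert:
    that:
      - ansible_distribution in ['CentOS', 'Ubuntu']
    fail_msg: "Unsupported distribution: {{ ansible_distribution }}"
```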
@mauromalara mauromalara self-assigned this Aug 2, 2022
@mauromalara mauromalara added team/qa refactor Redesign and implement an existing development/feature labels Aug 2, 2022
@mauromalara mauromalara changed the title E2E tests: Add an initial configuration stage and define Ansible roles to encapsulate common tasks. E2E tests: Add an initial validation stage and define Ansible roles to encapsulate common tasks. Aug 2, 2022
mauromalara commented Aug 2, 2022

Update 02/08/2022

I have researched Ansible roles, handlers, and others. Also, I have created a basic structure to implement the refactor.

Update 04/08/2022

I have changed the directory structure:

deps/wazuh_testing/wazuh_testing/end_to_end/
├── ansible_roles
│   ├── agents
│   │   ├── roles
│   │   │   └── main.yaml
│   │   └── vars
│   │       └── main.yaml
│   └── managers
│       ├── roles
│       │   └── main.yaml
│       └── vars
│           └── main.yaml
│   ...
└── __init__.py

Before a test, a group of tests, or the full suite is executed, a session-scoped fixture validates the environment; it uses the role path and the inventory path to execute the corresponding tasks.
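
A minimal sketch of such a fixture follows. The option names, the playbook filename, and the helper function are illustrative assumptions, not the framework's actual API:

```python
import subprocess

import pytest


def build_validation_command(roles_path, inventory_path):
    """Build the ansible-playbook invocation for the validation stage.

    The playbook name and option layout are illustrative assumptions.
    """
    return [
        "ansible-playbook",
        f"{roles_path}/validation.yaml",
        "--inventory", inventory_path,
    ]


@pytest.fixture(scope="session")
def validate_environment(pytestconfig):
    """Run the environment validation once per test session."""
    roles_path = pytestconfig.getoption("--roles-path")
    inventory_path = pytestconfig.getoption("--inventory-path")
    result = subprocess.run(build_validation_command(roles_path, inventory_path))
    assert result.returncode == 0, "Environment validation failed"
```

Because the fixture has session scope, the validation playbook runs once no matter how many tests request it.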

The validation tasks will be executed in some/all hosts depending on the groups defined in the inventory:

  • managers
  • agents
  • linux
  • windows

All of the above options can be combined.
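
For reference, an inventory defining those groups might look like this (hostnames are placeholders; note that a host can belong to both a role group and an OS group):

```ini
[managers]
manager1

[agents]
agent1
agent2

# OS groups overlap with the role groups above
[linux]
manager1
agent1

[windows]
agent2
```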

@mauromalara mauromalara changed the title E2E tests: Add an initial validation stage and define Ansible roles to encapsulate common tasks. E2E tests: Add an initial validation stage Aug 3, 2022
Update 05/08/2022

I tested the solution proposed above, but it doesn't work as expected. So, I researched further and tried this solution:

  • The roles will be included in the configuration playbook itself, allowing the validation to run before the configuration tasks
  • The role path is defined inside conftest.py
  • The role will include a tasks/ folder with a main.yaml inside it; this file contains all the validation tasks
  • The role path is used in the configure_environment fixture

This way, the pipeline that we will create in the future only needs to run the test(s) with pytest, passing the inventory path as a parameter.
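
The pipeline invocation could then be as simple as the following (the `--inventory-path` option name and the test path are illustrative assumptions):

```
python -m pytest tests/end_to_end/test_fim --inventory-path=/path/to/inventory.yaml
```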

I still have some doubts about the tasks themselves, such as which validations should be performed: for example, the Linux agent and manager roles share tasks, so I see the need to create an additional role called "common".

mauromalara commented Aug 15, 2022

Update 15/08/2022

Running the tests with the validation stage already developed, I realized that:

  1. All the current tests only needed to validate the environment and had no test-specific validation tasks.
  2. With the previous design, one validation playbook had to be launched for each selected test (even if the playbooks repeated tasks).

This change allows running a general validation prior to all tests (according to the tests selected to run) and then running the test-specific validation tasks (if any) for each test.

So, the validation phase is divided into 4 steps:

  1. Step 1: Collect the data related to the selected tests that will be executed.
  2. Step 2: Generate a playbook containing the checks common to the selected tests.
  3. Step 3: Run the generated playbook.
  4. Step 4: Execute test-specific validations (if any). It will run one validation for each selected test set.
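
Steps 1–2 can be sketched as follows. The function name and the data layout are assumptions; the real implementation generates an Ansible playbook, while this sketch only shows the de-duplication of checks shared by several tests:

```python
def collect_validation_tasks(selected_tests, tasks_by_test):
    """Merge the validation tasks of the selected tests, de-duplicating
    tasks shared by several tests so each check runs only once (step 2).

    tasks_by_test maps a test name to its list of validation task names;
    both the mapping and the task names are illustrative.
    """
    merged = []
    seen = set()
    for test in selected_tests:
        for task in tasks_by_test.get(test, []):
            if task not in seen:
                seen.add(task)
                merged.append(task)
    return merged


# Example: two tests share the 'check_python_version' task, so the
# generated playbook would contain it only once.
tasks_by_test = {
    "test_fim": ["check_python_version", "check_distribution"],
    "test_audit": ["check_python_version", "check_audit_daemon"],
}
playbook_tasks = collect_validation_tasks(["test_fim", "test_audit"], tasks_by_test)
```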

The validation phase was executed alongside the following tests:

  • test_fim
  • test_audit
  • test_brute_force

Remaining tasks

  1. Merge changes from the base branch into the working branch
  2. Add a task to check the connection between Filebeat and the wazuh-indexer
  3. Add a task to check the connection between Localhost and the wazuh-indexer
  4. Add a task to check that all the Wazuh components are up and running.
  5. Add a task in each test to validate the supported OS.
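
Item 5 could be implemented with an assert task similar to this (the list of supported distributions is a placeholder, to be filled per test):

```yaml
- name: Validate that the host OS is supported by this test
  assert:
    that:
      - ansible_distribution in supported_distributions
    fail_msg: "{{ ansible_distribution }} is not supported by this test"
  vars:
    supported_distributions: ['CentOS', 'Ubuntu']  # placeholder list
```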

mauromalara added a commit that referenced this issue Aug 16, 2022
Tasks to check filebeat-indexer and controller-indexer connections.
New role related to Wazuh services added.
Some changes related to linter corrections.
mauromalara commented Aug 16, 2022

Update 16/08/2022

Done:

  1. Merge changes from the base branch into the working branch
  2. Add a task to check the connection between Filebeat and the wazuh-indexer
  3. Add a task to check the connection between Localhost and the wazuh-indexer
  4. Run a test to check if the validation phase works properly
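
A connectivity check like items 2–3 might use Ansible's uri module against the wazuh-indexer API (the host, credential variables, and certificate handling shown here are assumptions):

```yaml
- name: Check the connection to the wazuh-indexer
  uri:
    url: "https://{{ indexer_host }}:9200"   # default indexer port; adjust as needed
    method: GET
    user: "{{ indexer_user }}"
    password: "{{ indexer_password }}"
    validate_certs: no
    status_code: 200
```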

Remain:

  1. Add a task to check that all the Wazuh components are up and running. (This task will be cherry-picked from @BelenValdivia's development and included in the host_checker role)
  2. Add a task in each test to validate the supported OS.

test_fim execution (6 test cases)


mauromalara added a commit that referenced this issue Aug 18, 2022
The alert timestamp was corrected to allow for negative and positive offsets.
Timeout for a task has been removed because it already had an implicit timeout.
mauromalara added a commit that referenced this issue Aug 18, 2022
Now this phase deletes the generated file at the end of the execution.
mauromalara added a commit that referenced this issue Aug 19, 2022
Debug tasks removed.
New task to check Wazuh components added.
The error when setting a variable used to search for failures was fixed.
mauromalara added a commit that referenced this issue Aug 19, 2022
mauromalara added a commit that referenced this issue Aug 19, 2022
@mauromalara mauromalara linked a pull request Aug 19, 2022 that will close this issue
mauromalara added a commit that referenced this issue Aug 24, 2022
This change prevents stopping the entire execution when one specific validation fails.
mauromalara added a commit that referenced this issue Aug 24, 2022
mauromalara added a commit that referenced this issue Aug 25, 2022