Installation assistant distributed test rework and migration #60

Merged
Changes from 18 commits

Commits (21)
bddc228
Added provision and deletion of instances
davidcr01 Sep 9, 2024
758dda9
Added private IP capture
davidcr01 Sep 9, 2024
e6fe051
Added certificates generation logic
davidcr01 Sep 10, 2024
5fbb84f
Added certificates generation logic
davidcr01 Sep 10, 2024
ca9051f
Added certificates copy playbook execution
davidcr01 Sep 10, 2024
48bb2b8
Added indexer install playbook execution
davidcr01 Sep 10, 2024
ddbadbc
Added server playbook and task in distributed workflow
davidcr01 Sep 10, 2024
d70f826
Added dashboard install playbook execution
davidcr01 Sep 10, 2024
cc14906
Added indexer cluster start playbook execution
davidcr01 Sep 10, 2024
b0ff7f4
Changed indexer cluster playbook execution order
davidcr01 Sep 10, 2024
b29e4e5
Workers wait master node to be installed
davidcr01 Sep 13, 2024
7244372
Added distributed test playbook execution
davidcr01 Sep 13, 2024
008e2c7
Improving the playbooks output
davidcr01 Sep 13, 2024
c2d44bb
Added allocator info upload as artifact
davidcr01 Sep 16, 2024
841f568
Removed logs save logic
davidcr01 Sep 16, 2024
4864a73
Changed repository reference to the workflow branch
davidcr01 Sep 16, 2024
f21e89a
Added README for workflows
davidcr01 Sep 16, 2024
cccd735
Updated CHANGELOG for #60
davidcr01 Sep 17, 2024
2b0dacd
Changing systems when PR is created
davidcr01 Sep 17, 2024
eebeb6c
Merge branch '4.10.0' into enhancement/20-rework-distributed-workflow…
davidcr01 Sep 18, 2024
31428bd
Merge branch '4.10.0' into enhancement/20-rework-distributed-workflow…
davidcr01 Sep 18, 2024
60 changes: 60 additions & 0 deletions .github/workflows/README.md
@@ -0,0 +1,60 @@
# Installation assistant workflows

This repository includes several GitHub Actions workflows. They automate testing of the Wazuh Installation Assistant installation in different environments and build the related tools and scripts.

## Workflows Overview

1. `Test_installation_assistant`.
This workflow tests the installation of the Wazuh Installation Assistant in a single-node setup. It triggers on pull requests that modify specific directories or files, and can also be manually dispatched.

2. `Test_installation_assistant_distributed`.
This workflow is an extension of the Test_installation_assistant workflow, intended for distributed environments. It provisions three instances and simulates a distributed Wazuh deployment across multiple nodes (indexers, managers, and dashboards).

## Triggering the Workflows

### Automatic Trigger

The test workflows are triggered automatically when a pull request (PR) that affects any of the following paths is created or updated:

- `cert_tool/`
- `common_functions/`
- `config/`
- `install_functions/`
- `passwords_tool/`
- `tests/`

### Manual Trigger

The test workflows can also be triggered manually from the GitHub "Actions" tab, using the `workflow_dispatch` event. When triggered manually, the following input parameters are required (see the example dispatch command after this list):

- **REPOSITORY**: Defines the repository environment (e.g., staging, pre-release).
- **AUTOMATION_REFERENCE**: The branch or tag of the `wazuh-automation` repository, used to clone the Allocation module.
- **SYSTEMS**: A comma-separated list of operating systems to be tested, enclosed in square brackets (e.g., `["CentOS_8", "AmazonLinux_2", "Ubuntu_22", "RHEL8"]`). The available options are: `CentOS_7`, `CentOS_8`, `AmazonLinux_2`, `Ubuntu_16`, `Ubuntu_18`, `Ubuntu_20`, `Ubuntu_22`, `RHEL7`, `RHEL8`.
- **VERBOSITY**: The verbosity level for Ansible playbook execution, with options `-v`, `-vv`, `-vvv`, and `-vvvv`.
- **DESTROY**: Boolean value (true or false) indicating whether to destroy the instances after testing.
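
As an illustration, assuming the GitHub CLI (`gh`) is installed and authenticated, a manual dispatch of the distributed workflow could look like the following sketch (all input values are examples, not required settings):

```bash
# Hypothetical dispatch of the distributed test workflow; the workflow file name
# and the input names come from this repository, the values are only illustrative.
gh workflow run Test_installation_assistant_distributed.yml \
  --ref 4.10.0 \
  -f REPOSITORY=pre-release \
  -f AUTOMATION_REFERENCE=4.10.0 \
  -f SYSTEMS='["Ubuntu_22", "RHEL8"]' \
  -f VERBOSITY=-vv \
  -f DESTROY=true
```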

## Workflow Structure

### Jobs

The test workflows follow a similar structure, with the following key jobs:

1. **Checkout Code**: The workflow fetches the latest code from the wazuh-automation and wazuh-installation-assistant repositories.

2. **Set Up Environment**: The operating system is selected from the `SYSTEMS` input, and the corresponding OS name is stored in the `COMPOSITE_NAME` environment variable.

3. **Install Ansible**: Ansible is installed for managing the provisioning of instances and running the necessary playbooks.

4. **Provisioning Instances**: The distributed workflow allocates AWS instances using the wazuh-automation repository's allocation module. It provisions indexers, managers, and dashboards across the instances. The instance inventory is created dynamically (see the sketch after this list) and used for the later playbook executions.

5. **Ansible Playbooks Execution**: Provision playbooks are executed to prepare the environments for Wazuh components.

6. **Test Execution**: A Python-based testing framework is executed to verify the successful installation and functionality of the Wazuh components on the allocated instances.

7. **Destroy Instances (Optional)**: If the `DESTROY` parameter is set to true, the allocated AWS instances are terminated after the tests. If set to false, the instances and their details are saved as artifacts for later analysis.

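The combined inventory produced in the provisioning job groups the three instances into `indexers`, `managers`, and `dashboards`, plus shared connection variables. A minimal sketch of its layout, written as a heredoc for illustration (group, host, and variable names mirror the workflow; addresses, key paths, user, and port are placeholders):

```bash
# Illustrative only: the shape of the combined inventory assembled under ALLOCATOR_PATH.
cat > /tmp/allocator_instance/inventory <<'EOF'
[indexers]
indexer1 ansible_host=203.0.113.1 private_ip=10.0.0.1 ansible_ssh_private_key_file=/tmp/allocator_instance/key_1
indexer2 ansible_host=203.0.113.2 private_ip=10.0.0.2 ansible_ssh_private_key_file=/tmp/allocator_instance/key_2
indexer3 ansible_host=203.0.113.3 private_ip=10.0.0.3 ansible_ssh_private_key_file=/tmp/allocator_instance/key_3
[managers]
master ansible_host=203.0.113.1 private_ip=10.0.0.1 ansible_ssh_private_key_file=/tmp/allocator_instance/key_1 manager_type=master instance_type=indexer_manager
worker1 ansible_host=203.0.113.2 private_ip=10.0.0.2 ansible_ssh_private_key_file=/tmp/allocator_instance/key_2 manager_type=worker instance_type=indexer_manager
worker2 ansible_host=203.0.113.3 private_ip=10.0.0.3 ansible_ssh_private_key_file=/tmp/allocator_instance/key_3 manager_type=worker instance_type=indexer_manager_dashboard
[dashboards]
dashboard ansible_host=203.0.113.3 private_ip=10.0.0.3 ansible_ssh_private_key_file=/tmp/allocator_instance/key_3
[all:vars]
ansible_user=example-user
ansible_port=22
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
EOF
```
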
### Artifacts
If the instances are not destroyed, the workflow compresses the allocated instances' directory and uploads it as a password-protected artifact; ask the @devel-devops team for the password. One artifact is uploaded per selected OS.
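
For reference, a downloaded artifact can be extracted along these lines (a sketch: the artifact name follows the `allocator-instance-<SYSTEM>` pattern used by the workflow, and the password placeholder must be replaced with the one shared by the team):

```bash
# Hypothetical extraction of a preserved allocator artifact.
unzip -P '<ZIP_ARTIFACTS_PASSWORD>' allocator-instance-Ubuntu_22.zip -d allocator_instance
```
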
## Notes
- Instance allocation: The `Test_installation_assistant_distributed` workflow provisions three instances by default. The roles are distributed as follows:
  - `indexer1`, `indexer2`, `indexer3`: Indexers in the Wazuh cluster.
  - `master`, `worker1`, `worker2`: Wazuh managers, where `master` is the main manager, and `worker1` and `worker2` are worker nodes.
  - `dashboard`: Wazuh dashboard.

- Customization: These workflows allow for customization through the various input parameters, making it easy to test different operating systems, verbosity levels, or different versions of the repositories.
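
If the instances were preserved, the saved inventory can also be reused to re-run individual playbooks locally. A sketch mirroring the workflow's test step (adjust the inventory path to wherever the artifact was extracted; the variable values shown are the workflow defaults):

```bash
# Illustrative local re-run of the distributed test playbook against a preserved inventory.
ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook .github/workflows/ansible-playbooks/distributed_tests.yml \
  -i ./allocator_instance/inventory \
  -l managers \
  -e "tmp_path=/tmp/test" \
  -e "test_name=test_installation_assistant" \
  -vv
```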
294 changes: 281 additions & 13 deletions .github/workflows/Test_installation_assistant_distributed.yml
@@ -1,5 +1,5 @@
run-name: (Distributed) Test installation assistant - Launched by @${{ github.actor }}
name: (Distributed) Test installation assistant
run-name: (Distributed) Test installation assistant - ${{ github.run_id }} - ${{ inputs.SYSTEMS }} - Launched by @${{ github.actor }}
name: (Distributed) Test installation assistant

on:
pull_request:
@@ -21,27 +21,295 @@ on:
- staging
- pre-release
AUTOMATION_REFERENCE:
description: 'wazuh-automation reference'
description: 'Branch or tag of the wazuh-automation repository'
required: true
default: 'v4.10.0'
DEBUG:
description: 'Debug mode'
default: '4.10.0'
SYSTEMS:
description: 'Operating Systems (list of comma-separated quoted strings enclosed in square brackets)'
required: true
default: false
type: boolean
default: '["Ubuntu_22"]'
type: string
VERBOSITY:
description: 'Verbosity level on playbooks execution'
required: true
default: '-v'
type: choice
options:
- -v
- -vv
- -vvv
- -vvvv
DESTROY:
description: 'Destroy instances after run'
required: true
default: true
type: boolean

env:
LABEL: ubuntu-latest
COMPOSITE_NAME: "linux-SUBNAME-amd64"
SESSION_NAME: "Installation-Assistant-Test"
REGION: "us-east-1"
TMP_PATH: "/tmp/test"
ANSIBLE_CALLBACK: "yaml"
RESOURCES_PATH: "${{ github.workspace }}"
PKG_REPOSITORY: "${{ inputs.REPOSITORY }}"
TEST_NAME: "test_installation_assistant"
REPOSITORY_URL: "${{ github.server_url }}/${{ github.repository }}.git"
ALLOCATOR_PATH: "/tmp/allocator_instance"
INSTANCE_NAMES: "instance_1 instance_2 instance_3"

permissions:
id-token: write # This is required for requesting the JWT
contents: read # This is required for actions/checkout

jobs:
initialize-environment:
runs-on: $LABEL
run-test:
runs-on: ubuntu-latest
strategy:
fail-fast: false # If a job fails, the rest of jobs will not be canceled
matrix:
system: ${{ fromJson(inputs.SYSTEMS) }}

steps:
- name: Set up Git
uses: actions/checkout@v3
- name: Checkout code
uses: actions/checkout@v4

- name: View parameters
run: echo "${{ toJson(inputs) }}"

- name: Set COMPOSITE_NAME variable
run: |
case "${{ matrix.system }}" in
"CentOS_7")
SUBNAME="centos-7"
;;
"CentOS_8")
SUBNAME="centos-8"
;;
"AmazonLinux_2")
SUBNAME="amazon-2"
;;
"Ubuntu_16")
SUBNAME="ubuntu-16.04"
;;
"Ubuntu_18")
SUBNAME="ubuntu-18.04"
;;
"Ubuntu_20")
SUBNAME="ubuntu-20.04"
;;
"Ubuntu_22")
SUBNAME="ubuntu-22.04"
;;
"RHEL7")
SUBNAME="redhat-7"
;;
"RHEL8")
SUBNAME="redhat-8"
;;
*)
echo "Invalid SYSTEM selection" >&2
exit 1
;;
esac
COMPOSITE_NAME="${COMPOSITE_NAME/SUBNAME/$SUBNAME}"
echo "COMPOSITE_NAME=$COMPOSITE_NAME" >> $GITHUB_ENV

- name: Install Ansible
run: sudo apt-get update && sudo apt install -y python3 && python3 -m pip install --user ansible-core==2.16 && pip install pyyaml && ansible-galaxy collection install community.general

- name: Set up AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: ${{ secrets.AWS_IAM_ROLE }}
role-session-name: ${{ env.SESSION_NAME }}
aws-region: ${{ env.REGION }}

- name: Checkout wazuh/wazuh-automation repository
uses: actions/checkout@v4
with:
repository: wazuh/wazuh-automation
ref: ${{ inputs.AUTOMATION_REFERENCE }}
token: ${{ secrets.GH_CLONE_TOKEN }}
path: wazuh-automation

- name: Install and set allocator requirements
run: pip3 install -r wazuh-automation/deployability/deps/requirements.txt

- name: Allocate instances and create inventory
id: allocator_instance
run: |
instance_names=($INSTANCE_NAMES)
inventory_file="$ALLOCATOR_PATH/inventory"
inventory_indexers="$ALLOCATOR_PATH/inventory_indexers"
inventory_managers="$ALLOCATOR_PATH/inventory_managers"
inventory_dashboards="$ALLOCATOR_PATH/inventory_dashboards"
inventory_common="$ALLOCATOR_PATH/inventory_common"
inventory_file="$ALLOCATOR_PATH/inventory"

mkdir -p $ALLOCATOR_PATH
echo "[indexers]" > $inventory_indexers
echo "[managers]" > $inventory_managers
echo "[dashboards]" > $inventory_dashboards
echo "[all:vars]" > $inventory_common

for i in ${!instance_names[@]}; do
instance_name=${instance_names[$i]}
# Provision instance in parallel
(
python3 wazuh-automation/deployability/modules/allocation/main.py \
--action create --provider aws --size large \
--composite-name ${{ env.COMPOSITE_NAME }} \
--working-dir $ALLOCATOR_PATH --track-output $ALLOCATOR_PATH/track_${instance_name}.yml \
--inventory-output $ALLOCATOR_PATH/inventory_${instance_name}.yml \
--instance-name gha_${{ github.run_id }}_${{ env.TEST_NAME }}_${instance_name} --label-team devops --label-termination-date 1d

instance_id=$(grep '^identifier' $ALLOCATOR_PATH/track_${instance_name}.yml | awk '{print $2}')
private_ip=$(aws ec2 describe-instances \
--instance-ids $instance_id \
--query 'Reservations[*].Instances[*].PrivateIpAddress' \
--output text)

sed 's/: */=/g' $ALLOCATOR_PATH/inventory_${instance_name}.yml > $ALLOCATOR_PATH/inventory_mod_${instance_name}.yml
sed -i 's/-o StrictHostKeyChecking=no/\"-o StrictHostKeyChecking=no\"/g' $ALLOCATOR_PATH/inventory_mod_${instance_name}.yml
source $ALLOCATOR_PATH/inventory_mod_${instance_name}.yml

# Add instance to corresponding group
if [[ $i -eq 0 ]]; then
echo "indexer1 ansible_host=$ansible_host private_ip=$private_ip ansible_ssh_private_key_file=$ansible_ssh_private_key_file" >> $inventory_indexers
echo "master ansible_host=$ansible_host private_ip=$private_ip ansible_ssh_private_key_file=$ansible_ssh_private_key_file manager_type=master instance_type=indexer_manager" >> $inventory_managers

echo "ansible_user=$ansible_user" >> $inventory_common
echo "ansible_port=$ansible_port" >> $inventory_common
echo "ansible_ssh_common_args='$ansible_ssh_common_args'" >> $inventory_common
elif [[ $i -eq 1 ]]; then
echo "indexer2 ansible_host=$ansible_host private_ip=$private_ip ansible_ssh_private_key_file=$ansible_ssh_private_key_file" >> $inventory_indexers
echo "worker1 ansible_host=$ansible_host private_ip=$private_ip ansible_ssh_private_key_file=$ansible_ssh_private_key_file manager_type=worker instance_type=indexer_manager" >> $inventory_managers
else
echo "indexer3 ansible_host=$ansible_host private_ip=$private_ip ansible_ssh_private_key_file=$ansible_ssh_private_key_file" >> $inventory_indexers
echo "worker2 ansible_host=$ansible_host private_ip=$private_ip ansible_ssh_private_key_file=$ansible_ssh_private_key_file manager_type=worker instance_type=indexer_manager_dashboard" >> $inventory_managers
echo "dashboard ansible_host=$ansible_host private_ip=$private_ip ansible_ssh_private_key_file=$ansible_ssh_private_key_file" >> $inventory_dashboards
fi
) &
done

# Wait for all provisioning tasks to complete
wait

# Combine the temporary inventories into one
cat $inventory_indexers > $inventory_file
cat $inventory_managers >> $inventory_file
cat $inventory_dashboards >> $inventory_file
cat $inventory_common >> $inventory_file

- name: Execute provision playbook
run: |
INSTALL_DEPS=true
INSTALL_PYTHON=true
INSTALL_PIP_DEPS=true

ANSIBLE_STDOUT_CALLBACK=$ANSIBLE_CALLBACK ansible-playbook .github/workflows/ansible-playbooks/provision.yml \
-i $ALLOCATOR_PATH/inventory \
-l indexers \
-e "repository=$REPOSITORY_URL" \
-e "reference=${{ github.ref_name }}" \
-e "tmp_path=$TMP_PATH" \
-e "pkg_repository=$PKG_REPOSITORY" \
-e "install_deps=$INSTALL_DEPS" \
-e "install_python=$INSTALL_PYTHON" \
-e "install_pip_deps=$INSTALL_PIP_DEPS" \
"${{ inputs.VERBOSITY }}"

- name: Execute certificates generation playbook
run: |
ANSIBLE_STDOUT_CALLBACK=$ANSIBLE_CALLBACK ansible-playbook .github/workflows/ansible-playbooks/distributed_generate_certificates.yml \
-i $ALLOCATOR_PATH/inventory \
-e "resources_path=$RESOURCES_PATH" \
-e "pkg_repository=$PKG_REPOSITORY" \
"${{ inputs.VERBOSITY }}"

- name: Copy certificates to nodes
run: |
ANSIBLE_STDOUT_CALLBACK=$ANSIBLE_CALLBACK ansible-playbook .github/workflows/ansible-playbooks/distributed_copy_certificates.yml \
-i $ALLOCATOR_PATH/inventory \
-l indexers \
-e "tmp_path=$TMP_PATH" \
-e "resources_path=$RESOURCES_PATH" \
"${{ inputs.VERBOSITY }}"

- name: Execute indexer installation playbook
run: |
ANSIBLE_STDOUT_CALLBACK=$ANSIBLE_CALLBACK ansible-playbook .github/workflows/ansible-playbooks/distributed_install_indexer.yml \
-i $ALLOCATOR_PATH/inventory \
-l indexers \
-e "tmp_path=$TMP_PATH" \
"${{ inputs.VERBOSITY }}"

- name: Execute indexer cluster start playbook
run: |
INDEXER_ADMIN_PASSWORD="admin"
ANSIBLE_STDOUT_CALLBACK=$ANSIBLE_CALLBACK ansible-playbook .github/workflows/ansible-playbooks/distributed_start_indexer_cluster.yml \
-i $ALLOCATOR_PATH/inventory \
-l indexers \
-e "tmp_path=$TMP_PATH" \
"${{ inputs.VERBOSITY }}"

- name: Execute server installation playbook
run: |
ANSIBLE_STDOUT_CALLBACK=$ANSIBLE_CALLBACK ansible-playbook .github/workflows/ansible-playbooks/distributed_install_wazuh.yml \
-i $ALLOCATOR_PATH/inventory \
-l managers \
-e "tmp_path=$TMP_PATH" \
"${{ inputs.VERBOSITY }}"

- name: Execute dashboard installation playbook
run: |
ANSIBLE_STDOUT_CALLBACK=$ANSIBLE_CALLBACK ansible-playbook .github/workflows/ansible-playbooks/distributed_install_dashboard.yml \
-i $ALLOCATOR_PATH/inventory \
-l dashboards \
-e "tmp_path=$TMP_PATH" \
"${{ inputs.VERBOSITY }}"

- name: Execute Python test playbook
run: |
ANSIBLE_STDOUT_CALLBACK=$ANSIBLE_CALLBACK ansible-playbook .github/workflows/ansible-playbooks/distributed_tests.yml \
-i $ALLOCATOR_PATH/inventory \
-l managers \
-e "tmp_path=$TMP_PATH" \
-e "test_name=$TEST_NAME" \
"${{ inputs.VERBOSITY }}"

- name: Compress Allocator VM directory
id: compress_allocator_files
if: always() && steps.allocator_instance.outcome == 'success' && inputs.DESTROY == false
run: |
zip -P "${{ secrets.ZIP_ARTIFACTS_PASSWORD }}" -r $ALLOCATOR_PATH.zip $ALLOCATOR_PATH

- name: Upload Allocator VM directory as artifact
if: always() && steps.compress_allocator_files.outcome == 'success' && inputs.DESTROY == false
uses: actions/upload-artifact@v4
with:
name: allocator-instance-${{ matrix.system }}
path: ${{ env.ALLOCATOR_PATH }}.zip

- name: Delete allocated VMs
if: always() && steps.allocator_instance.outcome == 'success' && inputs.DESTROY == true
run: |
instance_names=($INSTANCE_NAMES)

for i in ${!instance_names[@]}; do
instance_name=${instance_names[$i]}
track_file="$ALLOCATOR_PATH/track_${instance_name}.yml"

echo "Deleting instance: $instance_name using track file $track_file"

(
# Delete instance
python3 wazuh-automation/deployability/modules/allocation/main.py \
--action delete --provider aws --track-output $track_file
) &
done

# Wait for all deletion tasks to complete
wait

@@ -0,0 +1,12 @@
---
- hosts: all
  gather_facts: false
  tasks:
    - name: Copying the wazuh-install-files.tar to the instances
      copy:
        src: "{{ resources_path }}/wazuh-install-files.tar"
        dest: "{{ tmp_path }}/"
        force: yes
        remote_src: no
      become: yes
      become_user: root