ZTS: Use QEMU for tests on Linux and FreeBSD #16537

Closed
14 changes: 14 additions & 0 deletions .github/workflows/scripts/README.md
@@ -0,0 +1,14 @@

Workflow for each operating system (see the bash sketch below):
- install QEMU on the GitHub runner
- download the current cloud image of the operating system
- start and initialize that image via cloud-init
- install dependencies and power off the system
- start the system, build OpenZFS, and power off again
- clone the build system and start 2 instances of it
- run the functional tests, which complete in around 3 hours
- when the tests are done, prepare the log files
- show detailed results for each system
- finally, generate the job summary

/TR 14.09.2024
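For orientation, here is a minimal sketch of how a runner job could chain these stages. The driver below is an illustrative assumption based on the list above, not a verbatim copy of the workflow in this PR; only qemu-1-setup.sh is shown in this changeset.

#!/usr/bin/env bash
# Hypothetical driver: run the numbered stage scripts in order.
set -eu

cd .github/workflows/scripts
./qemu-1-setup.sh   # prepare the runner: QEMU packages, disks, ZFS pool
# ...further numbered stages (cloud image download, cloud-init setup,
# dependency install, build, cloning the VM, running ZTS, and the final
# summary) would follow the same pattern.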
109 changes: 109 additions & 0 deletions .github/workflows/scripts/merge_summary.awk
@@ -0,0 +1,109 @@
#!/bin/awk -f
#
# Merge multiple ZTS test results summaries into a single summary. This is
# needed when you're running different parts of ZTS on different test
# runners or VMs.
#
# Usage:
#
# ./merge_summary.awk summary1.txt [summary2.txt] [summary3.txt] ...
#
# or:
#
# cat summary*.txt | ./merge_summary.awk
#
BEGIN {
i=-1
pass=0
fail=0
skip=0
state=""
cl=0
el=0
upl=0
ul=0

# Total seconds of tests runtime
total=0;
}

# Skip empty lines
/^\s*$/{next}

# Skip Configuration and Test lines
/^Test:/{state=""; next}
/Configuration/{state="";next}

# When we see "test-runner.py" stop saving config lines, and
# save test runner lines
/test-runner.py/{state="testrunner"; runner=runner$0"\n"; next}

# We need to differentiate the PASS counts from test result lines that start
# with PASS, like:
#
# PASS mv_files/setup
#
# Use state="pass_count" to differentiate
#
/Results Summary/{state="pass_count"; next}
/PASS/{ if (state=="pass_count") {pass += $2}}
/FAIL/{ if (state=="pass_count") {fail += $2}}
/SKIP/{ if (state=="pass_count") {skip += $2}}
/Running Time/{
state="";
running[i]=$3;
split($3, arr, ":")
total += arr[1] * 60 * 60;
total += arr[2] * 60;
total += arr[3]
next;
}

/Tests with results other than PASS that are expected/{state="expected_lines"; next}
/Tests with result of PASS that are unexpected/{state="unexpected_pass_lines"; next}
/Tests with results other than PASS that are unexpected/{state="unexpected_lines"; next}
{
if (state == "expected_lines") {
expected_lines[el] = $0
el++
}

if (state == "unexpected_pass_lines") {
unexpected_pass_lines[upl] = $0
upl++
}
if (state == "unexpected_lines") {
unexpected_lines[ul] = $0
ul++
}
}

# Reproduce summary
END {
print runner;
print "\nResults Summary"
print "PASS\t"pass
print "FAIL\t"fail
print "SKIP\t"skip
print ""
print "Running Time:\t"strftime("%T", total, 1)
if (pass+fail+skip > 0) {
percent_passed=(pass/(pass+fail+skip) * 100)
}
printf "Percent passed:\t%3.2f%", percent_passed

print "\n\nTests with results other than PASS that are expected:"
asort(expected_lines, sorted)
for (j in sorted)
print sorted[j]

print "\n\nTests with result of PASS that are unexpected:"
asort(unexpected_pass_lines, sorted)
for (j in sorted)
print sorted[j]

print "\n\nTests with results other than PASS that are unexpected:"
asort(unexpected_lines, sorted)
for (j in sorted)
print sorted[j]
}
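As a usage sketch (the hostnames and result paths below are hypothetical), merging the summaries produced by two test VMs could look like this:

# Hypothetical example: collect each VM's ZTS summary, then merge them.
scp vm1:/var/tmp/test_results/summary.txt summary1.txt
scp vm2:/var/tmp/test_results/summary.txt summary2.txt
./merge_summary.awk summary1.txt summary2.txt > merged-summary.txt

Note that the script relies on gawk extensions (asort, strftime), so gawk needs to be the awk installed on the machine doing the merge.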
91 changes: 91 additions & 0 deletions .github/workflows/scripts/qemu-1-setup.sh
@@ -0,0 +1,91 @@
#!/usr/bin/env bash

######################################################################
# 1) set up the QEMU instance on the action runner
######################################################################

set -eu

# install needed packages
export DEBIAN_FRONTEND="noninteractive"
sudo apt-get -y update
sudo apt-get install -y axel cloud-image-utils daemonize guestfs-tools \
ksmtuned virt-manager linux-modules-extra-$(uname -r) zfsutils-linux

# generate ssh keys
rm -f ~/.ssh/id_ed25519
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -q -N ""

# we expect a RAM shortage, so tune KSM (kernel samepage merging)
cat << EOF | sudo tee /etc/ksmtuned.conf > /dev/null
# https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-ksm
KSM_MONITOR_INTERVAL=60

# Millisecond sleep between KSM scans for a 16GB server.
# Smaller servers sleep more, bigger ones sleep less.
KSM_SLEEP_MSEC=10
KSM_NPAGES_BOOST=300
KSM_NPAGES_DECAY=-50
KSM_NPAGES_MIN=64
KSM_NPAGES_MAX=2048

KSM_THRES_COEF=25
KSM_THRES_CONST=2048

LOGFILE=/var/log/ksmtuned.log
DEBUG=1
EOF
sudo systemctl restart ksm
sudo systemctl restart ksmtuned
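
# Note: whether KSM is actually merging pages can be checked later via the
# counters the kernel exposes under /sys/kernel/mm/ksm, for example:
#   grep -H . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing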

# stop services we don't need
sudo systemctl stop docker.socket
sudo systemctl stop multipathd.socket

# remove default swapfile and /mnt
sudo swapoff -a
sudo umount -l /mnt
DISK="/dev/disk/cloud/azure_resource-part1"
sudo sed -e "s|^$DISK.*||g" -i /etc/fstab
sudo wipefs -aq $DISK
sudo systemctl daemon-reload

sudo modprobe loop
sudo modprobe zfs

# partition the disk as needed
DISK="/dev/disk/cloud/azure_resource"
sudo sgdisk --zap-all $DISK
sudo sgdisk -p \
-n 1:0:+16G -c 1:"swap" \
-n 2:0:0 -c 2:"tests" \
$DISK
sync
sleep 1

# swap partition with the same size as RAM
sudo mkswap $DISK-part1
sudo swapon $DISK-part1

# ~60GB data disk (the rest of the resource disk)
SSD1="$DISK-part2"

# 10GB loop-backed data disk on the ext4 root filesystem
sudo fallocate -l 10G /test.ssd1
SSD2=$(sudo losetup -b 4096 -f /test.ssd1 --show)
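# (-b 4096 gives the loop device a 4096-byte logical sector size, so it
# matches the ashift=12 of the pool created below)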

# adjust ZFS module parameters and create the pool
exec 1>/dev/null
ARC_MIN=$((1024*1024*256))
ARC_MAX=$((1024*1024*512))
echo $ARC_MIN | sudo tee /sys/module/zfs/parameters/zfs_arc_min
echo $ARC_MAX | sudo tee /sys/module/zfs/parameters/zfs_arc_max
echo 1 | sudo tee /sys/module/zfs/parameters/zvol_use_blk_mq
sudo zpool create -f -o ashift=12 zpool $SSD1 $SSD2 \
-O relatime=off -O atime=off -O xattr=sa -O compression=lz4 \
-O mountpoint=/mnt/tests

# no I/O scheduler needed for these devices
for i in /sys/block/s*/queue/scheduler; do
echo "none" | sudo tee $i > /dev/null
done
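
A quick sanity check after this stage (a sketch, not part of the script; device names vary by runner) could confirm the swap, the pool, and the scheduler settings:

# Hypothetical post-setup checks
swapon --show                        # the 16G swap partition should be active
zpool status zpool                   # both data vdevs should be ONLINE
cat /sys/block/sda/queue/scheduler   # should print "[none] ..."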