
kola: Increase amount of disk corruption in verity test #515

Merged
merged 1 commit into flatcar-master on Apr 4, 2024

Conversation

@pothos (Member) commented Apr 3, 2024

For a particular image build, the filesystem seemed to be more resistant to the corruption test, either because the corruption hit other structures or because some non-flushable cache covered more of the corrupted area. The verity panic can still be triggered by overwriting 100 MB instead of 10 MB.
Increase the amount of zeros written by the corruption test.

Testing done

Tested with the amd64 image at https://bincache.flatcar-linux.net/images/amd64/9999.9.9+kai-remove-acbuild/flatcar_production_image.bin.bz2, which was able to avoid the direct panic on the 10 MB corruption due to caching effects.

Comment on lines +99 to +100
// write zero bytes to first 100 MB
c.MustSSH(m, fmt.Sprintf(`sudo dd if=/dev/zero of=%s bs=1M count=100 status=none`, usrdev))
A Member commented:

I'm going to sound ignorant here, but I thought that basically changing one bit in the read-only /usr partition should trigger a failure to boot, no?

@pothos (Member, Author) commented:

Yes, but only when it gets read, and even with the ls that is not the case if the contents are cached.
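
(For illustration only, not part of this PR: one way to make sure the next read hits the corrupted device again would be to drop the page cache first. The line below is a hypothetical addition in the style of the existing test code.)

// hypothetical: flush and drop the page cache so the following read has to hit the corrupted device
c.MustSSH(m, `sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'`)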

// write zero bytes to first 10 MB
c.MustSSH(m, fmt.Sprintf(`sudo dd if=/dev/zero of=%s bs=1M count=10 status=none`, usrdev))
// write zero bytes to first 100 MB
c.MustSSH(m, fmt.Sprintf(`sudo dd if=/dev/zero of=%s bs=1M count=100 status=none`, usrdev))
A Contributor commented:

The flush can be done directly with dd oflag=dsync.

@jepio (Member) commented Apr 3, 2024:

oflag=dsync means syncing after every write, which is super slow. oflag=direct (no caching) and conv=fsync (flush once after all writes) might be better.
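
(For illustration, a sketch of how the changed test line could look if jepio's suggested flags were adopted; this is not what the PR merged.)

// write zero bytes to first 100 MB, bypassing the page cache and flushing once at the end
c.MustSSH(m, fmt.Sprintf(`sudo dd if=/dev/zero of=%s bs=1M count=100 oflag=direct conv=fsync status=none`, usrdev))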

@ader1990 (Contributor) commented Apr 3, 2024:

On NVMe:

$: time dd if=/dev/zero of=t bs=1M count=100 status=none oflag=dsync

real    0m0.114s
user    0m0.000s
sys     0m0.057s

$: time dd if=/dev/zero of=t bs=1M count=100 status=none

real    0m0.057s
user    0m0.000s
sys     0m0.053s

@ader1990 (Contributor) commented Apr 3, 2024:

On SSD:

$: time dd if=/dev/zero of=t bs=1M count=100 status=none oflag=dsync

real    0m0.514s
user    0m0.001s
sys     0m0.347s

$: time dd if=/dev/zero of=t bs=1M count=100 status=none

real    0m0.330s
user    0m0.001s
sys     0m0.265s

@jepio (Member) commented Apr 3, 2024:

Alright, "super slow" may be exaggerated :) but it does get much slower for bigger writes from within a VM:

core@localhost ~ $ time sudo dd if=/dev/zero of=/dev/vdb bs=1M count=1000 status=none oflag=direct conv=fsync

real    0m0.651s
user    0m0.001s
sys     0m0.009s
core@localhost ~ $ time sudo dd if=/dev/zero of=/dev/vdb bs=1M count=1000 status=none oflag=dsync

real    0m3.348s
user    0m0.001s
sys     0m0.010s

@pothos (Member, Author) commented:

Could that lead to the ssh command failing by triggering the panic? Currently there are two separate commands: one that writes the zeros and places the corruption but is not expected to fail, and a second one that is expected to fail. We could merge them into one, but maybe the test logic is cleaner this way.

A Member commented:

Yes.

@pothos (Member, Author) commented:

Merging the commands would also work when combined with && to keep the error checking for the first one. When setting flags for dd to remove the sync, we would just have to do enough tests to make sure that the test doesn't become flaky, or we could also keep the extra sync just to be sure.
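
(A sketch of that merged variant, not what this PR does: the dd and the read that is expected to fail chained with &&, with `ls /usr` standing in as a placeholder for whatever expected-to-fail command the test actually uses, and c.SSH assumed to be the non-fatal counterpart of c.MustSSH.)

// hypothetical merged form: && keeps the error checking for the dd step, while the trailing
// read is the part expected to trigger the verity panic, so the error (or lost connection)
// is checked instead of using MustSSH
if _, err := c.SSH(m, fmt.Sprintf(`sudo dd if=/dev/zero of=%s bs=1M count=100 status=none && ls /usr`, usrdev)); err == nil {
	c.Fatalf("expected the read after corruption to fail")
}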

@pothos merged commit 34752ee into flatcar-master on Apr 4, 2024
2 checks passed
@pothos deleted the kai/verity-test branch on April 4, 2024 at 08:22