Remove duplicate and random algorithms from tests #9626
Conversation
- Random tests (send-c_verify_ratio) lead to unpredictable results
- Duplicate algorithms lead to duplicate execution of tests
- Combined, the effects are even worse and lead to false positives

Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Codecov Report
@@            Coverage Diff            @@
##           master    #9626     +/-  ##
=========================================
- Coverage   79.37%   79.18%    -0.2%
=========================================
  Files         418      418
  Lines      123531   123531
=========================================
- Hits        98057    97817    -240
- Misses     25474    25714    +240
Continue to review full report at Codecov.
@behlendorf You might want a restart on those failed tests (unrelated).
Thanks for looking into this! Looks good. I agree we don't want to test each different compression level, and making the tests behave more consistently is a good thing.
What about:
rand_set_prop $vol compression "on" "off" "lzjb" "gzip" \ | ||
"gzip-1" "gzip-2" "gzip-3" "gzip-4" "gzip-5" "gzip-6" \ | ||
"gzip-7" "gzip-8" "gzip-9" | ||
rand_set_prop $vol compression "off" "lzjb" "gzip" "lz4" |
add zle? (separate PR?)
ZLE with randomly written data (which is often used) is basically the same as compression=off.
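To illustrate the point (dataset name and sizes are made up for the example): ZLE only collapses runs of zeros, so random data stays at a ~1.00x compression ratio, just like compression=off, while zero-filled data compresses away almost entirely.

# illustration only; "pool/zletest" is an assumed dataset name
zfs create -o compression=zle pool/zletest
dd if=/dev/urandom of=/pool/zletest/random.bin bs=1M count=64  # no zero runs: ~1.00x
dd if=/dev/zero of=/pool/zletest/zeros.bin bs=1M count=64      # all zero runs: compresses away
sync
zfs get compressratio pool/zletest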
Oh, I just stumbled over this... aren't we missing zle and lz4 in
@c0d3z3r0 I purposefully limited this PR to one thing only: I decided not to include the additional algorithms brainslayer added on top of my PR.
But I'll look into it today; I just need to go over all instances.
@c0d3z3r0
This is a spinoff from #8941, separating general, unrelated test changes from the zstd PR.
TL;DR:
- Random tests (send-c_verify_ratio) lead to unpredictable results

Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Motivation and Context
During my work investigating the compression test suite for #8941, a few things became apparent:
send-c_verify_ratio used randomly selected compression algorithms. Those algorithms differ widely in performance and for the majority consist of different levels of gzip. This led to difficulty comparing the performance of tests over multiple runs. It also led to situations where tests ended with a false-positive (pass) result, due to algorithms being skipped.

During my research into this there seemed to be 2 versions of multi-compression test suites:
The 2 tests using version 1 both suffered from the same problems listed above. This wouldn't be that bad on its own, but it gets worse when new algorithms with multiple levels get added.
Description
This change removes duplicate algorithms from all compression tests (leaving just one of each kind), including the random compression pool.
It also removes the random draw of algorithms from send-c_verify_ratio, to make sure results are repeatable. This also removes one whole loop of the test and thus improves performance.
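As a hedged before/after sketch of what removing the random draw means (the loop shape and the run_one_ratio_test helper are illustrative, not the actual test code):

# before (illustrative): a random draw can skip algorithms entirely,
# so two runs rarely exercise the same set
typeset -a algs=("off" "lzjb" "gzip" "lz4")
for i in 1 2 3; do
	run_one_ratio_test ${algs[$((RANDOM % ${#algs[@]}))]}
done

# after (illustrative): one pass over the fixed, deduplicated list,
# so every run is identical and one loop level disappears
for alg in "${algs[@]}"; do
	run_one_ratio_test $alg
done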
Is * taken into account?
The following is taken into account:
How Has This Been Tested?
I've been debating and testing multiple versions/levels of these changes for about a week now, after I came up with them due to (unrelated) issues in #8941.
Types of changes
Checklist:
Signed-off-by.