
WIP: Adaptive compression [was: auto compression] #7560

Open
wants to merge 2 commits into master

Conversation

RubenKelevra

This is a successor to the auto compression PR (#5928). That PR went stale without substantial criticism, so I decided to pick up the work.

Description

I did some performance measurements and the result wasn't meeting my expectations. So I tweaked the algorithm and removed off (no compression) as an option, since the algorithm isn't actually able to determine the additional latency resulting from the larger data size when no compression is applied. I added gzip-2 through gzip-9 as options for the algorithm to choose from.

The algorithm should adapt to different CPU load situations, since it measures the latency introduced over the last 1000 compression cycles (one cycle = one block). If the load of the system changes over time, it might choose different compression algorithms.

In light of zstd, the adaptive compression keyword might be a good choice for a future adaptive zstd mode, selecting different zstd compression levels and relying on the same mechanism to choose among them.

Motivation and Context

For the initial motivation, head over to the original PR #5928.

How Has This Been Tested?

I ran a simple benchmark on a single HDD with different scenarios:

  • with and without load
  • with some common block sizes
  • with dva_throttle on and off
  • for xfs and ext4 on zvols

My corpus is /usr/lib from my system (5.9 GB, with 117,920 files in 17,777 folders), copied with cp -ax from an SSD to an HDD.

All ZFS settings were left at their defaults, except for checksum, which was set to edonr.

System specs:

  • Intel i3 2330M @ 2.20GHz (2 physical / 4 logical cores)
  • 12 GB DDR3 memory
  • 2.5" 750 GB Samsung HDD (as destination)
  • Intel SSD 320 (as source)

I understand that these test results might not be valid for a typical server application, but they should be a good measurement for an average notebook user: a use case for ZFS where latency and throughput are important too.

The load scenario was a synthetic, purely CPU/memory-bound user-space program with one thread per logical CPU core. The program used for this was BOINC with SETI@home work units.

The load of the system was measured 75 seconds into the copy (on runs that completed in less than 75 s, the load value is somewhat inaccurate). Overall, this value isn't really hard proof that one test result is better than another; I just wanted to show that the load of the system doesn't skyrocket when using adaptive instead of lz4 or a gzip level.

In the original PR the author explained that dva_throttle might interfere with this adaptive compression algorithm selection. I can confirm this: it may result in a slightly worse compression ratio, but I could not find a distinct drop in I/O performance that would block inclusion into master. Furthermore, across all compression algorithms the performance impact was mixed, both with and without dva_throttle.

Overall, the performance numbers for adaptive compression often look pretty good. I wasn't expecting performance better than plain LZ4 compression, but it performed better in some scenarios.

I'd also like to point out that I used the filesystems natively on the zvols, without any parameters. In my test, ZFS sets the physical sector size to the recordsize, so the filesystems are aware of this (ext4 at least) and might use some (automatic) optimizations for those large physical sector sizes. This might lead to different results than in VMs, where the physical block size for the filesystems inside the VM is usually 512 or 4096 bytes.

adaptive compression stats.pdf

adaptive compression stats zvol.pdf

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (a change to man pages or other documentation)

Checklist:

  • My code follows the ZFS on Linux code style requirements.
  • I have updated the documentation accordingly.
  • I have read the contributing document.
  • I have added tests to cover my changes.
  • All new and existing tests passed.
  • All commit messages are properly formatted and contain Signed-off-by.
  • Change has been approved by a ZFS on Linux member.

@codecov

codecov bot commented May 25, 2018

Codecov Report

Merging #7560 into master will increase coverage by 0.04%.
The diff coverage is 71.9%.


@@            Coverage Diff             @@
##           master    #7560      +/-   ##
==========================================
+ Coverage   77.41%   77.45%   +0.04%     
==========================================
  Files         336      337       +1     
  Lines      107605   107725     +120     
==========================================
+ Hits        83304    83440     +136     
+ Misses      24301    24285      -16
Flag Coverage Δ
#kernel 78.1% <14.78%> (-0.05%) ⬇️
#user 66.45% <76.57%> (+0.07%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 0dc2f70...5e18cd1.

@RubenKelevra
Author

I'll update this PR later today to bring the code in line with the style check, and look into the other tests.

@RubenKelevra
Author

RubenKelevra commented May 26, 2018

@ahrens I addressed the concerns you raised about an incompatibility with zio_dva_throttle in #5928 (comment) by testing it extensively (see the attached PDFs for the numbers).

Do you think that the allocation throttle still needs tweaking?

@ahrens ahrens added the Type: Feature Feature request or new feature label Jun 7, 2018
@ahrens
Member

ahrens commented Jun 9, 2018

@allanjude may be interested in this.

"ZIO_COMPRESS_GZIP_6", "ZIO_COMPRESS_GZIP_7",
"ZIO_COMPRESS_GZIP_8", "ZIO_COMPRESS_GZIP_9",
"ZIO_COMPRESS_ZLE", "ZIO_COMPRESS_LZ4",
"ZIO_COMPRESS_ADAPTIVE", "ZIO_COMPRESS_FUNCTION"]
Member

this looks like it's also missing LZ4

Contributor

It's there, on the right 😄

Author

@ahrens I don't see an issue here; the list has just been extended by one member. Have I overlooked something?

Member

I was mistaken, this looks right. I was thrown off by the two-column formatting.

@@ -3613,6 +3614,14 @@ vdev_stat_update(zio_t *zio, uint64_t psize)
}

if (zio->io_delta && zio->io_delay) {
int n = 1000; // average over 1000 zios
Member

should this be a tunable?

Author

Well, do you have a specific use case in mind where this might need tweaking?

I think 1000 I/Os is a good window for averaging the delay introduced by compression: it can still react to changing system load while leveling out short latency spikes.

But I haven't taken a scientific approach to determine the optimal value, so this relies solely on guessing.

* CDDL HEADER END
*/

#include <sys/compress_adaptive.h>
Member

I'd appreciate seeing more comments in this file. Perhaps a big comment at the beginning explaining the theory behind this, and then some functions could also use comments describing their purpose. It looks like there are a bunch of concepts implicit in this file that could use explanation, e.g. "optimal delay", "faster/slower/optimal level".

Author

Thanks, I'll look into this. Sorry for the delay; I only have a few weekdays to spare at the moment.

@behlendorf behlendorf added the Status: Work in Progress Not yet ready for general review label Jul 27, 2018
@behlendorf behlendorf added Status: Revision Needed Changes are required for the PR to be accepted and removed Status: Work in Progress Not yet ready for general review labels Sep 25, 2018
@ahrens ahrens mentioned this pull request Jan 8, 2019
@RJVB

RJVB commented Jan 9, 2019

A few thoughts (keeping in mind that I have read through the exchange rather quickly):

  • Why gzip-9, knowing it's hardly ever more efficient but definitely always slower than gzip-7 and even gzip-8? Going for the maximum is fine for one-time (= CLI) commands, but not so much for transparent compression.
  • Shouldn't adaptive compression be based on a per-file compression property? If not, how do you know what's optimal when multiple files are being written at the same time? Think of an audio or video recorder that reads from a device and dumps to MP3 or MP4; it'll probably have a cache file containing raw data waiting to be processed.
  • lz4 can already be surprisingly good; typical build directories of mine, holding text (source code) and binary data (object files), often compress to over 71% with almost no overhead, while gzip-8 gains only a few painstakingly slow extra percent. I've written a utility for recompressing specific files and directories that makes it easy to compare the benefits, but also shows the cost very clearly (zfsctool in my github:RJVB/afsctool repo).

More in general:

  • Any work on improving compression should start (IMVHO) by updating the embedded lz4 code. I asked the lz4 authors about this last month, and their estimate was that the current lz4 code should be significantly better (much more so in terms of decompression speed; use of a dictionary should improve compression, especially of small chunks) while remaining completely backward compatible.

@rincebrain
Contributor

@RJVB If nothing else, updating the embedded LZ4 implementation would probably run afoul of the same issues as updating zstd in #8044 - namely, that the checksums of blocks produced by the old version won't match what's produced by the new version, even though the decompressed data payloads are identical, which will (IIUC) require L2ARC changes to accommodate. (See here and the surrounding conversation for what I'm talking about.)

@RJVB

RJVB commented Jan 9, 2019 via email

@ahrens
Member

ahrens commented Jan 9, 2019

@RJVB Tinker ahead! But let's move the discussion of updating lz4 to a different thread, as it's unrelated to this PR (except that maybe this PR would be less useful with better LZ4 in some cases). Could you open a new Issue to discuss updating lz4 code?

@RJVB

RJVB commented Jan 9, 2019 via email

@RJVB

RJVB commented Jan 9, 2019 via email

@RJVB

RJVB commented Jan 9, 2019

maybe it would be possible to create a string format that describes the algorithms an administrator would be ok with using

Yes. That, or the opposite (blacklisting compressors). Just use a common separator that doesn't require quoting in the shell, and don't worry how it looks; it's not a beauty contest ;)

Contributor

@PrivatePuffin PrivatePuffin left a comment

If anyone (or the original author) wants to take this on, here is an incomplete list of to-dos:

@BrainSlayer
This was what I was referring to in #7481.
Once you get ZSTD merged, this might actually have some merit in combination with #8941.

Most of the work needed here is also just a rinse-and-repeat of your work on ZSTD...

@snajpa snajpa mentioned this pull request Jun 20, 2020
@PrivatePuffin
Contributor

PrivatePuffin commented Aug 23, 2020

@BrainSlayer
Considering we got ZSTD merged, this might be an interesting follow-up project now...
As we could use such a feature to turn unused CPU processing power into higher compression :)

It might be interesting to look into the contrib included with ZSTD for adaptive compression:
https://github.com/facebook/zstd/tree/a2f492f2b3f364d4fae5bf2548e943da1bd9cacf/contrib/adaptive-compression

facebook/zstd#1327

@BrainSlayer
Contributor

From the documentation, it uses unused resources based on things like current CPU load. But you also have to take into account the much higher memory requirements of higher compression levels, which cannot be ignored. Say I decompress a tar file on a machine: the CPU is immediately maxed out, and the compression level will always stay low. So the approach here theoretically only works for machines with rare file access. Even if you just store a bigger file, the level will decrease after the first few blocks. So it's questionable whether there is any useful case for this patch. And if we add zstd to the selection, gzip will never be used and can be removed from the list, since zstd is faster in every way, with a much higher compression ratio at the higher levels. So rechecking this with zstd in a real-life scenario at least makes sense.

@PrivatePuffin
Contributor

@BrainSlayer Yeah, I agree it's quite a rare scenario.
Anyway, to be able to test it, I'm doing my best rebasing it at the moment, although rebasing these things isn't my specialty. So if you care to take a look: please do!
#11002

PrivatePuffin pushed a commit to PrivatePuffin/zfs that referenced this pull request Sep 29, 2020
This Commit rebases and squashes openzfs#7560

Co-authored-by:  Andy Lavr <andy.lavr@gmail.com>
Co-authored-by:  n1kl (bunge) <n1kl@users.noreply.github.com>
Co-authored-by:  RubenKelevra <ruben@vfn-nrw.de>
Signed-off-by: n1kl (bunge) <n1kl@users.noreply.github.com>
Signed-off-by: RubenKelevra <ruben@vfn-nrw.de>
Signed-off-by: Andy Lavr <andy.lavr@gmail.com>
Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
@BrainSlayer
Contributor

@Ornias1993 @RubenKelevra there is a work-in-progress branch "adaptive" in my GitHub tree which resolves all conflicts. I'm still working to check that everything works, and as usual I need to fix API issues.

@RubenKelevra
Author

@Ornias1993 @RubenKelevra there is a work-in-progress branch "adaptive" in my GitHub tree which resolves all conflicts. I'm still working to check that everything works, and as usual I need to fix API issues.

Great, thanks a lot!

I was extremely busy, so I haven't been able to look into it anymore. I'm sorry, guys.

Great to see this is still moving forward!

@RubenKelevra
Author

RubenKelevra commented Oct 1, 2020

@BrainSlayer

From the documentation, it uses unused resources based on things like current CPU load. But you also have to take into account the much higher memory requirements of higher compression levels, which cannot be ignored. Say I decompress a tar file on a machine: the CPU is immediately maxed out, and the compression level will always stay low. So the approach here theoretically only works for machines with rare file access. Even if you just store a bigger file, the level will decrease after the first few blocks. So it's questionable whether there is any useful case for this patch. And if we add zstd to the selection, gzip will never be used and can be removed from the list, since zstd is faster in every way, with a much higher compression ratio at the higher levels. So rechecking this with zstd in a real-life scenario at least makes sense.

Well, I hope you can test this theory. The adaptive approach actually steals CPU time from other processes, since it runs in the kernel and with a higher priority.

But it will only steal from other processes if the storage medium can't accept the data fast enough. This could be due to other operations accessing the storage medium, or because the medium is simply slower than the CPU can compress.

The idea is more about reducing the total I/O time by trying to compress harder when the storage medium is hitting a wall, to reduce the total amount of data that needs to be written, rather than adapting to different CPU loads.

Obviously, a higher CPU load might still increase the compression time and thus lower the chosen compression level, due to lower boost clocks or parts of the CPU being blocked by a concurrent thread on a different virtual CPU.

Hope this makes sense :)

@PrivatePuffin
Contributor

PrivatePuffin commented Oct 1, 2020

@BrainSlayer I already got it building and passing most tests a few days ago. I think you could better use that to continue from here:
#11002

I also don't get your comment, as I finished the rebase a few days ago?! :S

There is only one test failure that I know of currently, and I also added adaptive to the compression test suite. I also documented the failure that's messing with FreeBSD; it's all already in #11002 ;)


edit
@BrainSlayer I looked at your rebase.
It's not going to build; there are some changes that need to be made to the code to get it building. Everything needed to get it building is already fixed in #11002, so please use it (so my work and the work of others doesn't go to waste getting it working).

I added comments to some of the area's of your commit that are not going to work:
BrainSlayer@922dde6

@BrainSlayer
Contributor

@Ornias1993 not yet, still fixing API changes. I just merged the original patches; the rest is in progress in the background. It should be done in 60 minutes.

@PrivatePuffin
Contributor

PrivatePuffin commented Oct 1, 2020

@BrainSlayer Why do the work twice? I already fixed it all days ago, as pointed out.
I don't get your "not yet"; it's all fixed already in the new PR.

So:

It should be done in 60 minutes.

Should be:

48 hours ago it was already done.

Dude, you are seriously duplicating by hand the work I already did. Do you just like doing needless extra effort or something? This isn't a school thing where you get extra credit for needless extra work, might I remind you of that? ;)

(Not to be harsh, but I totally don't get what you are doing if everything was already done for you days ago and you could just take it and continue with the actually interesting stuff.)

#11002 provides a clean start for you to work on, with all code attribution fixed for you (which, to be honest, isn't actually done right here either; sign-offs etc. are wrong) and everything rebased, with tests ready to rock.

BrainSlayer added a commit to BrainSlayer/zfs that referenced this pull request Oct 1, 2020
Signed-off-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
@BrainSlayer
Contributor

BrainSlayer commented Oct 1, 2020

@Ornias1993 I did it because you asked for it. My tree compiles now, but for zstd support it has to be reworked, and you apparently did that too already. Don't take it as competition; I just did it because I felt like it. I did not see that you had already done the job. But you may compare your work with mine now; maybe it's useful.

@PrivatePuffin
Contributor

PrivatePuffin commented Oct 1, 2020

@BrainSlayer uhm, no, I didn't ask you to rebase this... I said I was rebasing it and asked you to check it.

I literally (like literally-literally) asked you to take a look at my work in #11002,
not to redo this one... See:

Anyway, to be able to test it, I'm doing my best rebasing it at the moment, although rebasing these things isn't my specialty. So if you care to take a look: please do!
#11002


Don't take it as competition; I just did it because I felt like it.

It isn't about competition, it's about not doing work twice ;)

I did not see that you had already done the job.

Mistakes are only human 👍


Anyway:

But you may compare your work with mine now.

Please read my PR; I'm not going to carry this. I only intended to rebase it and add some required tests. I'm not qualified to continue it any further than the rebase.

So if you want to continue this, please use my work and cross-reference accordingly. I'm not taking over this PR; I've rebased it and added tests, and that's all I'm going to do with it.

but for zstd support it has to be reworked

Yes, I cheated to get it building, but it indeed has to be reworked.

To be honest, I'm debating whether this should just be refactored into zstd-adaptive with a fixed set of zstd levels.


edit
I looked at your recent changes.
I'll implement the way you feed the zstd levels into my rebase, because that's the way to do it nicely.
Thanks for figuring that out! I also prefer setting the functions static, to be honest, so I'll redo that (someone else advised the header approach, but it's ugly as heck).

@BrainSlayer
Contributor

I just set them static since declarations were missing, so I avoid compiler warnings; that's all. The level handling still only works for gzip; zstd handles levels very differently, which is why I added a TODO for it. I will look into it more deeply and switch to your code to see what I can enhance, but I'm still skeptical about the whole concept.

PrivatePuffin pushed a commit to PrivatePuffin/zfs that referenced this pull request Oct 7, 2020
- This Commit rebases and squashes openzfs#7560
- Adds tests for Adaptive compression

Co-authored-by:  n1kl (bunge) <n1kl@users.noreply.github.com>
Co-authored-by:  Ruben Kelevra <RubenKelevra@users.noreply.github.com>
Co-authored-by: Andy Lavr <andy.lavr@gmail.com>
Co-authored-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Co-authored-by:  Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Signed-off-by: n1kl (bunge) <n1kl@users.noreply.github.com>
Signed-off-by: RubenKelevra <ruben@vfn-nrw.de>
Signed-off-by: Andy Lavr <andy.lavr@gmail.com>
Signed-off-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
PrivatePuffin pushed a commit to PrivatePuffin/zfs that referenced this pull request Oct 8, 2020
This fixes some of the rebase mistakes on the documentation
It also adds some commenting to compress_adaptive

Co-authored-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Signed-off-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl> (+1 squashed commits)

Squashed commits:

[8de8850] Add adaptive compression

- This Commit rebases and squashes openzfs#7560
- Adds tests for Adaptive compression

Co-authored-by:  n1kl (bunge) <n1kl@users.noreply.github.com>
Co-authored-by:  Ruben Kelevra <RubenKelevra@users.noreply.github.com>
Co-authored-by: Andy Lavr <andy.lavr@gmail.com>
Co-authored-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Co-authored-by:  Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Signed-off-by: n1kl (bunge) <n1kl@users.noreply.github.com>
Signed-off-by: RubenKelevra <ruben@vfn-nrw.de>
Signed-off-by: Andy Lavr <andy.lavr@gmail.com>
Signed-off-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
PrivatePuffin pushed a commit to PrivatePuffin/zfs that referenced this pull request Oct 8, 2020
- This Commit rebases and squashes openzfs#7560
- Add tests for Adaptive compression
- Add some commenting to compress_adaptive

Co-authored-by:  n1kl (bunge) <n1kl@users.noreply.github.com>
Co-authored-by:  Ruben Kelevra <RubenKelevra@users.noreply.github.com>
Co-authored-by: Andy Lavr <andy.lavr@gmail.com>
Co-authored-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Co-authored-by:  Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Signed-off-by: n1kl (bunge) <n1kl@users.noreply.github.com>
Signed-off-by: RubenKelevra <ruben@vfn-nrw.de>
Signed-off-by: Andy Lavr <andy.lavr@gmail.com>
Signed-off-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
RubenKelevra added a commit to RubenKelevra/openzfs_zfs that referenced this pull request Oct 11, 2020
- This Commit rebases and squashes openzfs#7560
- Add tests for Adaptive compression
- Add some commenting to compress_adaptive

Co-authored-by:  n1kl (bunge) <n1kl@users.noreply.github.com>
Co-authored-by:  RubenKelevra <cyrond@gmail.com>
Co-authored-by: Andy Lavr <andy.lavr@gmail.com>
Co-authored-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Co-authored-by:  Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Signed-off-by: n1kl (bunge) <n1kl@users.noreply.github.com>
Signed-off-by: RubenKelevra <cyrond@gmail.com>
Signed-off-by: Andy Lavr <andy.lavr@gmail.com>
Signed-off-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
@PrivatePuffin
Contributor

Don't forget your own copyright lines too ;)

@PrivatePuffin
Contributor

The following file needs my and @BrainSlayer's copyright header:
module/zfs/compress_adaptive.c

The following needs only @BrainSlayer's copyright header:
module/zfs/zio.c

I think that's all that is copyrightable in the rebase...

RubenKelevra added a commit to RubenKelevra/openzfs_zfs that referenced this pull request Oct 11, 2020
- This Commit rebases and squashes the original work done on openzfs#7560

Additionally to the original patch from openzfs#7560:
- Add tests for Adaptive compression
- Add some commenting to compress_adaptive
- Small fixes
- Copyright headers

Co-authored-by: n1kl (bunge) <n1kl@users.noreply.github.com>
Co-authored-by: RubenKelevra <cyrond@gmail.com>
Co-authored-by: Andy Lavr <andy.lavr@gmail.com>
Co-authored-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Co-authored-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
Signed-off-by: n1kl (bunge) <n1kl@users.noreply.github.com>
Signed-off-by: RubenKelevra <cyrond@gmail.com>
Signed-off-by: Andy Lavr <andy.lavr@gmail.com>
Signed-off-by: Sebastian Gottschall <s.gottschall@dd-wrt.com>
Signed-off-by: Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl>
@RubenKelevra RubenKelevra changed the title Adaptive compression [was: auto compression] WIP: Adaptive compression [was: auto compression] Oct 11, 2020
@ahrens
Member

ahrens commented Jun 4, 2021

@RubenKelevra It looks like you made some progress on this last year. Is this now complete and ready for a thorough code review?

@ahrens ahrens removed Status: Revision Needed Changes are required for the PR to be accepted Status: Inactive Not being actively updated labels Jun 4, 2021
@PrivatePuffin
Contributor

@ahrens It still needs major rework AFAIK, and some thorough design review, as discussed here:
#11002

Labels
Type: Feature Feature request or new feature

9 participants