[WIP] gzip compression heuristic #7481

Closed
21 changes: 19 additions & 2 deletions module/zfs/zio_compress.c
@@ -115,11 +115,28 @@ zio_compress_data(enum zio_compress c, abd_t *src, void *dst, size_t s_len)
	if (c == ZIO_COMPRESS_EMPTY)
		return (s_len);

	/* No compression algorithms can read from ABDs directly */
	void *tmp = abd_borrow_buf_copy(src, s_len);

	/* for gzip 6+, use lz4 as a heuristic for compressibility */
	if (ci->ci_compress == gzip_compress && ci->ci_level >= 6) {
		/* lz4 must find some compression */
		if (s_len) {
			d_len = s_len - 1;
		} else {
			d_len = s_len;
		}
@behlendorf (Contributor) commented on May 1, 2018:

For this to be worthwhile we need to save at least a full sector, since that's what the output will be padded out to. Let's go ahead and assume a 4k sector size, since the actual ashift isn't available here and this is only a heuristic.

Member commented:

Making the assumption of 4K sectors would mean that gzip is mostly disabled on small-block files/zvols, right? That seems like an unnecessary regression in functionality. Could we somehow have the spa_t accessible here, so that we could check the spa_min_ashift?

Contributor replied:

That would be good, and after a quick glance it does look like it's available in all the callers so it can be passed in.

@RubenKelevra commented on May 5, 2018:

@behlendorf I'm not sure this is a safe assumption. If the data contains all zeros, it might be possible to compress something larger than one block down to a very small size, small enough to fit in the block pointer via the embedded_data feature. If we don't add an additional check to avoid this, wouldn't the check you propose completely break the embedded_data feature?

Contributor replied:

@RubenKelevra If the data is all zeros, it will be caught by the zio_compress_zeroed_cb() check on line 112, and written as a hole rather than as data.

The point @behlendorf was trying to make is that the SAVINGS needs to be at least one sector, else doing any compression is a waste, since it won't actually consume any less space on disk.


@kpande I don't feel comfortable with your replies anymore. Just stop.


@allanjude well, my point was: if the data is just one sector, no sector can ever be saved, so the check will always report zero sectors saved. If the data compresses very well but is also very small uncompressed, it will no longer be compressed, and thus the embedded_data feature won't be triggered.

Member commented:

@kpande: I think that @RubenKelevra is engaging in a technical discussion about the code in question. One needn't be an expert to ask questions and try to understand the code.

Contributor replied:

Everyone is welcome to contribute to the technical discussion as long as it's done respectfully.

@RubenKelevra the embedded block pointers do complicate things slightly. For small data blocks there may be a benefit in doing the compression even if a full sector isn't saved. This is actually also true for metadata blocks. My comment, which @allanjude clarified nicely, applies only to the common medium-to-large block case. There are quite a few cases here which need to be considered.
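The two cases discussed above (embedded-BP candidates vs. the one-sector-savings rule) could be combined into a single acceptance test. This is a hypothetical sketch, not the patch's code: the 112-byte embedded payload limit and the 4K sector size are assumptions, and `keep_compressed` is an invented name.

```c
#include <assert.h>
#include <stddef.h>

#define SECTOR_SIZE		4096	/* assumed sector size */
#define EMBEDDED_PAYLOAD_MAX	112	/* assumed embedded-BP payload capacity */

/* Round a byte count up to a whole number of sectors. */
static size_t
roundup_sector(size_t len)
{
	return ((len + SECTOR_SIZE - 1) & ~((size_t)SECTOR_SIZE - 1));
}

/*
 * Keep the compressed result if it is small enough to be embedded in the
 * block pointer (so embedded_data still triggers), or if it saves at
 * least one full sector on disk.
 */
static int
keep_compressed(size_t s_len, size_t c_len)
{
	if (c_len <= EMBEDDED_PAYLOAD_MAX)
		return (1);	/* embedded_data candidate */
	return (roundup_sector(c_len) + SECTOR_SIZE <= roundup_sector(s_len));
}
```

Under this rule a single-sector block can still be compressed when the result fits in the block pointer, which addresses the regression @RubenKelevra raised.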


		c_len = lz4_compress_zfs(tmp, dst, s_len, d_len, 0);
		if (c_len > d_len) {
			abd_return_buf(src, tmp, s_len);
			return (s_len);
		}
	}

	/* Compress at least 12.5% */
	d_len = s_len - (s_len >> 3);

	/* No compression algorithms can read from ABDs directly */
	void *tmp = abd_borrow_buf_copy(src, s_len);
	c_len = ci->ci_compress(tmp, dst, s_len, d_len, ci->ci_level);
	abd_return_buf(src, tmp, s_len);
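The patch's overall control flow can be seen in isolation with a toy stand-in for the compressors. This sketch is not ZFS code: `toy_fast_compress` fakes lz4's behavior (and stands in for gzip in the second pass), but the two thresholds mirror the patch exactly: the cheap pass must fit in `s_len - 1` bytes, and the real pass must save at least 12.5%.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Toy "compressor" standing in for lz4_compress_zfs/gzip: returns the
 * compressed length, or a value > d_len if the result would not fit.
 * Compressibility is simulated: all-'a' buffers compress 4:1, anything
 * else is incompressible.
 */
static size_t
toy_fast_compress(const void *src, size_t s_len, size_t d_len)
{
	const char *p = src;
	size_t c_len = (s_len && p[0] == 'a') ? s_len / 4 : s_len;
	return (c_len <= d_len ? c_len : d_len + 1);
}

/* Returns the compressed length, or s_len if the data is left as-is. */
static size_t
compress_with_heuristic(const void *src, size_t s_len)
{
	/* step 1: the cheap pass must find *some* compression */
	size_t d_len = (s_len ? s_len - 1 : 0);
	if (toy_fast_compress(src, s_len, d_len) > d_len)
		return (s_len);	/* incompressible: skip the expensive pass */

	/* step 2: the real pass must save at least 12.5% */
	d_len = s_len - (s_len >> 3);
	size_t c_len = toy_fast_compress(src, s_len, d_len); /* stand-in for gzip */
	return (c_len > d_len ? s_len : c_len);
}
```

The design point is that the cheap pass only gates the expensive one: an incompressible buffer costs a single fast pass, while compressible data pays for both but gets the stronger algorithm's ratio.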
