
Missing good documentation for TRIM on L2ARC cache devices #16644

Closed
mabod opened this issue Oct 13, 2024 · 6 comments
Labels
Type: Feature (Feature request or new feature)

Comments

mabod commented Oct 13, 2024

I just added an SSD as a cache device to my main pool (4 HDDs in RAID10), but I could not find any good documentation about how the TRIM operation is performed on SSD cache devices.

I read about persistent and non-persistent L2ARC, about parameters like l2arc_trim_ahead and l2arc_write_max, and I also read PR #9789.

But I still have no clue if and how TRIM is performed on my new cache SSD, and I do not understand what I need to configure. Should I go for a persistent L2ARC? Why? How?

From my point of view, the easiest setup would be if a TRIM were performed on every import of a pool.

mabod added the Type: Feature label on Oct 13, 2024
amotin (Member) commented Oct 14, 2024

Persistent L2ARC is enabled by default, unless you disable it via a module parameter, but AFAIK it should have nothing to do with TRIM. Generally, TRIM for L2ARC is a questionable concept: since L2ARC is written sequentially, in some opinions it should not cause the kind of excessive flash fragmentation that TRIM is meant to solve. But if you wish, according to zpoolprops(7): "TRIM on L2ARC devices is enabled by setting l2arc_trim_ahead > 0", in combination with autotrim=on.
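
In concrete terms, a minimal sketch of those two settings on Linux (the pool name zstore comes from later in this thread; the l2arc_trim_ahead value of 100 is only an example, not a recommendation):

	# Trim ahead of the L2ARC write hand by this percentage of
	# l2arc_write_max (0, the default, disables L2ARC TRIM).
	echo 100 > /sys/module/zfs/parameters/l2arc_trim_ahead

	# Enable automatic TRIM for the pool.
	zpool set autotrim=on zstore

	# Persistent L2ARC is controlled separately; it defaults to 1 (enabled).
	cat /sys/module/zfs/parameters/l2arc_rebuild_enabled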

mabod (Author) commented Oct 14, 2024

Thank you, but where can I read about this, especially about autotrim=on in conjunction with cache devices?

amotin (Member) commented Oct 14, 2024

Is the section for autotrim in man 7 zpoolprops not enough? Please propose an extension.

mabod (Author) commented Oct 14, 2024

To me it is not clear that the autotrim property includes log and cache devices. If that is the case, a zpool trim command should also do the trick. But I have 4 HDDs in the pool plus the SSD for cache, and when I initiate a zpool trim for that pool, I get an error that trim is not supported:


╰─# zpool status zstore               
  pool: zstore
 state: ONLINE
  scan: scrub repaired 0B in 05:42:11 with 0 errors on Tue Oct  1 05:46:43 2024
config:

	NAME                               STATE     READ WRITE CKSUM
	zstore                             ONLINE       0     0     0
	  mirror-0                         ONLINE       0     0     0
	    WD40EFRX-68N32N0-WCC7K4JK1111  ONLINE       0     0     0
	    WD40EFRX-68N32N0-WCC7K6YX2222  ONLINE       0     0     0
	  mirror-1                         ONLINE       0     0     0
	    WD40EFRX-68N32N0-WCC7KA693333  ONLINE       0     0     0
	    WD40EFRX-68N32N0-WC6ALFKX4444  ONLINE       0     0     0
	cache
	  sde                              ONLINE       0     0     0

╰─# zpool trim zstore  
cannot trim: no devices in pool support trim operations

Why would autotrim then work for that pool?

PS: I am 100% sure that the SSD (/dev/sde) trims fine. I had it formatted with an XFS filesystem and successfully ran fstrim on it before I added it as a cache device to the pool.
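
For what it is worth, one way to confirm that the kernel sees discard support on a device (device name taken from this thread; the output values shown are illustrative — nonzero DISC-GRAN/DISC-MAX indicate TRIM capability):

	lsblk --discard /dev/sde
	# NAME DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
	# sde         0      512B       2G         0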

amotin (Member) commented Oct 14, 2024

Both zpool trim (manual TRIM) and autotrim work for special and log vdevs, since those are pretty much normal vdevs from the space-allocation point of view. L2ARC vdevs are different; they require explicit TRIM support. As I commented in another ticket, autotrim is supported there if you enable it via the module parameter. zpool trim, I guess, may not be implemented for L2ARC, but it would not make much sense in many cases anyway, when a persistent L2ARC is full of data.
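
A minimal sketch of making that module parameter persist across reboots on Linux (the file name is conventional, not mandated; the value is illustrative):

	# /etc/modprobe.d/zfs.conf
	# Trim ahead of the L2ARC write hand by 100% of l2arc_write_max.
	options zfs l2arc_trim_ahead=100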

mabod (Author) commented Oct 14, 2024

Thank you for your explanations. Very helpful for me. I will close this issue now.

mabod closed this as completed on Oct 14, 2024