-
If you're just looking to find the max performance, then running 32 separate jobs is not the way to do that. It'd be more efficient to ramp up the queue depth per job instead. A single job should suffice; I'd go with that, and then set ioengine=io_uring and whatever rw= you want, depending on whether it's random reads or writes. bs=4k is probably the most reasonable block size for that. If this is a flash-based drive, I would not use an IO scheduler. This isn't a property of the fio job, it's a property of the system setup. I'd just use "none" as the IO scheduler. If you're using a rotational drive, then mq-deadline is a good choice.
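Roughly something along these lines, as a sketch only (the device path is a placeholder, and the right iodepth to ramp to depends on the drive):

```ini
; single-job sketch: async engine, higher queue depth instead of more jobs
[global]
; placeholder device path
filename=/dev/nvme0n1
direct=1
ioengine=io_uring
bs=4k
; flash-based drive, so no scheduler
ioscheduler=none

[qd-ramp]
; or randwrite, whichever is being measured
rw=randread
; ramp this up until IOPS stops scaling
iodepth=32
```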
-
Thank you for your suggestion and the detailed explanation. The ZNS drive I'm using has a maximum of 32 open zones, and it requires at least 8 zones to be open to reach its maximum performance. Yes, it's a flash-based drive. Currently, I'm using the psync I/O engine and the "none" I/O scheduler with 32 write jobs to mimic randwrite on a conventional SSD. The I/O depth is effectively 1 with the psync I/O engine, which might explain why 32 psync jobs cannot fully saturate the IOPS. I am considering increasing the I/O depth by switching to an async I/O engine. However, with the IO scheduler set to mq-deadline, can I use libaio or io_uring to test a ZNS drive? Does that meet the write constraints of a ZNS SSD? Just to confirm, you mentioned 'whatever rw= you want'. Can I set rw=randwrite on a ZNS drive? It seems that a ZNS drive requires sequential writes within each zone.
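For reference, my current baseline is roughly the following (the device path is a placeholder and the per-job zone assignment is simplified):

```ini
; rough sketch of the current baseline: 32 synchronous write jobs
[global]
; placeholder ZNS namespace
filename=/dev/nvme1n2
direct=1
ioengine=psync
bs=4k
; honor the zone write constraints
zonemode=zbd
; drive allows at most 32 open zones
max_open_zones=32
ioscheduler=none

[zns-write-32jobs]
rw=write
; psync is synchronous, so each job runs at an effective queue depth of 1
numjobs=32
```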
-
I'm trying to figure out the maximum IOPS of the drive. It seems that 32 jobs cannot fully saturate the write IOPS on my machine. Is it possible to use libaio or io_uring to run multiple write jobs to a ZNS drive with fio and mq-deadline?
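Something like the following is what I have in mind; it's only a sketch, and the device path, zone limit, queue depth, and job count are all placeholders. Swapping in ioengine=libaio would be the only change needed to compare the two engines.

```ini
; sketch: async write jobs against a ZNS drive, relying on mq-deadline
; to keep writes ordered per zone at queue depths above 1
[global]
; placeholder ZNS namespace
filename=/dev/nvme1n2
direct=1
ioengine=io_uring
bs=4k
zonemode=zbd
max_open_zones=32
; switch the device to mq-deadline before the run
ioscheduler=mq-deadline

[zns-async-write]
rw=write
; placeholder depth/job split; whether this saturates the drive is the open question
iodepth=8
numjobs=4
```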