very slow read speed with encryption #9786

Closed
mabod opened this issue Dec 30, 2019 · 7 comments
Labels
Type: Performance (Performance improvement or performance problem)

Comments

@mabod

mabod commented Dec 30, 2019

System information

Type Version/Name
Distribution Name Manjaro
Distribution Version Testing
Linux Kernel 5.4.6
Architecture x86_64 (Ryzen 7 3700x)
ZFS Version 0.8.2
SPL Version 0.8.2

Describe the problem you're observing

With native encryption enabled, the sequential read speed drops to about 20 % of the speed I see without encryption. The sequential write speed is not affected as much; it stays at about 80 %.

At the same time, CPU load goes up to almost 100 %.

The encryption algorithm does not make a big difference; I tested with aes-128-gcm, aes-128-ccm, and aes-256-ccm.

I tested on an M.2 SSD, a Samsung 970 EVO Plus 500 GB.

Describe how to reproduce the problem

Run fio sequential read and sequential write benchmarks with these option files:

17# cat fio-bench-generic-seq-read.options
[global]
bs=1M
ioengine=sync
invalidate=1
refill_buffers
numjobs=1
fallocate=none
size=32G

[seq-read]
rw=read
stonewall
18# cat fio-bench-generic-seq-write.options
[global]
bs=1M
ioengine=sync
invalidate=1
refill_buffers
numjobs=1
fallocate=none
size=32G

[seq-write]
rw=write
stonewall
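
The exact fio invocation is not shown above; presumably each job file is run from inside the dataset's mountpoint so the test file lands on ZFS, roughly like this (the paths are an assumption, not part of the report):

  # run the benchmarks from the encrypted dataset's mountpoint
  cd /mnt/zM2/test
  fio ~/fio-bench-generic-seq-read.options
  fio ~/fio-bench-generic-seq-write.options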

This is the fio result I see without encryption (just one representative example out of many tries):

  read: IOPS=25.8k, BW=3219MiB/s (3376MB/s)(32.0GiB/10178msec)
  write: IOPS=8743, BW=1093MiB/s (1146MB/s)(32.0GiB/29980msec); 0 zone resets

While this is the result with encryption (just one representative example out of many tries):

  read: IOPS=507, BW=507MiB/s (532MB/s)(32.0GiB/64610msec)
  write: IOPS=780, BW=781MiB/s (819MB/s)(32.0GiB/41972msec); 0 zone resets

I also tested with kernel 4.19.91. The performance and CPU load are slightly better compared to kernel 5.4.6, but the GUI (XFCE) stutters: if I move a window while fio is running, the window stops moving for a fraction of a second every other second. This does not happen with kernel 5.4.6.

fio output with kernel 4.19.91:

  read: IOPS=1025, BW=1026MiB/s (1075MB/s)(32.0GiB/31952msec)
  write: IOPS=1248, BW=1249MiB/s (1310MB/s)(32.0GiB/26237msec); 0 zone resets

ZFS properties of the dataset:

NAME      PROPERTY              VALUE                 SOURCE
zM2/test  type                  filesystem            -
zM2/test  creation              Sa Dez 28 16:44 2019  -
zM2/test  used                  372K                  -
zM2/test  available             226G                  -
zM2/test  referenced            372K                  -
zM2/test  compressratio         1.00x                 -
zM2/test  mounted               no                    -
zM2/test  quota                 none                  default
zM2/test  reservation           none                  default
zM2/test  recordsize            1M                    local
zM2/test  mountpoint            /mnt/zM2/test         inherited from zM2
zM2/test  sharenfs              off                   default
zM2/test  checksum              on                    default
zM2/test  compression           off                   local
zM2/test  atime                 on                    inherited from zM2
zM2/test  devices               on                    default
zM2/test  exec                  on                    default
zM2/test  setuid                on                    default
zM2/test  readonly              off                   default
zM2/test  zoned                 off                   default
zM2/test  snapdir               hidden                default
zM2/test  aclinherit            restricted            default
zM2/test  createtxg             3456                  -
zM2/test  canmount              on                    default
zM2/test  xattr                 sa                    inherited from zM2
zM2/test  copies                1                     default
zM2/test  version               5                     -
zM2/test  utf8only              off                   -
zM2/test  normalization         none                  -
zM2/test  casesensitivity       sensitive             -
zM2/test  vscan                 off                   default
zM2/test  nbmand                off                   default
zM2/test  sharesmb              off                   default
zM2/test  refquota              none                  default
zM2/test  refreservation        none                  default
zM2/test  guid                  874606911686816885    -
zM2/test  primarycache          all                   default
zM2/test  secondarycache        all                   default
zM2/test  usedbysnapshots       0B                    -
zM2/test  usedbydataset         372K                  -
zM2/test  usedbychildren        0B                    -
zM2/test  usedbyrefreservation  0B                    -
zM2/test  logbias               latency               default
zM2/test  objsetid              397                   -
zM2/test  dedup                 off                   default
zM2/test  mlslabel              none                  default
zM2/test  sync                  standard              default
zM2/test  dnodesize             legacy                default
zM2/test  refcompressratio      1.00x                 -
zM2/test  written               372K                  -
zM2/test  logicalused           172K                  -
zM2/test  logicalreferenced     172K                  -
zM2/test  volmode               default               default
zM2/test  filesystem_limit      none                  default
zM2/test  snapshot_limit        none                  default
zM2/test  filesystem_count      none                  default
zM2/test  snapshot_count        none                  default
zM2/test  snapdev               hidden                default
zM2/test  acltype               posixacl              inherited from zM2
zM2/test  context               none                  default
zM2/test  fscontext             none                  default
zM2/test  defcontext            none                  default
zM2/test  rootcontext           none                  default
zM2/test  relatime              on                    inherited from zM2
zM2/test  redundant_metadata    all                   default
zM2/test  overlay               off                   default
zM2/test  encryption            aes-128-gcm           -
zM2/test  keylocation           file:///root/keyfile  local
zM2/test  keyformat             raw                   -
zM2/test  pbkdf2iters           0                     default
zM2/test  encryptionroot        zM2/test              -
zM2/test  keystatus             unavailable           -
zM2/test  special_small_blocks  0                     default
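
For reference, the encryption-related properties above (encryption=aes-128-gcm, keyformat=raw, keylocation=file:///root/keyfile, recordsize=1M, compression=off) correspond to a dataset created roughly as below. The creation commands are not part of the report; the 32-byte key file is an assumption based on what keyformat=raw requires.

  # generate a 32-byte raw wrapping key (the size keyformat=raw expects)
  dd if=/dev/urandom of=/root/keyfile bs=32 count=1
  # create the encrypted test dataset with the properties shown above
  zfs create -o encryption=aes-128-gcm -o keyformat=raw \
      -o keylocation=file:///root/keyfile \
      -o recordsize=1M -o compression=off zM2/test
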
behlendorf added the Type: Performance label Dec 30, 2019
@behlendorf
Contributor

There is optimization work underway for the AES-GCM case in #9749, which should help considerably.

@ovizii

ovizii commented Feb 22, 2020

@behlendorf Thanks for the link, I really hope something will happen to improve this.

@mabod I feel your pain. I moved a small, underpowered NAS from FreeNAS to Debian while keeping ZFS, and the encryption speed went from acceptable to horrendous (I was using GELI encryption on FreeNAS).
So now I have upgraded the whole NAS to a new build with an AMD Ryzen 5 3600, and all cores sit at 100 % just from copying a file to an encrypted dataset - I'm torn between laughing and crying :-/

@mabod
Author

mabod commented May 2, 2020

Today I tested ZFS native encryption again, using zfs master v0.8.0-753_g6ed4391da.

I am very happy with the results. Native encryption performance has significantly improved. I tested without compression on my Samsung SSD 970 EVO Plus 1TB.

The fio results are as follows:

Without encryption, 4 runs give an average of 3430 MB/s read and 1688 MB/s write:

  read: IOPS=3245, BW=3246MiB/s (3403MB/s)(32.0GiB/10096msec)
  write: IOPS=1659, BW=1659MiB/s (1740MB/s)(32.0GiB/19751msec); 0 zone resets

  read: IOPS=3147, BW=3148MiB/s (3301MB/s)(32.0GiB/10410msec)
  write: IOPS=1577, BW=1578MiB/s (1654MB/s)(32.0GiB/20768msec); 0 zone resets

  read: IOPS=3330, BW=3330MiB/s (3492MB/s)(32.0GiB/9839msec)
  write: IOPS=1620, BW=1621MiB/s (1699MB/s)(32.0GiB/20220msec); 0 zone resets

  read: IOPS=3361, BW=3361MiB/s (3524MB/s)(32.0GiB/9749msec)
  write: IOPS=1588, BW=1589MiB/s (1666MB/s)(32.0GiB/20624msec); 0 zone resets

With encryption enabled (using the default aes-256-gcm), 4 runs give an average of 2882 MB/s read and 1690 MB/s write:

  read: IOPS=2792, BW=2792MiB/s (2928MB/s)(32.0GiB/11736msec)
  write: IOPS=1723, BW=1724MiB/s (1807MB/s)(32.0GiB/19010msec); 0 zone resets

  read: IOPS=2750, BW=2751MiB/s (2884MB/s)(32.0GiB/11913msec)
  write: IOPS=1619, BW=1619MiB/s (1698MB/s)(32.0GiB/20234msec); 0 zone resets

  read: IOPS=2781, BW=2782MiB/s (2917MB/s)(32.0GiB/11780msec)
  write: IOPS=1551, BW=1552MiB/s (1627MB/s)(32.0GiB/21117msec); 0 zone resets

  read: IOPS=2667, BW=2668MiB/s (2797MB/s)(32.0GiB/12284msec)
  write: IOPS=1553, BW=1554MiB/s (1629MB/s)(32.0GiB/21087msec); 0 zone resets

There is basically no difference in write speed, and read performance with encryption is at 84 % of the unencrypted speed. That is not bad! Thank you, developers, for your continuous efforts!

@orange888

(quoting @mabod's benchmark results from May 2, 2020 in full)

What were your CPU and memory speeds when you did this test?
Thanks.

@mabod
Author

mabod commented Oct 17, 2020

I am testing on a Ryzen 7 3700X with 64 GB of DDR4-3200 RAM.

Here are my most recent test results with:

kernel 5.8.14
Samsung SSD 970 EVO Plus 1TB (NVMe)
zfs 0.8.5
fio numjobs=320 and size=200M

The following numbers are reproducible, with little variation in either direction.

no encryption:

Run status group 0 (all jobs):
   READ: bw=5047MiB/s (5292MB/s), 15.8MiB/s-104MiB/s (16.5MB/s-109MB/s), io=62.5GiB (67.1GB), run=1918-12680msec
--
Run status group 0 (all jobs):
  WRITE: bw=1478MiB/s (1549MB/s), 4728KiB/s-5705KiB/s (4842kB/s-5842kB/s), io=62.5GiB (67.1GB), run=35896-43314msec

with encryption:

Run status group 0 (all jobs):
   READ: bw=2692MiB/s (2823MB/s), 8615KiB/s-274MiB/s (8822kB/s-288MB/s), io=62.5GiB (67.1GB), run=729-23772msec
--
Run status group 0 (all jobs):
  WRITE: bw=1472MiB/s (1543MB/s), 4709KiB/s-5951KiB/s (4822kB/s-6094kB/s), io=62.5GiB (67.1GB), run=34413-43491msec

That does not look too bad. In fact, I am not complaining anymore.

PS: The difference from my earlier fio benchmarks is that the older values were obtained with 32 GB of RAM, numjobs=1, and size=32G.
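
In terms of the option files from the original report, the newer runs correspond to a [global] section roughly like this (a reconstruction, assuming the remaining options were left unchanged):

[global]
bs=1M
ioengine=sync
invalidate=1
refill_buffers
numjobs=320
fallocate=none
size=200M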

@gmelikov
Member

@mabod Looks like this issue may be closed then.

@mabod
Author

mabod commented Oct 17, 2020

Yes. The performance is a lot better.

@mabod mabod closed this as completed Oct 17, 2020