Performance #10
Closed · Vad1mo opened this issue Dec 6, 2017 · 4 comments

Vad1mo commented Dec 6, 2017

I quickly did a performance test:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=container-ssd-centos --filename=test3 --directory=/mnt/ --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
container-ssd-centos: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
container-ssd-centos: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [5920KB/2020KB/0KB /s] [1480/505/0 iops] [eta 00m:00s]
container-ssd-centos: (groupid=0, jobs=1): err= 0: pid=573: Wed Dec  6 23:34:37 2017
  read : io=3071.7MB, bw=5191.6KB/s, iops=1297, runt=605871msec
  write: io=1024.4MB, bw=1731.3KB/s, iops=432, runt=605871msec
  cpu          : usr=1.31%, sys=4.32%, ctx=1049638, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=5191KB/s, minb=5191KB/s, maxb=5191KB/s, mint=605871msec, maxt=605871msec
  WRITE: io=1024.4MB, aggrb=1731KB/s, minb=1731KB/s, maxb=1731KB/s, mint=605871msec, maxt=605871msec
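
The volume used for the run above was presumably provisioned by this volume plugin. A minimal sketch of such a setup, assuming the managed plugin name sapk/plugin-gluster and its voluri option (both are assumptions here; the project README documents the exact syntax):

# Assumed managed-plugin setup; plugin name, voluri option and volume name are illustrative.
docker plugin install sapk/plugin-gluster
docker volume create --driver sapk/plugin-gluster --opt voluri="<gluster-server>:gv0" gv0-plugin-vol
docker run -ti --rm -v gv0-plugin-vol:/mnt ubuntu bash   # then run the fio command above against /mnt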

For reference, I ran the same test, but this time against a GlusterFS volume mounted locally on the host and bind-mounted into the container.
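
The host-side mount behind this bind mount would be the GlusterFS FUSE client; a minimal sketch, with the server name as a placeholder:

# Assumed host-side FUSE mount (glusterfs-fuse); <gluster-server> is a placeholder.
mount -t glusterfs <gluster-server>:/gv0 /mnt/gv0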

docker run -ti --rm -v /mnt/gv0/:/mnt/gv0 ubuntu bash
apt-get update && apt-get install -y fio

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=container-ssd-centos --filename=test3 --directory=/mnt/ --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
container-ssd-centos: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
container-ssd-centos: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [65504KB/21732KB/0KB /s] [16.4K/5433/0 iops] [eta 00m:00s]
container-ssd-centos: (groupid=0, jobs=1): err= 0: pid=556: Wed Dec  6 23:39:35 2017
  read : io=3071.7MB, bw=68224KB/s, iops=17055, runt= 46104msec
  write: io=1024.4MB, bw=22751KB/s, iops=5687, runt= 46104msec
  cpu          : usr=10.48%, sys=46.19%, ctx=1073291, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=68223KB/s, minb=68223KB/s, maxb=68223KB/s, mint=46104msec, maxt=46104msec
  WRITE: io=1024.4MB, aggrb=22751KB/s, minb=22751KB/s, maxb=22751KB/s, mint=46104msec, maxt=46104msec

The difference is 68 MB/s vs. 5 MB/s, so there is roughly a factor-of-10 gap between the two results.

Do you have an idea whether and how the driver can be improved so it is on par with a FUSE-mounted GlusterFS volume?

sapk commented Dec 6, 2017

Hi, I can't say exactly where the performance drawback is coming from.
From what I know, it could come from various reasons:

  • Additional time due to communication via the FUSE device
  • Different gluster versions between the plugin container and the host
  • Limited access to acceleration inside the docker-plugin container
  • a lot more causes ^^

I assume that you run via the docker plugin; have you tried via the CLI (not a plugin running in a container)? https://github.com/sapk/docker-volume-gluster#legacy-plugin-installation
This would make it possible to rule out any limitation imposed by the gluster plugin running in a container.
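
A rough sketch of that CLI/legacy route, with heavy caveats: the daemon invocation, the driver name gluster, and the voluri option are assumptions here, so the linked README remains the authoritative reference.

# Assumed legacy setup: run the volume driver directly on the host instead of as a managed plugin.
go get -u github.com/sapk/docker-volume-gluster      # fetch and build the plugin binary
docker-volume-gluster daemon &                       # assumed subcommand to start the driver
docker volume create --driver gluster --opt voluri="<gluster-server>:gv0" gv0-cli
docker run -ti --rm -v gv0-cli:/mnt ubuntu bash      # re-run the same fio test against /mnt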

Vad1mo commented Dec 7, 2017

I am not sure what happened with my first test, but I retried it today with all the different variations. All the results are now in the same range. If you want, you can share this info with the community.

  1. Mount of a gluster volume into the container via bind mount (-v /mnt/gv0/:/mnt/gv0)
  2. Old-school (legacy) plugin usage
  3. New-style (managed) plugin usage

Test Scenario:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=gv0-plugin --filename=gv0-native --directory=/mnt/plugin-gv0/ --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
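
How the three variations might be wired up, sketched with illustrative names (only the bind-mount path comes from the list above; the volume names and container paths are placeholders):

# 1. Bind mount of the host's FUSE-mounted GlusterFS volume
docker run -ti --rm -v /mnt/gv0/:/mnt/gv0 ubuntu bash
# 2. Volume created through the old-school (legacy) plugin -- volume name is a placeholder
docker run -ti --rm -v gv0-legacy:/mnt/legacy-gv0 ubuntu bash
# 3. Volume created through the new-style (managed) plugin -- volume name is a placeholder
docker run -ti --rm -v gv0-managed:/mnt/plugin-gv0 ubuntu bash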

Bind Mount

gv0-native: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [66104KB/21908KB/0KB /s] [16.6K/5477/0 iops] [eta 00m:00s]
gv0-native: (groupid=0, jobs=1): err= 0: pid=566: Thu Dec  7 22:55:45 2017
  read : io=3071.7MB, bw=65519KB/s, iops=16379, runt= 48007msec
  write: io=1024.4MB, bw=21849KB/s, iops=5462, runt= 48007msec
  cpu          : usr=8.96%, sys=48.35%, ctx=1184441, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=65519KB/s, minb=65519KB/s, maxb=65519KB/s, mint=48007msec, maxt=48007msec
  WRITE: io=1024.4MB, aggrb=21849KB/s, minb=21849KB/s, maxb=21849KB/s, mint=48007msec, maxt=48007msec

Old School Plugin usage

gv0-plugin: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [54708KB/18584KB/0KB /s] [13.7K/4646/0 iops] [eta 00m:00s]
gv0-plugin: (groupid=0, jobs=1): err= 0: pid=577: Thu Dec  7 23:01:32 2017
  read : io=3071.7MB, bw=65857KB/s, iops=16464, runt= 47761msec
  write: io=1024.4MB, bw=21962KB/s, iops=5490, runt= 47761msec
  cpu          : usr=9.98%, sys=47.28%, ctx=1183727, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=65856KB/s, minb=65856KB/s, maxb=65856KB/s, mint=47761msec, maxt=47761msec
  WRITE: io=1024.4MB, aggrb=21961KB/s, minb=21961KB/s, maxb=21961KB/s, mint=47761msec, maxt=47761msec

New Style Plugin

gv0-new-plugin-style: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [64236KB/21320KB/0KB /s] [16.6K/5330/0 iops] [eta 00m:00s]
gv0-new-plugin-style: (groupid=0, jobs=1): err= 0: pid=555: Thu Dec  7 23:19:45 2017
  read : io=3071.7MB, bw=65578KB/s, iops=16394, runt= 47964msec
  write: io=1024.4MB, bw=21869KB/s, iops=5467, runt= 47964msec
  cpu          : usr=9.88%, sys=46.06%, ctx=1184805, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=65578KB/s, minb=65578KB/s, maxb=65578KB/s, mint=47964msec, maxt=47964msec
  WRITE: io=1024.4MB, aggrb=21868KB/s, minb=21868KB/s, maxb=21868KB/s, mint=47964msec, maxt=47964msec
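
For quick comparison, the aggregate bandwidths reported by the three runs above:

                        READ (aggrb)    WRITE (aggrb)    Runtime
  Bind mount            65519 KB/s      21849 KB/s       48007 msec
  Old-school plugin     65856 KB/s      21961 KB/s       47761 msec
  New-style plugin      65578 KB/s      21868 KB/s       47964 msec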

sapk commented Dec 8, 2017

Thanks for the insight.

sapk commented Dec 11, 2017

I have referenced this issue in the README for insight.

sapk closed this as completed Dec 11, 2017