Guys, I'm confused.
Marc, the results of the fio rbd tests are very interesting...
root@storage04:~# fio --name iops --rw randwrite --bs 4k --filename /dev/rbd2 --numjobs 12 --ioengine=libaio --group_reporting --direct=1
iops: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
...
fio-2.2.10
Starting 12 processes
^Cbs: 12 (f=12): [w(12)] [0.5% done] [0KB/14304KB/0KB /s] [0/3576/0 iops] [eta 03h:42m:23s]]
fio: terminating on signal 2
iops: (groupid=0, jobs=12): err= 0: pid=31552: Mon Feb 12 00:02:39 2018
write: io=617132KB, bw=9554.8KB/s, iops=2388, runt= 64589msec
slat (usec): min=3, max=463, avg=11.56, stdev=14.17
clat (usec): min=930, max=999957, avg=5004.58, stdev=30299.43
lat (usec): min=938, max=999975, avg=5016.50, stdev=30299.58
root@storage04:~# fio --name iops --rw randwrite --bs 4k --filename /dev/rbd2 --numjobs 12 --iodepth=32 --ioengine=libaio --group_reporting --direct=1 --runtime=100
iops: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
...
fio-2.2.10
Starting 12 processes
Jobs: 12 (f=12): [w(12)] [100.0% done] [0KB/49557KB/0KB /s] [0/12.4K/0 iops] [eta 00m:00s]
iops: (groupid=0, jobs=12): err= 0: pid=32632: Mon Feb 12 00:04:51 2018
write: io=3093.6MB, bw=31675KB/s, iops=7918, runt=100009msec
slat (usec): min=2, max=973942, avg=1510.77, stdev=18783.09
clat (msec): min=1, max=1719, avg=46.97, stdev=111.64
lat (msec): min=1, max=1719, avg=48.49, stdev=113.62
clat percentiles (msec):
| 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 13],
| 30.00th=[ 16], 40.00th=[ 18], 50.00th=[ 21], 60.00th=[ 25],
| 70.00th=[ 31], 80.00th=[ 43], 90.00th=[ 68], 95.00th=[ 131],
| 99.00th=[ 742], 99.50th=[ 824], 99.90th=[ 963], 99.95th=[ 988],
| 99.99th=[ 1045]
bw (KB /s): min= 6, max= 8320, per=9.51%, avg=3011.84, stdev=1956.64
lat (msec) : 2=0.07%, 4=0.55%, 10=10.28%, 20=37.47%, 50=35.89%
lat (msec) : 100=9.47%, 250=3.45%, 500=0.61%, 750=1.26%, 1000=0.92%
lat (msec) : 2000=0.03%
cpu : usr=0.28%, sys=0.58%, ctx=265641, majf=0, minf=146
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=791939/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32
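The two runs are consistent with Little's law: effective queue depth ≈ IOPS × mean latency. A quick sanity check using the numbers reported above:

```shell
# Little's law check with the figures from the two fio runs above.
# Run 1: 2388 IOPS at avg lat 5016.50 usec (12 jobs x iodepth=1 -> offered QD 12)
# Run 2: 7918 IOPS at avg lat 48.49 msec  (12 jobs x iodepth=32 -> offered QD 384)
qd1=$(awk 'BEGIN { printf "%.0f", 2388 * 5016.50 / 1000000 }')
qd2=$(awk 'BEGIN { printf "%.0f", 7918 * 48.49 / 1000 }')
echo "run 1 effective QD: $qd1 (offered: 12)"
echo "run 2 effective QD: $qd2 (offered: 384)"
```

Both recover the offered depth almost exactly, so the 3x IOPS gain in the second run comes entirely from deeper queuing, paid for with roughly 10x higher per-IO latency.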
But I can't see high latency on the SSDs themselves, only 60-85% utilization.
Okay, we will increase op_threads on the SSD OSDs, but how can we increase the iodepth SCST issues to the rbd back-end?
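Not an SCST answer, but before tuning the target it may help to find directly on the rbd device where IOPS stop scaling with queue depth, so you know what depth SCST would need to reach. A sketch along the lines of the commands above (the depth list and runtime are my arbitrary choices, not from this thread; run it against a test image, not production):

```shell
# Sweep iodepth on the rbd block device to find the scaling knee.
# Same workload shape as the runs above: 12 jobs, 4k random writes, direct I/O.
for qd in 1 4 8 16 32 64; do
    fio --name=iops --rw=randwrite --bs=4k --filename=/dev/rbd2 \
        --numjobs=12 --iodepth="$qd" --ioengine=libaio --direct=1 \
        --runtime=60 --group_reporting --output="iodepth-$qd.log"
done
```

If IOPS flatten out at a depth SCST already delivers, raising the target-side queue depth will not help and the bottleneck is elsewhere (e.g. the OSDs).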
On Thursday, February 8, 2018 at 19:03:23 UTC+3,
gray...@gmail.com wrote: