Local SSD raid performance


Juan Manuel

Aug 29, 2016, 8:32:42 AM
to gce-discussion
Hi guys,
I'm testing local SSDs with software RAID to try to boost performance, but so far the results don't make sense. A single local SSD gives about 120k read IOPS and 30k write IOPS (standard fio benchmark). But with a RAID 5 array of 8 local SSDs, which should boost IOPS significantly, I get even lower numbers: around 60k read IOPS and 20k write IOPS with the same fio command. So I tried RAID 0 just to see what would happen, and I got almost the same IOPS as a single drive. This was tested on n1 highmem instances (32 vCPUs, 208 GB of RAM) running CentOS 7.

I wonder if there is some per-instance IOPS limit for local SSD: no matter whether you put 8 drives together in different RAID configurations, you get the same IOPS as a single drive, plus the RAID penalty on top.
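For reference, here is a minimal sketch of the kind of setup and benchmark being described. The device names, job count, and queue depth are assumptions (on GCE, local SSDs typically show up as /dev/nvme0n* in NVMe mode or /dev/sdb onward in SCSI mode), not the exact commands used above:

```shell
# Build a RAID 0 array from 8 local SSDs (hypothetical device names).
mdadm --create /dev/md0 --level=0 --raid-devices=8 \
    /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4 \
    /dev/nvme0n5 /dev/nvme0n6 /dev/nvme0n7 /dev/nvme0n8

# 4k random-read IOPS benchmark against the array. High iodepth and
# multiple jobs are needed to keep enough I/O in flight to saturate
# local SSD; a single shallow job will understate the drive's limits.
fio --name=randread --filename=/dev/md0 --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=64 --numjobs=8 \
    --runtime=60 --time_based --group_reporting
```

Note that fio results at this scale are very sensitive to iodepth and numjobs: if the aggregate queue depth is too low, a single drive and an 8-drive array can report similar IOPS even when no platform limit is in play.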

George (Google Cloud Support)

Aug 29, 2016, 7:52:02 PM
to gce-discussion
Hello Juan,

By default, most Compute Engine-provided Linux images automatically run an optimization script that configures the instance for peak local SSD performance. The script tunes certain block-queue sysfs settings that enhance the overall performance of your machine and masks interrupt requests (IRQs) to specific virtual CPUs (vCPUs). This script only optimizes performance for Compute Engine local SSD devices.

You can run the script manually. More information about this matter can be found in this Help Center article.
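To give a sense of what such tuning looks like, here is an illustrative sketch of the general kind of block-queue and IRQ adjustments involved. The device name, sysfs values, and vCPU choice are all assumptions for illustration, not the contents of Google's actual script:

```shell
# Use no added I/O scheduling for a fast NVMe device (hypothetical name).
# On older kernels without blk-mq this would be "noop" instead of "none".
echo none > /sys/block/nvme0n1/queue/scheduler

# Reduce per-I/O overhead: skip entropy contribution and I/O statistics.
echo 0 > /sys/block/nvme0n1/queue/add_random
echo 0 > /sys/block/nvme0n1/queue/iostats

# Spread the device's interrupts across vCPUs instead of letting them
# all land on CPU 0. "2" here is an example target vCPU.
for irq in $(awk -F: '/nvme/ {print $1}' /proc/interrupts); do
    echo 2 > /proc/irq/$irq/smp_affinity_list
done
```

These commands must run as root and do not persist across reboots, which is why the image runs the optimization script automatically at boot.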

I hope this helps.

Sincerely,
George

Mani Gandham

Aug 31, 2016, 4:07:12 AM
to gce-discussion
The chart at the top of this page shows the max read and write IOPS per instance for each drive type.

Looks like there is a per-instance IOPS limit, regardless of how many SSDs are attached.