mako and manta Support


Chris (osm0sis)

Apr 7, 2013, 1:57:37 AM

So I reached out to a couple of people in the Nexus 10 and Nexus 4 f.K threads via PM, fozzie and blipp (good guys), to see how all of our values so far translate to those devices in terms of number scales, etc.

So far so good. The N10 seems to be roughly the same as the N7, but with things scaling in factors of 5 instead of 10 in cfq. There is one extra tunable though, target_latency (default 300), so we might want to figure out a reasonable number for it so that our values are still applicable over there. We actually read about target_latency back in the documentation at the beginning of the cfq thread:

low_latency: When low latency is enabled (set to 1), CFQ attempts to provide a maximum wait time of 300 ms for each process issuing I/O on a device. This favors fairness over throughput. Disabling low latency (setting it to 0) ignores target latency, allowing each process in the system to get a full time slice. Low latency is enabled by default.
 
I've got blipp testing out 50, 100, 150, 200, 250, and 450 on target_latency so we can just go with what he says for the N10 or leave it at 300 since that's apparently what we've all been running under the hood anyway.
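
For anyone who wants to poke at it themselves, roughly what that testing looks like in sysfs (a minimal sketch; it assumes cfq is the active scheduler on mmcblk0 and that the kernel exposes target_latency at the usual iosched path):

# check the current value, then drop in one of the test values
cat /sys/block/mmcblk0/queue/iosched/target_latency
echo 150 > /sys/block/mmcblk0/queue/iosched/target_latency
# per the documentation above, low_latency has to stay at 1 or target_latency is ignored
echo 1 > /sys/block/mmcblk0/queue/iosched/low_latency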
 
Otherwise, Francisco hasn't enabled row on his N10 kernel, so nothing much to comment on there. The deadline scale seems to have worked out fine, at least with our current 3500/750/1/1/1 test values, so I think we can assume it's not drastically different either. One extra tidbit of interest: Francisco's apparently also got them running ondemand over there, because I guess Exynos chips like it better than interactive.
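
For reference, here's what those 3500/750/1/1/1 test values map to, assuming the shorthand is read_expire/write_expire/writes_starved/front_merges/fifo_batch (the same order fozzie reports back with below) and that mmcblk0 is the device:

echo deadline > /sys/block/mmcblk0/queue/scheduler
echo 3500 > /sys/block/mmcblk0/queue/iosched/read_expire
echo 750 > /sys/block/mmcblk0/queue/iosched/write_expire
echo 1 > /sys/block/mmcblk0/queue/iosched/writes_starved
echo 1 > /sys/block/mmcblk0/queue/iosched/front_merges
echo 1 > /sys/block/mmcblk0/queue/iosched/fifo_batch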
 
Now the N4 on the other hand has been nothing but trouble :P, but only because it's got a newer version of row than any of the other f.K devices do. In this version read_idle and read_idle_freq are renamed rd_idle_data and rd_idle_data_freq, respectively. The weirdness of lp_swrite_quantum's scale is also apparently fixed and it's a direct-entry tunable now (so no 10 -> 2; in the new row 2 = 2!). This means the best way for us to support all f.K devices is actually for Francisco to update the GN and N7 to the newer version of row, and maybe throw the N10 kids a bone and add it to their kernel too. ;)
 
fozzie is going to get back to me on the scales tomorrow but interesting stuff so far!

Chris (osm0sis)

Apr 7, 2013, 2:00:56 AM
For benches with different target_latency values, blipp said he got:

"86/20 on 300 consistently

Low 81, high 92 read / 19/20 write on 50

Low 81, high 87 read / 17/18 write on 450"

So my inclination is to chalk it up to margin of error and stick with 300 across the board.

Steve (Gingerbread Man)

Apr 6, 2013, 8:56:30 PM
to francos...@googlegroups.com
I think from a development point of view it might be best if we keep all versions of stuff as equal as possible, i.e. hotplug and I/O schedulers, and from a testing point of view if we could bring in say 2/3 respectable/reputable testers each from the N4 and N10, as none of us own those devices except Franco himself. That means some work to get all these drivers synced across devices and making sure we get the right people in, which means time will need to be spent scanning through the xda threads. When we have sufficient testers across all devices we probably won't need to recruit again; once a new device is released it's almost guaranteed one or two of us will get it.

Chris (osm0sis)

Apr 6, 2013, 11:21:09 PM

Something to consider in the future. I think Francisco would be the best one to make those picks though, since he knows the crowds in the N4 and N10 threads. For now I think we're okay just making sure the values don't get thrown way off by scale. Ktoonzes has already incorporated our test values from a while back into his KTManta kernel, so that's some good evidence that what we're doing is reasonably universal. Pretty cool we're getting that notice across other devices. :)

What we definitely could use is for Francisco to give us the updated fixed row on the N7 and GN though. For now I can work around it with a check in the script.
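
Something like this in the script should cover both row versions until then (just a sketch; paths assume mmcblk0, and the values are our current test ones):

IOSCHED=/sys/block/mmcblk0/queue/iosched
if [ -e $IOSCHED/rd_idle_data ]; then
  # newer row (mako): renamed tunables, lp_swrite_quantum is direct entry now
  echo 10 > $IOSCHED/rd_idle_data
  echo 25 > $IOSCHED/rd_idle_data_freq
  echo 2 > $IOSCHED/lp_swrite_quantum
else
  # older row (GN/N7): old names, and 10 in lp_swrite_quantum ends up as 2
  echo 10 > $IOSCHED/read_idle
  echo 25 > $IOSCHED/read_idle_freq
  echo 10 > $IOSCHED/lp_swrite_quantum
fi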

Zach (malaroth)

Apr 7, 2013, 2:56:15 PM
This was a bit of support from Chris to cheer me up...
BUT I thought it might cheer you up a bit to know that your work and the work you started is getting notice from across the Android community. :D

EniGmA1987 wrote:
Also, many of your values are already being integrated into the next version of KTManta kernel because the settings have already been tested on the Nexus 10. They do require some minor tweaking, but overall it is good to go.
 
osm0sis wrote:
Nice! Thanks. Good to know. Can you point me to KTManta and/or the final settings so I can see how/where they've been tweaked? Hopefully we got credit for the settings in some way? ;)
 
EniGmA1987 wrote:
KTManta thread:
http://forum.xda-developers.com/showthread.php?t=2015874
 
Settings:
# General Tweaks, thanks to Malaroths and Osmosis for most of this
echo 512 > /sys/block/mmcblk0/queue/nr_requests;
echo 256 > /sys/block/mmcblk0/queue/read_ahead_kb;
echo 2 > /sys/block/mmcblk0/queue/rq_affinity;
echo 0 > /sys/block/mmcblk0/queue/nomerges;
echo 0 > /sys/block/mmcblk0/queue/rotational;
echo 0 > /sys/block/mmcblk0/queue/add_random;
echo 0 > /sys/block/mmcblk0/queue/iostats;
echo 8192 > /proc/sys/vm/min_free_kbytes 
 
# ROW Tweaks
echo 100 > /sys/block/mmcblk0/queue/iosched/hp_read_quantum;
echo 4 > /sys/block/mmcblk0/queue/iosched/hp_swrite_quantum;
echo 3 > /sys/block/mmcblk0/queue/iosched/lp_read_quantum;
echo 12 > /sys/block/mmcblk0/queue/iosched/lp_swrite_quantum;
echo 5 > /sys/block/mmcblk0/queue/iosched/read_idle;
echo 25 > /sys/block/mmcblk0/queue/iosched/read_idle_freq;
echo 75 > /sys/block/mmcblk0/queue/iosched/rp_read_quantum;
echo 3 > /sys/block/mmcblk0/queue/iosched/rp_swrite_quantum;
echo 3 > /sys/block/mmcblk0/queue/iosched/rp_write_quantum; 
 
# deadline tweaks, thanks to Malaroths and Osmosis for all of these
echo 350 > /sys/block/mmcblk0/queue/iosched/read_expire;
echo 3500 > /sys/block/mmcblk0/queue/iosched/write_expire;
echo 4 > /sys/block/mmcblk0/queue/iosched/writes_starved;
echo 1 > /sys/block/mmcblk0/queue/iosched/front_merges;
echo 2 > /sys/block/mmcblk0/queue/iosched/fifo_batch; 
 
# cfq tweaks, thanks to Malaroths and Osmosis for all of these
echo 8 > /sys/block/mmcblk0/queue/iosched/quantum;
echo 120 > /sys/block/mmcblk0/queue/iosched/fifo_expire_sync;
echo 250 > /sys/block/mmcblk0/queue/iosched/fifo_expire_async;
echo 12582912 > /sys/block/mmcblk0/queue/iosched/back_seek_max;
echo 1 > /sys/block/mmcblk0/queue/iosched/back_seek_penalty;
echo 93 > /sys/block/mmcblk0/queue/iosched/slice_sync;
echo 39 > /sys/block/mmcblk0/queue/iosched/slice_async;
echo 2 > /sys/block/mmcblk0/queue/iosched/slice_async_rq;
echo 0 > /sys/block/mmcblk0/queue/iosched/slice_idle;
echo 7 > /sys/block/mmcblk0/queue/iosched/group_idle;
echo 1 > /sys/block/mmcblk0/queue/iosched/low_latency; 
 
# Cache Tweaks, thanks to brees75 for this stuff
echo 2048 > /sys/devices/virtual/bdi/0:18/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/0:19/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/1:0/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:1/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:2/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:3/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:4/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:5/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:6/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:7/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:8/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:9/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:10/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:11/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:12/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:13/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:14/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:15/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/7:0/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:1/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:2/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:3/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:4/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:5/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:6/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:7/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:8/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:9/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:10/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:11/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:12/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:13/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:14/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:15/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:16/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:17/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:18/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:19/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:20/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:21/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:22/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:23/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:24/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:25/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:26/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:27/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:28/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:29/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:30/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:31/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:32/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:33/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:34/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:35/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:36/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:37/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:38/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/179:0/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/179:8/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/179:16/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/253:0/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/254:0/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:1/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:2/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:3/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:4/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:5/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:6/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:7/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:8/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:9/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:10/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:11/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:12/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:13/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:14/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:15/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:16/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:17/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:18/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:19/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:20/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:21/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:22/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:23/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:24/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:25/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:26/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:27/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:28/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:29/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:30/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:31/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:32/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:33/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:34/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:35/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:36/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:37/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:38/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/default/read_ahead_kb
 
 If you can find better settings then let me know.
  
 osm0sis wrote:
Thanks! Wow. That cache sh*t is kind of crazy, and I'd hazard to say borderline unnecessary since I believe those numbers can change. Interesting.. :) ;)
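
If anyone did want to keep that bit, a loop over whatever bdi entries exist would probably age better than the hardcoded major:minor numbers; something along these lines (purely illustrative, since the original sets 2048 on a few of them and 256 on the rest, so a real version would still need to tell those apart):

for BDI in /sys/devices/virtual/bdi/*/read_ahead_kb; do
  echo 256 > $BDI
done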
 
Other than that, it looks like all they've really changed from ours is dropping read_ahead_kb to 256 from 512 and adding the min_free_kbytes vm setting to the general queue tweaks. They've also changed row's read_idle to echo 5 > /sys/block/mmcblk0/queue/iosched/read_idle, where we have it at 10 (so that it becomes 15 on the GN and stays at 10 on the N7). Is it actually at 5 for the N10? Can you verify that's better than 10 (or whatever the next step up is for manta)?
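
In case it helps with checking, a quick way to see what any of these devices actually ends up with after the script runs (assuming row is the active scheduler on mmcblk0):

cat /sys/block/mmcblk0/queue/scheduler
for T in /sys/block/mmcblk0/queue/iosched/*; do
  echo "$T: $(cat $T)"
done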
 
That's also our old, half-finished cfq. Here are our absolute latest, let me know how they compare: http://d-h.st/xZP
 
Please don't pass on the ones that are different just yet, since like I said some of them are still in testing.
 
Very cool to see that our settings are getting noticed and adoption across the community. :o :D
Thanks in advance for the help.

 

Chris (osm0sis)

Apr 7, 2013, 5:24:29 PM
to francos...@googlegroups.com
fozzie got back to me after using today's 950iosettings (just posted). mako seems to behave about the same as manta and grouper in terms of scale so I'd say we're good for now. We might check in again when Francisco's added row to manta and when we're done with deadline to make sure the final values still work out well for both mako and manta. :)
 
noop:
None
deadline:
read_expire 3500
write_expire 750
writes_starved 1
front_merges 1
fifo_batch 1
row:
hp_read_quantum 100
rp_read_quantum 75
hp_swrite_quantum 5
rp_write_quantum 4
rp_swrite_quantum 4
lp_read_quantum 3
lp_swrite_quantum 2
rd_idle_data 10
rd_idle_data_freq 25
cfq:
quantum 8
fifo_expire_sync 80
fifo_expire_async 330
back_seek_max 1258292
back_seek_penalty 1
slice_sync 60
slice_async 50
slice_async_rq 2
slice_idle 0
group_idle 10
low_latency 1
target_latency 300

franciscofranco1990

Apr 11, 2013, 2:26:46 PM
to francos...@googlegroups.com
There's a bit of a problem adding the updated ROW on the GN because the GN has 3.0 and Mako/Manta use 3.4, and there are a bunch of differences in the block subsystem... I don't know if I can port it.

I can definitely try these values on my Mako and Manta and get back to you.

Chris (osm0sis)

Apr 25, 2013, 11:04:15 PM
That'd be great! fozzie and blipp were really helpful guys too so I think we managed to answer all our questions at this point, but when we hammer something out for deadline we might need to check again.

Chris (osm0sis)

May 8, 2013, 1:43:29 PM
Just checked and our finalized deadline should work fine here. I believe the scale is 50 like grouper, not 125 as on the GN.
 
 
Edit: Yup, turned out there were differences for mako/manta in the source due to HZ being different (128 on tuna, 100 on all others?), which is also responsible for that weird 7, 15, etc. scale for the tunables. Here are the changes:
 

Chris (osm0sis)

Apr 27, 2013, 1:46:02 AM
to francos...@googlegroups.com
Whoops. I see Francisco hardcoded row on manta and mako 4 days back but put lp_swrite as 12 again instead of 2. I left notes on the commits for him. :)

Chris (osm0sis)

Apr 28, 2013, 1:10:22 PM
Franco fixed it for mako. Looks like Siyah (and now Dorimanx) is grabbing them for the S2 too. ;)

https://github.com/voku/siyahkernel3/commits/
https://github.com/dorimanx/Dorimanx-SG2-I9100-Kernel/commits/

Edit: And "Houcha" for the N4, which appears to have been forked from faux originally.

https://github.com/chadouming/Houcha/commits/test2

This is fun! Since Franco @ tagged me in the fix commit I get notified of all the cherry picks. :D

Chris (osm0sis)

May 12, 2013, 8:53:58 AM
to francos...@googlegroups.com
Looks like manta isn't the same as mako for HZ after all. The tunable values increase in steps of 5, so I think HZ must be 200.

That means deadline will need 50/500 to get 250/2500. Likely similar changes for cfq.
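
Rough sanity check on that, assuming the hardcoded deadline expiry values in the source are in jiffies (so ms = jiffies * 1000 / HZ):

HZ=200   # manta, going by the 5 ms steps
for J in 50 500; do
  echo "$J jiffies = $((J * 1000 / HZ)) ms"
done
# -> 50 jiffies = 250 ms, 500 jiffies = 2500 ms
# tuna at HZ=128 works out to ~7.8 ms per jiffy, hence the weird 7, 15, etc. steps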
