So I reached out via PM to a couple of people in the Nexus 10 and Nexus 4 f.K threads (fozzie and blipp; good guys) to see how all of our values so far translate to those devices in terms of number scales, etc.
So far so good. The N10 seems to be roughly the same as the N7, but with things in cfq being factors of 5 instead of 10. There is an extra variable though, target_latency (default 300), so we might want to look into what would be a reasonable number so that things are hopefully still applicable over there. We actually read about target_latency back in the documentation at the beginning of the cfq thread:
"Low 81, high 92 read / 19/20 write on 50.
Low 81, high 87 read / 17/18 write on 450."
So my inclination is to chalk it up to margin of error and stick with 300 across the board.
Something to consider in the future. I think Francisco would be the best one to make those picks, though, since he knows the crowds in the N4 and N10 threads. For now I think we're okay just making sure the values don't get thrown way off by scale. Ktoonzes has already incorporated our test values from a while back into his KTManta kernel, so that's some good evidence that what we're doing is reasonably universal. Pretty cool we're getting that notice across other devices. :)
Also, many of your values are already being integrated into the next version of KTManta kernel because the settings have already been tested on the Nexus 10. They do require some minor tweaking, but overall it is good to go.
Nice! Thanks. Good to know. Can you point me to KTManta and/or the final settings so I can see how/where they've been tweaked? Hopefully we got credit for the settings in some way? ;)
KTManta thread:
http://forum.xda-developers.com/showthread.php?t=2015874
Settings:
# General Tweaks, thanks to Malaroths and Osmosis for most of this
echo 512 > /sys/block/mmcblk0/queue/nr_requests;
echo 256 > /sys/block/mmcblk0/queue/read_ahead_kb;
echo 2 > /sys/block/mmcblk0/queue/rq_affinity;
echo 0 > /sys/block/mmcblk0/queue/nomerges;
echo 0 > /sys/block/mmcblk0/queue/rotational;
echo 0 > /sys/block/mmcblk0/queue/add_random;
echo 0 > /sys/block/mmcblk0/queue/iostats;
echo 8192 > /proc/sys/vm/min_free_kbytes;

# ROW Tweaks
echo 100 > /sys/block/mmcblk0/queue/iosched/hp_read_quantum;
echo 4 > /sys/block/mmcblk0/queue/iosched/hp_swrite_quantum;
echo 3 > /sys/block/mmcblk0/queue/iosched/lp_read_quantum;
echo 12 > /sys/block/mmcblk0/queue/iosched/lp_swrite_quantum;
echo 5 > /sys/block/mmcblk0/queue/iosched/read_idle;
echo 25 > /sys/block/mmcblk0/queue/iosched/read_idle_freq;
echo 75 > /sys/block/mmcblk0/queue/iosched/rp_read_quantum;
echo 3 > /sys/block/mmcblk0/queue/iosched/rp_swrite_quantum;
echo 3 > /sys/block/mmcblk0/queue/iosched/rp_write_quantum;

# deadline tweaks, thanks to Malaroths and Osmosis for all of these
echo 350 > /sys/block/mmcblk0/queue/iosched/read_expire;
echo 3500 > /sys/block/mmcblk0/queue/iosched/write_expire;
echo 4 > /sys/block/mmcblk0/queue/iosched/writes_starved;
echo 1 > /sys/block/mmcblk0/queue/iosched/front_merges;
echo 2 > /sys/block/mmcblk0/queue/iosched/fifo_batch;
# cfq tweaks, thanks to Malaroths and Osmosis for all of these
echo 8 > /sys/block/mmcblk0/queue/iosched/quantum;
echo 120 > /sys/block/mmcblk0/queue/iosched/fifo_expire_sync;
echo 250 > /sys/block/mmcblk0/queue/iosched/fifo_expire_async;
echo 12582912 > /sys/block/mmcblk0/queue/iosched/back_seek_max;
echo 1 > /sys/block/mmcblk0/queue/iosched/back_seek_penalty;
echo 93 > /sys/block/mmcblk0/queue/iosched/slice_sync;
echo 39 > /sys/block/mmcblk0/queue/iosched/slice_async;
echo 2 > /sys/block/mmcblk0/queue/iosched/slice_async_rq;
echo 0 > /sys/block/mmcblk0/queue/iosched/slice_idle;
echo 7 > /sys/block/mmcblk0/queue/iosched/group_idle;
echo 1 > /sys/block/mmcblk0/queue/iosched/low_latency;

# Cache Tweaks, thanks to brees75 for this stuff
echo 2048 > /sys/devices/virtual/bdi/0:18/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/0:19/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/1:0/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:1/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:2/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:3/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:4/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:5/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:6/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:7/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:8/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:9/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:10/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:11/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:12/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:13/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:14/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/1:15/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/7:0/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:1/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:2/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:3/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:4/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:5/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:6/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:7/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:8/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:9/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:10/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:11/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:12/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:13/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:14/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:15/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:16/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:17/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:18/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:19/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:20/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:21/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:22/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:23/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:24/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:25/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:26/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:27/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:28/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:29/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:30/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:31/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:32/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:33/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:34/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:35/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:36/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:37/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/7:38/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/179:0/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/179:8/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/179:16/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/253:0/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/254:0/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:1/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:2/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:3/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:4/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:5/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:6/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:7/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:8/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:9/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:10/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:11/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:12/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:13/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:14/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:15/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:16/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:17/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:18/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:19/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:20/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:21/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:22/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:23/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:24/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:25/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:26/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:27/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:28/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:29/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:30/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:31/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:32/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:33/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:34/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:35/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:36/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:37/read_ahead_kb
echo 256 > /sys/devices/virtual/bdi/254:38/read_ahead_kb
echo 2048 > /sys/devices/virtual/bdi/default/read_ahead_kb
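As an aside, that long per-bdi block could be generated with a loop instead of ~100 hand-written lines. A minimal dry-run sketch (it only prints the commands it would run; the major:minor IDs and the 2048/256 split are taken from the list above, and the `bdi_cmd` helper name is just for illustration):

```shell
# Sketch: emit the same read_ahead_kb writes with loops.
# Dry-run only: bdi_cmd prints each command instead of writing to sysfs.
bdi_cmd() {
  # $1 = bdi id (major:minor, or "default"), $2 = read-ahead in KB
  echo "echo $2 > /sys/devices/virtual/bdi/$1/read_ahead_kb"
}

# 2048 KB entries from the list above
for id in 0:18 0:19 1:0 7:0 179:0 179:8 179:16 253:0 254:0 default; do
  bdi_cmd "$id" 2048
done

# 256 KB entries: 1:1-1:15, 7:1-7:38, 254:1-254:38
for minor in $(seq 1 15); do
  bdi_cmd "1:$minor" 256
done
for minor in $(seq 1 38); do
  bdi_cmd "7:$minor" 256
  bdi_cmd "254:$minor" 256
done
```

To actually apply the values you would drop the surrounding quotes in `bdi_cmd` and redirect for real, but note the bdi numbering caveat discussed below: those minors are not guaranteed stable.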
If you can find better settings then let me know.
Thanks! Wow. That cache sh*t is kind of crazy, and I'd hazard to say borderline unnecessary, since I believe those bdi numbers can change. Interesting. :) ;)

Other than that, it looks like all they've really changed from ours is dropping read_ahead_kb from 512 to 256 and adding the min_free_kbytes vm setting to the general queue tweaks. They also changed row to echo 5 > /sys/block/mmcblk0/queue/iosched/read_idle, where we have it at 10 (so that it will become 15 on the GN and stay at 10 on the N7). Is it actually at 5 for the N10? Can you verify that's better than 10 (or whatever the next highest is for manta)?

That's also our old, half-finished cfq. Here are our absolute latest, let me know how they compare: http://d-h.st/xZP

Please don't pass on the ones that are different just yet, since like I said some of them are still in testing.

Very cool to see that our settings are getting noticed and adopted across the community. :o :D

Thanks in advance for the help.
noop: | |
None | |
deadline: | |
read_expire | 3500 |
write_expire | 750 |
writes_starved | 1 |
front_merges | 1 |
fifo_batch | 1 |
row: | |
hp_read_quantum | 100 |
rp_read_quantum | 75 |
hp_swrite_quantum | 5 |
rp_write_quantum | 4 |
rp_swrite_quantum | 4 |
lp_read_quantum | 3 |
lp_swrite_quantum | 2 |
rd_idle_data | 10 |
rd_idle_data_freq | 25 |
cfq: | |
quantum | 8 |
fifo_expire_sync | 80 |
fifo_expire_async | 330 |
back_seek_max | 1258292 |
back_seek_penalty | 1 |
slice_sync | 60 |
slice_async | 50 |
slice_async_rq | 2 |
slice_idle | 0 |
group_idle | 10 |
low_latency | 1 |
target_latency | 300 |
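For anyone who wants to try the cfq column above, here's a small helper sketch. The `apply` function and DRY_RUN switch are my own illustration (defaulting to printing, not writing), the queue path assumes mmcblk0, and target_latency only exists on kernels that expose it:

```shell
# Sketch: apply the cfq table above. DRY_RUN=1 prints the writes
# instead of performing them; set DRY_RUN=0 on-device at your own risk.
QUEUE=/sys/block/mmcblk0/queue/iosched
DRY_RUN=1

apply() {
  # $1 = tunable name, $2 = value
  if [ "$DRY_RUN" = "1" ]; then
    echo "echo $2 > $QUEUE/$1"
  else
    echo "$2" > "$QUEUE/$1"
  fi
}

apply quantum 8
apply fifo_expire_sync 80
apply fifo_expire_async 330
apply back_seek_max 1258292
apply back_seek_penalty 1
apply slice_sync 60
apply slice_async 50
apply slice_async_rq 2
apply slice_idle 0
apply group_idle 10
apply low_latency 1
apply target_latency 300
```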
https://github.com/voku/siyahkernel3/commits/
https://github.com/dorimanx/Dorimanx-SG2-I9100-Kernel/commits/
Edit: And "Houcha" for the N4, which appears to have been forked from faux originally.
https://github.com/chadouming/Houcha/commits/test2
This is fun! Since Franco @-tagged me in the fix commit, I get notified of all the cherry picks. :D
That means deadline will need 50/500 written to get an effective 250/2500. Likely similar changes for cfq.
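The factor-of-5 arithmetic above can be sketched like this (assuming the sysfs value scales linearly, i.e. you write effective-value divided by 5, per the 50/500 example):

```shell
# Sketch: convert an intended effective latency (ms) to the sysfs value
# on a kernel that multiplies writes by FACTOR (5 here, an assumption
# from the 50 -> 250 / 500 -> 2500 deadline example).
FACTOR=5
scaled() {
  echo $(( $1 / FACTOR ))
}

scaled 250   # read_expire: write 50 to get an effective 250
scaled 2500  # write_expire: write 500 to get an effective 2500
```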