Under memory pressure, the page allocator and kswapd can go to sleep using
congestion_wait(). In two of these cases, it may not be the appropriate
action as congestion may not be the problem. This patchset replaces two
sets of instances of congestion_wait() usage with a waitqueue sleep with
the view of having the VM behaviour depend on the relevant state of the
zone instead of on congestion, which may or may not be a factor. A third
patch updates the frequency with which zone pressure is checked.
The first patch addresses the page allocator, which calls congestion_wait()
to back off. The patch adds a zone->pressure_wq to sleep on instead of the
congestion queues. If a direct reclaimer or kswapd brings the zone over
the min watermark, processes on the waitqueue are woken up.
The second patch checks zone pressure when a batch of pages from the PCP lists
is freed. The assumption is that if processes are sleeping, there is a
reasonable chance that a batch of frees can push a zone above its watermark.
The third patch addresses kswapd going to sleep when it is raising priority. As
vmscan makes more appropriate checks on congestion elsewhere, this patch
puts kswapd back on its own waitqueue to wait for either a timeout or
another process to call wakeup_kswapd.
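For illustration, the wake-up condition the first two patches describe could be modelled in userspace roughly as follows; the struct, field and function names here are assumptions for the sketch, not the actual kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical userspace model of the patchset's idea: sleepers on a
 * zone's pressure_wq are woken once free pages rise over the min
 * watermark.  Names are illustrative, not kernel code. */
struct zone_model {
    unsigned long free_pages;
    unsigned long min_watermark;
    int sleepers;   /* processes waiting on the modelled pressure_wq */
};

/* True when the zone is over its min watermark and waiters should wake */
static bool zone_pressure_relieved(const struct zone_model *z)
{
    return z->free_pages > z->min_watermark;
}

/* Model of the second patch: a batch of PCP frees may push the zone
 * over the watermark, at which point the waitqueue would be woken.
 * Returns the number of sleepers woken. */
static int free_pcp_batch(struct zone_model *z, unsigned long batch)
{
    z->free_pages += batch;
    if (zone_pressure_relieved(z)) {
        int woken = z->sleepers;   /* models wake_up(&zone->pressure_wq) */
        z->sleepers = 0;
        return woken;
    }
    return 0;
}
```

A small batch that leaves the zone below the watermark wakes nobody; a batch that crosses it wakes every waiter.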
The tricky problem is determining if this patchset is really doing the right
thing. I took three machines (X86, X86-64 and PPC64) booted with 1GB of RAM and
ran a series of tests including sysbench, iozone and a desktop latency test.
The performance results that did not involve memory pressure were fine -
no major impact due to the zone_pressure check in the free path. However,
the sysbench and iozone results varied wildly and depended somewhat on the
starting state of the machine. The objective was to see if the tests completed
faster because less time was needlessly spent waiting on congestion, but the
fact that the benchmarks were really IO-bound meant there was little difference
with the patch applied. For the record, there were both massive gains and
losses with the patch applied, but they were not consistently reproducible.
I'm somewhat at an impasse in identifying a reasonable scenario where the patch
can make a real difference. It might depend on a setup like Christian's
with many disks, which unfortunately I cannot reproduce. Otherwise, it's a
case of eyeballing the patch and stating whether it makes sense or not.
Nick, I haven't implemented the
queueing-if-a-process-is-already-waiting-for-fairness yet largely because
a proper way has to be devised to measure how "good" or "bad" this patch is.
Any comments on whether this patch is really doing the right thing or
suggestions on how it should be properly tested? Christian, minimally it
would be nice if you could retest your iozone tests to confirm the symptoms
of your problem are still being dealt with.
include/linux/mmzone.h | 3 ++
mm/internal.h | 4 +++
mm/mmzone.c | 47 ++++++++++++++++++++++++++++++++++++++++++
mm/page_alloc.c | 53 +++++++++++++++++++++++++++++++++++++++++++----
mm/vmscan.c | 13 ++++++++---
5 files changed, 111 insertions(+), 9 deletions(-)
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
> Under memory pressure, the page allocator and kswapd can go to sleep using
> congestion_wait(). In two of these cases, it may not be the appropriate
> action as congestion may not be the problem.
clear_bdi_congested() is called each time a write completes and the
queue is below the congestion threshold.
So if the page allocator or kswapd call congestion_wait() against a
non-congested queue, they'll wake up on the very next write completion.
Hence the above-quoted claim seems to me to be a significant mis-analysis and
perhaps explains why the patchset didn't seem to help anything?
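The completion-driven wakeup described above can be sketched as a small userspace model (illustrative only; the struct and thresholds are assumptions, not the kernel implementation):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model: a write completion clears the congested state
 * once the queue falls below the threshold, which is when waiters in
 * congestion_wait() would be woken. */
struct bdi_model {
    unsigned int nr_requests;          /* writes still in flight */
    unsigned int congested_threshold;  /* assumed threshold value */
    bool congested;
};

/* Returns true if waiters on the congestion queue would be woken,
 * i.e. the queue dropped below the congestion threshold. */
static bool write_completed(struct bdi_model *bdi)
{
    if (bdi->nr_requests > 0)
        bdi->nr_requests--;
    if (bdi->nr_requests < bdi->congested_threshold) {
        bdi->congested = false;   /* models clear_bdi_congested() */
        return true;              /* models wake_up(congestion_wqh) */
    }
    return false;
}
```

With a deep queue, completions do not wake waiters; once the queue drains below the threshold, the next completion does.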
Andrew Morton wrote:
> On Mon, 8 Mar 2010 11:48:20 +0000
> Mel Gorman <m...@csn.ul.ie> wrote:
>
>> Under memory pressure, the page allocator and kswapd can go to sleep using
>> congestion_wait(). In two of these cases, it may not be the appropriate
>> action as congestion may not be the problem.
>
> clear_bdi_congested() is called each time a write completes and the
> queue is below the congestion threshold.
>
> So if the page allocator or kswapd call congestion_wait() against a
> non-congested queue, they'll wake up on the very next write completion.
Well, the issue came up in all kinds of loads where you don't have any
writes at all that can wake up congestion_wait.
That's true for several benchmarks, but also for real workloads, e.g. a
backup job reading almost all files sequentially and pumping out stuff
via network.
> Hence the above-quoted claim seems to me to be a significant mis-analysis and
> perhaps explains why the patchset didn't seem to help anything?
While I might have misunderstood you and it is a mis-analysis in your
opinion, it fixes a -80% throughput regression on sequential read
workloads. That's not nothing - it's more like absolutely required :-)
You might check out the discussion with the subject "Performance
regression in scsi sequential throughput (iozone) due to "e084b -
page-allocator: preserve PFN ordering when __GFP_COLD is set"".
While the original subject is misleading from today's point of view, it
contains a lengthy discussion about exactly when/why/where time is lost
due to congestion_wait, with a lot of traces, counters, data attachments
and such.
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
Where you appear to get a kicking is if you wait on "congestion" but no
writes are involved. In that case you potentially sleep for the whole timeout
waiting on an event that is not going to occur.
> So if the page allocator or kswapd call congestion_wait() against a
> non-congested queue, they'll wake up on the very next write completion.
>
> Hence the above-quoted claim seems to me to be a significant mis-analysis and
> perhaps explains why the patchset didn't seem to help anything?
>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
>
>
> Andrew Morton wrote:
> > On Mon, 8 Mar 2010 11:48:20 +0000
> > Mel Gorman <m...@csn.ul.ie> wrote:
> >
> >> Under memory pressure, the page allocator and kswapd can go to sleep using
> >> congestion_wait(). In two of these cases, it may not be the appropriate
> >> action as congestion may not be the problem.
> >
> > clear_bdi_congested() is called each time a write completes and the
> > queue is below the congestion threshold.
> >
> > So if the page allocator or kswapd call congestion_wait() against a
> > non-congested queue, they'll wake up on the very next write completion.
>
> Well, the issue came up in all kinds of loads where you don't have any
> writes at all that can wake up congestion_wait.
> That's true for several benchmarks, but also for real workloads, e.g. a
> backup job reading almost all files sequentially and pumping out stuff
> via network.
Why is reclaim going into congestion_wait() at all if there's heaps of
clean reclaimable pagecache lying around?
(I don't think the read side of the congestion_wqh[] has ever been used, btw)
> > Hence the above-quoted claim seems to me to be a significant mis-analysis and
> > perhaps explains why the patchset didn't seem to help anything?
>
> While I might have misunderstood you and it is a mis-analysis in your
> opinion, it fixes a -80% throughput regression on sequential read
> workloads. That's not nothing - it's more like absolutely required :-)
>
> You might check out the discussion with the subject "Performance
> regression in scsi sequential throughput (iozone) due to "e084b -
> page-allocator: preserve PFN ordering when __GFP_COLD is set"".
> While the original subject is misleading from today's point of view, it
> contains a lengthy discussion about exactly when/why/where time is lost
> due to congestion_wait, with a lot of traces, counters, data attachments
> and such.
Well if we're not encountering lots of dirty pages in reclaim then we
shouldn't be waiting for writes to retire, of course.
But if we're not encountering lots of dirty pages in reclaim, we should
be reclaiming pages, normally.
I could understand reclaim accidentally going into congestion_wait() if
it hit a large pile of pages which are unreclaimable for reasons other
than being dirty, but is that happening in this case?
If not, we broke it again.
I believe it's a race, albeit one that has been there a long time.
In __alloc_pages_direct_reclaim, a process does approximately the
following
1. Enters direct reclaim
2. Calls cond_resched()
3. Drains pages if necessary
4. Attempts to allocate a page
Between steps 2 and 3, it's possible for the pages to have been reclaimed
but for another process to allocate them. The reclaimer then proceeds,
decides to try again, but calls congestion_wait() before it loops around.
Plenty of read cache reclaimed but no forward progress.
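The race can be demonstrated with a simple sequential model (illustrative only; the names and structure are assumed, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the race: between the reclaimer's cond_resched() and its own
 * allocation attempt, another task can consume the pages that direct
 * reclaim just freed. */
struct freelist { unsigned long nr_free; };

/* Direct reclaim frees nr pages; the return value models
 * did_some_progress. */
static unsigned long try_to_free(struct freelist *f, unsigned long nr)
{
    f->nr_free += nr;
    return nr;
}

/* A single allocation attempt; fails when nothing is free. */
static bool get_page(struct freelist *f)
{
    if (f->nr_free == 0)
        return false;
    f->nr_free--;
    return true;
}
```

Running reclaim, then letting a second task allocate everything in the window before the reclaimer's own attempt, reproduces the "did_some_progress && !page" case that leads to the needless congestion_wait().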
> > > Hence the above-quoted claim seems to me to be a significant mis-analysis and
> > > perhaps explains why the patchset didn't seem to help anything?
> >
> > While I might have misunderstood you and it is a mis-analysis in your
> > opinion, it fixes a -80% throughput regression on sequential read
> > workloads. That's not nothing - it's more like absolutely required :-)
> >
> > You might check out the discussion with the subject "Performance
> > regression in scsi sequential throughput (iozone) due to "e084b -
> > page-allocator: preserve PFN ordering when __GFP_COLD is set"".
> > While the original subject is misleading from today's point of view, it
> > contains a lengthy discussion about exactly when/why/where time is lost
> > due to congestion_wait, with a lot of traces, counters, data attachments
> > and such.
>
> Well if we're not encountering lots of dirty pages in reclaim then we
> shouldn't be waiting for writes to retire, of course.
>
> But if we're not encountering lots of dirty pages in reclaim, we should
> be reclaiming pages, normally.
>
We probably are.
> I could understand reclaim accidentally going into congestion_wait() if
> it hit a large pile of pages which are unreclaimable for reasons other
> than being dirty, but is that happening in this case?
>
Probably not. It's almost certainly the race I described above.
> If not, we broke it again.
>
We were broken with respect to this in the first place. That
cond_resched() is badly placed, and waiting on congestion when congestion
might not be involved is also a bit odd.
It's possible that Christian's specific problem would also be addressed
by the following patch. Christian, willing to test?
It still feels a bit unnatural though that the page allocator waits on
congestion when what it really cares about is watermarks. Even if this
patch works for Christian, I think it still has merit so will kick it a
few more times.
==== CUT HERE ====
page-allocator: Attempt page allocation immediately after direct reclaim
After a process completes direct reclaim it calls cond_resched() as
potentially it has been running a long time. When it wakes up, it
attempts to allocate a page. There is a large window during which
another process can allocate the pages reclaimed by direct reclaim. This
patch attempts to allocate a page immediately after direct reclaim but
will still go to sleep afterwards if its quantum has expired.
Signed-off-by: Mel Gorman <m...@csn.ul.ie>
---
mm/page_alloc.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a8182c8..973b7fc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1721,8 +1721,6 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
lockdep_clear_current_reclaim_state();
p->flags &= ~PF_MEMALLOC;
- cond_resched();
-
if (order != 0)
drain_all_pages();
@@ -1731,6 +1729,9 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
zonelist, high_zoneidx,
alloc_flags, preferred_zone,
migratetype);
+
+ cond_resched();
+
return page;
>> If not, we broke it again.
>>
>
> We were broken with respect to this in the first place. That
> cond_resched() is badly placed, and waiting on congestion when congestion
> might not be involved is also a bit odd.
>
> It's possible that Christian's specific problem would also be addressed
> by the following patch. Christian, willing to test?
The will is here, but there's no chance before Monday/Tuesday to get a free
machine slot - I'll post results as soon as I get them.
> It still feels a bit unnatural though that the page allocator waits on
> congestion when what it really cares about is watermarks. Even if this
> patch works for Christian, I think it still has merit so will kick it a
> few more times.
Whichever way I look at it, watermark_wait should be superior to
congestion_wait, because as Mel points out, waiting on watermarks is
what is semantically correct there.
If some day a solution arrives without any of those waits, I'm fine with
that too - e.g. by closing whatever races we have and ensuring that a
context can never hit this sequence in direct_reclaim:
1. free pages with try_to_free
2. not get one in the subsequent get_page call
But as long as we have a wait - watermark waiting > congestion waiting
(IMHO).
> ==== CUT HERE ====
> page-allocator: Attempt page allocation immediately after direct reclaim
[...]
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
> > It still feels a bit unnatural though that the page allocator waits on
> > congestion when what it really cares about is watermarks. Even if this
> > patch works for Christian, I think it still has merit so will kick it a
> > few more times.
>
> Whichever way I look at it, watermark_wait should be superior to
> congestion_wait, because as Mel points out, waiting on watermarks is
> what is semantically correct there.
If a direct-reclaimer waits for some thresholds to be achieved then what
task is doing reclaim?
Ultimately, kswapd. This will introduce a hard dependency upon kswapd
activity. This might introduce scalability problems. And latency
problems if kswapd is off doodling with a slow device (say), or doing a
journal commit. And perhaps deadlocks if kswapd tries to take a lock
which one of the waiting-for-watermark direct reclaimers holds.
Generally, kswapd is an optional, best-effort latency optimisation
thing and we haven't designed for it to be a critical service.
Probably stuff would break were we to do so.
This is one of the reasons why we avoided creating such dependencies in
reclaim. Instead, what we do when a reclaimer is encountering lots of
dirty or in-flight pages is
msleep(100);
then try again. We're waiting for the disks, not kswapd.
Only the hard-wired 100 is a bit silly, so we made the "100" variable,
inversely dependent upon the number of disks and their speed. If you
have more and faster disks then you sleep for less time.
And that's what congestion_wait() does, in a very simplistic fashion.
It's a facility which direct-reclaimers use to ratelimit themselves in
inverse proportion to the speed with which the system can retire writes.
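That observable behaviour can be modelled very simply (a sketch under assumed semantics, not the actual implementation): the caller sleeps until either the timeout expires or a write completes against a queue below the congestion threshold.

```c
#include <assert.h>

/* Simplified model of congestion_wait()'s observable behaviour.
 * next_write_completion is the time (in jiffies, say) until the next
 * write completes; 0 means no writes are outstanding at all. */
static unsigned long congestion_wait_model(unsigned long timeout,
                                           unsigned long next_write_completion)
{
    /* no outstanding writes: nothing will ever issue the wakeup */
    if (next_write_completion == 0 || next_write_completion > timeout)
        return timeout;                 /* slept the whole timeout */
    return next_write_completion;       /* woken early by a completion */
}
```

The model makes the thread's point visible: with no writes outstanding, the sleeper always burns the full timeout, which is exactly the read-only workload case Christian reports.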
So then why not let the process do something about it instead of going to
sleep if no writes are outstanding? It might be able to take care of its
bad situation alone, maybe by calling try_to_free again.
> Generally, kswapd is an optional, best-effort latency optimisation
> thing and we haven't designed for it to be a critical service.
> Probably stuff would break were we to do so.
>
>
> This is one of the reasons why we avoided creating such dependencies in
> reclaim. Instead, what we do when a reclaimer is encountering lots of
> dirty or in-flight pages is
>
> msleep(100);
>
> then try again. We're waiting for the disks, not kswapd.
>
> Only the hard-wired 100 is a bit silly, so we made the "100" variable,
> inversely dependent upon the number of disks and their speed. If you
> have more and faster disks then you sleep for less time.
>
> And that's what congestion_wait() does, in a very simplistic fashion.
> It's a facility which direct-reclaimers use to ratelimit themselves in
> inverse proportion to the speed with which the system can retire writes.
I would totally agree if I didn't have a scenario suffering so much
from that mechanism.
The scenario Mel, Nick and I discussed for a while has no writes at
all, but a lot of page cache reads.
In this scenario the direct reclaimer quite frequently runs into the case of
"did_some_progress && !page", which leads to congestion_wait calls in the
caller of direct_reclaim - eventually always waiting the full timeout, as
there are no writes.
I think reclaim in this case is just done by dropping clean page cache
pages in try_to_free_pages -> so still no writes.
For a solution it is hard to find the right layer, as the race is in
direct_reclaim but the wait call is outside of it.
The alternatives we have so far are:
a) congestion_wait, which works fine with writes in flight in the system,
but with a huge drawback for non-writing systems.
b) watermark wait, which covers writes like congestion_wait does (if they
free up enough), but also any other kind of reclaimer, like processes
freeing up stuff or other page cache droppers.
new suggestions:
These ideas came up when trying to view it from your position. I don't
know exactly if all of them are doable/feasible, but as we are going to wait
anyway, we could do complex things in that path.
c) If direct reclaim did reasonable progress in try_to_free but did not
get a page, AND there is no write in flight at all then let it try again
to free up something.
This could be extended by some kind of max retry to avoid some weird
looping cases as well.
d) Another way might be as easy as letting congestion_wait return
immediately if there are no outstanding writes - this would keep the
behavior for cases with writes and avoid the "always waiting the full
timeout" issue without writes.
e) like d, but let it go to the watermark wait if no writes exist.
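Option d) could be sketched like this (an illustrative model with assumed names, not a proposed kernel patch):

```c
#include <assert.h>

/* Sketch of option d): a congestion_wait variant that returns
 * immediately when no writes are outstanding, so read-only workloads
 * never burn the full timeout.  All parameters are assumptions for
 * the model. */
static unsigned long congestion_wait_writes(unsigned long timeout,
                                            unsigned long outstanding_writes,
                                            unsigned long next_write_completion)
{
    if (outstanding_writes == 0)
        return 0;                       /* nothing to wait for: return at once */
    if (next_write_completion < timeout)
        return next_write_completion;   /* woken by a write completion */
    return timeout;                     /* slept the full timeout */
}
```

For workloads with writes in flight this behaves like the existing congestion_wait; without writes it degenerates to an immediate return, as option d) proposes.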
So I don't consider option a) a solution, as we have real-world scenarios
with huge impacts. Even if it puts more burden on kswapd's shoulders,
b) is still better - remember, as long as writes are there it's almost the
same as congestion_wait, but it waits for the right moment to wake up
(awoken allocs will still fail if below the watermark).
And c-e), well, I'm not sure yet; just things that came to my mind.
For the moment I would suggest going forward with Mel's watermark wait
towards the stable tree, as it "fixes" a huge issue there (or rather, its
symptoms) and the patch is small, neat and matches .32.
We can then separately discuss, without any pressure, how we can
finally get rid of all these race/latency/kswapd issues in 2.6.3n+1
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
--
Well, not quite. The direct reclaimer will still wake up after a timeout
and try again regardless of whether watermarks have been met or not. The
intention is to back off after direct reclaim has failed. Granted, the
window between a direct reclaim finishing and an allocation attempt
occurring is unnecessarily large. This may be addressed by the patch that
changes where cond_resched() is called.
> This will introduce a hard dependency upon kswapd
> activity. This might introduce scalability problems. And latency
> problems if kswapd if off doodling with a slow device (say), or doing a
> journal commit. And perhaps deadlocks if kswapd tries to take a lock
> which one of the waiting-for-watermark direct relcaimers holds.
>
What lock could they be holding? Even if that is the case, the direct
reclaimers do not wait indefinitely.
> Generally, kswapd is an optional, best-effort latency optimisation
> thing and we haven't designed for it to be a critical service.
> Probably stuff would break were we to do so.
>
No disagreements there.
> This is one of the reasons why we avoided creating such dependencies in
> reclaim. Instead, what we do when a reclaimer is encountering lots of
> dirty or in-flight pages is
>
> msleep(100);
>
> then try again. We're waiting for the disks, not kswapd.
>
> Only the hard-wired 100 is a bit silly, so we made the "100" variable,
> inversely dependent upon the number of disks and their speed. If you
> have more and faster disks then you sleep for less time.
>
> And that's what congestion_wait() does, in a very simplistic fashion.
> It's a facility which direct-reclaimers use to ratelimit themselves in
> inverse proportion to the speed with which the system can retire writes.
>
The problem being hit is when a direct reclaimer goes to sleep waiting
on congestion when in reality there are not lots of dirty or in-flight
pages. It goes to sleep for the wrong reasons and doesn't get woken up
again until the timeout expires.
Bear in mind that even if congestion clears, it just means that dirty
pages are now clean, although I admit that the next direct reclaim
is going to encounter clean pages and should succeed.
Let's see how the other patch that changes when cond_resched() gets called
gets on. If it also works out, then it's harder to justify this patch.
If it doesn't work out, then it'll need to be kicked another few times.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
Unfortunately, "page-allocator: Attempt page allocation immediately after
direct reclaim" doesn't help. There is no improvement in the regression we
had fixed with the watermark wait patch.
-> *kick*^^
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
> c) If direct reclaim did reasonable progress in try_to_free but did not
> get a page, AND there is no write in flight at all then let it try again
> to free up something.
> This could be extended by some kind of max retry to avoid some weird
> looping cases as well.
>
> d) Another way might be as easy as letting congestion_wait return
> immediately if there are no outstanding writes - this would keep the
> behavior for cases with writes and avoid the "always waiting the full
> timeout" issue without writes.
They're pretty much equivalent and would work. But there are two
things I still don't understand:
1: Why is direct reclaim calling congestion_wait() at all? If no
writes are going on there's lots of clean pagecache around so reclaim
should trivially succeed. What's preventing it from doing so?
2: This is, I think, new behaviour. A regression. What caused it?
Unfortunately, this regression is very poorly understood. I haven't been able
to reproduce it locally and while Christian has provided various debugging
information, it still isn't clear why the problem occurs now.
> 1: Why is direct reclaim calling congestion_wait() at all? If no
> writes are going on there's lots of clean pagecache around so reclaim
> should trivially succeed. What's preventing it from doing so?
>
Memory pressure, I think. The workload involves 16 processes (see
http://lkml.org/lkml/2009/12/7/237). I suspect they are all direct reclaimers
and some processes are getting their pages stolen before they have a
chance to allocate them. It's worth knowing that adding a small amount of
memory "fixes" this problem.
> 2: This is, I think, new behaviour. A regression. What caused it?
>
Short answer, I don't know.
Longer answer: initially, this was reported as being caused by commit e084b2d:
page-allocator: preserve PFN ordering when __GFP_COLD is set, but it was never
established why, and reverting it was unpalatable because it fixed another
performance problem. According to Christian, the controller does nothing
with the merging of IO requests, and he was very sure about this. All the
patch does is change the order that pages are returned in, and the timing
slightly due to differences in cache hotness, so the fact that such
a small change could make a big difference in reclaim later was surprising.
There were other bugs that might have complicated this, such as errors in free
page counters, but they were fixed up and the problem still did not go away.
It was after much debugging that it was found that direct reclaim was
returning, the subsequent allocation attempt was failing and congestion_wait()
was being called, but without dirty pages, congestion or writes, it waits for
the full timeout. congestion_wait() was also being called a lot more
frequently, so something was causing reclaim to fail more often
(http://lkml.org/lkml/2009/12/18/150). Again, I couldn't figure out why
e084b2d would make a difference.
Later, it got even worse because patches e084b2d and 5f8dcc21 had to be
reverted in 2.6.33 to "resolve" the problem. 5f8dcc21 was more plausible as it
affected how many pages were on the per-cpu lists, but making it behave like
2.6.32 did not help the situation. Again, it looked like a very small timing
problem, but exactly why reclaim would fail could not be isolated. Again,
other bugs were found and fixed but made no difference.
What led to this patch was recognising that we could enter congestion_wait()
and wait the entire timeout because there were no writes in progress or dirty
pages to be cleaned. As what was really of interest in this path was
watermarks, the patch intended to make the page allocator care about
watermarks instead of congestion. We knew it was treating symptoms
rather than understanding the underlying problem, but I was somewhat at a
loss to explain why small changes in timing made such a large
difference.
Any new insight is welcome.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
I looked at this a bit closer using an iozone test very similar to
Christian's. Despite buying a number of disks, I still can't reproduce his
problem but I instrumented congestion_wait counts and times similar to
what he did.
2.6.29-instrument:congestion_waittime 990
2.6.30-instrument:congestion_waittime 2823
2.6.31-instrument:congestion_waittime 193169
2.6.32-instrument:congestion_waittime 228890
2.6.33-instrument:congestion_waittime 785529
2.6.34-rc1-instrument:congestion_waittime 797178
So in the problem window, there were *definite* increases in the time spent
in congestion_wait and the number of times it was called. I'll look
closer at this tomorrow and Monday and see if I can pin down what is
happening.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
120+ kernels and a lot of hurt later:
Short summary - The number of times kswapd and the page allocator have been
calling congestion_wait, and the length of time spent in there,
has been increasing since 2.6.29. Oddly, it has little to do
with the page allocator itself.
Test scenario
=============
X86-64 machine 1 socket 4 cores
4 consumer-grade disks connected as RAID-0 - software raid. The on-board RAID
controller is a piece of crap, and a decent RAID card would have blown
the budget.
Booted with mem=256 to ensure the test is fully IO-bound and to match more
closely what Christian was doing
At each test, the disks are partitioned, the raid arrays created and an
ext2 filesystem created. iozone sequential read/write tests are run with
increasing number of processes up to 64. Each test creates 8G of files. i.e.
1 process = 8G. 2 processes = 2x4G etc
iozone -s 8388608 -t 1 -r 64 -i 0 -i 1
iozone -s 4194304 -t 2 -r 64 -i 0 -i 1
etc.
Metrics
=======
Each kernel was instrumented to collect the following stats
pg-Stall Page allocator stalled calling congestion_wait
pg-Wait The amount of time spent in congestion_wait
pg-Rclm Pages reclaimed by direct reclaim
ksd-stall balance_pgdat() (i.e. kswapd) stalled on congestion_wait
ksd-wait Time spent by balance_pgdat in congestion_wait
Large differences in these do not necessarily show up in iozone because the
disks are so slow that the stalls are a tiny percentage overall. However, in
the event that there are many disks, it might be a greater problem. I believe
Christian is hitting a corner case where small delays trigger a much larger
stall.
Why The Increases
=================
The big problem here is that there was no one change. Instead, it has been
a steady build-up of a number of problems. The ones I identified are in the
block IO, CFQ IO scheduler, tty and page reclaim. Some of these are fixed
but need backporting and others I expect are a major surprise. Whether they
are worth backporting or not heavily depends on whether Christian's problem
is resolved.
Some of the "fixes" below are obviously not fixes at all. Gathering this data
took a significant amount of time. It'd be nice if people more familiar with
the relevant problem patches could spring a theory or patch.
The Problems
============
1. Block layer congestion queue async/sync difficulty
fix title: asyncconfusion
fixed in mainline? yes, in 2.6.31
affects: 2.6.30
2.6.30 replaced congestion queues based on read/write with sync/async
in commit 1faa16d2. Problems were identified with this and fixed in
2.6.31 but not backported. Backporting 8aa7e847 and 373c0a7e brings
2.6.30 in line with 2.6.29 performance. It's not an issue for 2.6.31.
2. TTY using high order allocations more frequently
fix title: ttyfix
fixed in mainline? yes, in 2.6.34-rc2
affects: 2.6.31 to 2.6.34-rc1
2.6.31 made pty's use the same buffering logic as tty. Unfortunately,
it was also allowed to make high-order GFP_ATOMIC allocations. This
triggers some high-order reclaim and introduces some stalls. It's
fixed in 2.6.34-rc2 but needs back-porting.
3. Page reclaim evict-once logic from 56e49d21 hurts really badly
fix title: revertevict
fixed in mainline? no
affects: 2.6.31 to now
For reasons that are not immediately obvious, the evict-once patches
*really* hurt the time spent on congestion and the number of pages
reclaimed. Rik, I'm afraid I'm punting this to you for explanation
because clearly you tested this for AIM7 and might have some
theories. For the purposes of testing, I just reverted the changes.
4. CFQ scheduler fairness commit 718eee057 causes some hurt
fix title: none available
fixed in mainline? no
affects: 2.6.33 to now
A bisection fingerprinted this patch as the problem introduced
between 2.6.32 and 2.6.33. It increases the number of times the page
allocator stalls by a small amount but drastically increases the number
of pages reclaimed. It's not clear why the commit is such a problem.
Unfortunately, I could not test a revert of this patch. The CFQ and
block IO changes made in this window were extremely convoluted and
overlapped heavily with a large number of patches altering the same
code as touched by commit 718eee057. I tried reverting everything
made on and after this commit, but the results were unsatisfactory.
Hence, there is no fix in the results below.
Results
=======
Here are the highlights of kernels tested. I'm omitting the bisection
results for obvious reasons. The metrics were gathered at two points;
after filesystem creation and after IOZone completed.
The lower the number for each metric, the better.
After Filesystem Setup After IOZone
pg-Stall pg-Wait pg-Rclm ksd-stall ksd-wait pg-Stall pg-Wait pg-Rclm ksd-stall ksd-wait
2.6.29 0 0 0 2 1 4 3 183 152 0
2.6.30 1 5 34 1 25 783 3752 31939 76 0
2.6.30-asyncconfusion 0 0 0 3 1 44 60 2656 893 0
2.6.30.10 0 0 0 2 43 777 3699 32661 74 0
2.6.30.10-asyncconfusion 0 0 0 2 1 36 88 1699 1114 0
asyncconfusion can be back-ported easily to 2.6.30.10. Performance is not
perfectly in line with 2.6.29 but it's better.
2.6.31 0 0 0 3 1 49175 245727 2730626 176344 0
2.6.31-revertevict 0 0 0 3 2 31 147 1887 114 0
2.6.31-ttyfix 0 0 0 2 2 46238 231000 2549462 170912 0
2.6.31-ttyfix-revertevict 0 0 0 3 0 7 35 448 121 0
2.6.31.12 0 0 0 2 0 68897 344268 4050646 183523 0
2.6.31.12-revertevict 0 0 0 3 1 18 87 1009 147 0
2.6.31.12-ttyfix 0 0 0 2 0 62797 313805 3786539 173398 0
2.6.31.12-ttyfix-revertevict 0 0 0 3 2 7 35 448 199 0
Applying the tty fixes from 2.6.34-rc2 and getting rid of the evict-once
patches bring things back in line with 2.6.29 again.
Rik, any theory on evict-once?
2.6.32 0 0 0 3 2 44437 221753 2760857 132517 0
2.6.32-revertevict 0 0 0 3 2 35 14 1570 460 0
2.6.32-ttyfix 0 0 0 2 0 60770 303206 3659254 166293 0
2.6.32-ttyfix-revertevict 0 0 0 3 0 55 62 2496 494 0
2.6.32.10 0 0 0 2 1 90769 447702 4251448 234868 0
2.6.32.10-revertevict 0 0 0 3 2 148 597 8642 478 0
2.6.32.10-ttyfix 0 0 0 3 0 91729 453337 4374070 238593 0
2.6.32.10-ttyfix-revertevict 0 0 0 3 1 65 146 3408 347 0
Again, fixing tty and reverting evict-once helps bring figures more in line
with 2.6.29.
2.6.33 0 0 0 3 0 152248 754226 4940952 267214 0
2.6.33-revertevict 0 0 0 3 0 883 4306 28918 507 0
2.6.33-ttyfix 0 0 0 3 0 157831 782473 5129011 237116 0
2.6.33-ttyfix-revertevict 0 0 0 2 0 1056 5235 34796 519 0
2.6.33.1 0 0 0 3 1 156422 776724 5078145 234938 0
2.6.33.1-revertevict 0 0 0 2 0 1095 5405 36058 477 0
2.6.33.1-ttyfix 0 0 0 3 1 136324 673148 4434461 236597 0
2.6.33.1-ttyfix-revertevict 0 0 0 1 1 1339 6624 43583 466 0
At this point, the CFQ commit "cfq-iosched: fairness for sync no-idle
queues" has lodged itself deep within CFQ and I couldn't tear it out or
see how to fix it. Fixing tty and reverting evict-once helps but the number
of stalls is significantly increased and a much larger number of pages get
reclaimed overall.
Corrado?
2.6.34-rc1 0 0 0 1 1 150629 746901 4895328 239233 0
2.6.34-rc1-revertevict 0 0 0 1 0 2595 12901 84988 622 0
2.6.34-rc1-ttyfix 0 0 0 1 1 159603 791056 5186082 223458 0
2.6.34-rc1-ttyfix-revertevict 0 0 0 0 0 1549 7641 50484 679 0
Again, ttyfix and revertevict help a lot but CFQ needs to be fixed to get
back to 2.6.29 performance.
Next Steps
==========
Jens, any problems with me backporting the async/sync fixes from 2.6.31 to
2.6.30.x (assuming that is still maintained, Greg?)?
Rik, any suggestions on what can be done with evict-once?
Corrado, any suggestions on what can be done with CFQ?
Christian, can you test the following amalgamated patch on 2.6.32.10 and
2.6.33 please? Note it's 2.6.32.10 because the patches below will not apply
cleanly to 2.6.32 but they will against 2.6.33. It's a combination of ttyfix
and revertevict. If your problem goes away, it implies that the stalls I
can measure are roughly correlated with the more significant problem you have.
===== CUT HERE =====
From d9661adfb8e53a7647360140af3b92284cbe52d4 Mon Sep 17 00:00:00 2001
From: Alan Cox <al...@linux.intel.com>
Date: Thu, 18 Feb 2010 16:43:47 +0000
Subject: [PATCH] tty: Keep the default buffering to sub-page units
We allocate during interrupts so while our buffering is normally diced up
small anyway on some hardware at speed we can pressure the VM excessively
for page pairs. We don't really need big buffers to be linear so don't try
so hard.
In order to make this work well we will tidy up excess callers to request_room,
which cannot itself enforce this break up.
Signed-off-by: Alan Cox <al...@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gre...@suse.de>
diff --git a/drivers/char/tty_buffer.c b/drivers/char/tty_buffer.c
index 66fa4e1..f27c4d6 100644
--- a/drivers/char/tty_buffer.c
+++ b/drivers/char/tty_buffer.c
@@ -247,7 +247,8 @@ int tty_insert_flip_string(struct tty_struct *tty, const unsigned char *chars,
{
int copied = 0;
do {
- int space = tty_buffer_request_room(tty, size - copied);
+ int goal = min(size - copied, TTY_BUFFER_PAGE);
+ int space = tty_buffer_request_room(tty, goal);
struct tty_buffer *tb = tty->buf.tail;
/* If there is no space then tb may be NULL */
if (unlikely(space == 0))
@@ -283,7 +284,8 @@ int tty_insert_flip_string_flags(struct tty_struct *tty,
{
int copied = 0;
do {
- int space = tty_buffer_request_room(tty, size - copied);
+ int goal = min(size - copied, TTY_BUFFER_PAGE);
+ int space = tty_buffer_request_room(tty, goal);
struct tty_buffer *tb = tty->buf.tail;
/* If there is no space then tb may be NULL */
if (unlikely(space == 0))
diff --git a/include/linux/tty.h b/include/linux/tty.h
index 6abfcf5..d96e588 100644
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -68,6 +68,16 @@ struct tty_buffer {
unsigned long data[0];
};
+/*
+ * We default to dicing tty buffer allocations to this many characters
+ * in order to avoid multiple page allocations. We assume tty_buffer itself
+ * is under 256 bytes. See tty_buffer_find for the allocation logic this
+ * must match
+ */
+
+#define TTY_BUFFER_PAGE ((PAGE_SIZE - 256) / 2)
+
+
struct tty_bufhead {
struct delayed_work work;
spinlock_t lock;
From 352fa6ad16b89f8ffd1a93b4419b1a8f2259feab Mon Sep 17 00:00:00 2001
From: Mel Gorman <m...@csn.ul.ie>
Date: Tue, 2 Mar 2010 22:24:19 +0000
Subject: [PATCH] tty: Take a 256 byte padding into account when buffering below sub-page units
The TTY layer takes some care to ensure that only sub-page allocations
are made with interrupts disabled. It does this by setting a goal of
"TTY_BUFFER_PAGE" to allocate. Unfortunately, while TTY_BUFFER_PAGE takes the
size of tty_buffer into account, it fails to account that tty_buffer_find()
rounds the buffer size out to the next 256 byte boundary before adding on
the size of the tty_buffer.
This patch adjusts the TTY_BUFFER_PAGE calculation to take into account the
size of the tty_buffer and the padding. Once applied, tty_buffer_alloc()
should not require high-order allocations.
Signed-off-by: Mel Gorman <m...@csn.ul.ie>
Cc: stable <sta...@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gre...@suse.de>
diff --git a/include/linux/tty.h b/include/linux/tty.h
index 568369a..593228a 100644
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -70,12 +70,13 @@ struct tty_buffer {
/*
* We default to dicing tty buffer allocations to this many characters
- * in order to avoid multiple page allocations. We assume tty_buffer itself
- * is under 256 bytes. See tty_buffer_find for the allocation logic this
- * must match
+ * in order to avoid multiple page allocations. We know the size of
+ * tty_buffer itself but it must also be taken into account that the
+ * the buffer is 256 byte aligned. See tty_buffer_find for the allocation
+ * logic this must match
*/
-#define TTY_BUFFER_PAGE ((PAGE_SIZE - 256) / 2)
+#define TTY_BUFFER_PAGE (((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF)
struct tty_bufhead {
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bf9213b..5ba0d9a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -94,7 +94,6 @@ extern void mem_cgroup_note_reclaim_priority(struct mem_cgroup *mem,
extern void mem_cgroup_record_reclaim_priority(struct mem_cgroup *mem,
int priority);
int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg);
-int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg);
unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
struct zone *zone,
enum lru_list lru);
@@ -243,12 +242,6 @@ mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg)
return 1;
}
-static inline int
-mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg)
-{
- return 1;
-}
-
static inline unsigned long
mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg, struct zone *zone,
enum lru_list lru)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 66035bf..bbb0eda 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -843,17 +843,6 @@ int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg)
return 0;
}
-int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg)
-{
- unsigned long active;
- unsigned long inactive;
-
- inactive = mem_cgroup_get_local_zonestat(memcg, LRU_INACTIVE_FILE);
- active = mem_cgroup_get_local_zonestat(memcg, LRU_ACTIVE_FILE);
-
- return (active > inactive);
-}
-
unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
struct zone *zone,
enum lru_list lru)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 692807f..5512301 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1428,59 +1428,13 @@ static int inactive_anon_is_low(struct zone *zone, struct scan_control *sc)
return low;
}
-static int inactive_file_is_low_global(struct zone *zone)
-{
- unsigned long active, inactive;
-
- active = zone_page_state(zone, NR_ACTIVE_FILE);
- inactive = zone_page_state(zone, NR_INACTIVE_FILE);
-
- return (active > inactive);
-}
-
-/**
- * inactive_file_is_low - check if file pages need to be deactivated
- * @zone: zone to check
- * @sc: scan control of this context
- *
- * When the system is doing streaming IO, memory pressure here
- * ensures that active file pages get deactivated, until more
- * than half of the file pages are on the inactive list.
- *
- * Once we get to that situation, protect the system's working
- * set from being evicted by disabling active file page aging.
- *
- * This uses a different ratio than the anonymous pages, because
- * the page cache uses a use-once replacement algorithm.
- */
-static int inactive_file_is_low(struct zone *zone, struct scan_control *sc)
-{
- int low;
-
- if (scanning_global_lru(sc))
- low = inactive_file_is_low_global(zone);
- else
- low = mem_cgroup_inactive_file_is_low(sc->mem_cgroup);
- return low;
-}
-
-static int inactive_list_is_low(struct zone *zone, struct scan_control *sc,
- int file)
-{
- if (file)
- return inactive_file_is_low(zone, sc);
- else
- return inactive_anon_is_low(zone, sc);
-}
-
static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
struct zone *zone, struct scan_control *sc, int priority)
{
int file = is_file_lru(lru);
- if (is_active_lru(lru)) {
- if (inactive_list_is_low(zone, sc, file))
- shrink_active_list(nr_to_scan, zone, sc, priority, file);
+ if (lru == LRU_ACTIVE_FILE) {
+ shrink_active_list(nr_to_scan, zone, sc, priority, file);
return 0;
}
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
Thanks for all your effort in searching for the real cause behind
congestion_wait becoming such a time sink for some benchmarks.
>
> 2.6.32 0 0 0 3 2 44437 221753 2760857 132517 0
> 2.6.32-revertevict 0 0 0 3 2 35 14 1570 460 0
> 2.6.32-ttyfix 0 0 0 2 0 60770 303206 3659254 166293 0
> 2.6.32-ttyfix-revertevict 0 0 0 3 0 55 62 2496 494 0
> 2.6.32.10 0 0 0 2 1 90769 447702 4251448 234868 0
> 2.6.32.10-revertevict 0 0 0 3 2 148 597 8642 478 0
> 2.6.32.10-ttyfix 0 0 0 3 0 91729 453337 4374070 238593 0
> 2.6.32.10-ttyfix-revertevict 0 0 0 3 1 65 146 3408 347 0
>
> Again, fixing tty and reverting evict-once helps bring figures more in line
> with 2.6.29.
>
> 2.6.33 0 0 0 3 0 152248 754226 4940952 267214 0
> 2.6.33-revertevict 0 0 0 3 0 883 4306 28918 507 0
> 2.6.33-ttyfix 0 0 0 3 0 157831 782473 5129011 237116 0
> 2.6.33-ttyfix-revertevict 0 0 0 2 0 1056 5235 34796 519 0
> 2.6.33.1 0 0 0 3 1 156422 776724 5078145 234938 0
> 2.6.33.1-revertevict 0 0 0 2 0 1095 5405 36058 477 0
> 2.6.33.1-ttyfix 0 0 0 3 1 136324 673148 4434461 236597 0
> 2.6.33.1-ttyfix-revertevict 0 0 0 1 1 1339 6624 43583 466 0
>
[...]
>
> Christian, can you test the following amalgamated patch on 2.6.32.10 and
> 2.6.33 please? Note it's 2.6.32.10 because the patches below will not apply
> cleanly to 2.6.32 but it will against 2.6.33. It's a combination of ttyfix
> and revertevict. If your problem goes away, it implies that the stalls I
> can measure are roughly correlated to the more significant problem you have.
While your tty&evict patch might fix something as seen by your numbers,
it unfortunately doesn't affect my big throughput loss.
Again, the scenario was 4, 8 and 16 iozone threads doing sequential reads
with 2GB files and one disk per process, running on a s390x machine with
4 CPUs and 256MB of memory.
My table shows the throughput deviation from plain 2.6.32 in percent.
percentage 4thr 8thr 16thr
2.6.32 0.00% 0.00% 0.00%
2.6.32.10 (stable) 4.44% 7.97% 4.11%
2.6.32.10-ttyfix-revertevict 3.33% 6.64% 5.07%
2.6.33 5.33% -2.82% -10.87%
2.6.33-ttyfix-revertevict 3.33% -3.32% -10.51%
2.6.32-watermarkwait 40.00% 58.47% 42.03%
In terms of throughput for my load, your patch doesn't change anything
significantly above the noise level of the test case (which is around
~1%). The fix probably even causes a slight performance decrease in the
low-thread cases.
For better comparison I added a 2.6.32 run with your watermark wait
patch which is still the only one fixing the issue.
That said I'd still love to see watermark wait getting accepted :-)
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
> Test scenario
> =============
> X86-64 machine 1 socket 4 cores
> 4 consumer-grade disks connected as RAID-0 - software raid. RAID controller
> on-board and a piece of crap, and a decent RAID card could blow
> the budget.
> Booted mem=256 to ensure it is fully IO-bound and match closer to what
> Christian was doing
With that many disks, you can easily have dozens of megabytes
of data in flight to the disk at once. That is a major
fraction of memory.
In fact, you might have all of the inactive file pages under
IO...
> 3. Page reclaim evict-once logic from 56e49d21 hurts really badly
> fix title: revertevict
> fixed in mainline? no
> affects: 2.6.31 to now
>
> For reasons that are not immediately obvious, the evict-once patches
> *really* hurt the time spent on congestion and the number of pages
> reclaimed. Rik, I'm afaid I'm punting this to you for explanation
> because clearly you tested this for AIM7 and might have some
> theories. For the purposes of testing, I just reverted the changes.
The patch helped IO tests with reasonable amounts of memory
available, because the VM can cache frequently used data
much more effectively.
This comes at the cost of caching less recently accessed
use-once data, which should not be an issue since the data
is only used once...
> Rik, any theory on evict-once?
No real theories yet, just the observation that your revert
appears to be buggy (see below) and the possibility that your
test may have all of the inactive file pages under IO...
Can you reproduce the stall if you lower the dirty limits?
> static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
> struct zone *zone, struct scan_control *sc, int priority)
> {
> int file = is_file_lru(lru);
>
> - if (is_active_lru(lru)) {
> - if (inactive_list_is_low(zone, sc, file))
> - shrink_active_list(nr_to_scan, zone, sc, priority, file);
> + if (lru == LRU_ACTIVE_FILE) {
> + shrink_active_list(nr_to_scan, zone, sc, priority, file);
> return 0;
> }
Your revert is buggy. With this change, anonymous pages will
never get deactivated via shrink_list.
It will go to the other stable kernels for their next round of releases
now that it is in Linus's tree.
> Next Steps
> ==========
>
> Jens, any problems with me backporting the async/sync fixes from 2.6.31 to
> 2.6.30.x (assuming that is still maintained, Greg?)?
No, .30 is no longer being maintained.
thanks,
greg k-h
This is true. The CFQ and block IO changes in that window are almost
impossible to properly bisect and isolate as individual changes. There were
multiple dependent patches that modified each other's changes. It's unclear
if this modification can even be isolated, although your suggestion below
is the best bet.
> * reads (and sync writes):
> * before, we serviced a single process for 100ms, then switched to
> an other, and so on.
> * after, we go round robin for random requests (they get a unified
> time slice, like buffered writes do), and we have consecutive time
> slices for sequential requests, but the length of the slice is reduced
> when the number of concurrent processes doing I/O increases.
>
> This means that with 16 processes doing sequential I/O on the same
> disk, before you were switching between processes every 100ms, and now
> every 32ms. The old behaviour can be brought back by setting
> /sys/block/sd*/queue/iosched/low_latency to 0.
Will try this and see what happens.
> For random I/O, the situation (going round robin, it will translate to
> switching every 8 ms on average) is not revertable via flags.
>
At the moment, I'm not testing random IO so it shouldn't be a factor in
the tests.
> >
> > 2.6.34-rc1 0 0 0 1 1 150629 746901 4895328 239233 0
> > 2.6.34-rc1-revertevict 0 0 0 1 0 2595 12901 84988 622 0
> > 2.6.34-rc1-ttyfix 0 0 0 1 1 159603 791056 5186082 223458 0
> > 2.6.34-rc1-ttyfix-revertevict 0 0 0 0 0 1549 7641 50484 679 0
> >
> > Again, ttyfix and revertevict help a lot but CFQ needs to be fixed to get
> > back to 2.6.29 performance.
> >
> > Next Steps
> > ==========
> >
> > Jens, any problems with me backporting the async/sync fixes from 2.6.31 to
> > 2.6.30.x (assuming that is still maintained, Greg?)?
> >
> > Rik, any suggestions on what can be done with evict-once?
> >
> > Corrado, any suggestions on what can be done with CFQ?
>
> If my intuition that switching between processes too often is
> detrimental when you have memory pressure (higher probability to need
> to re-page-in some of the pages that were just discarded), I suggest
> trying setting low_latency to 0, and maybe increasing the slice_sync
> (to get more slice to a single process before switching to an other),
> slice_async (to give more uninterruptible time to buffered writes) and
> slice_async_rq (to higher the limit of consecutive write requests can
> be sent to disk).
> While this would normally lead to a bad user experience on a system
> with plenty of memory, it should keep things acceptable when paging in
> / swapping / dirty page writeback is overwhelming.
>
Christian, would you be able to follow the same instructions and see
whether they make a difference to your test? It is known that in your
situation memory is unusually low for the size of the workload, so it's
a possibility.
Thanks Corrado.
> Corrado
Great.
> > Next Steps
> > ==========
> >
> > Jens, any problems with me backporting the async/sync fixes from 2.6.31 to
> > 2.6.30.x (assuming that is still maintained, Greg?)?
>
> No, .30 is no longer being maintained.
>
Right, I won't lose any sleep over 2.6.30.dodo so :)
Thanks
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
Another parameter that is worth tweaking in this case is the
readahead size. If the readahead size is too large for the available
memory, we might be reading, then discarding, and then reading again
the same pages.
I would also like to see some iostat output (iostat -kx 5 >
iostat.log) during the experiment run, to better understand what's
happening.
Thanks,
Corrado
On Mon, Mar 22, 2010 at 11:50:54PM +0000, Mel Gorman wrote:
[...]
> 3. Page reclaim evict-once logic from 56e49d21 hurts really badly
> fix title: revertevict
> fixed in mainline? no
> affects: 2.6.31 to now
>
> For reasons that are not immediately obvious, the evict-once patches
> *really* hurt the time spent on congestion and the number of pages
> reclaimed. Rik, I'm afaid I'm punting this to you for explanation
> because clearly you tested this for AIM7 and might have some
> theories. For the purposes of testing, I just reverted the changes.
>
[...]
> Results
> =======
>
> Here are the highlights of kernels tested. I'm omitting the bisection
> results for obvious reasons. The metrics were gathered at two points;
> after filesystem creation and after IOZone completed.
>
> The lower the number for each metric, the better.
>
> After Filesystem Setup After IOZone
> pg-Stall pg-Wait pg-Rclm ksd-stall ksd-wait pg-Stall pg-Wait pg-Rclm ksd-stall ksd-wait
[...]
> Again, fixing tty and reverting evict-once helps bring figures more in line
> with 2.6.29.
>
> 2.6.33 0 0 0 3 0 152248 754226 4940952 267214 0
> 2.6.33-revertevict 0 0 0 3 0 883 4306 28918 507 0
> 2.6.33-ttyfix 0 0 0 3 0 157831 782473 5129011 237116 0
> 2.6.33-ttyfix-revertevict 0 0 0 2 0 1056 5235 34796 519 0
> 2.6.33.1 0 0 0 3 1 156422 776724 5078145 234938 0
> 2.6.33.1-revertevict 0 0 0 2 0 1095 5405 36058 477 0
> 2.6.33.1-ttyfix 0 0 0 3 1 136324 673148 4434461 236597 0
> 2.6.33.1-ttyfix-revertevict 0 0 0 1 1 1339 6624 43583 466 0
>
> At this point, the CFQ commit "cfq-iosched: fairness for sync no-idle
> queues" has lodged itself deep within CGQ and I couldn't tear it out or
> see how to fix it. Fixing tty and reverting evict-once helps but the number
> of stalls is significantly increased and a much larger number of pages get
> reclaimed overall.
>
> Corrado?
>
> 2.6.34-rc1 0 0 0 1 1 150629 746901 4895328 239233 0
> 2.6.34-rc1-revertevict 0 0 0 1 0 2595 12901 84988 622 0
I was wondering why kswapd would not make any progress and stall without
dirty pages; luckily, Rik has better eyes than me.
So if he is right and most inactive pages are under IO (thus locked and
skipped) when kswapd is running, we have two choices:
1) deactivate pages and reclaim them instead
2) sleep and wait for IO to finish
The patch in question changes 1) to 2) because it won't scan small active
lists and the inactive list does not shrink in size when rotating busy
pages.
You said pg-Rclm is only direct reclaim. I assume the sum of reclaimed
pages from kswapd and direct reclaim stays in the same ballpark, only
the ratio shifted towards direct reclaim?
Waiting for the disks seems to be better than going after the working set
but I have a feeling we are waiting for the wrong event to happen there.
I am amazingly ignorant when it comes to the block layer, but glancing over
the queue congestion code, it seems we are waiting for the queue to shrink
below a certain threshold. Is this correct?
When it comes to the reclaim scanner, however, aren't we more interested in
single completions than in the overall state of the queue?
With such a constant stream of IO as in Mel's test, I could imagine that
the queue never really gets below that threshold (here goes the ignorance part)
and we always hit the timeout. While what we really want is to be woken
up when, say, SWAP_CLUSTER_MAX pages finished since we went to sleep.
Because at that point there is a chance to reclaim some pages again,
even if a lot of requests are still pending.
Hannes
That is easily possible. Note, I'm not maintaining that this workload
configuration is a good idea.
The background to this problem is Christian running a disk-intensive iozone
workload over many CPUs and disks with limited memory. It's already known
that if he added a small amount of extra memory, the problem went away.
The problem was a massive throughput regression, and a bisect pinpointed
two patches (both mine) but neither makes sense. One altered the order pages
come back from lists but not their availability, and his hardware does no
automatic merging. A second does alter the availability of pages via the
per-cpu lists, but reverting that behaviour didn't help.
The first fix to this was to replace congestion_wait with a waitqueue
that woke up processes if the watermarks were met. This fixed
Christian's problem but Andrew wants to pin the underlying cause.
I strongly suspect that evict-once behaves sensibly when memory is ample
but in this particular case, it's not helping.
> In fact, you might have all of the inactive file pages under
> IO...
>
Possibly. The tests have a write and a read phase but I wasn't
collecting the data with sufficient granularity to see which of the
tests are actually stalling.
>> 3. Page reclaim evict-once logic from 56e49d21 hurts really badly
>> fix title: revertevict
>> fixed in mainline? no
>> affects: 2.6.31 to now
>>
>> For reasons that are not immediately obvious, the evict-once patches
>> *really* hurt the time spent on congestion and the number of pages
>> reclaimed. Rik, I'm afaid I'm punting this to you for explanation
>> because clearly you tested this for AIM7 and might have some
>> theories. For the purposes of testing, I just reverted the changes.
>
> The patch helped IO tests with reasonable amounts of memory
> available, because the VM can cache frequently used data
> much more effectively.
>
> This comes at the cost of caching less recently accessed
> use-once data, which should not be an issue since the data
> is only used once...
>
Indeed. With or without evict-once, I'd have an expectation of all the
pages being recycled anyway because of the amount of data involved.
>> Rik, any theory on evict-once?
>
> No real theories yet, just the observation that your revert
> appears to be buggy (see below) and the possibility that your
> test may have all of the inactive file pages under IO...
>
Bah. I had the initial revert right and screwed up reverting from
2.6.32.10 on. I'm rerunning the tests. Is this right?
- if (is_active_lru(lru)) {
- if (inactive_list_is_low(zone, sc, file))
- shrink_active_list(nr_to_scan, zone, sc, priority, file);
+ if (is_active_lru(lru)) {
+ shrink_active_list(nr_to_scan, zone, sc, priority, file);
return 0;
> Can you reproduce the stall if you lower the dirty limits?
>
I'm rerunning the revertevict patches at the moment. When they complete,
I'll experiment with dirty limits. Any suggested values, or will I just
change them by some arbitrary amount and see what falls out? e.g.
increase dirty_ratio to 80.
>> static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
>> struct zone *zone, struct scan_control *sc, int priority)
>> {
>> int file = is_file_lru(lru);
>>
>> - if (is_active_lru(lru)) {
>> - if (inactive_list_is_low(zone, sc, file))
>> - shrink_active_list(nr_to_scan, zone, sc, priority, file);
>> + if (lru == LRU_ACTIVE_FILE) {
>> + shrink_active_list(nr_to_scan, zone, sc, priority, file);
>> return 0;
>> }
>
> Your revert is buggy. With this change, anonymous pages will
> never get deactivated via shrink_list.
>
/me slaps self
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
For the requested CFQ scheduler tuning: it's the deadline scheduler that is in use here :-)
So I can't apply all of that. But in the past I was already able to show that all of the "slowdown" occurs above the block device layer (read back through our threads if interested in the details), which leaves all lower-layer tuning out of the critical zone.
Corrado also asked for iostat data; for the reason explained above (the issue is above the block device layer) it doesn't contain anything very useful, as expected.
So I'll just add one line each for the good and bad cases to show that things like avgrq-sz etc. are the same, just slower.
This "being slower" is caused by requests arriving at the block device layer at a lower rate - caused by our beloved full timeouts in congestion_wait.
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
bad sdb 0.00 0.00 154.50 0.00 70144.00 0.00 908.01 0.62 4.05 2.72 42.00
good sdb 0.00 0.00 270.50 0.00 122624.00 0.00 906.65 1.32 4.94 2.92 79.00
So now coming to probably the most critical part - the evict-once discussion in this thread.
I'll try to explain what I found in the meanwhile - let me know what's unclear and I'll add data etc.
In the past we identified that "echo 3 > /proc/sys/vm/drop_caches" helps to improve the accuracy of the used test case by lowering the noise from 5-8% to <1%.
Therefore I ran all tests and verifications with those drops.
In the meanwhile I unfortunately discovered that Mel's fix only helps in the cases where the caches are dropped.
Without that, it seems to be bad all the time. So don't cast the patch away due to that discovery :-)
On the good side, I was also able to analyze a few more things due to that insight - and it might give us new data to debug the root cause.
Like Mel, I had also identified "56e49d21 vmscan: evict use-once pages first" as related in the past. But without the watermark wait fix, unapplying 56e49d21 didn't change much for my case, so I left this analysis path.
But now, after I found that dropping caches is the key to "get back good performance" and subsequent writes the key to "bad performance" even with watermark wait applied, I checked what else changes:
- first write/read load after reboot or dropping caches -> read TP good
- second write/read load after reboot or dropping caches -> read TP bad
=> so what changed?
I went through all kinds of logs and found something in the system activity report which very probably is related to 56e49d21.
When issuing subsequent writes after I dropped caches to get a clean start I get this in Buffers/Caches from Meminfo:
pre write 1
Buffers: 484 kB
Cached: 5664 kB
pre write 2
Buffers: 33500 kB
Cached: 149856 kB
pre write 3
Buffers: 65564 kB
Cached: 115888 kB
pre write 4
Buffers: 85556 kB
Cached: 97184 kB
It stays at ~85M with more writes, which is approx 50% of my 160M of free memory.
Once Buffers has reached the ~65M level, all following read loads (no matter how much read load I throw at the system) will have the bad throughput.
Dropping caches - and thereby removing these buffers - gives back the good performance.
So far I have found no alternative to a manual drop_caches, but recommending customers run a 30-second cron job that drops caches to get good read performance is not that good anyway.
I checked whether the buffers get cleaned at some point, but neither a lot of subsequent read loads pushing the pressure towards the read page cache (I hoped the buffers would age and eventually get thrown out) nor waiting a long time helped.
The system seems to be totally unable to get rid of these buffers without my manual help via drop_caches.
Imagine a huge customer DB running writes & reads fine by day, with a large nightly backup that loses 50% read throughput because the kernel keeps 50% of memory in buffers all night - and thereby doesn't fit in its night slot - just to draw one realistic scenario.
Is there anything to avoid this "never free these buffers" behavior, but still get all or some of the intended benefits of 56e49d21?
Ideas welcome
P.S. This is still an analysis based on a .32 stable kernel + Mel's watermark wait patch - I plan to check current kernels as well once I find the time, but let me know if there are known obvious fixes related to this issue that I should test asap.
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
Ok, so I am the idiot that got quoted on 'the active set is not too big, so
buffer heads are not a problem when avoiding to scan it' in eternal history.
But the threshold inactive/active ratio for skipping active file pages is
actually 1:1.
The easiest 'fix' is probably to change that ratio, 2:1 (or even 3:1?) appears
to be a bit more natural anyway? Below is a patch that changes it to 2:1.
Christian, can you check if it fixes your regression?
Additionally, we can always scan active file pages but only deactivate them
when the ratio is off and otherwise strip buffers of clean pages.
What do people think?
Hannes
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f4ede99..a4aea76 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -898,7 +898,7 @@ int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg)
inactive = mem_cgroup_get_local_zonestat(memcg, LRU_INACTIVE_FILE);
active = mem_cgroup_get_local_zonestat(memcg, LRU_ACTIVE_FILE);
- return (active > inactive);
+ return (active > inactive / 2);
}
unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3ff3311..8f1a846 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1466,7 +1466,7 @@ static int inactive_file_is_low_global(struct zone *zone)
active = zone_page_state(zone, NR_ACTIVE_FILE);
inactive = zone_page_state(zone, NR_INACTIVE_FILE);
- return (active > inactive);
+ return (active > inactive / 2);
}
/**
I'll check it out.
From the numbers I have up to now, I know that the good->bad transition
for my case is somewhere between 30M and 60M, e.g. between the first and
second write. The 2:1 ratio will eat at most 53M of my ~160M that gets
split up. That means setting the ratio to 2:1 or whatever else might help
or not, but eventually there is just another combination of workload vs.
memory constraints that would still be affected. Still, I guess 3:1 (and
I'll try that as well) should be enough to be a bit more towards the safe side.
> Additionally, we can always scan active file pages but only deactivate them
> when the ratio is off and otherwise strip buffers of clean pages.
I think we need something that allows the system to forget its history
at some point - be it 1:1 or x:1 - if the workload changes for "long
enough"(tm), it should eventually throw all the old things out.
Like I described before many systems have different usage patterns when
e.g. comparing day/night workload. So it is far from optimal if e.g. day
write loads eat so much cache and never give it back for nightly huge
reads tasks or something similar.
Would your suggestion achieve that already?
If not what kind change could?
> What do people think?
>
> Hannes
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index f4ede99..a4aea76 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -898,7 +898,7 @@ int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg)
> inactive = mem_cgroup_get_local_zonestat(memcg, LRU_INACTIVE_FILE);
> active = mem_cgroup_get_local_zonestat(memcg, LRU_ACTIVE_FILE);
>
> - return (active > inactive);
> + return (active > inactive / 2);
> }
>
> unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 3ff3311..8f1a846 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1466,7 +1466,7 @@ static int inactive_file_is_low_global(struct zone *zone)
> active = zone_page_state(zone, NR_ACTIVE_FILE);
> inactive = zone_page_state(zone, NR_INACTIVE_FILE);
>
> - return (active > inactive);
> + return (active > inactive / 2);
> }
>
> /**
>
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
Christian Ehrhardt wrote:
>
>
> Johannes Weiner wrote:
[...]
>>>
>>> It stays at ~85M with more writes which is approx 50% of my free 160M
>>> memory.
>>
>> Ok, so I am the idiot that got quoted on 'the active set is not too
>> big, so
>> buffer heads are not a problem when avoiding to scan it' in eternal
>> history.
>>
>> But the threshold inactive/active ratio for skipping active file pages is
>> actually 1:1.
>>
>> The easiest 'fix' is probably to change that ratio, 2:1 (or even 3:1?)
>> appears
>> to be a bit more natural anyway? Below is a patch that changes it to
>> 2:1.
>> Christian, can you check if it fixes your regression?
>
> I'll check it out.
> from the numbers I have up to now I know that the good->bad transition
> for my case is somewhere between 30M/60M e.g. first and second write.
> The ratio 2:1 will eat max 53M of my ~160M that gets split up.
>
> That means setting the ratio to 2:1 or whatever else might help or not,
> but eventually there is just another setting of workload vs. memory
> constraints that would still be affected. Still I guess 3:1 (and I'll
> try that as well) should be enough to be a bit more towards the safe side.
For "my case" 2:1 is not enough, 3:1 almost and 4:1 fixes the issue.
Still as I mentioned before I think any value carved in stone can and
will be bad to some use case - as 1:1 is for mine.
If we end up being unable to fix it internally by allowing the system to
"forget" and eventually free old unused buffers at least somewhen - then
we should neither implement it as 2:1 nor 3:1 nor whatsoever, but as
userspace configurable e.g. /proc/sys/vm/active_inactive_ratio.
I hope your suggestion below or an extension to it will allow the kernel
to free the buffers somewhen. Depending on how good/fast this solution
then will work we can still modify the ratio if needed.
>> Additionally, we can always scan active file pages but only deactivate
>> them
>> when the ratio is off and otherwise strip buffers of clean pages.
>
> I think we need something that allows the system to forget its history
> at some point - be it 1:1 or x:1 - if the workload changes for "long
> enough"(tm) it should eventually throw all the old things out.
> Like I described before many systems have different usage patterns when
> e.g. comparing day/night workload. So it is far from optimal if e.g. day
> write loads eat so much cache and never give it back for nightly huge
> reads tasks or something similar.
>
> Would your suggestion achieve that already?
> If not what kind change could?
>
[...]
--
Grüsse / regards, Christian Ehrhardt
> What do people think?
It has potential advantages and disadvantages.
On smaller desktop systems, it is entirely possible that
the working set is close to half of the page cache. Your
patch reduces the amount of memory that is protected on
the active file list, so it may cause part of the working
set to get evicted.
On the other hand, having a smaller active list frees up
more memory for sequential (streaming, use-once) disk IO.
This can be useful on systems with large IO subsystems
and small memory (like Christian's s390 virtual machine,
with 256MB RAM and 4 disks!).
I wonder if we could not find some automatic way to
balance between these two situations, for example by
excluding currently-in-flight pages from the calculations.
In Christian's case, he could have 160MB of cache (buffer
+ page cache), of which 70MB is in flight to disk at a
time. It may be worthwhile to exclude that 70MB from the
total and aim for 45MB active file and 45MB inactive file
pages on his system. That way IO does not get starved.
On a desktop system, which needs the working set protected
and does less IO, we will automatically protect more of
the working set - since there is no IO to starve.
The idea is that it pans out on its own. If the workload changes, new
pages get activated and when that set grows too large, we start shrinking
it again.
Of course, right now this unscanned set is way too large and we can end
up wasting up to 50% of usable page cache on false active pages.
A fixed ratio does not scale with varying workloads, obviously, but having
it at a safe level still seems like a good trade-off.
We can still do the optimization, and in the worst case the amount of
memory wasted on false active pages is small enough that it should leave
the system performant.
You have a rather extreme page cache load. If 4:1 works for you, I think
this is a safe bet for now because we only frob the knobs into the
direction of earlier kernel behaviour.
We still have a nice amount of pages we do not need to scan regularly
(up to 50k file pages for a streaming IO load on a 1G machine).
Hannes
> The idea is that it pans out on its own. If the workload changes, new
> pages get activated and when that set grows too large, we start shrinking
> it again.
>
> Of course, right now this unscanned set is way too large and we can end
> up wasting up to 50% of usable page cache on false active pages.
Thing is, changing workloads often change back.
Specifically, think of a desktop system that is doing
work for the user during the day and gets backed up
at night.
You do not want the backup to kick the working set
out of memory, because when the user returns in the
morning the desktop should come back quickly after
the screensaver is unlocked.
The big question is, what workload suffers from
having the inactive list at 50% of the page cache?
So far the only big problem we have seen is on a
very unbalanced virtual machine, with 256MB RAM
and 4 fast disks. The disks simply have more IO
in flight at once than what fits in the inactive
list.
This is a very untypical situation, and we can
probably solve it by excluding the in-flight pages
from the active/inactive file calculation.
Rik van Riel wrote:
> On 04/20/2010 11:32 AM, Johannes Weiner wrote:
>
>> The idea is that it pans out on its own. If the workload changes, new
>> pages get activated and when that set grows too large, we start shrinking
>> it again.
>>
>> Of course, right now this unscanned set is way too large and we can end
>> up wasting up to 50% of usable page cache on false active pages.
>
> Thing is, changing workloads often change back.
>
> Specifically, think of a desktop system that is doing
> work for the user during the day and gets backed up
> at night.
>
> You do not want the backup to kick the working set
> out of memory, because when the user returns in the
> morning the desktop should come back quickly after
> the screensaver is unlocked.
IMHO it is fine to keep that nightly backup job from remaining
unfinished when the user arrives in the morning just because we didn't
give it some more cache - and e.g. a 30 sec transition from/to both
optimized states is fine.
But eventually I guess the point is that both behaviors are reasonable
to achieve - depending on the users needs.
What we could do is combine all our thoughts we had so far:
a) Rik could create an experimental patch that excludes the in flight pages
b) Johannes could create one for his suggestion to "always scan active
file pages but only deactivate them when the ratio is off and otherwise
strip buffers of clean pages"
c) I would extend the patch from Johannes setting the ratio of
active/inactive pages to be a userspace tunable
a, b and a+b would then need to be tested to see whether they achieve better behavior.
c on the other hand would be a fine tunable to let administrators
(knowing their workloads) or distributions (e.g. different values for
Desktop/Server defaults) adapt their installations.
In theory a,b and c should work fine together in case we need all of them.
> The big question is, what workload suffers from
> having the inactive list at 50% of the page cache?
>
> So far the only big problem we have seen is on a
> very unbalanced virtual machine, with 256MB RAM
> and 4 fast disks. The disks simply have more IO
> in flight at once than what fits in the inactive
> list.
Did I get you right that this means the write case - explaining why it
is building up buffers to the 50% max?
Note: It even uses up to 64 disks, with 1 disk per thread so e.g. 16
threads => 16 disks.
Regarding "unbalanced" I'd like to mention that over the years I learned
that sometimes, after a while, virtualized systems look that way without
that being intended - it happens by adding more and more guests and
letting guest memory ballooning take care of it.
> This is a very untypical situation, and we can
> probably solve it by excluding the in-flight pages
> from the active/inactive file calculation.
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
A first revision of patch c is attached.
I tested assigning different percentages; so far e.g. 50 really behaves
like before and 25 protects ~42M of buffers in my example, which matches
the intended behavior - see the patch for more details.
Checkpatch and some basic function tests went fine.
While it may be not perfect yet, I think it is ready for feedback now.
> a, b and a+b would then need to be tested to see whether they achieve better behavior.
>
> c on the other hand would be a fine tunable to let administrators
> (knowing their workloads) or distributions (e.g. different values for
> Desktop/Server defaults) adapt their installations.
>
> In theory a,b and c should work fine together in case we need all of them.
>
>> The big question is, what workload suffers from
>> having the inactive list at 50% of the page cache?
>>
>> So far the only big problem we have seen is on a
>> very unbalanced virtual machine, with 256MB RAM
>> and 4 fast disks. The disks simply have more IO
>> in flight at once than what fits in the inactive
>> list.
>
> Did I get you right that this means the write case - explaining why it
> is building up buffers to the 50% max?
>
Thinking about it, I wondered what these buffers are protected for.
If the intention to save these buffers is for reuse with similar loads I
wonder why I "need" three iozones to build up the 85M in my case.
Buffers start at ~0, after iozone run 1 they are at ~35, then after #2
~65 and after run #3 ~85.
Shouldn't that either allocate 85M for the first run directly, in case that
much is needed for a single run - or, if not, shouldn't the second and third
run just "reuse" the 35M of buffers from the first run still held?
Note - "1 iozone run" means "iozone ... -i 0" which sequentially writes
and then rewrites a 2Gb file on 16 disks in my current case.
Looking forward especially to patch b, as I'd really like to see a kernel
able to win back these buffers if they are no longer used for a longer
period, while still allowing them to grow & be protected while needed.
For batched work maybe :-)
> What we could do is combine all our thoughts we had so far:
> a) Rik could create an experimental patch that excludes the in flight pages
> b) Johannes could create one for his suggestion to "always scan active
> file pages but only deactivate them when the ratio is off and otherwise
> strip buffers of clean pages"
Please drop that idea, that 'Buffers:' is a red herring. It's just pages
that do not back files but block devices. Stripping buffer_heads won't
achieve anything, we need to get rid of the pages. Sorry, I should have
slept and thought before writing that suggestion.
> IMHO it is fine to prevent that nightly backup job from not being
> finished when the user arrives at morning because we didn't give him
> some more cache
How on earth would a backup job benefit from cache?
It only accesses each bit of data once, so caching the
to-be-backed-up data is a waste of memory.
I think you are confusing "buffer heads" with "buffers".
You can strip buffer heads off pages, but that is not
your problem.
"buffers" in /proc/meminfo stands for cached metadata,
eg. the filesystem journal, inodes, directories, etc...
Caching such metadata is legitimate, because it reduces
the number of disk seeks down the line.
Yeah I mixed that as well, thanks for clarification (Johannes wrote a
similar response effectively kicking b) from the list of things we could
do).
Regarding your question from thread reply#3
> How on earth would a backup job benefit from cache?
>
> It only accesses each bit of data once, so caching the
> to-be-backed-up data is a waste of memory.
If it is a low memory system with a lot of disks (like in my case),
giving it more cache allows e.g. larger readaheads or less cache
thrashing - but it might be ok, as it might be a rare case to hit all those
constraints at once.
But as we discussed before, on virtual servers it can happen from time to
time due to ballooning and many more disk attachments etc.
So definitely not the majority of cases around, but some corner cases
here and there that would benefit at least from making the preserved
ratio configurable if we don't find a good way to let it take the memory
back without hurting the intended preservation functionality.
For that reason - how about the patch I posted yesterday (to consolidate
this spread out thread I attach it here again)
And finally I still would like to understand why writing the same files
three times increase the active file pages each time instead of reusing
those already brought into memory by the first run.
To collect that last open thread as well I'll cite my own question here:
> Thinking about it, I wondered what these buffers are protected for.
> If the intention to save these buffers is for reuse with similar loads
> I wonder why I "need" three iozones to build up the 85M in my case.
> Buffers start at ~0, after iozone run 1 they are at ~35, then after #2
> ~65 and after run #3 ~85.
> Shouldn't that either allocate 85M for the first directly in case that
> much is needed for a single run - or if not the second and third run
> just "reuse" the 35M Buffers from the first run still held?
> Note - "1 iozone run" means "iozone ... -i 0" which sequentially
> writes and then rewrites a 2Gb file on 16 disks in my current case.
Trying to answer this question myself using your buffer details
above doesn't completely fit without further clarification, as the same
files should have the same dir, inode, ... (all ext2 in my case, so no
journal data as well).
From: Christian Ehrhardt <ehrh...@linux.vnet.ibm.com>
*updates in v2*
- use do_div
This patch creates a knob to help users that have workloads suffering from the
fixed 1:1 active/inactive ratio brought into the kernel by "56e49d21 vmscan:
evict use-once pages first".
It also provides the tuning mechanisms for other users that want an even bigger
working set to be protected.
To be honest the best solution would be to allow a system not using the working
set to regain that memory *somewhen*, and therefore without drawbacks to the
scenarios it was implemented for e.g. UI interactivity while copying a lot of
data. But up to now there was no idea how to get that behaviour implemented.
In the old thread started by Elladan that finally led to 56e49d21 Wu Fengguang
wrote:
"In the worse scenario, it could waste half the memory that could
otherwise be used for readahead buffer and to prevent thrashing, in a
server serving large datasets that are hardly reused, but still slowly
builds up its active list during the long uptime (think about a slowly
performance downgrade that can be fixed by a crude dropcache action).
That said, the actual performance degradation could be much smaller -
say 15% - all memories are not equal."
We now identified a case with up to -60% Throughput, therefore this patch tries
to provide a more gentle interface than drop_caches to help a system stuck in
this.
In discussion with Rik van Riel and Johannes Weiner we found that there are
cases that want the current "save 50%" for the working set all the time and
others that would benefit from protecting only a smaller amount.
Eventually no "carved in stone" in kernel ratio will match all use cases,
therefore this patch makes the value tunable via a /proc/sys/vm/ interface
named active_inactive_ratio.
Example configurations might be:
- 50% - like the current kernel
- 0% - like a kernel pre 56e49d21
- x% - allow customizing the system to someones needs
Based on our experiments, the suggested default in this patch is 25%, but if
preferred I'm fine keeping 50% and letting admins/distros adapt as needed.
Signed-off-by: Christian Ehrhardt <ehrh...@linux.vnet.ibm.com>
---
[diffstat]
Documentation/sysctl/vm.txt | 10 ++++++++++
include/linux/mm.h | 2 ++
kernel/sysctl.c | 9 +++++++++
mm/memcontrol.c | 9 ++++++---
mm/vmscan.c | 17 ++++++++++++++---
5 files changed, 41 insertions(+), 6 deletions(-)
[diff]
Index: linux-2.6/Documentation/sysctl/vm.txt
===================================================================
--- linux-2.6.orig/Documentation/sysctl/vm.txt 2010-04-21 06:32:23.000000000 +0200
+++ linux-2.6/Documentation/sysctl/vm.txt 2010-04-21 07:24:35.000000000 +0200
@@ -18,6 +18,7 @@
Currently, these files are in /proc/sys/vm:
+- active_inactive_ratio
- block_dump
- dirty_background_bytes
- dirty_background_ratio
@@ -57,6 +58,15 @@
==============================================================
+active_inactive_ratio
+
+The kernel tries to protect the active working set. Therefore a portion of the
+file pages is protected, meaning they are omitted when evicting pages until this
+ratio is reached.
+This tunable represents that ratio in percent and specifies the protected part.
+
+==============================================================
+
block_dump
block_dump enables block I/O debugging when set to a nonzero value. More
Index: linux-2.6/kernel/sysctl.c
===================================================================
--- linux-2.6.orig/kernel/sysctl.c 2010-04-21 06:33:43.000000000 +0200
+++ linux-2.6/kernel/sysctl.c 2010-04-21 07:26:35.000000000 +0200
@@ -1271,6 +1271,15 @@
.extra2 = &one,
},
#endif
+ {
+ .procname = "active_inactive_ratio",
+ .data = &sysctl_active_inactive_ratio,
+ .maxlen = sizeof(sysctl_active_inactive_ratio),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &one_hundred,
+ },
/*
* NOTE: do not add new entries to this table unless you have read
Index: linux-2.6/mm/memcontrol.c
===================================================================
--- linux-2.6.orig/mm/memcontrol.c 2010-04-21 06:31:29.000000000 +0200
+++ linux-2.6/mm/memcontrol.c 2010-04-26 12:45:46.000000000 +0200
@@ -893,12 +893,15 @@
int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg)
{
unsigned long active;
- unsigned long inactive;
+ u64 activetoprotect;
- inactive = mem_cgroup_get_local_zonestat(memcg, LRU_INACTIVE_FILE);
active = mem_cgroup_get_local_zonestat(memcg, LRU_ACTIVE_FILE);
+ activetoprotect = (active
+ + mem_cgroup_get_local_zonestat(memcg, LRU_INACTIVE_FILE))
+ * sysctl_active_inactive_ratio;
+ do_div(activetoprotect, 100);
- return (active > inactive);
+ return (active > activetoprotect);
}
unsigned long mem_cgroup_zone_nr_pages(struct mem_cgroup *memcg,
Index: linux-2.6/mm/vmscan.c
===================================================================
--- linux-2.6.orig/mm/vmscan.c 2010-04-21 06:31:29.000000000 +0200
+++ linux-2.6/mm/vmscan.c 2010-04-26 12:50:47.000000000 +0200
@@ -1459,14 +1459,25 @@
return low;
}
+/*
+ * sysctl_active_inactive_ratio
+ *
+ * Defines the portion of file pages within the active working set is going to
+ * be protected. The value represents the percentage that will be protected.
+ */
+int sysctl_active_inactive_ratio __read_mostly = 25;
+
static int inactive_file_is_low_global(struct zone *zone)
{
- unsigned long active, inactive;
+ unsigned long active;
+ u64 activetoprotect;
active = zone_page_state(zone, NR_ACTIVE_FILE);
- inactive = zone_page_state(zone, NR_INACTIVE_FILE);
+ activetoprotect = zone_page_state(zone, NR_FILE_PAGES)
+ * sysctl_active_inactive_ratio;
+ do_div(activetoprotect, 100);
+
+ return (active > activetoprotect);
- return (active > inactive);
}
/**
Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h 2010-04-21 09:02:37.000000000 +0200
+++ linux-2.6/include/linux/mm.h 2010-04-21 09:02:51.000000000 +0200
@@ -1467,5 +1467,7 @@
extern void dump_page(struct page *page);
+extern int sysctl_active_inactive_ratio;
+
#endif /* __KERNEL__ */
#endif /* _LINUX_MM_H */
I've quickly reviewed your patch, but unfortunately I can't give my
Reviewed-by sign-off.
> Subject: [PATCH][RFC] mm: make working set portion that is protected tunable v2
> From: Christian Ehrhardt <ehrh...@linux.vnet.ibm.com>
>
> *updates in v2*
> - use do_div
>
> This patch creates a knob to help users that have workloads suffering from the
> fix 1:1 active inactive ratio brought into the kernel by "56e49d21 vmscan:
> evict use-once pages first".
> It also provides the tuning mechanisms for other users that want an even bigger
> working set to be protected.
We certainly need no knob, because typical desktop users run various
applications and various workloads; the knob doesn't help them.
Probably, I've missed previous discussion. I'm going to find your previous mail.
KOSAKI Motohiro wrote:
> Hi
>
> I've quick reviewed your patch. but unfortunately I can't write my
> reviewed-by sign.
Not a problem, atm I'm happy about any review and comment :-)
>> Subject: [PATCH][RFC] mm: make working set portion that is protected tunable v2
>> From: Christian Ehrhardt <ehrh...@linux.vnet.ibm.com>
>>
>> *updates in v2*
>> - use do_div
>>
>> This patch creates a knob to help users that have workloads suffering from the
>> fix 1:1 active inactive ratio brought into the kernel by "56e49d21 vmscan:
>> evict use-once pages first".
>> It also provides the tuning mechanisms for other users that want an even bigger
>> working set to be protected.
>
> We certainly need no knob. because typical desktop users use various
> application,
> various workload. then, the knob doesn't help them.
Briefly - we had discussed non-desktop scenarios, like a day load
that builds up the working set to 50% and a nightly backup job which
is then unable to use that protected 50% when sequentially reading a lot
of disks, and due to that doesn't finish before morning.
The knob should help those people that know their system would suffer
from this or similar cases to e.g. set the protected ratio smaller or
even to zero if wanted.
As mentioned before, being able to gain back those protected 50% would
be even better - if it can be done in a way not hurting the original
intention of protecting them.
I personally just don't feel too good knowing that 50% of my memory
might hang around unused for many hours while they could be of some use.
I absolutely agree with the old intention and see how the patch helped
with the latency issue Elladan brought up in the past - but it just
looks way too aggressive to protect it "forever" for some server use cases.
> Probably, I've missed previous discussion. I'm going to find your previous mail.
The discussion ends at http://lkml.org/lkml/2010/4/22/38 - feel free to
click through it.
--
Grüsse / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
>>> This patch creates a knob to help users that have workloads suffering
>>> from the
>>> fix 1:1 active inactive ratio brought into the kernel by "56e49d21
>>> vmscan:
>>> evict use-once pages first".
>>> It also provides the tuning mechanisms for other users that want an
>>> even bigger
>>> working set to be protected.
>>
>> We certainly need no knob. because typical desktop users use various
>> application,
>> various workload. then, the knob doesn't help them.
>
> Briefly - We had discussed non desktop scenarios where like a day load
> that builds up the working set to 50% and a nightly backup job which
> then is unable to use that protected 50% when sequentially reading a lot
> of disks and due to that doesn't finish before morning.
This is a red herring. A backup touches all of the
data once, so it does not need a lot of page cache
and will not "not finish before morning" due to the
working set being protected.
You're going to have to come up with a more realistic
scenario than that.
> I personally just don't feel too good knowing that 50% of my memory
> might hang around unused for many hours while they could be of some use.
> I absolutely agree with the old intention and see how the patch helped
> with the latency issue Elladan brought up in the past - but it just
> looks way too aggressive to protect it "forever" for some server use cases.
So far we have seen exactly one workload where it helps
to reduce the size of the active file list, and that is
not due to any need for caching more inactive pages.
On the contrary, it is because ALL OF THE INACTIVE PAGES
are in flight to disk, all under IO at the same time.
Caching has absolutely nothing to do with the regression
you ran into.
Rik van Riel wrote:
> On 04/26/2010 08:43 AM, Christian Ehrhardt wrote:
>
>>>> This patch creates a knob to help users that have workloads suffering
>>>> from the
>>>> fix 1:1 active inactive ratio brought into the kernel by "56e49d21
>>>> vmscan:
>>>> evict use-once pages first".
>>>> It also provides the tuning mechanisms for other users that want an
>>>> even bigger
>>>> working set to be protected.
>>>
>>> We certainly need no knob. because typical desktop users use various
>>> application,
>>> various workload. then, the knob doesn't help them.
>>
>> Briefly - We had discussed non desktop scenarios where like a day load
>> that builds up the working set to 50% and a nightly backup job which
>> then is unable to use that protected 50% when sequentially reading a lot
>> of disks and due to that doesn't finish before morning.
>
> This is a red herring. A backup touches all of the
> data once, so it does not need a lot of page cache
> and will not "not finish before morning" due to the
> working set being protected.
>
> You're going to have to come up with a more realistic
> scenario than that.
I completely agree that a backup case is read once and therefore doesn't
benefit from caching itself, but you know my scenario from the thread
where this patch emerged from.
"Parallel iozone sequential read - resembling the classic backup case
(read once + sequential)."
While caching isn't helping the classic way, by having data in cache
ready on the next access it is still used transparently as the system
is reading ahead into page cache to assist the sequentially reading
process.
Yes, it doesn't happen with direct IO, and some, but unfortunately not
all, backup tools use DIO. Additionally not all backup jobs have a whole
night, and this can really be a decision maker if you can quickly pump
out your 100 TB main database in 10 or 20 minutes.
So here comes the problem: due to the 50% preserved, I assume the system
runs into trouble allocating that page cache memory in time - so much that
it even slows down the load, meaning long enough to let the application
completely consume the data already read and then still leave it waiting.
More about that below.
Now IMHO this feels comparable to a classic backup job, and losing
60% throughput (more than a Gb/s) seems to me neither red nor smelling
like fish.
>> I personally just don't feel too good knowing that 50% of my memory
>> might hang around unused for many hours while they could be of some use.
>> I absolutely agree with the old intention and see how the patch helped
>> with the latency issue Elladan brought up in the past - but it just
>> looks way too aggressive to protect it "forever" for some server use
>> cases.
>
> So far we have seen exactly one workload where it helps
> to reduce the size of the active file list, and that is
> not due to any need for caching more inactive pages.
>
> On the contrary, it is because ALL OF THE INACTIVE PAGES
> are in flight to disk, all under IO at the same time.
Ok this time I think I got your point much better - sorry for
being confused.
Discard my patch, but I'd really like to clarify and verify your
assumption in conjunction with my findings and would be happy
if you can help me with that.
As mentioned, the case that suffers from the 50% of memory protected is
the iozone read - so it would be "in flight FROM disk", but I guess
it is not important whether it is from or to, right?
Effectively I have two read cases, one with caches dropped which then
has almost full memory for page cache in the read case. And the other
one with a few writes before filling up the protected 50% leading to a
read case with only half of the memory for page cache.
Now if I really got you right this time the issue is caused by the
fact that the parallel read ahead on all 16 disks creates so much I/O
in flight that the 128M (=50% that are left) are not enough.
From the past we know that the time lost for the -60% Throughput was
spent in a loop around direct_reclaim&congestion_wait trying to get the
memory for the page cache reads - would you consider it possible that
we now run into a scenario splitting the memory like this?:
- 50% active file protected
- a lot of the other half related to I/O that is currently
in flight from the disk -> not free-able too?
- almost nothing to free when allocating for the next read to page
cache (can only take pages above low watermark) -> waiting
I updated my old counter patch that I used to verify the old issue where
we spent so much time in a full timeout of congestion_wait. Thanks to
Mel this was fixed (I have his watermark wait patch applied), but I
assume that with 50% protected I just run into the shortened wait more
often, or wait longer for the watermarks, for this still to be an issue
(due to 50% not being free-able).
See the patch inlined at the end of the mail for details what/how
it is exactly counted.
As before the scenario is iozone on 16 disks in parallel with 1 iozone
child per disk.
I ran:
- write, write, write, read -> bad case
- drop cache, read -> good case
Read throughput still drops by ~60% comparing good to bad case.
Here are the numbers I got for those two cases by my counters and
meminfo:
Value (meminfo in kB)                    | Initial state | Write 1       | Write 2        | Write 3        | Read after writes (bad) | Read after DC (good)
watermark_wait_duration (ns)             | 0             | 9,902,333,643 | 12,288,444,574 | 24,197,098,221 | 317,175,021,553         | 35,002,926,894
watermark_wait                           | 0             | 24102         | 26708          | 35285          | 29720                   | 15515
pages_direct_reclaim                     | 0             | 59195         | 65010          | 86777          | 90883                   | 66672
failed_pages_direct_reclaim              | 0             | 24144         | 26768          | 35343          | 29733                   | 15525
failed_pages_direct_reclaim_but_progress | 0             | 24144         | 26768          | 35343          | 29733                   | 15525
MemTotal                                 | 248912        | 248912        | 248912         | 248912         | 248912                  | 248912
MemFree                                  | 185732        | 4868          | 5028           | 3780           | 3064                    | 7136
Buffers                                  | 536           | 33588         | 65660          | 84296          | 81868                   | 32072
Cached                                   | 9480          | 145252        | 111672         | 93736          | 98424                   | 149724
Active                                   | 11052         | 43920         | 76032          | 89084          | 87780                   | 38024
Inactive                                 | 6860          | 142628        | 108980         | 96528          | 100280                  | 151572
Active(anon)                             | 5092          | 4452          | 4428           | 4364           | 4516                    | 4492
Inactive(anon)                           | 6480          | 6608          | 6604           | 6604           | 6604                    | 6604
Active(file)                             | 5960          | 39468         | 71604          | 84720          | 83264                   | 33532
Inactive(file)                           | 380           | 136020        | 102376         | 89924          | 93676                   | 144968
Unevictable                              | 3952          | 3952          | 3952           | 3952           | 3952                    | 3952
Real time passed (s)                     | -             | 48.83         | 49.38          | 50.35          | 40.62                   | 22.61
Avg wait (duration/count, ns)            | -             | 410,851       | 460,104        | 685,762        | 10,672,107              | 2,256,070

=> waits are on average ~5x longer in the bad case
=> the bad case runs about twice as often into waits (-52.20%)
These numbers seem to support my assumption that the 50% preserved for
active file pages leave the system unable to find memory fast enough:
the bad case runs into the wait after a direct_reclaim that made progress
(but still found no free page) about twice as often, and then on average
waits about five times longer for enough memory to be freed to reach the
watermark and be woken up.
####
Eventually I'd also really like to understand why the active file pages
grow when I execute the same iozone write load three times. The runs
effectively write the same files in the same directories, and this is not
a journaling file system (the effect can be seen in the table above as
well).
If one of these write runs used more than ~30M of active file pages,
those pages would be allocated and then protected - but they aren't.
Yet after the second run I see ~60M of active file pages.
As mentioned before, I would assume the second run either just reuses
what is in memory from the first run, or, if it really touches new data,
that the old pages would be thrown away. Either way the amount should not
grow much after the first run, as long as the runs do essentially the
same thing.
Does anyone already know, or have a good theory about, what might be
growing here?
Is there a good interface to check what is currently cached and protected?
> Caching has absolutely nothing to do with the regression
> you ran into.
As mentioned above: not in the sense of "having it in the cache for
another fast access", agreed. But maybe in the sense of "not getting
memory for reads into the page cache fast enough".
--
Greetings / regards, Christian Ehrhardt
IBM Linux Technology Center, System z Linux Performance
#### patch for the counters shown in table above ######
Subject: [PATCH][DEBUGONLY] mm: track allocation waits
From: Christian Ehrhardt <ehrh...@linux.vnet.ibm.com>
This patch adds some debug counters that track how often the system runs
into a wait after direct reclaim (which happens when did_some_progress &&
!page) and how much time it spends waiting there.
#for debugging only#
Signed-off-by: Christian Ehrhardt <ehrh...@linux.vnet.ibm.com>
---
[diffstat]
include/linux/sysctl.h | 1
kernel/sysctl.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++++
mm/page_alloc.c | 17 ++++++++++++++
3 files changed, 75 insertions(+)
[diff]
diff -Naur linux-2.6.32.11-0.3.99.6.626e022.orig/include/linux/sysctl.h linux-2.6.32.11-0.3.99.6.626e022/include/linux/sysctl.h
--- linux-2.6.32.11-0.3.99.6.626e022.orig/include/linux/sysctl.h 2010-04-27 12:01:54.000000000 +0200
+++ linux-2.6.32.11-0.3.99.6.626e022/include/linux/sysctl.h 2010-04-27 12:03:56.000000000 +0200
@@ -68,6 +68,7 @@
CTL_BUS=8, /* Busses */
CTL_ABI=9, /* Binary emulation */
CTL_CPU=10, /* CPU stuff (speed scaling, etc) */
+ CTL_PERF=11, /* Performance counters and timer sums for debugging */
CTL_XEN=123, /* Xen info and control */
CTL_ARLAN=254, /* arlan wireless driver */
CTL_S390DBF=5677, /* s390 debug */
diff -Naur linux-2.6.32.11-0.3.99.6.626e022.orig/kernel/sysctl.c linux-2.6.32.11-0.3.99.6.626e022/kernel/sysctl.c
--- linux-2.6.32.11-0.3.99.6.626e022.orig/kernel/sysctl.c 2010-04-27 14:26:04.000000000 +0200
+++ linux-2.6.32.11-0.3.99.6.626e022/kernel/sysctl.c 2010-04-27 15:44:54.000000000 +0200
@@ -183,6 +183,7 @@
.default_set.list = LIST_HEAD_INIT(root_table_header.ctl_entry),
};
+static struct ctl_table perf_table[];
static struct ctl_table kern_table[];
static struct ctl_table vm_table[];
static struct ctl_table fs_table[];
@@ -236,6 +237,13 @@
.mode = 0555,
.child = dev_table,
},
+ {
+ .ctl_name = CTL_PERF,
+ .procname = "perf",
+ .mode = 0555,
+ .child = perf_table,
+ },
+
/*
* NOTE: do not add new entries to this table unless you have read
* Documentation/sysctl/ctl_unnumbered.txt
@@ -254,6 +262,55 @@
static int max_sched_shares_ratelimit = NSEC_PER_SEC; /* 1 second */
#endif
+extern unsigned long perf_count_watermark_wait;
+extern unsigned long perf_count_pages_direct_reclaim;
+extern unsigned long perf_count_failed_pages_direct_reclaim;
+extern unsigned long perf_count_failed_pages_direct_reclaim_but_progress;
+extern unsigned long perf_count_watermark_wait_duration;
+static struct ctl_table perf_table[] = {
+ {
+ .ctl_name = CTL_UNNUMBERED,
+ .procname = "perf_count_watermark_wait_duration",
+ .data = &perf_count_watermark_wait_duration,
+ .mode = 0666,
+ .maxlen = sizeof(unsigned long),
+ .proc_handler = &proc_doulongvec_minmax,
+ },
+ {
+ .ctl_name = CTL_UNNUMBERED,
+ .procname = "perf_count_watermark_wait",
+ .data = &perf_count_watermark_wait,
+ .mode = 0666,
+ .maxlen = sizeof(unsigned long),
+ .proc_handler = &proc_doulongvec_minmax,
+ },
+ {
+ .ctl_name = CTL_UNNUMBERED,
+ .procname = "perf_count_pages_direct_reclaim",
+ .data = &perf_count_pages_direct_reclaim,
+ .maxlen = sizeof(unsigned long),
+ .mode = 0666,
+ .proc_handler = &proc_doulongvec_minmax,
+ },
+ {
+ .ctl_name = CTL_UNNUMBERED,
+ .procname = "perf_count_failed_pages_direct_reclaim",
+ .data = &perf_count_failed_pages_direct_reclaim,
+ .maxlen = sizeof(unsigned long),
+ .mode = 0666,
+ .proc_handler = &proc_doulongvec_minmax,
+ },
+ {
+ .ctl_name = CTL_UNNUMBERED,
+ .procname = "perf_count_failed_pages_direct_reclaim_but_progress",
+ .data = &perf_count_failed_pages_direct_reclaim_but_progress,
+ .maxlen = sizeof(unsigned long),
+ .mode = 0666,
+ .proc_handler = &proc_doulongvec_minmax,
+ },
+ { .ctl_name = 0 }
+};
+
static struct ctl_table kern_table[] = {
{
.ctl_name = CTL_UNNUMBERED,
diff -Naur linux-2.6.32.11-0.3.99.6.626e022.orig/mm/page_alloc.c linux-2.6.32.11-0.3.99.6.626e022/mm/page_alloc.c
--- linux-2.6.32.11-0.3.99.6.626e022.orig/mm/page_alloc.c 2010-04-27 12:01:55.000000000 +0200
+++ linux-2.6.32.11-0.3.99.6.626e022/mm/page_alloc.c 2010-04-27 14:06:40.000000000 +0200
@@ -191,6 +191,7 @@
wake_up_interruptible(&watermark_wq);
}
+unsigned long perf_count_watermark_wait = 0;
/**
* watermark_wait - Wait for watermark to go above low
* @timeout: Wait until watermark is reached or this timeout is reached
@@ -202,6 +203,7 @@
long ret;
DEFINE_WAIT(wait);
+ perf_count_watermark_wait++;
prepare_to_wait(&watermark_wq, &wait, TASK_INTERRUPTIBLE);
/*
@@ -1725,6 +1727,10 @@
return page;
}
+unsigned long perf_count_pages_direct_reclaim = 0;
+unsigned long perf_count_failed_pages_direct_reclaim = 0;
+unsigned long perf_count_failed_pages_direct_reclaim_but_progress = 0;
+
/* The really slow allocator path where we enter direct reclaim */
static inline struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
@@ -1761,6 +1767,13 @@
zonelist, high_zoneidx,
alloc_flags, preferred_zone,
migratetype);
+
+ perf_count_pages_direct_reclaim++;
+ if (!page)
+ perf_count_failed_pages_direct_reclaim++;
+ if (!page && *did_some_progress)
+ perf_count_failed_pages_direct_reclaim_but_progress++;
+
return page;
}
@@ -1841,6 +1854,7 @@
return alloc_flags;
}
+unsigned long perf_count_watermark_wait_duration = 0;
static inline struct page *
__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
struct zonelist *zonelist, enum zone_type high_zoneidx,
@@ -1961,8 +1975,11 @@
/* Check if we should retry the allocation */
pages_reclaimed += did_some_progress;
if (should_alloc_retry(gfp_mask, order, pages_reclaimed)) {
+ unsigned long t1;
/* Too much pressure, back off a bit at let reclaimers do work */
+ t1 = get_clock();
watermark_wait(HZ/50);
+ perf_count_watermark_wait_duration += ((get_clock() - t1) * 125) >> 9;
goto rebalance;