[PATCH] perf_events: improve task_sched_in()


era...@google.com
Mar 11, 2010, 1:30:02 AM

This patch is an optimization in perf_event_task_sched_in() to avoid
scheduling the events twice in a row. Without it, the
perf_disable()/perf_enable() pair is invoked twice, so pinned events count
while the flexible events are being scheduled in, and we go through
hw_perf_enable() twice. By encapsulating the whole sequence in a single
perf_disable()/perf_enable() pair, we ensure that hw_perf_enable() is
invoked only once, thanks to the refcount protection.
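
For reference, a minimal user-space sketch of the refcount idea (the
sketch_* names and printf() stubs are illustrative only, not the kernel
implementation; in the kernel the count is kept per-cpu and the real
hardware hooks are hw_perf_disable()/hw_perf_enable()):

#include <stdio.h>

static int disable_count;	/* the kernel keeps this count per-cpu */

static void hw_disable_stub(void) { printf("PMU stopped\n"); }
static void hw_enable_stub(void)  { printf("PMU restarted\n"); }

/* Only the outermost disable touches the hardware... */
static void sketch_perf_disable(void)
{
	if (disable_count++ == 0)
		hw_disable_stub();
}

/* ...and only the matching outermost enable restarts it. */
static void sketch_perf_enable(void)
{
	if (--disable_count == 0)
		hw_enable_stub();
}

int main(void)
{
	/* Mirrors the patched perf_event_task_sched_in(): the outer
	 * pair brackets the inner pairs taken while scheduling the
	 * pinned and flexible events, so the PMU is restarted once
	 * instead of twice. */
	sketch_perf_disable();	/* outer pair, added by this patch */
	sketch_perf_disable();	/* inner pair: pinned events */
	sketch_perf_enable();	/* count stays above 0: no hw access */
	sketch_perf_disable();	/* inner pair: flexible events */
	sketch_perf_enable();
	sketch_perf_enable();	/* count drops to 0: enabled once */
	return 0;
}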

Signed-off-by: Stephane Eranian <era...@google.com>
---
perf_event.c | 4 ++++
1 file changed, 4 insertions(+)

--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1382,6 +1382,8 @@ void perf_event_task_sched_in(struct task_struct *task)
 	if (cpuctx->task_ctx == ctx)
 		return;
 
+	perf_disable();
+
 	/*
 	 * We want to keep the following priority order:
 	 * cpu pinned (that don't need to move), task pinned,
@@ -1394,6 +1396,8 @@ void perf_event_task_sched_in(struct task_struct *task)
 	ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);
 
 	cpuctx->task_ctx = ctx;
+
+	perf_enable();
 }
 
 #define MAX_INTERRUPTS (~0ULL)

Peter Zijlstra
Mar 11, 2010, 3:40:02 AM

On Wed, 2010-03-10 at 22:26 -0800, era...@google.com wrote:
> This patch is an optimization in perf_event_task_sched_in() to avoid
> scheduling the events twice in a row. Without it, the
> perf_disable()/perf_enable() pair is invoked twice, so pinned events count
> while the flexible events are being scheduled in, and we go through
> hw_perf_enable() twice. By encapsulating the whole sequence in a single
> perf_disable()/perf_enable() pair, we ensure that hw_perf_enable() is
> invoked only once, thanks to the refcount protection.

Agreed, this makes perfect sense.

Acked-by: Peter Zijlstra <a.p.zi...@chello.nl>

tip-bot for eranian@google.com
Mar 11, 2010, 9:50:02 AM

Commit-ID: 9b33fa6ba0e2f90fdf407501db801c2511121564
Gitweb: http://git.kernel.org/tip/9b33fa6ba0e2f90fdf407501db801c2511121564
Author: era...@google.com <era...@google.com>
AuthorDate: Wed, 10 Mar 2010 22:26:05 -0800
Committer: Ingo Molnar <mi...@elte.hu>
CommitDate: Thu, 11 Mar 2010 15:23:28 +0100

perf_events: Improve task_sched_in()

This patch is an optimization in perf_event_task_sched_in() to avoid
scheduling the events twice in a row.

Without it, the perf_disable()/perf_enable() pair is invoked twice, so
pinned events count while the flexible events are being scheduled in, and
we go through hw_perf_enable() twice.

By encapsulating the whole sequence in a single perf_disable()/perf_enable()
pair, we ensure that hw_perf_enable() is invoked only once, thanks to the
refcount protection.

Signed-off-by: Stephane Eranian <era...@google.com>
Signed-off-by: Peter Zijlstra <a.p.zi...@chello.nl>
LKML-Reference: <1268288765-5326-1-gi...@google.com>
Signed-off-by: Ingo Molnar <mi...@elte.hu>
---
kernel/perf_event.c | 4 ++++
1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 52c69a3..3853d49 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -1368,6 +1368,8 @@ void perf_event_task_sched_in(struct task_struct *task)
 	if (cpuctx->task_ctx == ctx)
 		return;
 
+	perf_disable();
+
 	/*
 	 * We want to keep the following priority order:
 	 * cpu pinned (that don't need to move), task pinned,
@@ -1380,6 +1382,8 @@ void perf_event_task_sched_in(struct task_struct *task)
 	ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE);
 
 	cpuctx->task_ctx = ctx;
+
+	perf_enable();
 }
