
[patch 0/4] sched: Replace read_lock(&tasklist_lock) with RCU - the easy part


Thomas Gleixner

Dec 9, 2009, 5:20:02 AM
First batch of patches which replace read_lock(&tasklist_lock) with
RCU.

Thanks,

tglx

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
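
The whole series applies one pattern. find_process_by_pid() resolves the pid
through RCU-protected data, so an RCU read-side critical section is enough to
keep the returned task_struct from being freed while it is inspected. A
minimal before/after sketch (simplified, not the verbatim kernel/sched.c
code; inspect_task() is a hypothetical placeholder for the actual field
accesses):

        /* Before: pin the task via the global reader-writer lock */
        read_lock(&tasklist_lock);
        p = find_process_by_pid(pid);
        if (p)
                retval = inspect_task(p);       /* hypothetical accessor */
        read_unlock(&tasklist_lock);

        /* After: an RCU read-side critical section gives the same
         * existence guarantee without taking the global lock */
        rcu_read_lock();
        p = find_process_by_pid(pid);
        if (p)
                retval = inspect_task(p);
        rcu_read_unlock();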

Thomas Gleixner

Dec 9, 2009, 5:20:02 AM
sched-use-rcu-in-affinity-get-set.patch

Thomas Gleixner

Dec 9, 2009, 5:20:02 AM
sched-use-rcu-in-sched_getscheduler.patch

Thomas Gleixner

Dec 9, 2009, 5:20:02 AM
sched-use-rcu-in-sched-get-rr-param.patch

Thomas Gleixner

Dec 9, 2009, 5:20:02 AM
sched-fix-bogus-label.patch

tip-bot for Thomas Gleixner

Dec 14, 2009, 11:40:02 AM
Commit-ID: 1a551ae715825bb2a2107a2dd68de024a1fa4e32
Gitweb: http://git.kernel.org/tip/1a551ae715825bb2a2107a2dd68de024a1fa4e32
Author: Thomas Gleixner <tg...@linutronix.de>
AuthorDate: Wed, 9 Dec 2009 10:15:11 +0000
Committer: Ingo Molnar <mi...@elte.hu>
CommitDate: Mon, 14 Dec 2009 17:11:35 +0100

sched: Use rcu in sched_get_rr_param()

read_lock(&tasklist_lock) does not protect
sys_sched_rr_get_interval() against a concurrent update of the
policy or scheduler parameters, as do_sched_setscheduler() does
not take the tasklist_lock.

The access to task->sched_class->get_rr_interval is protected by
task_rq_lock(task).

Use rcu_read_lock() to protect find_task_by_vpid() and prevent
the task struct from going away.

Signed-off-by: Thomas Gleixner <tg...@linutronix.de>
Cc: Peter Zijlstra <pet...@infradead.org>
LKML-Reference: <200912091007...@linutronix.de>
Signed-off-by: Ingo Molnar <mi...@elte.hu>
---
kernel/sched.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 7989312..db5c266 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6873,7 +6873,7 @@ SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
                 return -EINVAL;
 
         retval = -ESRCH;
-        read_lock(&tasklist_lock);
+        rcu_read_lock();
         p = find_process_by_pid(pid);
         if (!p)
                 goto out_unlock;
@@ -6886,13 +6886,13 @@ SYSCALL_DEFINE2(sched_rr_get_interval, pid_t, pid,
         time_slice = p->sched_class->get_rr_interval(rq, p);
         task_rq_unlock(rq, &flags);
 
-        read_unlock(&tasklist_lock);
+        rcu_read_unlock();
         jiffies_to_timespec(time_slice, &t);
         retval = copy_to_user(interval, &t, sizeof(t)) ? -EFAULT : 0;
         return retval;
 
 out_unlock:
-        read_unlock(&tasklist_lock);
+        rcu_read_unlock();
         return retval;

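Putting the changelog's two points together, the locking in
sys_sched_rr_get_interval() after this commit looks roughly as follows (a
sketch with declarations and error paths trimmed, not the verbatim kernel
code):

        rcu_read_lock();                        /* keeps p from being freed */
        p = find_process_by_pid(pid);
        if (!p)
                goto out_unlock;
        rq = task_rq_lock(p, &flags);           /* serializes against policy updates */
        time_slice = p->sched_class->get_rr_interval(rq, p);
        task_rq_unlock(rq, &flags);
        rcu_read_unlock();
        jiffies_to_timespec(time_slice, &t);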
tip-bot for Thomas Gleixner

Dec 14, 2009, 11:40:02 AM
Commit-ID: 5fe85be081edf0ac92d83f9c39e0ab5c1371eb82
Gitweb: http://git.kernel.org/tip/5fe85be081edf0ac92d83f9c39e0ab5c1371eb82
Author: Thomas Gleixner <tg...@linutronix.de>
AuthorDate: Wed, 9 Dec 2009 10:14:58 +0000
Committer: Ingo Molnar <mi...@elte.hu>
CommitDate: Mon, 14 Dec 2009 17:11:34 +0100

sched: Use rcu in sys_sched_getscheduler/sys_sched_getparam()

read_lock(&tasklist_lock) does not protect
sys_sched_getscheduler() and sys_sched_getparam() against a
concurrent update of the policy or scheduler parameters, as
do_sched_setscheduler() does not take the tasklist_lock. The
accessed integers can be retrieved without locking and are
snapshots anyway.

Using rcu_read_lock() to protect find_task_by_vpid() and prevent
the task struct from going away does not change the above
situation.

Signed-off-by: Thomas Gleixner <tg...@linutronix.de>
Cc: Peter Zijlstra <pet...@infradead.org>
LKML-Reference: <200912091007...@linutronix.de>
Signed-off-by: Ingo Molnar <mi...@elte.hu>
---

kernel/sched.c | 10 +++++-----
1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 258c73c..1782bee 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6458,7 +6458,7 @@ SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
                 return -EINVAL;
 
         retval = -ESRCH;
-        read_lock(&tasklist_lock);
+        rcu_read_lock();
         p = find_process_by_pid(pid);
         if (p) {
                 retval = security_task_getscheduler(p);
@@ -6466,7 +6466,7 @@ SYSCALL_DEFINE1(sched_getscheduler, pid_t, pid)
                 retval = p->policy
                         | (p->sched_reset_on_fork ? SCHED_RESET_ON_FORK : 0);
         }
-        read_unlock(&tasklist_lock);
+        rcu_read_unlock();
         return retval;
 }
 
@@ -6484,7 +6484,7 @@ SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
         if (!param || pid < 0)
                 return -EINVAL;
 
-        read_lock(&tasklist_lock);
+        rcu_read_lock();
         p = find_process_by_pid(pid);
         retval = -ESRCH;
         if (!p)
@@ -6495,7 +6495,7 @@ SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
                 goto out_unlock;
 
         lp.sched_priority = p->rt_priority;
-        read_unlock(&tasklist_lock);
+        rcu_read_unlock();
 
         /*
          * This one might sleep, we cannot do it with a spinlock held ...
@@ -6505,7 +6505,7 @@ SYSCALL_DEFINE2(sched_getparam, pid_t, pid, struct sched_param __user *, param)
         return retval;
 
 out_unlock:
-        read_unlock(&tasklist_lock);
+        rcu_read_unlock();
         return retval;
 }

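The snapshot argument is easiest to see in sched_getparam(): the rt_priority
integer is copied into a local struct sched_param inside the RCU section, and
the section ends before copy_to_user(), which may sleep. Schematically (a
sketch along the lines of the diff above, not the full function):

        struct sched_param lp;

        rcu_read_lock();
        p = find_process_by_pid(pid);
        /* ... -ESRCH and security checks ... */
        lp.sched_priority = p->rt_priority;     /* plain integer snapshot */
        rcu_read_unlock();

        /* may sleep, hence outside the RCU read-side critical section */
        retval = copy_to_user(param, &lp, sizeof(*param)) ? -EFAULT : 0;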
tip-bot for Thomas Gleixner

Dec 14, 2009, 11:40:02 AM
Commit-ID: 23f5d142519621b16cf2b378cf8adf4dcf01a616
Gitweb: http://git.kernel.org/tip/23f5d142519621b16cf2b378cf8adf4dcf01a616
Author: Thomas Gleixner <tg...@linutronix.de>
AuthorDate: Wed, 9 Dec 2009 10:15:01 +0000
Committer: Ingo Molnar <mi...@elte.hu>
CommitDate: Mon, 14 Dec 2009 17:11:35 +0100

sched: Use rcu in sched_get/set_affinity()

tasklist_lock is held read-locked to protect the
find_task_by_vpid() call and to prevent the task from going
away. sched_setaffinity() acquires a task_struct reference and
drops the tasklist_lock right away. The access to the
cpus_allowed mask is protected by rq->lock.

rcu_read_lock() provides the same protection here.

Signed-off-by: Thomas Gleixner <tg...@linutronix.de>
Cc: Peter Zijlstra <pet...@infradead.org>
LKML-Reference: <200912091007...@linutronix.de>
Signed-off-by: Ingo Molnar <mi...@elte.hu>
---

kernel/sched.c | 16 ++++++----------
1 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 1782bee..7989312 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6516,22 +6516,18 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
         int retval;
 
         get_online_cpus();
-        read_lock(&tasklist_lock);
+        rcu_read_lock();
 
         p = find_process_by_pid(pid);
         if (!p) {
-                read_unlock(&tasklist_lock);
+                rcu_read_unlock();
                 put_online_cpus();
                 return -ESRCH;
         }
 
-        /*
-         * It is not safe to call set_cpus_allowed with the
-         * tasklist_lock held. We will bump the task_struct's
-         * usage count and then drop tasklist_lock.
-         */
+        /* Prevent p going away */
         get_task_struct(p);
-        read_unlock(&tasklist_lock);
+        rcu_read_unlock();
 
         if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL)) {
                 retval = -ENOMEM;
@@ -6617,7 +6613,7 @@ long sched_getaffinity(pid_t pid, struct cpumask *mask)
         int retval;
 
         get_online_cpus();
-        read_lock(&tasklist_lock);
+        rcu_read_lock();
 
         retval = -ESRCH;
         p = find_process_by_pid(pid);
@@ -6633,7 +6629,7 @@ long sched_getaffinity(pid_t pid, struct cpumask *mask)
         task_rq_unlock(rq, &flags);
 
 out_unlock:
-        read_unlock(&tasklist_lock);
+        rcu_read_unlock();
         put_online_cpus();

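sched_setaffinity() differs from the pure readers: it goes on to call
functions that may sleep, which is not allowed inside an RCU read-side
critical section, so the existing reference-count dance is kept and only the
lock is swapped. Schematically (a sketch, not the full function):

        rcu_read_lock();
        p = find_process_by_pid(pid);
        if (!p) {
                rcu_read_unlock();
                put_online_cpus();
                return -ESRCH;
        }
        get_task_struct(p);     /* pin p beyond the RCU section */
        rcu_read_unlock();

        /* ... allocate cpumasks, check permissions, call
         * set_cpus_allowed_ptr(); these may sleep ... */

        put_task_struct(p);     /* drop the reference taken above */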