[PATCH RT 1/4] sched: ttwu: Return success when only changing the saved_state value


Steven Rostedt

Jan 17, 2012, 10:45:32 PM
to linux-...@vger.kernel.org, linux-rt-users, Thomas Gleixner, Carsten Emde, John Kacur, stab...@vger.kernel.org
From: Thomas Gleixner <tg...@linutronix.de>

When a task blocks on an rt lock, it saves the current state in
p->saved_state, so a lock-related wakeup will not destroy the
original state.

When a real wakeup happens while the task is already running due to a
lock wakeup, we update p->saved_state to TASK_RUNNING, but we do not
return success. This might cause another wakeup in the waitqueue code
while the task remains on the waitqueue list. Return success in that
case as well.

Signed-off-by: Thomas Gleixner <tg...@linutronix.de>
Cc: stab...@vger.kernel.org
Signed-off-by: Steven Rostedt <ros...@goodmis.org>
---
kernel/sched.c | 4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
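
For orientation, here is a simplified, illustrative sketch (not the actual
kernel function) of what the saved_state path of try_to_wake_up() does once
this change is applied. The helper name is made up for the example, and the
pi_lock handling, statistics and the normal wakeup path are omitted:

/*
 * Illustrative sketch only -- condensed from kernel/sched.c.
 * The real code runs under p->pi_lock and handles many more cases.
 */
static int ttwu_saved_state_sketch(struct task_struct *p,
				   unsigned int state, int wake_flags)
{
	int success = 0;

	if (!(p->state & state)) {
		/*
		 * The task is not in a wakeable state.  If a lock wakeup
		 * already set it TASK_RUNNING, the "real" sleep state is
		 * parked in p->saved_state.
		 */
		if (!(wake_flags & WF_LOCK_SLEEPER)) {
			if (p->saved_state & state) {
				p->saved_state = TASK_RUNNING;
				/* The fix: report this as a successful wakeup. */
				success = 1;
			}
		}
		return success;
	}

	/* ... normal wakeup path (activate the task, etc.) ... */
	return 1;
}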

diff --git a/kernel/sched.c b/kernel/sched.c
index 63aeba0..e5ef7a8 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2693,8 +2693,10 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
* if the wakeup condition is true.
*/
if (!(wake_flags & WF_LOCK_SLEEPER)) {
- if (p->saved_state & state)
+ if (p->saved_state & state) {
p->saved_state = TASK_RUNNING;
+ success = 1;
+ }
}
goto out;
}
--
1.7.8.3



Steven Rostedt

Jan 17, 2012, 10:45:34 PM
to linux-...@vger.kernel.org, linux-rt-users, Thomas Gleixner, Carsten Emde, John Kacur
[PATCH RT 3/4] acpi-gpe-use-wait-simple.patch
From: Thomas Gleixner <tg...@linutronix.de>

Signed-off-by: Thomas Gleixner <tg...@linutronix.de>
Signed-off-by: Steven Rostedt <ros...@goodmis.org>
---

drivers/acpi/ec.c | 8 ++++----
drivers/acpi/internal.h | 4 +++-
2 files changed, 7 insertions(+), 5 deletions(-)
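
The conversion follows a simple mechanical pattern. As a reading aid, here is
a sketch of that pattern outside the ACPI code; struct foo_dev and the foo_*()
helpers are invented for the example and are not part of the patch:

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/types.h>
#include <linux/wait-simple.h>

/* Hypothetical driver state, used only to show the conversion. */
struct foo_dev {
	struct swait_head wait;		/* was: wait_queue_head_t wait; */
	bool done;
};

static void foo_init(struct foo_dev *d)
{
	init_swait_head(&d->wait);	/* was: init_waitqueue_head(&d->wait); */
	d->done = false;
}

static int foo_wait_for_completion(struct foo_dev *d)
{
	/* was: wait_event_timeout(d->wait, d->done, msecs_to_jiffies(1)) */
	if (swait_event_timeout(d->wait, d->done, msecs_to_jiffies(1)))
		return 0;
	return -ETIME;
}

static void foo_complete(struct foo_dev *d)
{
	d->done = true;
	swait_wake(&d->wait);		/* was: wake_up(&d->wait); */
}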

diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index 5812e01..db0e6c3 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -222,7 +222,7 @@ static int ec_poll(struct acpi_ec *ec)
if (ec_transaction_done(ec))
return 0;
} else {
- if (wait_event_timeout(ec->wait,
+ if (swait_event_timeout(ec->wait,
ec_transaction_done(ec),
msecs_to_jiffies(1)))
return 0;
@@ -272,7 +272,7 @@ static int ec_wait_ibf0(struct acpi_ec *ec)
unsigned long delay = jiffies + msecs_to_jiffies(ec_delay);
/* interrupt wait manually if GPE mode is not active */
while (time_before(jiffies, delay))
- if (wait_event_timeout(ec->wait, ec_check_ibf0(ec),
+ if (swait_event_timeout(ec->wait, ec_check_ibf0(ec),
msecs_to_jiffies(1)))
return 0;
return -ETIME;
@@ -612,7 +612,7 @@ static u32 acpi_ec_gpe_handler(acpi_handle gpe_device,
advance_transaction(ec, acpi_ec_read_status(ec));
if (ec_transaction_done(ec) &&
(acpi_ec_read_status(ec) & ACPI_EC_FLAG_IBF) == 0) {
- wake_up(&ec->wait);
+ swait_wake(&ec->wait);
ec_check_sci(ec, acpi_ec_read_status(ec));
}
return ACPI_INTERRUPT_HANDLED | ACPI_REENABLE_GPE;
@@ -676,7 +676,7 @@ static struct acpi_ec *make_acpi_ec(void)
return NULL;
ec->flags = 1 << EC_FLAGS_QUERY_PENDING;
mutex_init(&ec->lock);
- init_waitqueue_head(&ec->wait);
+ init_swait_head(&ec->wait);
INIT_LIST_HEAD(&ec->list);
raw_spin_lock_init(&ec->curr_lock);
return ec;
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index 68ed95f..2519b6e 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -23,6 +23,8 @@

#define PREFIX "ACPI: "

+#include <linux/wait-simple.h>
+
int init_acpi_device_notify(void);
int acpi_scan_init(void);
int acpi_sysfs_init(void);
@@ -59,7 +61,7 @@ struct acpi_ec {
unsigned long global_lock;
unsigned long flags;
struct mutex lock;
- wait_queue_head_t wait;
+ struct swait_head wait;
struct list_head list;
struct transaction *curr;
raw_spinlock_t curr_lock;
--
1.7.8.3



Steven Rostedt

Jan 17, 2012, 10:45:31 PM
to linux-...@vger.kernel.org, linux-rt-users, Thomas Gleixner, Carsten Emde, John Kacur

Dear RT Folks,

This is the RT stable review cycle of patch 3.0.14-rt32-rc1.

Please scream at me if I messed something up. Please test the patches too.

The -rc release will be uploaded to kernel.org and will be deleted when
the final release is out. This is just a review release (or release candidate).

The pre-releases will not be pushed to the git repository; only the
final release is.

If all goes well, this patch will be converted to the next main release
on 1/20/2012.

Enjoy,

-- Steve


To build 3.0.14-rt32-rc1 directly, the following patches should be applied:

http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.0.tar.xz

http://www.kernel.org/pub/linux/kernel/v3.0/patch-3.0.14.xz

http://www.kernel.org/pub/linux/kernel/projects/rt/3.0/patch-3.0.14-rt32-rc1.patch.xz

You can also build from 3.0.14-rt31 by applying the incremental patch:

http://www.kernel.org/pub/linux/kernel/projects/rt/3.0/incr/patch-3.0.14-rt31-rt32-rc1.patch.xz


Changes from 3.0.14-rt31:

---


Steven Rostedt (1):
Linux 3.0.14-rt32-rc1

Thomas Gleixner (3):
sched: ttwu: Return success when only changing the saved_state value
wait-simple: Simple waitqueue implementation
acpi-gpe-use-wait-simple.patch

----
drivers/acpi/ec.c | 8 +-
drivers/acpi/internal.h | 4 +-
include/linux/wait-simple.h | 152 +++++++++++++++++++++++++++++++++++++++++++
kernel/Makefile | 2 +-
kernel/sched.c | 4 +-
kernel/wait-simple.c | 62 +++++++++++++++++
localversion-rt | 2 +-
7 files changed, 226 insertions(+), 8 deletions(-)


Steven Rostedt

Jan 17, 2012, 10:45:35 PM
to linux-...@vger.kernel.org, linux-rt-users, Thomas Gleixner, Carsten Emde, John Kacur
[PATCH RT 4/4] Linux 3.0.14-rt32-rc1
From: Steven Rostedt <sros...@redhat.com>

---
localversion-rt | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/localversion-rt b/localversion-rt
index a68b433..2744705 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt31
+-rt32-rc1
--
1.7.8.3



Steven Rostedt

Jan 17, 2012, 10:45:33 PM
to linux-...@vger.kernel.org, linux-rt-users, Thomas Gleixner, Carsten Emde, John Kacur
[PATCH RT 2/4] wait-simple: Simple waitqueue implementation
From: Thomas Gleixner <tg...@linutronix.de>

wait_queue is a Swiss army knife, and in most cases the complexity is
not needed. For RT, waitqueues are a constant source of trouble, as we
can't convert the head lock to a raw spinlock due to fancy and
long-lasting callbacks.

Provide a slim version, which allows RT to replace wait queues. This
should go mainline as well, as it lowers memory consumption and
runtime overhead.

Signed-off-by: Thomas Gleixner <tg...@linutronix.de>
Signed-off-by: Steven Rostedt <ros...@goodmis.org>
---

include/linux/wait-simple.h | 152 +++++++++++++++++++++++++++++++++++++++++++
kernel/Makefile | 2 +-
kernel/wait-simple.c | 62 +++++++++++++++++
3 files changed, 215 insertions(+), 1 deletions(-)
create mode 100644 include/linux/wait-simple.h
create mode 100644 kernel/wait-simple.c
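
As a reading aid (not part of the patch), here is roughly what a blocking
wait looks like when swait_event() from the header below is written out by
hand; example_wq, example_cond and the example_*() functions are made-up
names for the illustration:

#include <linux/sched.h>
#include <linux/wait-simple.h>

/* Made-up shared state for the example. */
static DEFINE_SWAIT_HEAD(example_wq);
static bool example_cond;

/* Roughly what swait_event(example_wq, example_cond) expands to. */
static void example_wait(void)
{
	DEFINE_SWAITER(w);

	for (;;) {
		swait_prepare(&example_wq, &w, TASK_UNINTERRUPTIBLE);
		if (example_cond)
			break;
		schedule();
	}
	swait_finish(&example_wq, &w);
}

static void example_wake(void)
{
	example_cond = true;
	/* Wakes TASK_NORMAL (interruptible and uninterruptible) waiters. */
	swait_wake(&example_wq);
}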

diff --git a/include/linux/wait-simple.h b/include/linux/wait-simple.h
new file mode 100644
index 0000000..de69d8a
--- /dev/null
+++ b/include/linux/wait-simple.h
@@ -0,0 +1,152 @@
+#ifndef _LINUX_WAIT_SIMPLE_H
+#define _LINUX_WAIT_SIMPLE_H
+
+#include <linux/spinlock.h>
+#include <linux/list.h>
+
+#include <asm/current.h>
+
+struct swaiter {
+ struct task_struct *task;
+ struct list_head node;
+};
+
+#define DEFINE_SWAITER(name) \
+ struct swaiter name = { \
+ .task = current, \
+ .node = LIST_HEAD_INIT((name).node), \
+ }
+
+struct swait_head {
+ raw_spinlock_t lock;
+ struct list_head list;
+};
+
+#define DEFINE_SWAIT_HEAD(name) \
+ struct swait_head name = { \
+ .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \
+ .list = LIST_HEAD_INIT((name).list), \
+ }
+
+extern void __init_swait_head(struct swait_head *h, struct lock_class_key *key);
+
+#define init_swait_head(swh) \
+ do { \
+ static struct lock_class_key __key; \
+ \
+ __init_swait_head((swh), &__key); \
+ } while (0)
+
+/*
+ * Waiter functions
+ */
+static inline bool swaiter_enqueued(struct swaiter *w)
+{
+ return w->task != NULL;
+}
+
+extern void swait_prepare(struct swait_head *head, struct swaiter *w, int state);
+extern void swait_finish(struct swait_head *head, struct swaiter *w);
+
+/*
+ * Adds w to head->list. Must be called with head->lock locked.
+ */
+static inline void __swait_enqueue(struct swait_head *head, struct swaiter *w)
+{
+ list_add(&w->node, &head->list);
+}
+
+/*
+ * Removes w from head->list. Must be called with head->lock locked.
+ */
+static inline void __swait_dequeue(struct swaiter *w)
+{
+ list_del_init(&w->node);
+}
+
+/*
+ * Wakeup functions
+ */
+extern void __swait_wake(struct swait_head *head, unsigned int state);
+
+static inline void swait_wake(struct swait_head *head)
+{
+ __swait_wake(head, TASK_NORMAL);
+}
+
+/*
+ * Event API
+ */
+
+#define __swait_event(wq, condition) \
+do { \
+ DEFINE_SWAITER(__wait); \
+ \
+ for (;;) { \
+ swait_prepare(&wq, &__wait, TASK_UNINTERRUPTIBLE); \
+ if (condition) \
+ break; \
+ schedule(); \
+ } \
+ swait_finish(&wq, &__wait); \
+} while (0)
+
+/**
+ * swait_event - sleep until a condition gets true
+ * @wq: the waitqueue to wait on
+ * @condition: a C expression for the event to wait for
+ *
+ * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
+ * @condition evaluates to true. The @condition is checked each time
+ * the waitqueue @wq is woken up.
+ *
+ * wake_up() has to be called after changing any variable that could
+ * change the result of the wait condition.
+ */
+#define swait_event(wq, condition) \
+do { \
+ if (condition) \
+ break; \
+ __swait_event(wq, condition); \
+} while (0)
+
+#define __swait_event_timeout(wq, condition, ret) \
+do { \
+ DEFINE_SWAITER(__wait); \
+ \
+ for (;;) { \
+ swait_prepare(&wq, &__wait, TASK_UNINTERRUPTIBLE); \
+ if (condition) \
+ break; \
+ ret = schedule_timeout(ret); \
+ if (!ret) \
+ break; \
+ } \
+ swait_finish(&wq, &__wait); \
+} while (0)
+
+/**
+ * swait_event_timeout - sleep until a condition gets true or a timeout elapses
+ * @wq: the waitqueue to wait on
+ * @condition: a C expression for the event to wait for
+ * @timeout: timeout, in jiffies
+ *
+ * The process is put to sleep (TASK_UNINTERRUPTIBLE) until the
+ * @condition evaluates to true. The @condition is checked each time
+ * the waitqueue @wq is woken up.
+ *
+ * wake_up() has to be called after changing any variable that could
+ * change the result of the wait condition.
+ *
+ * The function returns 0 if the @timeout elapsed, and the remaining
+ * jiffies if the condition evaluated to true before the timeout elapsed.
+ */
+#define swait_event_timeout(wq, condition, timeout) \
+({ \
+ long __ret = timeout; \
+ if (!(condition)) \
+ __swait_event_timeout(wq, condition, __ret); \
+ __ret; \
+})
+
+#endif
diff --git a/kernel/Makefile b/kernel/Makefile
index 11949f1..6a9558b 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -10,7 +10,7 @@ obj-y = sched.o fork.o exec_domain.o panic.o printk.o \
kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o \
hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
notifier.o ksysfs.o pm_qos_params.o sched_clock.o cred.o \
- async.o range.o jump_label.o
+ async.o range.o wait-simple.o jump_label.o
obj-y += groups.o

ifdef CONFIG_FUNCTION_TRACER
diff --git a/kernel/wait-simple.c b/kernel/wait-simple.c
new file mode 100644
index 0000000..7f1cb72
--- /dev/null
+++ b/kernel/wait-simple.c
@@ -0,0 +1,62 @@
+/*
+ * Simple waitqueues without fancy flags and callbacks
+ *
+ * (C) 2011 Thomas Gleixner <tg...@linutronix.de>
+ *
+ * Based on kernel/wait.c
+ *
+ * For licencing details see kernel-base/COPYING
+ */
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/wait-simple.h>
+
+void __init_swait_head(struct swait_head *head, struct lock_class_key *key)
+{
+ raw_spin_lock_init(&head->lock);
+ lockdep_set_class(&head->lock, key);
+ INIT_LIST_HEAD(&head->list);
+}
+EXPORT_SYMBOL_GPL(__init_swait_head);
+
+void swait_prepare(struct swait_head *head, struct swaiter *w, int state)
+{
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&head->lock, flags);
+ w->task = current;
+ __swait_enqueue(head, w);
+ set_current_state(state);
+ raw_spin_unlock_irqrestore(&head->lock, flags);
+}
+EXPORT_SYMBOL_GPL(swait_prepare);
+
+void swait_finish(struct swait_head *head, struct swaiter *w)
+{
+ unsigned long flags;
+
+ __set_current_state(TASK_RUNNING);
+ if (w->task) {
+ raw_spin_lock_irqsave(&head->lock, flags);
+ __swait_dequeue(w);
+ raw_spin_unlock_irqrestore(&head->lock, flags);
+ }
+}
+EXPORT_SYMBOL_GPL(swait_finish);
+
+void __swait_wake(struct swait_head *head, unsigned int state)
+{
+ struct swaiter *curr, *next;
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&head->lock, flags);
+
+ list_for_each_entry_safe(curr, next, &head->list, node) {
+ if (wake_up_state(curr->task, state)) {
+ __swait_dequeue(curr);
+ curr->task = NULL;
+ }
+ }
+
+ raw_spin_unlock_irqrestore(&head->lock, flags);
+}
--
1.7.8.3



Steven Rostedt

Jan 17, 2012, 11:21:20 PM
to linux-...@vger.kernel.org, linux-rt-users, Thomas Gleixner, Carsten Emde, John Kacur
On Tue, 2012-01-17 at 22:45 -0500, Steven Rostedt wrote:
> Dear RT Folks,
>
> This is the RT stable review cycle of patch 3.0.14-rt32-rc1.
>

Note, I know that mainline is at 3.0.17, but I want to test these
patches first before I merge in the stable tree.

After this is released, I'll make a 3.0.17-rt33 (or 3.0.18-rt33 if that
appears first ;-)

-- Steve


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
