
[PATCH] xen-netfront: Fix handling packets on compound pages with skb_linearize


Zoltan Kiss

Aug 11, 2014, 1:40:04 PM
There is a long-known problem with the netfront/netback interface: if the guest
tries to send a packet which occupies more than MAX_SKB_FRAGS + 1 ring slots,
it gets dropped. The reason is that netback maps each of these slots to a frag
in the frags array, which is limited in size. Having so many slots has been
possible since compound pages were introduced, as the ring protocol slices them
up into individual (non-compound) page-aligned slots. The theoretical worst
case looks like this (note that skbs are limited to 64 KB here):
- linear buffer: at most PAGE_SIZE - 17 * 2 bytes, overlapping a page boundary,
  using 2 slots
- first 15 frags: 1 + PAGE_SIZE + 1 bytes long, with the first and last bytes
  at the end and the beginning of a page respectively, therefore using
  3 * 15 = 45 slots
- last 2 frags: 1 + 1 bytes, overlapping a page boundary, 2 * 2 = 4 slots
Although I don't think this 51-slot skb can really happen, we need a solution
which can deal with every scenario. In real life a packet is usually only a few
slots over the limit, but this tends to block the TCP stream, as the retry will
most likely have the same buffer layout.
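To make the slot arithmetic concrete, here is a minimal userspace sketch (not
part of the patch; PAGE_SIZE = 4096 and the starting offsets are illustrative
assumptions) that reproduces the 51-slot total using the same
DIV_ROUND_UP(offset + len, PAGE_SIZE) formula xennet_start_xmit() uses:

/* How many page-aligned ring slots a buffer of 'len' bytes occupies,
 * given its offset 'off' within the first page it touches. */
#include <stdio.h>

#define PAGE_SIZE	4096
#define MAX_SKB_FRAGS	17
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static unsigned int buf_slots(unsigned int off, unsigned int len)
{
	return DIV_ROUND_UP(off + len, PAGE_SIZE);
}

int main(void)
{
	unsigned int slots = 0;

	/* linear buffer: PAGE_SIZE - 17 * 2 bytes crossing one page boundary */
	slots += buf_slots(100, PAGE_SIZE - 17 * 2);		/* 2 slots */
	/* 15 frags of 1 + PAGE_SIZE + 1 bytes, starting on a page's last byte */
	slots += 15 * buf_slots(PAGE_SIZE - 1, PAGE_SIZE + 2);	/* 45 slots */
	/* 2 frags of 2 bytes straddling a page boundary */
	slots += 2 * buf_slots(PAGE_SIZE - 1, 2);		/* 4 slots */

	printf("worst case: %u slots, limit is MAX_SKB_FRAGS + 1 = %u\n",
	       slots, MAX_SKB_FRAGS + 1);	/* 51 slots vs. limit of 18 */
	return 0;
}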
This patch solves the problem by linearizing the packet. This is not the
fastest way, and it can fail more easily, since it tries to allocate one big
linear area for the whole packet, but it is probably simpler by an order of
magnitude than any alternative. This code path is probably not hit very
frequently anyway.

Signed-off-by: Zoltan Kiss <zolta...@citrix.com>
Cc: Wei Liu <wei....@citrix.com>
Cc: Ian Campbell <Ian.Ca...@citrix.com>
Cc: Paul Durrant <paul.d...@citrix.com>
Cc: net...@vger.kernel.org
Cc: linux-...@vger.kernel.org
Cc: xen-...@lists.xenproject.org

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 055222b..23359ae 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -628,9 +628,10 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) +
xennet_count_skb_frag_slots(skb);
if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
- net_alert_ratelimited(
- "xennet: skb rides the rocket: %d slots\n", slots);
- goto drop;
+ net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
+ slots, skb->len);
+ if (skb_linearize(skb))
+ goto drop;
}

spin_lock_irqsave(&queue->tx_lock, flags);
--

David Miller

Aug 11, 2014, 6:00:03 PM
From: Zoltan Kiss <zolta...@citrix.com>
Date: Mon, 11 Aug 2014 18:32:23 +0100

> There is a long-known problem with the netfront/netback interface: if the
> guest tries to send a packet which occupies more than MAX_SKB_FRAGS + 1 ring
> slots, it gets dropped.
> [rest of the commit message snipped]
>
> Signed-off-by: Zoltan Kiss <zolta...@citrix.com>

Applied.

You may wish to now make your queue stop/wake point be MAX_SKB_FRAGS + 1 slots.
That way you will always abide by the netdev queue management rules, in that
if the queue is awake you will always be able to accept at least one more SKB.
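A rough sketch of that stop/wake discipline (hypothetical code, not netfront's
actual implementation; the free-slot count is passed in as an argument because
the real ring accounting is not shown here):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Stop the queue whenever a worst-case skb (MAX_SKB_FRAGS + 1 slots)
 * might no longer fit in the ring. */
static void xennet_maybe_stop_tx(struct netdev_queue *txq,
				 unsigned int free_slots)
{
	if (free_slots < MAX_SKB_FRAGS + 1)
		netif_tx_stop_queue(txq);
}

/* Called from the TX completion path after slots have been released:
 * wake the queue once a worst-case skb fits again. */
static void xennet_maybe_wake_tx(struct netdev_queue *txq,
				 unsigned int free_slots)
{
	if (netif_tx_queue_stopped(txq) && free_slots >= MAX_SKB_FRAGS + 1)
		netif_tx_wake_queue(txq);
}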

Stefan Bader

Dec 1, 2014, 4:00:07 AM
This does not seem to be explicitly marked for stable. Has someone already
asked David Miller to put it on his stable queue? IMO it qualifies quite well,
and the actual change should be simple to pick/backport.
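
(For reference, the usual way to have a fix picked up for stable automatically
is to tag it at submission time in the commit message, e.g.:

Cc: <stable@vger.kernel.org> # 3.16.x

Since that tag is missing here, an explicit request is needed; for networking
patches that normally goes through David Miller's stable queue rather than
directly to the stable maintainers.)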

-Stefan




David Vrabel

Dec 1, 2014, 8:40:06 AM
I think it's a candidate, yes.

Can you expand on the user-visible impact of the bug this patch fixes?
I think it results in certain types of traffic not working (because the
domU always generates skbs with the problematic frag layout), but I
can't remember the details.

David

Zoltan Kiss

Dec 1, 2014, 9:00:07 AM
Yes, this line in the commit message talks about it: "In real life a packet
is usually only a few slots over the limit, but this tends to block the TCP
stream, as the retry will most likely have the same buffer layout."
Maybe we can add what kind of traffic has triggered this so far; AFAIK NFS
was one of them, and Stefan had another use case. But my memory of this is
blurry.

Zoli

Stefan Bader

Dec 1, 2014, 9:20:06 AM
We had a report about a web app hitting packet losses; I suspect it was also
streaming something. As an easy trigger we found that redis-benchmark (part of
the Redis key-value store) with a larger (IIRC 1 kB) payload would provoke the
fragmentation and the slot overrun. Though I think it did not fail outright
but showed a performance drop instead (from memory, which also suffers from
losing detail).

-Stefan



Luis Henriques

Dec 8, 2014, 5:20:05 AM
Thank you Stefan, I'm queuing this for the next 3.16 kernel release.

Cheers,
--
Luís



David Vrabel

Dec 8, 2014, 6:20:06 AM
Don't backport this yet. It's broken. It produces malformed requests,
and netback will report a fatal error and stop all traffic on the VIF.
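
My guess at the mechanism (an assumption on my part; it is not spelled out in
this thread): xennet_start_xmit() computes the head's page, offset and len
before the slot check, and skb_linearize() can reallocate skb->data, so those
values are stale when the requests are built. A sketch of the kind of fix
that would be needed on top of the patch:

	if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
		net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
				    slots, skb->len);
		if (skb_linearize(skb))
			goto drop;
		/* skb->data may have moved: refresh the head's layout */
		page = virt_to_page(skb->data);
		offset = offset_in_page(skb->data);
		len = skb_headlen(skb);
	}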

David

Luis Henriques

Dec 9, 2014, 5:00:09 AM
Ok, thank you. I've dropped it already.

Cheers,
--
Luís