This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] new netfront and occasional receive path lockup

To: Christophe Saout <christophe@xxxxxxxx>
Subject: Re: [Xen-devel] new netfront and occasional receive path lockup
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Mon, 23 Aug 2010 17:53:13 -0700
Cc: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 23 Aug 2010 17:53:41 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1282502275.14390.59.camel@xxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1282495384.12843.11.camel@xxxxxxxxxxxxxxxxxxxx> <1282502275.14390.59.camel@xxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20100720 Fedora/3.1.1-1.fc13 Lightning/1.0b2pre Thunderbird/3.1.1
 On 08/22/2010 11:37 AM, Christophe Saout wrote:
> Hmm, looking a bit more.
> rx.sring->private.netif.smartpoll_active lies in a piece of memory that
> is shared between netback and netfront, is that right?
> If that is so, the tx spinlock in netfront only protects against
> simultaneous modifications from another thread in netfront, so netback
> can read smartpoll_active while netfront is fiddling with it.  Is that
> safe?

It depends on exactly how it is used.  But any use of cross-CPU shared
memory must carefully consider access ordering, and possibly include
explicit barriers to make sure that the expected ordering is actually
seen by all CPUs.


> Note that when the lockup occurs, /proc/interrupts in the guest doesn't
> show any interrupts arriving for eth0 anymore.  Are there any
> conditions where netback waits for netfront to retrieve packets even
> when new packets arrive? (e.g. when the ring is full and there is
> backlog into the network stack or something?) Any way to debug this from
> the Dom0 side?  Like looking into the state of the ring from userspace?
> Debug options?
>       Christophe
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel