xen-users

RE: [Xen-users] Lots of udp (multicast) packet loss in domU

To: James Harper <james.harper@xxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Lots of udp (multicast) packet loss in domU
From: Mike Kazmier <DaKaZ@xxxxxxxxx>
Date: Wed, 14 Jan 2009 01:52:39 +0000
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 13 Jan 2009 17:53:44 -0800
In-reply-to: <AEC6C66638C05B468B556EA548C1A77D0155032B@trantor>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thanks for the reply James, there are some comments below, but let me start
by stating that indeed this problem is NOT solved.  When we removed the CPU
cap we only moved on to the NEXT problem.  Here is the issue now: we are
still getting massive packet loss (now upwards of 60%) and it appears the
bridge is the culprit.  Again, we have approximately 600 Mbps of multicast
(UDP) traffic that we are trying to pass TO and then back FROM a domU, and
other domU's occasionally grab and use this traffic.  Each domU that starts
seems to consume >80% of the dom0's CPU - but only if it is attached to the
bridged Ethernet ports.  So, maybe our architecture just isn't supported?
Should we be using a routed configuration (with xorp as a multicast router?)
and/or just use PCI passthrough?  We don't see any such issues with PCI
passthrough, but then our domU's have to be connected via an external
switch, and that is something we were hoping to avoid.  Any advice here
would be great.
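
For context, the two attachment styles we are weighing look roughly like
this in the domU config file (the MAC, bridge name and PCI address below
are just placeholders, not our real values):

  # bridged: the domU vif hangs off a dom0 software bridge
  vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0' ]

  # PCI passthrough: NIC hidden from dom0 (pciback) and handed to the domU
  pci = [ '0000:04:00.0' ]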

On Mon, Jan 12, 2009 at 5:51 PM "James Harper" <james.harper@xxxxxxxxxxxxxxxx> 
wrote:
> > Hello,
> > 
> > After a few of us have spent a week google'ing around for answers, I
> > feel compelled to ask this question: how do I stop packet loss between
> > my dom0 and domU?  We are currently running xen-3.3.0 (and have tried
> > with xen-3.2.1) on a Gentoo system with a 2.6.18 kernel for both domU
> > and dom0.  We have also tried a 2.6.25 kernel for dom0 with exactly the
> > same results.
> > 
> > The goal is to run our multicast processing application in a domU with
> > the BRIDGED configuration.  Note: we do not have any problem if we put
> > the network interfaces into PCI Passthrough and use them exclusively in
> > the domU, but that is less than ideal as occasionally other domU's need
> > to communicate with those feeds.
> > 
> 
> Googling has probably already led you to these tips but just in case:
> 
> Try 'echo 0 > bridge-nf-call-iptables' if you haven't already. This will
> stop bridged traffic traversing any of your iptables firewall rules. If
> you are using ipv6 then also 'echo 0 > bridge-nf-call-ip6tables'

Tried this - no effect - we have no rules in place.
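
For reference, what we ran was along these lines (using the usual /proc
locations for the bridge-netfilter knobs):

  echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
  echo 0 > /proc/sys/net/bridge/bridge-nf-call-ip6tables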

> Another thing to try is turning off checksum offloading. I don't think
> it is likely to make much difference but due to the little effort
> required it's probably worthwhile. (ethtool -k to see what settings are
> on, ethtool -K to modify them)

Again, no difference here.
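
For completeness, this is roughly what we tried on the bridged NICs (the
interface name here is just an example; ours are renamed by the bridge
script):

  ethtool -k peth0                               # show current offload settings
  ethtool -K peth0 tx off rx off tso off sg off  # disable offloads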

> Also try pinning Dom0 and DomU to separate physical CPU's. Again I don't
> think this is likely to make much difference but it's easy to test.

Did this, also pinned domU to unused CPUs.  Again, no effect.
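
For the record, the pinning was done roughly like this (domain name and CPU
numbers are just examples):

  xm vcpu-pin Domain-0 0 0       # keep dom0's vcpu on physical CPU 0
  xm vcpu-pin mcast-domu 0 2     # pin the domU's vcpu to an otherwise idle CPU
  xm vcpu-list                   # confirm the pinning took effect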

--Mike


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users