xen-devel

RE: [Xen-devel] MPI benchmark performance gap between native linux and domU

To: "Santos, Jose Renato G \(Jose Renato Santos\)" <joserenato.santos@xxxxxx>, "xuehai zhang" <hai@xxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] MPI benchmark performance gap between native linux anddomU
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Wed, 6 Apr 2005 08:08:17 +0100
Cc: "Turner, Yoshio" <yoshio_turner@xxxxxx>, Aravind Menon <aravind.menon@xxxxxxx>, Xen-devel@xxxxxxxxxxxxxxxxxxx, G John Janakiraman <john@xxxxxxxxxxxxxxxxxxx>
>   I believe your problem is due to a higher network latency 
> in Xen. Your formula to compute throughput uses the inverse 
> of round trip latency (if I understood it correctly). This 
> probably means that your application is sensitive to the 
> round trip latency. Your latency measurements show a higher 
> value for domainU and this is the reason for the lower 
> throughput.  I am not sure but it is possible that network 
> interrupts or event notifications in the inter-domain channel 
> are being coalesced and causing longer latency. Keir, do 
> event notifications get coalesced in the inter-domain I/O 
> channel for networking?
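
As an aside, the relationship Renato describes is easy to see with some
made-up numbers: for a ping-pong style exchange the achievable bandwidth
is roughly the message size over the one-way latency, so throughput
scales with the inverse of the RTT. The sketch below is illustrative
only; the message size and latencies are invented, not the benchmark's
actual measurements.

/* Illustration only: how ping-pong bandwidth falls out of the RTT.
 * Message size and latencies are invented, not measured values. */
#include <stdio.h>

int main(void)
{
    double msg_bytes  = 4096.0;   /* assumed message size (bytes)  */
    double rtt_native = 80e-6;    /* assumed native round trip (s) */
    double rtt_domU   = 130e-6;   /* assumed domU round trip (s)   */

    /* One-way time is roughly half the round trip, so bandwidth is
     * approximately msg_bytes / (rtt / 2), i.e. proportional to 1/RTT. */
    printf("native: %.1f MB/s\n", msg_bytes / (rtt_native / 2) / 1e6);
    printf("domU:   %.1f MB/s\n", msg_bytes / (rtt_domU  / 2) / 1e6);
    return 0;
}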

There's no timeout-based coalescing right now, so we'll be pushing
through packets as soon as the sending party empties its own work
queue.[*]

If you're on an SMP box with dom0 and the domU on different CPUs (and
have CPU to burn), then you might get a performance improvement by
artificially capping some of the natural batching to just a couple of
packets. You could try modifying netback's net_rx_action to send the
notification through to netfront more eagerly. This will help get the
latency down, at the cost of burning more CPU.
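
To make the batching concrete, here's a rough user-space sketch, not the
real netback code: net_rx_action, the ring details and the notification
hook are reduced to invented placeholders (process_queue,
notify_frontend), so treat it purely as an illustration of the
trade-off. With cap == 0 it models today's behaviour of notifying the
frontend once the sender's work queue has drained; with a small cap it
forces a notification every couple of packets, buying latency at the
cost of extra notifications and CPU.

/* Toy model of notification batching between backend and frontend.
 * This is NOT the real netback/netfront code; process_queue() and
 * notify_frontend() are invented stand-ins used only to show the
 * trade-off between batching and latency. */
#include <stdio.h>

static unsigned long notifications;

/* Stand-in for sending an event-channel notification to the frontend. */
static void notify_frontend(void)
{
    notifications++;
}

/* net_rx_action-style loop: push a burst of queued packets, notifying
 * either once the queue has drained (cap == 0, roughly the current
 * behaviour) or after every 'cap' packets (the more eager variant). */
static void process_queue(unsigned int queued_packets, unsigned int cap)
{
    unsigned int since_notify = 0;

    while (queued_packets--) {
        /* ... transfer the packet to the frontend's ring here ... */
        since_notify++;
        if (cap && since_notify >= cap) {
            notify_frontend();
            since_notify = 0;
        }
    }
    if (since_notify)
        notify_frontend();
}

int main(void)
{
    notifications = 0;
    process_queue(32, 0);   /* default batching */
    printf("cap=0: %lu notifications for 32 packets\n", notifications);

    notifications = 0;
    process_queue(32, 2);   /* eager: notify every 2 packets */
    printf("cap=2: %lu notifications for 32 packets\n", notifications);
    return 0;
}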

Ian

[*] We actually need to add some timeout-based coalescing to make true
inter-VM communication work more efficiently (i.e. two VMs on the same
node talking to each other rather than out over the network). We'll
probably need to have some heuristic to detect when we're entering a
'high bandwidth regime' and only then enable the timeout-forced batching.
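
One possible shape for that heuristic, purely as a sketch with invented
names and thresholds (none of this reflects an existing implementation):
count packets over a short window, and only arm the timeout-forced
batching once the observed rate crosses a threshold, dropping back to
immediate notification as soon as traffic goes quiet again so the
latency-sensitive case isn't hurt.

/* Sketch of a "high bandwidth regime" detector. Thresholds, field and
 * function names are invented for illustration; nothing here reflects an
 * actual implementation. Idea: enable timeout-forced batching only while
 * the recent packet rate is high, and drop back to immediate notification
 * when traffic goes quiet so latency-sensitive traffic isn't penalised. */
#include <stdbool.h>
#include <stdio.h>

#define WINDOW_PACKETS   256      /* assumed: packets per rate sample    */
#define HIGH_RATE_PPS    50000UL  /* assumed: "high bandwidth" threshold */
#define IDLE_GAP_US      2000UL   /* assumed: gap that counts as "quiet" */

struct rate_detector {
    unsigned long window_start_us;
    unsigned long last_packet_us;
    unsigned int  count;
    bool          batching;       /* true => use timeout-forced batching */
};

/* Call once per packet; now_us is a microsecond timestamp. */
static void rate_detector_update(struct rate_detector *rd, unsigned long now_us)
{
    /* A long silence means we're no longer in a high-bandwidth burst. */
    if (now_us - rd->last_packet_us > IDLE_GAP_US) {
        rd->batching = false;
        rd->window_start_us = now_us;
        rd->count = 0;
    }
    rd->last_packet_us = now_us;

    if (++rd->count < WINDOW_PACKETS)
        return;

    unsigned long elapsed_us = now_us - rd->window_start_us;
    unsigned long pps = elapsed_us ? rd->count * 1000000UL / elapsed_us : 0;

    rd->batching = (pps >= HIGH_RATE_PPS);
    rd->window_start_us = now_us;
    rd->count = 0;
}

int main(void)
{
    struct rate_detector rd = { 0, 0, 0, false };

    /* 512 packets arriving 10us apart (~100k pps): batching switches on. */
    for (unsigned long t = 1; t <= 512; t++)
        rate_detector_update(&rd, t * 10);
    printf("after burst: batching %s\n", rd.batching ? "on" : "off");

    /* One stray packet after a long gap: batching switches off again. */
    rate_detector_update(&rd, 512 * 10 + 100000);
    printf("after idle:  batching %s\n", rd.batching ? "on" : "off");
    return 0;
}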


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel