This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] MPI benchmark performance gap between native linux and domU

To: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>, "Santos, Jose Renato G \(Jose Renato Santos\)" <joserenato.santos@xxxxxx>, "xuehai zhang" <hai@xxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] MPI benchmark performance gap between native linux and domU
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Wed, 6 Apr 2005 08:23:52 +0100
Cc: "Turner, Yoshio" <yoshio_turner@xxxxxx>, Aravind Menon <aravind.menon@xxxxxxx>, Xen-devel@xxxxxxxxxxxxxxxxxxx, G John Janakiraman <john@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 06 Apr 2005 07:23:49 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcU6LhxujPVJCagoTPKbDyinXKneVQAElBcAAA0VRYAAAMIdoA==
Thread-topic: [Xen-devel] MPI benchmark performance gap between native linux and domU
> If you're on an SMP with the dom0 and domU's on different 
> CPUs (and have CPU to burn) then you might get a performance 
> improvement by artificially capping some of the natural 
> batching to just a couple of packets. You could try modifying 
> netback's net_rx_action to send the notification through to 
> netfront more eagerly. This will help get the latency down, 
> at the cost of burning more CPU.

To be clearer, modify net_rx_action in netback as follows to kick the
frontend after every packet. I expect this might help for some of the
larger message sizes. Kicking on every packet may be overdoing it, so you
might want to adjust to every Nth, using the rx_notify array to store
the number of packets queued per netfront driver.

Overall, the MPI SendRecv benchmark is an absolute worst case scenario
for s/w virtualization. Any 'optimisations' we add will be at the
expense of reduced CPU efficiency, possibly resulting in reduced
bandwidth for many users. The best solution to this is to use a 'smart
NIC' or HCA (such as the Arsenic GigE we developed) that can deliver
packets directly to VMs. I expect we'll see a number of such NICs on the
market before too long, and they'll be great for Xen.


        evtchn = netif->evtchn;
        id =
        if ( make_rx_response(netif, id, status, mdata, size) &&
             (rx_notify[evtchn] == 0) )
-        {
-            rx_notify[evtchn] = 1;
-            notify_list[notify_nr++] = evtchn;
-        }
+            notify_via_evtchn(evtchn);

