xen-devel

RE: [Xen-devel] Directly mapping vifs to physical devices in netback -an alternative to bridge

To: "Santos, Jose Renato G" <joserenato.santos@xxxxxx>, "Xen Devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Directly mapping vifs to physical devices in netback -an alternative to bridge
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Thu, 31 Aug 2006 15:39:58 +0100
Cc: Yoshio Turner <yoshiotu@xxxxxxxxxx>, G John Janakiraman <john@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 31 Aug 2006 07:46:52 -0700
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <08CA2245AFCF444DB3AC415E47CC40AF0DC47A@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcbMcIQRH4bGnSgURgScetMRwwq9egAmgPBw
Thread-topic: [Xen-devel] Directly mapping vifs to physical devices in netback -an alternative to bridge
> 
> Performance Results:
>   - Machine: 4-way P4 Xeon 2.8 GHz with 4 GB of RAM (dom0 with 512 MB and
> domU with 256 MB)
>   - Benchmark: single TCP connection at max rate on a gigabit interface
> (940 Mb/s)
> 
> Measurement: CPU utilization on domain0 (99% confidence interval for 8
> measurements)
> =======================================================================
> | Experiment | default bridge  | bridge with        |   netback       |
> |            |                 | netfilter disabled |   switching     |
> =======================================================================
> |  receive   |  85.00% ±0.38%  |   73.97% ±0.23%    |  72.17% ±0.56%  |
> |  transmit  |  77.13% ±0.49%  |   68.86% ±0.73%    |  66.34% ±0.52%  |
> =======================================================================

I'm somewhat surprised it doesn't do better than that. We see the bridge 
functions show up a lot in oprofile results, so I'd have expected more than a 
1.5% benefit. How are you measuring CPU utilization? Are dom0 and the domU on 
different CPUs?
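
For reference, a minimal sketch of one baseline method (an illustration only, 
not necessarily what was used here; the 10-second interval is arbitrary and 
iowait is counted as busy): sample /proc/stat in dom0 before and after the 
run and report the non-idle fraction.

    # Sample dom0's /proc/stat twice and report the non-idle CPU
    # fraction over the interval. Illustrative sketch only.
    import time

    def read_cpu_times():
        # First line of /proc/stat: "cpu user nice system idle iowait ..."
        f = open('/proc/stat')
        fields = f.readline().split()[1:]
        f.close()
        return [int(v) for v in fields]

    INTERVAL = 10  # seconds; match this to the benchmark run

    before = read_cpu_times()
    time.sleep(INTERVAL)
    after = read_cpu_times()

    deltas = [a - b for a, b in zip(after, before)]
    busy = sum(deltas) - deltas[3]  # field 4 ("idle"); iowait counts as busy
    print('CPU utilization: %.2f%%' % (100.0 * busy / sum(deltas)))

Note that /proc/stat seen from inside a domain doesn't account for time 
stolen by the hypervisor, so per-domain accounting from Xen itself (xentop / 
"xm top", where available) is more trustworthy when domains share a CPU, and 
pinning dom0 and the domU to different physical CPUs (e.g. with "xm 
vcpu-pin") keeps the comparison clean.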

Do you get the degraded bridging performance simply by having 
CONFIG_BRIDGE_NETFILTER=y in the compiled kernel, or do you need modules 
loaded or rules installed? Does ebtables have the same effect?
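
FWIW, on kernels built with CONFIG_BRIDGE_NETFILTER=y the hooks can also be 
switched off at runtime via the /proc/sys/net/bridge sysctls, which would 
separate the per-packet cost of the hooks from anything inherent in having 
the option compiled in. A rough sketch (it assumes the standard bridge-nf 
sysctl names, which only appear when the option is built in):

    # Turn off the bridge-netfilter call hooks at runtime (run as root
    # in dom0). Writing 0 stops bridged traffic being passed through
    # the iptables/ip6tables/arptables hooks without a kernel rebuild.
    import os

    sysctls = [
        '/proc/sys/net/bridge/bridge-nf-call-iptables',
        '/proc/sys/net/bridge/bridge-nf-call-ip6tables',
        '/proc/sys/net/bridge/bridge-nf-call-arptables',
    ]

    for path in sysctls:
        if os.path.exists(path):
            f = open(path, 'w')
            f.write('0\n')
            f.close()
            print('disabled %s' % path)
        else:
            print('%s not present (bridge-netfilter not built in?)' % path)

Comparing default vs hooks-off on the same kernel image would tell us whether 
the slowdown is per-packet hook traversal or something that comes just from 
compiling the option in.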

Thanks,
Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
