WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

Re: [Xen-users] Re: [Xen-devel] Release 0.8.5 of GPL PV drivers for Windows

On Tue, Mar 18, 2008 at 11:02:56PM +0200, Pasi Kärkkäinen wrote:
> On Tue, Mar 18, 2008 at 12:02:52PM -0700, Tom Brown wrote:
> > >
> > >Aren't these 100 Mbps or 1000 Mbps speeds funny numbers for today's CPU
> > >power? I mean, somewhere in the design, there's something wrong that forces
> > >us to make possibly too many context switches between domU, the hypervisor,
> > >and dom0. ???
> > >
> > >Emre
> > 
> > what, something like the 1500 byte maximum transmission unit (MTU) from 
> > back in the days when 10 MILLION bits per second was so insanely fast we 
> > connected everything to the same cable!? (remember 1200 baud modems?) Yes, 
> > there might be some "design" decisions that don't work all that well 
> > today.
> > 
> > AFAIK, Xen can't do oversize (jumbo) frames; that would be a big help for 
> > a lot of things (iSCSI, ATAoE, local networks)... but even so, AFAIK it 
> > would only be a relatively small improvement (jumbo frames only going up 
> > to about 8k AFAIK).
> > 
> 
> AFAIK Xen itself supports jumbo frames, as long as everything in both dom0
> and domU is configured correctly. Do you have information to the contrary? 
> 
> "Standard" jumbo frames are 9000 bytes.. 
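
[A rough sketch of what "configured correctly" means here, assuming dom0 uses
standard Linux bridging; the interface names eth0/xenbr0/vif1.0 are just
examples and will differ per setup:]

```shell
# In dom0: raise the MTU to 9000 on the physical NIC, the bridge,
# and the guest's backend vif (every hop must carry the larger frames):
ip link set dev eth0    mtu 9000
ip link set dev xenbr0  mtu 9000
ip link set dev vif1.0  mtu 9000

# Inside the domU: raise the MTU on the frontend interface to match:
ip link set dev eth0 mtu 9000

# Verify end to end with a non-fragmenting ping; 9000 bytes minus
# 28 bytes of IP+ICMP headers leaves 8972 bytes of payload:
ping -M do -s 8972 other-host
```

If any interface along the path is left at 1500, large frames are dropped or
fragmented, which is why "everything in both dom0 and domU" matters.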
> 
> Something that might be interesting: 
> http://www.vmware.com/pdf/hypervisor_performance.pdf
> 
> Especially the "Netperf" section..
> 
> "VMware ESX Server delivers near native performance for both one- and
> two-client tests. The Xen hypervisor, on the other hand, is extremely slow,
> performing at only 3.6 percent of the native performance."
> 
> "VMware ESX Server does very well, too: the throughput for two-client tests
> goes up 1.9-2.2 times compared to the one-client tests. Xen is almost CPU
> saturated for the one-client case, hence it does not get much scaling and
> even slows down for the send case."
> 
> "The Netperf results prove that by using its direct I/O architecture
> together with the paravirtualized vmxnet network driver approach, VMware ESX
> Server can successfully virtualize network I/O intensive datacenter
> applications such as Web servers, file servers, and mail servers. The very
> poor network performance makes the Xen hypervisor less suitable for any such
> applications."
> 
> It seems VMware used Xen 3.0.3 _without_ paravirtualized drivers (using QEMU
> emulated NIC), so that explains the poor result for Xen.. 
> 
> 
> Another test, this time with Xen Enterprise 3.2: 
> http://www.vmware.com/pdf/Multi-NIC_Performance.pdf
> 
> "With one NIC configured, the two hypervisors were each within a fraction of
> one percent of native throughput for both cases. Virtualization overhead had
> no effect for this lightly-loaded configuration."
> 
> "With two NICs, ESX301 had essentially the same throughput as native, but
> XE320 was slower by 10% (send) and 12% (receive), showing the effect of CPU
> overhead."
> 
> "With three NICs, ESX301 is close to its limit for a uniprocessor virtual
> machine, with a degradation compared to native of 4% for send and 3% for
> receive. XE320 is able to achieve some additional throughput using three
> NICs instead of two, but the performance degradation compared to native is
> substantial: 30% for send, 34% for receive."
> 
> 
> So using paravirtualized network drivers with Xen should make a huge
> difference, but there still seems to be something to optimize.. to catch up
> with VMware ESX. 
> 
> 

Replying to myself..

http://xen.org/files/xensummit_4/NetworkIO_Santos.pdf
http://xen.org/files/xensummit_fall07/16_JoseRenatoSantos.pdf

Papers from last fall about Xen network performance (with analysis and
benchmarks) and optimization suggestions.. 

Worth reading. 

So I guess the summary would be that with PV network drivers you should be
able to get near-native performance, at least with single-CPU/single-NIC
guests.. this is already the case with the XenSource Windows PV network
drivers. 

In the future, with netchannel2, performance should scale much higher (to 10
gigabit).

So now it's only a question of figuring out how to make the GPL PV Windows
drivers perform as well as the XenSource drivers :)

-- Pasi

> And some more benchmark results by XenSource: 
> http://www.citrixxenserver.com/Documents/hypervisor_performance_comparison_1_0_5_with_esx-data.pdf
> 
> Something I noticed about the benchmark configuration:
> 
> "XenEnterprise 3.2 - Windows: Virtual Network adapters: XenSource Xen Tools
> Ethernet Adapter RTL8139 Family PCI Fast Ethernet NIC, Receive Buffer 
> Size=64KB"
> 
> Receive buffer size=64KB.. is that something that needs to be tweaked in the
> drivers for better performance? Or is that just a benchmarking-tool-related
> setting.. 
> 
> -- Pasi
> 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users