xen-users

Re: [Xen-users] Speed of Xen Network Bridge Interface (10/100/1000?)

To: Stephan Seitz <s.seitz@xxxxxxxxxxxx>
Subject: Re: [Xen-users] Speed of Xen Network Bridge Interface (10/100/1000?)
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Fri, 30 Nov 2007 18:16:06 +0000
Cc: xen-users@xxxxxxxxxxxxxxxxxxx, Emre Erenoglu <erenoglu@xxxxxxxxx>
Delivery-date: Fri, 30 Nov 2007 10:17:04 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <474D1806.1090103@xxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <fe9771a80711270911r49a9b437gccbcacdf49b6ee1d@xxxxxxxxxxxxxx> <200711272212.37427.mark.williamson@xxxxxxxxxxxx> <474D1806.1090103@xxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.6 (enterprise 0.20070907.709405)
> Just to add some salt to the original poster's question, I'm going to
> migrate cluster members into domUs on one big machine.
> Well, it's not a full-featured HA cluster; it consists of one huge nfs/nis
> server and a lot of diskless servers with as little failover management
> as necessary.
> First tests showed that booting one of the machines as a domU results in
> random disk throughput of about 10 MB/s against about 80-95 MB/s when
> running on bare metal.

I'm not entirely clear where your domU is accessing storage from vs the bare 
metal case.  The domU is accessing disk via NFS?  How about the bare metal 
machine in your example?

> I don't necessarily need to keep the current infrastructure, but I'll
> definitely need one mountpoint available on many (expandable) machines.
> Is there some best-practice description on how to get one mountpoint
> available to a lot of domUs?

Well, if you can arrange for it to be read-only then the obvious thing to do is 
to export read-only VBDs to all the domUs.  That way they should all get 
access to it at a speed similar to local disk access.
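
For example, a minimal sketch of such a setup (the volume and device
names below are only placeholders):

  # in each domU's config file: private root read-write, shared data read-only
  disk = [ 'phy:/dev/vg0/domU1-root,xvda,w',
           'phy:/dev/vg0/shared,xvdb,r' ]

  # inside the domU
  mount -o ro /dev/xvdb /shared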

If you needed them to have private writeable access you could look at some 
kind of layered copy-on-write access (e.g. run unionfs in the domUs?).
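
Roughly, and assuming the domU kernels have unionfs available (paths are
placeholders again), you'd mount the shared VBD read-only and stack a
local writeable directory on top of it:

  mount -o ro /dev/xvdb /shared-ro
  mount -t unionfs -o dirs=/local-rw=rw:/shared-ro=ro unionfs /shared

Writes then land in /local-rw while the shared data underneath stays
untouched.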

If they actually needed shared writeable access then at the moment the best 
option is probably either NFS or to set them all up with a cluster filesystem 
such as GFS or OCFS2.  You might get better performance with a cluster 
filesystem - I'm not aware of any benchmarks of cluster FSes on Xen, though.
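
If you do go the cluster filesystem route, one way to wire it up (a
sketch only, and IIRC on the syntax - the names are placeholders) is to
export the same writeable VBD to every domU, overriding the
exclusive-use check by appending '!' to the mode, and let OCFS2 handle
the coordination:

  # in every domU's config file
  disk = [ 'phy:/dev/vg0/domU1-root,xvda,w',
           'phy:/dev/vg0/shared,xvdb,w!' ]

  # inside each domU, once the OCFS2 cluster configuration is in place
  mount -t ocfs2 /dev/xvdb /shared

Without a cluster filesystem on top, sharing a writeable block device
like that will corrupt data, so don't skip that part.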

Various projects, such as my XenFS filesystem, are aiming to provide high-
performance NFS-like functionality on Xen, but I don't know of any that are 
ready for production use.

Cheers,
Mark

> Thanks for any suggestion!
>
> Stephan
>
> Mark Williamson schrieb:
> >> I would like to learn the speed of the network bridge interfaces created
> >> by XEN.
> >>
> >> More specifically, on xenbr0, given that the traffic only occurs between
> >> my Dom0 host and PV DomU guest, am I limited to 10 Mbps, 100 Mbps or
> >> 1000 Mbps? Does it depend on something such as ethernet card capability
> >> (even though the packets don't go out of the card and stay inside Dom0)?
> >
> > It's not limited by your physical ethernet card, nor is it restricted to
> > any particular maximum.  It's basically limited by how fast the Xen
> > virtual network drivers and the Linux bridging code can move the data
> > around.  This used to actually be slower than a domU accessing the
> > physical ethernet, due to the extra memory operations that were required
> > (and it used a fair bit of CPU).  I think there have been some changes
> > to reduce the bottleneck and improve intra-host performance since then,
> > so it may be faster than I remember it.  I'm not sure if it's currently
> > faster than GigE; possibly.
> >
> > It ought to be significantly faster than 100Mbps on a modern machine. 
> > It'll act like a really fast ethernet card, with no hard limit on the
> > transmission speed (instead, transmission speed will be limited by how
> > powerful your machine is and how efficient the virtual ethernet code is).
> >
> >> I plan to use iSCSI or ATA-over-Ethernet, that's why I'm asking this
> >> question,
> >
> > Is that from dom0 to domU?  Do you have a particular reason for doing
> > that? Using blkback / blkfront would be simpler and more efficient.
> >
> > Cheers,
> > Mark
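
(On the intra-host throughput question quoted above: the easiest way to
get a real number for your own box is to run iperf between dom0 and a
domU - the address below is a placeholder:

  # in the domU
  iperf -s

  # in dom0, pointing at the domU's address on the bridge
  iperf -c 192.168.1.10 -t 30

That exercises the vif/bridge path directly, independently of the
physical NIC.)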



-- 
Dave: Just a question. What use is a unicycle with no seat?  And no pedals!
Mark: To answer a question with a question: What use is a skateboard?
Dave: Skateboards have wheels.
Mark: My wheel has a wheel!

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users