
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] What is the "fastest" way to access disk storage from DomU?
From: "Todd Deshane" <deshantm@xxxxxxxxx>
Date: Sat, 26 Jan 2008 00:17:19 -0500
Delivery-date: Fri, 25 Jan 2008 21:17:52 -0800
In-reply-to: <20080125084033.GC9803@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <20080124004402.GG17160@xxxxxxxxxxxxxxxxxxx> <200801242318.31698.mark.williamson@xxxxxxxxxxxx> <20080125084033.GC9803@xxxxxxxxxxxxxxxxxxx>
Reply-to: deshantm@xxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx


On Jan 25, 2008 3:40 AM, Maximilian Wilhelm <max@xxxxxxxxxxx> wrote:
On Thursday, 24 January, Mark Williamson wrote the following:

> > I've read some threads about storage speed here but didn't really get
> > a clue about what's the "best" or fastest way to set it up.

> > At the moment all the virtual disks are configured as

> >   disk = [ "phy:/dev/vg_xen1/<LV>,xvda1,w", ... ]

> > The volumes residing on the SAN storage are configured via EVMS and
> > I get 200MB/s writing speed from Dom0 (measured with dd if=/dev/zero
> > of=/mnt/file) and "only" around 150MB/s when doing the same from DomU.

> When you do the tests of writing speed from dom0, are you writing to the
> domU's filesystem LV?  Otherwise you're not testing like-for-like since
> you're using a different part of the storage.  I'm not sure if this makes a
> difference in your case, but different parts of a physical disk can have
> surprisingly big differences in bandwidth (outer edge of the disk moves
> faster, so better bandwidth).

Sure, I used the same EVMS volume.
Anything else would have been pointless :)
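
For reference, a like-for-like run of that test could look roughly like the
sketch below. The LV name, the 1 GiB size and conv=fsync are assumptions for
illustration, not taken from the original measurement, and the domU would have
to be shut down while dom0 has its volume mounted:

   # dom0: mount the very LV that backs the domU disk and write a file on it
   mount /dev/vg_xen1/example_lv /mnt
   dd if=/dev/zero of=/mnt/file bs=1M count=1024 conv=fsync
   umount /mnt

   # domU: the same test on the same underlying volume, seen there as xvda1
   mount /dev/xvda1 /mnt
   dd if=/dev/zero of=/mnt/file bs=1M count=1024 conv=fsync
   umount /mnt

conv=fsync just forces the data out before dd reports its throughput, so the
numbers are less distorted by the page cache.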

> I'm not too familiar with EVMS, maybe there's some bottleneck there I'm not
> familiar with and therefore missing.  Does EVMS do cluster volume management?
> I guess it does, as you're using it on a SAN ;-)

Paired with heartbeat (necessary for EVMS) there is a Cluster Volume
Manager plugin/module (maybe the buzzword is a bit different), so it's
possible to have the volumes shared among hosts.

> > Is this expected speed loss or is there any other way to give the DomU
> > access to the devices?

> You can only give domUs direct access to whole PCI devices at the moment, so
> unless you gave each a separate SAN adaptor, you can't really give them any
> more direct access.
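
For what it's worth, whole-device passthrough as described above boils down to
hiding the device from dom0 and listing it in the domU config. A minimal
sketch, where the PCI address 0000:06:00.0 and the pciback module name are
assumptions about the setup, not something from this thread:

   # dom0: keep dom0 drivers off the device and hand it to pciback
   modprobe pciback hide="(0000:06:00.0)"

   # domU config, next to the existing disk = [ ... ] line: pass the device through
   pci = [ '0000:06:00.0' ]

Whether that actually helps with a SAN adaptor that has to be shared between
guests is a separate question.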

> There's some work on SCSI passthrough being done by various people, so maybe
> at some point that'll let you pass individual LUNs through from the SAN.

Hmm.
That would most probably not be very helpful in my case, as I'm not
using the /dev/sd* devices I get from the SAN via about 4 paths (dual-head
HBA connected to a SAN with two SPs), but rather the /dev/mapper/<foo>
device handled via multipathd.

OK, I could push all the corresponding SCSI devices to the DomU and run
multipath inside (if possible), but as far as I know it's not a simple
task to figure out which sd* devices belong to which LUN.
(OK, multipath can do it, so there has to be a way...)
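
One way to read off that mapping with the tools already in play, sketched here
assuming nothing beyond multipath-tools being installed:

   # multipathd already knows which sd* paths it has grouped under which WWID/LUN
   multipath -ll

The output lists each multipath map with its WWID and, indented underneath,
the sd* paths that belong to that LUN.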

I haven't been following the details of this thread very closely, but can't you use a udev rule or trick to do this?
For example, if you look in /dev/disk/by-uuid, you get symlinks to the sd* devices. And the UUID should be unique per LUN, right?
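
A rough illustration of that idea; the UUID below is only a placeholder:

   # symlinks named after the filesystem UUID, each pointing at the device node carrying it
   ls -l /dev/disk/by-uuid/

   # resolve one entry back to its kernel device
   readlink -f /dev/disk/by-uuid/<UUID>

/dev/disk/by-id works the same way but is keyed on the SCSI identifier rather
than the filesystem UUID, which may be the closer match when the question is
which LUN a path belongs to.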

> For really high performance SAN access from domUs, the solution will
> eventually (one fine day, in the future) be to use SAN adaptors with
> virtualization support that can natively give shared direct access to
> multiple domUs.  We're not quite there yet though!

So let's hope :)

Thanks
Ciao
Max
--
       Follow the white penguin.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users