This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] QLogic Fibre Channel HBA Support

To: Steve Traugott <stevegt@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] QLogic Fibre Channel HBA Support
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>
Date: Mon, 05 Jul 2004 21:14:15 +0100
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxxx, Ian.Pratt@xxxxxxxxxxxx
Delivery-date: Mon, 05 Jul 2004 21:15:51 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: Your message of "Mon, 05 Jul 2004 11:24:26 PDT." <20040705182426.GG18863@pathfinder>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
> > I've mainly used a NetApp filer h/w target, so I haven't really
> > got enough experience to say whether the Ardistech code is
> > stable or not. There's always enbd, nbd, gnbd which are all
> > simple enough to believe that they work...
> I can see how that works using initrd to mount *nbd as root in guest
> domains, but what about using *nbd in dom0 and then allocating that as
> VDs to the other domains?  Is that supposed to work?  I remember
> something about needing physical raw partitions for VBDs, at least under
> 1.2.  Am I missing something?

You can, in principle, export anything that dom0 sees as a block
device (e.g. sda7, nbd0, or an LVM volume in vg0) as a block
device in another domain (e.g. sda1, hda1).

Hence, you should be able to implement the equivalent of Xen 1.2
VDs just by using Linux's existing LVM mechanism.
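As a sketch of what that could look like (the partition, volume
group, and guest names here are hypothetical, and the disk-export
syntax is illustrative rather than quoted from the thread):

```shell
# In dom0: put a spare partition under LVM and carve out one
# logical volume per guest (all names are made up for this example).
pvcreate /dev/sda7
vgcreate vg0 /dev/sda7
lvcreate -L 4G -n guest1-root vg0

# The resulting block device /dev/vg0/guest1-root can then be
# handed to a guest domain as an ordinary disk, e.g. via a line
# like the following in the domain's config file:
#   disk = ['phy:vg0/guest1-root,sda1,w']
# The guest simply sees /dev/sda1, with no knowledge of LVM.
```

Resizing or snapshotting the guest's disk then comes for free from
the standard LVM tools in dom0.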

The current tools don't quite allow the full set of functionality
that Xen implements, but adding support for LVM partitions should be
straightforward.

> (For anyone curious, if using *nbd I would need to keep it in dom0,
> rather than in each guest, for both security and maintainability.  For a
> public Xenoserver, uid 0 on the guests is assumed to be untrusted.)

Each domain could use nbd directly and have its own separate area
of disk and its own uid space. The only issue is that using nbd
directly in each domain would be quite apparent to the user of
each VM, e.g. an initrd would be required to mount nbd as the
root file system. By running nbd in dom0 you can hide all of
this from the other domains.
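A minimal sketch of the dom0-side arrangement (the server address,
port, and device names are hypothetical, and the export line is
illustrative):

```shell
# In dom0: attach the remote disk as a local block device.
# Server address and port are made up for this example.
nbd-client nbd-server.example.com 2000 /dev/nbd0

# /dev/nbd0 can now be exported to a guest domain like any
# physical device, e.g. with a config line such as:
#   disk = ['phy:nbd0,sda1,w']
# The guest just sees an ordinary /dev/sda1 and needs no nbd
# client, network credentials, or special initrd of its own.
```

This keeps the network storage details, and the credentials for
reaching the server, entirely inside dom0, which matches the
untrusted-guest assumption above.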

