> > I've mainly used a NetApp filer h/w target, so I haven't really
> > got enough experience to say whether the Ardistech code is
> > stable or not. There's always enbd, nbd and gnbd, which are all
> > simple enough to believe that they work...
>
> I can see how that works using initrd to mount *nbd as root in guest
> domains, but what about using *nbd in dom0 and then allocating that as
> VDs to the other domains? Is that supposed to work? I remember
> something about needing physical raw partitions for VBDs, at least under
> 1.2. Am I missing something?
You can, in principle, export anything that dom0 sees as a block
device (e.g. sda7, nbd0, an LVM logical volume in vg0) as a block
device in another domain (e.g. sda1, hda1).
Hence, you should be able to implement the equivalent of Xen 1.2
VDs just by using Linux's existing LVM mechanism.
The current tools don't quite allow the full set of functionality
that Xen implements, but adding support for LVM partitions should be
easy.
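As a rough sketch of what that could look like (the volume names, size
and config syntax below are illustrative only, not taken from this
thread, and the exact disk-export syntax depends on the tools version):

    # In dom0: carve a logical volume out of volume group vg0 for one guest
    lvcreate -L 2G -n domU1-root vg0

    # Then hand /dev/vg0/domU1-root to that guest as an ordinary block
    # device, e.g. with a line of this style in the domain's config file:
    disk = [ 'phy:vg0/domU1-root,sda1,w' ]

The guest then just sees /dev/sda1, and resizing or snapshotting the
volume is done from dom0 with the normal LVM tools.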
> (For anyone curious, if using *nbd I would need to keep it in dom0,
> rather than in each guest, for both security and maintainability. For a
> public Xenoserver, uid 0 on the guests is assumed to be untrusted.)
Each domain could use nbd directly and have its own separate area
of disk and its own uid space. The only issue is that using nbd
directly in each domain would be quite apparent to the user of
each VM: e.g. an initrd would be required to use nbd as the root
file system. By running nbd in domain0 you can hide all of this
from the other domains.
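A rough sketch of the dom0 approach (the server name, port and config
syntax are placeholders, and the exact nbd-client invocation varies
between nbd versions):

    # In dom0: attach the remote export to a local nbd device
    nbd-client storage-server 2000 /dev/nbd0

    # Export the attached device to the guest as a plain virtual disk,
    # e.g. (illustrative config syntax):
    disk = [ 'phy:nbd0,sda1,w' ]

The guest mounts /dev/sda1 as root like any local disk, so it needs no
nbd client, no access to the storage server, and no special initrd.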
Ian