This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] iscsi

To: "Kip Macy" <kmacy@xxxxxxxxxxx>
Subject: RE: [Xen-devel] iscsi
From: "Williamson, Mark A" <mark.a.williamson@xxxxxxxxx>
Date: Tue, 20 Jan 2004 20:39:56 -0000
Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 20 Jan 2004 20:41:48 +0000
Envelope-to: steven.hand@xxxxxxxxxxxx
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
Thread-index: AcPfkGxQ/AJzgOZUS+mAenxSiW1wlAAAScwg
Thread-topic: [Xen-devel] iscsi
> What you're saying sounds exactly right. Plus we can't stick a SW
> initiator in Xen without a TCP stack. My only hope would be a HW
> initiator. How annoying. I wonder how much work it would be to
> support what I'm thinking about? NFS root for n virtual machines
> is much more annoying to manage. It would also make this
> a much harder sell internally.

You could still presumably just have all the domains connect directly
to the target via iSCSI?  But I assume you wanted to re-export as VBDs
to avoid weird ramdisk-based hacks for getting an effectively
iSCSI-based root filesystem into each guest, so I realise this
wouldn't be ideal.  Maybe you could use NFS (or local partitions,
possibly via the new virtual disk stuff) for each root fs, keep that
just for the basics, and then use an iSCSI initiator in each domain to
access all the interesting stuff?

There are some plans (here at Intel Research Cambridge) to implement an
iSCSI "virtual channel processor" (see paper at
f), which would run the iSCSI protocol (with its own optimised net
stack) in a domain on top of Xen and also appear as a normal device to
guest OSes.  This work won't be ready for some time, though.  However,
it sounds like it would give you exactly what you want (plus various
other benefits).

Another option might be to re-export the iSCSI devices from dom0 over
Xen's internal "network" using some other network-based protocol that
could be used as a root fs.  Yes, that is a very icky idea ;-)

I imagine it would be possible to write some kind of user-space "proxy"
that would access devices in dom0 in the normal user program fashion and
then have XenoLinux drivers in guest domains talk to that proxy (either
through the internal network, or by the upcoming interdomain
communications facilities) - this could also be used to access weird
things like dom0 disk files as block devices.  You'd expect penalties in
performance and quality of service provision by doing this.  No-one's
currently working on this and I don't know what the feeling is as to how
worthwhile it would be...
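To make the proxy idea a bit more concrete, here is a rough user-space sketch (everything below - the protocol, the function names, the framing - is invented for illustration, not anything that exists in the Xen tree): a dom0 process opens a backing store (a disk file here, though a real block device would work the same way) and serves "offset length" read requests over a TCP socket, which a guest-side driver could reach via Xen's internal network.

```python
# Hypothetical sketch of a dom0 user-space block "proxy".  It opens a
# backing file and serves read requests of the form b"offset length\n",
# replying with a 4-byte big-endian length followed by the data.
# The protocol and all names here are invented for illustration only.
import socket
import threading

def serve_block_device(path, host="127.0.0.1", port=0):
    """Serve read-only block requests from `path`; returns the bound port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)

    def handle(conn):
        # One connection = one client; requests are line-oriented.
        with conn, open(path, "rb") as backing:
            f = conn.makefile("rwb")
            for line in f:
                offset, length = map(int, line.split())
                backing.seek(offset)
                data = backing.read(length)
                # Reply: 4-byte big-endian length prefix, then the data.
                f.write(len(data).to_bytes(4, "big") + data)
                f.flush()

    def accept_loop():
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv.getsockname()[1]

def read_block(port, offset, length, host="127.0.0.1"):
    """Guest-side stub: fetch `length` bytes at `offset` from the proxy."""
    with socket.create_connection((host, port)) as c:
        c.sendall(f"{offset} {length}\n".encode())
        f = c.makefile("rb")
        n = int.from_bytes(f.read(4), "big")
        return f.read(n)
```

A real guest-side driver would of course live in the XenoLinux kernel rather than user space, and would need writes, error handling and request batching; the only point of the sketch is that the dom0 half is an ordinary user program, which is what makes things like backing a guest's block device with a dom0 disk file straightforward.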

The Virtual Channel Processors are the neatest solution but will take a
while to arrive.

Other people may have suggestions, also...


