xen-devel

Re: [Xen-devel] Question about VBD interface

To: David Lie <lie@xxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Question about VBD interface
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxx>
Date: Thu, 04 Nov 2004 01:50:28 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx, Ian.Pratt@xxxxxxxxxxxx
Delivery-date: Thu, 04 Nov 2004 02:01:52 +0000
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: Your message of "Thu, 04 Nov 2004 00:09:27 GMT." <loom.20041104T005724-556@xxxxxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
> I'm wondering how guest domains are provided access to disks.  I suppose 
> Domain 0 has direct access to the actual hardware devices, and other domains 
> simply see a virtualized block device.  However, when a guest domain wants to 
> make a request to the disk, how does it get that request to Domain 0 and how 
> does Domain 0 actually receive those requests?  There appears to be a virtual 
> block device driver in drivers/xen/blkfront & blkback.  Is this the driver 
> used by the guest Domains to access the virtualized devices?

Yes. The blkback driver goes in dom0 (or any other suitably
privileged domain) and is able to export any block device Linux
knows about (e.g. physical partition, LVM volume, loopback file
etc) to its peer blkfront driver in the guest domain. The two
halves communicate over a shared-memory ring of request and
response descriptors, using inter-domain event channels for
notification.
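
For concreteness, a guest's config file (which xm/xend evaluates
as Python) might export devices like this; the volume and device
names below are made up for illustration:

    # Illustrative disk stanzas; each entry is
    # backend-device,frontend-device,mode.
    disk = [ 'phy:hda7,hda7,w',              # physical partition
             'phy:vg0/guest1-root,sda1,w' ]  # LVM volume

A loopback file can be exported the same way once it is bound to
a loop device with losetup (e.g. phy:loop0).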
 
> My other question actually pertains to CoW support for disks.
> I noticed that there was some work done on making a CoW driver
> that lived in the XenoLinux kernels.  Has this been made
> public?  Have there been any attempts to make one that provides
> that functionality in Xen itself?

There are a bunch of CoW options:

There's Bin Ren's CoW driver for Linux 2.4, or you can just use
the standard LVM2 dm-snap stuff in Linux 2.6. The latter
currently doesn't deal well with having many CoW devices, but it
shouldn't be too hard to fix up. (Michael Vrable started looking
at this.)
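
As a rough sketch of the dm-snap route (volume and guest names
invented), each guest gets its own snapshot of a shared base
volume, and blkback exports the snapshot rather than the base:

    # Sketch only: one LVM2 snapshot per guest, created with the
    # standard lvcreate tool.
    import subprocess

    def make_cow_volume(base_lv, guest, size='1G'):
        snap = '%s-cow' % guest
        # lvcreate -s builds a dm-snap copy-on-write device; guest
        # writes land in the snapshot store, leaving base_lv intact.
        subprocess.check_call(['lvcreate', '-s', '-n', snap,
                               '-L', size, base_lv])
        return '%s/%s' % (base_lv.rsplit('/', 1)[0], snap)

    make_cow_volume('/dev/vg0/debian-base', 'guest1')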

Since disk space is cheap, it might be better to write a CoW
driver that uses CoW only as a stop-gap while it clones the
actual disk content in the background.
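
To make that concrete, here's a minimal sketch of the dispatch
logic such a driver would need (block size and names invented):
a bitmap of blocks the guest already owns, plus a background
thread that copies everything else:

    # Sketch of "CoW now, full clone in the background";
    # illustrative, not real driver code.
    import threading

    BLOCK = 4096

    class CloningCow:
        def __init__(self, base, copy, nblocks):
            self.base, self.copy = base, copy  # file-like devices
            self.owned = [False] * nblocks     # block copied yet?
            self.lock = threading.Lock()

        def _pull(self, b):
            # Copy block b from the base image unless the guest
            # already owns it.
            with self.lock:
                if not self.owned[b]:
                    self.base.seek(b * BLOCK)
                    data = self.base.read(BLOCK)
                    self.copy.seek(b * BLOCK)
                    self.copy.write(data)
                    self.owned[b] = True

        def write(self, b, data):
            # A whole-block write makes the block private outright.
            with self.lock:
                self.copy.seek(b * BLOCK)
                self.copy.write(data)
                self.owned[b] = True

        def read(self, b):
            self._pull(b)  # fault the block in on demand
            self.copy.seek(b * BLOCK)
            return self.copy.read(BLOCK)

        def clone_in_background(self):
            # Once this pass finishes, every block is private and
            # the base image (and the CoW layer) can be dropped.
            def run():
                for b in range(len(self.owned)):
                    self._pull(b)
            threading.Thread(target=run).start()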

If you want CoW at the file system layer, there are a bunch of
different union/stackable/overlay file system kernel drivers for
Linux, but I'm not sure I could actually recommend any of them.

One approach that we've used is a user-space NFS server that
implements CoW semantics. It works OK, but performance isn't as
good as we'd like. 

One advantage of CoW schemes is that they should make it easy to
implement a shared buffer cache (as it's easy to know when
blocks are identical). This is something we're actively looking
into.

Ian




