For this e-mail, I'm assuming we're talking about Xen 2.0...
> I'm wondering how guest domains are provided access to disks. I suppose
> Domain 0 has direct access to the actual hardware devices, and other
> domains simply see a virtualized block device.
All correct.
> However, when a guest
> domain wants to make a request to the disk, how does it get that request to
> Domain 0 and how does Domain 0 actually receive those requests?
When it boots, the unpriv domain sets up sharing of a page of memory between
its frontend driver and the backend driver running in domain 0. The front
and back end drivers do this by sending control messages via Xend. Xend also
binds an interdomain "event channel" between the two domains. This allows
the back and front end drivers to send each other "events" (virtual
interrupts).
Once the shared memory page is set up and the event channel bound, the two are
used for direct communication between the back and front end drivers. Xend
does not have to get involved anymore.
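To make that concrete, here is a very rough C sketch of what the frontend does at initialisation time. Everything below (blkif_ring_t, share_page_with_domain(), bind_interdomain_evtchn()) is a made-up name for illustration - it is not the real Xen 2.0 interface, just the shape of the idea; the real setup goes through control messages to Xend as described above.

/* Hypothetical sketch only - not the real blkfront code.  The helper
 * functions below stand in for the control messages that really go
 * via Xend. */

#define RING_SIZE 64

typedef struct {
    unsigned long id;          /* token so the frontend can match responses */
    int           operation;   /* read or write */
    unsigned long sector;      /* start sector on the virtual device */
    unsigned long buffer;      /* guest address to transfer to/from */
} blk_request_t;

typedef struct {
    unsigned long id;          /* copied from the matching request */
    int           status;      /* 0 on success */
} blk_response_t;

typedef struct {
    unsigned int   req_prod;   /* frontend bumps this when queueing requests */
    unsigned int   resp_prod;  /* backend bumps this when posting responses */
    blk_request_t  req[RING_SIZE];
    blk_response_t resp[RING_SIZE];
} blkif_ring_t;

static blkif_ring_t *ring;     /* lives in the shared page */
static int           evtchn;   /* event channel port bound to dom0 */

/* Hypothetical helpers standing in for the Xend control-message dance. */
extern blkif_ring_t *share_page_with_domain(int domid);
extern int           bind_interdomain_evtchn(int domid);

void frontend_init(void)
{
    /* Share one page with dom0; both drivers map it. */
    ring = share_page_with_domain(0);
    ring->req_prod = ring->resp_prod = 0;

    /* Bind an event channel so the two drivers can send each other
     * virtual interrupts ("events"). */
    evtchn = bind_interdomain_evtchn(0);
}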
When the frontend wants to request data, it places a descriptor containing
details of the request into the shared memory region. It then sends an
"event" to the backend driver (remember, this is running in dom0), which
checks the shared memory region in response.
Assuming the backend decides the requests are valid, it will issue requests within the
domain 0 kernel to perform the IO directly into the memory of the unpriv
guest. When the IO has finished, the backend puts a response into the shared
memory page and sends an event to the frontend. The frontend responds to the
virtual interrupt by checking which IO completed and calling the appropriate
completion handlers.
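Continuing the (hypothetical) sketch from above, the round trip for one read looks roughly like this - again, notify_via_evtchn(), request_is_valid() and issue_dom0_io() are placeholders for the real mechanisms, not the actual blkfront/blkback code:

/* Hypothetical sketch only - not the real blkfront/blkback interface. */

#define BLK_READ 0

extern void notify_via_evtchn(int port);      /* placeholder: send a virtual interrupt */
extern int  request_is_valid(blk_request_t *req);  /* placeholder sanity checks */
extern void issue_dom0_io(blk_request_t *req);     /* placeholder: real disk IO in dom0 */

/* Frontend: queue a read request and kick the backend in dom0. */
void frontend_read(unsigned long id, unsigned long sector, unsigned long buffer)
{
    blk_request_t *req = &ring->req[ring->req_prod % RING_SIZE];

    req->id        = id;
    req->operation = BLK_READ;
    req->sector    = sector;
    req->buffer    = buffer;
    /* (a write barrier belongs here so the descriptor is visible
     *  before the index moves) */
    ring->req_prod++;

    notify_via_evtchn(evtchn);                /* "event" -> backend */
}

/* Backend (dom0): runs in response to the event, walks the ring,
 * validates each request and issues the real IO straight into the
 * unprivileged guest's memory. */
void backend_handle_event(void)
{
    static unsigned int req_cons;             /* how far the backend has read */

    while (req_cons != ring->req_prod) {
        blk_request_t *req = &ring->req[req_cons++ % RING_SIZE];

        if (request_is_valid(req))
            issue_dom0_io(req);
    }
}

/* Backend: when the dom0 IO completes, post a response and notify. */
void backend_io_done(blk_request_t *req, int status)
{
    blk_response_t *resp = &ring->resp[ring->resp_prod % RING_SIZE];

    resp->id     = req->id;
    resp->status = status;
    ring->resp_prod++;

    notify_via_evtchn(evtchn);                /* "event" -> frontend, whose handler
                                                 matches resp->id to the original
                                                 request and runs its completion */
}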
All this happens without Xen knowing about the virtual devices. Xen operates
at the level of shared memory and event channels - it doesn't know or care
that domains are using these as virtual devices.
> There
> appears to be a virtual block device driver in drivers/xen/blkfront &
> blkback. Is this the driver used by the guest Domains to access the
> virtualized devices?
Correct again.
blkfront is the unpriv domain's part of the driver, blkback is the portion
that runs in domain 0 and sends requests to the disk where appropriate.
Note that these are not pretending to be any specific model of real-world
device - they're purely designed for high-performance virtual machine IO.
> My other question actually pertains to CoW support for disks. I noticed
> that there was some work done on making a CoW driver that lived in the
> XenoLinux kernels. Has this been made public?
Since device virtualisation moved into the dom0 kernel and out of Xen, it's
possible to use your favourite CoW driver for vanilla Linux to achieve this.
There's the LVM snapshot facility and, I think, the csnap driver and various
other patches that are out there...
> Have there been any
> attempts to make one that provides that functionality in Xen itself?
The functionality wouldn't be provided in Xen, since Xen itself doesn't manage
devices (that's done by dom0 in Xen 2.0). It wouldn't be a good idea to
bloat the XenLinux backend driver with it anyway - the best idea is for a
generic solution (like LVM, csnap, etc.). Unfortunately LVM and csnap both
have their drawbacks, which have previously been discussed on this list...
HTH,
Mark