WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-devel

Re: [Xen-devel] question about io path in the front/backend

To: Mark Williamson <mark.williamson@xxxxxxxxxxxx>, Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Subject: Re: [Xen-devel] question about io path in the front/backend
From: tgh <wwwwww4187@xxxxxxxxxxx>
Date: Fri, 07 Dec 2007 17:25:24 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 07 Dec 2007 01:26:22 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200712040258.32187.mark.williamson@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <10EA09EFD8728347A513008B6B0DA77A02621F51@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <473BEF62.2070606@xxxxxxxxxxx> <200712040258.32187.mark.williamson@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.7 (Windows/20060909)
hi
In the phy: device mode, dom0 and domU share the page that holds the I/O requests transferred between frontend and backend, is that right? And, as you know, in native Linux there are buffers and caches in the I/O path, such as the bio structures shared by the filesystem and the block driver. What about caching or buffering in Xen's frontend/backend mode, say with a phy: device: do the frontend and backend share some cache or buffer, or none at all? And what is the grant table's function here? Does the grant table (or the shared page filled with I/O requests and data) act as a cache or buffer the way bio does in the native Linux I/O path, or does the shared page between frontend and backend serve only to carry requests and data?


Thanks in advance




Mark Williamson wrote:
  I have read some documents and wiki pages about the split driver model
in Xen, and I am confused about the I/O path along which a sys_read()
passes through domU and dom0. Does sys_read() in domU pass through the
VFS and, say, ext3 in domU, and insert a request into the request_queue
of the frontend driver, is that right?

Sounds like you have the right idea. Requests get queued with the frontend driver in terms of Linux structures. IO requests to satisfy these are then placed into the shared memory ring so that the backend can find out what we're asking for.
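The shared-memory ring described above can be sketched in a few lines of C. This is a simplified model of the idea, not the real Xen code: the actual frontend/backend use the macros in xen/interface/io/ring.h and the blkif request layout from io/blkif.h, while the type and field names here (ring_t, blk_request_t, RING_SIZE) are illustrative assumptions.

```c
/* Minimal sketch of the shared-memory request ring between frontend
 * and backend, loosely modeled on Xen's blkif ring protocol. Names
 * are simplified illustrations, not the real io/ring.h macros. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE 32              /* real rings are sized to fill one page */

typedef struct {
    uint64_t sector;              /* first sector to read/write */
    uint16_t nr_segments;         /* how many memory segments follow */
    uint8_t  operation;           /* 0 = read, 1 = write */
} blk_request_t;

/* This structure lives in a page shared (via the grant table) by both
 * domains: the frontend advances req_prod, the backend req_cons. */
typedef struct {
    uint32_t req_prod;            /* frontend: next free request slot */
    uint32_t req_cons;            /* backend: next request to consume */
    blk_request_t ring[RING_SIZE];
} ring_t;

/* Frontend side: queue a request if the ring is not full. */
int ring_put(ring_t *r, const blk_request_t *req)
{
    if (r->req_prod - r->req_cons >= RING_SIZE)
        return -1;                /* ring full: frontend must wait */
    r->ring[r->req_prod % RING_SIZE] = *req;
    r->req_prod++;                /* real code issues a write barrier first */
    return 0;
}

/* Backend side: take the next pending request, if any. */
int ring_get(ring_t *r, blk_request_t *out)
{
    if (r->req_cons == r->req_prod)
        return -1;                /* nothing pending */
    *out = r->ring[r->req_cons % RING_SIZE];
    r->req_cons++;
    return 0;
}
```

Because producer and consumer each write only their own index, no lock is needed; the real macros add memory barriers so the request contents are visible before the index update.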

  and then, say domU is set up with a *.img file in dom0: what do the
frontend and backend drivers do?
Does the frontend transmit the request to the backend, is that right?

Yes, the frontend does this by putting requests into the shared memory ring buffer, which is also accessible by the backend. The frontend then sends an event to the backend; this causes an interrupt in the backend so that it knows it must check the shared memory.
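A detail worth noting: the frontend does not fire an event for every request. It only notifies when the backend may have gone idle, which is the idea behind Xen's RING_PUSH_REQUESTS_AND_CHECK_NOTIFY macro. The sketch below is a simplified model of that decision; the field names and the exact idle test are assumptions for illustration, not the real macro (which uses a req_event index set by the backend).

```c
/* Simplified model of the "should I send an event?" decision made by
 * the frontend after publishing new requests to the shared ring. */
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint32_t req_prod;            /* published by frontend */
    uint32_t req_cons;            /* advanced by backend */
} shared_idx_t;

/* Returns 1 if the frontend should fire the event channel after
 * publishing, 0 if the backend is known to be still working through
 * the ring and will see the new requests anyway. */
int push_and_check_notify(shared_idx_t *s, uint32_t new_prod)
{
    uint32_t old_prod = s->req_prod;
    /* Backend was caught up (possibly idle) iff it had consumed
     * everything published so far. */
    int backend_idle = (s->req_cons == old_prod);
    s->req_prod = new_prod;       /* publish; real code barriers here */
    return backend_idle;
}
```

Suppressing redundant events this way keeps the interrupt rate low when the backend is already busy draining the ring.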

  and then what does the backend driver do? Does the backend transfer the
request to the physical driver in dom0, is that right?

Yes. The backend responds to the interrupt by checking the shared memory for new requests, then it maps parts of the domU's memory so that dom0 will be able to write data into it. Then it submits requests to the Linux block IO subsystem to fill that memory with data. The Linux block IO layer eventually sends these requests to the device driver, which does the IO directly into the mapped domU memory.
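To make the mapping step concrete: each blkif request carries a list of segments, and each segment names a frontend page by its grant reference. The backend maps each granted page into its own address space (via the GNTTABOP_map_grant_ref hypercall) and then points the block layer's IO at those mapped buffers. The sketch below simulates this; map_grant() is a stand-in for the real hypercall, and the struct layout is a simplified assumption based on the classic blkif protocol (where BLKIF_MAX_SEGMENTS_PER_REQUEST is 11).

```c
/* Sketch of per-request backend work: map each granted domU page and
 * fill it with data. map_grant() is a stand-in for the real
 * GNTTABOP_map_grant_ref hypercall; here it indexes a fake pool. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE    4096
#define MAX_SEGMENTS 11           /* classic blkif limit per request */

typedef struct {
    uint32_t gref;                /* grant reference for one domU page */
    uint8_t  first_sect, last_sect;  /* 512-byte sectors within the page */
} segment_t;

typedef struct {
    uint16_t  nr_segments;
    segment_t seg[MAX_SEGMENTS];
} request_t;

/* Fake "frontend memory" pool indexed by grant reference. */
static uint8_t fake_domU_pages[4][PAGE_SIZE];

void *map_grant(uint32_t gref)
{
    return fake_domU_pages[gref]; /* real code asks Xen to map the page */
}

/* Backend: map every segment and "read" data into it (memset stands in
 * for submitting a bio to the block layer). Returns bytes written. */
size_t service_request(const request_t *req)
{
    size_t total = 0;
    for (int i = 0; i < req->nr_segments; i++) {
        uint8_t *buf = map_grant(req->seg[i].gref);
        size_t len = (size_t)(req->seg[i].last_sect -
                              req->seg[i].first_sect + 1) * 512;
        memset(buf + req->seg[i].first_sect * 512, 0xAB, len);
        total += len;
    }
    return total;
}
```

Note that the data lands directly in the frontend's pages: the mapping answers the question about buffering, since no extra copy or shared cache sits between the domains on this path.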

 or does the backend translate the request into some read() operation and
submit it to the VFS and, say, ext3 in dom0, doing another relatively
complete IO path in dom0, is that right?

If you're just exporting a phy: device to the guest, then the block IO requests go down to the block device driver for that device and are serviced there. e.g. if I export the IDE device phy:/dev/hda to my guest, then the IDE driver will satisfy the IO requests directly.
Requests go backend -> block layer -> real device driver

If you're using a file: device then you have to go through the filesystem layer... So the IO requests go backend -> block layer -> loopback block device -> ext3 -> block layer (again) -> real device driver

If you're using blktap then the requests take a trip via userspace before getting submitted.

 or if the backend transfers the request to the physical driver directly,
how does the backend deal with the request's virtual address, and how
does the backend manage the bio buffer? Do the physical driver, backend,
and frontend share the bio buffer in some way, or how does Xen deal with it?

I hope what I've said clarifies things a bit.

Cheers,
Mark



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel