This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] [Patch] Buffer disk I/O requests

To: "Han, Weidong" <weidong.han@xxxxxxxxx>, "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>, "Keir Fraser" <keir@xxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [Patch] Buffer disk I/O requests
From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>
Date: Fri, 18 May 2007 10:37:54 +0100
Delivery-date: Fri, 18 May 2007 02:36:41 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <08DF4D958216244799FC84F3514D70F00AB955@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AceWC1RtzMJy5K2DTwGmXANjr/voFgAF3jWkAABdhRAAAMp2gACzFObAAA7cMHA=
Thread-topic: [Xen-devel] [Patch] Buffer disk I/O requests
> > How does it compare to just using the SCSI HBA support that got
> > checked in a few days ago (in the qemu-dm 0.9.0 upgrade)?
> In our tests, the SCSI HBA performs better in qemu 0.9.0 than our
> patch does,

Thanks for running the tests.

> However, we found that total I/O performance degrades significantly
> after upgrading to qemu 0.9.0. We suspect there may be some issues in
> qemu 0.9.0.

Please can you explain in more detail?

> > If we're going to add support for enabling buffering of ioport
> > accesses beyond what we currently special case for the VGA it should
> > be via a generic interface used by qemu to register sets of ports
> > with xen and configure how they will be handled.
> Yes, if there are many such buffering cases, a generic interface
> is the right long-term solution.

I'd like to see this generic mechanism introduced for more than just
whether writes are buffered or not -- it would be very useful to
register ranges of port or mmio space for handling in different
fashions, e.g.:
 * read: forward to handler domain X channel Y
 * read: read as zeros
 * write: forward to handler domain X channel Y (and flush any buffered writes)
 * write: buffer and forward to domain X channel Y
 * write: ignore writes
These hooks would also be very useful for adding debugging/tracing. I
severely dislike our current approach of forwarding anything that
doesn't get picked up in Xen to a single qemu-dm rather than registering
explicit ranges.
