WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] HVM domain with write caching going on somewhere to disk

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] HVM domain with write caching going on somewhere to disk
From: Steve Ofsthun <sofsthun@xxxxxxxxxxxxxxx>
Date: Mon, 12 Nov 2007 12:53:28 -0500
Cc: James Harper <james.harper@xxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 12 Nov 2007 09:54:24 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C358A24F.18199%Keir.Fraser@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C358A24F.18199%Keir.Fraser@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.6 (X11/20070801)
Keir Fraser wrote:
> On 8/11/07 11:08, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:
> 
>>> No, it's trickier than that. Blkback sends I/O requests direct into
>> the
>>> block subsystem, bypassing the buffer cache. You can see there's
>> potential
>>> for confusion therefore!
>> Ah yes. That would probably do it. So I need to make sure that the
>> buffer cache is flushed (eg async writes in qemu?)... or maybe get qemu
>> to talk direct to the block subsystem in the same way... any
>> suggestions? I've already butchered ioemu to get this working so far
>> (changed the PCI ID's of the IDE interface so Windows won't detect it)
>> so I'm not afraid of doing more of it :)
> 
> Qemu-dm should probably be specifying O_DIRECT when it opens guest storage
> volumes. There was discussion about this quite some time ago, but I don't
> think any patch was ever floated.

We had a patch against the non-AIO version of QEMU that used O_DIRECT.
Initially our motivation was strictly to fix coherence issues between
the PV drivers and QEMU.  The patch was somewhat ugly due to the buffer
alignment requirements of O_DIRECT.  Discussions on the list at the
time indicated that AIO was soon to be integrated into QEMU, and that
any O_DIRECT work should wait since many of the same code paths were
involved.

Further work with the O_DIRECT patch turned up performance concerns.
QEMU tended to generate many small I/Os, which O_DIRECT turned into
synchronous I/Os.  As a result, O_DIRECT was measurably slower than
buffered I/O for QEMU-emulated disk I/O loads.  For us, this translated
into slow install performance on HVM guests.

Our current patch (against 3.1.2) uses fsync/fadvise to allow limited
use of the buffer cache.  This improves I/O performance in QEMU (over
O_DIRECT) while still maintaining block-device coherence between PV
driver and QEMU disk access.

We are in the process of porting this code to the latest xen-unstable.
When that is ready, we will submit it to the list.

Steve

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
