xen-devel

RE: [Xen-devel] [PATCH] Segments can span multiple clusters with tap:qcow

To: "Mark McLoughlin" <markmc@xxxxxxxxxx>, "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Segments can span multiple clusters with tap:qcow
From: "Jake Wires" <Jake.Wires@xxxxxxxxxxxxx>
Date: Wed, 2 May 2007 18:06:25 -0700
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <1177582861.3487.45.camel@blaa>
References: <C25636E6.DF42%keir@xxxxxxxxxxxxx> <1177582861.3487.45.camel@blaa>
Hi,

This patch is back to only allocating enough requests for one segment:

+        /* A segment (i.e. a page) can span multiple clusters */
+        s->max_aio_reqs = (getpagesize() / s->cluster_size) + 1;

In fact, this code allocates exactly two AIO requests for QCoW images
created by qcow-create, which have a default cluster size of 4K.
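
To make the arithmetic concrete, here is a small stand-alone sketch of
both calculations (a toy model; the ring constants below are
illustrative stand-ins, not the real Xen values): with a 4K page and
the 4K default cluster size, the patch's formula yields
(4096 / 4096) + 1 = 2 requests, whereas scaling by a ring's worth of
segments, as suggested in the quoted review below, would multiply that
out.

    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative stand-ins, not the actual Xen constants. */
    #define BLK_RING_SIZE            32
    #define MAX_SEGMENTS_PER_REQUEST 11

    int main(void)
    {
        long cluster_size = 4096;  /* qcow-create default */

        /* The patch's formula: requests for a single segment (page). */
        long per_segment = (getpagesize() / cluster_size) + 1;

        /* The per-ring scaling suggested in the quoted review below. */
        long per_ring = per_segment * BLK_RING_SIZE
                                    * MAX_SEGMENTS_PER_REQUEST;

        printf("per segment: %ld, per ring: %ld\n", per_segment, per_ring);
        return 0;
    }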

For a while now, tapdisk has supported EBUSY -- that is, if a plugin
returns -EBUSY to tapdisk, tapdisk will put the last segment back on its
queue and wait until the plugin has made progress before reissuing the
request.  Thus users should not observe an error when QCoW runs out of
AIO requests.  This is attested by the fact that even with only 2 AIO
requests allocated, QCoW block devices can handle a heavy load: I just
mkfs'ed and copied a 1GB file to a QCoW image with no problem --
although it took quite a long while to do so, since only two segments
were served at a time ;).
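
To illustrate that contract, here is a toy model (not tapdisk's actual
code; the function names are made up for the example): a plugin with a
fixed pool of AIO slots returns -EBUSY when the pool is exhausted, and
the caller puts the segment back and reissues it once a completion has
freed a slot.

    #include <errno.h>
    #include <stdio.h>

    #define MAX_AIO_SLOTS 2            /* matches the two requests above */

    static int slots_in_use;

    /* Hypothetical plugin hook: claim a slot, or report -EBUSY if full. */
    static int plugin_queue_write(int segment)
    {
        if (slots_in_use >= MAX_AIO_SLOTS)
            return -EBUSY;
        slots_in_use++;
        printf("queued segment %d\n", segment);
        return 0;
    }

    /* Hypothetical completion handler: one AIO request has finished. */
    static void plugin_complete_one(void)
    {
        if (slots_in_use > 0)
            slots_in_use--;
    }

    int main(void)
    {
        /* Three segments against two slots: the third gets -EBUSY and
         * is reissued only after a completion has made progress. */
        for (int segment = 0; segment < 3; segment++) {
            if (plugin_queue_write(segment) == -EBUSY) {
                plugin_complete_one();       /* wait for progress */
                plugin_queue_write(segment); /* reissue the segment */
            }
        }
        return 0;
    }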

If you were observing errors while writing to QCoW devices, I'd like to
know how you were causing them -- we may need to make some other changes
to fix them.  However, I'm not convinced that this patch is necessary.

Thanks,
Jake

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Mark McLoughlin
> Sent: Thursday, April 26, 2007 3:21 AM
> To: Keir Fraser
> Cc: xen-devel
> Subject: Re: [Xen-devel] [PATCH] Segments can span multiple clusters
> with tap:qcow
> 
> On Thu, 2007-04-26 at 11:00 +0100, Keir Fraser wrote:
> > On 26/4/07 10:18, "Mark McLoughlin" <markmc@xxxxxxxxxx> wrote:
> >
> > >> The current code allocates aio-request info for every segment in a
> > >> request ring (MAX_AIO_REQUESTS == BLK_RING_SIZE * MAX_SEGMENTS_PER_REQUEST).
> > >> This patch seems to take into account that each segment (part-of-page)
> > >> can itself be split into clusters, hence the page_size/cluster_size
> > >> calculation, but shouldn't this be multiplied by the existing
> > >> MAX_AIO_REQUESTS? Otherwise you provide only enough aio requests for
> > >> one segment at a time, rather than a request ring's worth of segments?
> > >
> > > Absolutely, well spotted. I fixed that typo after testing, but
> > > obviously forgot to run "quilt refresh" before sending ...
> > >
> > > Fixed version attached.
> >
> > This one doesn't build (free_aio_state: line 164: structure has no
> member
> > named 'private'). Perhaps free_aio_state() should take a 'struct
> > disk_driver' rather than a 'struct td_state'?
> 
>       Gah, merge error going from 3.0.4 to 3.0.5. This one builds.
> 
> Thanks,
> Mark.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
