This patch is back to only allocating enough requests for one segment:
+ /* A segment (i.e. a page) can span multiple clusters */
+ s->max_aio_reqs = (getpagesize() / s->cluster_size) + 1;
In fact, this code allocates exactly two AIO requests for QCoW images
created by qcow-create, which have a default cluster size of 4K.
For a while now, tapdisk has supported EBUSY -- that is, if a plugin
returns -EBUSY to tapdisk, tapdisk will put the last segment back on its
queue and wait until the plugin has made progress before reissuing the
request. Thus users should not observe an error when QCoW runs out of
AIO requests. This is attested by the fact that even with only 2 AIO
requests allocated, QCoW block devices can handle a heavy load: I just
mkfs'ed and copied a 1GB file to a QCoW image with no problem --
although it took quite a long while to do so, since only two segments
were served at a time ;).
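For anyone unfamiliar with the mechanism, the requeue-on-EBUSY behaviour described above can be sketched roughly as follows. This is a toy model, not the actual tapdisk code: every name in it is made up, and the "plugin" is simulated by a fixed in-flight cap standing in for QCoW's preallocated AIO requests.

```c
#include <errno.h>
#include <string.h>

#define QUEUE_LEN 8

/* A toy FIFO of pending segment ids (assumes count < QUEUE_LEN). */
struct seg_queue {
    int segs[QUEUE_LEN];
    int count;
};

static int queue_pop(struct seg_queue *q, int *out)
{
    if (q->count == 0)
        return 0;
    *out = q->segs[0];
    memmove(q->segs, q->segs + 1, (size_t)(--q->count) * sizeof(int));
    return 1;
}

static void queue_push_front(struct seg_queue *q, int s)
{
    memmove(q->segs + 1, q->segs, (size_t)(q->count++) * sizeof(int));
    q->segs[0] = s;
}

/* Stand-in for the plugin's submit hook: accepts only `capacity`
 * in-flight segments, like QCoW running out of AIO requests. */
static int capacity = 2, in_flight = 0;

static int submit(int seg)
{
    (void)seg;
    if (in_flight >= capacity)
        return -EBUSY;
    in_flight++;
    return 0;
}

static int issue_segments(struct seg_queue *q)
{
    int s;
    while (queue_pop(q, &s)) {
        if (submit(s) == -EBUSY) {
            queue_push_front(q, s); /* put the last segment back */
            return -EBUSY;          /* wait for plugin progress   */
        }
    }
    return 0;
}
```

With three queued segments and a capacity of two, `issue_segments()` returns -EBUSY with one segment left queued; once the plugin completes a request (here, decrementing `in_flight`), reissuing drains the queue -- so the caller sees delay, not an error.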
If you were observing errors while writing to QCoW devices, I'd like to
know how you were causing them -- we may need to make some other changes
to fix them. However, I'm not convinced that this patch is necessary.
> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Mark McLoughlin
> Sent: Thursday, April 26, 2007 3:21 AM
> To: Keir Fraser
> Cc: xen-devel
> Subject: Re: [Xen-devel] [PATCH] Segments can span multiple clusters
> On Thu, 2007-04-26 at 11:00 +0100, Keir Fraser wrote:
> > On 26/4/07 10:18, "Mark McLoughlin" <markmc@xxxxxxxxxx> wrote:
> > >> The current code allocates aio-request info for every segment in
> > >> the ring (MAX_AIO_REQUESTS == BLK_RING_SIZE * [...]). Your patch
> > >> seems to take into account that each segment (part-of-page) may
> > >> be split into clusters, hence the page_size/cluster_size
> > >> calculation, but shouldn't this be multiplied by the existing
> > >> MAX_AIO_REQUESTS? Otherwise you provide only enough aio requests
> > >> for one segment at a time, rather than a request ring's worth of
> > >> segments?
> > >
> > > Absolutely, well spotted. I fixed that typo after testing, but
> > > obviously forgot to run "quilt refresh" before sending ...
> > >
> > > Fixed version attached.
> > This one doesn't build (free_aio_state: line 164: structure has no
> > member named 'private'). Perhaps free_aio_state() should take a
> > 'struct disk_driver' rather than a 'struct td_state'?
> Gah, merge error going from 3.0.4 to 3.0.5. This one builds.
Xen-devel mailing list