[Xen-devel] Re: [Qemu-devel] [PATCH 05/10] xen: add block device backend driver.

To: qemu-devel@xxxxxxxxxx
Subject: [Xen-devel] Re: [Qemu-devel] [PATCH 05/10] xen: add block device backend driver.
From: Christoph Hellwig <hch@xxxxxx>
Date: Thu, 2 Apr 2009 19:02:09 +0200
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Gerd Hoffmann <kraxel@xxxxxxxxxx>
Delivery-date: Thu, 02 Apr 2009 10:02:39 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1238621982-18333-6-git-send-email-kraxel@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1238621982-18333-1-git-send-email-kraxel@xxxxxxxxxx> <1238621982-18333-6-git-send-email-kraxel@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.3.28i
On Wed, Apr 01, 2009 at 11:39:37PM +0200, Gerd Hoffmann wrote:
> +static void inline blkif_get_x86_32_req(blkif_request_t *dst,
> +                                        blkif_x86_32_request_t *src)
> +{

> +static void inline blkif_get_x86_64_req(blkif_request_t *dst,
> +                                        blkif_x86_64_request_t *src)
> +{

I think you'd be better off moving them to the .c file as normal static
functions and leaving the inlining decisions to the compiler.
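
Something like this in the .c file, with the explicit inline dropped
(a sketch; the body is the usual field-by-field conversion from the
compat layout, abbreviated here, and may not match the patch exactly):

static void blkif_get_x86_32_req(blkif_request_t *dst,
                                 blkif_x86_32_request_t *src)
{
    int i;

    /* copy the fixed fields over from the 32-bit compat layout */
    dst->operation     = src->operation;
    dst->nr_segments   = src->nr_segments;
    dst->handle        = src->handle;
    dst->id            = src->id;
    dst->sector_number = src->sector_number;

    /* ... plus the per-segment copy */
    for (i = 0; i < src->nr_segments; i++)
        dst->seg[i] = src->seg[i];
}

The compiler will still inline that at every call site in the same
file if it considers it worthwhile.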

> +
> +/*
> + *  FIXME: the code is designed to handle multiple outstanding
> + *         requests, which isn't used right now.  Plan is to
> + *         switch over to the aio block functions once they got
> + *         vector support.
> + */

We already have bdrv_aio_readv/writev, which currently linearize the
buffer underneath.  Hopefully Anthony will have committed the patch to
implement the real vectored versions while I'm writing this, too :)

After those patches bdrv_aio_read/write will be gone, so this code
won't compile anymore either.
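
With those in, the per-vector read loop quoted below collapses into a
single vectored request, roughly like this (a sketch; it assumes a
QEMUIOVector field, called v here, gets added to struct ioreq):

case BLKIF_OP_READ:
    /* build one iovec covering all segments of the request ... */
    qemu_iovec_init(&ioreq->v, ioreq->vecs);
    for (i = 0; i < ioreq->vecs; i++)
        qemu_iovec_add(&ioreq->v, ioreq->vec[i].iov_base,
                       ioreq->vec[i].iov_len);

    /* ... and hand it to the block layer in one call */
    ioreq->aio_inflight++;
    bdrv_aio_readv(blkdev->bs, ioreq->start / BLOCK_SIZE,
                   &ioreq->v, ioreq->v.size / BLOCK_SIZE,
                   qemu_aio_complete, ioreq);
    break;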

> +static int ioreq_runio_qemu_aio(struct ioreq *ioreq)
> +{
> +    struct XenBlkDev *blkdev = ioreq->blkdev;
> +    int i, len = 0;
> +    off_t pos;
> +
> +    if (-1 == ioreq_map(ioreq))
> +        goto err;
> +
> +    ioreq->aio_inflight++;
> +    if (ioreq->presync)
> +        bdrv_flush(blkdev->bs); /* FIXME: aio_flush() ??? */
> +
> +    switch (ioreq->req.operation) {
> +    case BLKIF_OP_READ:
> +        pos = ioreq->start;
> +        for (i = 0; i < ioreq->vecs; i++) {
> +            ioreq->aio_inflight++;
> +            bdrv_aio_read(blkdev->bs, pos / BLOCK_SIZE,
> +                          ioreq->vec[i].iov_base,
> +                          ioreq->vec[i].iov_len / BLOCK_SIZE,
> +                          qemu_aio_complete, ioreq);
> +            len += ioreq->vec[i].iov_len;
> +            pos += ioreq->vec[i].iov_len;
> +        }

bdrv_flush doesn't actually empty the aio queues but only issues
an fsync.  So we could still re-order requests around the barrier
with this implementation.  I will soon submit a real block-layer level
barrier implementation that just allows flagging a bdrv_aio_read/write
request as a barrier and deals with this under the hood.
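
Until that lands, the safe interim option at the presync point is to
drain everything in flight before the fsync, along the lines of the
FIXME in the patch (a sketch, not part of the patch):

if (ioreq->presync) {
    /* wait for all outstanding AIO first, so no request can be
     * reordered across the barrier ... */
    qemu_aio_flush();
    /* ... then push everything out to stable storage */
    bdrv_flush(blkdev->bs);
}

That serializes more than strictly necessary, which is exactly why a
flagged-barrier interface in the block layer is the better long-term
answer.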


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
