This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] xen-blkfront: simplify resume?

To: Keir Fraser <keir.xen@xxxxxxxxx>
Subject: Re: [Xen-devel] xen-blkfront: simplify resume?
From: Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Date: Thu, 24 Mar 2011 19:39:55 -0700
Cc: Xen Developers <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 24 Mar 2011 19:40:41 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C9B16C8A.155E0%keir.xen@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix VMD
References: <C9B16C8A.155E0%keir.xen@xxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Thu, 2011-03-24 at 17:47 -0400, Keir Fraser wrote:
> On 24/03/2011 09:31, "Daniel Stodden" <daniel.stodden@xxxxxxxxxx> wrote:
> > Dear xen-devel.
> > 
> > I think the blkif_recover (blkfront's transparent VM resume) stuff looks
> > quite overcomplicated.
> > 
> > We copy the ring message to a shadow request allocated during submit, a
> > process involving some none-obvious-looking get_id_from_freelist()
> > subroutine to obtain a vector slot, and a memcpy.
> > 
> > When receiving a resume callback from xenstore, we memcpy the entire
> > shadow vector, reset the original one to zero, then reallocate the
> > thereby freed shadow entries and not only copy the message on the ring,
> > but the shadow back into the shadow vector just freed to keep stuff
> > consistent. Hmmm.
> > 
> > I wonder, should we just take the pending request and push it back onto
> > the request_queue (with a blk_requeue_request)?
> Are you suggesting to get rid of the shadow state? It is needed, because
> in-flight requests can be overwritten by out-of-order responses written into
> the shared ring by the backend driver.

I was suggesting just that while missing the somewhat essential fact
that we're currently using the segment vectors in shadow state as the
single backing store for our gref lists. :)
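For reference, the shadow/freelist pattern in question boils down to something like the following user-space sketch. This is purely illustrative (the struct fields, sizes, and the self-test are hypothetical simplifications, not the actual xen-blkfront code): each in-flight request owns a shadow slot whose index doubles as the ring message id, and free slots are chained through the id field itself.

```c
/* Hypothetical, simplified sketch of the shadow freelist pattern
 * discussed above; NOT the real xen-blkfront structures. */
#include <assert.h>
#include <string.h>

#define RING_SIZE 32

struct shadow_entry {
    unsigned long id;      /* next free slot while on the freelist */
    int in_use;            /* set while a request is in flight */
    char payload[64];      /* stands in for the copied ring message */
};

struct ring_info {
    struct shadow_entry shadow[RING_SIZE];
    unsigned long shadow_free;   /* head of the freelist */
};

static void freelist_init(struct ring_info *info)
{
    unsigned long i;
    for (i = 0; i < RING_SIZE; i++)
        info->shadow[i].id = i + 1;  /* chain the slots together */
    info->shadow_free = 0;
}

static unsigned long get_id_from_freelist(struct ring_info *info)
{
    unsigned long id = info->shadow_free;
    info->shadow_free = info->shadow[id].id;  /* pop the head slot */
    info->shadow[id].in_use = 1;
    return id;
}

static void add_id_to_freelist(struct ring_info *info, unsigned long id)
{
    info->shadow[id].id = info->shadow_free;  /* push onto the head */
    info->shadow[id].in_use = 0;
    info->shadow_free = id;
}

/* Tiny self-check: two allocations yield distinct ids, and a freed
 * slot is reused LIFO. Returns 0 on success. */
static int freelist_selftest(void)
{
    struct ring_info info;
    unsigned long a, b;

    memset(&info, 0, sizeof(info));
    freelist_init(&info);
    a = get_id_from_freelist(&info);
    b = get_id_from_freelist(&info);
    if (a == b)
        return -1;
    add_id_to_freelist(&info, a);
    if (get_id_from_freelist(&info) != a)
        return -1;
    return 0;
}
```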

I'm aware that this is a duplex channel sharing message slots, and also
wouldn't suggest some daredevil mode which reads critical state back
from the sring even if that were not the case.

Now, blkif segments are by far the most significant payload, so there's
not much point in isolating them. Nor does scattering the memcpies look
like a particularly good idea.

Also, one might want to add at least a few more paranoia BUG_ON/fail-if
checks for request/response mismatches (id, op, etc.) than we currently
have.
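A sketch of the kind of paranoia check I mean, with purely illustrative structures (these are hypothetical stand-ins, not the real blkif response or shadow layouts): before completing a request, verify that the backend's response actually matches the shadow copy kept for that id.

```c
/* Hypothetical sketch of request/response sanity checking; the
 * struct layouts are illustrative, not the real blkif interface. */
#include <assert.h>

struct resp   { unsigned long id; int op; };
struct shadow { unsigned long id; int op; int in_use; };

/* Returns 1 if the response plausibly matches its shadow entry,
 * 0 if the id is out of range, the slot is free, or the op differs. */
static int response_sane(const struct resp *r,
                         const struct shadow *sh, int nr_slots)
{
    const struct shadow *s;

    if (r->id >= (unsigned long)nr_slots)
        return 0;                  /* id out of range */
    s = &sh[r->id];
    if (!s->in_use)
        return 0;                  /* completion for a free slot */
    if (s->op != r->op)
        return 0;                  /* op mismatch */
    return 1;
}

/* Self-check covering the good case and each failure mode.
 * Returns 1 on success. */
static int response_selftest(void)
{
    struct shadow sh[2] = { { 0, 7, 1 }, { 1, 3, 0 } };
    struct resp good   = { 0, 7 };
    struct resp bad_op = { 0, 3 };
    struct resp freed  = { 1, 3 };
    struct resp range  = { 5, 7 };

    return response_sane(&good,  sh, 2) == 1 &&
           response_sane(&bad_op, sh, 2) == 0 &&
           response_sane(&freed,  sh, 2) == 0 &&
           response_sane(&range,  sh, 2) == 0;
}
```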

So keeping the full message makes perfect sense.

In summary, yesterday's idea was 'yeah, maybe'. Right now it's rather
'hell, no'. :)

Still, pushing requests back on the queue seems more straightforward
than what's happening now, provided I get it to run and it still looks
good.
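To illustrate the idea, here's a user-space simulation of replaying in-flight shadow entries in original submit order at resume time. Everything here is a hypothetical stand-in: recover_requeue() simply collects seq numbers where the real driver would call blk_requeue_request() on each pending request.

```c
/* User-space sketch of the resume idea: instead of copying the shadow
 * vector around, walk the in-flight entries and reissue them oldest
 * first, preserving the original submit order. Illustrative only. */
#include <assert.h>
#include <stdlib.h>

#define MAX_REQS 8

struct req {
    unsigned long seq;   /* original submit order */
    int in_flight;       /* still pending at resume time */
};

static int cmp_seq(const void *a, const void *b)
{
    const struct req *ra = *(struct req *const *)a;
    const struct req *rb = *(struct req *const *)b;
    return (ra->seq > rb->seq) - (ra->seq < rb->seq);
}

/* Collect in-flight requests and replay them in original submit
 * order. out[] receives their seq numbers in replay order; returns
 * the number of requests requeued. */
static int recover_requeue(struct req *shadow, int nr,
                           unsigned long *out)
{
    struct req *pending[MAX_REQS];
    int i, n = 0;

    for (i = 0; i < nr; i++)
        if (shadow[i].in_flight)
            pending[n++] = &shadow[i];

    /* replay oldest first so the backend sees the original order */
    qsort(pending, n, sizeof(pending[0]), cmp_seq);
    for (i = 0; i < n; i++)
        out[i] = pending[i]->seq;
    return n;
}

/* Self-check: completed entries are skipped, the rest come back in
 * submit order. Returns 1 on success. */
static int recover_selftest(void)
{
    struct req shadow[4] = {
        { 3, 1 },
        { 1, 1 },
        { 2, 0 },  /* already completed, must be skipped */
        { 0, 1 },
    };
    unsigned long out[MAX_REQS];
    int n = recover_requeue(shadow, 4, out);

    return n == 3 && out[0] == 0 && out[1] == 1 && out[2] == 3;
}
```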

Also, I might have found a pretty neat optimization for the shadow
copies.

Cheers + Thanks,

>  -- Keir
> > Different from the present code, this should also help preserve original
> > submit order if done right. (Don't panic, not like it matters a lot
> > anymore since the block barrier flags are gone.)
> > 
> > If we want to keep the shadow copy, let's do so with a prep_rq_fn. It
> > gets called before the request gets pulled off the queue. Looks nicer,
> > and one can arrange things so it only gets called once.
> > 
> > Counter opinions?
> > 
> > Thanks,
> > Daniel
> > 
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
