WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

Re: [Xen-devel] [PATCH] blkback: Fix block I/O latency issue

To: "Vincent, Pradeep" <pradeepv@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] blkback: Fix block I/O latency issue
From: Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Date: Tue, 3 May 2011 10:52:56 -0700
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>
Delivery-date: Tue, 03 May 2011 10:54:24 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C9E484DF.1301D%pradeepv@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C9E484DF.1301D%pradeepv@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Mon, 2011-05-02 at 21:10 -0400, Vincent, Pradeep wrote:
> Thanks Jan.
> 
> Re: avoid unnecessary notification
> 
> If this was a deliberate design choice then the duration of the delay is
> at the mercy of the pending I/O latencies & I/O patterns and the delay is
> simply too long in some cases. E.g. A write I/O stuck behind a read I/O
> could see more than double the latency on a Xen guest compared to a
> baremetal host. Avoiding notifications this way results in significant
> latency degradation perceived by many applications.

I'm trying to follow - let me know if I misread you - but I think you're
misunderstanding this stuff. 

The notification avoidance these macros implement does not introduce
deliberate latency. This stuff is not dropping events or deferring guest
requests.

It only avoids a gratuitous notification sent by the remote end in
cases where the local end hasn't gone to sleep yet, and can therefore
guarantee that it's going to process the message ASAP, right after
finishing what's still pending from the previous kick.

It's only a mechanism to avoid excess interrupt signaling. Think about a
situation where you ask the guy at the front door to take his thumb off
the buzzer while you're already running down the hallway.
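
For reference, the paired macros in xen/include/public/io/ring.h implement
roughly the following (this is a paraphrase from memory, not the verbatim
header). Consumer side: before going to sleep, re-arm req_event and re-check
the ring, so a request that raced in isn't missed. Producer side: only raise
the event channel if the consumer's req_event falls inside the batch of
requests just pushed, i.e. if the consumer asked to be woken:

	#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do {	\
		(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);	\
		if (_work_to_do) break;					\
		/* ask to be notified about the next request ... */	\
		(_r)->sring->req_event = (_r)->req_cons + 1;		\
		mb();							\
		/* ... then look once more in case one raced in */	\
		(_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);	\
	} while (0)

	#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {	\
		RING_IDX __old = (_r)->sring->req_prod;			\
		RING_IDX __new = (_r)->req_prod_pvt;			\
		wmb(); /* publish request contents before the index */	\
		(_r)->sring->req_prod = __new;				\
		mb();  /* publish the index before reading req_event */\
		/* notify only if req_event lies in (__old, __new] */	\
		(_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) < \
			     (RING_IDX)(__new - __old));		\
	} while (0)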

R/W reordering is a matter dealt with by I/O schedulers. 

Any case of write I/O behind the read you describe is supposed to be
queued back-to-back. It should never get stuck. A backend can obviously
reserve the right to override guest submit order, but blkback doesn't do
this; it just pushes everything down the disk queue as soon as it sees
it.
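
To make the "as soon as it sees it" point concrete, here is a stripped-down
sketch of blkback's request loop. It is a simplification, not the actual
blkback.c code: pending_req accounting, grant mapping, overflow checks and
all error paths are omitted, and the helper names (alloc_req,
dispatch_rw_block_io) follow the real file only loosely.

	static int do_block_io_op_sketch(blkif_t *blkif)
	{
		union blkif_back_rings *blk_rings = &blkif->blk_rings;
		blkif_request_t req;
		RING_IDX rc, rp;

		rc = blk_rings->common.req_cons;
		rp = blk_rings->common.sring->req_prod;
		rmb(); /* read the producer index before the request bodies */

		while (rc != rp) {
			memcpy(&req, RING_GET_REQUEST(&blk_rings->common, rc),
			       sizeof(req));
			blk_rings->common.req_cons = ++rc;
			/* hand the request straight to dom0's block layer;
			 * merging/reordering is the I/O scheduler's job */
			dispatch_rw_block_io(blkif, &req, alloc_req());
		}
		return 0;
	}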

So, that'd be the basic idea. Now, we've got that extra stuff in there
mixing that up between request and response processing, and it's
admittedly somewhat hard to read.

If you found a bug in there, well, yoho. Normally the slightest mistake
on the event processing front rather leads to deadlocks, and we
currently don't see any.

Iff you're right -- I guess the better fix would look different. If this
stuff is actually broken, maybe we can rather simplify things again, not
add more extra checks on top. :)

Daniel

> If this is about allowing the I/O scheduler to coalesce more I/Os, then I
> bet the I/O scheduler's 'wait and coalesce' logic is a great substitute for
> the delays introduced by blkback.
> 
> I totally agree IRQ coalescing or delay is useful for both blkback and
> netback but we need logic that doesn't impact I/O latencies
> significantly. Also, I don't think netback has this type of notification
> avoidance logic (at least in the 2.6.18 code base).
> 
> 
> Re: Other points
> 
> Good call. Changed the patch to include tabs.
> 
> I wasn't very sure about blk_ring_lock usage and I should have clarified
> it before sending out the patch.
> 
> Assuming blk_ring_lock was meant to protect shared ring manipulations
> within blkback, is there a reason the 'blk_rings->common.req_cons'
> manipulation in do_block_io_op is not protected? The reasons for the
> differences between the locking logic in do_block_io_op and make_response
> weren't terribly obvious, although the failure mode for the race condition
> may very well be benign.
> 
> Anyway, I am attaching a patch with appropriate changes.
> 
> Jeremy, can you apply this patch to pvops Dom-0
> (http://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git)? Should I
> submit another patch for the 2.6.18 Dom-0?
> 
> 
> Signed-off-by: Pradeep Vincent <pradeepv@xxxxxxxxxx>
> 
> diff --git a/drivers/xen/blkback/blkback.c b/drivers/xen/blkback/blkback.c
> --- a/drivers/xen/blkback/blkback.c
> +++ b/drivers/xen/blkback/blkback.c
> @@ -315,6 +315,7 @@ static int do_block_io_op(blkif_t *blkif)
>   pending_req_t *pending_req;
>   RING_IDX rc, rp;
>   int more_to_do = 0;
> + unsigned long     flags;
>  
>   rc = blk_rings->common.req_cons;
>   rp = blk_rings->common.sring->req_prod;
> @@ -383,6 +384,15 @@ static int do_block_io_op(blkif_t *blkif)
>    cond_resched();
>   }
>  
> + /* If blkback might go to sleep (i.e. more_to_do == 0) then we better
> +    let blkfront know about it (by setting req_event appropriately) so that
> +    blkfront will bother to wake us up (via interrupt) when it submits a
> +    new I/O */
> + if (!more_to_do){
> +  spin_lock_irqsave(&blkif->blk_ring_lock, flags);
> +  RING_FINAL_CHECK_FOR_REQUESTS(&blk_rings->common, more_to_do);
> +  spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
> + }
>   return more_to_do;
>  }
>  
> 
> 
> 
> 
> 
> 
> 
> On 5/2/11 1:13 AM, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> 
> >>>> On 02.05.11 at 09:04, "Vincent, Pradeep" <pradeepv@xxxxxxxxxx> wrote:
> >> In blkback driver, after I/O requests are submitted to Dom-0 block I/O
> >> subsystem, blkback goes to 'sleep' effectively without letting blkfront
> >> know about it (req_event isn't set appropriately). Hence blkfront doesn't
> >> notify blkback when it submits a new I/O thus delaying the 'dispatch' of
> >> the new I/O to Dom-0 block I/O subsystem. The new I/O is dispatched as
> >> soon as one of the previous I/Os completes.
> >> 
> >> As a result of this issue, the block I/O latency performance is degraded
> >> for some workloads on Xen guests using blkfront-blkback stack.
> >> 
> >> The following change addresses this issue:
> >> 
> >> 
> >> Signed-off-by: Pradeep Vincent <pradeepv@xxxxxxxxxx>
> >> 
> >> diff --git a/drivers/xen/blkback/blkback.c b/drivers/xen/blkback/blkback.c
> >> --- a/drivers/xen/blkback/blkback.c
> >> +++ b/drivers/xen/blkback/blkback.c
> >> @@ -383,6 +383,12 @@ static int do_block_io_op(blkif_t *blkif)
> >>   cond_resched();
> >>   }
> >> 
> >> + /* If blkback might go to sleep (i.e. more_to_do == 0) then we better
> >> +   let blkfront know about it (by setting req_event appropriately) so that
> >> +   blkfront will bother to wake us up (via interrupt) when it submits a
> >> +   new I/O */
> >> +        if (!more_to_do)
> >> +                 RING_FINAL_CHECK_FOR_REQUESTS(&blk_rings->common, more_to_do);
> >
> >To me this contradicts the comment preceding the use of
> >RING_FINAL_CHECK_FOR_REQUESTS() in make_response()
> >(there it's supposedly used to avoid unnecessary notification,
> >here you say it's used to force notification). Albeit I agree that
> >the change looks consistent with the comments in io/ring.h.
> >
> >Even if correct, you're not holding blkif->blk_ring_lock here, and
> >hence I think you'll need to explain how this is not a problem.
> >
> >From a formal perspective, you also want to correct usage of tabs,
> >and (assuming this is intended for the 2.6.18 tree) you'd also need
> >to indicate so for Keir to pick this up and apply it to that tree (and
> >it might then also be a good idea to submit an equivalent patch for
> >the pv-ops trees).
> >
> >Jan
> >
> >>   return more_to_do;
> >>  }
> >
> >
> >
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
