Re: [Xen-devel] [PATCH] blkback: Fix block I/O latency issue

To: "Vincent, Pradeep" <pradeepv@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] blkback: Fix block I/O latency issue
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Mon, 16 May 2011 11:22:24 -0400
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>, Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Delivery-date: Mon, 16 May 2011 08:23:53 -0700
In-reply-to: <20110513025132.GA4652@xxxxxxxxxxxx>
References: <20110509202403.GA27755@xxxxxxxxxxxx> <C9F1B153.1500C%pradeepv@xxxxxxxxxx> <20110513025132.GA4652@xxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, May 12, 2011 at 10:51:32PM -0400, Konrad Rzeszutek Wilk wrote:
> > >>what were the numbers when it came to high bandwidth numbers
> > 
> > Under high I/O workload, where the blkfront would fill up the queue as
> > blkback works the queue, the I/O latency problem in question doesn't
> > manifest itself and as a result this patch doesn't make much of a
> > difference in terms of interrupt rate. My benchmarks didn't show any
> > significant effect.
> 
> I have to rerun my benchmarks. Under high load (so 64Kb, four threads
> writing as much as they can to an iSCSI disk), the IRQ rate for each
> blkif went from 2-3/sec to ~5K/sec. But I did not do a good job
> of capturing the submission latency to see if the I/Os get the
> response back as fast (or the same) as without your patch.
> 
> And the iSCSI disk on the target side was a RAMdisk, so the latency
> was quite small, which is not a fair test of your problem.
> 
> Do you have a program to measure the latency for the workload you
> had encountered? I would like to run those numbers myself.
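
In the meantime, the kind of trivial probe I have in mind is something
like the sketch below (O_DIRECT random reads timed with clock_gettime();
the device path, 4K block size, and I/O count are placeholders, not what
you ran):

/* latprobe.c: minimal per-I/O latency probe (sketch).
 * Times O_DIRECT 4K random reads and prints completion latency.
 * Build: gcc -O2 -o latprobe latprobe.c -lrt
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BS  4096	/* block size; must stay sector-aligned for O_DIRECT */
#define IOS 1000	/* number of I/Os to sample */

int main(void)
{
	struct timespec t0, t1;
	void *buf;
	off_t blocks;
	int fd, i;

	fd = open("/dev/xvdb", O_RDONLY | O_DIRECT);
	if (fd < 0) { perror("open"); return 1; }
	if (posix_memalign(&buf, BS, BS)) return 1;
	blocks = lseek(fd, 0, SEEK_END) / BS;
	if (blocks <= 0) return 1;

	for (i = 0; i < IOS; i++) {
		off_t off = (rand() % blocks) * BS;
		clock_gettime(CLOCK_MONOTONIC, &t0);
		if (pread(fd, buf, BS, off) != BS) { perror("pread"); return 1; }
		clock_gettime(CLOCK_MONOTONIC, &t1);
		printf("%ld us\n", (long)(t1.tv_sec - t0.tv_sec) * 1000000 +
		       (t1.tv_nsec - t0.tv_nsec) / 1000);
	}
	close(fd);
	return 0;
}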

Ran some more benchmarks this week. This time I tried running them on:

 - an iSCSI target (1GB; on the "other side" it wakes up every 1 msec, so
   the latency is set to 1 msec).
 - scsi_debug with delay=0 (no delay, as fast as possible; set up along
   the lines sketched below this list. It comes out to about 4 microseconds
   completion time with a queue depth of one and 32K I/Os).
 - a local SATA I 80GB ST3808110AS (still running, as it is quite slow).
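
For reference, the scsi_debug disk can be brought up along these lines
(delay=0 is the module's documented "respond without delay" parameter;
the 1GB size here is illustrative):

  modprobe scsi_debug delay=0 dev_size_mb=1024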

A single PV guest did a round (three times) of two threads randomly
writing I/Os with a queue depth of 256, and then a different round of
four threads writing/reading (80/20) 512 bytes up to 64K randomly over
the disk.
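
Expressed as an fio job, that would look roughly like the sketch below
(untested; the device path, runtime, and the 64K block size in the first
round are my guesses, not measured values):

  ; round one: two threads of random writes at QD 256
  [global]
  ioengine=libaio
  direct=1
  filename=/dev/xvdb
  runtime=60
  time_based

  [randwrite-qd256]
  rw=randwrite
  bs=64k
  iodepth=256
  numjobs=2

  ; round two: four threads, 80/20 write/read, 512 bytes up to 64K
  [mixed-80-20]
  stonewall
  rw=randrw
  rwmixwrite=80
  bsrange=512-64k
  iodepth=256
  numjobs=4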

I used the attached patch against #master 
(git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git)
to gauge how well we are doing (and what the interrupt generation rate is).
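
(Independent of the patch, the per-blkif rate can be eyeballed from
inside the guest; the exact name in /proc/interrupts varies, but a crude
loop like

  while sleep 1; do grep blkif /proc/interrupts; done

shows the counters ticking once per second.)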

I think these workloads would be considered 'high I/O', and I was
expecting your patch to have no influence on the numbers.

But to my surprise, in the case where the I/O latency was high, the
interrupt generation rate was quite small, whereas where the I/O latency
was very, very small (4 microseconds) the interrupt generation was on
average about 20K/s. And this is with a queue depth of 256 and four
threads. I was expecting the opposite, hence I am quite curious to see
your use case.

What do you consider middle-I/O and low-I/O cases? Do you use 'fio' for
your testing?

With the high I/O load, the numbers came out to about a 1% benefit with
your patch. However, I am worried (maybe unnecessarily?) about the 20K/s
interrupt generation when the iometer tests kicked in (this was only when
using the unrealistic 'scsi_debug' drive).

The picture of this using the iSCSI target:
http://darnok.org/xen/amazon/iscsi_target/iometer-bw.png

And when done on top of the local RAMdisk (scsi_debug):
http://darnok.org/xen/amazon/scsi_debug/iometer-bw.png

Attachment: amazon-debug.patch
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel