This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] questions about the number of pending requests that the host system can detect

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] questions about the number of pending requests that the host system can detect
From: Yuehai Xu <yuehaixu@xxxxxxxxx>
Date: Thu, 12 Aug 2010 14:36:20 -0400
Cc: yuehai.xu@xxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, yhxu@xxxxxxxxx
Delivery-date: Thu, 12 Aug 2010 11:37:28 -0700
In-reply-to: <4C643B9B.1000308@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTi=3d=+J2WNbBM5rLu_MfpjA_3OXhm8qOkCf_sLg@xxxxxxxxxxxxxx> <4C6437C4.3040908@xxxxxxxx> <AANLkTimwpt8xFAt5Njw5V8+5qUFs2WTpWqUSacS_+3Qa@xxxxxxxxxxxxxx> <AANLkTin5L=qOGAH_oQDXnF9eaodnOxz6f6z6HAxEu6d-@xxxxxxxxxxxxxx> <4C643B9B.1000308@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Thu, Aug 12, 2010 at 2:21 PM, Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:
>  On 08/12/2010 11:18 AM, Yuehai Xu wrote:
>> On Thu, Aug 12, 2010 at 2:16 PM, Yuehai Xu<yuehaixu@xxxxxxxxx>  wrote:
>>> On Thu, Aug 12, 2010 at 2:04 PM, Jeremy Fitzhardinge<jeremy@xxxxxxxx>
>>>  wrote:
>>>>  On 08/11/2010 08:42 PM, Yuehai Xu wrote:
>>>>> However, the result turns out that my assumption is wrong. The number
>>>>> of pending requests, according to the trace of blktrace, is changing
>>>>> like this way: 9 8 7 6 5 4 3 2 1 1 1 2 3 4 5 4 3 2 1 1 1 2 3 4 5 6 7 8
>>>>> 8 8..., just like a curve.
>>>>> I am puzzled about this weird result. Can anybody explain what has
>>>>> happened between domU and dom0 for this result? Does this result make
>>>>> sense? or I did something wrong to get this result.
>>>> If you're using a journalled filesystem in the guest, it will need to
>>>> drain the IO queue periodically to control the write ordering.  You
>>>> should also observe barrier writes in the blkfront stream.
>>>>    J
>>> The file system I use in the guest is ext3, which is a journaled
>>> file system. However, I don't quite understand what you mean by
>>> "... control the write ordering", because the 10 processes running
>>> in the guest all just send requests; there are no write requests.
>>> What do you mean by "barrier writes" here?
>>> Thanks,
>>> Yuehai
>> Sorry, I left out a word: the requests sent by the 10 processes in
>> the guest are all read requests.
> Even a pure read-only workload may generate writes for metadata unless
> you've turned it off.  Is it a read-only mount?  Do you have the noatime
> mount option?  Is the device itself read-only?

The disk definition in my config is ['tap2:aio:/PATH/dom.img, hda1, w'], so
I don't think it is a read-only mount, and I don't pass any special mount
options. The device itself should be read-write.
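For what it's worth, the effective mount options can be double-checked from
inside the guest by parsing /proc/mounts. A small sketch (the device name and
sample line below are just illustrations, not my actual guest):

```python
def mount_options(mounts_text, mountpoint):
    """Return the option list for `mountpoint` from /proc/mounts-style text,
    or None if the mountpoint is not found."""
    for line in mounts_text.splitlines():
        fields = line.split()
        # /proc/mounts fields: device mountpoint fstype options dump pass
        if len(fields) >= 4 and fields[1] == mountpoint:
            return fields[3].split(",")
    return None

# Hypothetical /proc/mounts line for an ext3 root filesystem:
sample = "/dev/hda1 / ext3 rw,relatime,errors=remount-ro 0 0\n"
opts = mount_options(sample, "/")
print("rw" in opts)        # True  -> not a read-only mount
print("noatime" in opts)   # False -> atime updates (metadata writes) possible
```

On a live guest you would pass open("/proc/mounts").read() instead of the
sample string; if "noatime" is absent and the mount is rw, atime updates can
generate metadata writes even for a read-only workload.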

> Still, it seems odd that it won't/can't keep the queue full of read
> requests.  Unless its getting local cache hits?
>    J

I don't think the local cache is being hit, because before every test I drop
the caches in both the guest and host OS. Also, the access pattern is a
strided read, so cache hits should be essentially impossible.
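For context, the strided-read pattern I use looks roughly like the sketch
below (the block size and stride here are made-up parameters, not my actual
test values; dropping the caches itself requires root and is done separately
via /proc/sys/vm/drop_caches):

```python
import os
import tempfile

BLOCK = 4096          # bytes read per request (assumed value)
STRIDE = 8 * BLOCK    # distance between consecutive read offsets (assumed)

def strided_read(path, block=BLOCK, stride=STRIDE):
    """Read `block` bytes at offsets 0, stride, 2*stride, ... until EOF.
    Returns the number of read requests issued."""
    reads = 0
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        offset = 0
        while offset < size:
            f.seek(offset)
            f.read(block)
            reads += 1
            offset += stride
    return reads

# Demo on a small temporary file: 10 strides' worth of data -> 10 reads.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"\0" * (10 * STRIDE))
print(strided_read(tmp.name))  # 10
os.unlink(tmp.name)
```

Because each read skips far ahead of the last one, no read touches data that
an earlier read could have pulled into the page cache, which is why I expect
no cache hits after the caches are dropped.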

I am not sure whether there are any write requests; even if there are, their
number should be very small. Would that affect the I/O queue of the guest or
host? I don't think so. Common sense suggests the I/O queue in the host
should stay almost full, because tapdisk2 is asynchronous.


Xen-devel mailing list