Hi all,
I know the default I/O scheduler for a DomU is noop. Setting Xen aside for a
moment: suppose the I/O scheduler for the hard disk is noop, 10 processes run
concurrently, and each process does strided reads. In this case the number of
pending requests waiting to be dispatched to the hard disk should stay around
8~9, assuming the hard disk can handle at most 2 requests at a time. This
makes sense.
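In case it makes the workload clearer, each reader process looks roughly like
the sketch below (the device path, block size, stride and read count are just
placeholders, not my exact parameters): every process opens the disk with
O_DIRECT and issues one synchronous pread at a time, skipping a fixed stride
between reads, so with 10 such processes there should always be about 10
reads outstanding minus whatever the disk is currently servicing.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 4096            /* size of each read (placeholder) */
#define STRIDE     (1024 * 1024)   /* gap between consecutive reads (placeholder) */
#define NUM_READS  10000

int main(void)
{
    void *buf;
    off_t off = 0;
    int i;
    int fd = open("/dev/xvda1", O_RDONLY | O_DIRECT);  /* placeholder path */

    if (fd < 0) { perror("open"); return 1; }
    if (posix_memalign(&buf, 512, BLOCK_SIZE))         /* O_DIRECT needs an aligned buffer */
        return 1;

    for (i = 0; i < NUM_READS; i++) {
        if (pread(fd, buf, BLOCK_SIZE, off) < 0) { perror("pread"); break; }
        off += STRIDE;                                 /* strided access pattern */
    }

    close(fd);
    return 0;
}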
Now suppose there is only one VM and the ten processes are running inside the
guest. The disk mode is tap2:aio, which means a tapdisk2 process runs in the
host, receives all the requests from the domU, and dispatches them to the real
hard disk. In that case, from the host's point of view, the number of pending
requests should still hover around 8~9, because tapdisk2 handles the requests
asynchronously.
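By "asynchronously" I mean the usual Linux native AIO pattern
(io_submit/io_getevents via libaio), which, as far as I understand, is what
the aio backend of tapdisk2 is built on. Roughly this pattern (placeholder
path and depth, this is not tapdisk2 code, build with -laio):

#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>

#define DEPTH 10        /* number of requests kept in flight (placeholder) */
#define BLKSZ 4096

int main(void)
{
    io_context_t ctx = 0;
    struct iocb iocbs[DEPTH], *iocbp[DEPTH];
    struct io_event events[DEPTH];
    void *bufs[DEPTH];
    int fd, i;

    fd = open("/dev/xvda1", O_RDONLY | O_DIRECT);   /* placeholder path */
    if (fd < 0 || io_setup(DEPTH, &ctx) < 0)
        return 1;

    for (i = 0; i < DEPTH; i++) {
        if (posix_memalign(&bufs[i], 512, BLKSZ))
            return 1;
        /* prepare one strided read per slot */
        io_prep_pread(&iocbs[i], fd, bufs[i], BLKSZ, (long long)i * 1024 * 1024);
        iocbp[i] = &iocbs[i];
    }

    /* all DEPTH requests are handed to the kernel at once ... */
    if (io_submit(ctx, DEPTH, iocbp) != DEPTH)
        return 1;

    /* ... and reaped later, so all of them can be pending at the same time */
    io_getevents(ctx, DEPTH, DEPTH, events, NULL);

    io_destroy(ctx);
    return 0;
}

So my expectation was that the submission side never blocks, and the queue
depth seen by the host disk stays close to the guest's queue depth.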
However, it turns out my assumption is wrong. According to the blktrace
trace, the number of pending requests changes like this: 9 8 7 6 5 4 3 2 1 1
1 2 3 4 5 4 3 2 1 1 1 2 3 4 5 6 7 8 8 8 ..., i.e. it rises and falls like a
curve.
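In case it helps anyone reproduce this, the in-flight count can also be
watched without blktrace by sampling the "I/Os currently in progress" field
of /proc/diskstats for the backing device; a rough sketch (the device name
"sda" is a placeholder for whatever backs the VM disk in dom0):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "sda";            /* placeholder: dom0 device backing the VM disk */
    char line[256], name[32];
    unsigned long long inflight;

    for (;;) {                          /* Ctrl-C to stop */
        FILE *fp = fopen("/proc/diskstats", "r");
        if (!fp)
            return 1;
        while (fgets(line, sizeof(line), fp)) {
            /* 9th stat field = I/Os currently in progress */
            if (sscanf(line, "%*d %*d %31s %*u %*u %*u %*u %*u %*u %*u %*u %llu",
                       name, &inflight) == 2 && strcmp(name, dev) == 0)
                printf("in-flight on %s: %llu\n", dev, inflight);
        }
        fclose(fp);
        usleep(100000);                 /* sample every 100 ms */
    }
}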
I am puzzled by this result. Can anybody explain what happens between domU
and dom0 that could cause it? Does this result make sense, or did I do
something wrong in my measurement?
I am using Xen 4.0.0-rc5; the host kernel is 2.6.31.13 and the guest kernel
is 2.6.18.
Thanks,
Yuehai