xen-devel

Re: [Xen-devel] [PATCH][RFC] open HVM backing storage with O_SYNC

To: Christian.Limpach@xxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH][RFC] open HVM backing storage with O_SYNC
From: Rik van Riel <riel@xxxxxxxxxx>
Date: Sun, 30 Jul 2006 18:45:51 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 30 Jul 2006 15:46:17 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <3d8eece20607281744u6c134500jab89011d1089198e@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Red Hat, Inc
References: <44C9B75B.7060809@xxxxxxxxxx> <44CA4330.7010007@xxxxxxxxxx> <44CA71C9.1040408@xxxxxxxxxx> <3d8eece20607281744u6c134500jab89011d1089198e@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.4 (X11/20060614)
Christian Limpach wrote:

> Another possibility would be to integrate blktap/tapdisk into qemu
> which will provide asynchronous completion events and hides the
> immediate AIO interaction from qemu.  This should also make using qemu
> inside a stub domain easier

Sounds like a very good idea indeed.

> Do you fancy looking into this?

Unfortunately we've got some nasty blocker bugs left for
Fedora Core 6 which we're trying to track down first...
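
For anyone skimming the thread, the contrast being weighed here is roughly
the following: the RFC in the subject line opens the HVM backing image with
O_SYNC, so every write from the device model blocks until it reaches stable
storage, whereas a tapdisk-style backend submits requests and reaps
completion events asynchronously.  A minimal sketch of that difference using
plain Linux AIO (libaio) is below; the file name is made up and this is not
code from qemu, blktap, or the patch under discussion.

/* Illustrative sketch only -- not qemu or blktap code.  Error handling
 * is omitted.  Compile with -laio; Linux kernel AIO generally needs
 * O_DIRECT (and aligned buffers) to avoid falling back to synchronous
 * behaviour. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Synchronous style (what opening with O_SYNC amounts to):
     * every write blocks until the data is on stable storage. */
    int sfd = open("/tmp/disk.img", O_RDWR | O_CREAT | O_SYNC, 0644);
    char sync_buf[512] = "hello";
    pwrite(sfd, sync_buf, sizeof(sync_buf), 0);
    close(sfd);

    /* Asynchronous style (what a tapdisk-like backend would give the
     * device model): submit the request, keep doing other work, then
     * reap a completion event. */
    int afd = open("/tmp/disk.img", O_RDWR | O_DIRECT);
    void *abuf;
    posix_memalign(&abuf, 512, 512);    /* O_DIRECT wants aligned buffers */
    memset(abuf, 0, 512);

    io_context_t ctx = 0;
    io_setup(64, &ctx);                 /* allow up to 64 in-flight requests */

    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pwrite(&cb, afd, abuf, 512, 0);   /* queue one 512-byte write */
    io_submit(ctx, 1, cbs);

    /* ... the device model could keep emulating the guest here ... */

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL); /* block until a completion arrives */

    io_destroy(ctx);
    close(afd);
    free(abuf);
    return 0;
}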

>> The current bottleneck seems to be that MAX_MULT_COUNT is only 16.
>
> Upon closer inspection of the code, this seems to not be the case for
> LBA48 transfers.
>
> Any other ideas what could be the bottleneck then?
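
For a sense of scale, the numbers below are back-of-the-envelope figures
from the ATA command set, not from the qemu IDE code being referred to:
MAX_MULT_COUNT only caps how many sectors move per data-transfer interrupt
in READ/WRITE MULTIPLE, while the per-command sector count is 8 bits for
28-bit LBA commands and 16 bits for the LBA48 EXT commands.

/* Back-of-the-envelope ATA transfer limits; illustrative only. */
#include <stdio.h>

int main(void)
{
    const long sector     = 512;    /* bytes per sector */
    const long mult_count = 16;     /* the MAX_MULT_COUNT quoted above */
    const long lba28_max  = 256;    /* 8-bit sector count (0 means 256) */
    const long lba48_max  = 65536;  /* 16-bit sector count (0 means 65536) */

    printf("per interrupt (READ/WRITE MULTIPLE): %ld bytes\n",
           mult_count * sector);                       /* 8 KiB */
    printf("per 28-bit LBA command: %ld bytes\n",
           lba28_max * sector);                        /* 128 KiB */
    printf("per LBA48 command:      %ld bytes\n",
           lba48_max * sector);                        /* 32 MiB */
    return 0;
}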

Probably scheduling latency.  I'm running 2 VT domains on this
3GHz system, and both qemu-dm processes are taking up to 25% of
the CPU each.

Running top inside the VT guest shows a lot of CPU time being spent
in "hi" and "si" (hardware and soft interrupt) time, which is IRQ
code being emulated by qemu-dm.

Of course, with qemu-dm taking this much CPU time, it'll have a
lower CPU priority and will not get scheduled immediately.  It is
still fast enough to do 10,000+ context switches/second, but apparently
not quite fast enough for the VT guest to have decent performance
under heavy network traffic...

--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
