xen-devel

[Xen-devel] Re: Re: VT is comically slow

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Re: Re: VT is comically slow
From: alex@xxxxxxxxxxxxxxx
Date: Thu, 06 Jul 2006 18:35:42 -0800

Andrew Warfield wrote:
> > The QEMU code that we use doesn't go through the dom0 buffer cache; we
> > modified the code to use O_DIRECT.  You can't use the buffer cache and
> > the accelerated drivers (they go right to the disk) together, as that
> > can cause disk corruption.  The performance numbers we get from this
> > version of QEMU are still 4 to 6 times slower than native disk I/O.
>
> I doubt O_DIRECT buys you much in the way of performance -- as you say
> it's just a correctness thing.  Still, the qemu block code is all
> completely synchronous -- the fact that you simply can't have more
> than a single outstanding block request at a time is going to
> seriously harm performance.  As Anthony explained, some form of
> asynchronous IO in the qemu drivers would clearly improve performance.
>
That was exactly my point: O_DIRECT doesn't improve performance. Anthony
made a point in his e-mail that buffered I/O could be one of the reasons
that QEMU's performance is slow.
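
For anyone who wants to experiment, the essence of the O_DIRECT change is
roughly the following (a minimal standalone sketch, not our actual patch;
the file name and sizes are made up). The catch is that O_DIRECT requires
the buffer, offset, and length to all be sector-aligned, which is why it
is more than a one-flag change:

/* sketch.c -- read one block bypassing the dom0 buffer cache.
 * Build: gcc -std=c99 -D_GNU_SOURCE sketch.c -o sketch */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    /* O_DIRECT: DMA straight to/from our buffer, no page-cache copy. */
    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* Buffer, offset and length must all be aligned; 4096 is safe. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }

    ssize_t n = pread(fd, buf, 4096, 0);
    if (n < 0) perror("pread");
    else printf("read %zd bytes directly from disk\n", n);

    free(buf);
    close(fd);
    return 0;
}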
>
> > You might be right, however even with pipelining and async I/O, I
> > don't think it is going to get close to native I/O numbers.  I guess
> > we'll just have to wait and see.
> 
> I'd expect that disk can be made to perform reasonably well with qemu,
> using dma emulation and async IO.  The old vmware workstation paper on
> device virtualization does a pretty good job of demonstrating that
> trap and emulate device access sucks, and would seem to imply that
> it's pretty unlikely to be practical for high-rate networking.
>
I understand what you guys are proposing, and I look forward to seeing
your implementation and your performance numbers. In particular, it
would be very interesting to see what kind of CPU overhead you get.
With regard to networking I agree with the VMware guys: it is not
practical to use trap-and-emulate to achieve high-rate networking
throughput. For example, with our accelerated drivers on certain
network benchmarks we can drive the network at almost wire speed from
an HVM domain while consuming very few CPU cycles.
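
To put the disk point in concrete terms: the difference being discussed
is between qemu's current one-blocking-request-at-a-time read path and
something that keeps several requests in flight. A rough standalone
sketch using POSIX AIO (hypothetical code, not qemu's block layer;
build with gcc -std=c99 and link with -lrt):

/* aio_sketch.c -- queue several reads before waiting for any of them. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NREQ  4
#define RSIZE 4096

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb cb[NREQ];
    const struct aiocb *list[NREQ];
    char buf[NREQ][RSIZE];

    /* Submit NREQ reads up front -- the point is to have more than a
     * single outstanding block request at a time. */
    for (int i = 0; i < NREQ; i++) {
        memset(&cb[i], 0, sizeof(cb[i]));
        cb[i].aio_fildes = fd;
        cb[i].aio_buf    = buf[i];
        cb[i].aio_nbytes = RSIZE;
        cb[i].aio_offset = (off_t)i * RSIZE;
        if (aio_read(&cb[i]) < 0) { perror("aio_read"); return 1; }
        list[i] = &cb[i];
    }

    /* Wait for completions; a real driver would service other work here
     * instead of blocking. */
    for (int done = 0; done < NREQ; ) {
        aio_suspend(list, NREQ, NULL);
        for (int i = 0; i < NREQ; i++) {
            if (list[i] && aio_error(&cb[i]) != EINPROGRESS) {
                printf("request %d completed: %zd bytes\n",
                       i, aio_return(&cb[i]));
                list[i] = NULL;   /* NULL entries are ignored by aio_suspend */
                done++;
            }
        }
    }
    close(fd);
    return 0;
}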

Cheers,

-Alex V.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
