This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Writing to a ramdisk in a PV domain is SLLLOOOWWW?!?

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: RE: [Xen-devel] Writing to a ramdisk in a PV domain is SLLLOOOWWW?!?
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Thu, 6 May 2010 14:02:03 -0700 (PDT)
Cc: "Xen-Devel \(xen-devel@xxxxxxxxxxxxxxxxxxx\)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 06 May 2010 14:03:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4BE2FF9D.9010500@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <89f1cead-b77d-4294-ab12-5e05344ed346@default> <4BE2FF9D.9010500@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Writing to a ramdisk in a PV domain is SLLLOOOWWW?!?
> > Writing to the ramdisk appears to be VERY VERY slow,
> > elapsed time in the guest is several times larger than
> > user+sys, and xentop shows the guest consuming vcpu
> > seconds at about the user+sys rate.  Note that
> > this is when tmem is turned off and there is no
> > vhd swap disk configured.
> >
> > I'm suspecting that writing to ramdisk must be causing
> > some interesting/expensive PV pagetable behavior?
> > Or maybe somehow /dev/ram0 is getting routed through
> > qemu?  Or ??
> >
> I haven't looked at ramdisk, but I'm pretty sure there's nothing
> special
> about accessing it.  The only thing I can think of is that if you're
> using a 32bit highmem system then you may be being hit by lots of kmap
> overhead.  But on a 64-bit system, AFAIK, it should just be memory
> copies.

Thanks for the reply, Jeremy.

I tried a 64-bit guest (and bare-metal) and saw the same problem.
I guess I'll start collecting some statistics.

Maybe since ramdisk (via the swap code) is still going
through the blockio layer, there is some kind of cache or
TLB overhead that normally would only be necessary for
memory used for DMA?  (I don't know much about I/O, so
this is pure speculation.)  Or maybe there is an assumption
that since blockio is asynchronous, a timer is set to
some minimum value and the write-to-ramdisk isn't completed
until the timer fires?  (Hmmm... but this wouldn't explain
why virtual is much worse than bare-metal.)


Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel