

Re: [Xen-devel] Improving domU restore time

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Improving domU restore time
From: Rafal Wojtczuk <rafal@xxxxxxxxxxxxxxxxxxxxxx>
Date: Tue, 25 May 2010 14:50:00 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 25 May 2010 05:51:14 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C8217820.15199%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20100525103557.GC23903@xxxxxxxxxxxxxxxxxxx> <C8217820.15199%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.17 (2007-11-01)
On Tue, May 25, 2010 at 12:50:40PM +0100, Keir Fraser wrote:
> On 25/05/2010 11:35, "Rafal Wojtczuk" <rafal@xxxxxxxxxxxxxxxxxxxxxx> wrote:
> > a) Is it correct that when xc_restore runs, the target domain's memory is
> > already zeroed (because the hypervisor scrubs free memory before it is
> > assigned to a new domain)?
> There is no guarantee that the memory will be zeroed.
For my education, could you explain who is responsible for clearing the memory
of a newborn domain? Xend? Could you point me to the relevant code fragments?
It seems sensible to clear free memory in hypervisor context during its idle
cycles; if non-temporal stores (movnti) were used for this, the scrubbing would
not pollute the caches, and it has to be done anyway, doesn't it?
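
For illustration, here is a minimal sketch of what such an idle-loop scrub
could look like with non-temporal stores (SSE2 intrinsics; _mm_stream_si128
compiles to movntdq, the vector cousin of movnti). This is only my assumption
of the approach, not Xen's actual scrubbing code:

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

/* Hypothetical: zero one 4 KiB page with non-temporal stores so the
 * scrub bypasses (and therefore does not evict) the data caches. */
static void scrub_page_nt(void *page)
{
    __m128i zero = _mm_setzero_si128();
    char *p = page;              /* assume 16-byte-aligned page */
    size_t i;

    for (i = 0; i < 4096; i += 64) {
        _mm_stream_si128((__m128i *)(p + i +  0), zero);
        _mm_stream_si128((__m128i *)(p + i + 16), zero);
        _mm_stream_si128((__m128i *)(p + i + 32), zero);
        _mm_stream_si128((__m128i *)(p + i + 48), zero);
    }
    _mm_sfence();  /* order the non-temporal stores before the page is reused */
}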

> > b) xen-3.4.3/xc_restore reads data from the savefile in 4k portions - so,
> > one read syscall per page. Make it read in larger chunks. It looks like
> > this is fixed in xen-4.0.0; is this correct?
> It got changed a lot for Remus; I expect performance was on their mind.
> Normally the kernel's file readahead heuristic would recover most of the
> performance lost by not reading in larger chunks.
Yes, readahead would keep the disk request queue full, but I was just thinking
of reducing the syscall overhead; 1e5 syscalls is a lot :) (see the batching
sketch after the dd numbers below)
[user@qubes ~]$ dd if=/dev/zero of=/dev/null bs=4k count=102400
102400+0 records in
102400+0 records out
419430400 bytes (419 MB) copied, 0.307211 s, 1.4 GB/s
[user@qubes ~]$ dd if=/dev/zero of=/dev/null bs=4M count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB) copied, 0.25347 s, 1.7 GB/s
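
To make the batching concrete, here is a hypothetical sketch of how the
restore loop could read 4 MiB per syscall instead of 4 KiB; read_exact() and
restore_pages() are stand-in names of my own, not the actual xc_restore code:

#include <stdlib.h>
#include <unistd.h>

#define PAGE_SIZE   4096
#define BATCH_PAGES 1024   /* 4 MiB per read() instead of one page */

/* Loop until the full length has been read (read() may return short). */
static int read_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)
            return -1;          /* EOF or error */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

static int restore_pages(int fd, unsigned long nr_pages)
{
    char *buf = malloc((size_t)BATCH_PAGES * PAGE_SIZE);
    unsigned long n;

    if (!buf)
        return -1;
    while (nr_pages) {
        n = nr_pages < BATCH_PAGES ? nr_pages : BATCH_PAGES;
        if (read_exact(fd, buf, (size_t)n * PAGE_SIZE)) {
            free(buf);
            return -1;
        }
        /* ... map the n pages into the target domain and copy ... */
        nr_pages -= n;
    }
    free(buf);
    return 0;
}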

