xen-devel

Re: [Xen-devel] Improving domU restore time

To: Rafal Wojtczuk <rafal@xxxxxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Improving domU restore time
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 25 May 2010 12:50:40 +0100
Cc:
Delivery-date: Tue, 25 May 2010 04:51:41 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100525103557.GC23903@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acr79jUGyc2cQeLiSW2mOS93wCOzVQACkpP4
Thread-topic: [Xen-devel] Improving domU restore time
User-agent: Microsoft-Entourage/12.24.0.100205
On 25/05/2010 11:35, "Rafal Wojtczuk" <rafal@xxxxxxxxxxxxxxxxxxxxxx> wrote:

> a) Is it correct that when xc_restore runs, the target domain's memory is
> already zeroed (because the hypervisor scrubs free memory before it is
> assigned to a new domain)?

There is no guarantee that the memory will be zeroed.

> b) xen-3.4.3/xc_restore reads data from the savefile in 4k portions - so, one
> read syscall per page. Make it read in larger chunks. It looks like this is
> fixed in xen-4.0.0, is this correct?

It got changed a lot for Remus. I expect performance was on their minds.
Normally the kernel's file readahead heuristic would recover most of the
performance lost by not reading in larger chunks.
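
For illustration only, the difference boils down to something like the sketch
below. This is not the actual xc_restore/Remus code: the process_page()
callback, the restore_pages() helper and the 4 MiB chunk size are made-up
placeholders, just to show one read() covering many pages instead of one
read syscall per 4 KiB page.

/*
 * Sketch: buffer savefile reads so that many 4 KiB pages are pulled in
 * with a single read() call instead of one syscall per page.
 * CHUNK_PAGES and process_page() are illustrative assumptions.
 */
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>

#define PAGE_SIZE   4096
#define CHUNK_PAGES 1024            /* 4 MiB per read() */

/* Hypothetical per-page handler supplied by the restore logic. */
extern int process_page(const void *page, unsigned long index);

static int restore_pages(int fd, unsigned long nr_pages)
{
    char *buf = malloc((size_t)CHUNK_PAGES * PAGE_SIZE);
    unsigned long done = 0;

    if (!buf)
        return -1;

    while (done < nr_pages) {
        unsigned long batch = nr_pages - done;
        size_t want, got = 0;

        if (batch > CHUNK_PAGES)
            batch = CHUNK_PAGES;
        want = (size_t)batch * PAGE_SIZE;

        /* One read() covers up to CHUNK_PAGES pages; loop on short reads. */
        while (got < want) {
            ssize_t n = read(fd, buf + got, want - got);
            if (n <= 0) {
                if (n < 0 && errno == EINTR)
                    continue;
                free(buf);
                return -1;
            }
            got += n;
        }

        for (unsigned long i = 0; i < batch; i++) {
            if (process_page(buf + i * PAGE_SIZE, done + i)) {
                free(buf);
                return -1;
            }
        }
        done += batch;
    }

    free(buf);
    return 0;
}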

> Also, it looks really excessive that basically copying 400MB of memory takes
> over 1.3s of CPU time. Is IOCTL_PRIVCMD_MMAPBATCH the culprit (is it the
> dom0 kernel code? Xen mm code? hypercall overhead?), or anything else?

I would expect IOCTL_PRIVCMD_MMAPBATCH to be the most significant part of
that loop.
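
To make the cost structure concrete, the inner loop is essentially "map a
batch of the guest's pages, memcpy the savefile data into them, unmap". A
rough sketch of that pattern, using the 3.4/4.0-era xc_map_foreign_batch()
wrapper around IOCTL_PRIVCMD_MMAPBATCH (not the actual xc_domain_restore
code; error handling and PFN bookkeeping are simplified):

/*
 * Sketch: map a batch of the target domain's pages through
 * IOCTL_PRIVCMD_MMAPBATCH (via the libxc wrapper), fill them from an
 * already-read savefile buffer, then unmap. Partial-failure handling
 * (failed PFNs flagged in the array) is omitted here.
 */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <xenctrl.h>

#define PAGE_SIZE 4096

static int copy_batch(int xc_handle, uint32_t domid,
                      xen_pfn_t *pfns, unsigned int nr,
                      const char *src)
{
    /* One ioctl maps all 'nr' pages; this mapping step (privcmd in the
     * dom0 kernel, Xen mm code, hypercall) is where the CPU time goes. */
    char *region = xc_map_foreign_batch(xc_handle, domid,
                                        PROT_WRITE, pfns, nr);
    if (region == NULL)
        return -1;

    for (unsigned int i = 0; i < nr; i++)
        memcpy(region + (size_t)i * PAGE_SIZE,
               src + (size_t)i * PAGE_SIZE, PAGE_SIZE);

    munmap(region, (size_t)nr * PAGE_SIZE);
    return 0;
}

The memcpy itself is cheap; the point above is that the mapping step, which
goes through the privcmd driver and a hypercall for every batch, dominates.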

 -- Keir

> I am aware that in the usual cases xc_restore is not the bottleneck
> (reading the savefile from disk or the network is), but in cases where we
> can fetch the savefile quickly, it matters.
> 
> Is the 3.4.3 branch still being developed, or is it in pure maintenance mode
> only, so that new code should be prepared for 4.0.0?



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel