Re: [Xen-devel] Improving domU restore time
On 25/05/2010 11:35, "Rafal Wojtczuk" <rafal@xxxxxxxxxxxxxxxxxxxxxx> wrote:
> a) Is it correct that when xc_restore runs, the target domain's memory is
> already zeroed (because the hypervisor scrubs free memory before it is
> assigned to a new domain)?
There is no guarantee that the memory will be zeroed.
> b) xen-3.4.3's xc_restore reads data from the savefile in 4k portions - so,
> one read syscall per page. Make it read in larger chunks. It looks like this
> is fixed in xen-4.0.0 - is that correct?
It got changed a lot for Remus; I expect performance was on their minds.
Normally the kernel's file readahead heuristic would recover most of the
performance lost by not reading in larger chunks.
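For illustration, the larger-chunk pattern looks roughly like this - a sketch
only, not the actual libxc code (the helper name, the 4MB chunk size, and the
plain POSIX fd are illustrative assumptions; libxc has its own read_exact()
helper):

    /* Sketch: read the savefile in large chunks instead of issuing one
     * read() syscall per 4k page. */
    #include <unistd.h>
    #include <errno.h>
    #include <stddef.h>

    #define PAGE_SIZE   4096
    #define CHUNK_PAGES 1024   /* 4MB per read(); an arbitrary choice */

    /* Read exactly 'len' bytes, retrying on short reads and EINTR. */
    static int read_exact_sketch(int fd, void *buf, size_t len)
    {
        size_t off = 0;
        while (off < len) {
            ssize_t n = read(fd, (char *)buf + off, len - off);
            if (n < 0) {
                if (errno == EINTR)
                    continue;
                return -1;     /* read error */
            }
            if (n == 0)
                return -1;     /* unexpected EOF */
            off += n;
        }
        return 0;
    }

The restore loop would then copy page data out of the big buffer into the
mapped guest frames, rather than doing a read() per frame.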
> Also, it looks really excessive that what is basically copying 400MB of
> memory takes over 1.3s of CPU time. Is IOCTL_PRIVCMD_MMAPBATCH the culprit
> (is it dom0 kernel code? Xen mm code? hypercall overhead?), or is it
> anything else?
I would expect IOCTL_PRIVCMD_MMAPBATCH to be the most significant part of
that loop.
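For reference, that loop maps batches of foreign frames into the restore
process through the privcmd device, roughly as below - a sketch only (libxc
wraps this as xc_map_foreign_batch() on Linux; the error handling is
simplified and the headers are assumed to come from the Xen tools tree):

    /* Sketch: map 'num' frames of domain 'dom', at the MFNs listed in
     * 'arr', into our address space via IOCTL_PRIVCMD_MMAPBATCH. On
     * return the kernel flags failed entries with error bits in arr[]. */
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <xenctrl.h>          /* domid_t, xen_pfn_t (tools headers) */
    #include <xen/sys/privcmd.h>  /* privcmd_mmapbatch_t, the ioctl */

    #define PAGE_SIZE 4096

    static void *map_foreign_batch_sketch(int privcmd_fd, domid_t dom,
                                          xen_pfn_t *arr, int num)
    {
        privcmd_mmapbatch_t batch;
        void *addr;

        /* Reserve a virtual range backed by the privcmd device. */
        addr = mmap(NULL, (size_t)num * PAGE_SIZE,
                    PROT_READ | PROT_WRITE, MAP_SHARED, privcmd_fd, 0);
        if (addr == MAP_FAILED)
            return NULL;

        batch.num  = num;
        batch.dom  = dom;
        batch.addr = (unsigned long)addr;
        batch.arr  = arr;

        /* Each entry costs page-table setup plus a hypercall path in
         * the dom0 kernel - the suspected source of the CPU time. */
        if (ioctl(privcmd_fd, IOCTL_PRIVCMD_MMAPBATCH, &batch) < 0) {
            munmap(addr, (size_t)num * PAGE_SIZE);
            return NULL;
        }
        return addr;
    }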
-- Keir
> I am aware that in the usual case xc_restore is not the bottleneck (reading
> the savefile from disk or the network is), but in cases where we can fetch
> the savefile quickly, it matters.
>
> Is the 3.4.3 branch still being developed, or is it in pure maintenance
> mode, so that new code should be prepared for 4.0.0?