xen-devel

Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops

To: Brendan Cully <brendan@xxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Thu, 3 Jun 2010 16:18:39 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Andreas Olsowski <andreas.olsowski@xxxxxxxxxxxxxxx>, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Delivery-date: Thu, 03 Jun 2010 08:19:39 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100603150305.GA53591@xxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsDLgkqxzM46883TTOom3JtpUbmfgAAgHzl
Thread-topic: [Xen-devel] slow live migration / xc_restore on xen4 pvops
User-agent: Microsoft-Entourage/12.24.0.100205
On 03/06/2010 16:03, "Brendan Cully" <brendan@xxxxxxxxx> wrote:

> I see no evidence that Remus has anything to do with the live
> migration performance regression discussed in this thread, and I
> haven't seen any other reported issues either. I think the mlock issue
> is a much more likely candidate.

I agree it's probably lack of batching plus expensive mlocks. The
performance difference between the machines under test is either because one
runs out of 2MB superpage extents before the other, or because its mlock
operations are, for some reason, much more likely to take a slow path in the
kernel (possibly including disk I/O).
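
To make the batching point concrete, here is a minimal C sketch (not the
libxc code itself; populate_one()/populate_batch() are hypothetical stand-ins
for the real hypercall wrappers) contrasting a per-page path, where each
operation pins its own small buffer with mlock() before the call, with a
batched path that pins one array once and issues a single call for the whole
batch:

  #include <stddef.h>
  #include <stdint.h>
  #include <sys/mman.h>

  typedef uint64_t pfn_t;

  /* Hypothetical stand-ins for the real hypercall wrappers. */
  static int populate_one(pfn_t pfn)                  { (void)pfn; return 0; }
  static int populate_batch(const pfn_t *p, size_t n) { (void)p; (void)n; return 0; }

  /* Unbatched: one mlock/munlock pair (and one call) per page. */
  static int restore_pages_per_page(const pfn_t *pfns, size_t count)
  {
      for (size_t i = 0; i < count; i++) {
          if (mlock(&pfns[i], sizeof(pfns[i])) != 0)  /* may hit a kernel slow path */
              return -1;
          int rc = populate_one(pfns[i]);
          munlock(&pfns[i], sizeof(pfns[i]));
          if (rc != 0)
              return rc;
      }
      return 0;
  }

  /* Batched: pin the whole array once, make one call, unpin once. */
  static int restore_pages_batched(const pfn_t *pfns, size_t count)
  {
      if (mlock(pfns, count * sizeof(*pfns)) != 0)
          return -1;
      int rc = populate_batch(pfns, count);
      munlock(pfns, count * sizeof(*pfns));
      return rc;
  }

The per-page variant pays the mlock/munlock cost count times, which is cheap
when the memory is already resident but can dominate the restore time if the
kernel takes a slow path on each call.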

We need to get batching back, and Edwin is on the case for that: I hope
Andreas will try out Edwin's patch as a step in that direction. We can also
reduce mlock cost by keeping some of the domain_restore arrays mlocked across
the entire restore operation, I should imagine.
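
As a rough illustration of that last point (names are assumed, not the actual
xc_domain_restore code), the long-lived scratch arrays could be pinned once
before the main restore loop and unpinned at the end, so no per-batch
mlock/munlock round trips are needed:

  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/mman.h>

  #define BATCH_SIZE 1024                 /* illustrative batch size */

  struct restore_bufs {                   /* hypothetical long-lived arrays */
      uint64_t pfn_batch[BATCH_SIZE];
      uint64_t pfn_type[BATCH_SIZE];
  };

  static int restore_domain(void)
  {
      struct restore_bufs *bufs = calloc(1, sizeof(*bufs));
      if (!bufs)
          return -1;

      /* Pin the arrays once for the whole restore, not once per batch. */
      if (mlock(bufs, sizeof(*bufs)) != 0) {
          free(bufs);
          return -1;
      }

      /* ... main restore loop: refill pfn_batch/pfn_type and issue one
       *     batched call per BATCH_SIZE pages; no further mlock calls ... */

      munlock(bufs, sizeof(*bufs));
      free(bufs);
      return 0;
  }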

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel