xen-devel

Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops

To: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops
From: Brendan Cully <brendan@xxxxxxxxx>
Date: Thu, 3 Jun 2010 08:03:05 -0700
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Andreas Olsowski <andreas.olsowski@xxxxxxxxxxxxxxx>
Delivery-date: Thu, 03 Jun 2010 08:04:20 -0700
In-reply-to: <19463.32147.268104.94905@xxxxxxxxxxxxxxxxxxxxxxxx>
References: <2FD61F37AFF16D4DB46149330E4273C702FF9687@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4C0578EB.2040800@xxxxxxxxxxxxxxx> <19462.33905.936222.605434@xxxxxxxxxxxxxxxxxxxxxxxx> <20100602162745.GA27542@xxxxxxxxxxxxxxxxx> <19463.32147.268104.94905@xxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2010-04-22)
On Thursday, 03 June 2010 at 11:01, Ian Jackson wrote:
> Brendan Cully writes ("Re: [Xen-devel] slow live migration / xc_restore on 
> xen4 pvops"):
> > 2. in normal migration, the sender should close the fd after sending
> > all data, immediately triggering an IO error on the receiver and
> > completing the restore.
> 
> This is not true.  In normal migration, the fd is used by the
> machinery which surrounds xc_domain_restore (in xc_save and also in xl
> or xend).  In any case it would be quite wrong for a library function
> like xc_domain_restore to eat the fd.

The sender closes the fd, as it always has. xc_domain_restore has
always consumed the entire contents of the fd, because the qemu tail
has no length header under normal migration. There's no behavioral
difference here that I can see.
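
To illustrate (a minimal sketch with an invented helper name, not the
actual libxc code): since the qemu tail carries no length header, the
receiver simply keeps reading until the sender's close of the fd makes
read() return 0.

#include <stdlib.h>
#include <unistd.h>

static ssize_t slurp_qemu_tail(int fd, char **buf_out)
{
    size_t cap = 65536, len = 0;
    char *buf = malloc(cap);
    ssize_t n;

    if ( !buf )
        return -1;

    /* No length header: read until the sender closes the fd (EOF). */
    while ( (n = read(fd, buf + len, cap - len)) > 0 )
    {
        len += (size_t)n;
        if ( len == cap )
        {
            char *tmp = realloc(buf, cap *= 2);
            if ( !tmp ) { free(buf); return -1; }
            buf = tmp;
        }
    }

    if ( n < 0 ) { free(buf); return -1; } /* genuine I/O error */

    *buf_out = buf;
    return (ssize_t)len; /* qemu tail bytes received before EOF */
}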

> It's not necessary for xc_domain_restore to behave this way in all
> cases; all that's needed is parameters to tell it how to behave.

I have no objection to a more explicit interface. The current form is
simply Remus trying to be as invisible as possible to the rest of the
tool stack.
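
Purely as a sketch of what a more explicit interface could look like
(names invented for illustration, not the real libxc API): the caller,
rather than the stream contents, would decide whether checkpointed
semantics apply.

#include <stdbool.h>

/* Hypothetical, for discussion only: let the caller state whether
 * the stream is checkpointed (Remus) instead of having
 * xc_domain_restore infer it. */
typedef enum {
    XC_STREAM_PLAIN,        /* one-shot restore / normal live migration */
    XC_STREAM_CHECKPOINTED  /* Remus: keep consuming checkpoints */
} xc_stream_type;

static bool want_heartbeat_timeout(xc_stream_type type, bool completed)
{
    /* A read may only time out on a checkpointed stream, and only
     * after the first complete checkpoint has been received. */
    return type == XC_STREAM_CHECKPOINTED && completed;
}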

> > I did try to avoid disturbing regular live migration as much as
> > possible when I wrote the code. I suspect some other regression has
> > crept in, and I'll investigate.
> 
> The short timeout is another regression.  A normal live migration or
> restore should not fall over just because no data is available for
> 100ms.

(the timeout is 1s, by the way).

For some reason you clipped the bit of my previous message where I say
this doesn't happen:

1. reads are only supposed to be able to time out after the entire
first checkpoint has been received (IOW this wouldn't kick in until
normal migration had already completed)

Let's take a look at read_exact_timed in xc_domain_restore:

if ( completed ) {
    /* expect a heartbeat every HEARTBEAT_MS ms maximum */
    tv.tv_sec = HEARTBEAT_MS / 1000;
    tv.tv_usec = (HEARTBEAT_MS % 1000) * 1000;

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    len = select(fd + 1, &rfds, NULL, NULL, &tv);
    if ( !FD_ISSET(fd, &rfds) ) {
        fprintf(stderr, "read_exact_timed failed (select returned %zd)\n", len);
        return -1;
    }
}
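
With HEARTBEAT_MS at 1000 (matching the 1s figure above), the
arithmetic works out to tv.tv_sec = 1000 / 1000 = 1 and
tv.tv_usec = (1000 % 1000) * 1000 = 0, so select() waits at most one
second for the fd to become readable before the read is abandoned.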

'completed' is not set until the first entire checkpoint (i.e., the
entirety of non-Remus migration) has completed. So, no issue.

I see no evidence that Remus has anything to do with the live
migration performance regression discussed in this thread, and I
haven't seen any other reported issues either. I think the mlock issue
is a much more likely candidate.
