This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [RFC] use event channel to improve suspend speed

To: Xen Developers <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC] use event channel to improve suspend speed
From: Brendan Cully <brendan@xxxxxxxxx>
Date: Thu, 10 May 2007 15:13:10 -0700
Delivery-date: Thu, 10 May 2007 15:11:36 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20070509000110.GI19767@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Mail-followup-to: xen-devel@xxxxxxxxxxxxxxxxxxx
References: <20070509000110.GI19767@xxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.15 (2007-05-02)
The posted patch was a fairly conservative approach (backward
compatible, equivalent to existing semantics). I've done some
more experimental work that reduces the time for the final round to
about 5ms. Here are the stats for 100 checkpoints:

avg: 5.62 ms, min: 3.96, max: 13.70, median: 4.86

It turns out the biggest remaining delay is (surprise!) xenstored. To
get the above numbers I unwired xenstored from VIRQ_DOM_EXC and let
xc_save bind to it.
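For reference, the experiment described above amounts to roughly the following on xc_save's side, using the libxenctrl event-channel interface (a sketch against the Xen 3.x-era API; error handling is trimmed, and it assumes xenstored has already been unwired from VIRQ_DOM_EXC, since only one binding is possible):

```c
#include <xenctrl.h>   /* xc_evtchn_* interface, Xen 3.x era */

/* Wait for the domain-exception VIRQ directly instead of going
 * through a xenstore watch. Because xenstored normally owns the
 * VIRQ_DOM_EXC binding, stealing it like this is not practical --
 * hence the numbers above are experimental only. */
static int wait_for_dom_exc(void)
{
    int xce = xc_evtchn_open();          /* opens /dev/xen/evtchn */
    if (xce < 0)
        return -1;

    int port = xc_evtchn_bind_virq(xce, VIRQ_DOM_EXC);
    if (port < 0) {
        xc_evtchn_close(xce);
        return -1;
    }

    /* Blocks until the hypervisor fires the VIRQ (e.g. on suspend). */
    evtchn_port_t fired = xc_evtchn_pending(xce);
    xc_evtchn_unmask(xce, fired);

    xc_evtchn_close(xce);
    return (int)fired == port ? 0 : -1;
}
```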

Obviously this isn't a practical approach. I'd love to hear any ideas
about the right way to avoid the xenstore penalty though. My current
thought is that it might be possible for xc_save to register a dynamic
VIRQ with Xen for a target domain, and then have Xen fire that VIRQ on
suspend instead of VIRQ_DOM_EXC (iff one is installed; otherwise the
normal path is used).
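Concretely, the proposal might look something like the sketch below. The registration call is entirely hypothetical (no such hypercall or libxc wrapper exists today); it is only meant to illustrate the shape of the interface:

```c
#include <xenctrl.h>   /* Xen 3.x-era libxenctrl */

/* HYPOTHETICAL: ask Xen to route the target domain's suspend
 * notification to a fresh dynamic VIRQ bound by the caller,
 * bypassing VIRQ_DOM_EXC/xenstored for this one domain.
 * xc_evtchn_bind_suspend_virq() does not exist in current Xen. */
static int wait_for_suspend(uint32_t domid)
{
    int xce = xc_evtchn_open();
    if (xce < 0)
        return -1;

    int port = xc_evtchn_bind_suspend_virq(xce, domid); /* hypothetical */
    if (port < 0) {
        xc_evtchn_close(xce);
        return -1;
    }

    /* Xen fires this VIRQ on suspend iff a registration exists;
     * domains without one still take the DOM_EXC path, so xend and
     * xenstored keep working unmodified. */
    evtchn_port_t fired = xc_evtchn_pending(xce);
    xc_evtchn_unmask(xce, fired);

    xc_evtchn_close(xce);
    return 0;
}
```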

Any advice would be welcome.

On Tuesday, 08 May 2007 at 17:01, Brendan Cully wrote:
> Hi,
> I've been doing a little work on improving the latency of guest domain
> suspends. I've added a couple of printfs into xc_domain_save around
> the last round, and hooked up a harness to loop over the last round
> code every couple of seconds. Here are some numbers for a run of 100
> last rounds (from just before the suspend callback to just before it
> would exit), on a 3.2 GHz P4 with 1 GB of RAM, 128 MB of which goes to
> a guest. This approximates the best-case downtime for live migration,
> I think.
> current code:
> avg: 133.57 ms, min: 82.53, max: 559.86, median: 135.63
> with the attached patch:
> avg: 36.05 ms, min: 33.99, max: 52.14, median: 35.51
> The patch creates an event channel in the guest that fires the suspend
> code. xc_save can use this to suspend the domain instead of calling
> back to xend, which then writes a xenstore entry, which then causes a
> watch to fire in the guest. It seems the xenstore interaction is
> fairly slow and very jittery.
> This isn't intended for 3.1, but I thought I'd put it out just in case
> anyone else finds it interesting. I'd appreciate comments about the
> approach.
> There's also a fair amount of latency involved in xend receiving the
> notification that the domain has suspended and passing that back on to
> xc_save. A quick hack to let xc_save simply loop on xc_domain_getinfo
> until the domain suspends indicates that it should be fairly easy to
> cut the suspend latency in half again, to about 15ms. I'll see about
> finding a clean equivalent of this...
> Comments?
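The "quick hack" mentioned at the end of the quoted message, looping on xc_domain_getinfo until the domain reports a suspend shutdown, would look roughly like this (a sketch against the Xen 3.x libxc API; error handling is minimal):

```c
#include <xenctrl.h>
#include <xen/sched.h>   /* SHUTDOWN_suspend */

/* Poll until the target domain has suspended itself, instead of
 * waiting for xend to relay the notification back to xc_save. */
static int wait_until_suspended(int xc_handle, uint32_t domid)
{
    xc_dominfo_t info;

    for (;;) {
        /* Fetch info for exactly this one domain. */
        if (xc_domain_getinfo(xc_handle, domid, 1, &info) != 1 ||
            info.domid != domid)
            return -1;  /* domain vanished */

        if (info.shutdown && info.shutdown_reason == SHUTDOWN_suspend)
            return 0;   /* guest has suspended */

        /* A tight loop burns CPU; a short usleep() here would trade
         * a little latency for much less dom0 load. */
    }
}
```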
