xen-devel

RE: [Xen-devel] Live migration fails under heavy network use

To: "John Levon" <levon@xxxxxxxxxxxxxxxxx>, "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
Subject: RE: [Xen-devel] Live migration fails under heavy network use
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Thu, 22 Feb 2007 22:34:30 -0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 22 Feb 2007 14:34:22 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20070221004156.GB31928@xxxxxxxxxxxxxxxxxxxxxxx> <C201A5FA.25B9%Keir.Fraser@xxxxxxxxxxxx> <20070222195547.GA17823@xxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdWuzncVVrnCWwRSx25QnLZ1Lu7iAAFLHBw
Thread-topic: [Xen-devel] Live migration fails under heavy network use
> I've modified the segment driver to prefault the MFNs and things seem
> a lot better for both Solaris and Linux domUs:
> 
> (XEN) /export/johnlev/xen/xen-work/xen.hg/xen/include/asm/mm.h:184:d0 Error
> pfn 5512: rd=ffff830000f92100, od=0000000000000000, caf=00000000,
> taf=0000000000000002
> (XEN) mm.c:590:d0 Error getting mfn 5512 (pfn 47fa) from L1 entry
> 0000000005512705 for dom52
> (XEN) mm.c:566:d0 Non-privileged (53) attempt to map I/O space 00000000
> done
> done
> 
> Not quite sure why the new domain is trying to map 00000000 though.

The messages from the save side are expected. Is the message from the
restored domain triggered by the restore code, i.e. before the domain is
un-paused?

I expect that if you change the 'pfn=0' in canonicalize_pagetable:539 to
'deadb000' you'll see that value propagated through to the restore message.
In which case it's ugly, but benign.

> I also see a fair amount of:
> 
> Dom48 freeing in-use page 2991 (pseudophys 100a4): count=2 type=e8000000

That's fine. Debug builds are a bit chatty for live migration...

Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel