WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-devel

Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops

I assume this is PV domU rather than HVM, right?

1. We need to check whether the super page path is the culprit, using SP_check1.patch.

2. If that fixes the problem, we need to check further where the extra cost comes from: the speculative algorithm, or the super page population hypercall, using SP_check2.patch.

If SP_check2.patch works, the culprit is the new allocation hypercall (so guest creation also suffers); otherwise, it is the speculative algorithm.

Does it make sense?

Thanks,
edwin


Brendan Cully wrote:
> On Thursday, 03 June 2010 at 06:47, Keir Fraser wrote:
>> On 03/06/2010 02:04, "Brendan Cully" <Brendan@xxxxxxxxx> wrote:
>>
>>> I've done a bit of profiling of the restore code and observed the
>>> slowness here too. It looks to me like it's probably related to
>>> superpage changes. The big hit appears to be at the front of the
>>> restore process during calls to allocate_mfn_list, under the
>>> normal_page case. It looks like we're calling
>>> xc_domain_memory_populate_physmap once per page here, instead of
>>> batching the allocation? I haven't had time to investigate further
>>> today, but I think this is the culprit.
>> Cc'ing Edwin Zhai. He wrote the superpage logic for domain restore.
>
> Here's some data on the slowdown going from 2.6.18 to pvops dom0:
>
> I wrapped the call to allocate_mfn_list in uncanonicalize_pagetable
> to measure the time to do the allocation.
>
> kernel  min call time  max call time
> 2.6.18  4 us           72 us
> pvops   202 us         10696 us (!)
>
> It looks like pvops is dramatically slower to perform the
> xc_domain_memory_populate_physmap call!
>
> I'll attach the patch and raw data below.

--
best rgds,
edwin

SP_check1.patch (disable superpages entirely):

diff -r 4ab68bf4c37e tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c   Thu Jun 03 07:30:54 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c   Thu Jun 03 16:30:30 2010 +0800
@@ -1392,6 +1392,8 @@ int xc_domain_restore(xc_interface *xch,
     if ( hvm )
         superpages = 1;
 
+    superpages = 0;
+
     if ( read_exact(io_fd, &dinfo->p2m_size, sizeof(unsigned long)) )
     {
         PERROR("read: p2m_size");

SP_check2.patch (keep the superpage flag, but skip the speculative 2M population):
diff -r 4ab68bf4c37e tools/libxc/xc_domain_restore.c
--- a/tools/libxc/xc_domain_restore.c   Thu Jun 03 07:30:54 2010 +0100
+++ b/tools/libxc/xc_domain_restore.c   Thu Jun 03 16:48:38 2010 +0800
@@ -248,6 +248,7 @@ static int allocate_mfn_list(xc_interfac
     if  ( super_page_populated(xch, ctx, pfn) )
         goto normal_page;
 
+#if 0
     pfn &= ~(SUPERPAGE_NR_PFNS - 1);
     mfn =  pfn;
 
@@ -263,6 +264,7 @@ static int allocate_mfn_list(xc_interfac
     DPRINTF("No 2M page available for pfn 0x%lx, fall back to 4K page.\n",
             pfn);
     ctx->no_superpage_mem = 1;
+#endif
 
 normal_page:
     if ( !batch_buf )
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel