xen-devel

Re: [Xen-devel] Live migration with MMIO pages

On Thu, 2007-11-01 at 11:34 +0000, Tim Deegan wrote:
> At 11:18 +0000 on 01 Nov (1193915902), Kieran Mansley wrote:
> > I'm a bit concerned about this one in _sh_propagate().  It seems it's
> > checking that the mfn is valid for a good reason, and so letting it
> > carry on in the case of MMIO frames will lead to a rather ungraceful
> > failure.  If I modify the above if to be:
> >
> >     if ( !mfn_valid(target_mfn) && (p2mt != p2m_mmio_direct) 
> >          && !iomem_access_permitted(d, target_mfn, target_mfn))
> > 
> > so that PV iomem frames can continue rather than giving up at this
> > point, it gets a fatal page fault (see below).
> 
> Oh dear.  Yes, you'll need to protect all the stuff further down that
> tries to mark the frame dirty.  Probably making sh_mfn_is_dirty test for
> !mfn_valid(mfn) and return 0 will be enough.

See attached patch.

I opted instead to surround the section that calls sh_mfn_is_dirty()
with 
        if ( mfn_valid(target_mfn) ||
             !iomem_access_permitted(d, target_mfn, target_mfn) ) 
because if sh_mfn_is_dirty() simply returned zero on a bad mfn, the
log-dirty path would then strip the _PAGE_RW flag from that mapping.
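
For illustration only, Tim's suggestion would look roughly like the
sketch below (assuming the existing sh_mfn_is_dirty(d, gmfn) prototype);
it is not what the attached patch does:

        static int sh_mfn_is_dirty(struct domain *d, mfn_t gmfn)
        {
            if ( !mfn_valid(gmfn) )
                return 0;      /* invalid/iomem frame: report "clean" */
            /* ... fall through to the existing dirty-bitmap lookup ... */
        }

With that change, any non-write fetch on an iomem frame during log-dirty
mode (i.e. live migration) would take the "not dirty" branch in
_sh_propagate() and lose _PAGE_RW, which is why I added the guard above
instead.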

I'm not 100% sure this patch is complete, correct, or free of side
effects, but it works for me.  I'd appreciate it if you could give it a
once-over to make sure it makes sense.  If it does, and it isn't too big
a risk for 3.2.0, applying it to that tree would be great.

One question in my mind is whether the tests of iomem_access_permitted()
in _sh_propagate() would be better replaced with the more general
shadow_mode_translate().  The former seemed less risky to me: otherwise
any invalid mfn could drop through when !shadow_mode_translate(), and
this test was originally designed to filter them out.
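
Concretely, that alternative would make the first test look something
like the sketch below -- shown only to illustrate the trade-off, it is
not what the attached patch does:

        if ( !mfn_valid(target_mfn) && (p2mt != p2m_mmio_direct) &&
             shadow_mode_translate(d) )
        {
            ASSERT((ft == ft_prefetch));
            *sp = shadow_l1e_empty();
            /* ... */
        }

i.e. for a non-translated PV guest every invalid mfn would be allowed
through, rather than only the iomem ranges the domain has been granted.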

Signed-off-by: Kieran Mansley <kmansley@xxxxxxxxxxxxxx>

Thanks

Kieran


fix iomem frame shadow propagation to allow live migration

diff -r b28fa67d3940 xen/arch/x86/mm/shadow/multi.c
--- a/xen/arch/x86/mm/shadow/multi.c    Thu Nov 01 11:52:34 2007 +0000
+++ b/xen/arch/x86/mm/shadow/multi.c    Thu Nov 01 16:02:36 2007 +0000
@@ -28,6 +28,7 @@
 #include <xen/sched.h>
 #include <xen/perfc.h>
 #include <xen/domain_page.h>
+#include <xen/iocap.h>
 #include <asm/page.h>
 #include <asm/current.h>
 #include <asm/shadow.h>
@@ -696,7 +697,8 @@ _sh_propagate(struct vcpu *v,
     /* N.B. For pass-through MMIO, either this test needs to be relaxed,
      * and shadow_set_l1e() trained to handle non-valid MFNs (ugh), or the
      * MMIO areas need to be added to the frame-table to make them "valid". */
-    if ( !mfn_valid(target_mfn) && (p2mt != p2m_mmio_direct) )
+    if ( !mfn_valid(target_mfn) && (p2mt != p2m_mmio_direct) &&
+         !iomem_access_permitted(d, target_mfn, target_mfn) )
     {
         ASSERT((ft == ft_prefetch));
         *sp = shadow_l1e_empty();
@@ -709,9 +711,12 @@ _sh_propagate(struct vcpu *v,
     // SHADOW_PRESENT bit.
     //
     pass_thru_flags = (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_USER |
                        _PAGE_RW | _PAGE_PRESENT);
     if ( guest_supports_nx(v) )
         pass_thru_flags |= _PAGE_NX_BIT;
+    if ( !mfn_valid(target_mfn) && 
+         iomem_access_permitted(d, target_mfn, target_mfn) )
+        pass_thru_flags |= _PAGE_PCD | _PAGE_PWT;
     sflags = gflags & pass_thru_flags;
 
     /* Only change memory caching type for pass-through domain */
@@ -758,10 +763,13 @@ _sh_propagate(struct vcpu *v,
     // p2m_ram_logdirty p2m type: only HAP uses that.)
     if ( unlikely((level == 1) && shadow_mode_log_dirty(d)) )
     {
-        if ( ft & FETCH_TYPE_WRITE ) 
-            paging_mark_dirty(d, mfn_x(target_mfn));
-        else if ( !sh_mfn_is_dirty(d, target_mfn) )
-            sflags &= ~_PAGE_RW;
+        if ( mfn_valid(target_mfn) ||
+             !iomem_access_permitted(d, target_mfn, target_mfn) ) {
+            if ( ft & FETCH_TYPE_WRITE ) 
+                paging_mark_dirty(d, mfn_x(target_mfn));
+            else if ( !sh_mfn_is_dirty(d, target_mfn) )
+                sflags &= ~_PAGE_RW;
+        }
     }
 
     /* Read-only memory */
@@ -2836,7 +2844,8 @@ static int sh_page_fault(struct vcpu *v,
     gfn = guest_l1e_get_gfn(gw.eff_l1e);
     gmfn = gfn_to_mfn(d, gfn, &p2mt);
 
-    if ( !p2m_is_valid(p2mt) || (!p2m_is_mmio(p2mt) && !mfn_valid(gmfn)) )
+    if ( !shadow_mode_translate(d) &&
+         (!p2m_is_valid(p2mt) || (!p2m_is_mmio(p2mt) && !mfn_valid(gmfn))) )
     {
         perfc_incr(shadow_fault_bail_bad_gfn);
         SHADOW_PRINTK("BAD gfn=%"SH_PRI_gfn" gmfn=%"PRI_mfn"\n", 

Attachment: iomem_shadow
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel