[Xen-devel] [PATCH] PoD: appropriate BUG_ON when domain is dying

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH] PoD: appropriate BUG_ON when domain is dying
From: Kouya Shimura <kouya@xxxxxxxxxxxxxx>
Date: Wed, 9 Dec 2009 12:00:24 +0900
Cc: george.dunlap@xxxxxxxxxxxxx
Hi,

The BUG_ON(d->is_dying) in p2m_pod_cache_add(), introduced in
c/s 20426, is not appropriate, because d->is_dying is set asynchronously.
For example, an MMU_UPDATE hypercall from qemu and the
DOMCTL_destroydomain hypercall from xend can be issued simultaneously,
as the sketch after the crash log below illustrates.

(XEN) p2m_pod_demand_populate: Out of populate-on-demand memory! tot_pages 65751 pod_entries 197408
(XEN) domain_crash called from p2m.c:1062
(XEN) Domain 1 reported crashed by domain 0 on cpu#0:
(XEN) Xen BUG at p2m.c:306
(XEN) ----[ Xen-3.5-unstable  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c4801bd12b>] p2m_pod_cache_add+0x350/0x3b1
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor
(XEN) rax: ffff830138c8ad30   rbx: ffff830138002018   rcx: ffff82f6001a8f00
(XEN) rdx: ffff830138c8ada0   rsi: ffff830138002020   rdi: ffff82f6001aab00
(XEN) rbp: ffff82c4802ef9b8   rsp: ffff82c4802ef968   r8:  000000000000d412
(XEN) r9:  0000000000000001   r10: ffff82f600000000   r11: 000000000000d478
(XEN) r12: 0000000000000001   r13: ffff830138002000   r14: 0000000000000001
(XEN) r15: 000000000000d478   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000011cec7000   cr2: ffff8800e41c7560
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c4802ef968:
(XEN)    ffff830138002000 0000000000000000 ffff82f6001a8f00 000000000000d478
(XEN)    ffff830138c8ad30 000000000000000e 000000000000000e ffff82c4802ef9d0
(XEN)    ffff830138002000 ffff82c4802ef9d0 ffff82c4802efbb8 ffff82c4801be27e
(XEN)    0000000000000002 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff83000d478000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000100000001
(XEN)    0000000100000001 0000000100000001 0000000100000001 0000000100000001
(XEN)    0000000100000001 0000000100000001 0000000100000001 ffff830138002e20
(XEN)    ffff830138002000 000000000000d5f4 0000000000125f18 0000000000125f19
(XEN)    0000000000125f1a 0000000000125f1b 0000000000125f1c 0000000000125f1d
(XEN)    0000000000125f1e 000000000011d72b 0000000000125f1f 0000000000125f20
(XEN)    000000000000d4ec 0000000000125f21 000000000000d341 000000000000d478
(XEN)    0000000000126960 ffff830138921000 0000000000138921 0000001038002000
(XEN)    ffff82c4802efc28 ffff82c4802efab0 ffff82c4802efa60 ffff82c4802efa60
(XEN)    ffff82c4802ef9d0 ffff82c4802efa60 00000002802efbb8 ffff82c4802eff28
(XEN)    ffff82c4802efab0 00000000000317ed 0000000000000010 ffff830138c8ad30
(XEN)    000000000003a26d ffff830138002000 ffff82c4802efce8 ffff82c4801be8c9
(XEN)    ffff82c4802efbd8 ffff82c4802efc28 ffff82c4802efcb4 ffff82c48011d8c4
(XEN) Xen call trace:
(XEN)    [<ffff82c4801bd12b>] p2m_pod_cache_add+0x350/0x3b1
(XEN)    [<ffff82c4801be27e>] p2m_pod_zero_check+0x3a5/0x3d8
(XEN)    [<ffff82c4801be8c9>] p2m_pod_demand_populate+0x618/0x8d4
(XEN)    [<ffff82c4801bed04>] p2m_pod_check_and_populate+0x17f/0x1fa
(XEN)    [<ffff82c4801bf3d1>] p2m_gfn_to_mfn+0x34b/0x3f4
(XEN)    [<ffff82c480166528>] mod_l1_entry+0x1aa/0x7ee
(XEN)    [<ffff82c48016774f>] do_mmu_update+0x56a/0x144b
(XEN)    [<ffff82c4801ed1bf>] syscall_enter+0xef/0x149
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at p2m.c:306
(XEN) ****************************************
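
To make the check-then-act nature of the race concrete, here is a
minimal user-space sketch (all names are hypothetical stand-ins; this
is not Xen code, and the scheduling that triggers the message is not
guaranteed on any given run):

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool is_dying;

/* Stand-in for a PoD operation: check the flag, then act on the
 * assumption that it is still clear. */
static void *pod_operation(void *arg)
{
    (void)arg;
    if ( !atomic_load(&is_dying) )
    {
        sched_yield();  /* window in which the flag can still be set */
        if ( atomic_load(&is_dying) )
            printf("is_dying changed between check and use\n");
    }
    return NULL;
}

/* Stand-in for DOMCTL_destroydomain: set the flag asynchronously. */
static void *destroy_domain(void *arg)
{
    (void)arg;
    atomic_store(&is_dying, true);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, pod_operation, NULL);
    pthread_create(&b, NULL, destroy_domain, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}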

This patch also makes p2m_pod_empty_cache() wait, via spin_barrier(),
until any PoD operation already in flight has completed.
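
One way to picture the guarantee spin_barrier() provides (a conceptual
sketch only, with a hypothetical helper name, not Xen's actual
implementation): it behaves as if the lock were taken and immediately
dropped, so any critical section that was in progress when the barrier
started must have finished by the time it returns.

static void pod_barrier_sketch(struct p2m_domain *p2md)
{
    spin_lock(&p2md->lock);    /* waits out any in-flight PoD operation */
    spin_unlock(&p2md->lock);  /* only the wait was needed, not the lock */
}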

Thanks,
Kouya

Signed-off-by: Kouya Shimura <kouya@xxxxxxxxxxxxxx>

diff -r 7f611de6b93c xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c     Tue Dec 08 14:14:27 2009 +0000
+++ b/xen/arch/x86/mm/p2m.c     Wed Dec 09 11:03:19 2009 +0900
@@ -267,6 +267,8 @@ p2m_pod_cache_add(struct domain *d,
     }
 #endif
 
+    ASSERT(p2m_locked_by_me(p2md));
+
     /*
      * Pages from domain_alloc and returned by the balloon driver aren't
      * guaranteed to be zero; but by reclaiming zero pages, we implicitly
@@ -303,7 +305,9 @@ p2m_pod_cache_add(struct domain *d,
         BUG();
     }
 
-    BUG_ON(d->is_dying);
+    /* The PoD cache must not have been emptied yet: a page added after
+     * that would never be freed, leaving a "zombie domain". */
+    BUG_ON( d->arch.relmem != RELMEM_not_started );
 
     spin_unlock(&d->page_alloc_lock);
 
@@ -501,6 +505,8 @@ p2m_pod_set_mem_target(struct domain *d,
     int ret = 0;
     unsigned long populated;
 
+    p2m_lock(p2md);
+
     /* P == B: Nothing to do. */
     if ( p2md->pod.entry_count == 0 )
         goto out;
@@ -528,6 +534,8 @@ p2m_pod_set_mem_target(struct domain *d,
     ret = p2m_pod_set_cache_target(d, pod_target);
 
 out:
+    p2m_unlock(p2md);
+
     return ret;
 }
 
@@ -536,6 +544,10 @@ p2m_pod_empty_cache(struct domain *d)
 {
     struct p2m_domain *p2md = d->arch.p2m;
     struct page_info *page;
+
+    /* After this barrier no new PoD activities can happen. */
+    BUG_ON(!d->is_dying);
+    spin_barrier(&p2md->lock);
 
     spin_lock(&d->page_alloc_lock);
 
@@ -588,7 +600,7 @@ p2m_pod_decrease_reservation(struct doma
 
     /* If we don't have any outstanding PoD entries, let things take their
      * course */
-    if ( p2md->pod.entry_count == 0 || unlikely(d->is_dying) )
+    if ( p2md->pod.entry_count == 0 )
         goto out;
 
     /* Figure out if we need to steal some freed memory for our cache */
@@ -596,6 +608,9 @@ p2m_pod_decrease_reservation(struct doma
 
     p2m_lock(p2md);
     audit_p2m(d);
+
+    if ( unlikely(d->is_dying) )
+        goto out_unlock;
 
     /* See what's in here. */
     /* FIXME: Add contiguous; query for PSE entries? */
@@ -1008,9 +1023,11 @@ p2m_pod_demand_populate(struct domain *d
     struct p2m_domain *p2md = d->arch.p2m;
     int i;
 
+    ASSERT(p2m_locked_by_me(d->arch.p2m));
+
     /* This check is done with the p2m lock held.  This will make sure that
-     * even if d->is_dying changes under our feet, empty_pod_cache() won't start
-     * until we're done. */
+     * even if d->is_dying changes under our feet, p2m_pod_empty_cache() 
+     * won't start until we're done. */
     if ( unlikely(d->is_dying) )
         goto out_fail;
 