xen-devel

Re: [Xen-devel] xen.git branch reorg / success with 2.6.30-rc3 pv_ops do

On Thu, 2009-06-11 at 15:34 -0400, Pasi Kärkkäinen wrote:
> On Thu, Jun 11, 2009 at 09:27:09PM +0300, Pasi Kärkkäinen wrote:
> > On Thu, Jun 11, 2009 at 10:18:34AM +0100, Ian Campbell wrote:
> > > Pasi, to validate the theory that you are seeing races between unpinning
> > > and kmap_atomic_pte can you give this biguglystick approach to solving
> > > it a go.
> > > 
> > 
> > Guess what.. 
> > 
> > Now my dom0 didn't crash !! (with only this patch applied).
> > It survived kernel compilation just fine.. first time so far with pv_ops 
> > dom0.
> > 
> > I'll try again, just in case.
> > 
> 
> Yep, I tried again, and it still worked. 
> 
> No crashes anymore with this patch :) Congratulations and thanks!

Oh good, thanks for testing. The patch is not really a suitable
long-term fix as it is, but it sounds like Jeremy has some ideas.

I'm still curious why you are the only one who sees this issue. I
don't recall your dmesg showing lots of processors, which might make
the race more common, nor do you have involuntary preemption enabled.
Very strange. Oh well, I guess it doesn't matter now ;-)
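
To make the race concrete, here is a minimal userspace sketch of the
check-then-act pattern that hack_lock serializes: one thread stands in
for __xen_pgd_unpin() flipping a page back to ordinary RW, the other
for xen_kmap_atomic_pte() choosing RO vs RW based on the pinned state.
A pthread mutex plays the role of the spinlock; all names here are
illustrative only, not the actual kernel code.

/*
 * Illustrative userspace model of the unpin vs. kmap_atomic_pte race;
 * not kernel code.  Build with: gcc -O2 -pthread race_sketch.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t hack_lock = PTHREAD_MUTEX_INITIALIZER;
static bool page_pinned = true;        /* stands in for PagePinned(page) */

static void unpin_path(void)           /* stands in for __xen_pgd_unpin() */
{
        pthread_mutex_lock(&hack_lock);
        /* The whole unpin runs under the lock, so no kmap can interleave. */
        page_pinned = false;           /* pagetable page becomes normal RW */
        pthread_mutex_unlock(&hack_lock);
}

static const char *kmap_path(void)     /* stands in for xen_kmap_atomic_pte() */
{
        const char *prot;

        pthread_mutex_lock(&hack_lock);
        /* Without the lock, the pinned state could change between this
         * test and the mapping decision that depends on it. */
        prot = page_pinned ? "RO" : "RW";
        pthread_mutex_unlock(&hack_lock);
        return prot;
}

int main(void)
{
        printf("mapped %s\n", kmap_path());
        unpin_path();
        printf("mapped %s\n", kmap_path());
        return 0;
}

Without the lock, the pinned check and the mapping it guards can
interleave with the unpin; that window is exactly what the patch below
closes, if only as a big hammer.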

Ian.


> 
> -- Pasi
> 
> > 
> > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > > index 1729178..beeb8e8 100644
> > > --- a/arch/x86/xen/mmu.c
> > > +++ b/arch/x86/xen/mmu.c
> > > @@ -1145,9 +1145,12 @@ static int xen_unpin_page(struct mm_struct *mm, struct page *page,
> > >   return 0;               /* never need to flush on unpin */
> > >  }
> > >  
> > > +static DEFINE_SPINLOCK(hack_lock); /* Hack to sync unpin against kmap_atomic_pte */
> > > +
> > >  /* Release a pagetables pages back as normal RW */
> > >  static void __xen_pgd_unpin(struct mm_struct *mm, pgd_t *pgd)
> > >  {
> > > + spin_lock(&hack_lock);
> > >   xen_mc_batch();
> > >  
> > >   xen_do_pin(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> > > @@ -1173,6 +1176,7 @@ static void __xen_pgd_unpin(struct mm_struct *mm, pgd_t *pgd)
> > >   __xen_pgd_walk(mm, pgd, xen_unpin_page, USER_LIMIT);
> > >  
> > >   xen_mc_issue(0);
> > > + spin_unlock(&hack_lock);
> > >  }
> > >  
> > >  static void xen_pgd_unpin(struct mm_struct *mm)
> > > @@ -1521,6 +1525,9 @@ static void xen_pgd_free(struct mm_struct *mm, pgd_t *pgd)
> > >  static void *xen_kmap_atomic_pte(struct page *page, enum km_type type)
> > >  {
> > >   pgprot_t prot = PAGE_KERNEL;
> > > + void *ret;
> > > +
> > > + spin_lock(&hack_lock);
> > >  
> > >   if (PagePinned(page))
> > >           prot = PAGE_KERNEL_RO;
> > > @@ -1530,7 +1537,11 @@ static void *xen_kmap_atomic_pte(struct page *page, enum km_type type)
> > >                  page_to_pfn(page), type,
> > >                  (unsigned long)pgprot_val(prot) & _PAGE_RW ? "WRITE" : "READ");
> > >  
> > > - return kmap_atomic_prot(page, type, prot);
> > > + ret = kmap_atomic_prot(page, type, prot);
> > > +
> > > + spin_unlock(&hack_lock);
> > > +
> > > + return ret;
> > >  }
> > >  #endif
> > >  
> > > 
> > > 
> > 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
