On Mon, Jun 15, 2009 at 11:03:17AM +0100, Ian Campbell wrote:
> On Thu, 2009-06-11 at 15:34 -0400, Pasi Kärkkäinen wrote:
> > On Thu, Jun 11, 2009 at 09:27:09PM +0300, Pasi Kärkkäinen wrote:
> > > On Thu, Jun 11, 2009 at 10:18:34AM +0100, Ian Campbell wrote:
> > > > Pasi, to validate the theory that you are seeing races between unpinning
> > > > and kmap_atomic_pte can you give this biguglystick approach to solving
> > > > it a go.
> > > >
> > >
> > > Guess what..
> > >
> > > Now my dom0 didn't crash !! (with only this patch applied).
> > > It survived kernel compilation just fine.. first time so far with pv_ops
> > > dom0.
> > >
> > > I'll try again, just in case.
> > >
> >
> > Yep, I tried again, and it still worked.
> >
> > No crashes anymore with this patch :) Congratulations and thanks!
>
> Oh good, thanks for testing. The patch is not really a suitable
> long-term fix as it is but it sounds like Jeremy has some ideas.
>
Yep. I'm only able to test patches until Thursday this week; after that
I'll be on summer vacation for a month, and I don't know yet how much I'll
be able to test patches during that period.
> I'm still curious how come you are the only one who sees this issue. I
> don't recall you having lots of processors in your dmesg which might
> make the race more common, nor do you have involuntary preempt enabled.
> Very strange. Oh well I guess it doesn't matter now ;-)
>
(XEN) Initializing CPU#0
(XEN) Detected 3000.241 MHz processor.
(XEN) CPU0: Intel(R) Pentium(R) 4 CPU 3.00GHz stepping 04
(XEN) Initializing CPU#1
(XEN) CPU1: Intel(R) Pentium(R) 4 CPU 3.00GHz stepping 04
(XEN) Total of 2 processors activated.
It's an old Intel P4 CPU with hyperthreading, so one physical CPU seen as
two logical CPUs.
The dom0 kernel/domain sees both CPUs:
SMP: Allowing 2 CPUs, 0 hotplug CPUs
Initializing CPU#0
CPU0: Intel P4/Xeon Extended MCE MSRs (12) available
Initializing CPU#1
CPU1: Intel P4/Xeon Extended MCE MSRs (12) available
Brought up 2 CPUs
But yeah, if the explanation for the problem is valid, I guess it doesn't
really matter then :)
-- Pasi
> Ian.
>
>
> >
> > -- Pasi
> >
> > >
> > > > diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> > > > index 1729178..beeb8e8 100644
> > > > --- a/arch/x86/xen/mmu.c
> > > > +++ b/arch/x86/xen/mmu.c
> > > > @@ -1145,9 +1145,12 @@ static int xen_unpin_page(struct mm_struct *mm, struct page *page,
> > > >  	return 0; /* never need to flush on unpin */
> > > >  }
> > > > 
> > > > +static DEFINE_SPINLOCK(hack_lock); /* Hack to sync unpin against kmap_atomic_pte */
> > > > +
> > > >  /* Release a pagetables pages back as normal RW */
> > > >  static void __xen_pgd_unpin(struct mm_struct *mm, pgd_t *pgd)
> > > >  {
> > > > +	spin_lock(&hack_lock);
> > > >  	xen_mc_batch();
> > > > 
> > > >  	xen_do_pin(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
> > > > @@ -1173,6 +1176,7 @@ static void __xen_pgd_unpin(struct mm_struct *mm, pgd_t *pgd)
> > > >  	__xen_pgd_walk(mm, pgd, xen_unpin_page, USER_LIMIT);
> > > > 
> > > >  	xen_mc_issue(0);
> > > > +	spin_unlock(&hack_lock);
> > > >  }
> > > > 
> > > >  static void xen_pgd_unpin(struct mm_struct *mm)
> > > > @@ -1521,6 +1525,9 @@ static void xen_pgd_free(struct mm_struct *mm, pgd_t *pgd)
> > > >  static void *xen_kmap_atomic_pte(struct page *page, enum km_type type)
> > > >  {
> > > >  	pgprot_t prot = PAGE_KERNEL;
> > > > +	void *ret;
> > > > +
> > > > +	spin_lock(&hack_lock);
> > > > 
> > > >  	if (PagePinned(page))
> > > >  		prot = PAGE_KERNEL_RO;
> > > > @@ -1530,7 +1537,11 @@ static void *xen_kmap_atomic_pte(struct page *page, enum km_type type)
> > > >  				page_to_pfn(page), type,
> > > >  				(unsigned long)pgprot_val(prot) & _PAGE_RW ? "WRITE" : "READ");
> > > > 
> > > > -	return kmap_atomic_prot(page, type, prot);
> > > > +	ret = kmap_atomic_prot(page, type, prot);
> > > > +
> > > > +	spin_unlock(&hack_lock);
> > > > +
> > > > +	return ret;
> > > >  }
> > > >  #endif
> > > >
> > > >
> > > >
> > >
>
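The hack_lock in the quoted patch simply holds one lock across both
__xen_pgd_unpin() and xen_kmap_atomic_pte(), so the PagePinned() check and
the mapping derived from it can never interleave with an unpin in progress.
Below is a minimal userspace sketch of that idea, illustrative only and not
kernel code: page_pinned, page_rw, unpinner and mapper are invented names,
and a pthread mutex stands in for the spinlock.

/* Build: gcc -O2 -pthread race_sketch.c -o race_sketch */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Stands in for the hack_lock spinlock in the patch. */
static pthread_mutex_t hack_lock = PTHREAD_MUTEX_INITIALIZER;

static bool page_pinned = true;   /* like PagePinned(page)                 */
static bool page_rw     = false;  /* pinned page-table pages are mapped RO */

/* Plays the role of __xen_pgd_unpin(): clear the pin and re-enable writes. */
static void *unpinner(void *arg)
{
	pthread_mutex_lock(&hack_lock);
	page_pinned = false;
	page_rw = true;                   /* the unpin batch */
	pthread_mutex_unlock(&hack_lock);
	return NULL;
}

/* Plays the role of xen_kmap_atomic_pte(): choose a protection from the
 * pin state and then "map" the page. */
static void *mapper(void *arg)
{
	pthread_mutex_lock(&hack_lock);
	bool want_ro = page_pinned;       /* the PagePinned() check      */
	bool got_rw  = page_rw;           /* the mapping that follows it */
	pthread_mutex_unlock(&hack_lock);

	/* With the lock held across both reads they are always consistent;
	 * without it, unpinner() can run in between, which is the kind of
	 * race the "biguglystick" patch serializes away. */
	printf("wanted %s mapping, page is %s\n",
	       want_ro ? "RO" : "RW", got_rw ? "RW" : "RO");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, unpinner, NULL);
	pthread_create(&b, NULL, mapper, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Removing the lock/unlock pairs from unpinner() and mapper() lets the two
reads in mapper() straddle the unpin, which is the inconsistency the patch
papers over; a proper long-term fix would avoid the global lock.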
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel