This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] re: [PATCH] mem_sharing: fix race condition of nominate and unshare

To: "'Jui-Hao Chiang'" <juihaochiang@xxxxxxxxx>
Subject: [Xen-devel] re: [PATCH] mem_sharing: fix race condition of nominate and unshare
From: tinnycloud <tinnycloud@xxxxxxxxxxx>
Date: Fri, 7 Jan 2011 15:35:06 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, 'Tim Deegan' <Tim.Deegan@xxxxxxxxxx>
Delivery-date: Thu, 06 Jan 2011 23:35:55 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTinp+YcOC8xJVrh3+ht4kucgFAhLc9eAkF8kqK0f@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTinMp1v1zex2BfcUuszotPuxJFWZQNUp40gu_gxL@xxxxxxxxxxxxxx> <BLU157-ds19B9B6B10F800B74320CFEDA0B0@xxxxxxx> <AANLkTinp+YcOC8xJVrh3+ht4kucgFAhLc9eAkF8kqK0f@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcuuNmg1DX61qOdYRv65PtfDkzUgswABh1sA

Hi Jui-Hao:


         I have no stub-dom for HVM. The domain ID starts from 1 and increases each time a new domain is created.

         The output you asked for is below, thanks.




664     shr_lock();
665     mfn = gfn_to_mfn(d, gfn, &p2mt);
666     /* Has someone already unshared it? */
667     printk("===will unshare mfn %lx p2mt %x gfn %lu did %d\n", mfn, p2mt, gfn, d->domain_id);
668     if (!p2m_is_shared(p2mt)) {
669         printk("===someone unshare mfn %lx p2mt %x gfn %lu did %d\n", mfn, p2mt, gfn, d->domain_id);
670         shr_unlock();
671         return 0;
672     }



------ output ------


(XEN) ===will unshare mfn 1728ae p2mt d gfn 512686 did 1
(XEN) ===will unshare mfn 1728ef p2mt d gfn 512751 did 1
(XEN) ===will unshare mfn 1729aa p2mt d gfn 512938 did 1
(XEN) ===will unshare mfn 1728f6 p2mt d gfn 512758 did 1
(XEN) ===will unshare mfn 2de94a p2mt d gfn 39754 did 1
(XEN) ===will unshare mfn 2de94b p2mt d gfn 39755 did 1
(XEN) ===will unshare mfn 2de94c p2mt d gfn 39756 did 1
(XEN) printk: 32 messages suppressed.

(XEN) mm.c:859:d0 Error getting mfn 2de94c (pfn fffffffffffffffe) from L1 entry 80000002de94c627 for l1e_owner=0, pg_owner=1
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48015d1d1>] get_page_from_l1e+0x351/0x4d0
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor
(XEN) rax: 007fffffffffffff   rbx: 0000000000000001   rcx: 0000000000000092
(XEN) rdx: 8000000000000002   rsi: 8000000000000003   rdi: ffff82f605bd2980
(XEN) rbp: 00000000002de94c   rsp: ffff82c48035fcd8   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 00000000fffffffb   r11: 0000000000000002
(XEN) r12: 0000000000000000   r13: fffffffffffffffe   r14: ffff82f605bd2980
(XEN) r15: 80000002de94c627   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000031b870000   cr2: 000000000098efa0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c48035fcd8:
(XEN)    80000002de94c627 ffff830200000000 ffff82c400000001 ffff82c4801df3d9
(XEN)    ffff83033e944930 ffff8300bf554000 ffff83023ff40000 000000000000014c
(XEN)    ffffffffffffffff 0000000000800627 ffff8302dd6a0000 0000000000000001
(XEN)    ffff83031ccb26b8 000000000031ccb2 80000002de94c627 ffff82c48016288b
(XEN)    000082c480168170 0000000000000000 0000000000000004 ffff8300bf554000
(XEN)    0000000000000009 000000000031ccb2 ffff83023fe60000 80000002de94c627
(XEN)    000000000031ccb2 ffff82c48035fedc 0000000000000000 000000010000014c
(XEN)    ffff83031babc000 0000000000000000 0000000000000000 0000000000000001
(XEN)    ffff83031ccb26b8 000000000031ccb2 8000000009b4c627 ffff82c480163f36
(XEN)    0000000000000001 ffff82c480161a49 ffff82c48035fe88 ffff82c48035fe88
(XEN)    00007ff0fffffffe 0000000000000000 0000000100000000 ffff8300bf554000
(XEN)    00000001bf554000 0000000000000000 000000010000c178 ffff82f606399640
(XEN)    0000000000000006 ffff83023fe60000 ffff8302dd6a0000 ffff8300bf554000
(XEN)    0000000000000000 0000000000000000 0000000000000000 000000008035ff28
(XEN)    ffff8801208d5c18 0000000080251008 000000031ccb26b8 8000000009b4c627
(XEN)    ffff82c480251008 ffff82c480251000 0000000000000000 ffff82c480113d7e
(XEN)    0000000d00000000 0000000000000001 00000001ffffffff ffff8300bf554000
(XEN)    ffff8801208d5d68 0000000000000001 ffff880121dbd0a8 00007f4de18d7000
(XEN)    0000000000000001 ffff82c4801e3169 0000000000000001 00007f4de18d7000
(XEN)    ffff880121dbd0a8 0000000000000001 ffff8801208d5d68 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c48015d1d1>] get_page_from_l1e+0x351/0x4d0
(XEN)    [<ffff82c4801df3d9>] ept_get_entry+0xa9/0x1c0
(XEN)    [<ffff82c48016288b>] mod_l1_entry+0x37b/0x9a0
(XEN)    [<ffff82c480163f36>] do_mmu_update+0x9f6/0x1a70
(XEN)    [<ffff82c480161a49>] do_mmuext_op+0x859/0x1320
(XEN)    [<ffff82c480113d7e>] do_multicall+0x14e/0x340
(XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae



From: Jui-Hao Chiang [mailto:juihaochiang@xxxxxxxxx]
Date: 2011.1.1 14:45
To: tinnycloud
CC: Tim Deegan; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare


Hi, tinnycloud:

(XEN) mm.c:859:d0 Error getting mfn 2df6a8 (pfn fffffffffffffffe) from L1 entry 80000002df6a8627 for l1e_owner=0, pg_owner=2

(XEN) mm.c:859:d0 Error getting mfn 2df6a9 (pfn fffffffffffffffe) from L1 entry 80000002df6a9627 for l1e_owner=0, pg_owner=2

Could you call dump_execution_state() at mm.c:859?
And in the unshare() function, could you move the printk outside the
(!p2m_is_shared(p2mt)) check?
If it stays inside, we never know whether unshare() actually runs or not (please also print out the mfn, p2mt, gfn, and domain_id).

Just out of curiosity, are you running stubdom? Your HVM guest having id = 2 is pretty weird.

