Xiaoyun,
Thanks for all of your work getting page sharing working. When
submitting patches, please break them down into individual chunks,
each of which does one thing. Each patch should also include a
comment saying what the patch does and why, and a Signed-off-by line
indicating that you certify that the copyright holder (possibly you)
is placing the code under the GPL.
Mercurial queues:
http://mercurial.selenic.com/wiki/MqExtension
and the mercurial patchbomb extension:
http://mercurial.selenic.com/wiki/PatchbombExtension
are particularly handy for this process.
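For example, a typical flow looks roughly like this (just a sketch; the
patch name, summary, and addresses below are placeholders, and it assumes
the mq and patchbomb extensions are enabled in your .hgrc):

  hg qnew -m "mem_sharing: <what the patch does and why>

  Signed-off-by: Your Name <you@example.com>" 01-mem-sharing-foo.patch
  # ...edit the code, then fold the changes into the current patch...
  hg qrefresh
  # with the mq patches applied, mail them out as a series:
  hg email -r qbase:qtip --to xen-devel@lists.xensource.com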
Quick comment: the ept-locking part of the patch is going to be
NACK-ed, as it introduces a circular locking dependency with the
hap_lock. c/s 22526:7a5ee380 has an explanation of the circular
dependency, and a fix which avoids it. (The fix may need to be adapted
a bit to apply to 4.0-testing.)
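For reference, the failure mode is the classic ABBA pattern: one code path
takes the two locks in one order while another path takes them in the
reverse order, so each side can end up waiting on the lock the other holds.
A generic illustration (plain pthreads with made-up names, not the Xen
locking primitives):

/* Illustration only -- an ABBA deadlock, not the actual Xen code. */
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* think: hap_lock */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* think: the ept lock */

static void *path_one(void *arg)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);   /* waits forever if path_two holds lock_b */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *path_two(void *arg)
{
    pthread_mutex_lock(&lock_b);
    pthread_mutex_lock(&lock_a);   /* waits forever if path_one holds lock_a */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

The usual fix is to make every path take the locks in one consistent order.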
-George
On Mon, Jan 31, 2011 at 10:13 AM, tinnycloud <tinnycloud@xxxxxxxxxxx> wrote:
> Hi:
>
>
> Attached is the whole patch suite for the latest xen-4.0-testing, changeset 21443.
> It comes from George, Tim and JuiHao, and also has some extra debug info.
>
> On most occasions, memory sharing works fine, but a bug still exists.
> I've been tracing this problem for a while.
> There is a bug around get_page_and_type() in mem_sharing_share_pages():
>
> --------------------------------------------mem_sharing_share_pages()-----------------------
> 789        if(!get_page_and_type(spage, dom_cow, PGT_shared_page)){
> 790            mem_sharing_debug_gfn(cd, gfn->gfn);
> 791            mem_sharing_debug_gfn(sd, sgfn->gfn);
> 792            printk("c->dying %d s->dying %d spage %p se->mfn %lx\n",
>                       cd->is_dying, sd->is_dying, spage, se->mfn);
> 793            printk("Debug page: MFN=%lx is ci=%lx, ti=%lx, owner_id=%d\n",
> 794                   mfn_x(page_to_mfn(spage)),
> 795                   spage->count_info,
> 796                   spage->u.inuse.type_info,
> 797                   page_get_owner(spage)->domain_id);
> 798            BUG();
> 799        }
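>
> For context, get_page_and_type() is essentially a get_page() followed by a
> get_page_type(); the sketch below shows the semantics only and is not the
> exact xen-4.0 source. get_page() refuses to take the reference if the page
> is no longer owned by the expected domain (dom_cow here), e.g. because the
> page has already been freed, which is consistent with the page state in the
> log below.
>
>     /* Sketch of the semantics only, not the exact xen-4.0 source: */
>     static inline int get_page_and_type(struct page_info *page,
>                                         struct domain *d, unsigned long type)
>     {
>         if ( !get_page(page, d) )          /* fails if d no longer owns the page */
>             return 0;
>         if ( !get_page_type(page, type) )  /* fails on a conflicting type use */
>         {
>             put_page(page);
>             return 0;
>         }
>         return 1;
>     }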
>
> The panic log below contains the debug info from lines 790-798.
> We see ci=180000000000000, which is PGC_state_free, so it looks like a
> shared page has been freed unexpectedly.
>
> (XEN) teardown 64
> (XEN) teardown 66
> blktap_sysfs_destroy
> blktap_sysfs_create: adding attributes for dev ffff880109d0c200
> blktap_sysfs_destroy
> __ratelimit: 1 callbacks suppressed
> blktap_sysfs_destroy
> (XEN) Debug for domain=83, gfn=1e8dc, Debug page: MFN=1306dc is ci=8000000000000005, ti=8400000000000001, owner_id=32755
> (XEN) Debug for domain=79, gfn=1dc95, Invalid MFN=ffffffffffffffff
> (XEN) c->dying 0 s->dying 0 spage ffff82f60a0f12a0 se->mfn 507895
> (XEN) Debug page: MFN=507895 is ci= 180000000000000, ti=8400000000000001, owner_id=32755
> (XEN) Xen BUG at mem_sharing.c:798
> (XEN) ----[ Xen-4.0.2-rc2-pre x86_64 debug=n Not tainted ]----
> (XEN) CPU: 0
> (XEN) RIP: e008:[<ffff82c4801c3760>] mem_sharing_share_pages+0x5b0/0x5d0
> (XEN) RFLAGS: 0000000000010286 CONTEXT: hypervisor
> (XEN) rax: 0000000000000000 rbx: ffff83044ef76000 rcx: 0000000000000092
> (XEN) rdx: 000000000000000a rsi: 000000000000000a rdi: ffff82c4802237c4
> (XEN) rbp: ffff83030d99e310 rsp: ffff82c48035fc48 r8: 0000000000000001
> (XEN) r9: 0000000000000001 r10: 00000000fffffff8 r11: 0000000000000005
> (XEN) r12: ffff83050b558000 r13: ffff830626ec5740 r14: 0000000000347967
> (XEN) r15: ffff83030d99e068 cr0: 0000000080050033 cr4: 00000000000026f0
> (XEN) cr3: 000000010e642000 cr2: 00002abdc4809000
> (XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: e010 cs: e008
> (XEN) Xen stack trace from rsp=ffff82c48035fc48:
> (XEN) 000000000001e8dc ffff83030d99e068 0000000000507895 ffff83030d99e050
> (XEN) ffff82f60a0f12a0 ffff82f60260db80 ffff83030d99e300 ffff830626afbbd0
> (XEN) 0000000080372980 ffff82c48035fe38 ffff83023febe000 00000000008f7000
> (XEN) 0000000000305000 0000000000000006 0000000000000006 ffff82c4801c44a4
> (XEN) ffff82c480258188 ffff82c48011cb89 000000000001e8dc fffffffffffffff3
> (XEN) ffff82c48035fe28 ffff82c480148503 000003d932971281 0000000000000080
> (XEN) 000003d9329611d6 ffff82c48018aedf ffff82c4803903a8 ffff82c48035fd38
> (XEN) ffff82c480390380 ffff82c48035fe28 0000000000000002 0000000000000000
> (XEN) 0000000000000002 0000000000000000 ffff83023febe000 ffff83023ff80080
> (XEN) ffff82c48035fe58 0000000000000000 ffff82c48022ca80 0000000000000000
> (XEN) 0000000000000002 ffff82c48018a452 0000000000000000 ffff82c48016f6bb
> (XEN) ffff82c48035fe58 ffff82c4801538ea 000000023ab79067 fffffffffffffff3
> (XEN) ffff82c48035fe28 00000000008f7000 0000000000305000 0000000000000006
> (XEN) 0000000000000006 ffff82c4801042b3 0000000000000000 ffff82c48010d2a5
> (XEN) ffff830477fb51e8 0000000000000000 00007ff000000200 ffff8300bf76e000
> (XEN) 0000000600000039 0000000000000000 00007fc09d744003 0000000000325258
> (XEN) 0000000000347967 ffffffffff600429 000000004d463b17 0000000000062fe2
> (XEN) 0000000000000000 00007fc09d744070 00007fc09d744000 00007fffcfec0d20
> (XEN) 00007fc09d744078 0000000000430fd8 00007fffcfec0d88 00000000019f1ca8
> (XEN) 0000f05c00000000 00007fc09f4c2718 0000000000000000 0000000000000246
> (XEN) Xen call trace:
> (XEN) [<ffff82c4801c3760>] mem_sharing_share_pages+0x5b0/0x5d0
> (XEN) [<ffff82c4801c44a4>] mem_sharing_domctl+0xe4/0x130
> (XEN) [<ffff82c48011cb89>] cpumask_raise_softirq+0x89/0xa0
> (XEN) [<ffff82c480148503>] arch_do_domctl+0x14f3/0x22d0
> (XEN) [<ffff82c48018aedf>] handle_hpet_broadcast+0x16f/0x1d0
> (XEN) [<ffff82c48018a452>] hpet_legacy_irq_tick+0x42/0x50
> (XEN) [<ffff82c48016f6bb>] timer_interrupt+0xb/0x130
> (XEN) [<ffff82c4801538ea>] ack_edge_ioapic_irq+0x2a/0x70
> (XEN) [<ffff82c4801042b3>] do_domctl+0x163/0xfe0
> (XEN) [<ffff82c48010d2a5>] do_grant_table_op+0x75/0x1ad0
> (XEN) [<ffff82c4801e7169>] syscall_enter+0xa9/0xae
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Xen BUG at mem_sharing.c:798
> (XEN) ****************************************
> (XEN)
> (XEN) Manual reset required ('noreboot' specified)
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel