xen-devel
RE: [Xen-devel] Xen unstable crash
We ran our automated test suite (detailed results follow; it is the same suite
used for the bi-weekly VMX status report sent by Haicheng), but still can't
trigger the issue. Andrew Lyon, can you share more detail on how this issue is
reproduced?
Thanks
Yunhong Jiang
Test Environment:
================================================================
Platform : x86_64
Service OS : Red Hat Enterprise Linux Server release 5.1 (Tikanga)
Hardware : Nehalem
Xen package: 19043:10a8fae412c5
Platform : PAE
Service OS : Red Hat Enterprise Linux Server release 5.2 (Tikanga)
Hardware : Nehalem
Xen package: 19043:10a8fae412c5
Details:
=====================================================================
X86_64:
Summary Test Report of Last Session
=====================================================================
                         Total  Pass  Fail  NoResult  Crash
vtd_ept_vpid                16    11     5         0      0
ras_ept_vpid                 1     1     0         0      0
control_panel_ept_vpid      18    18     0         0      0
stubdom_ept_vpid             2     1     1         0      0
gtest_ept_vpid              22    22     0         0      0
acpi_ept_vpid                5     1     4         0      0
device_model_ept_vpid        2     2     0         0      0
=====================================================================
vtd_ept_vpid 16 11 5 0 0
:two_dev_up_xp_nomsi_64_ 1 1 0 0 0
:two_dev_smp_nomsi_64_g3 1 1 0 0 0
:two_dev_scp_64_g32e 1 0 1 0 0
:lm_pcie_smp_64_g32e 1 0 1 0 0
:lm_pcie_up_64_g32e 1 0 1 0 0
:two_dev_up_64_g32e 1 0 1 0 0
:lm_pcie_up_xp_nomsi_64_ 1 1 0 0 0
:two_dev_up_nomsi_64_g32 1 1 0 0 0
:two_dev_smp_64_g32e 1 0 1 0 0
:lm_pci_up_xp_nomsi_64_g 1 1 0 0 0
:lm_pci_up_nomsi_64_g32e 1 1 0 0 0
:two_dev_smp_xp_nomsi_64 1 1 0 0 0
:two_dev_scp_nomsi_64_g3 1 1 0 0 0
:lm_pcie_smp_xp_nomsi_64 1 1 0 0 0
:lm_pci_smp_nomsi_64_g32 1 1 0 0 0
:lm_pci_smp_xp_nomsi_64_ 1 1 0 0 0
ras_ept_vpid 1 1 0 0 0
:cpu_online_offline_64_g 1 1 0 0 0
control_panel_ept_vpid 18 18 0 0 0
:XEN_1500M_guest_64_g32e 1 1 0 0 0
:XEN_LM_Continuity_64_g3 1 1 0 0 0
:XEN_256M_xenu_64_gPAE 1 1 0 0 0
:XEN_four_vmx_xenu_seq_6 1 1 0 0 0
:XEN_vmx_vcpu_pin_64_g32 1 1 0 0 0
:XEN_SR_Continuity_64_g3 1 1 0 0 0
:XEN_linux_win_64_g32e 1 1 0 0 0
:XEN_vmx_2vcpu_64_g32e 1 1 0 0 0
:XEN_1500M_guest_64_gPAE 1 1 0 0 0
:XEN_four_dguest_co_64_g 1 1 0 0 0
:XEN_two_winxp_64_g32e 1 1 0 0 0
:XEN_four_sguest_seq_64_ 1 1 0 0 0
:XEN_256M_guest_64_gPAE 1 1 0 0 0
:XEN_LM_SMP_64_g32e 1 1 0 0 0
:XEN_Nevada_xenu_64_g32e 1 1 0 0 0
:XEN_256M_guest_64_g32e 1 1 0 0 0
:XEN_SR_SMP_64_g32e 1 1 0 0 0
:XEN_four_sguest_seq_64_ 1 1 0 0 0
stubdom_ept_vpid 2 1 1 0 0
:boot_stubdom_no_qcow_64 1 1 0 0 0
:boot_stubdom_qcow_64_g3 1 0 1 0 0
gtest_ept_vpid 22 22 0 0 0
:boot_up_acpi_win2k_64_g 1 1 0 0 0
:boot_up_noacpi_win2k_64 1 1 0 0 0
:reboot_xp_64_g32e 1 1 0 0 0
:boot_solaris10u5_64_g32 1 1 0 0 0
:boot_up_vista_64_g32e 1 1 0 0 0
:boot_indiana_64_g32e 1 1 0 0 0
:boot_up_acpi_xp_64_g32e 1 1 0 0 0
:boot_smp_acpi_xp_64_g32 1 1 0 0 0
:boot_up_acpi_64_g32e 1 1 0 0 0
:boot_base_kernel_64_g32 1 1 0 0 0
:boot_up_win2008_64_g32e 1 1 0 0 0
:kb_nightly_64_g32e 1 1 0 0 0
:boot_up_acpi_win2k3_64_ 1 1 0 0 0
:boot_nevada_64_g32e 1 1 0 0 0
:boot_smp_vista_64_g32e 1 1 0 0 0
:ltp_nightly_64_g32e 1 1 0 0 0
:boot_fc9_64_g32e 1 1 0 0 0
:boot_smp_win2008_64_g32 1 1 0 0 0
:boot_smp_acpi_win2k3_64 1 1 0 0 0
:boot_rhel5u1_64_g32e 1 1 0 0 0
:reboot_fc6_64_g32e 1 1 0 0 0
:boot_smp_acpi_win2k_64_ 1 1 0 0 0
acpi_ept_vpid 5 1 4 0 0
:monitor_c_status_64_g32 1 0 1 0 0
:check_t_control_64_g32e 1 0 1 0 0
:hvm_s3_sr_64_g32e 1 0 1 0 0
:hvm_s3_smp_64_g32e 1 0 1 0 0
:monitor_p_status_64_g32 1 1 0 0 0
device_model_ept_vpid 2 2 0 0 0
:pv_on_up_64_g32e 1 1 0 0 0
:pv_on_smp_64_g32e 1 1 0 0 0
=====================================================================
Total 66 56 10 0 0
32PAE:
Summary Test Report of Last Session
=====================================================================
                         Total  Pass  Fail  NoResult  Crash
vtd_ept_vpid                16    11     5         0      0
ras_ept_vpid                 1     1     0         0      0
control_panel_ept_vpid      14    14     0         0      0
stubdom_ept_vpid             2     1     1         0      0
gtest_ept_vpid              24    24     0         0      0
device_model_ept_vpid        2     0     0         2      0
=====================================================================
vtd_ept_vpid 16 11 5 0 0
:lm_pcie_smp_xp_nomsi_PA 1 1 0 0 0
:lm_pci_up_xp_nomsi_PAE_ 1 1 0 0 0
:lm_pci_up_nomsi_PAE_gPA 1 1 0 0 0
:two_dev_scp_nomsi_PAE_g 1 1 0 0 0
:lm_pcie_up_xp_nomsi_PAE 1 1 0 0 0
:lm_pci_smp_xp_nomsi_PAE 1 1 0 0 0
:two_dev_up_PAE_gPAE 1 0 1 0 0
:two_dev_up_xp_nomsi_PAE 1 1 0 0 0
:lm_pcie_smp_PAE_gPAE 1 0 1 0 0
:two_dev_smp_xp_nomsi_PA 1 1 0 0 0
:two_dev_smp_PAE_gPAE 1 0 1 0 0
:two_dev_smp_nomsi_PAE_g 1 1 0 0 0
:two_dev_up_nomsi_PAE_gP 1 1 0 0 0
:two_dev_scp_PAE_gPAE 1 0 1 0 0
:lm_pcie_up_PAE_gPAE 1 0 1 0 0
:lm_pci_smp_nomsi_PAE_gP 1 1 0 0 0
ras_ept_vpid 1 1 0 0 0
:cpu_online_offline_PAE_ 1 1 0 0 0
control_panel_ept_vpid 14 14 0 0 0
:XEN_four_vmx_xenu_seq_P 1 1 0 0 0
:XEN_four_dguest_co_PAE_ 1 1 0 0 0
:XEN_SR_SMP_PAE_gPAE 1 1 0 0 0
:XEN_linux_win_PAE_gPAE 1 1 0 0 0
:XEN_Nevada_xenu_PAE_gPA 1 1 0 0 0
:XEN_LM_SMP_PAE_gPAE 1 1 0 0 0
:XEN_SR_Continuity_PAE_g 1 1 0 0 0
:XEN_vmx_vcpu_pin_PAE_gP 1 1 0 0 0
:XEN_LM_Continuity_PAE_g 1 1 0 0 0
:XEN_256M_guest_PAE_gPAE 1 1 0 0 0
:XEN_1500M_guest_PAE_gPA 1 1 0 0 0
:XEN_two_winxp_PAE_gPAE 1 1 0 0 0
:XEN_four_sguest_seq_PAE 1 1 0 0 0
:XEN_vmx_2vcpu_PAE_gPAE 1 1 0 0 0
stubdom_ept_vpid 2 1 1 0 0
:boot_stubdom_no_qcow_PA 1 1 0 0 0
:boot_stubdom_qcow_PAE_g 1 0 1 0 0
gtest_ept_vpid 24 24 0 0 0
:boot_up_acpi_PAE_gPAE 1 1 0 0 0
:ltp_nightly_PAE_gPAE 1 1 0 0 0
:reboot_xp_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_xp_PAE_gPA 1 1 0 0 0
:boot_up_vista_PAE_gPAE 1 1 0 0 0
:boot_up_acpi_win2k3_PAE 1 1 0 0 0
:boot_smp_acpi_win2k3_PA 1 1 0 0 0
:boot_smp_acpi_win2k_PAE 1 1 0 0 0
:boot_up_acpi_win2k_PAE_ 1 1 0 0 0
:boot_smp_acpi_xp_PAE_gP 1 1 0 0 0
:boot_up_noacpi_win2k_PA 1 1 0 0 0
:boot_smp_vista_PAE_gPAE 1 1 0 0 0
:boot_up_noacpi_win2k3_P 1 1 0 0 0
:boot_nevada_PAE_gPAE 1 1 0 0 0
:boot_solaris10u5_PAE_gP 1 1 0 0 0
:boot_indiana_PAE_gPAE 1 1 0 0 0
:boot_rhel5u1_PAE_gPAE 1 1 0 0 0
:boot_base_kernel_PAE_gP 1 1 0 0 0
:boot_up_win2008_PAE_gPA 1 1 0 0 0
:boot_up_noacpi_xp_PAE_g 1 1 0 0 0
:boot_smp_win2008_PAE_gP 1 1 0 0 0
:reboot_fc6_PAE_gPAE 1 1 0 0 0
:boot_fc10_PAE_gPAE 1 1 0 0 0
:kb_nightly_PAE_gPAE 1 1 0 0 0
device_model_ept_vpid 2 0 0 2 0
:pv_on_up_PAE_gPAE 1 0 0 1 0
:pv_on_smp_PAE_gPAE 1 0 0 1 0
=====================================================================
Total 59 51 6 2 0
Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> If you can reproduce this bug, it's worth trying to revert c/s 19285 and try
> again:
> hg export 19285 | patch -Rp1
> To put the tree back into a clean state afterwards:
> hg diff | patch -Rp1
>
> If the bug still reproduces another possible culprit is c/s 19317.
>
> -- Keir
>
> On 13/03/2009 12:55, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>
>> Thanks for the log. It seems the count_info is -1UL in this situation; I
>> think it may be because of some change to count_info, and I will try to check it.
>>
>> Thanks
>> Yunhong Jiang
>>
>>> -----Original Message-----
>>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>>> Sent: 13 March 2009 18:39
>>> To: Andrew Lyon; Xen-devel
>>> Cc: Jiang, Yunhong
>>> Subject: Re: [Xen-devel] Xen unstable crash
>>>
>>> Thanks. Our testing has showed this up too. The cause hasn't been tracked
>>> down yet unfortunately.
>>>
>>> -- Keir
>>>
>>> On 13/03/2009 10:33, "Andrew Lyon" <andrew.lyon@xxxxxxxxx> wrote:
>>>
>>>> Hi,
>>>>
>>>> Running Xen unstable on a Dell Optiplex 755, after starting and
>>>> shutting down a few HVM guests, the system crashes with the following message:
>>>>
>>>> This is with Xensource 2.6.18.8 kernel:
>>>>
>>>> (XEN) ** page_alloc.c:407 -- 449/512 ffffffffffffffff
>>>> (XEN) Xen BUG at page_alloc.c:409
>>>> (XEN) ----[ Xen-3.4-unstable x86_64 debug=y Not tainted ]----
>>>> (XEN) CPU:   1
>>>> (XEN) RIP:   e008:[<ffff828c8011206f>] alloc_heap_pages+0x35a/0x486
>>>> (XEN) RFLAGS: 0000000000010286  CONTEXT: hypervisor
>>>> (XEN) rax: 0000000000000000  rbx: ffff82840199b820  rcx: 0000000000000001
>>>> (XEN) rdx: 000000000000000a  rsi: 000000000000000a  rdi: ffff828c801fedec
>>>> (XEN) rbp: ffff830127fdfcb8  rsp: ffff830127fdfc58  r8:  0000000000000004
>>>> (XEN) r9:  0000000000000004  r10: 0000000000000010  r11: 0000000000000010
>>>> (XEN) r12: ffff828401998000  r13: 00000000000001c1  r14: 0000000000000200
>>>> (XEN) r15: 0000000000000000  cr0: 0000000080050033  cr4: 00000000000026f0
>>>> (XEN) cr3: 000000011f46e000  cr2: 000000000133a000
>>>> (XEN) ds: 0000  es: 0000  fs: 0063  gs: 0000  ss: e010  cs: e008
>>>> (XEN) Xen stack trace from rsp=ffff830127fdfc58:
>>>> (XEN)    0000000900000001 0000000000000098 0000000000000200 0000000100000001
>>>> (XEN)    ffff830127fdfcf8 ffff828c801a5c68 ffff830127fdfccc ffff830127fdff28
>>>> (XEN)    0000000000000027 0000000000000000 ffff83011d9c4000 0000000000000000
>>>> (XEN)    ffff830127fdfcf8 ffff828c8011383b 0100000400000009 ffff830127fdff28
>>>> (XEN)    0000000000000006 0000000044803760 00000000448037c0 0000000000000000
>>>> (XEN)    ffff830127fdff08 ffff828c80110591 0000000000000000 0000000000000000
>>>> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
>>>> (XEN)    0000000000000200 0000000000000001 0000000000000000 0000000000000001
>>>> (XEN)    ffff828c80214d00 ffff830127fdfda8 ffff830127fdfde8 ffff828c80115811
>>>> (XEN)    0000000000000001 ffff828c80151777 ffff830127fdfda8 ffff828c8013c559
>>>> (XEN)    0000000000000004 0000020000000001 ffff830127fdfdb8 ffff828c8013c5f6
>>>> (XEN)    ffff830127fdfde8 ffff828c80107247 ffff830127fdfde8 ffff828c8011d73e
>>>> (XEN)    0000000000000001 0000000000000000 ffff830127fdfe28 ffff83011d9c4000
>>>> (XEN)    0000000000000282 0000000400000009 0000000044803750 ffff8300cfdfc030
>>>> (XEN)    ffff830127ff1f28 0000000000000002 ffff830127fdfe58 ffff828c80119cb7
>>>> (XEN)    00007cfed8020197 ffff828c80239180 0000000000000002 ffff830127fdfe68
>>>> (XEN)    ffff828c8011c0c8 ffff828c80239180 ffff830127fdfe78 ffff828c8014c173
>>>> (XEN)    ffff830127fdfe98 ffff828c801d166d ffff830127fdfe98 00000000000cce00
>>>> (XEN)    000000000001f600 ffff828c801d2063 0000000044803750 0000000000000004
>>>> (XEN)    0000000000000009 000000000000000a 00002b820b4a5eb7 ffff83011d9c4000
>>>> (XEN) Xen call trace:
>>>> (XEN)    [<ffff828c8011206f>] alloc_heap_pages+0x35a/0x486
>>>> (XEN)    [<ffff828c8011383b>] alloc_domheap_pages+0x128/0x17b
>>>> (XEN)    [<ffff828c80110591>] do_memory_op+0x988/0x17a7
>>>> (XEN)    [<ffff828c801cf1bf>] syscall_enter+0xef/0x149
>>>> (XEN)
>>>> (XEN)
>>>> (XEN) ****************************************
>>>> (XEN) Panic on CPU 1:
>>>> (XEN) Xen BUG at page_alloc.c:409
>>>> (XEN) ****************************************
>>>> (XEN)
>>>>
>>>> And here again running opensuse 2.6.27 kernel:
>>>>
>>>> (XEN) ** page_alloc.c:534 -- 0/1 ffffffffffffffff
>>>> (XEN) Xen BUG at page_alloc.c:536
>>>> (XEN) ----[ Xen-3.4-unstable x86_64 debug=y Not tainted ]----
>>>> (XEN) CPU:   1
>>>> (XEN) RIP:   e008:[<ffff828c80112d77>] free_heap_pages+0x12f/0x4b8
>>>> (XEN) RFLAGS: 0000000000010206  CONTEXT: hypervisor
>>>> (XEN) rax: 0000000000000000  rbx: ffff82840236e3e0  rcx: 0000000000000001
>>>> (XEN) rdx: ffffffffffffffff  rsi: 000000000000000a  rdi: ffff828c801fedec
>>>> (XEN) rbp: ffff830127fdfe90  rsp: ffff830127fdfe40  r8:  0000000000000004
>>>> (XEN) r9:  0000000000000004  r10: 0000000000000010  r11: 0000000000000010
>>>> (XEN) r12: ffff82840236e3e0  r13: 0000000000000000  r14: 0000000000000000
>>>> (XEN) r15: 0080000000000000  cr0: 000000008005003b  cr4: 00000000000026f0
>>>> (XEN) cr3: 00000000bf1b2000  cr2: 00000000004878d5
>>>> (XEN) ds: 0000  es: 0000  fs: 0000  gs: 0000  ss: e010  cs: e008
>>>> (XEN) Xen stack trace from rsp=ffff830127fdfe40:
>>>> (XEN)    0000000000000001 ffff82840236e3e0 0000000127fdfe90 0000000000000001
>>>> (XEN)    0000000000000000 0000000000000200 c2c2c2c2c2c2c2c2 ffff82840236e3c0
>>>> (XEN)    ffff82840236e2e0 ffff830000000000 ffff830127fdfed0 ffff828c8011329a
>>>> (XEN)    000000985dfe4f8c 0000000000000001 ffff830127fdff28 ffff828c80297880
>>>> (XEN)    0000000000000002 ffff828c80239100 ffff830127fdff00 ffff828c8011ba21
>>>> (XEN)    ffff8800e9be5d80 ffff830127fdff28 ffff828c802375b0 ffff8300cee8a000
>>>> (XEN)    ffff830127fdff20 ffff828c8013ca78 0000000000000001 ffff8300cfaee000
>>>> (XEN)    ffff830127fdfda8 ffff8800e9be5d80 ffff8800ea1006c0 ffffffff8070f1c0
>>>> (XEN)    000000000000008f ffff8800c4df3c98 0000000000000184 0000000000000246
>>>> (XEN)    ffff8800c4df3d68 ffff8800eab76b00 0000000000000000 0000000000000000
>>>> (XEN)    ffffffff802073aa 0000000000000009 00000000deadbeef 00000000deadbeef
>>>> (XEN)    0000010000000000 ffffffff802073aa 000000000000e033 0000000000000246
>>>> (XEN)    ffff8800c4df3c60 000000000000e02b 7f766dfbff79beef fddffff4f3b9beef
>>>> (XEN)    008488008022beef 0001000a0a03beef f7f5ff7b00000001 ffff8300cfaee000
>>>> (XEN) Xen call trace:
>>>> (XEN)    [<ffff828c80112d77>] free_heap_pages+0x12f/0x4b8
>>>> (XEN)    [<ffff828c8011329a>] page_scrub_softirq+0x19a/0x23c
>>>> (XEN)    [<ffff828c8011ba21>] do_softirq+0x6a/0x77
>>>> (XEN)    [<ffff828c8013ca78>] idle_loop+0x9d/0x9f
>>>> (XEN)
>>>> (XEN)
>>>> (XEN) ****************************************
>>>> (XEN) Panic on CPU 1:
>>>> (XEN) Xen BUG at page_alloc.c:536
>>>> (XEN) ****************************************
>>>> (XEN)
>>>> (XEN) Reboot in five seconds...
>>>>
>>>>
>>>> Andy
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>>>> http://lists.xensource.com/xen-devel