WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
To: xen-bugs@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-bugs] [Bug 1582] New: Xen Live migration failed when domu has more than 1 VCPUs
From: bugzilla-daemon@xxxxxxxxxxxxxxxxxxx
Date: Wed, 10 Feb 2010 21:53:39 -0800
Delivery-date: Wed, 10 Feb 2010 21:53:46 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-bugs-request@lists.xensource.com?subject=help>
List-id: Xen Bugzilla <xen-bugs.lists.xensource.com>
List-post: <mailto:xen-bugs@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=unsubscribe>
Reply-to: bugs@xxxxxxxxxxxxxxxxxx
Sender: xen-bugs-bounces@xxxxxxxxxxxxxxxxxxx
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1582

           Summary: Xen Live migration failed when domu has more than 1
                    VCPUs
           Product: Xen
           Version: 3.0 (general)
          Platform: Other
        OS/Version: Linux-2.6
            Status: NEW
          Severity: normal
          Priority: P2
         Component: 2.6.18 domU
        AssignedTo: xen-bugs@xxxxxxxxxxxxxxxxxxx
        ReportedBy: irwanhadi@xxxxxxxxx


After one successful live migration, any subsequent live migration will always
fail when the domU has more than one VCPU.

Xen version: 3.4.0
Dom0: Centos 5.4 64 bits (kernel 2.6.18-164.9.1.el5xen)
Domu: Centos 5.4 64 bits (kernel 2.6.18-164.el5xen)
Hardware: Dell PowerEdge R710 with 2X Intel Xeon E5520 (2.26 Ghz)

Steps to reproduce:
1. Create a domU with more than one VCPU
2. Start the domU on one VM host (call it A)
3. Live-migrate the domU to another VM host (call it B)
4. Live-migrate the domU back to the original host (saving the domU also
triggers the bug)
5. The domU will crash.
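
The steps above boil down to one `xm create` and two `xm migrate --live` calls. A minimal sketch, assuming a Xen 3.4 dom0 with `xm` on the PATH; the domU name and hostnames (`testvm`, `hostA.example.com`, `hostB.example.com`) are placeholders to substitute for your own setup:

```shell
#!/bin/sh
# Hypothetical names -- substitute your own domU config and hosts.
DOMU=testvm
HOST_A=hostA.example.com
HOST_B=hostB.example.com

if command -v xm >/dev/null 2>&1; then
    # Steps 1-2: start a domU whose config sets vcpus > 1, on host A.
    xm create "/etc/xen/${DOMU}.cfg"

    # Step 3: first live migration (A -> B); this one succeeds.
    xm migrate --live "${DOMU}" "${HOST_B}"

    # Step 4: run this from host B -- migrating back (or `xm save`)
    # crashes the domU in its suspend path.
    # xm migrate --live "${DOMU}" "${HOST_A}"
else
    echo "xm not found; run these steps on a Xen dom0"
fi
```

The crash happens during the guest's suspend handling, so `xm save` on host B reproduces it just as well as the return migration.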


crashdump output:
=================================================
  SYSTEM MAP: /boot/System.map-2.6.18-164.el5xen
DEBUG KERNEL: /usr/lib/debug/lib/modules/2.6.18-164.el5xen/vmlinux
(2.6.18-164.el5xen)
    DUMPFILE: 2010-0210-1906.09-migrating-server01.test.9.core
        CPUS: 4
        DATE: Wed Feb 10 19:06:09 2010
      UPTIME: 00:02:58
LOAD AVERAGE: 0.04, 0.06, 0.02
       TASKS: 103
    NODENAME: server01.test
     RELEASE: 2.6.18-164.el5xen
     VERSION: #1 SMP Thu Sep 3 04:03:03 EDT 2009
     MACHINE: x86_64  (2260 Mhz)
      MEMORY: 3.9 GB
       PANIC: "Oops: 0000 [1] SMP " (check log for details)
         PID: 3806
     COMMAND: "suspend"
        TASK: ffff8800f948e860  [THREAD_INFO: ffff8800ed3c6000]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)
=================================================


Backtrace:
crash> bt
PID: 3806   TASK: ffff8800f948e860  CPU: 0   COMMAND: "suspend"
 #0 [ffff8800ed3c79d0] xen_panic_event at ffffffff80270957
 #1 [ffff8800ed3c7a00] notifier_call_chain at ffffffff802679ef
 #2 [ffff8800ed3c7a20] panic at ffffffff8028cadb
 #3 [ffff8800ed3c7b10] oops_end at ffffffff802650f0
 #4 [ffff8800ed3c7b20] do_page_fault at ffffffff80267905
 #5 [ffff8800ed3c7c10] error_exit at ffffffff8026082b
    [exception RIP: find_first_bit+18]
    RIP: ffffffff80251680  RSP: ffff8800ed3c7cc8  RFLAGS: 00010203
    RAX: 0000000000000000  RBX: 0000000000000001  RCX: 0000000000000004
    RDX: 0000000000000018  RSI: 000000000000013e  RDI: 0000000000000018
    RBP: 0000000000000000   R8: 0000000000000001   R9: ffff8800f9db9e58
    R10: ffff8800f9db9e58  R11: ffffffff803a8f2d  R12: 0000000000000000
    R13: 0000000000000000  R14: 0000000000000000  R15: ffffffff8029b92c
    ORIG_RAX: ffffffffffffffff  CS: e030  SS: e02b
 #6 [ffff8800ed3c7cc8] __first_cpu at ffffffff8033f9b2
 #7 [ffff8800ed3c7cd8] cacheinfo_cpu_callback at ffffffff8027a249
 #8 [ffff8800ed3c7da8] notifier_call_chain at ffffffff802679ef
 #9 [ffff8800ed3c7dc8] cpu_down at ffffffff802a02cf
#10 [ffff8800ed3c7e48] smp_suspend at ffffffff803aff62
#11 [ffff8800ed3c7e68] __do_suspend at ffffffff803b07fb
#12 [ffff8800ed3c7ee8] kthread at ffffffff80233bcd
#13 [ffff8800ed3c7f48] kernel_thread at ffffffff80260b2c
crash>
crash>


dmesg:
=================================================
Initializing CPU#1
Initializing CPU#2
Initializing CPU#3
Unable to handle kernel NULL pointer dereference at 0000000000000018 RIP:
 [<ffffffff80251680>] find_first_bit+0x12/0x2a
PGD 0
Oops: 0000 [1] SMP
last sysfs file: /class/cpuid/cpu3/dev
CPU 0
Modules linked in: autofs4 i2c_dev i2c_core lockd sunrpc dm_mirror dm_multipath
scsi_dh scsi_mod parport_pc lp parport xennet pcspkr dm_raid45 dm_message
dm_region_hash dm_log dm_mod dm_mem_cache xenblk ext3 jbd uhci_hcd ohci_hcd
ehci_hcd
Pid: 3806, comm: suspend Not tainted 2.6.18-164.el5xen #1
RIP: e030:[<ffffffff80251680>]  [<ffffffff80251680>] find_first_bit+0x12/0x2a
RSP: e02b:ffff8800ed3c7cc8  EFLAGS: 00010203
RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000004
RDX: 0000000000000018 RSI: 000000000000013e RDI: 0000000000000018
RBP: 0000000000000000 R08: 0000000000000001 R09: ffff8800f9db9e58
R10: ffff8800f9db9e58 R11: ffffffff803a8f2d R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: ffffffff8029b92c
FS:  00002acdb3ca06e0(0000) GS:ffffffff805ca000(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000
Process suspend (pid: 3806, threadinfo ffff8800ed3c6000, task ffff8800f948e860)
Stack:  ffffffff8033f9b2  ffff880000e86ed0  ffffffff8027a249  ffff8800f9db9e58
 ffff8800f95a6490  ffff8800f9db9e58  ffffffff802fea2f  ffffffff806971c8
 ffffffff8020ba34  ffff8800f95a6480
Call Trace:
 [<ffffffff8033f9b2>] __first_cpu+0xe/0x1d
 [<ffffffff8027a249>] cacheinfo_cpu_callback+0x3e1/0x45d
 [<ffffffff802fea2f>] sysfs_remove_dir+0x10e/0x122
 [<ffffffff8020ba34>] kfree+0x15/0xc5
 [<ffffffff8020ba34>] kfree+0x15/0xc5
 [<ffffffff8034014b>] kobject_cleanup+0x62/0x7e
 [<ffffffff80340167>] kobject_release+0x0/0x9
 [<ffffffff80340167>] kobject_release+0x0/0x9
 [<ffffffff8029b92c>] keventd_create_kthread+0x0/0xc4
 [<ffffffff802679ef>] notifier_call_chain+0x20/0x32
 [<ffffffff802a02cf>] cpu_down+0x1a9/0x297
 [<ffffffff80287f88>] deactivate_task+0x28/0x5f
 [<ffffffff803b0799>] __do_suspend+0x0/0x5ec
 [<ffffffff803aff62>] smp_suspend+0x1f/0x82
 [<ffffffff803b0799>] __do_suspend+0x0/0x5ec
 [<ffffffff803b07fb>] __do_suspend+0x62/0x5ec
 [<ffffffff80286f8d>] __wake_up_common+0x3e/0x68
 [<ffffffff8029b92c>] keventd_create_kthread+0x0/0xc4
 [<ffffffff8029b92c>] keventd_create_kthread+0x0/0xc4
 [<ffffffff80233bcd>] kthread+0xfe/0x132
 [<ffffffff80260b2c>] child_rip+0xa/0x12
 [<ffffffff8029b92c>] keventd_create_kthread+0x0/0xc4
 [<ffffffff80233acf>] kthread+0x0/0x132
 [<ffffffff80260b22>] child_rip+0x0/0x12


Code: f3 48 af 74 08 48 83 ef 08 48 0f bc 07 48 29 d7 48 c1 e7 03
RIP  [<ffffffff80251680>] find_first_bit+0x12/0x2a
 RSP <ffff8800ed3c7cc8>
CR2: 0000000000000018
 <0>Kernel panic - not syncing: Fatal exception
=================================================


-- 
Configure bugmail: 
http://bugzilla.xensource.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

_______________________________________________
Xen-bugs mailing list
Xen-bugs@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-bugs