xen-devel

[Xen-devel] qemu-dm segfault with multiple HVM domains?

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] qemu-dm segfault with multiple HVM domains?
From: John Clemens <jclemens@xxxxxxxxxxxxxxx>
Date: Wed, 22 Feb 2006 12:03:59 -0500 (EST)
Delivery-date: Wed, 22 Feb 2006 17:04:03 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

When running multiple HVM/VT Win2003 domains on a large system (4-socket, 8-core, HT), qemu-dm will often segfault in all but one of the domains. My last successful test was on Friday. (With yesterday's tip (8911), all my Windows domains hung on boot, and today's (8922) won't compile due to the physbase problem in domain.c for which a patch is pending. I'll try again with that patch applied soon, and then revert back until I find the changeset that broke it.)

The easiest way to reproduce is to start up four Windows domains in rapid succession. Each one will start to boot, but early in the boot process three of the four qemu-dm processes will crash. The fourth one continues on just fine. The core dumps left behind are garbage.
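
In case it helps, something along these lines is enough to trigger it here (the config paths are placeholders for my actual domU configs, and I'm assuming xm is on the PATH in dom0):

#!/usr/bin/env python
# Start four HVM Windows domains back-to-back, without waiting for the
# previous one to finish booting.  Config paths below are placeholders.
import subprocess

configs = ["/etc/xen/win2003-%d.cfg" % i for i in range(1, 5)]
for cfg in configs:
    subprocess.call(["xm", "create", cfg])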

From dom0 dmesg:
qemu-dm[4721]: segfault at 0000000000000000 rip 0000000000000000 rsp 0000000040800198 error 14
xenbr0: port 5(tun2) entering disabled state
device tun2 left promiscuous mode
xenbr0: port 5(tun2) entering disabled state
qemu-dm[4705]: segfault at 0000000000000000 rip 0000000000000000 rsp 0000000040800198 error 14
xenbr0: port 4(tun1) entering disabled state
device tun1 left promiscuous mode
xenbr0: port 4(tun1) entering disabled state
qemu-dm[4689]: segfault at 0000000000000000 rip 0000000000000000 rsp 0000000040800198 error 14


If you stagger the startup times a little bit, the domains will come up and stay up for a while (~10-30 minutes), and then all but one qemu-dm will eventually segfault. If I run just one domain it seems to work fine; it ran over the weekend under very light load without problems.
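
The staggered case looks roughly like this (again, placeholder config paths), with a crude poll of the qemu-dm count so the eventual crashes get a timestamp:

#!/usr/bin/env python
# Staggered variant: pause between "xm create" calls, then watch how many
# qemu-dm processes remain.  Config paths below are placeholders.
import subprocess, time

configs = ["/etc/xen/win2003-%d.cfg" % i for i in range(1, 5)]
for cfg in configs:
    subprocess.call(["xm", "create", cfg])
    time.sleep(60)                    # give each domain a head start

while True:
    out = subprocess.Popen(["pgrep", "qemu-dm"],
                           stdout=subprocess.PIPE).communicate()[0]
    print("%s qemu-dm count: %d" % (time.ctime(), len(out.split())))
    time.sleep(30)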

I see that others have been running HVM domain tests; has anyone run multiple Windows domains on HVM for any period of time? Has anyone tried to start multiple Windows domains in rapid succession?

Thanks,
john.c

--
John Clemens                    jclemens@xxxxxxxxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
