WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-devel] qemu-dm segfault with multiple HVM domains?

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] qemu-dm segfault with multiple HVM domains?
From: John Clemens <jclemens@xxxxxxxxxxxxxxx>
Date: Wed, 22 Feb 2006 16:37:09 -0500 (EST)
Delivery-date: Wed, 22 Feb 2006 21:37:13 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <Pine.LNX.4.63.0602211510440.2141@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <Pine.LNX.4.63.0602211510440.2141@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

Just verifying that with cs 8932 I still see this problem. I'm able to start multiple Windows domains, but either immediately or over time all but one of the qemu-dm processes segfault. This only appears to be a problem when I start multiple Windows domains; a single domain seems to work fine.

qemu-dm[4961]: segfault at 0000000000000000 rip 0000000000000000 rsp 0000000040800198 error 14
qemu-dm[4963]: segfault at 0000000000000000 rip 0000000000000000 rsp 0000000040800198 error 14
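As a side note on reading those log lines: assuming the kernel prints the error code in hex (as x86_64 kernels of that era did) and that it follows the standard x86 page-fault error-code bit layout, the value can be decoded with a small sketch like the one below. The helper name is illustrative, not part of qemu or Xen.

```python
# Sketch: decode an x86 page-fault error code from a Linux
# "segfault at ... error NN" line. Bit meanings follow the
# standard x86 page-fault error-code layout.

PF_BITS = [
    (1 << 0, "protection violation"),  # bit clear = page not present
    (1 << 1, "write access"),          # bit clear = read
    (1 << 2, "user mode"),             # bit clear = kernel mode
    (1 << 3, "reserved bit set"),
    (1 << 4, "instruction fetch"),
]

def decode_pf_error(code: int) -> list[str]:
    """Return human-readable flags for a page-fault error code."""
    flags = [name for bit, name in PF_BITS if code & bit]
    if not (code & 1):
        flags.insert(0, "page not present")
    return flags

# error 14 (hex) = 0b10100: a user-mode instruction fetch from a
# non-present page -- consistent with a jump through a NULL function
# pointer, which matches rip 0000000000000000 in the log above.
print(decode_pf_error(0x14))
```

If that reading is right, all of the crashing qemu-dm processes are calling through a NULL pointer rather than faulting on a data access.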

john.c

--
John Clemens                    jclemens@xxxxxxxxxxxxxxx


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
