WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-devel] SMP dom0 and AMD64?

To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] SMP dom0 and AMD64?
From: Paul Larson <plars@xxxxxxxxxxxxxxxxx>
Date: Thu, 06 Oct 2005 17:46:31 -0500
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Nicholas Lee <emptysands@xxxxxxxxx>
Delivery-date: Fri, 07 Oct 2005 08:35:54 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D32E1FC@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D32E1FC@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Debian Thunderbird 1.0.2 (X11/20050602)
Ian Pratt wrote:

>> I noted that the config on an AMD64 machine for dom0 defaults to
>> non-SMP and domU defaults to SMP.
>>
>> Is there a recommended config for dom0 in this situation? I recall
>> reading a message saying SMP dom0 was more stable.
>
> SMP dom0 should work fine, but non-SMP is more stable.

Since we're supposed to be flushing out bugs, could we consider changing the default to SMP for dom0, to help get it more exposure here?
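For reference, the change being discussed amounts to flipping one option in the dom0 kernel's default config. A minimal sketch of the relevant fragment follows; the exact location of the default-config file in the Xen build tree varies by release, so the path mentioned in the comment is an assumption, not a confirmed detail:

```
# Hypothetical kconfig fragment for the dom0 kernel default config
# (e.g. something like buildconfigs/linux-defconfig_xen0_x86_64 in the
# Xen source tree -- the path is an assumption).

# Current default (uniprocessor dom0):
# CONFIG_SMP is not set

# Proposed default (SMP dom0, to get wider test exposure):
CONFIG_SMP=y
```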

-Paul Larson

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
