Re: [Xen-devel] max vcpus in dom0

To: Pasi Kärkkäinen <pasik@xxxxxx>
Subject: Re: [Xen-devel] max vcpus in dom0
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Tue, 27 Apr 2010 11:38:30 -0700
Cc: "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>, JBeulich@xxxxxxxxxx
Delivery-date: Tue, 27 Apr 2010 11:38:57 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100427085921.GR17817@xxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20100426191733.39f8b8c2@xxxxxxxxxxxxxxxxxxxx> <20100427085921.GR17817@xxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100330 Fedora/3.0.4-1.fc12 Lightning/1.0b2pre Thunderbird/3.0.4
On 04/27/2010 01:59 AM, Pasi Kärkkäinen wrote:
>> Looking at a bug where dom0 crashes when coming up with more than 32
>> vcpus: the problem happens while trying to initialize the 32nd vcpu.
>> I see that the shared info is limited to 32 vcpus, implying we'd have
>> a hard limit of 32 vcpus in dom0, correct?
>
> 'shared info' in the Xen hypervisor or in the dom0 kernel?

Both ;) It's shared.
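
For context: the limit comes from the fixed-size vcpu_info array in the
shared-info page that the hypervisor and the guest kernel both map. A
trimmed sketch of the layout, roughly following xen/include/public/xen.h
from this era (field list abbreviated here for illustration):

    #include <stdint.h>

    #define XEN_LEGACY_MAX_VCPUS 32   /* hard-wired slot count on x86 */

    struct vcpu_info {
        uint8_t       evtchn_upcall_pending;
        uint8_t       evtchn_upcall_mask;
        unsigned long evtchn_pending_sel;
        /* per-arch and per-vcpu time fields omitted */
    };

    struct shared_info {
        /* Only the first 32 vcpus get a slot in the shared page,
         * which is why dom0 falls over initializing vcpu 32. */
        struct vcpu_info vcpu_info[XEN_LEGACY_MAX_VCPUS];
        unsigned long    evtchn_pending[sizeof(unsigned long) * 8];
        unsigned long    evtchn_mask[sizeof(unsigned long) * 8];
        /* wallclock and arch-specific fields omitted */
    };

A guest that wants more vcpus has to move each vcpu's vcpu_info out of
the shared page (the VCPUOP_register_vcpu_info hypercall exists for
that) before the extra vcpus can be brought up.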

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
