xen-users

Re: [Xen-users] SMP and CPU hyperthreading

To: "Lars E. D. Jensen | DCmedia" <ledj@xxxxxxxxxxx>
Subject: Re: [Xen-users] SMP and CPU hyperthreading
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Tue, 19 Apr 2005 20:18:32 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 19 Apr 2005 19:18:21 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <200504192020.01766.ledj@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <200504191801.59804.ledj@xxxxxxxxxxx> <200504191904.30577.mark.williamson@xxxxxxxxxxxx> <200504192020.01766.ledj@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.7.1
> Ok thanks.
>
> I'm using the 2.0.5 stable release, so I'll just assign domains to one of
> the four CPUs (0, 1, 2 or 3) until further notice.

IIRC, on a two-CPU system with hyperthreading, the logical CPU numbering is as follows:
0 : 1st hyperthread, 1st CPU
1 : 2nd hyperthread, 1st CPU
2 : 1st hyperthread, 2nd CPU
3 : 2nd hyperthread, 2nd CPU

If you don't want to use HT, just assign to 0 and 2 (for instance) and then 
you'll only ever have one domain running on each physical CPU at any time.
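
For reference, in the 2.0.x domain config files (which are just Python) you 
can pin a domain to a logical CPU with the 'cpu' option.  Something like the 
sketch below should do it; the domain name and memory values are only 
placeholders, so check it against the xmexample configs shipped with 2.0.5 
before relying on it:

    # Hypothetical /etc/xen/myvm excerpt (Xen 2.0 domain configs are Python).
    # 'cpu' pins the domain to one logical CPU; 2 is the 1st hyperthread of
    # the 2nd physical CPU in the numbering above.  Leaving it out (or using
    # -1, as in the example configs) lets Xen choose a CPU itself.
    name   = "myvm"    # placeholder domain name
    memory = 128       # placeholder memory size in MB
    cpu    = 2         # pin to logical CPU 2 (2nd physical CPU, 1st thread)

You can also re-pin a running domain with xm pincpu, though I'd check 
'xm help' on 2.0.5 for the exact argument order.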

HTH,
Mark

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
