xen-users

RE: [Xen-users] Multiple VCPUs

To: "Jared Bellows" <xen@xxxxxxxxxxxxxxx>, "Xen Users" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Multiple VCPUs
From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
Date: Thu, 29 Jun 2006 19:27:27 +0200
Delivery-date: Thu, 29 Jun 2006 10:31:55 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <4fcf6a50606291012q37904a07g47da64521236bdc2@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acabn6+9MV8kdguuTDq5Rm1Nvh8CAQAAB9MA
Thread-topic: [Xen-users] Multiple VCPUs
Not sure if this will solve your problem or not, but if I understand things right, the HVM guest needs to have MP tables (MP = multiprocessor) generated during startup (or hard-coded in the BIOS), which the default build didn't have until my colleague Travis sent some patches in a few days ago. So anything other than xen-unstable doesn't have this change - it went in on Wednesday (yesterday).
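If you want to convince yourself the tables are there: a Linux HVM guest logs its MP-table discovery at boot, so inside such a guest something like the following works (exact messages vary by kernel version, so treat this as a sketch):

dmesg | grep -i "MP-table"          # e.g. "found SMP MP-table at ..."
grep -c ^processor /proc/cpuinfo    # should match vcpus= in the config

A Windows guest showing a single CPU in Device Manager is telling you the same thing.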
 
Get the latest unstable and it should work, or edit .../tools/firmware/rombios/Makefile to have

BIOS_BUILDS += BIOS-bochs-8-processors

and change .../tools/firmware/hvmloader/Makefile to use

sh ./mkhex rombios ../rombios/BIOS-bochs-8-processors > roms.h

[And remove the other BIOS line, of course]
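For completeness, the rebuild after those two edits goes roughly like this (a sketch only - the tree name, make targets and install step are from memory, so adjust to your setup):

cd xen-unstable.hg          # wherever your unstable tree lives
make -C tools/firmware      # rebuild rombios and hvmloader with the 8-CPU BIOS
make -C tools install       # reinstall /usr/lib/xen/boot/hvmloader

# then recreate the guest and check from dom0:
xm vcpu-list windows        # should now show one line per VCPU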

--
Mats


From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jared Bellows
Sent: 29 June 2006 18:13
To: Xen Users
Subject: [Xen-users] Multiple VCPUs

I have a self-built system using an Intel D 920 processor and a motherboard that supports VT. I'm able to run HVM domains fine, but I'm having trouble getting multiple VCPUs for these domains. Here is an example of one of my configs.
 
#  -*- mode: python; -*-
#============================================================================
# Python configuration setup for 'xm create'.
# This script sets the parameters used when a domain is created using 'xm create'.
# You use a separate script for each domain you want to create, or
# you can set the parameters for the domain on the xm command line.
#============================================================================

import os, re
arch = os.uname()[4]
if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'

#----------------------------------------------------------------------------
# Kernel image file.
kernel = "/usr/lib/xen/boot/hvmloader"

# The domain build function. HVM domain uses 'hvm'.
builder='hvm'

# Initial memory allocation (in megabytes) for the new domain.
memory = 256

# A name for your domain. All domains must have different names.
name = "windows"

#-----------------------------------------------------------------------------
# the number of CPUs the guest platform has, default=1
vcpus=2

# enable/disable HVM guest PAE, default=0 (disabled)
#pae=0

# enable/disable HVM guest ACPI, default=0 (disabled)
#acpi=0

# enable/disable HVM guest APIC, default=0 (disabled)
#apic=0

# List of which CPUS this domain is allowed to use, default Xen picks
#cpus = ""         # leave to Xen to pick
#cpus = "0"        # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5

# Optionally define mac and/or bridge for the network interfaces.
# Random MACs are assigned if not given.
#vif = [ 'type=ioemu, mac=00:16:3e:00:00:11, bridge=xenbr0' ]
# type=ioemu specifies the NIC is an ioemu device, not netfront
vif = [ 'type=ioemu, bridge=xenbr0' ]

#----------------------------------------------------------------------------
# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.

#disk = [ 'phy:hda1,hda1,r' ]
disk = [ 'file:/root/windows.img,ioemu:hda,w' ]

#----------------------------------------------------------------------------
# Configure the behaviour when a domain exits.  There are three 'reasons'
# for a domain to stop: poweroff, reboot, and crash.  For each of these you
# may specify:
#
#   "destroy",        meaning that the domain is cleaned up as normal;
#   "restart",        meaning that a new domain is started in place of the old
#                     one;
#   "preserve",       meaning that no clean-up is done until the domain is
#                     manually destroyed (using xm destroy, for example); or
#   "rename-restart", meaning that the old domain is not cleaned up, but is
#                     renamed and a new domain started in its place.
#
# The default is
#
#   on_poweroff = 'destroy'
#   on_reboot   = 'restart'
#   on_crash    = 'restart'
#
# For backwards compatibility we also support the deprecated option restart
#
# restart = 'onreboot' means
#                            on_reboot   = 'restart'
#                            on_crash    = 'destroy'
#
# restart = 'always'   means
#                            on_reboot   = 'restart'
#                            on_crash    = 'restart'
#
# restart = 'never'    means
#                            on_reboot   = 'destroy'
#                            on_crash    = 'destroy'

#
#on_reboot   = 'restart'
#on_crash    = 'restart'

#============================================================================

# New stuff
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'

#-----------------------------------------------------------------------------
# Disk image for the emulated CD-ROM drive
cdrom='/images/xp.iso'

#-----------------------------------------------------------------------------
# boot on floppy (a), hard disk (c) or CD-ROM (d)
boot='d'
#-----------------------------------------------------------------------------
#  write to temporary files instead of disk image files
#snapshot=1

#----------------------------------------------------------------------------
# enable SDL library for graphics, default = 0
sdl=0

#----------------------------------------------------------------------------
# enable VNC library for graphics, default = 1
vnc=1

#----------------------------------------------------------------------------
# enable spawning vncviewer (only valid when vnc=1), default = 1
vncviewer=0

#----------------------------------------------------------------------------
# no graphics, use serial port
#nographic=0

#----------------------------------------------------------------------------
# enable stdvga, default = 0 (use cirrus logic device model)
stdvga=1

#-----------------------------------------------------------------------------
#   serial port redirected to a pty device, /dev/pts/n
#   then xm console or minicom can connect
serial='pty'

#----------------------------------------------------------------------------
# enable ne2000, default = 0 (use pcnet)
ne2000=0


#-----------------------------------------------------------------------------
#   enable audio support
#audio=1


#-----------------------------------------------------------------------------
#    set the real-time clock to local time [default=0, i.e. set to UTC]
localtime=1


#-----------------------------------------------------------------------------
#    start in full screen
#full-screen=1
-----------------------------------------------------------------------------------------------

I'm running Xen 3.0.2-2 from the binary download. The HVM domain only sees one CPU, and xm list shows only one VCPU.
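For reference, this is what I see from dom0 (output sketched from memory, not verbatim):

xm list                 # the VCPUs column shows 1 for "windows", despite vcpus=2
xm vcpu-list windows    # likewise lists only a single VCPU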

Any help would be greatly appreciated.

Jared

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users