RE: [Xen-devel] Unable to start more than 103 VMs
Thanks, Keir. When we tried this scaling in December we were able to start 121 VMs, so I wasn't yet suspecting the IRQ limit at 103. No message was output to the serial console or to xend.log, but you are right, it's there right under my nose in dmesg:

eth1: port 105(vif104.0) entering learning state
eth1: topology change detected, propagating
eth1: port 105(vif104.0) entering forwarding state
vif104.0: no IPv6 routers present
blkback: ring-ref 8, event-channel 15, protocol 1 (x86_64-abi)
No available IRQ to bind to: increase NR_DYNIRQS.
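(Rough arithmetic, assuming each PV guest binds at least two dynamic IRQs in dom0 - one event channel for its blkback disk and one for its netback vif, with console and xenstore channels adding more: 103 guests x 2 = 206, and dom0's own timer, console, xenstore and physical-device interrupts could plausibly consume the remaining ~50 entries of the 256-entry pool. That would also fit the observation below that omitting the "vif =" line lets more than 103 guests start, since it roughly halves the per-guest IRQ cost.)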
brian carb
unisys corporation - malvern, pa
Christian pointed out that this is probably running into the default dynamic IRQ limit in dom0, which is 256. You should try raising NR_DYNIRQS in include/asm/mach-xen/irq_vectors.h. If you are hitting this limit you should get a warning in dom0's dmesg (or /var/log/messages) unless the loglevel is set too low.
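For reference, the change amounts to bumping a single #define. A minimal sketch, assuming the header carries the 256 default mentioned above (the new value of 512 is an illustrative choice, not something tested here):

/* include/asm/mach-xen/irq_vectors.h - hedged sketch, not a tested patch.
 * Each PV guest binds several event channels as dynamic IRQs in dom0,
 * so this pool has to scale with the number of guests you run.
 */
#define NR_DYNIRQS 512  /* assumed stock value is 256; doubled for 100+ guests */

The dom0 kernel needs to be rebuilt and rebooted for the new limit to take effect.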
K.
On 3/7/07 22:21, "Carb, Brian A" <Brian.Carb@xxxxxxxxxx> wrote:
Unable to start more than 103 VMs. Any ideas?

Running Xen Unstable (changeset 15445) on a Unisys ES7000/one on SLES10 Release, 64gb memory, 16 cpu, dom0_mem=512M, xenheap_megabytes=64. VMs are SLES10 para-virtual guests, each on their own lun on san storage.

We can start 103 VMs successfully. When starting the 104th, the VM times out waiting for its disk to appear:

XENBUS: Timeout connecting to device: device/vbd/2048 (state 6)
XENBUS: Timeout connecting to device: device/vif/0 (state 6)
XENBUS: Device with no driver: device/console/0
BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
EDD information not available.
Freeing unused kernel memory: 172k freed
Starting udevd
Creating devices
Loading jbd
jbd: no version for "struct_module" found: kernel tainted.
Loading ext3
Loading reiserfs
Loading jbd
Loading ext3
Waiting for device /dev/sda2 to appear: ..............................not found -- exiting to /bin/sh
sh: no job control in this shell
$
Noteworthy:
- domain-0 can see the lun of the failing VM correctly, and there are no errors if we run kpartx and mount the VM's root partition.
- we can reproduce the problem on xen-3.1 as well as xen-unstable.
- the problem still occurs if we split the VMs across 2 bridges - the 104th still fails.
- the problem also occurs if we start the VMs in any order - once 103 are started, the next one fails.
- if we comment out the "vif = " statement in the VM config files, we can start more than 103.

brian carb
unisys corporation - malvern, pa
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel