We're seeing a CPU scheduling behavior in Xen that we're hoping someone can explain.
We're running Xen 3.0.4 on a Unisys ES7000/one with 8 CPUs (4 dual-core sockets) and 32GB of memory. Xen is built on SLES10, and the system is booted with dom0_mem=512mb. We have 2 paravirtual machines, each booted with 2 vcpus and 2GB of memory, and each running SLES10 and Apache2 with the worker multi-processing module (MPM).
The vcpus of dom0, vm1 and vm2 are pinned as follows:

- dom0 is limited to 2 vcpus (xm vcpu-set 0 2), and these are pinned to CPUs 0-1
- vm1 uses 2 vcpus, pinned to CPUs 2-3
- vm2 uses 2 vcpus, pinned to CPUs 2-3

CPUs 4 through 7 are left unused.
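For reference, the pinning is set up roughly as follows (the domain names vm1 and vm2 here stand in for our actual guest names):

    # reduce dom0 to 2 vcpus and pin them to CPUs 0 and 1
    xm vcpu-set 0 2
    xm vcpu-pin 0 0 0
    xm vcpu-pin 0 1 1

    # each guest's 2 vcpus are allowed to float over CPUs 2-3
    xm vcpu-pin vm1 0 2-3
    xm vcpu-pin vm1 1 2-3
    xm vcpu-pin vm2 0 2-3
    xm vcpu-pin vm2 1 2-3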
Our test runs http_load against the Apache2 web servers in the 2 vms. Since Apache2 is using the worker multi-processing module, we expect each vm to spread its load over its 2 vcpus, and during the test we have verified that it does, using top and sar inside each vm's console.
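The load generator is invoked along these lines (the URL files and parameters shown are representative, not our exact settings):

    # drive both web servers concurrently for 5 minutes
    http_load -parallel 50 -seconds 300 urls.vm1.txt &
    http_load -parallel 50 -seconds 300 urls.vm2.txt &
    wait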
The odd behavior occurs when we monitor CPU usage using xenmon in interactive mode. By pressing "c", we can observe the load on each of the CPUs. Initially, CPUs 2 and 3 are each used equally by vm1 and vm2. However, shortly after we start our testing, CPU 2 runs vm1 exclusively 100% of the time, and CPU 3 runs vm2 100% of the time. When the test completes, CPUs 2 and 3 go back to sharing the load of vm1 and vm2.
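For completeness, we simply run the monitor from dom0:

    # interactive mode; press "c" to view per-CPU usage
    xenmon.py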
Is this the expected behavior?
Brian Carb
Unisys Corporation - Malvern, PA