xen-devel

Re: [Xen-devel] Priority for SMP VMs

To: "Mark Williamson" <mark.williamson@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] Priority for SMP VMs
From: "Gabriel Southern" <gsouther@xxxxxxx>
Date: Mon, 21 Jul 2008 23:43:28 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx

Hi Mark,

Thanks for the reply.  I'll be interested to see whether you have any
additional thoughts after I describe one of the tests that I have run.

The system that I have been working with is a dual quad-core machine,
so it has eight logical processors.  Most of my tests run 8 VMs
simultaneously, with varying numbers of VCPUs assigned to each VM,
using various benchmarks from the SPEC CPU2006 suite.

One test that does not use the SPEC benchmarks and is probably the
easiest to replicate is as follows:

Eight VMs were configured with varying numbers of VCPUs, ranging from
1 to 8.  Each VM ran a program with the same number of threads as it
had VCPUs (the 1-VCPU VM ran 1 thread, the 8-VCPU VM ran 8 threads),
and each thread executed an infinite loop designed to consume CPU
time.  No cap was set and each VM had a weight of 256.
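
In case anyone wants to replicate this without the SPEC suite: the
load generator is trivial.  A minimal C sketch along the lines of what
I ran (illustrative, not my exact source) is:

    /* spin.c -- burn CPU on N threads, N given on the command line.
     * Build: gcc -pthread -o spin spin.c
     * Run:   ./spin <nthreads>
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *burn(void *arg)
    {
        /* volatile so the compiler cannot optimize the loop away */
        volatile unsigned long counter = 0;
        for (;;)
            counter++;
        return NULL;  /* never reached */
    }

    int main(int argc, char **argv)
    {
        int i, nthreads = (argc > 1) ? atoi(argv[1]) : 1;
        pthread_t *tids;

        if (nthreads < 1)
            nthreads = 1;
        tids = malloc(nthreads * sizeof(*tids));
        if (tids == NULL)
            return 1;
        printf("spinning with %d threads\n", nthreads);
        for (i = 0; i < nthreads; i++)
            pthread_create(&tids[i], NULL, burn, NULL);
        /* the threads never exit; block until the process is killed */
        for (i = 0; i < nthreads; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }

Each VM then runs the program with its own VCPU count as the argument
(e.g. ./spin 4 in the 4-VCPU VM).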

From what I understand of how the credit scheduler works, I would
expect each VM in this case to receive 12.5% of the total system CPU
time.  However, after running this test for a couple of hours, the
host CPU time had been allocated as follows:

1-VCPU VM: 12.14%
2-VCPU VM: 9.26%
3-VCPU VM: 11.58%
4-VCPU VM: 12.81%
5-VCPU VM: 13.35%
6-VCPU VM: 13.53%
7-VCPU VM: 13.62%
8-VCPU VM: 13.72%
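
(For what it's worth, the fair-share arithmetic I'm assuming above:
with equal weights and no caps, each domain's share should be its
weight divided by the sum of all weights, i.e. 256 / (8 * 256) = 1/8 =
12.5% of the host, or one full logical CPU's worth on this machine.)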

As you can see, the number of VCPUs changes the allocation of CPU
time, so that VMs with fewer VCPUs receive less CPU time than their
configured weight says they should.  I'm not sure why the 1-VCPU VM
received more CPU time in this test than the 2- and 3-VCPU VMs.
Overall the trend I have seen is that assigning more VCPUs to a VM
slightly increases that VM's priority on an overcommitted host,
although in this test the 1-VCPU VM did not follow that trend exactly.

I'd be interested to hear any thoughts you have on these results,
either comments on my experimental setup or ideas about why the
scheduling algorithm exhibits this behavior.

Thanks,

-Gabriel

On Mon, Jul 21, 2008 at 5:00 PM, Mark Williamson
<mark.williamson@xxxxxxxxxxxx> wrote:
> Hi Gabriel,
>
> I'm not particularly familiar with the credit scheduler but I'll do my best to
> help clarify things a bit (I hope!).
>
> On Thursday 03 July 2008, Gabriel Southern wrote:
>> Hi,
>>
>> I'm working on a project with SMP VMs and I noticed something about the
>> behavior of the credit scheduler that does not match my understanding
>> of the documentation about the credit scheduler.  It seems like
>> assigning more VCPUs to a VM increases the proportion of total system
>> CPU resources the VM will receive, whereas the documentation indicates
>> that this should be controlled by the weight value.
>>
>> For example, when running a CPU-intensive benchmark with some VMs
>> configured with 1 VCPU and other VMs configured with 8 VCPUs, the
>> benchmark took 37% longer to complete on the VMs with 1 VCPU than on
>> the ones with 8 VCPUs.  Unfortunately I did not record the exact values
>> for CPU time that each VM received; however, I think that the 8-VCPU
>> VMs did receive around 30% more CPU time than the 1-VCPU VMs.  These
>> tests were performed with the default weight of 256 for all VMs and no
>> cap configured.
>
> You need to tell us a bit more about how you did your benchmarking...  Were
> the SMP and UP guests running concurrently and competing for CPU time?  Or
> were they run separately?  Was the benchmark able to take advantage of
> multiple CPUs itself?
>
>> I don't think that this is the behavior that the scheduler should
>> exhibit based on the documentation I read.  I admit the tests I was
>> doing were not really practical use cases for real applications.  But
>> I'd be curious if anyone knows if this is a limitation of the design
>> of the credit scheduler, or possibly due to a configuration problem
>> with my system.  I'm running Xen 3.2.0 compiled from the official source
>> distribution tarball, and the guest VMs are also using the 3.2.0
>> distribution with the 2.6.18 kernel.  Any ideas anyone has about why
>> my system is behaving this way are appreciated.
>
> Without knowing more about your setup there are lots of things that could be
> happening...
>
> If you're not using caps then there's no reason why the SMP guests shouldn't
> get more CPU time if they're somehow able to consume more slack time in the
> system.  SMP scheduling makes things pretty complicated!
>
> If you reply with more details, I can try and offer my best guess as to what
> might be happening.  If you don't get a response within a day or two, please
> feel free to poke me directly.
>
> Cheers,
> Mark
>
>>
>> Thanks,
>>
>> Gabriel
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
>
>
>
> --
> Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
