[Xen-devel] RE: Power aware credit scheduler

To: "Emmanuel Ackaouy" <ackaouy@xxxxxxxxx>
Subject: [Xen-devel] RE: Power aware credit scheduler
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Fri, 20 Jun 2008 08:40:28 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, "Wei, Gang" <gang.wei@xxxxxxxxx>, "Yu, Ke" <ke.yu@xxxxxxxxx>
In-reply-to: <07121FD6-E912-4A85-B841-B939A1FEE0D0@xxxxxxxxx>
References: <D470B4E54465E3469E2ABBC5AFAC390F024D9444@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <46C27FF3-24A7-48DA-9ABA-BCCB3E9DD30C@xxxxxxxxx> <D470B4E54465E3469E2ABBC5AFAC390F024D9454@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <07121FD6-E912-4A85-B841-B939A1FEE0D0@xxxxxxxxx>
Thread-topic: Power aware credit scheduler
>From: Emmanuel Ackaouy [mailto:ackaouy@xxxxxxxxx] 
>Sent: June 19, 2008 22:38
>
>On Jun 19, 2008, at 15:32 , Tian, Kevin wrote:
>>> Regardless of any new knobs, a good default behavior might be
>>> to only take a package out of C-state when another non-idle
>>> package has had more than one VCPU active on it over some
>>> reasonable amount of time.
>>>
>>> By default, putting multiple VCPUs on the same physical package
>>> when other packages are idle is obviously not always going to
>>> be optimal. Maybe it's not a bad default for VCPUs that are
>>> related (same VM or qemu)? I think Ian P hinted at this. But it
>>> frightens me that you would always do this by default for any set
>>> of VCPUs. Power saving is good but so is memory bandwidth.
>>
>> Enabling this feature depends on a control command from the system
>> administrator, who knows the tradeoff. From an absolute-performance
>> point of view, I believe it's not optimal. However, from a
>> performance-per-watt (i.e. power-efficiency) angle, the power saved
>> by package-level idling may outweigh the performance impact of
>> confining activity to the other package. Of course, memory latency
>> should also ultimately be considered on NUMA systems, as you mentioned.
>
>I'm saying something can be done to improve power saving in
>the current system without adding a knob. Perhaps you can give
>the admin even more power saving abilities with a knob, but it
>makes sense to save power when performance is not impacted,
>regardless of any knob position.

Then I agree. It's always good to improve one metric while leaving the
other unharmed, or to first fix issues that hinder both. Then we'll also
compare whether a knob can achieve a clearly better result.

>
>Also, note I mentioned memory BANDWIDTH and not latency.
>It's not the same thing. And I wasn't just thinking about NUMA
>systems.
>

Thanks for pointing that out; I read too quickly. But I'm not sure how
memory bandwidth is affected by vcpu scheduling. Do you mean extra memory
traffic on the bus due to shared-cache contention when multiple vcpus run
in the same package? That would be workload specific, and other workloads
may not be affected to the same extent. But it's a good hint, and we'll
include such workloads in our experiments when making the change.
Considering the vcpu/domain relationship is another thing we can try, as
sketched below. The basic direction is to start simple and observe the
effect.
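
For the vcpu/domain relationship idea, a hypothetical placement order
could look like the following (again illustrative C with invented
structures, not Xen's real scheduler API): prefer a package already
running a sibling vcpu of the same domain, then a non-idle package with
a spare core so idle packages stay in C-states, and only wake an idle
package as a last resort.

/*
 * Hypothetical placement sketch: co-locate related VCPUs (same domain,
 * likely sharing cache) on one package, and prefer filling busy
 * packages before waking idle ones. All names are illustrative.
 */
#include <stdio.h>

#define NR_PACKAGES   4
#define CORES_PER_PKG 2

struct package {
    int nr_running;                /* VCPUs currently running on it */
    int domain_of[CORES_PER_PKG];  /* domain ID per busy core, -1 if idle */
};

static int pick_package(const struct package *pkgs, int nr, int domid)
{
    int sibling = -1, busy = -1, idle = -1;

    for (int i = 0; i < nr; i++) {
        const struct package *p = &pkgs[i];
        if (p->nr_running == 0) {
            if (idle < 0) idle = i;           /* last resort: wake it */
            continue;
        }
        if (p->nr_running < CORES_PER_PKG) {
            if (busy < 0) busy = i;           /* busy, with a spare core */
            for (int c = 0; c < CORES_PER_PKG; c++)
                if (p->domain_of[c] == domid)
                    sibling = i;              /* runs a sibling VCPU */
        }
    }
    if (sibling >= 0) return sibling;
    if (busy >= 0)    return busy;
    return idle;
}

int main(void)
{
    struct package pkgs[NR_PACKAGES] = {
        { .nr_running = 1, .domain_of = { 7, -1 } },  /* dom7 vcpu here */
        { .nr_running = 1, .domain_of = { 3, -1 } },
        { 0, { -1, -1 } }, { 0, { -1, -1 } },
    };

    /* A dom7 vcpu wakes: lands next to its sibling on package 0. */
    printf("dom7 vcpu -> package %d\n", pick_package(pkgs, NR_PACKAGES, 7));
    /* A dom9 vcpu wakes: lands on a busy package, not an idle one. */
    printf("dom9 vcpu -> package %d\n", pick_package(pkgs, NR_PACKAGES, 9));
    return 0;
}

Whether co-locating siblings actually helps will depend on the shared-cache
contention question above, which is exactly what the experiments should show.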

Thanks,
Kevin

