This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] [RFC] PLE's performance enhancement through improving scheduler.

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [RFC]PLE's performance enhancement through improving scheduler.
From: "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
Date: Wed, 18 Aug 2010 13:51:44 +0800
Accept-language: en-US
Cc: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
Delivery-date: Tue, 17 Aug 2010 22:53:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acs+mXAM4Ye9pjxJRsSrUB9wDlhb3Q==
Thread-topic: [RFC]PLE's performance enhancement through improving scheduler.
The attached patch is for RFC only, not for check-in. We have recently been working on enhancing the hardware PLE (Pause-Loop Exiting) feature by improving the scheduler in Xen, and the attached patch improves system throughput significantly. With a standard virtualization benchmark (vConsolidate), testing shows a ~20% performance gain.

The implementation enhances the system's scheduler in two ways. The first is that when a PLE vmexit occurs, the scheduler de-schedules the vcpu and puts it in the second position of the runq instead of moving it to the tail, so that it is re-scheduled after only a very short time. This improves the scheduler's fairness and gives PLE-sensitive guests a reasonable timeslice. The other improvement is to boost the priority of the other vcpus of the same guest, by moving them to the head of the runq, when a PLE vmexit happens on one vcpu of that guest.

We are still improving the implementation to make it more robust and more pervasive, but before that work is done we would like to collect your ideas and suggestions. Any comments are much appreciated. Thanks!
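To make the two policies concrete, here is a minimal sketch in C of the runq manipulations described above. It models the per-CPU runq as a simple fixed-size array (head at index 0), which is an illustrative simplification, not Xen's actual credit-scheduler structures; the function and type names (`struct runq`, `ple_requeue_second`, `ple_boost_head`) are hypothetical.

```c
#include <assert.h>

/* Hypothetical, simplified model of a per-CPU run queue:
 * a fixed-size array of vcpu ids, index 0 = head (runs next). */
#define RUNQ_MAX 8

struct runq {
    int vcpu[RUNQ_MAX];
    int len;
};

/* Policy 1: on a PLE vmexit, requeue the yielding vcpu at the SECOND
 * position of the runq instead of the tail, so it is re-scheduled
 * after only a very short time. */
static void ple_requeue_second(struct runq *q, int v)
{
    int pos = (q->len > 0) ? 1 : 0;   /* slot right behind the head */
    for (int i = q->len; i > pos; i--)
        q->vcpu[i] = q->vcpu[i - 1];  /* shift later entries down */
    q->vcpu[pos] = v;
    q->len++;
}

/* Policy 2: when one vcpu of a guest takes a PLE vmexit, boost a
 * sibling vcpu of the same guest to the HEAD of the runq (it may be
 * holding the lock the exiting vcpu is spinning on). */
static void ple_boost_head(struct runq *q, int v)
{
    for (int i = q->len; i > 0; i--)
        q->vcpu[i] = q->vcpu[i - 1];  /* shift everything down one */
    q->vcpu[0] = v;
    q->len++;
}
```

For example, with a runq of `{10, 11}`, requeuing vcpu 42 via `ple_requeue_second` yields `{10, 42, 11}` (behind the current head), while boosting vcpu 7 via `ple_boost_head` then yields `{7, 10, 42, 11}`.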

Attachment: sched-ple.patch
Description: sched-ple.patch

Xen-devel mailing list