WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] Re: performance of credit2 on hybrid workload

To: George Dunlap <george.dunlap@xxxxxxxxxx>
Subject: Re: [Xen-devel] Re: performance of credit2 on hybrid workload
From: David Xu <davidxu06@xxxxxxxxx>
Date: Thu, 9 Jun 2011 15:50:13 -0400
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
> Remember though -- you can't just give a VM more CPU time.  Giving a VM
> more CPU at one time means taking CPU time away at another time.  I
> think the key is to think the opposite way -- taking away time from a
> VM by giving it a shorter timeslice, so that you can give time back when
> it needs it.

It seems that if the scheduler always schedules a VM first, it will use up its allocated credits sooner than the other VMs and start stealing credits from them, which may cause unfairness. Your suggestion to think the opposite way is reasonable. An efficient method to reduce scheduling latency for a specific VM is to preempt the currently running VM when an interrupt arrives. However, overly frequent context switches and interrupt processing may hurt performance as well. BTW, do you know how to give a VM running a mixed workload a shorter timeslice (e.g., 5 ms) while keeping the default value (30 ms) for the other VMs?
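To make the per-VM timeslice question concrete, here is a toy model (not Xen code; the `Vcpu` class and credit accounting are invented for this sketch) of how a shorter slice for one VM can coexist with fairness: the short-sliced VM is scheduled more often, but each slice debits credits proportionally, so its total CPU time stays the same.

```python
# Toy illustration (NOT actual Xen code): per-vCPU timeslices in a
# credit-style scheduler.  Xen's credit1 uses a single global 30 ms
# slice, so per-VM slices would need a patch along roughly these lines.

from dataclasses import dataclass

@dataclass
class Vcpu:
    name: str
    timeslice_ms: int = 30   # default credit1 slice length
    credits: int = 300       # toy credit pool, 1 credit per ms

def run_for_slice(v: Vcpu) -> int:
    """Run the vCPU for its own slice and debit credits accordingly.
    A 5 ms slice burns 6x fewer credits per invocation than a 30 ms
    one, so the short-sliced VM runs more often without receiving
    more total CPU time."""
    v.credits -= v.timeslice_ms
    return v.timeslice_ms

latency_vm = Vcpu("mixed-workload", timeslice_ms=5)
batch_vm = Vcpu("batch", timeslice_ms=30)

# Six short slices of the latency VM cost the same CPU time (and the
# same credits) as one long slice of the batch VM.
total_latency = sum(run_for_slice(latency_vm) for _ in range(6))
total_batch = run_for_slice(batch_vm)
assert total_latency == total_batch == 30
assert latency_vm.credits == batch_vm.credits == 270
```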

> I've just been talking to one of our engineers here who used to work for
> a company which sold network cards.  Our discussion convinced me that we
> shouldn't really need any more information about a VM than the
> interrupts which have been delivered to it: even devices which go into
> polling mode do so for a relatively brief period of time, then re-enable
> interrupts again.

Do you think a pending interrupt generally indicates a latency-sensitive workload? From my point of view, it indicates an I/O-intensive workload, which may not be latency-sensitive but may only require high throughput.
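One plausible way to separate the two cases (this heuristic and its threshold are assumptions for illustration, not anything in Xen) is to look not at the interrupt itself but at how the VM behaves after each wakeup: a VM that wakes, runs briefly, and blocks again looks latency-sensitive, while one that stays runnable for long stretches looks throughput-bound, even though both receive many interrupts.

```python
# Toy heuristic (an assumption, not Xen behavior): classify a VM by
# its average runtime per interrupt-driven wakeup.  Short bursts
# suggest a latency-sensitive workload; long runnable periods
# suggest a throughput-bound one.

def classify(run_times_ms, threshold_ms=1.0):
    """Return a label based on mean runtime per wakeup."""
    avg = sum(run_times_ms) / len(run_times_ms)
    return "latency-sensitive" if avg < threshold_ms else "throughput-bound"

assert classify([0.2, 0.4, 0.3]) == "latency-sensitive"    # ping-like traffic
assert classify([25.0, 28.0, 30.0]) == "throughput-bound"  # bulk transfer
```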

> Yes, I look forward to seeing the results of your work.  Are you going
> to be doing this on credit2?

I am not familiar with credit2 yet, but I will delve into it in the future. Of course, if I make any new progress, I will share my results with you.

2011/6/9 George Dunlap <george.dunlap@xxxxxxxxxx>
On Wed, 2011-06-08 at 22:43 +0100, David Xu wrote:
> Hi George,
>
>
> Thanks for your reply. I have similar ideas: adding another
> parameter that indicates the required latency, and letting the
> scheduler determine the latency characteristics of a VM automatically.
> First, adding another parameter and letting users set its value in
> advance sounds similar to SEDF. But sometimes the configuration
> process is hard and inflexible when the workloads in a VM are complex. So in
> my opinion, a task-aware scheduler is better. However, manual
> configuration can help us check the effectiveness of the new
> parameter.

Great!  Sounds like we're on the same page.

> On the other hand, as you described, it is also not easy or accurate
> to make the scheduler determine the latency characteristics of a VM
> automatically from the information we can get from the hypervisor, for
> instance delayed interrupts. Therefore, the key point for me is to
> find and implement a scheduling helper that indicates which VM should be
> scheduled soon.

Remember though -- you can't just give a VM more CPU time.  Giving a VM
more CPU at one time means taking CPU time away at another time.  I
think the key is to think the opposite way -- taking away time from a
VM by giving it a shorter timeslice, so that you can give time back when
it needs it.

> For example, for TCP traffic, we could implement a tool similar to a
> packet sniffer to capture packets and analyze their header information
> to infer the type of workload. The analysis result can then help the
> scheduler make a decision. In fact, not all I/O-intensive workloads
> require low latency; some of them only require high throughput. Of
> course, scheduling latency impacts throughput significantly (you
> addressed this problem with the boost mechanism to some extent).

The boost mechanism (and indeed the whole credit1 scheduler) was
actually written by someone else. :-)  And although it's good in theory,
the way it's implemented actually causes some problems.
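For readers unfamiliar with the mechanism being discussed: a rough model of credit1's boost (simplified from the public design; the names and the credit test here are this sketch's assumptions) is that a vCPU waking from sleep with credit remaining is placed at BOOST priority, ahead of UNDER and OVER vCPUs. That is what lets I/O VMs preempt CPU hogs, and also what causes trouble when many vCPUs wake at once.

```python
# Rough, simplified model of credit1-style boost (not Xen source).
# Lower priority value runs first.

BOOST, UNDER, OVER = 0, 1, 2

def wake(runqueue, vcpu):
    """A waking vCPU with credit left is boosted and jumps the queue."""
    vcpu["prio"] = BOOST if vcpu["credits"] > 0 else OVER
    runqueue.append(vcpu)
    runqueue.sort(key=lambda v: v["prio"])  # stable sort: BOOST goes first

rq = [{"name": "cpu-hog", "prio": UNDER, "credits": 50}]
wake(rq, {"name": "net-vm", "prio": UNDER, "credits": 10})
assert rq[0]["name"] == "net-vm"   # the waking I/O VM preempts the hog
```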

I've just been talking to one of our engineers here who used to work for
a company which sold network cards.  Our discussion convinced me that we
shouldn't really need any more information about a VM than the
interrupts which have been delivered to it: even devices which go into
polling mode do so for a relatively brief period of time, then re-enable
interrupts again.

> What I want is to reduce the latency only of VMs that require low
> latency while postponing other VMs, and to use other techniques such as
> packet offloading to compensate for their loss and improve their
> throughput.
>
>
> This is just my coarse idea and there are many open problems as well. I
> hope I can discuss with you often and share our results. Thanks very
> much.

Yes, I look forward to seeing the results of your work.  Are you going
to be doing this on credit2?

Peace,
 -George




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel