
Re: [Xen-devel] Re: performance of credit2 on hybrid workload


  • To: George Dunlap <george.dunlap@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: David Xu <davidxu06@xxxxxxxxx>
  • Date: Mon, 13 Jun 2011 12:52:18 -0400
  • Cc:
  • Delivery-date: Mon, 13 Jun 2011 09:53:25 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi,

Could you tell me how to check for pending interrupts during scheduling without adding extra risk of a crash? Thanks.
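To make the question concrete, here is the kind of check I have in mind. It is only a sketch: the helper itself is hypothetical, the field layout follows the public Xen interface (vcpu_info_t carries an evtchn_upcall_pending byte), and whether reading it from the scheduling hot path is really safe is exactly what I am unsure about.

    /* Sketch only -- hypothetical helper, not an existing Xen function.
     * It peeks at the guest-visible vcpu_info page to see whether an
     * event/interrupt has been delivered to the vCPU but not yet handled. */
    static inline int vcpu_has_pending_event(const struct vcpu *v)
    {
        /* Defensive checks so the scheduler path never dereferences NULL
         * for the idle vCPU or a vCPU whose vcpu_info is not set up yet. */
        if ( v == NULL || is_idle_vcpu(v) || v->vcpu_info == NULL )
            return 0;

        /* Read-only peek at guest-shared state, no locks taken; a stale
         * value only costs a slightly wrong scheduling decision. */
        return !!v->vcpu_info->evtchn_upcall_pending;
    }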

Regards,
Cong

2011/6/9 David Xu <davidxu06@xxxxxxxxx>
> Remember though -- you can't just give a VM more CPU time.  Giving a VM
> more CPU at one time means taking CPU time away at another time.  I
> think the key is to think the opposite way -- taking away time from a
> VM by giving it a shorter timeslice, so that you can give time back when
> it needs it.

It seems that if the scheduler always schedules a VM first, it will use up its allocated credits sooner than the other VMs and then steal credits from them, which may cause unfairness. Your suggestion to think the opposite way is reasonable. An efficient method to reduce scheduling latency for a specific VM is to preempt the currently running VM when an interrupt arrives. However, too-frequent context switches and interrupt processing may hurt performance as well. BTW, do you know how to give a VM running a mixed workload a shorter timeslice (e.g. 5ms) while keeping the other VMs at the default value (30ms)?
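To illustrate what I am asking for, something along these lines in xen/common/sched_credit.c is what I imagine -- purely a sketch, where tslice_ms is a hypothetical per-domain field that does not exist today and would need a new tool-stack knob to set it:

    /* Sketch only: pick a per-domain timeslice instead of the fixed
     * CSCHED_MSECS_PER_TSLICE (30ms).  "tslice_ms" is a hypothetical field
     * that would have to be added to struct csched_dom and exposed through
     * a new sysctl/tool-stack parameter. */
    static s_time_t csched_tslice(const struct csched_vcpu *svc)
    {
        unsigned int ms = svc->sdom->tslice_ms;   /* hypothetical field    */

        if ( ms == 0 )                            /* 0 means "use default" */
            ms = CSCHED_MSECS_PER_TSLICE;         /* credit1 default, 30ms */

        return MILLISECS(ms);
    }

    /* ... and in csched_schedule() the timeslice handed back to the core
     * scheduler would become:
     *         ret.time = csched_tslice(snext);
     * so the mixed-workload VM could run with tslice_ms = 5 while every
     * other VM keeps the 30ms default. */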

> I've just been talking to one of our engineers here who used to work for
> a company which sold network cards.  Our discussion convinced me that we
> shouldn't really need any more information about a VM than the
> interrupts which have been delivered to it: even devices which go into
> polling mode do so for a relatively brief period of time, then re-enable
> interrupts again.

Do you think a pending interrupt generally indicates a latency-sensitive workload? From my point of view, it means there is an I/O-intensive workload, which may not be latency-sensitive but may only require high throughput.

> Yes, I look forward to seeing the results of your work.  Are you going
> to be doing this on credit2?

I am not familiar with credit2, but I will delve into it in the future. Of course, if I make any new progress, I will share my results with you.

2011/6/9 George Dunlap <george.dunlap@xxxxxxxxxx>
On Wed, 2011-06-08 at 22:43 +0100, David Xu wrote:
> Hi George,
>
>
> Thanks for your reply. I have similar ideas to yours: adding another
> parameter that indicates the required latency, and then letting the
> scheduler determine the latency characteristics of a VM automatically.
> Firstly, adding another parameter and letting users set its value in
> advance sounds similar to SEDF. But sometimes the configuration
> process is hard and inflexible when the workloads in a VM are complex.
> So in my opinion, a task-aware scheduler is better. However, manual
> configuration can help us check the effectiveness of the new
> parameter.

Great!  Sounds like we're on the same page.

>  On the other hand, as you described, it is also not easy or accurate
> to make the scheduler determine the latency characteristics of a VM
> automatically from the information we can get from the hypervisor, for
> instance the delayed interrupts. Therefore, the key point for me is to
> find and implement a scheduling helper that indicates which VM should
> be scheduled soon.

Remember though -- you can't just give a VM more CPU time.  Giving a VM
more CPU at one time means taking CPU time away at another time.  I
think the key is to think the opposite way -- taking away time from a
VM by giving it a shorter timeslice, so that you can give time back when
it needs it.

> For example, for TCP traffic, we can implement a tool similar to a
> packet sniffer to capture packets and analyze their header information
> to infer the type of workload. The analysis result can then help the
> scheduler make a decision. In fact, not all I/O-intensive workloads
> require low latency; some of them only require high throughput. Of
> course, scheduling latency significantly impacts throughput (you
> handled this problem with the boost mechanism to some extent).

The boost mechanism (and indeed the whole credit1 scheduler) was
actually written by someone else. :-)  And although it's good in theory,
the way it's implemented actually causes some problems.
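In outline, what boost does is roughly this (a paraphrased sketch, not the literal sched_credit.c code):

    /* Paraphrased sketch of the credit1 "boost" idea.  On wake-up, a vCPU
     * that has not yet burned through its credits is temporarily raised
     * above every normally-running vCPU, so it gets onto a pCPU quickly
     * (e.g. to handle an incoming network interrupt). */
    static void csched_vcpu_wake_sketch(struct csched_vcpu *svc)
    {
        if ( svc->pri == CSCHED_PRI_TS_UNDER )   /* still has credit left */
            svc->pri = CSCHED_PRI_TS_BOOST;      /* jump the run queue    */
    }

One practical problem, roughly speaking, is that every vCPU which sleeps and wakes frequently gets boosted, so under a busy mixed workload many vCPUs sit at BOOST at once and the intended latency advantage is diluted.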

I've just been talking to one of our engineers here who used to work for
a company which sold network cards.  Our discussion convinced me that we
shouldn't really need any more information about a VM than the
interrupts which have been delivered to it: even devices which go into
polling mode do so for a relatively brief period of time, then re-enable
interrupts again.

> What I want is to reduce the latency only for a VM which requires low
> latency while postponing other VMs, and to use other techniques such as
> packet offloading to compensate for their loss and improve their
> throughput.
>
>
> This is just my coarse idea and there are many problems as well. I
> hope I can discuss with you often and share our results. Thanks very
> much.

Yes, I look forward to seeing the results of your work.  Are you going
to be doing this on credit2?

Peace,
 -George





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

