WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

xen-devel

Re: [Xen-devel] [PATCH] Cosmetic change to schedule_cpu_switch

To: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Cosmetic change to schedule_cpu_switch
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Wed, 19 May 2010 11:55:54 +0100
Cc:
Delivery-date: Wed, 19 May 2010 03:56:47 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C8195DB6.14979%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acr2x/6jGaXcMQLqT2OF/gaDH5dkvAAZA47QAAVzWZM=
Thread-topic: [Xen-devel] [PATCH] Cosmetic change to schedule_cpu_switch
User-agent: Microsoft-Entourage/12.24.0.100205
On 19/05/2010 09:19, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:

> Thanks, I'll fold this into my next patch. You'll see from my recent
> changesets that I'm currently tearing into the scheduler and cpupool code as
> part of my CPU hotplug cleanup.

I'm finished as of xen-unstable:21422, by the way.

 -- Keir

> I think there must be scope for further
> rationalisation of the sched-if interfaces, as the sched_ops have sprouted a
> bewildering array of extra functions for cpupool support. I'm sure it's
> over-complicated.
> 
>  -- Keir
> 
> On 18/05/2010 21:22, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx> wrote:
> 
>> Using 'v' generally indicates a generic vcpu, not
>> a particular vcpu.  In this case, we always use the idle vcpu;
>> I think naming it explicitly idle_vcpu makes the code easier to grok.
>> 
>> No functional changes.
>> 
>> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>> 
>> diff -r c6db509d7e46 -r ebad6ba33a8f xen/common/schedule.c
>> --- a/xen/common/schedule.c Tue May 18 15:18:26 2010 +0100
>> +++ b/xen/common/schedule.c Tue May 18 15:22:27 2010 -0500
>> @@ -1151,7 +1151,7 @@
>>  void schedule_cpu_switch(unsigned int cpu, struct cpupool *c)
>>  {
>>      unsigned long flags;
>> -    struct vcpu *v;
>> +    struct vcpu *idle_vcpu;
>>      void *ppriv, *ppriv_old, *vpriv = NULL;
>>      struct scheduler *old_ops = per_cpu(scheduler, cpu);
>>      struct scheduler *new_ops = (c == NULL) ? &ops : c->sched;
>> @@ -1159,21 +1159,21 @@
>>      if ( old_ops == new_ops )
>>          return;
>>  
>> -    v = per_cpu(schedule_data, cpu).idle;
>> +    idle_vcpu = per_cpu(schedule_data, cpu).idle;
>>      ppriv = SCHED_OP(new_ops, alloc_pdata, cpu);
>>      if ( c != NULL )
>> -        vpriv = SCHED_OP(new_ops, alloc_vdata, v, v->domain->sched_priv);
>> +        vpriv = SCHED_OP(new_ops, alloc_vdata, idle_vcpu, idle_vcpu->domain->sched_priv);
>>  
>>      spin_lock_irqsave(per_cpu(schedule_data, cpu).schedule_lock, flags);
>>  
>>      if ( c == NULL )
>>      {
>> -        vpriv = v->sched_priv;
>> -        v->sched_priv = per_cpu(schedule_data, cpu).sched_idlevpriv;
>> +        vpriv = idle_vcpu->sched_priv;
>> +        idle_vcpu->sched_priv = per_cpu(schedule_data, cpu).sched_idlevpriv;
>>      }
>>      else
>>      {
>> -        v->sched_priv = vpriv;
>> +        idle_vcpu->sched_priv = vpriv;
>>          vpriv = NULL;
>>      }
>>      SCHED_OP(old_ops, tick_suspend, cpu);
>> @@ -1181,7 +1181,7 @@
>>      ppriv_old = per_cpu(schedule_data, cpu).sched_priv;
>>      per_cpu(schedule_data, cpu).sched_priv = ppriv;
>>      SCHED_OP(new_ops, tick_resume, cpu);
>> -    SCHED_OP(new_ops, insert_vcpu, v);
>> +    SCHED_OP(new_ops, insert_vcpu, idle_vcpu);
>>  
>>      spin_unlock_irqrestore(per_cpu(schedule_data, cpu).schedule_lock, flags);
>>  
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
> 



