xen-devel

[Xen-devel] Re: [PATCH] Fix cache flush bug of cpu offline

To: "Liu, Jinsong" <jinsong.liu@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] Fix cache flush bug of cpu offline
From: Keir Fraser <keir.xen@xxxxxxxxx>
Date: Fri, 11 Mar 2011 17:29:25 +0000
Cc: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>, "Shan, Haitao" <haitao.shan@xxxxxxxxx>, "Wei, Gang" <gang.wei@xxxxxxxxx>, "Yu, Ke" <ke.yu@xxxxxxxxx>, "Li, Xin" <xin.li@xxxxxxxxx>
Delivery-date: Fri, 11 Mar 2011 09:30:19 -0800
In-reply-to: <BC00F5384FCFC9499AF06F92E8B78A9E1FCCF29CCF@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Thread-topic: [PATCH] Fix cache flush bug of cpu offline
User-agent: Microsoft-Entourage/12.28.0.101117
On 11/03/2011 16:50, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:

> We ran an experiment: with wbinvd at its current position (in play_dead), we
> sometimes hit strange issues when repeatedly offlining/onlining a CPU.
> So for CPU offline, the closer wbinvd is to the last step, the safer it is:
> wbinvd should be the very last operation before the CPU goes dead, to avoid
> potentially breaking cache coherency.

Okay, I applied your patches. However, in a follow-up patch (c/s 23025) I
have removed the WBINVD instructions from the default paths (i.e., the HLT
loops), as the CPU still maintains cache coherency while in the HLT/C1 state.
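
For reference, a minimal sketch (my reconstruction, not the literal contents
of c/s 23025) of what the default dead-idle path looks like once the WBINVD is
dropped from the plain HLT loop; names follow the patch quoted below:

    /*
     * Sketch only: with cache coherency preserved in HLT/C1, the plain
     * halt loop needs no explicit flush.  Paths entering deeper C-states
     * (e.g. acpi_dead_idle() going to C3) still execute wbinvd() first.
     */
    static void default_dead_idle(void)
    {
        for ( ; ; )
            halt();
    }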

Does that look okay to you?

 -- Keir

> In fact, wbinvd could be done inside the loop, but as cpu_offline_3.patch
> notes, on Xeon 7400 with hyperthreading the offlined thread may be spuriously
> woken by its sibling, and hence woken frequently inside the dead loop.
> In that case, given the heavy cost of wbinvd, we add a lightweight clflush
> instruction inside the loop instead.
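
[For illustration only: cpu_offline_3.patch itself is not quoted in this
thread, so the sketch below is hypothetical. It shows the general shape of the
idea: one full wbinvd() before the dead loop, then a cheap clflush of a single
cache line on each (possibly spurious) wakeup; the function name and flushed
address are invented for the example.]

    /* Hypothetical sketch of the "clflush inside the dead loop" idea. */
    static void dead_idle_with_clflush(void)
    {
        static unsigned int dead_line;   /* cache line the loop may touch */

        wbinvd();                        /* full flush once, as the last heavy op */
        for ( ; ; )
        {
            /* Lightweight flush in case a sibling thread woke us spuriously. */
            asm volatile ( "clflush %0" : "+m" (dead_line) );
            halt();
        }
    }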
> 
> Thanks,
> Jinsong
> 
> 
>> 
>>> Signed-off-by: Liu, Jinsong <jinsong.liu@xxxxxxxxx>
>>> 
>>> diff -r 2dc3c1cc1bba xen/arch/x86/acpi/cpu_idle.c
>>> --- a/xen/arch/x86/acpi/cpu_idle.c Mon Mar 07 05:31:46 2022 +0800
>>> +++ b/xen/arch/x86/acpi/cpu_idle.c Thu Mar 10 23:40:51 2022 +0800
>>> @@ -562,11 +562,14 @@ static void acpi_dead_idle(void)
>>>      if ( (cx = &power->states[power->count-1]) == NULL )
>>>          goto default_halt;
>>> 
>>> +    /*
>>> +     * The cache must be flushed as the very last operation before the
>>> +     * cpu goes dead; otherwise the cpu may die holding dirty data,
>>> +     * breaking cache coherency and leading to strange errors.
>>> +     */
>>> +    wbinvd();
>>>      for ( ; ; )
>>>      {
>>> -        if ( !power->flags.bm_check && cx->type == ACPI_STATE_C3 )
>>> -            ACPI_FLUSH_CPU_CACHE();
>>> -
>>>          switch ( cx->entry_method )
>>>          {
>>>              case ACPI_CSTATE_EM_FFH:
>>> @@ -584,6 +587,7 @@ static void acpi_dead_idle(void)
>>>      }
>>> 
>>>  default_halt:
>>> +    wbinvd();
>>>      for ( ; ; )
>>>          halt();
>>>  }
>>> diff -r 2dc3c1cc1bba xen/arch/x86/domain.c
>>> --- a/xen/arch/x86/domain.c Mon Mar 07 05:31:46 2022 +0800
>>> +++ b/xen/arch/x86/domain.c Thu Mar 10 23:40:51 2022 +0800
>>> @@ -93,6 +93,12 @@ static void default_idle(void)
>>> 
>>>  static void default_dead_idle(void)
>>>  {
>>> +    /*
>>> +     * The cache must be flushed as the very last operation before the
>>> +     * cpu goes dead; otherwise the cpu may die holding dirty data,
>>> +     * breaking cache coherency and leading to strange errors.
>>> +     */
>>> +    wbinvd();
>>>      for ( ; ; )
>>>          halt();
>>>  }
>>> @@ -100,7 +106,6 @@ static void play_dead(void)
>>>  static void play_dead(void)
>>>  {
>>>      local_irq_disable();
>>> -    wbinvd();
>>> 
>>>      /*
>>>       * NOTE: After cpu_exit_clear, per-cpu variables are no longer accessible,
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel