This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] c/s 18470

To: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Subject: RE: [Xen-devel] c/s 18470
From: "Liu, Jinsong" <jinsong.liu@xxxxxxxxx>
Date: Thu, 18 Sep 2008 15:39:04 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 18 Sep 2008 00:39:57 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <48D0FC07.76E4.0078.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <48D0D138.76E4.0078.0@xxxxxxxxxx> <48D0FC07.76E4.0078.0@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AckYsoOTT6JexBj/Q/2nGqsZCFUrPAAqOiPg
Thread-topic: [Xen-devel] c/s 18470
Jan Beulich wrote:
>>>> "Jan Beulich" <jbeulich@xxxxxxxxxx> 17.09.08 09:43 >>>
>> This changeset reverts two previous corrections, for reasons that
>> escape me. 
>> First, the domain map is again being confined to NR_CPUS, which I had
>> submitted a patch to fix recently (yes, I realize the code has a
>> TODO in there, but those really get forgotten about far too often).
>> Second, the platform hypercall was reverted back to require all
>> information to be passed to Xen in one chunk, whereas I recall that
>> even Intel folks (not sure if it was you) agreed that allowing
>> incremental information collection was more appropriate.
>> Could you clarify why these changes were necessary and if/when you
>> plan to address the resulting issues?
> Also, were these changes tested on AMD CPUs? It would seem to me
> that the cpufreq_cpu_policy array would remain uninitialized here, and
> hence the first access in the powernow code would dereference a NULL
> pointer.

  No, we didn't test on AMD CPUs, since we don't have an AMD platform.
The AMD powernow code copies our cpufreq code and shares some data
structures, so this may indeed result in bugs.
  Have you reviewed/updated the powernow code recently? We are currently
rebasing the cpufreq logic substantially and adding support for the IPF
arch; this work will be complete in about one week.
  After the cpufreq rebase and IPF support are complete:
  1. all arch-independent logic (like policy, governor algorithms, px
statistics, S3 suspend/resume, dynamic ppc handling, and most cpufreq
init logic, etc.) will move to the common hypervisor part
(xen/drivers/cpufreq); and
  2. all arch-dependent parts (only the cpufreq_driver) will reside in
arch-dependent directories (like xen/arch/x86/cpufreq for x86 CPUs and
xen/arch/ia64/cpufreq for ia64 CPUs).

  So how about updating AMD powernow cpufreq one week later? At that
point, all of powernow's current policy/governor/init logic can be
dropped in favor of sharing our common logic; only powernow's
cpufreq_driver, which is AMD arch-dependent, needs to remain.

> Likewise the calls to cpufreq_{add,del}_cpu() from the CPU hot(un)plug
> paths seem to consider the Intel case only (as the functions
> themselves are Intel specific).

  I'm not quite sure about your question. I just checked the code:
  1. cpufreq_add/del_cpu() is arch-independent, since all arch-dependent
parts are handled by cpufreq_driver->init/exit().
  2. The CPU online/offline path is also arch-independent.
  Could you please point out more precisely where it is Intel-specific?


> Jan
