WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-devel

RE: [Xen-devel] RE: [PATCH 01/13] Nested Virtualization: tools

To: Andre Przywara <andre.przywara@xxxxxxx>
Subject: RE: [Xen-devel] RE: [PATCH 01/13] Nested Virtualization: tools
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Tue, 7 Sep 2010 20:39:30 +0800
Accept-language: en-US
Acceptlanguage: en-US
Cc: "Egger, Christoph" <Christoph.Egger@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Delivery-date: Tue, 07 Sep 2010 05:44:39 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C860984.2040702@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <201009011654.55291.Christoph.Egger@xxxxxxx> <1A42CE6F5F474C41B63392A5F80372B22A7C1CE5@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4C80BC84.3010104@xxxxxxx> <1A42CE6F5F474C41B63392A5F80372B22A7C2772@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4C860984.2040702@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: ActOcfq+WtIFraGfTxKOqPgziAUSswAFm2ZA
Thread-topic: [Xen-devel] RE: [PATCH 01/13] Nested Virtualization: tools
Andre Przywara wrote:
> Dong, Eddie wrote:
>> Andre Przywara wrote:
>>> Dong, Eddie wrote:
>>>> Dong, Eddie wrote:
>>>>> # HG changeset patch
>>>>> # User cegger
>>>>> # Date 1283345869 -7200
>>>>> tools: Add nestedhvm guest config option
>>>>> 
>>>>> diff -r 80ef08613ec2 -r ecec3d163efa tools/libxc/xc_cpuid_x86.c
>>>>> --- a/tools/libxc/xc_cpuid_x86.c
>>>>> +++ b/tools/libxc/xc_cpuid_x86.c
>>>>> @@ -30,7 +30,7 @@
>>>>>  #define set_bit(idx, dst)   ((dst) |= (1u << ((idx) & 31)))
>>>>> 
>>>>>  #define DEF_MAX_BASE 0x0000000du
>>>>> -#define DEF_MAX_EXT  0x80000008u
>>>>> +#define DEF_MAX_EXT  0x8000000au
>>>> How can this make an Intel CPU happy?
>>>> You may refer to my previous comments in V2.
>>> Correct me if I am wrong, but this is only a max boundary:
>>> tools/libxc/xc_cpuid_x86.c:234
>>>      case 0x80000000:
>>>          if ( regs[0] > DEF_MAX_EXT )
>>>              regs[0] = DEF_MAX_EXT;
>>>          break;
>>> So if an Intel CPU returns 0x80000008 here, this will be in the
>>> regs[0] field and thus any higher value in DEF_MAX_EXT does not
>>> affect the guest's CPUID response.
>>> So as long as Intel CPUs don't return higher values which don't
>>> match the AMD assignment (which is extremely unlikely), extending
>>> DEF_MAX_EXT is fine. 
>>> 
>> But it is called MAX_EXT, and it will cause some unreasonable setup
>> of leaves.
> Where? If DEF_MAX_EXT were 0x8FFFFFFF, CPUID would still return
> 0x80000008 on Intel CPUs. I don't see any leaves set up because of a
> changed DEF_MAX_EXT value. CPUID will just return the smaller of
> (DEF_MAX_EXT, native CPUID); any leaves beyond that value will not be
> populated in xc_cpuid_apply_policy() and thus will not appear in the
> HV's struct domain->arch.cpuids array used for delivering CPUID
> results to guests.

Well, what does DEF_MAX_EXT mean then? Doesn't it mean the default maximum
Extended Function CPUID leaf?

If yes, you shouldn't mislead readers into thinking the virtual Intel CPU now
reports 0x8..A as its maximum Extended Function CPUID leaf.


> 
> 
>> Could you split the macro into _AMD & _INTEL variants, or use a
>> dynamic variable depending on the CPU brand, as Keir suggested?
> I guess that is not needed. The leaf is properly handled in the
> {amd,intel}_xc_cpuid_policy() filters, which will only be called on
> the respective CPUs.

Are you assuming that future Intel processors won't implement any leaf higher
than 0x8...8/A either?

Software should follow the SDM as closely as possible.

Thx, Eddie
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel