
RE: [Xen-devel] EFER in HVM guests



Mats, which configurations did you test?  Can you post those results?

Xin, 
Attached are the results of overnight testing of the patch you sent for
the 32-bit hypervisor (xls spreadsheet), applied on top of 13078; so far
it looks OK. We plan to follow up with some 32-bit PAE and 64-bit
hypervisor testing with HVM AMD-V guests. Given the interaction/comments
from Mats, do you anticipate another patch with more consolidation of
the SVM/VMX code?

Thanks
Tom 



> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Li, Xin B
> Sent: Tuesday, December 19, 2006 8:25 AM
> To: Li, Xin B; Petersson, Mats; Woller, Thomas; Keir Fraser; 
> Nakajima, Jun; Jan Beulich; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] EFER in HVM guests
> 
> Mats, did you find any issues on your side?
> 
> -Xin 
> 
> >-----Original Message-----
> >From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> >[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Li, Xin B
> >Sent: Tuesday, December 19, 2006 2:45 PM
> >To: Petersson, Mats; Woller, Thomas; Keir Fraser; Nakajima, Jun; Jan 
> >Beulich; xen-devel@xxxxxxxxxxxxxxxxxxx
> >Subject: RE: [Xen-devel] EFER in HVM guests
> >
> >>
> >>The 0x80000001 leaf was originally an "AMD only" leaf for adding new
> >>"non-Intel compatible" features, such as 3DNow! and long mode, but
> >>since x86_64 was adopted by Intel, it's available on Intel processors
> >>too. It should be handled the same way on both AMD and Intel, and
> >>since 0x80000001 contains another copy of the APIC and PAE bits, they
> >>should be masked for both vendors in both leaf 1 and leaf 0x80000001.
> >>[Of course, I doubt that anyone would "prefer" to use 0x80000001
> >>rather than 1 as the index for the leaf unless the coder is already
> >>reading 0x80000001 for some other reason - to detect LM, for example.]
> >>
> >>I would like to see the handling of 0x80000001 in the common code
> >>cover the PAE/PSE36/APIC features, as that's not arch-specific. The
> >>fact that no-one actually uses it currently isn't a good argument for
> >>not getting it right now, rather than fixing hard-to-find bugs later
> >>on... ;-)
> >>
> >
> >Mats,
> >On Intel processors, leaf 0x80000001 defines only 4 bits across ECX
> >and EDX:
> >LAHF/SAHF:                   bit 0 of ECX
> >SYSCALL/SYSRET:              bit 11 of EDX
> >Execute Disable (XD):        bit 20 of EDX
> >LM (long mode):              bit 29 of EDX
> >All other bits are reserved and read as 0.
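> >
> >To make the discussion concrete, here is a rough sketch of the kind
> >of vendor-common masking Mats describes (the helper and macro names
> >are illustrative only, not the actual Xen code):
> >
> >#include <stdint.h>
> >
> >/* Leaf 0x80000001 EDX bits that mirror the standard leaf 1 bits. */
> >#define EXT_EDX_PAE    (1u << 6)
> >#define EXT_EDX_APIC   (1u << 9)
> >#define EXT_EDX_PSE36  (1u << 17)
> >
> >/* Hypothetical vendor-common helper: hide APIC/PAE/PSE36 from the
> > * guest's view of leaf 0x80000001 on both the SVM and VMX paths. */
> >static inline void hvm_mask_ext_feature_leaf(uint32_t *edx)
> >{
> >    *edx &= ~(EXT_EDX_PAE | EXT_EDX_APIC | EXT_EDX_PSE36);
> >}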
> >
> >
> >>Clearing the MWAIT bit should also be made common - I doubt anyone
> >>will notice the single instruction saved by combining it with a bunch
> >>of other bits, compared to the overall benefit of trivially seeing
> >>that it's dealt with in the same way on both architectures.
> >
> >I did have this in mind when creating the patch, but I'm not sure
> >whether MWAIT virtualization will be common to both sides, so I've
> >kept it where it is for now. The attached patch has this fixed.
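> >
> >For reference, clearing MONITOR/MWAIT amounts to masking bit 3 of ECX
> >in leaf 1; a minimal sketch (names are illustrative, not the actual
> >Xen code):
> >
> >#include <stdint.h>
> >
> >#define CPUID1_ECX_MONITOR  (1u << 3)   /* MONITOR/MWAIT feature bit */
> >
> >/* Hypothetical helper: hide MONITOR/MWAIT from the guest's CPUID
> > * leaf 1 until MWAIT virtualization is sorted out on both sides. */
> >static inline void hvm_clear_mwait(uint32_t *ecx)
> >{
> >    *ecx &= ~CPUID1_ECX_MONITOR;
> >}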
> >
> >>
> >>Just out of curiosity, why did you change the parameters passed to
> >>svm_do_cpuid - I can see why you wouldn't need to pass regs->eax when
> >>it's available in regs, but digging out the vmcb again can't be
> >>better than passing the already existing one? [Don't worry about it,
> >>I'm just curious about why the change was made.]
> >
> >My thinking was to pass only the parameters that aren't already in
> >hand. And yes, the vmcb really should be a parameter here :-)
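> >
> >For example, the shape being discussed would be roughly as below
> >(illustrative prototype only, not the exact Xen signature):
> >
> >/* Forward declarations stand in for the real Xen types here. */
> >struct vmcb_struct;
> >struct cpu_user_regs;
> >
> >/* Pass the vmcb the caller already holds instead of re-deriving it
> > * from the current vcpu inside the handler; eax comes from regs. */
> >void svm_do_cpuid(struct vmcb_struct *vmcb, struct cpu_user_regs *regs);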
> >
> >-Xin
> >
> 

Attachment: cpuid_boot_test_12_19_2006.xls
Description: cpuid_boot_test_12_19_2006.xls

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

