RE: [Xen-devel] [PATCH 06/16] vmx: nest: handling VMX instruction exits

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Christoph Egger <Christoph.Egger@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH 06/16] vmx: nest: handling VMX instruction exits
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Mon, 20 Sep 2010 17:33:31 +0800
Accept-language: en-US
Cc: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "He, Qing" <qing.he@xxxxxxxxx>
Delivery-date: Mon, 20 Sep 2010 02:39:49 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C8BCD4F2.2380F%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1A42CE6F5F474C41B63392A5F80372B22A95F800@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C8BCD4F2.2380F%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: ActUrjFTQ+ojJjAnSDWc/9iRkRslWQAAQqD5AAF/29AABVpPLQABspUgAAGP4pYA5kz3IAAKg/krAALKLlA=
Thread-topic: [Xen-devel] [PATCH 06/16] vmx: nest: handling VMX instruction exits
Keir Fraser wrote:
> On 20/09/2010 04:13, "Dong, Eddie" <eddie.dong@xxxxxxxxx> wrote:
> 
>>>>> Actually it is an issue now. This has nothing to do with VT-d (ie.
>>>>> IOMMU, irq remapping, etc) but with basic core VMX functionality
>>>>> -- per I/O port direct execute versus vmexit; per virtual-address
>>>>> page 
>>>> 
>>>> I see. For the I/O port, right now we are letting L1 handle it
>>>> though it doesn't expect to :( How about removing the capability
>>>> of CPU_BASED_ACTIVATE_IO_BITMAP in the L1 VMM for now, to focus on
>>>> the framework?
>>> 
>>> Well. It'd be better if it just worked, really, wouldn't it? :-) How
>>> hard can it be?
>> 
>> You are right. It is easy to do, but we have a dilemma: either
>> write-protect the guest I/O bitmap page, or create the shadow
>> I/O bitmap at each vmresume of the L2 guest.
> 
> You need that anyway don't you, regardless of whether you are
> accurately deciding whether to inject-to-L1 or emulate-L2 on vmexit
> to L0? Whether you inject or emulate, ports that L1 has disallowed
> for L2 must be properly represented in the shadow I/O bitmap page.

VMX has an "always exit" control for PIO (unconditional I/O exiting), which
doesn't consult the I/O bitmap at all.
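
For illustration only (a sketch, not the patch code; the
nvmx_set_shadow_exec_control() name is invented, and it assumes the existing
CPU_BASED_* control definitions), L0 can force that control into the VMCS it
actually runs L2 with, so no shadow I/O bitmap has to be maintained:

/*
 * Sketch: when merging L1's requested execution controls into the VMCS
 * used to run L2, force "unconditional I/O exiting" so every L2 PIO traps
 * to L0, and drop the bitmap control so no shadow I/O bitmap is needed.
 */
static void nvmx_set_shadow_exec_control(uint32_t l1_ctrl)
{
    uint32_t shadow_ctrl = l1_ctrl;

    shadow_ctrl |= CPU_BASED_UNCOND_IO_EXITING;
    shadow_ctrl &= ~CPU_BASED_ACTIVATE_IO_BITMAP;

    __vmwrite(CPU_BASED_VM_EXEC_CONTROL, shadow_ctrl);
}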


> 
>> Currently we are injecting to the L1 guest, but that may not be correct
>> in theory. For now, VMX can trap L2 guest I/O and emulate it in L0; we
>> can revisit some time later to see if we need write-protection of the
>> guest I/O bitmap page :)
> 
> Are you suggesting to always emulate instead of always inject-to-L1?
> That's still not accurate virtualisation of this VMX feature.

L2 PIO always exits to L0. So we either inject to L1 or emulate it in L0,
based on L1's I/O-exiting and bitmap settings.
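
Roughly like this (sketch only; get_vvmcs_field() and nvmx_map_l1_io_bitmap()
are invented names standing in for whatever accessors the patches end up
with):

/*
 * On an L2 I/O-instruction exit taken in L0, decide whether L1 wants to
 * see this exit.  If yes, inject the vmexit to L1; if no, emulate the
 * access in L0 on behalf of L2.  A multi-byte access would need every
 * byte's port checked, not just the first.
 */
static bool l1_wants_io_exit(struct vcpu *v, uint16_t port)
{
    uint32_t ctrl = get_vvmcs_field(v, CPU_BASED_VM_EXEC_CONTROL);

    if ( ctrl & CPU_BASED_UNCOND_IO_EXITING )
        return true;                  /* L1 asked to exit on every PIO */

    if ( !(ctrl & CPU_BASED_ACTIVATE_IO_BITMAP) )
        return false;                 /* L1 lets its guest run PIO directly */

    /* Bitmap A covers ports 0x0000-0x7fff, bitmap B covers 0x8000-0xffff. */
    const uint8_t *bitmap = nvmx_map_l1_io_bitmap(v, port >> 15);

    return !!(bitmap[(port & 0x7fff) >> 3] & (1 << (port & 7)));
}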

> 
> Hmm... Are you currently setting up to always vmexit on I/O port
> accesses by L2? Even if you are, that doesn't stop you looking at the

Yes.

> virtual I/O bitmap from within your L0 vmexit handler, and doing the

No, we checked the L1 settings.

> right thing (emulate versus inject-to-L1).
> 

BTW, has the SVM side already implemented write-protection of the I/O bitmap &
MSR bitmap? It seems not.
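
(For reference, the per-vmresume alternative from the dilemma above would be
roughly an OR-merge of L0's and L1's bitmaps into the shadow bitmap. Sketch
only, helper name invented:)

/*
 * Rebuild one page of the shadow I/O bitmap before an L2 vmresume: OR-merge
 * L0's own bitmap page with L1's, so a port exits whenever either hypervisor
 * wants it to.  Called once for bitmap A (ports 0x0000-0x7fff) and once for
 * bitmap B (ports 0x8000-0xffff).
 */
static void nvmx_merge_io_bitmap_page(uint8_t *shadow, const uint8_t *l0,
                                      const uint8_t *l1)
{
    unsigned int i;

    for ( i = 0; i < PAGE_SIZE; i++ )
        shadow[i] = l0[i] | l1[i];
}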


Thx, Eddie
