RE: [Xen-devel] [PATCH 04/12] Nested Virtualization: core

To: Christoph Egger <Christoph.Egger@xxxxxxx>
Subject: RE: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Sat, 8 Jan 2011 04:39:58 +0800
Accept-language: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, Deegan <Tim.Deegan@xxxxxxxxxx>
Delivery-date: Fri, 07 Jan 2011 12:43:26 -0800
In-reply-to: <201101071124.49973.Christoph.Egger@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <201012201705.06356.Christoph.Egger@xxxxxxx> <201101031658.57065.Christoph.Egger@xxxxxxx> <1A42CE6F5F474C41B63392A5F80372B231D84EA2@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <201101071124.49973.Christoph.Egger@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcuuVTaZBuY49+p4QYetb61BDN1KlwAUlWog
Thread-topic: [Xen-devel] [PATCH 04/12] Nested Virtualization: core
Glad to see that you eventually took our proposal. Simple is beautiful; that is
my belief.

BTW, here are comments on your previous questions, in case you are still
interested in them.

>> 
>> It will be much simpler. You don't need the
>> nestedhvm_vcpu_iomap_get/put API, nor the refcnt.
> 
> It is intended that the API behaves like a pool from the caller's side
> while it is implemented as a singleton.
> The refcnt (or should I call it usagecnt?) is needed by the singleton
> design pattern.
> 
> When I remove the refcnt, then I have to implement the API as a real
> pool, which will result in allocating an io bitmap for each vcpu of
> each l1 guest at runtime.

You don't need those APIs with pre-allocated pages.
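
For reference, the singleton + refcount scheme under discussion is roughly the
following sketch; the names (iomap_get/iomap_put, iomap_lock) are illustrative
only, not the actual patch code:

#include <xen/mm.h>
#include <xen/spinlock.h>

static void *iomap;                  /* shared io bitmap (2 * 4K for VMX) */
static unsigned int iomap_refcnt;    /* vcpus currently sharing it */
static DEFINE_SPINLOCK(iomap_lock);

void *iomap_get(void)
{
    spin_lock(&iomap_lock);
    if ( iomap == NULL )
        iomap = alloc_xenheap_pages(1, 0);  /* order 1 == two 4K pages */
    if ( iomap != NULL )
        iomap_refcnt++;
    spin_unlock(&iomap_lock);
    return iomap;
}

void iomap_put(void)
{
    spin_lock(&iomap_lock);
    if ( --iomap_refcnt == 0 )          /* callers must pair get/put */
    {
        free_xenheap_pages(iomap, 1);
        iomap = NULL;
    }
    spin_unlock(&iomap_lock);
}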

> 
>> 
>> The more important thing is policy: whether you favor memory
>> size or simplicity. If it is memory size, then you should only
>> allocate 2 io_bitmap pages for VMX.
>> 
>>> I appreciate opinions from other people on this.
>> 
>> Besides, ideally we should implement a per-guest io bitmap page, by
>> reusing the L1 guest io_bitmap + write protection in the page tables.
> 
> That will work fine with 4KB pages, but I guess it won't be very
> efficient with 2MB and 1GB pages. With large pages, most time will be
> spent emulating write accesses to the address ranges outside of the
> io bitmaps.

That is not true. Even if the guest got contiguous machine large pages, the
host should still be able to handle mixed page sizes. A typical case is that
the host may not always be able to get contiguous large pages, such as after
migration.

In this case, of course, we only write-protect the 2 * 4K bitmap pages. That
doesn't introduce any additional issues.
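
To be concrete, protecting only the two 4K bitmap pages could look roughly
like the sketch below; gfn_backed_by_superpage(), split_superpage() and
mark_gfn_readonly() are hypothetical stand-ins for whatever the p2m layer
provides:

#include <xen/sched.h>

/* Write-protect the L1 guest's two io bitmap pages (VMX bitmaps A and B,
 * one 4K page each).  If a bitmap gfn sits inside a 2MB/1GB mapping,
 * split that mapping first, so that only the 4K page itself becomes
 * read-only and write emulation stays confined to the bitmap pages. */
static void protect_io_bitmap(struct domain *d,
                              const unsigned long bitmap_gfn[2])
{
    unsigned int i;

    for ( i = 0; i < 2; i++ )
    {
        unsigned long gfn = bitmap_gfn[i];

        if ( gfn_backed_by_superpage(d, gfn) )   /* hypothetical */
            split_superpage(d, gfn);             /* hypothetical */

        mark_gfn_readonly(d, gfn);               /* hypothetical */
    }
}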

> 
>> At least for both Xen & KVM, the io bitmap is not modified at
>> runtime once it is initialized.
> 
> Yep, that's why we only need to deal with four possible patterns of
> shadow io bitmaps in Xen. We can't assume the l1 guest is not
> modifying it. 

Well, for performance, we can assume it doesn't; for correctness, we need to
handle the rare situation. That is also why we need to write-protect the bitmap
pages, and create separate shadow io bitmap pages if the guest does modify
them, for correctness' sake. However, in the dominant case, the host can reuse
the guest io bitmap pages for its own use, with 2 bits indicating the original
guest state.
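
Roughly, that copy-on-write path could look like the sketch below; the helpers
and the per-vcpu accessors marked /* (h) */ are hypothetical, not existing Xen
code:

#include <xen/mm.h>
#include <xen/sched.h>
#include <xen/string.h>

static void iomap_write_fault(struct vcpu *v, unsigned long gfn)
{
    void *shadow = nested_shadow_iomap(v);      /* (h) per-vcpu pointer */

    if ( shadow == NULL )
    {
        /* First write: give this L1 guest a private 2 * 4K copy
         * (error handling omitted in this sketch). */
        shadow = alloc_xenheap_pages(1, 0);
        memcpy(shadow, guest_iomap_ptr(v), 2 * PAGE_SIZE);  /* (h) */
        set_nested_shadow_iomap(v, shadow);                 /* (h) */

        /* From now on the host VMCS/VMCB points at the shadow ... */
        set_host_io_bitmap(v, virt_to_maddr(shadow));       /* (h) */
    }

    /* ... so the guest may again write its own copy directly; the
     * shadow can be re-synced from it on the next nested vmentry. */
    mark_gfn_writable(v->domain, gfn);                      /* (h) */
}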

The four patterns really are not related to this topic.

> 
>> The readability can be improved & the memory pages can be saved. We
>> only need 2 bits per L1 guest.
>> 
>> But if we want simplicity, I am OK too; however, the current patch
>> doesn't fit either goal.
> 
> hmm... I think I need to move that part of the common logic into SVM
> to reach consensus... pity.
> 

My idea was given twice before you publicly posted it :(

Thx, Eddie
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel