WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-devel

Re: [Xen-devel] HYBRID: PV in HVM container

To: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Subject: Re: [Xen-devel] HYBRID: PV in HVM container
From: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Date: Tue, 28 Jun 2011 09:31:57 +0100
Cc: "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 28 Jun 2011 01:32:32 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110627122404.23d2d0ce@xxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <20110627122404.23d2d0ce@xxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Mon, 2011-06-27 at 20:24 +0100, Mukesh Rathor wrote:
> Hi guys,
> 
> Cheers!! I got PV in HVM container prototype working with single VCPU
> (pinned to a cpu). Basically, I create a VMX container just like for
> HVM guest (with some differences that I'll share soon when I clean up
> the code). The PV guest starts in Protected mode with the usual
> entry point startup_xen().

Great stuff! I've been eagerly awaiting this functionality ;-)

Do you have any timeline for when you think you might post the code?

I presume you managed to avoid bouncing through the hypervisor for
syscalls?

Cheers,
Ian.

> 
> 0. Guest kernel runs in ring 0, CS:0x10.
> 
> 1. I use Xen for all page-table management, just like a PV guest. So at
>    present all faults go to Xen, and when fixup_page_fault() fails they
>    are injected into the container for the guest to handle.
> 
> 2. The guest manages the GDT, LDT, TR, in the container.
> 
> 3. The guest installs the trap table in the vmx container instead of 
>    do_set_trap_table(). 
> 
> 4. Events/INTs are delivered via HVMIRQ_callback_vector.
> 
> 5. MSR_GS_BASE is managed by the guest in the container itself.
> 
> 6. Currently, I'm managing cr4 in the container, but going to xen
>    for cr0. I need to revisit that.
> 
> 7. Currently, VPID is disabled; I need to figure it out and revisit.
> 
> 8. Currently, VM_ENTRY_LOAD_GUEST_PAT is disabled, I need to look at 
>    that. 
> 
> These are the salient points I can think of at the moment. Next, I am 
> going to run LMBench and figure out the gains. After that, make sure
> SMP works, and things are stable, and look at any enhancements. I need
> to look at a couple of unrelated bugs at the moment, but hope to return
> to this very soon.
> 
> thanks,
> Mukesh
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel


