xen-devel

RE: [Xen-devel] [PATCH] Super Pages (2meg/4meg) patch

To: "Woller, Thomas" <thomas.woller@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Super Pages (2meg/4meg) patch
From: "Han, Weidong" <weidong.han@xxxxxxxxx>
Date: Fri, 25 Jan 2008 11:16:09 +0800
Delivery-date: Thu, 24 Jan 2008 19:18:08 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <683860AD674C7348A0BF0DE3918482F606B21333@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <683860AD674C7348A0BF0DE3918482F606B21333@xxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acheq1bC6zWSD+/sQP6+QWBb+UXz8AAVHtUg
Thread-topic: [Xen-devel] [PATCH] Super Pages (2meg/4meg) patch
With this patch, I couldn't create an HVM guest with VT-d; it failed
with "Error: (14, 'Bad address')" (errno 14 is EFAULT).

Randy (Weidong)

Woller, Thomas wrote:
> Attached is the latest super pages patch with the requested
> modifications; the main content is consistent with Wei Huang's
> previous post of this patch.
> 
> It applies cleanly to xen-unstable changeset 16877.
> 
> There is a problem I am working on that is related to c/s 16728 (see
> below), so the patch does not yet function with the latest
> unstable/staging (i.e. don't apply it to staging quite yet).
> 
> Modifications from last patch:
> xc_hvm_build.c/setup_guest
> 1) all the *_2MB variables are now named *_super
> 2) the superpage size is now calculated from a single variable
>    describing the order of a superpage (21 or 22), rather than being
>    hardcoded as PAGE_SHIFT + 9 etc.
> 3) the superpage size is determined by querying the Xen capabilities:
>    if Xen supports PAE HVM guests then superpages are 2MB, else 4MB.
>    Currently using "if ( strstr(caps, "x86_32p") )" as a string match
>    (see the sketch after this list).
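> 
> For reference, a minimal sketch of that capability check (hedged:
> xc_version()/XENVER_capabilities is the real libxc interface, but
> get_superpage_shift() and its constants are illustrative only, not
> code from the patch):
> 
>     #include <string.h>
>     #include <xenctrl.h>
>     #include <xen/version.h>
> 
>     /* Illustrative helper: derive the superpage shift from the
>      * hypervisor capabilities string.  A PAE-capable hypervisor
>      * ("x86_32p" in the caps string) implies 2MB superpages
>      * (shift 21); otherwise 4MB (shift 22). */
>     static int get_superpage_shift(int xc_handle)
>     {
>         xen_capabilities_info_t caps;
> 
>         if ( xc_version(xc_handle, XENVER_capabilities, &caps) < 0 )
>             return -1;
> 
>         return strstr(caps, "x86_32p") ? 21 : 22;
>     }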
> 
> What is not done:
> 1) I don't have a cross-compile setup for IA64 and PowerPC. I added
> what I think are enough modifications to allow compilation without
> runtime failure, but actually compiling will tell.
> 
> 2) c/s 16728
> Reverting 16728 allows guests to start up. With it applied I
> currently get a "bad address" error, but removing only the
> xc_domain_memory_decrease_reservation(... [shared_page_nr-3]) call
> (which frees up the guard page) also allows HVM guests to start up.
> I am looking at the issue now; any ideas appreciated.
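> 
> For context, the call in question has roughly this shape (a sketch
> under assumptions: shared_page_nr and the -3 offset come from the
> line above, and xc_handle/dom are the usual setup_guest() locals;
> only xc_domain_memory_decrease_reservation() itself is real libxc):
> 
>     /* Freeing a single 4K frame (the guard page, three frames below
>      * the shared page) out of a region that was populated as a 2MB
>      * superpage is the suspected trigger of the "bad address" error. */
>     xen_pfn_t guard_pfn = shared_page_nr - 3;
>     int rc = xc_domain_memory_decrease_reservation(
>                  xc_handle, dom, 1 /* nr_extents */,
>                  0 /* extent_order: one 4K page */, &guard_pfn);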
> 
> Testing:
> 1) ran for over a week without issue on a 16656 base with a PAE x86
> hv and HVM guests.
> A 64b hv (sans 16728) does not show any issues starting up various
> HVM guests under more limited testing.  Both shadow paging and nested
> paging were used during the testing.
> 
> 2) in the process of starting up a 64b hv with the patch for extended
> testing, once the issue with 16728 is resolved
> 
> 3) I don't plan on testing any 32b hv
> 
> Jan's recent 1Gig patch applies without failure on top of this latest
> patch.
> 
> Please take a look at it, especially the newer mods to xc_hvm_build.c
> and the powerpc/ia64 areas, and provide any comments.  It *should* be
> almost ready to apply.
> 
> Cheers,
> 
> Signed-off-by: Tom Woller <thomas.woller@xxxxxxx>
> 
>   --Tom
> 
> thomas.woller@xxxxxxx  +1-512-602-0059
> AMD Corporation - Operating Systems Research Center
> 5204 E. Ben White Blvd. UBC1
> Austin, Texas 78741


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
