xen-devel

Re: [Xen-devel] [PATCH/RFC] Implement the memory_map hypercall

To: "Glauber de Oliveira Costa" <gcosta@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH/RFC] Implement the memory_map hypercall
From: "Jun Koi" <junkoi2004@xxxxxxxxx>
Date: Fri, 24 Nov 2006 23:36:36 +0900
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20061124140810.GC7171@xxxxxxxxxx>
References: <20061124140810.GC7171@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Glauber, what is this hypercall for? To map hypervisor memory from Dom0?

Thanks.
J

On 11/24/06, Glauber de Oliveira Costa <gcosta@xxxxxxxxxx> wrote:
Keir,

Here's a first draft of an implementation of the memory_map
hypercall. I would like to have comments on this, especially on:

1) I added a new field to the domain structure and, whenever it is set,
use it to determine the maximum map. When it is not set, using max_mem will
most probably give us a better bound than tot_pages, as it still allows us
to balloon up later even with tools that do not call the new domctl (yet to
come) that sets the map limit (see the small sketch below).

2) However, as it currently breaks dom0, I'm leaving it unimplemented in
that case, and plan to do better than that once you apply the changes
you said you would make to the dom0 max_mem representation.
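
To make the fallback in point 1 concrete, here is a tiny standalone sketch
(purely illustrative, not part of the patch; reported_map_size is a made-up
name, and the 8MB slack simply mirrors the code below):

/* Standalone illustration of point 1: use the new memory_map_limit field
 * when the tools have set it, otherwise fall back to max_pages. */
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT  12
#define MAP_SLACK   (8ULL << 20)   /* 8MB slack, as in the patch below */

static uint64_t reported_map_size(uint64_t memory_map_limit,
                                  uint64_t max_pages)
{
    uint64_t size = memory_map_limit ? memory_map_limit
                                     : (max_pages << PAGE_SHIFT);
    return size + MAP_SLACK;
}

int main(void)
{
    /* No limit set: a 1GB (0x40000-page) domain is reported as 1GB + 8MB. */
    assert(reported_map_size(0, 0x40000) == (1ULL << 30) + MAP_SLACK);
    /* Limit set by the (yet to come) domctl: it takes precedence. */
    assert(reported_map_size(2ULL << 30, 0x40000) == (2ULL << 30) + MAP_SLACK);
    return 0;
}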

I'm currently working on the domctl side of things, but I'd like to have
this sorted out first.

Thank you!

--
Glauber de Oliveira Costa
Red Hat Inc.
"Free as in Freedom"


# HG changeset patch
# User gcosta@xxxxxxxxxx
# Date 1164380458 18000
# Node ID da7aa8896ab07932160406c8b19a6ad4a61b3af7
# Parent  47fcd5f768fef50cba2fc6dbadc7b75de55e88a5
[XEN] Implement the memory_map hypercall

It's needed to provide guests with an idea of a physical
mapping that may differ from simply what's needed to fit
tot_pages.

Signed-off-by: Glauber de Oliveira Costa <gcosta@xxxxxxxxxx>

diff -r 47fcd5f768fe -r da7aa8896ab0 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c Fri Nov 17 08:30:43 2006 -0500
+++ b/xen/arch/x86/mm.c Fri Nov 24 10:00:58 2006 -0500
@@ -2976,7 +2976,45 @@ long arch_memory_op(int op, XEN_GUEST_HA

     case XENMEM_memory_map:
     {
-        return -ENOSYS;
+        struct xen_memory_map memmap;
+        struct domain *d;
+        XEN_GUEST_HANDLE(e820entry_t) buffer;
+        struct e820entry map;
+
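+        /* Not implemented for dom0 yet (see the covering mail). */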
+        if ( IS_PRIV(current->domain) )
+            return -ENOSYS;
+
+        d = current->domain;
+
+        if ( copy_from_guest(&memmap, arg, 1) )
+            return -EFAULT;
+
+        buffer = guest_handle_cast(memmap.buffer, e820entry_t);
+        if ( unlikely(guest_handle_is_null(buffer)) )
+            return -EFAULT;
+
+        memmap.nr_entries = 1;
+
+        /* If no explicit limit was supplied, the best we can do is rely on
+         * the current max_pages value as a sane upper bound. */
+        if ( d->memory_map_limit )
+            map.size = d->memory_map_limit;
+        else
+            map.size = (u64)d->max_pages << PAGE_SHIFT;
+
+        /* 8MB slack (to balance backend allocations). */
+        map.size += 8 << 20;
+        map.addr = 0ULL;
+        map.type = E820_RAM;
+
+        if ( copy_to_guest(arg, &memmap, 1) )
+            return -EFAULT;
+
+        if ( copy_to_guest(buffer, &map, 1) )
+            return -EFAULT;
+
+        return 0;
     }

     case XENMEM_machine_memory_map:
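
For completeness, here is a rough sketch of how a paravirtualised guest
kernel might consume the new hypercall, modelled on the usual
HYPERVISOR_memory_op() pattern. The header paths, the buffer size and the
function name are illustrative only and are not part of this patch:

/* Illustrative guest-side usage (Xen guest kernel context assumed). */
#include <xen/interface/memory.h>   /* struct xen_memory_map, XENMEM_memory_map */
#include <asm/e820.h>               /* struct e820entry */
#include <asm/hypercall.h>          /* HYPERVISOR_memory_op() */

#define GUEST_E820_MAX 32           /* illustrative; this patch returns one entry */
static struct e820entry guest_e820[GUEST_E820_MAX];

static int fetch_pseudo_physical_map(void)
{
    struct xen_memory_map memmap;
    int rc;

    memmap.nr_entries = GUEST_E820_MAX;
    set_xen_guest_handle(memmap.buffer, guest_e820);

    rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
    if (rc == -ENOSYS)
        return rc;  /* hypervisor too old, or dom0 (still unimplemented above) */
    if (rc)
        return rc;

    /* With this patch memmap.nr_entries comes back as 1: a single E820_RAM
     * region of memory_map_limit (or max_pages worth of) bytes plus 8MB slack. */
    return 0;
}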


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
