This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Sharing dom0 memory with hypervisor across hypercall

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Sharing dom0 memory with hypervisor across hypercall
From: "Mike Sun" <msun@xxxxxxxxxx>
Date: Sat, 27 Sep 2008 16:17:58 -0400
Delivery-date: Sat, 27 Sep 2008 13:18:23 -0700
Dkim-signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=gamma; h=domainkey-signature:received:received:message-id:date:from:sender :to:subject:mime-version:content-type:content-transfer-encoding :content-disposition:x-google-sender-auth; bh=W0lTA+zdL04kI09HcGeNrOS4kIMhdYJqmOe+pxHt46U=; b=nfxjds40DaBGk2aUUmZUWQsd5JKieVULN9qolsTJoKm9xKP2NbnxMnOBbxVTziD9v4 hdUKvR1zQhFtuWXRBVbr0dLCNiq6yRCEZ7/hx7j97QJdGU5HnkOlLVSvmYoXEVW7TzX+ hxD3AAgcdCa+x0cZjbjnxYgTclgNmW35+AMcc=
Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=message-id:date:from:sender:to:subject:mime-version:content-type :content-transfer-encoding:content-disposition:x-google-sender-auth; b=AmKWzugvqRyKIMdqnmbgsR35vujddQ6pzm2Wcq3vM/eee1smmeX9hJV6SI3TlCERA7 YhuaAP0og9Y8cLXZFhzgGJA2Rl4oGlU63dH2rHJXOePeWyA9//8OYpAle/heLVrn51Bx SSCm1Qyzv057c3LhhUdFH7Q4eOIRJ5Uvfas/s=
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

For a research project implementation, I'm trying to allocate a large
buffer of memory in dom0, which would then be passed to the hypervisor
during a hypercall.  I've seen examples of this in xc_domain_save
where a dirty bitmap is passed during a log-dirty hypercall to get the
log-dirty status of pages.  It seems relatively straightforward in its
use of guest handles and copy_to_user().
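That pattern can be summarized in a short sketch (C-styled pseudocode against Xen internals, not the actual xc_domain_save source; the op structure and handler names here are invented for illustration):

```c
/* Sketch of the log-dirty pattern, simplified.  dom0 wraps a buffer's
 * virtual address in a guest handle and passes it down; the hypervisor
 * copies into it with copy_to_guest() while still executing on the
 * dom0 vcpu that issued the hypercall, so dom0's page tables are live. */

/* shared op structure (hypothetical) */
struct my_op {
    XEN_GUEST_HANDLE(uint8) dirty_bitmap;  /* dom0 virtual address, wrapped */
    uint64_t nr_pages;
};

/* hypervisor side, inside the hypercall handler */
static int handle_my_op(struct my_op *op, const uint8_t *bitmap,
                        size_t bitmap_bytes)
{
    /* Valid only because "current" is the calling dom0 vcpu. */
    if ( copy_to_guest(op->dirty_bitmap, bitmap, bitmap_bytes) )
        return -EFAULT;
    return 0;
}
```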

My situation is different in that the hypervisor must copy into the
shared memory buffer allocated by dom0 not during the hypercall
itself, but upon subsequent faults in another guest domain
(specifically, page faults).  The copy_to_user() method would fail
because at that point I would not be in dom0's address space, but in
the faulting guest domain's address space.

My approach was to provide a guest handle to the dom0 allocated buffer
via a hypercall.  The hypercall then determines the mfns of all the
pages of the dom0 buffer and stores this in a data structure in the
hypervisor.  The fault handler then maps the buffer pages into the
hypervisor address space using the mfns previously determined during
the hypercall.  The fault handler can then copy into dom0's buffer.
Am I on the right track or is this not feasible?
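The two halves of that scheme might look roughly like this (again C-styled pseudocode; gva_to_mfn() is a stand-in for whatever page-table walk your Xen version provides, and the global array is just the simplest possible stash):

```c
#define BUF_PAGES 256
static unsigned long buf_mfns[BUF_PAGES];   /* hypervisor-side state */

/* At hypercall time, while dom0's page tables are current, resolve
 * each page of the buffer to an MFN and record it.  The walk should
 * also take a reference on each page so dom0 cannot free or remap
 * the buffer underneath the hypervisor. */
static int record_buffer(struct domain *dom0, unsigned long va,
                         unsigned int nr_pages)
{
    unsigned int i;

    for ( i = 0; i < nr_pages; i++ )
    {
        unsigned long mfn = gva_to_mfn(dom0, va + i * PAGE_SIZE);
        if ( mfn == INVALID_MFN )
            return -EFAULT;
        buf_mfns[i] = mfn;
    }
    return 0;
}

/* Later, in the page-fault path of some *other* domain: no guest
 * handle is usable here, but the recorded MFN can be mapped
 * transiently into the hypervisor's own address space. */
static void log_fault(unsigned int slot, const void *rec, size_t len)
{
    void *p = map_domain_page(buf_mfns[slot]);
    memcpy(p, rec, len);
    unmap_domain_page(p);
}
```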

My implementation does not seem to work correctly.  I allocate a
buffer in dom0 using xg_memalign(), pin it down with lock_pages(), and
perform a hypercall in which I pass the virtual address of the buffer
via a guest_handle.  In the hypervisor, the hypercall translates the
virtual address to an mfn by walking dom0's page tables.  These
actions seem to work correctly.  The fault handler maps the mfn of the
buffer using map_domain_page(), and a copy is then done.  This seems to
be where it goes wrong.

Any ideas?  Have I completely misunderstood something?



Xen-devel mailing list
