
[Xen-devel] Re: xencomm address space API

To: Hollis Blanchard <hollisb@xxxxxxxxxx>
Subject: [Xen-devel] Re: xencomm address space API
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Tue, 7 Feb 2006 14:28:04 +0000
Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 07 Feb 2006 14:33:17 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1139283935.13776.71.camel@xxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1139283935.13776.71.camel@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

> This means that the hypervisor must track multiple registered buffers
> per domain. (In the general case this could be an arbitrary number, but
> I guess it would need to be limited to prevent a domain from exhausting
> the Xen heap.)

I would expect all registering to be done by the guest kernel. The guest kernel has to register pages of memory that belong to the guest, and specify a location in the hypercall address space to register them at. Xen does use some memory to store the translation from hypercall address to physical address, but the transfer memory itself is provided by the guest.
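
To make the shape of that concrete, here is a rough sketch of what the registration step could look like. Every name here (struct xencomm_register, HYPERVISOR_xencomm_op, XENCOMM_register) is hypothetical, invented for illustration; the only point is that the guest donates the pages and Xen records nothing but the translation.

#include <stdint.h>

/* Hypothetical interface, purely for illustration: the guest kernel
 * donates its own pages and says where in the hypercall address space
 * they should appear; Xen stores only the hcall-address -> physical
 * translation. */
struct xencomm_register {
    uint64_t hcall_addr;   /* slot in the hypercall address space */
    uint32_t nr_pages;     /* number of guest frames that follow */
    uint64_t gpfn[8];      /* guest physical frame numbers */
};

#define XENCOMM_register 0 /* made-up command number */
long HYPERVISOR_xencomm_op(unsigned int cmd, void *arg); /* made up */

/* Guest-kernel side: register one page at start of day. */
static int register_hcall_chunk(uint64_t gpfn)
{
    struct xencomm_register reg = {
        .hcall_addr = 0,   /* first slot */
        .nr_pages   = 1,
        .gpfn       = { gpfn },
    };
    return HYPERVISOR_xencomm_op(XENCOMM_register, &reg);
}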

> That also means that each hcall must somehow indicate which buffer
> should be used with its arguments. I think that could be done by
> encoding the buffer ID into the memory reference, necessitating an API
> like this:

The alloc_buf() routine returns a handle. We provide a function for turning that into a pointer for the application/kernel to dereference. The handle is poked into hypercall structures/arguments where raw pointers would currently be passed.

By the way -- I mean that alloc_buf() is decoupled from registering hypercall memory. I expect the kernel to register a chunk of memory at start of day, and then run an allocator over that chunk. Only register more/bigger chunks when alloc_buf() cannot succeed.
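
A minimal sketch of the resulting API, with all names hypothetical: alloc_buf() carves space out of an already-registered chunk and returns an opaque handle, a helper turns the handle into a kernel pointer, and the handle itself is what gets stored in hypercall arguments.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical encoding: chunk ID in the top 16 bits, byte offset
 * within that chunk in the bottom 48. */
typedef uint64_t xencomm_handle_t;
#define XENCOMM_CHUNK_SHIFT 48
#define XENCOMM_OFFSET_MASK ((1ULL << XENCOMM_CHUNK_SHIFT) - 1)

struct xencomm_chunk {
    void  *base;           /* kernel mapping of the registered chunk */
    size_t size;
    /* ... allocator state (free list or bitmap) lives here ... */
};
extern struct xencomm_chunk xencomm_chunks[];  /* registered at boot */

/* Allocate hypercall-capable space; returns a handle, not a pointer.
 * Registering another chunk on failure is left out of the sketch. */
xencomm_handle_t alloc_buf(size_t len);

/* Turn a handle into something the local kernel can dereference. */
static inline void *buf_ptr(xencomm_handle_t h)
{
    struct xencomm_chunk *c = &xencomm_chunks[h >> XENCOMM_CHUNK_SHIFT];
    return (char *)c->base + (h & XENCOMM_OFFSET_MASK);
}

A caller would then do h = alloc_buf(len); memcpy(buf_ptr(h), src, len); and place h in the hypercall structure where a raw pointer used to go.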

> In Xen, copy_from_user(xenbuf, memref) would then decode memref to
> figure out what buffer was being referred to. copy_from_user would then
> need to understand the data structures used by userland to track the
> memory references within the buffer.

Callers of copy_from_user() already know where the user pointers/handles are that need special treatment.
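
On the Xen side the decode is then mechanical: the handle's top bits select the domain's registered chunk, and the low bits are an offset into it. A sketch, reusing the handle layout from the guest-side sketch above; chunk_lookup() and copy_from_chunk() are invented helpers, and only current and the general shape follow real Xen conventions.

/* Xen-side sketch: decode a handle back to registered guest memory. */
struct domain_chunk {
    unsigned long *mfn;    /* machine frames the guest registered */
    size_t         size;   /* chunk size in bytes */
};

struct domain_chunk *chunk_lookup(struct domain *d, unsigned int id);
unsigned long copy_from_chunk(void *to, struct domain_chunk *c,
                              unsigned long off, unsigned long n);

unsigned long copy_from_user(void *to, xencomm_handle_t from,
                             unsigned long n)
{
    struct domain_chunk *c =
        chunk_lookup(current->domain, from >> XENCOMM_CHUNK_SHIFT);
    unsigned long off = from & XENCOMM_OFFSET_MASK;

    if ( c == NULL || off + n > c->size )
        return n;          /* nothing copied, Linux-style convention */
    return copy_from_chunk(to, c, off, n);
}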

> Problem #2: Spanning pages is still really difficult. One possible
> solution (different from above) would be to have the kernel reserve some
> physically contiguous pages, and then export that area by having
> userland mmap some device.

Easy for the application/kernel, where the pages can be mapped contiguously. It is only a problem for ppc Xen, which does not run with paging enabled. But page crossings can be hidden inside copy_to_user/copy_from_user.
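
That hiding could look like the following: the copy routine walks the chunk's registered frames one page at a time, so a buffer that straddles a page boundary needs no special treatment by callers. map_domain_page()/unmap_domain_page() are real Xen primitives for temporary hypervisor mappings; the rest of the names carry over from the hypothetical sketches above.

/* Copy 'n' bytes starting at byte offset 'off' within a registered
 * chunk, one page at a time, so page crossings stay invisible to
 * callers of copy_from_user(). */
unsigned long copy_from_chunk(void *to, struct domain_chunk *c,
                              unsigned long off, unsigned long n)
{
    char *dst = to;

    while ( n > 0 )
    {
        unsigned long pg    = off >> PAGE_SHIFT;
        unsigned long pgoff = off & (PAGE_SIZE - 1);
        unsigned long bytes = PAGE_SIZE - pgoff;

        if ( bytes > n )
            bytes = n;

        /* Temporary hypervisor mapping of the guest's frame. */
        void *src = map_domain_page(c->mfn[pg]);
        memcpy(dst, (char *)src + pgoff, bytes);
        unmap_domain_page(src);

        dst += bytes;
        off += bytes;
        n   -= bytes;
    }
    return 0;              /* all bytes copied */
}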

> Problem #3: We need to know beforehand the maximum number of bytes
> needed for the buffer.

Nope, I don't think so: as above, the kernel just registers more or bigger chunks whenever alloc_buf() cannot satisfy a request.

> Problem #4: The kernel must track the buffers that userland registered,
> and unregister them when the process dies, since it may not have been
> able to unregister them properly.

Yes. Applications should get hypercall-capable memory by mmap()ing a device file (e.g., privcmd). That memory can then be resource-tracked, so the kernel can unregister it even if the process dies without doing so itself.
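
From the application side that would be an ordinary mmap() of the device, with the kernel's mmap and release handlers doing the register/unregister so that cleanup happens even when the process exits uncleanly. A sketch, assuming the historical /proc/xen/privcmd node and assuming it grows this mapping semantic:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/proc/xen/privcmd", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* The kernel's mmap handler would register these pages with Xen. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* 'buf' is now backed by hypercall-capable registered pages;
     * handles into it go into hypercall arguments. */

    munmap(buf, 4096);     /* or just exit: the release handler cleans up */
    close(fd);
    return 0;
}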

> This mail isn't comprehensive, but I think gives some idea of the
> complexity involved. So a solution like replacing pointers with embedded
> structures is far more attractive.

Not sure what you mean. Can you give an example?

 -- Keir

> --
> Hollis Blanchard
> IBM Linux Technology Center



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel