[Xen-devel] Re: [PATCH 3/3] libvchan: interdomain communications library

On Wed, 2011-09-21 at 17:31 +0100, Daniel De Graaf wrote:
> On 09/21/2011 06:53 AM, Ian Campbell wrote:
> > On Mon, 2011-09-19 at 23:43 +0100, Daniel De Graaf wrote:
> >> This library implements a bidirectional communication interface between
> >> applications in different domains, similar to unix sockets. Data can be
> >> sent using the byte-oriented libvchan_read/libvchan_write or the
> >> packet-oriented libvchan_recv/libvchan_send.
> >>
> >> Channel setup is done using a client-server model; domain IDs and a port
> >> number must be negotiated prior to initialization. The server allocates
> >> memory for the shared pages and determines the sizes of the
> >> communication rings (which may span multiple pages, although the default
> >> places rings and control within a single page).
> >>
> >> With properly sized rings, testing has shown that this interface
> >> provides speed comparable to pipes within a single Linux domain; it is
> >> significantly faster than network-based communication.
> >>
> >> Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
> > 
> > I only skimmed this one; I had a few minor thoughts below, but really
> > I'm pretty much OK for it to go in (modulo any fallout from comments
> > on patches 1+2).
> > 
> > Definite Bonus Points for the doxygen/kernel-doc commentary in the
> > headers; which tool parses them? (A few comments in the code itself seem
> > to have the "/**" marker but not the rest of the syntax.)
>  
> I think doxygen parses them, but I haven't personally run doxygen to
> verify that it works as expected.

That's ok, just having the comments at all is much appreciated!
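
(For reference, the fuller doxygen syntax looks something like the sketch
below. This is illustrative only; the parameter names and semantics here
are invented, not taken from the patch:

  /**
   * Read up to size bytes from the vchan into the supplied buffer.
   *
   * @param ctrl the libvchan control structure
   * @param data buffer to read into
   * @param size maximum number of bytes to read
   * @return number of bytes read, or -1 on error
   */
  int libvchan_read(struct libvchan *ctrl, void *data, size_t size);

i.e. an @param line per argument and an @return line, which doxygen will
pick up when generating the docs.)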

> >> +static int init_gnt_srv(struct libvchan *ctrl)
> >> +{
> >> +       int pages_left = ctrl->read.order >= PAGE_SHIFT ? 1 << 
> >> (ctrl->read.order - PAGE_SHIFT) : 0;
> > 
> > Here you do >= PAGE_SHIFT but on the out_unmap_left path you do > 11.
> > 
> > (am I right that left == server and right == client in the libvchan
> > terminology?)
> > 
> 
> From the public/io/libvchan.h header:
>   * Standard consumer/producer interface, one pair per buffer
>   * left is client write, server read
>   * right is client read, server write
> 
> So the client is on the left, if you assume the writer owns the buffer.

Heh, I guess having praised the docs I should read them ;-)
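
(Restating that as a sketch, just to make sure I have the directions
straight; this is illustrative only, not the actual layout:

  /* illustrative only:
   *   left ring:  client libvchan_write  ->  server libvchan_read
   *   right ring: server libvchan_write  ->  client libvchan_read
   */

so each direction gets its own producer/consumer ring.)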

> > What is the significance of 2^24?
> > 
> 
> Actually, this should be 20 to match MAX_RING_SIZE in init.c;

OK, then I think MAX_RING_SIZE should be in a header and reused here
instead of a hard-coded 20 or 24.
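
(Something along these lines in a shared header would do; this is a
sketch only, and MAX_RING_SHIFT is a name I have just invented here:

  /* sketch: one shared limit instead of a magic 20/24 */
  #define MAX_RING_SHIFT 20
  #define MAX_RING_SIZE  (1 << MAX_RING_SHIFT)

so init.c and this check stay in sync.)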

>  that number
> is derived from 1024 bytes of grant data. An order of 22 will always cause
> the list of grants to overrun the primary shared page; an order of 21 on
> both sides will also cause this, and can also cause the grant list to overlap
> the LARGE_RING area. From testing, the performance gain from larger rings
> begins to drop off before 2^20 (although this may depend on the size of
> your L2/L3 cache). Also, gntalloc is restricted to 1024 pages by default
> (this can be adjusted via sysfs or a module parameter).

Makes sense. 
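
(Working that through, as a sketch with the usual assumptions of 4-byte
grant references and 4 KiB pages:

  1024 bytes of grant data / 4 bytes per grant ref = 256 grants
  256 pages * 4096 bytes per page = 1048576 bytes = 2^20

so the 2^20 cap falls straight out of the 1024-byte grant area.)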

[...]
> Allowing the caller to specify the xenstore path would make the interface
> more flexible, and would also remove the arbitrary port numbers, which
> already seem likely to collide.

Agreed.
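
Purely as a sketch of what that might look like (the function names and
signatures below are invented for illustration, not taken from the patch):

  /* hypothetical, not the current libvchan API */
  struct libvchan *libvchan_server_init(int peer_domid,
                                        const char *xs_path,
                                        size_t read_ring_size,
                                        size_t write_ring_size);
  struct libvchan *libvchan_client_init(int peer_domid,
                                        const char *xs_path);

with the byte-oriented libvchan_read/libvchan_write (and the
packet-oriented libvchan_recv/libvchan_send) unchanged on top of that.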

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel