xen-devel

Re: [Xen-devel] Re: Interdomain comms

To: andrew.warfield@xxxxxxxxxxxx
Subject: Re: [Xen-devel] Re: Interdomain comms
From: Eric Van Hensbergen <ericvh@xxxxxxxxx>
Date: Sun, 8 May 2005 10:27:43 -0500
Cc: Eric Van Hensbergen <ericvh@xxxxxxxxxxxxxxxxxxxxx>, Mike Wray <mike.wray@xxxxxx>, Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>, "Ronald G. Minnich" <rminnich@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 08 May 2005 15:27:23 +0000
In-reply-to: <eacc82a4050508011979bda457@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <0BAE938A1E68534E928747B9B46A759A6CF3AC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <1115421185.4141.18.camel@localhost> <a4e6962a0505061719165b32e4@xxxxxxxxxxxxxx> <1115472417.4082.46.camel@localhost> <Pine.LNX.4.58.0505071009150.13088@xxxxxxxxxxxxxxx> <1115486227.4082.70.camel@localhost> <a4e6962a050507142932654a5e@xxxxxxxxxxxxxx> <1115503861.4460.2.camel@localhost> <a4e6962a050507175754700dc8@xxxxxxxxxxxxxx> <eacc82a4050508011979bda457@xxxxxxxxxxxxxx>
Reply-to: Eric Van Hensbergen <ericvh@xxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On 5/8/05, Andrew Warfield <andrew.warfield@xxxxxxxxx> wrote:
> Hi Eric,
> 
>    Your thoughts on 9P are all really interesting -- I'd come across
> the protocol years ago in looking into approaches to remote device/fs
> access but had a hard time finding details.  It's quite interesting to
> hear a bit more about the approach taken.
>

There's a bigger picture that Ron and I have discussed, but there are
lots of details that need to be resolved/proven before it's worth
talking about.  The overall goal we are working towards is a simple
interface with equivalent or better performance.  The generality of
the approach we have discussed is appealing, but we are well aware it
won't be adopted unless we can show competitive performance and
reliability.

> 
>    For the more specific cases of FE/BE comms, I think the devil may
> be in the details more than the current discussion is alluding to.
> 

I agree completely; in fact, there are likely several devils unique to
each target architecture ;)  But that's just the sort of thing that
keeps systems programming interesting.  These sorts of things will only
be worked out once we have a prototype to drive into the various
brick walls.

> 
> There are also a pile of complicating cases with regard to cache
> eviction from a BE domain, migration, and so on that make the
> accounting really tricky.  I think it would be quite good to have a
> discussion of generalized interdomain comms address the current
> drivers, as well as a hypothetical buffer cache as potential cases.
> Does 9P already have hooks that would allow you to handle this sort of
> per-application special case?
>

Page table and cache manipulation would likely sit below the 9P layer
to keep things portable and abstract (as I've said earlier, perhaps
Harry's IDC proposal is the right layer to handle such things).  9P as
a protocol is quite flexible, but existing implementations are
somewhat simplistic and limited in regard to more advanced buffer/page
sharing capabilities.  We are looking at some of these issues in our
work on using Plan 9 and 9P in HPC cluster environments (over
cluster interconnects) -- in fact, the idea of using 9P between VMM
partitions fell out of that work.
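
To make the layering concrete, here's a rough sketch of how a 9P
message might be carried over a shared-memory ring between a front-end
and a back-end domain, with bulk payloads handed off by grant reference
below the 9P layer.  This is purely illustrative -- the names
(v9ring_slot, V9RING_SLOTS, gref) are made up for the example and don't
correspond to anything in the current Xen or Plan 9 trees:

/* Illustrative only: a shared ring carrying 9P messages between domains. */
#include <stdint.h>

#define V9RING_SLOTS  32     /* slots per direction                      */
#define V9MSG_INLINE  120    /* small messages are copied inline         */

struct v9ring_slot {
    uint32_t len;            /* 9P message length (the size[4] field)    */
    uint8_t  type;           /* 9P message type, e.g. Twrite/Rread       */
    uint16_t tag;            /* 9P tag matching T- and R-messages        */
    uint32_t gref;           /* grant reference for a bulk payload page,
                                0 if the payload fits in inline_buf      */
    uint8_t  inline_buf[V9MSG_INLINE];
};

struct v9ring {
    volatile uint32_t req_prod, req_cons;   /* FE -> BE requests  */
    volatile uint32_t rsp_prod, rsp_cons;   /* BE -> FE responses */
    struct v9ring_slot req[V9RING_SLOTS];
    struct v9ring_slot rsp[V9RING_SLOTS];
};

The point is just that the grant/page-flipping machinery stays in the
transport; the 9P client and server above it see ordinary messages.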

> 
>    Additionally, I think we get away with a lot in the current drivers
> from a failure model that excludes transport.  The FE or BE can crash,
> and the two drivers can be written defensively to handle that.  How
> does 9P handle the strangenesses of real distribution?
> 

With existing 9P implementations, defensive drivers are the way to go.
There have been three previous solutions proposed for failure
detection and recovery with 9P: one handled recovery of 9P state
between client and server, another handled reliability of the
underlying RPC transport, and the third was a layered file server
providing defensive semantics.  As I said earlier, we'll be exploring
these more fully (along with looking at failover) this summer --
specifically in the context of VMMs.
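
As a sketch of what that defensive layering might look like, a wrapper
around each RPC could retry through a reconnect and re-attach.  The
helper names here (p9_rpc, p9_reconnect, p9_reattach) are hypothetical
placeholders rather than an existing client API:

#include <errno.h>

struct p9_client;   /* opaque client handle (hypothetical)            */
struct p9_fcall;    /* marshalled 9P request/response (hypothetical)  */

/* Assumed to exist in some form in a real client implementation. */
int p9_rpc(struct p9_client *c, struct p9_fcall *tx, struct p9_fcall *rx);
int p9_reconnect(struct p9_client *c);  /* rebuild the transport       */
int p9_reattach(struct p9_client *c);   /* redo Tversion/Tattach       */

/* Retry an RPC across transport failures; 9P-level errors (Rerror,
   bad fid, and so on) are returned to the caller untouched. */
int p9_rpc_defensive(struct p9_client *c, struct p9_fcall *tx,
                     struct p9_fcall *rx, int max_retries)
{
    int err = -EIO;
    int i;

    for (i = 0; i <= max_retries; i++) {
        err = p9_rpc(c, tx, rx);
        if (err != -EIO && err != -ECONNRESET)
            return err;              /* success, or a non-transport error */
        if (p9_reconnect(c) < 0 || p9_reattach(c) < 0)
            break;                   /* transport is really gone          */
    }
    return err;
}

Whether that recovery state lives in the client, in the transport, or
in a layered file server on top is exactly the sort of thing we want to
pin down over the summer.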

9P isn't a magic bullet here, and there will be lots of issues that
need to be dealt with either in the layer above it or the layer below it.

        -eric

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
