WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
Xen 
 
Home Products Support Community News
 
   
 

xen-devel

Re: [Xen-devel] Re: Interdomain comms

To: andrew.warfield@xxxxxxxxxxxx
Subject: Re: [Xen-devel] Re: Interdomain comms
From: Mike Wray <mike.wray@xxxxxx>
Date: Tue, 10 May 2005 09:31:12 +0100
Cc: Eric Van Hensbergen <ericvh@xxxxxxxxx>, Eric Van Hensbergen <ericvh@xxxxxxxxxxxxxxxxxxxxx>, Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>, "Ronald G. Minnich" <rminnich@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 10 May 2005 08:43:11 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <eacc82a4050508011979bda457@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <0BAE938A1E68534E928747B9B46A759A6CF3AC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <Pine.LNX.4.58.0505060940420.10357@xxxxxxxxxxxxxxx> <1115421185.4141.18.camel@localhost> <a4e6962a0505061719165b32e4@xxxxxxxxxxxxxx> <1115472417.4082.46.camel@localhost> <Pine.LNX.4.58.0505071009150.13088@xxxxxxxxxxxxxxx> <1115486227.4082.70.camel@localhost> <a4e6962a050507142932654a5e@xxxxxxxxxxxxxx> <1115503861.4460.2.camel@localhost> <a4e6962a050507175754700dc8@xxxxxxxxxxxxxx> <eacc82a4050508011979bda457@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.2 (X11/20050317)
Andrew Warfield wrote:
Hi Eric,

   Your thoughts on 9P are all really interesting -- I'd come across
the protocol years ago in looking into approaches to remote device/fs
access but had a hard time finding details.  It's quite interesting to
hear a bit more about the approach taken.

   Having a more accessible inter-domain comms API is clearly a good
thing, and extending device channels (in our terminology -- shared
memory + event notification) to work across a cluster is something
that we've talked about on several occasions at the lab.

   I do think though, that as mentioned above there are some concerns
with the VMM environment that make this a little trickier.  For the
general case of inefficient comms between VMs, using the regular IP
stack may be okay for many people.  The net drivers are being fixed up
to special-case local communications.

   For the more specific cases of FE/BE comms, I think the devil may
be in the details more than the current discussion is alluding to. Specifically:


c) As long as the buffers in question (both *buf and the buffer cache
entry) were page-aligned, etc. -- we could play clever VM games
marking the page as shared RO between the two partitions and alias the
virtual memory pointed to by *buf to the shared page.  This is very
sketchy and high level and I need to delve into all sorts of details
-- but the idea would be to use virtual memory as your friend for
these sort of shared read-only buffer caches.  It would also require
careful allocation of buffers of the right size on the right alignment
-- but driver writers are used to that sort of thing.


   Most of the good performance that Xen gets off of block and net
split devices are specifically because of these clever VM games. Block FEs pass page references down to be mapped directly for DMA. Net devices pass pages into a free pool, and actually exchange
physical pages under the feet of the VM as inbound packets are
demultiplexed.  The grant tables that have recently been added provide
separate mechanisms for the mapping and ownership transfer of pages
across domains.  In addition to these tricks, we make careful use of
timing event notification in order to batch messages.

It should be possible to still use the page mapping in the i/o transport.
The issue right now is that the i/o interface is very low-level and
intimately tangled up with the structs being transported.

And with the domain control channel there's an implicit assumption
that 'there can be only one'. This means for example, that domain A
using a device with backend in domain B can't connect directly to domain B,
but has to be 'introduced' by xend. It'd be better if it could connect
directly.

Something like what Harry proposes should still be able to use
page mapping for efficient local comms, but without _requiring_
it. This opens the way for alternative transports, such as network.

Rather than going straight for something very high-level, I'd prefer
to build up gradually, starting with a more general message transport
api that includes analogues to listen/connect/recv/send.


   In the case of the buffer cache that has come up several times in
the thread, a cache across domains would potentially neet to pass read
only page mappings as CoW in many situations, and a fault handler
somewhere would need to bring in a new page to the guest on a write. There are also a pile of complicating cases with regards cache
eviction from a BE domain, migration, and so on that make the
accounting really tricky.  I think it would be quite good to have a
discussion of generalized interdomain comms address the current
drivers, as well as a hypothetical buffer cache as potential cases. Does 9P already have hooks that would allow you to handle this sort of
per-application special case?

   Additionally, I think we get away with a lot in the current drivers
from a falure model that excludes transport.  The FE or BE can crash,
and the two drivers can be written defensively to handle that.  How
does 9P handle the strangenesses of real distribution?

Anyhow, very interesting discussion... looking forward to your thoughts.

a.


Mike

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

<Prev in Thread] Current Thread [Next in Thread>