[Xen-devel] Re: shared-memory filesystem

To: "King, Steven R" <steven.r.king@xxxxxxxxx>
Subject: [Xen-devel] Re: shared-memory filesystem
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Fri, 4 Nov 2005 01:10:02 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 04 Nov 2005 01:11:20 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <44BDAFB888F59F408FAE3CC35AB470410249FBA3@orsmsx409>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <44BDAFB888F59F408FAE3CC35AB470410249FBA3@orsmsx409>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.8.3
> Will your Xen guest memory sharing approach create OS portability
> headaches?

Yes, probably ;-)

Seriously, though, filesystem drivers are *extremely* OS-dependent.  XenFS 
probably even more so since it's very intimately tied to the memory 
management routines.  Some of it will be portable (I'm splitting these bits 
out), but there'll always need to be a large proportion of OS-dependent 
filesystem goop.  It'll certainly be a while before it works with anything 
other than Linux.

The other issue is that *fully* supporting XenFS will require source code 
access.  Windows would either need to use a dumb XenFS client, *or* have some 
outside assistance (Michael Vrable and I have been talking about similar 
techniques, though for slightly different reasons).

> It would obviously be desirable to maintain one general 
> scheme that works in *nix, Windows, etc.  Windows processes can map
> files with MapViewOfFile(), but my understanding is that creating a
> Windows file system is difficult.

It wouldn't surprise me!  My understanding is that Linux is one of the nicer 
OSes to write a filesystem for, although that information may be out of date 
(and it's faaaaaaar from straightforward).
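
For reference, the MapViewOfFile() route Steven mentions looks roughly like
this on the Windows side; the file name, mapping name and 4096-byte size
below are purely illustrative, not anything XenFS defines:

#include <windows.h>
#include <string.h>

int main(void)
{
    /* Open (or create) an ordinary file to back the shared region. */
    HANDLE file = CreateFileA("shared.dat", GENERIC_READ | GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    /* Create a named mapping object covering the first 4096 bytes. */
    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                        0, 4096, "xenfs_demo_mapping");
    if (mapping == NULL)
        return 1;

    /* Map a view into this process's address space and write to it. */
    char *view = (char *)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS,
                                       0, 0, 4096);
    if (view == NULL)
        return 1;
    strcpy(view, "hello from a Windows process");

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}

Another process on the same OS can attach to the same region with
OpenFileMappingA() and MapViewOfFile(); the hard part is the filesystem
driver underneath, not this client-side mapping.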

> Sending IOCTL's to exotic character devices is boring and not half as
> elegant, but isn't it the most portable approach?

The filesystem-based mmap trick basically comes "for free" as a result of my 
implementation.  If you really want the semantics of file-backed memory it 
may make more sense; if you just want plain shared memory the special device 
you propose would be better.

We should do both :-)
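
From an application's point of view the two options would look something
like the sketch below; the mount point and device name are invented for
illustration, nothing like them exists yet:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Option 1: file-backed sharing through a (hypothetical) XenFS mount.
     * Two applications in different domains mapping the same file would
     * see each other's writes, just like two processes in one domain. */
    int fd = open("/mnt/xenfs/shared_buf", O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return 1;
    if (ftruncate(fd, 4096) < 0)
        return 1;

    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED)
        return 1;
    strcpy(buf, "hello from domU A");

    /* Option 2 (the special-device idea) would replace the open() above
     * with something like open("/dev/xenshm", O_RDWR) plus an ioctl to
     * select the region to share -- device name and ioctl are made up
     * here, since no such driver exists. */

    munmap(buf, 4096);
    close(fd);
    return 0;
}

The file-backed path needs nothing new on the application side, whereas the
device route would need its own small API.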

Cheers,
Mark

> -steve
>
> -----Original Message-----
> From: Mark Williamson [mailto:mark.williamson@xxxxxxxxxxxx]
> Sent: Thursday, November 03, 2005 3:22 PM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Christopher Clark; King, Steven R; NAHieu
> Subject: Re: [Xen-devel] Question on xc_gnttab_map_grant_ref()
>
> To expand:
>
> I'm working towards a shared-memory NFS-style filesystem for Xen guests.
> This will allow high-performance data sharing within one host.  This
> leverages direct memory sharing to maximise performance and make better
> use of the available RAM.
>
> A bonus feature of this direct sharing approach is that applications
> running in different domains on the same host should be able to share
> memory by both using a simple mmap() call.  This avoids us having to
> introduce any new wacky semantics / exotic character devices; sharing
> should work similarly to the case of two applications in one domain.
>
> Cheers,
> Mark

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
