 

xen-devel

[Xen-devel] Xen Interdomain Semaphore

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Xen Interdomain Semaphore
From: Timothy Hayes <hayesti@xxxxxx>
Date: Mon, 9 Mar 2009 21:25:50 +0000
Delivery-date: Mon, 09 Mar 2009 14:26:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi,
 
I'm working with some data that has been mapped into the address spaces of two virtual machines (this is kernel module code). Right now, concurrent access to that data races, so naturally I need a synchronisation primitive like a mutex or a semaphore. I'm not 100% certain, but I suspect a Linux kernel semaphore won't work as expected here: it puts a process to sleep on a wait() call and wakes one up on a signal() call, but the domain issuing the signal() won't be the same domain that created the sleeping process.
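 
To make it concrete, the best I've managed so far is a naive test-and-set spinlock over a word in the shared mapping, something like the sketch below (shared_region, shared_lock and shared_unlock are just my own names, and it assumes both domains have a cache-coherent view of the same machine frame):
 
#include <linux/types.h>
#include <asm/processor.h>      /* cpu_relax() */

struct shared_region {
        volatile uint32_t lock;  /* 0 = free, 1 = held */
        /* ... shared data follows ... */
};

static void shared_lock(struct shared_region *r)
{
        /* Atomically swap 1 into the lock word; loop until the old value was 0. */
        while (__sync_lock_test_and_set(&r->lock, 1))
                cpu_relax();
}

static void shared_unlock(struct shared_region *r)
{
        __sync_lock_release(&r->lock);  /* store 0 with release semantics */
}
 
That sidesteps the scheduler entirely, but the obvious downside is that if the lock holder's VCPU gets descheduled by Xen, the other domain burns CPU spinning until it runs again, which is why I'd rather have a proper sleeping semaphore.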
 
I'm wondering whether an interdomain semaphore for Xen exists; maybe someone has written one already? Or perhaps there are some "best practices" for this kind of cross-domain synchronisation. Any tips would be much appreciated.
 
Kind regards
Tim Hayes
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel