WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-devel] yanked share problem

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Subject: Re: [Xen-devel] yanked share problem
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Wed, 14 Dec 2005 16:57:02 +0000
Cc: NAHieu <nahieu@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, "King, Steven R" <steven.r.king@xxxxxxxxx>
Delivery-date: Wed, 14 Dec 2005 17:06:23 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <571ACEFD467F7749BC50E0A98C17CDD802C06AB1@pdsmsx403>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <571ACEFD467F7749BC50E0A98C17CDD802C06AB1@pdsmsx403>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.8.3
> >> Could you tell me what happens if DomU_A crashes while DomU_B is
> >> still accessing the memory it has been granted? And moreover, how can
> >> DomU_A know that its friend has just "died"?
> >
> >AFAIK, A will be prevented from being fully destroyed until B drops the
> >reference to that page of memory.  The page will be around as long as B
> > wants it.
> >
> >Cheers,
> >Mark
>
> What if B is waiting for A's notification to end the reference, but A
> crashed before sending it out? One immediate example is the shared ring
> buffer between backend and frontend. The backend may not access the shared
> ring buffer once A has crashed, but that doesn't mean the backend won't
> access that address later, since the virtual address was legitimately
> allocated from the Linux buddy pool. We need to provide a way to notify
> the referencing side that something has gone wrong, and let it drop its
> references and release its local resources.

So, if the frontend domain crashes but the backend driver is still accessing 
the comms ring?

It won't actually break things if the backend accesses the comms ring for a 
crashed domain, it just won't be able to do sensible IO requests anymore.  In 
the case you describe, the virtual device in the backend would get destroyed, 
since the device would disappear out of Xenstore when the crashed domain is 
destroyed.  This will cause the backend to unmap the granted page, which'll 
then get returned to Xen (allowing the frontend domain to be fully 
destroyed).

> One possible way is to register a grant callback for each driver. When Xen
> detects that A has crashed, it notifies the registered callbacks. For
> example, the backend can register a callback which checks whether any
> references are ongoing; if so, it waits for those references to complete,
> then releases all references to the crashed domain's grant entries,
> releases local resources back to Linux, and exits the driver. After all
> callbacks are done, Xen then frees the machine page.

I think you should be able to achieve most of what you want by co-ordinating 
access to the share using Xenstore: you'll need to use the store to set up 
the location of the shared memory anyhow, so you might as well use it to be 
notified when the other domain goes away?

Does that sound about right?

Cheers,
Mark

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel