[Xen-devel] Re: [PATCH] mem_sharing: fix race condition of nominate and unshare

To: Jui-Hao Chiang <juihaochiang@xxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
From: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Date: Thu, 13 Jan 2011 09:24:27 +0000
Cc: MaoXiaoyun <tinnycloud@xxxxxxxxxxx>, xen devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 13 Jan 2011 01:25:53 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTimOz_uauDEnu_XaPEgwD1EZJWEgOO1oiFccFNs1@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTinmpiusLqegGZA+bZWpDXPM+7Wq2nt8MZa0Ocet@xxxxxxxxxxxxxx> <AANLkTinf-A_4NEPQeCw0pftM5Bks8BYPRhMx3-stTHxa@xxxxxxxxxxxxxx> <BLU157-ds1E01DEBE18840E5FBB6D1DA0E0@xxxxxxx> <AANLkTikBaB5awvRu3Sn3WfuoS3LRmeBH=pG8c7H1n4Cw@xxxxxxxxxxxxxx> <AANLkTinoXve=zBzB9qN1qXRz+iJmhiQ+-gB7MwFoY5Dg@xxxxxxxxxxxxxx> <20110112105405.GH5651@xxxxxxxxxxxxxxxxxxxxxxx> <BLU157-w59C63325262D0BE99E6C43DAF10@xxxxxxx> <20110112140223.GI5651@xxxxxxxxxxxxxxxxxxxxxxx> <BLU157-w1861EFE53CB51FC710011FDAF10@xxxxxxx> <AANLkTimOz_uauDEnu_XaPEgwD1EZJWEgOO1oiFccFNs1@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.20 (2009-06-14)
At 02:26 +0000 on 13 Jan (1294885615), Jui-Hao Chiang wrote:
> There seems to be a typo:
> "if ( get_page_and_type(page, d, PGT_shared_page) )" should be changed
> to "if ( !get_page_and_type(page, d, PGT_shared_page) )"

Oops!  Yes, thanks for that. :)
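
The corrected check in the nominate path would then look something like
this (just a sketch -- the error label and return value below are
illustrative, I haven't checked them against the tree):

    if ( !get_page_and_type(page, d, PGT_shared_page) )
    {
        /* Couldn't take a PGT_shared_page type reference, so the page
         * can't be converted; bail out of the nominate. */
        ret = -EEXIST;
        goto out;
    }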

> (2) The major problem is that __put_page_type() never handles the
> special case for shared pages.
> 
> If (1) is changed as I said, the problem still exists, as shown below:
> /* Before nominating domain 1, gfn 0x63 */
> (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
> ci=8000000000000002, ti=0, owner_id=1
> /* After a failed nominate  [desired: ci=8000000000000002, ti=0]*/
> (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
> ci=8000000000000002, ti=8400000000000000, owner_id=1

Is this causing a real problem other than this printout?

One of the reasons to use get_page_and_type/put_page_and_type was that
it gets rid of the code that requires pages to have PGT_none before
they're shared. 

As I have been trying to explain, when a page has typecount 0 its type
is only relevant for the TLB flushing logic.  If there's still a place
in the page-sharing code that relies on (type == PGT_none && count == 0)
then AFAICS that's a bug. 
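
To spell that out, the pattern I'm after is roughly the following (a
sketch, not a quote of the patch; the return values are illustrative):

    /* Nominate: take a PGT_shared_page type reference, or fail. */
    if ( !get_page_and_type(page, d, PGT_shared_page) )
        return -EEXIST;

    /* Unshare: drop the reference again.  Once the typecount falls to
     * zero, whatever is left in type_info only matters to the TLB-flush
     * logic, so nothing should inspect it directly. */
    put_page_and_type(page);

    /* ...which means an up-front check like this one (or any equivalent
     * test of the raw type bits) should be gone: */
    if ( (page->u.inuse.type_info & PGT_type_mask) != PGT_none )
        return -EEXIST;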

Cheers,

Tim.

> 2011/1/12 MaoXiaoyun <tinnycloud@xxxxxxxxxxx>:
> > Hi Tim:
> >
> >         That's it. I am running the test; so far so good. I'll test
> > more, thanks.
> >
> >       Currently the tapdisk code indicates that only *read only* IO
> > with secs 8 has a chance to be shared.  Does that mean only the parent
> > image can be shared, and that it still needs to be opened read only?
> >
> >       So it looks like page sharing refers to those pages containing
> > disk data that has been loaded into the guest's IO buffers, and that
> > this is what page sharing means in Xen, right?
> >
> >
> 
> Bests,
> Jui-Hao

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
