

[Xen-devel] Re: [PATCH] mem_sharing: fix race condition of nominate and unshare

To: MaoXiaoyun <tinnycloud@xxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
From: Jui-Hao Chiang <juihaochiang@xxxxxxxxx>
Date: Thu, 13 Jan 2011 10:26:55 +0800
Cc: xen devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, tim.deegan@xxxxxxxxxx
Delivery-date: Wed, 12 Jan 2011 18:27:47 -0800
In-reply-to: <BLU157-w1861EFE53CB51FC710011FDAF10@xxxxxxx>
References: <AANLkTinMp1v1zex2BfcUuszotPuxJFWZQNUp40gu_gxL@xxxxxxxxxxxxxx> <20110106165450.GO21948@xxxxxxxxxxxxxxxxxxxxxxx> <AANLkTinmpiusLqegGZA+bZWpDXPM+7Wq2nt8MZa0Ocet@xxxxxxxxxxxxxx> <AANLkTinf-A_4NEPQeCw0pftM5Bks8BYPRhMx3-stTHxa@xxxxxxxxxxxxxx> <BLU157-ds1E01DEBE18840E5FBB6D1DA0E0@xxxxxxx> <AANLkTikBaB5awvRu3Sn3WfuoS3LRmeBH=pG8c7H1n4Cw@xxxxxxxxxxxxxx> <AANLkTinoXve=zBzB9qN1qXRz+iJmhiQ+-gB7MwFoY5Dg@xxxxxxxxxxxxxx> <20110112105405.GH5651@xxxxxxxxxxxxxxxxxxxxxxx> <BLU157-w59C63325262D0BE99E6C43DAF10@xxxxxxx> <20110112140223.GI5651@xxxxxxxxxxxxxxxxxxxxxxx> <BLU157-w1861EFE53CB51FC710011FDAF10@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi, all:

I think there is still a problem.
(1) I think using get_page_and_type() is definitely better, since the
function is already implemented there. However, there seems to be a typo:
"if ( get_page_and_type(page, d, PGT_shared_page) )" should be changed
to "if ( !get_page_and_type(page, d, PGT_shared_page) )", because the
function returns 1 on success.

(2) The major problem is that __put_page_type() never handles the
special case for shared pages.

If (1) is changed as I said, the problem still exists, as shown in the following:
/* Before nominating domain 1, gfn 0x63 */
(XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
ci=8000000000000002, ti=0, owner_id=1
/* After a failed nominate  [desired: ci=8000000000000002, ti=0]*/
(XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
ci=8000000000000002, ti=8400000000000000, owner_id=1
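
(For reference, the "ci"/"ti" values above are the page's count_info and
type_info; a dump like this can be produced roughly as follows. This is a
sketch under my assumptions, not necessarily the exact debug code used here.)

    /* Rough sketch of a gfn debug dump: look up the MFN backing the gfn
     * and print the page's reference counts and owner. */
    static void debug_gfn(struct domain *d, unsigned long gfn)
    {
        p2m_type_t p2mt;
        mfn_t mfn = gfn_to_mfn(d, gfn, &p2mt);
        struct page_info *page = mfn_to_page(mfn);

        printk("Debug for domain=%d, gfn=%lx, Debug page: MFN=%lx is "
               "ci=%lx, ti=%lx, owner_id=%d\n",
               d->domain_id, gfn, mfn_x(mfn),
               page->count_info, page->u.inuse.type_info,
               page_get_owner(page)->domain_id);
    }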

2011/1/12 MaoXiaoyun <tinnycloud@xxxxxxxxxxx>:
> Hi Tim:
>         That's it, I am running the test, so far so good, I'll test more,
> thanks.
>       Currently the tapdisk code indicates that only *read only* IO
> with secs 8 has a chance to be shared, so does that mean only the
> parent image can be shared, and that it still needs to be opened
> read-only, right?
>       So it looks like page sharing refers to pages of disk data that
> have been loaded into guest IO buffers, and that this is what page
> sharing in Xen means, right?


