WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
RE: [Xen-devel] Biweekly VMX status report. Xen: #21438 & Xen0: #a3e7c7.

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Xu, Jiajun" <jiajun.xu@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Biweekly VMX status report. Xen: #21438 & Xen0: #a3e7c7...
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Wed, 2 Jun 2010 21:33:00 +0800
Accept-language: en-US
Cc:
Delivery-date: Wed, 02 Jun 2010 06:33:59 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C82C0A56.16765%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <789F9655DD1B8F43B48D77C5D30659731E7ECF6D@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C82C0A56.16765%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acr7WP6tLI1/MbF5Rx6MiSRtri82UQACJ+lgACA1w7AAAgVQywAAD39BAVwtyIAABGU8GAAAFUsgAC8XDq8AArjzkAAAxWuBAAFqwYAABAb0fwACeBaQ
Thread-topic: [Xen-devel] Biweekly VMX status report. Xen: #21438 & Xen0: #a3e7c7...
Oops, I didn't notice this.
Thanks for your patch; I will test it tomorrow.

--jyh

>-----Original Message-----
>From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>Sent: Wednesday, June 02, 2010 8:17 PM
>To: Jiang, Yunhong; Xu, Jiajun; xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: Re: [Xen-devel] Biweekly VMX status report. Xen: #21438 & Xen0:
>#a3e7c7...
>
>That version of alloc_xenheap_pages is not built for x86_64.
>
> K.
>
>On 02/06/2010 11:23, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>
>> But in alloc_xenheap_pages(), we do unguard the page again, is that useful?
>>
>> --jyh
>>
>>> -----Original Message-----
>>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>>> Sent: Wednesday, June 02, 2010 5:41 PM
>>> To: Jiang, Yunhong; Xu, Jiajun; xen-devel@xxxxxxxxxxxxxxxxxxx
>>> Subject: Re: [Xen-devel] Biweekly VMX status report. Xen: #21438 & Xen0:
>>> #a3e7c7...
>>>
>>> On 02/06/2010 10:24, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>>>
>>>> (XEN) Pagetable walk from ffff83022fe1d000:
>>>> (XEN)  L4[0x106] = 00000000cfc8d027 5555555555555555
>>>> (XEN)  L3[0x008] = 00000000cfef9063 5555555555555555
>>>> (XEN)  L2[0x17f] = 000000022ff2a063 5555555555555555
>>>> (XEN)  L1[0x01d] = 000000022fe1d262 5555555555555555
>>>>
>>>> I really can't imagine how this can happen considering the vmx_alloc_vmcs()
>>>> is
>>>> so straight-forward. My test machine is really magic.
>>>
>>> Not at all. The free-memory pool was getting spiked with guarded (mapped
>>> not-present) pages. The later unlucky allocator is the one that then
>>> crashes.
>>>
>>> I've just fixed this as xen-unstable:21504. The bug was a silly typo.
>>>
>>> Thanks,
>>> Keir
>>>
>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel