To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] tmem: fix to 20945 "When tmem is enabled, reserve a fraction of memory"
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Thu, 18 Feb 2010 07:37:49 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>

If you don't want verbosity in the failed-allocation path, remove your
printk; note that the non-tmem code makes no noise in this case.
tmem_relinquish_pages() takes an order parameter, and the normal path
through the allocator calls it unconditionally. Hence, from the caller's
point of view, it doesn't make sense to treat order==0 and order>=9
differently here either.
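
Schematically, the path in question is shaped like this. This is a
simplified, self-contained model of the logic, not the literal
page_alloc.c code; find_free_chunk() is a stand-in name and the
signatures are reduced to the essentials:

    #include <stdio.h>
    #include <stddef.h>

    struct page_info { int dummy; };

    /* Stand-in for the buddy allocator's normal search; pretend the
     * heap is exhausted so the fallback path gets exercised. */
    static struct page_info *find_free_chunk(unsigned int order)
    {
        (void)order;
        return NULL;
    }

    /* tmem only keeps order-0 pages, so higher orders always fail. */
    static struct page_info *tmem_relinquish_pages(unsigned int order)
    {
        static struct page_info page;
        return ( order > 0 ) ? NULL : &page;
    }

    static struct page_info *alloc_heap_pages(unsigned int order)
    {
        struct page_info *pg = find_free_chunk(order);

        /* The fallback is taken unconditionally, whatever the order,
         * so the caller need not special-case order 0 vs order >= 9. */
        if ( pg == NULL )
            pg = tmem_relinquish_pages(order);

        return pg;  /* NULL here is the "fail" path */
    }

    int main(void)
    {
        printf("order 9: %s\n", alloc_heap_pages(9) ? "ok" : "fail");
        printf("order 0: %s\n", alloc_heap_pages(0) ? "ok" : "fail");
        return 0;
    }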

 -- Keir

On 17/02/2010 21:49, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

> Hi Keir --
> 
> Hmmm... one other consequence of the change you made to the patch
> (as checked in as 20955 in staging) is that every attempt to allocate
> a 2MB page for a new domain (when memory is scarce) will result
> in a complaint printk'ed from tmem_relinquish_pages() before
> the domain builder falls back to order==0 pages.  This
> verbosity is probably not desirable in a product, though
> it may be very desirable with debug enabled as we track
> down other order>0 allocations.
> 
> Changing back to the "goto fail" avoids the verbosity without
> losing the debug capability.
> 
> Dan
> 
>> -----Original Message-----
>> From: Dan Magenheimer
>> Sent: Wednesday, February 17, 2010 8:13 AM
>> To: Jan Beulich; Keir Fraser
>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Subject: RE: [PATCH] tmem: fix to 20945 "When tmem is enabled, reserve
>> a fraction of memory"
>> 
>>> From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
>>> Subject: Re: [PATCH] tmem: fix to 20945 "When tmem is enabled,
>>> reserve a fraction of memory"
>>> 
>>>>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 17.02.10 13:10 >>>
>>>> On 16/02/2010 18:30, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx>
>>> wrote:
>>>> 
>>>>> +        if ( order == 0 )
>>>>> +            goto try_tmem;
>>>>> +        if ( order >= 9 )
>>>>> +            goto fail;
>>>> 
>>>> Why not try_tmem in the case that order>=9, too, rather than fail
>>>> outright?
>>> 
>>> It could be done that way, but wouldn't have any effect, as tmem
>>> doesn't even try to relinquish any memory when order > 0.
>> 
>> Correct.  To explain (if anyone is interested):
>> 
>> Tmem maintains queues of order==0 pages internally because any page
>> released to the xenheap/domheap must be scrubbed, yet tmem is highly
>> likely to use the page again (and SOON).  If a page were scrubbed on
>> free and tmem immediately reclaimed it, the scrubbing would be
>> wasted cycles.  But if an unscrubbed page were instead handed to
>> some other xenheap/domheap allocation, its contents could reveal
>> data from another domain, so that would be a potential security
>> hole.
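>> 
>> Roughly, in code (a much simplified, self-contained illustration,
>> not the actual xen/common/tmem.c; all the names here are
>> stand-ins):
>> 
>>     #include <string.h>
>>     #include <stddef.h>
>> 
>>     #define PAGE_SIZE 4096
>> 
>>     struct page { unsigned char data[PAGE_SIZE]; struct page *next; };
>> 
>>     static struct page *tmem_free_list;
>> 
>>     static void scrub_one_page(struct page *pg)
>>     {
>>         memset(pg->data, 0, PAGE_SIZE);  /* erase stale guest data */
>>     }
>> 
>>     /* Free a page into tmem's internal queue *without* scrubbing:
>>      * tmem will very likely reuse it soon, so scrubbing here would
>>      * usually be wasted cycles. */
>>     static void tmem_page_free(struct page *pg)
>>     {
>>         pg->next = tmem_free_list;
>>         tmem_free_list = pg;
>>     }
>> 
>>     /* Hand a page back to the xenheap/domheap: now it MUST be
>>      * scrubbed, or its stale contents could leak one domain's
>>      * data to another. */
>>     static struct page *tmem_relinquish_page(void)
>>     {
>>         struct page *pg = tmem_free_list;
>> 
>>         if ( pg == NULL )
>>             return NULL;
>>         tmem_free_list = pg->next;
>>         scrub_one_page(pg);
>>         return pg;
>>     }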
>> 
>> When a domain is being created, a large number of pages
>> may be (scrubbed and) transferred from tmem to Xen/domheap.
>> While this transfer, in combination with the "buddying"
>> in xenheap/domheap, may result in some order>0 chunks of
>> memory, there is no guarantee that it will.
>> 
>> I considered adding some kind of "buddying" to tmem's "free"
>> pages (and the interface to tmem_relinquish_pages() from
>> alloc_heap_pages() allows for an order>0 to be requested),
>> but due to fragmentation it would only rarely have any
>> value, especially for order>1, so I never implemented it.
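>> 
>> Continuing the sketch above, the relinquish interface itself then
>> reduces to something like this (again illustrative, not the real
>> code):
>> 
>>     #include <stdio.h>   /* printf stands in for Xen's printk */
>> 
>>     /* Called from the allocator's fallback path.  The interface
>>      * accepts any order, but with no buddying of tmem's free
>>      * pages only order==0 requests can ever succeed; the order>0
>>      * failure is where the complaint printk fires. */
>>     static struct page *tmem_relinquish_pages(unsigned int order)
>>     {
>>         if ( order > 0 )
>>         {
>>             printf("tmem: failing relinquish, order=%u\n", order);
>>             return NULL;
>>         }
>>         return tmem_relinquish_page();  /* scrub-and-pop, as above */
>>     }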
>> 
>> So, in the end, the real solution is to change any allocations in
>> Xen, at least those that occur after dom0 is running, so that they
>> no longer require order>0.
>> 
>> Dan


