WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel


To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC] design/API for plugging tmem into existing xen physical memory management code
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Sat, 14 Feb 2009 20:20:25 +0000
Cc:
Delivery-date: Sat, 14 Feb 2009 12:21:28 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <10adc262-1374-440e-a5d9-46a1343eb4e4@default>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcmOvS7omHcKabAbTluaO4HTK+h/1gAJHxvN
Thread-topic: [Xen-devel] [RFC] design/API for plugging tmem into existing xen physical memory management code
User-agent: Microsoft-Entourage/12.15.0.081119
On 14/02/2009 15:58, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

> Are all of these allocated at domain startup only?  Or
> are any (shadow pages perhaps?) allocated at relatively
> random times?  If random, what are the consequences
> if the allocation fails?   Isn't it quite possible
> for a random order>0 allocation to fail today due
> to "natural causes"?  E.g. because the currently running
> domains by coincidence (or by ballooning) have used
> up all available memory?  Have we just been "lucky"
> to date, because fragmentation is so bad and ballooning
> is so rarely used, that we haven't seen failures
> of order>0 allocations? (Or maybe have seen them but
> didn't know it because the observable symptoms are
> a failed domain creation or a failed migration?)

I think the per-domain shadow pool is pre-reserved, so it should be okay.
Lack of memory simply causes domain creation failure. Any extra memory that
the shadow code tried to allocate beyond that would just be gravy, I'm
pretty sure.

> Perhaps Jan's idea of using xenheap as an "emergency
> fund" for free pages is really a good idea?

It's a can of worms. How big to make the pool? Who should be allowed to
allocate from it and when? What if the emergency pool becomes exhausted?

> That's a reasonable idea... maybe with a "scrub_me"
> flag set in the struct page_info by tmem and checked by the
> existing alloc_heap_pages (and ignored if a memflags flag
> is passed to alloc_xxxheap_pages() set to "ignore_scrub_me")?
> There'd also need to be a free_and_scrub_domheap_pages().
>
> If you prefer that approach, I'll give it a go.  But still
> some (most?) of the time, there will be no free pages so
> alloc_heap_pages will still need to have a hook to tmem
> for that case.

I'm not super fussed; it's just an idea to consider. Could doing it this new
way make it harder to scrub pages asynchronously before they're needed?
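
For concreteness, the deferred-scrub scheme described above might be sketched as follows. The types and names here are simplified stand-ins (not the real struct page_info, alloc_heap_pages(), or Xen's MEMF_* flags); the point is only the shape of the proposal: the free path tags the page instead of zeroing it, and the allocator scrubs lazily unless the caller opts out:

```c
#include <stdbool.h>
#include <string.h>

#define PAGE_SIZE 4096
#define MEMF_ignore_scrub_me  (1u << 0)   /* hypothetical memflag */

/* Stand-in for struct page_info; the real one carries no data array. */
struct page_info {
    bool scrub_me;              /* set when freed dirty, cleared on scrub */
    unsigned char data[PAGE_SIZE];
};

/* free_and_scrub_domheap_pages() analogue: defer the scrub by tagging
 * the page rather than zeroing it now. */
void free_page_deferred(struct page_info *pg)
{
    pg->scrub_me = true;        /* contents still dirty at this point */
}

/* alloc_heap_pages() analogue: scrub on demand, unless the caller has
 * passed the flag saying it doesn't care about stale contents. */
struct page_info *alloc_page(struct page_info *pg, unsigned int memflags)
{
    if (pg->scrub_me && !(memflags & MEMF_ignore_scrub_me)) {
        memset(pg->data, 0, PAGE_SIZE);   /* lazy scrub at alloc time */
        pg->scrub_me = false;
    }
    return pg;
}
```

Keir's concern maps directly onto this sketch: because the memset happens inside alloc_page(), the scrub cost lands on the allocation path instead of being done asynchronously in idle time beforehand.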

> I *think* these calls are just in python code (domain creation
> and ballooning) and, if so, will just go through the existing
> tmem hypercall.

Well, probably okay.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel