xen-devel

RE: [Xen-devel] [RFC][PATCH] 0/9 Populate-on-demand memory

To: 'Tim Deegan' <Tim.Deegan@xxxxxxxxxx>
Subject: RE: [Xen-devel] [RFC][PATCH] 0/9 Populate-on-demand memory
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Mon, 5 Jan 2009 14:08:49 +0800
Accept-language: en-US
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 04 Jan 2009 22:10:04 -0800
In-reply-to: <20090102100330.GA12729@xxxxxxxxxxxxxxxxxxxxx>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
References: <de76405a0812240642i42f9c1f2ud7ca7d9d1bf4e400@xxxxxxxxxxxxxx> <30c85335-4729-4ae4-bb24-0c9dc2abe3cf@default> <de76405a0812240746h4fca29fbg97010f76e7c14ba9@xxxxxxxxxxxxxx> <20081230092637.GB7747@xxxxxxxxxxxxxxxxxxxxx> <0A882F4D99BBF6449D58E61AAFD7EDD603BB4A2B@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20090102100330.GA12729@xxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-topic: [Xen-devel] [RFC][PATCH] 0/9 Populate-on-demand memory
>From: Tim Deegan [mailto:Tim.Deegan@xxxxxxxxxx] 
>Sent: Friday, January 02, 2009 6:04 PM
>Hi, 
>
>At 09:40 +0800 on 31 Dec (1230716432), Tian, Kevin wrote:
>> >From: Tim Deegan [mailto:Tim.Deegan@xxxxxxxxxx] 
>> >At 15:46 +0000 on 24 Dec (1230133560), George Dunlap wrote:
>> >> At any rate, I suppose it might not be a bad idea to *try* to allocate
>> >> more memory in an emergency.  I'll add that to the list of
>> >> improvements.
>> >
>> >Please don't do this.  It's not OK for a domain to start using more
>> >memory without the say-so of the tool stack.  Since this emergency
>> >condition means something has gone wrong (balloon driver failed to
>> >start) then you're probably just postponing the inevitable, and in the
>> >meantime you might cause problems for domains that *aren't* 
>> >misbehaving.
>> >
>> 
>> Then a user-controlled option would fit here, indicating whether a
>> given domain is important; emergency expansion could then be allowed
>> for such domains when a mandatory kill is not acceptable.
>
>What if you're booting two important domains, one of which misbehaves
>and uses extra memory, causing the second boot to fail?  They were both
>important, and you've just chosen the buggy one. :)
>
>Anyway, the only way to guarantee that a domain will boot even if it
>fails to launch its balloon driver is to make sure there is enough
>memory around for it to populate its entire p2m -- in which case you
>might as well just allocate it all that memory in the first place and
>avoid the extra risk of a bug in the pod code nobbling this important
>domain.
>
>The marginal benefit of allowing it to break the rules in the case
>where things go "slightly wrong" (i.e. it overruns its allocation but
>somehow recovers before using all available memory) seems so small to
>me that it's not even worth the extra lines of code in Xen and xend.
>Especially since probably either nobody would turn it on, or everyone
>would turn it on for every domain.
>

OK, a sound argument.

Thanks,
Kevin
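
To make the "emergency expansion" idea discussed above concrete, here is a
minimal C sketch. It is not the actual Xen populate-on-demand code; every
name in it (fake_domain, pod_handle_cache_empty, host_free_pages, the
emergency_expand flag) is invented for illustration. It only models the
policy being argued about, and the scenario in main() is Tim's objection:
two "important" guests, where the buggy one expands first and starves the
well-behaved one.

/*
 * Illustrative sketch only -- not the actual Xen populate-on-demand code.
 * All names below are invented.  It models the "emergency expansion"
 * idea: when a guest's PoD cache runs dry before the balloon driver has
 * brought it down to target, either give up or, if a per-domain
 * "important" flag is set, try to take more host memory.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_domain {
    const char *name;
    long pod_cache_pages;   /* pages left in the PoD cache         */
    bool emergency_expand;  /* the proposed per-domain opt-in flag */
};

static long host_free_pages = 64;   /* pretend free host memory */

/* Called when the guest faults on a PoD entry and the cache is empty. */
static bool pod_handle_cache_empty(struct fake_domain *d, long want)
{
    if (!d->emergency_expand) {
        printf("%s: PoD cache empty, no emergency expansion -> give up\n",
               d->name);
        return false;
    }
    if (host_free_pages < want) {
        printf("%s: emergency expansion denied, host has only %ld pages\n",
               d->name, host_free_pages);
        return false;
    }
    /* This is the step Tim objects to: the domain grows beyond what the
     * toolstack allocated to it, at the expense of every other guest.  */
    host_free_pages -= want;
    d->pod_cache_pages += want;
    printf("%s: emergency expansion took %ld pages (host now %ld free)\n",
           d->name, want, host_free_pages);
    return true;
}

int main(void)
{
    struct fake_domain buggy = { "vm-buggy-but-important", 0, true };
    struct fake_domain good  = { "vm-well-behaved",        0, true };

    pod_handle_cache_empty(&buggy, 48); /* succeeds, eats most host memory */
    pod_handle_cache_empty(&good,  32); /* fails: the buggy guest got there first */
    return 0;
}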
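
Tim's guarantee argument can also be put into numbers. The second sketch
below uses made-up figures rather than anything from xend or the
hypervisor; it just works through why guaranteeing boot without a working
balloon driver commits the host to maxmem from the start.

/*
 * Back-of-the-envelope version of the guarantee argument, with made-up
 * numbers.  A PoD guest boots with its p2m sized for maxmem but only
 * "target" pages allocated; the balloon driver must hand back the
 * difference before the cache runs out.  To guarantee boot even if the
 * balloon driver never starts, the host has to be able to populate the
 * whole p2m, i.e. keep the difference in reserve.
 */
#include <stdio.h>

int main(void)
{
    long maxmem_mb = 1024;  /* p2m size the guest sees      */
    long target_mb = 512;   /* what the toolstack allocated */

    long allocated_up_front_mb    = target_mb;
    long balloon_must_free_mb     = maxmem_mb - target_mb;
    long reserve_for_guarantee_mb = balloon_must_free_mb;

    printf("allocated up front:           %4ld MiB\n", allocated_up_front_mb);
    printf("balloon driver must free:     %4ld MiB\n", balloon_must_free_mb);
    printf("host reserve for a guarantee: %4ld MiB\n", reserve_for_guarantee_mb);
    printf("total committed either way:   %4ld MiB (== maxmem)\n",
           allocated_up_front_mb + reserve_for_guarantee_mb);
    return 0;
}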
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
