RE: [Xen-API] Squeezed + slow domain = broken dynamic memory

To: Dave Scott <Dave.Scott@xxxxxxxxxxxxx>
Subject: RE: [Xen-API] Squeezed + slow domain = broken dynamic memory
From: George Shuklin <george.shuklin@xxxxxxxxx>
Date: Fri, 14 Jan 2011 19:23:44 +0300
Cc: "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 14 Jan 2011 08:24:02 -0800
In-reply-to: <81A73678E76EA642801C8F2E4823AD2193347428B2@xxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
References: <1295020846.1973.26.camel@mabase> <81A73678E76EA642801C8F2E4823AD2193347428B2@xxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx

On Fri, 14/01/2011 at 16:06 +0000, Dave Scott wrote:
> Hi George,
> 
> > Good day.
> > 
> > I found a strange limitation in the squeezed code: if a domain's balloon
> > driver does not reply to a memory change request, the domain is marked as
> > 'uncooperative'.
> > 
> > This causes it to stop reacting to vm-memory-dynamic-set commands, and it
> > stays that way until the xe toolstack is restarted.
> 
> Hm, I thought if you used vm-memory-dynamic-set then the new memory target 
> would still be written in xenstore -- does it remain at the old value?

Yes, I thought so too. But our memory-on-demand service interacts
directly with the domain's balloon via xenstore, and suddenly it stops
working. The host has huge memory reserves (about 20-30G, more than the
sum of memory-static-max across all VMs), but memory suddenly stops
adjusting to larger values (we can lower the target and bring it back,
but we cannot go any higher). And xe vm-memory-target fails silently
too: it changes the XenAPI values for dynamic-min/dynamic-max, but does
not change the actual TotalMem value in the guest VM (or the mem_kb
value from xc.domain_getinfo()). After xe-toolstack-restart the problem
goes away (without a guest restart), so it is not related to an
'uncooperative' balloon driver. (The other workaround is to migrate the
VM to another host.)
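
For reference, here is a minimal sketch of the kind of direct interaction
I mean, assuming the standard Python bindings (xen.lowlevel.xs /
xen.lowlevel.xc) and the usual /local/domain/<domid>/memory/target node;
our real service is of course more involved:

# Illustrative only: write a new balloon target straight into xenstore
# and compare it with what the hypervisor reports. Assumes the Python
# bindings shipped with Xen and the conventional memory/target node.
import xen.lowlevel.xs
import xen.lowlevel.xc

def set_and_check_target(domid, target_kib):
    xs = xen.lowlevel.xs.xs()
    xc = xen.lowlevel.xc.xc()

    path = "/local/domain/%d/memory/target" % domid
    xs.write("", path, str(target_kib))   # ask the balloon driver to move

    # What the guest was asked to do ...
    asked = int(xs.read("", path))
    # ... versus what the hypervisor actually accounts for the domain.
    # Note: domain_getinfo returns the first domain with id >= domid,
    # so check info["domid"] if the domain might have disappeared.
    info = xc.domain_getinfo(domid, 1)[0]
    return asked, info["mem_kb"]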

This problem is very elusive and occurs rarely (about one case every 3-5
days with about 400 VMs on an 8-host pool), but the symptoms are always
the same.

After digging through the squeezed code I found the error message text,
and it shows up in squeezed.log for the 'failing' domains:

20110114T11:14:05.376Z|debug|server|0|reserve_memory_range(xapi, 359424,
359424)|xenops] domid 125 has been declared inactive
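
In case it helps anyone else hit by the same thing, a small script to
pull the affected domids out of the log (assuming squeezed.log lives at
/var/log/squeezed.log on the host; adjust the path if not):

# Scan squeezed's log for the "declared inactive" message seen above
# and report which domids have been affected.
import re

def inactive_domids(logfile="/var/log/squeezed.log"):
    domids = set()
    pattern = re.compile(r"domid (\d+) has been declared inactive")
    with open(logfile) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                domids.add(int(m.group(1)))
    return sorted(domids)

print(inactive_domids())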


Thank you very much for the reply.

---
wBR, George Shuklin


_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api
