xen-devel

Re: [Xen-devel] Hypervisor crash(!) on xl cpupool-numa-split

To: Andre Przywara <andre.przywara@xxxxxxx>
Subject: Re: [Xen-devel] Hypervisor crash(!) on xl cpupool-numa-split
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Mon, 31 Jan 2011 15:28:54 +0000
Cc: Keir Fraser <keir@xxxxxxx>, Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
In-reply-to: <4D46CE4F.3090003@xxxxxxx>
References: <4D41FD3A.5090506@xxxxxxx> <4D426673.7020200@xxxxxxxxxxxxxx> <4D42A35D.3050507@xxxxxxx> <4D42AC00.8050109@xxxxxxxxxxxxxx> <4D42C153.5050104@xxxxxxx> <4D465F0D.4010408@xxxxxxxxxxxxxx> <4D46CE4F.3090003@xxxxxxx>
On Mon, Jan 31, 2011 at 2:59 PM, Andre Przywara <andre.przywara@xxxxxxx> wrote:
> Right, that was also my impression.
>
> I seem to have gotten a bit further, though:
> By accident I found that the issue is fixed in c/s 22846; it now works
> without crashing. I bisected it down to my own patch, which disables the
> NODEID_MSR in Dom0. I could confirm this theory by a) applying this single
> line (clear_bit(NODEID_MSR)) to 22799 and _not_ seeing it crash, and b)
> removing this line from 22846 and seeing it crash.
>
> So my theory is that Dom0 sees different nodes on its virtual CPUs via the
> physical NodeID MSR, but this association can (and will) be changed at any
> moment by the Xen scheduler. So Dom0 will build a bogus topology based upon
> these values. As soon as all vCPUs of Dom0 are confined to one node (node 0,
> which is what the cpupool-numa-split call causes), the Xen scheduler somehow
> hiccups.
> So it seems to be a bad combination of the NodeID MSR (present on newer AMD
> platforms: sockets C32 and G34) and a NodeID-MSR-aware Dom0 (2.6.32.27).
> Since this is a hypervisor crash, I assume that the bug is still there; the
> current tip just makes it much less likely to be triggered.
>
> Hope that helps; I will dig deeper now.
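
For anyone following along: the single line Andre mentions amounts to
masking the NodeId-MSR CPUID feature bit (Fn8000_0001 ECX bit 19) from
Dom0's view, so the kernel never tries to read the physical NodeId MSR.
A rough sketch of the idea; the constant and function names below are
illustrative only, not the actual changeset:

    #include <stdint.h>

    /* Illustrative only: hide NodeId MSR support from the guest by
     * clearing CPUID Fn8000_0001 ECX[19] in the leaf we hand back. */
    #define AMD_EXT_FEATURES_LEAF  0x80000001u
    #define NODEID_MSR_ECX_BIT     19

    static void mask_nodeid_msr_feature(uint32_t leaf, uint32_t *ecx)
    {
        if ( leaf == AMD_EXT_FEATURES_LEAF )
            *ecx &= ~(1u << NODEID_MSR_ECX_BIT);
    }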

Thanks.  The crashes you're getting are in fact very strange.  They
have to do with assumptions that the credit scheduler makes as part of
its accounting process.  It would only make sense for those to be
triggered if a vcpu was moved from one pool to another pool without
the proper accounting being done.  (Specifically, each vcpu is
classified as either "active" or "inactive"; and each scheduler
instance keeps track of the total weight of all "active" vcpus.  The
BUGs you're tripping over are saying that this invariant has been
violated.)  However, I've looked at the cpupools vcpu-migrate code,
and it looks like it does everything right.  So I'm a bit mystified.
My only thought is that perhaps a cpumask somewhere isn't getting set
properly, such that a vcpu is being run on a cpu from another pool.
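
To make the invariant concrete, the accounting is conceptually along
these lines (an illustrative sketch, not the real sched_credit.c; the
structure and function names here are made up):

    /* Each vcpu is either "active" (competing for credits) or
     * "inactive"; the scheduler instance keeps a running total of the
     * weights of all active vcpus. */
    struct sched_priv {
        unsigned int total_weight;  /* sum of weights of active vcpus */
    };

    struct sched_vcpu {
        bool active;
        unsigned int weight;
    };

    static void vcpu_acct_start(struct sched_priv *prv,
                                struct sched_vcpu *sv)
    {
        sv->active = true;
        prv->total_weight += sv->weight;
    }

    static void vcpu_acct_stop(struct sched_priv *prv,
                               struct sched_vcpu *sv)
    {
        BUG_ON(prv->total_weight < sv->weight);  /* the tripped check */
        sv->active = false;
        prv->total_weight -= sv->weight;
    }

Migrating a vcpu between pools without going through the stop/start
pair on the right scheduler instance would leave total_weight wrong in
both instances, which is exactly the kind of thing those BUGs catch.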

Unfortunately I can't take a good look at this right now; hopefully
I'll be able to next week.

Andre, if you were keen, you might go through the credit code and put
in a bunch of ASSERTs: that the current pcpu is in the affinity mask of
the current vcpu, that the current vcpu is assigned to the pool of the
current pcpu, and so on.
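
Something like the following is what I have in mind (field names such
as cpu_affinity and the per-cpu cpupool pointer are from memory, so
adjust to whatever the tree actually uses):

    /* Sketch of the sanity checks meant above. */
    static void check_vcpu_placement(const struct vcpu *v)
    {
        unsigned int cpu = smp_processor_id();

        /* The pcpu we are running on must be in the vcpu's affinity
         * mask... */
        ASSERT(cpu_isset(cpu, v->cpu_affinity));

        /* ...and the vcpu's domain must belong to the pool this pcpu
         * is assigned to. */
        ASSERT(v->domain->cpupool == per_cpu(cpupool, cpu));
    }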

 -George
