This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Race in vlapic init_sipi tasklet

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: [Xen-devel] Race in vlapic init_sipi tasklet
From: George Dunlap <dunlapg@xxxxxxxxx>
Date: Mon, 18 Oct 2010 18:16:33 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Tim Deegan <Tim.Deegan@xxxxxxxxxx>
I've been tracking down a bug where a multi-vcpu VM hangs in the
hvmloader on credit2, but not on credit1.  It hangs while trying to
bring up extra cpus.

It turns out that an unintended quirk in credit2 (some might call it a
bug) causes a scheduling order which exposes a race in the vlapic
init_sipi tasklet handling code.

The code as it stands right now is meant to do this:
* v0 does an APIC ICR write with APIC_DM_STARTUP, trapping to Xen.
* vlapic code checks to see that v1 is down (vlapic.c:318); finds that
it is down, and schedules the tasklet, returning X86_EMUL_RETRY
* Tasklet runs, brings up v1.
* v1 starts running.
* v0 re-executes the instruction, finds that v1 is up, and returns
X86_EMUL_OK, allowing the instruction to move forward.
* v1 does some diagnostics, and takes itself offline.
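The intended handshake above can be sketched as follows. This is a minimal toy model, not the real vlapic.c code: the struct fields and function names are assumptions, `vpf_down` stands in for the VPF_down pause flag, and the tasklet is reduced to a flag plus a deferred call.

```c
#include <stdbool.h>
#include <assert.h>   /* for the usage check below */

/* Emulation outcomes, named as in the mail rather than the real headers. */
enum emul_result { X86_EMUL_OK, X86_EMUL_RETRY };

struct vcpu { bool vpf_down; };        /* stands in for VPF_down */
struct tasklet { bool scheduled; };

/* The ICR APIC_DM_STARTUP write path: if the target is still down,
 * schedule the tasklet and ask the emulator to retry the instruction;
 * once the target is observed up, let the instruction complete. */
static enum emul_result icr_startup_write(struct tasklet *t, struct vcpu *target)
{
    if (target->vpf_down) {
        t->scheduled = true;
        return X86_EMUL_RETRY;
    }
    return X86_EMUL_OK;
}

/* The init_sipi tasklet body: bring the target vcpu online. */
static void init_sipi_tasklet_run(struct tasklet *t, struct vcpu *target)
{
    if (!t->scheduled)
        return;
    t->scheduled = false;
    target->vpf_down = false;          /* v1 comes up */
}
```

In the happy path, v0 retries *before* v1 parks itself again, so the second pass through `icr_startup_write()` sees v1 up and returns OK.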

Unfortunately, the credit2 scheduler almost always preempts v0
immediately, allowing v1 to run to completion and bring itself back
offline again, before v0 can re-try the MMIO.  It looks like this:
* v0 does APIC ICR APIC_DM_STARTUP write, trapping to Xen.
* vlapic code checks to see that v1 is down; finds that it is down,
schedules the tasklet, returns X86_EMUL_RETRY
* Tasklet runs, brings up v1
* Credit 2 pre-empts v0, allowing v1 to run
* v1 starts running
* v1 does some diagnostics, and takes itself offline.
* v0 re-executes the instruction, finds that v1 is down, and again
schedules the tasklet and returns X86_EMUL_RETRY.
* For some reason the tasklet doesn't actually bring up v1 again
(presumably because it hasn't had an APIC_DM_INIT again); so v0 is
stuck doing X86_EMUL_RETRY forever.
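The livelock can be replayed with the same toy model, extended with a one-shot `init_received` flag, which is my assumption about why the re-scheduled tasklet is a no-op (no fresh APIC_DM_INIT has arrived). None of these names are the real Xen identifiers.

```c
#include <stdbool.h>
#include <assert.h>   /* for the replay below */

enum emul_result { X86_EMUL_OK, X86_EMUL_RETRY };

struct vcpu {
    bool vpf_down;        /* stands in for VPF_down */
    bool init_received;   /* one-shot: consumed by the tasklet's first run */
};
struct tasklet { bool scheduled; };

static enum emul_result icr_startup_write(struct tasklet *t, struct vcpu *v1)
{
    if (v1->vpf_down) {
        t->scheduled = true;
        return X86_EMUL_RETRY;
    }
    return X86_EMUL_OK;
}

static void tasklet_run(struct tasklet *t, struct vcpu *v1)
{
    if (!t->scheduled)
        return;
    t->scheduled = false;
    if (v1->init_received) {
        v1->init_received = false;
        v1->vpf_down = false;          /* v1 comes up */
    }
    /* without a fresh INIT the tasklet has nothing to do: v1 stays down */
}

static void v1_diagnostics_and_park(struct vcpu *v1)
{
    v1->vpf_down = true;               /* v1 takes itself offline again */
}
```

Replaying the credit2 ordering (tasklet, then v1 up and down again, then v0's retry) shows v0 observing `vpf_down` forever.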

The problem is that VPF_down is used as the test to see if the tasklet
has finished its work; but there's no guarantee that the scheduler
will run v0 before v1 has come up and gone back down again.

I discussed this with Tim, and we agreed that we should ask you.

One option would be to simply make vlapic_schedule_sipi_init_ipi()
always return X86_EMUL_OK, but we weren't sure if that might cause
some other problems.

The "right" solution, if synchronization is needed, is to have an
explicit signal sent back that the instruction can be allowed to
complete, rather than relying on reading VPF_down, which may cause
races like this one.


Xen-devel mailing list
