This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] compat_mmuext_op - continuation code is wrong

To: John Levon <levon@xxxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] compat_mmuext_op - continuation code is wrong
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Wed, 20 Feb 2008 18:16:58 +0000
Delivery-date: Wed, 20 Feb 2008 10:17:36 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20080220172115.GA7180@xxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Achz7Me0Bmot5t/gEdydswAX8io7RQ==
Thread-topic: [Xen-devel] compat_mmuext_op - continuation code is wrong
User-agent: Microsoft-Entourage/
On 20/2/08 17:21, "John Levon" <levon@xxxxxxxxxxxxxxxxx> wrote:

> But we never initialised the guest's "pdone" at any point. So we can
> copy an uninitialised stack value into 'done', add 'i' to it, then
> copy it back out. On Solaris, we assert that the success count == what
> we passed in, so this breaks Solaris 32/64.
>
> I'm running on 3.1 with a couple of fixes from unstable to make it
> work at all. The patch below works for us, but I'm not confident in
> its correctness.

Better to remove the preempt check altogether, as it's already done
(correctly) in do_mmuext_op(). That's what I'll check in.
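The bug pattern and the fix can be sketched as follows. This is a minimal standalone illustration, not Xen's actual code: the function and variable names (`buggy_wrapper`, `fixed_wrapper`, `inner_mmuext_op`, `inner_done`) are hypothetical, and guest-memory copies are modelled as plain pointer accesses. It shows why a compat wrapper that independently reads, increments, and writes back the guest's "done" count can leak an uninitialised value, whereas deferring entirely to the inner handler (which already maintains the count correctly) cannot.

```c
#include <assert.h>

/* Count maintained by the inner handler -- the single source of truth
 * for how many ops have succeeded across continuations. */
static unsigned int inner_done;

/* Inner handler (stands in for do_mmuext_op): the only place that
 * should ever update *pdone. */
static void inner_mmuext_op(unsigned int count, unsigned int *pdone)
{
    inner_done += count;      /* perform 'count' ops */
    if (pdone)
        *pdone = inner_done;  /* authoritative success count */
}

/* Buggy compat wrapper: performs its own read-modify-write of *pdone.
 * If the guest never initialised the value, garbage is read, 'i' is
 * added to it, and the sum is copied back out. */
static unsigned int buggy_wrapper(unsigned int i, unsigned int *pdone)
{
    unsigned int done = *pdone;  /* may read an uninitialised value */
    done += i;                   /* garbage + i */
    *pdone = done;
    return *pdone;
}

/* Fixed wrapper: removes the redundant accounting and lets the inner
 * handler own the count, as in the checked-in fix. */
static unsigned int fixed_wrapper(unsigned int i, unsigned int *pdone)
{
    inner_mmuext_op(i, pdone);
    return *pdone;
}
```

With a stale value such as `0xdeadbeef` standing in for uninitialised stack contents, the buggy wrapper reports `0xdeadbeef + i` instead of `i`, which is exactly the mismatch that trips the Solaris success-count assertion; the fixed wrapper reports the true count.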

 -- Keir

