This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-ia64-devel] Re: [Xen-devel] IPF/Xen VTI domain testing report for X

To: "Xu, Anthony" <anthony.xu@xxxxxxxxx>, "You, Yongkang" <yongkang.you@xxxxxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-ia64-devel] Re: [Xen-devel] IPF/Xen VTI domain testing report for Xen 3.0.3 RC1
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Fri, 29 Sep 2006 14:44:05 +0100
Delivery-date: Fri, 29 Sep 2006 06:43:34 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <51CFAB8CB6883745AE7B93B3E084EBE207DC5B@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcbjyV9RqKRYxfETTPut7tF0OQB28wAALAwQAADRN3Q=
Thread-topic: [Xen-devel] IPF/Xen VTI domain testing report for Xen 3.0.3 RC1
User-agent: Microsoft-Entourage/
On 29/9/06 14:28, "Xu, Anthony" <anthony.xu@xxxxxxxxx> wrote:

>> 5. LTP testing might run very slowly in an SMP VTI domain with the credit
>> scheduler. If the VTI and Xen0 vcpus are pinned, this bug does not appear.
> Hi Keir,
> With the credit scheduler, two vcpus of the same domain may be scheduled on
> the same physical CPU. For instance, suppose vcpu0 and vcpu1 are running on
> the same CPU: vcpu0 takes a spinlock in the guest, its time slice expires,
> and it is scheduled out before it can release the lock. vcpu1 is then
> scheduled in and tries to take the same spinlock. Since the lock is still
> held by vcpu0, vcpu1 busy-waits until its own time slice expires, so the
> domain runs very slowly.
> Of course, even when vcpu0 and vcpu1 run on different CPUs a similar
> situation can arise, but the impact is smaller.
> What do you think about this?

IBM did some tests at one point which concluded that we'd get little or no
benefit from paravirtualising spinlocks to yield-on-spin. This is mainly
because spinlock critical regions are usually very small. The chances of
preempting someone while they are in a spinlock critical region, *and*
scheduling someone else who wants to take the same lock, *and* not having
another CPU idle enough to run the lock-holding VCPU, should be very small.
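The cost/probability tradeoff in the exchange above can be sketched as a toy model. The slice and critical-section lengths below are illustrative assumptions, not figures from this thread: a lock-holder preempted mid-critical-section costs the spinning vcpu up to a full time slice, but the window for that to happen is only as wide as the critical section itself.

```python
# Toy model of lock-holder preemption on a shared physical CPU.
# Assumed parameters (not from the original mail):
SLICE_US = 30_000   # scheduler time slice, microseconds
CRIT_US = 1         # spinlock critical-section length, microseconds

def wasted_spin_us(preempted_in_critical_section: bool) -> int:
    """Microseconds vcpu1 burns spinning before vcpu0 can release the lock."""
    if not preempted_in_critical_section:
        return 0
    # vcpu0 was descheduled while holding the lock; vcpu1 spins for its
    # entire slice because the lock cannot be released until vcpu0 runs again.
    return SLICE_US

# Probability that a preemption lands inside the critical section is
# roughly CRIT_US / SLICE_US -- the "very small" chance Keir refers to.
p_hit = CRIT_US / SLICE_US

print(wasted_spin_us(True))   # worst case: a whole slice wasted
print(f"{p_hit:.6f}")         # but the window is tiny
```

The asymmetry is the whole argument: the penalty when it happens is large (a full slice of busy-waiting), but with short critical sections the probability of hitting the window, compounded with needing a contending vcpu on the same CPU and no idle CPU elsewhere, makes the expected cost small.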

 -- Keir

Xen-ia64-devel mailing list
