[Xen-devel] Re: Question about x86/mm/gup.c's use of disabled interrupts

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: [Xen-devel] Re: Question about x86/mm/gup.c's use of disabled interrupts
From: Avi Kivity <avi@xxxxxxxxxx>
Date: Thu, 19 Mar 2009 11:46:27 +0200
Cc: Nick Piggin <nickpiggin@xxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Linux Memory Management List <linux-mm@xxxxxxxxx>, Ingo Molnar <mingo@xxxxxxx>
Delivery-date: Thu, 19 Mar 2009 02:47:29 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49C18487.1020703@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <49C148AF.5050601@xxxxxxxx> <49C16411.2040705@xxxxxxxxxx> <49C1665A.4080707@xxxxxxxx> <49C16A48.4090303@xxxxxxxxxx> <49C17230.20109@xxxxxxxx> <49C17880.7080109@xxxxxxxxxx> <49C17BD8.6050609@xxxxxxxx> <49C17E22.9040807@xxxxxxxxxx> <49C18487.1020703@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (X11/20090105)
Jeremy Fitzhardinge wrote:

>>> Well, no, not deferring. Making xen_flush_tlb_others() spin waiting for "doing_gup" to clear on the target cpu. Or add an explicit notion of a "pte update barrier" rather than implicitly relying on the tlb IPI (which is extremely convenient when available...).

>> Pick up a percpu flag from all cpus and spin on each?  Nasty.

> Yeah, not great. Each of those flag fetches is likely to be cold, so a bunch of cache misses. The only mitigating factor is that cross-cpu tlb flushes are expected to be expensive, but some workloads are apparently very sensitive to extra latency in that path.

Right, and they'll do a bunch more cache misses, so in comparison it isn't too bad.
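
Just so we're talking about the same thing, the straightforward version would be something like this (completely untested sketch, all names invented):

#include <linux/percpu.h>
#include <linux/cpumask.h>

/* untested sketch -- per-cpu "in gup_fast" flag, invented names */
static DEFINE_PER_CPU(int, doing_gup);

static inline void gup_fast_begin(void)        /* irqs already disabled */
{
        __get_cpu_var(doing_gup) = 1;
        smp_mb();       /* flag must be visible before we read any ptes */
}

static inline void gup_fast_end(void)
{
        smp_mb();       /* pte reads must complete before clearing the flag */
        __get_cpu_var(doing_gup) = 0;
}

/* flush side: wait for every target cpu to leave gup_fast */
static void wait_for_gup_flags(const struct cpumask *mask)
{
        int cpu;

        for_each_cpu(cpu, mask)
                while (per_cpu(doing_gup, cpu))         /* likely a cold miss */
                        cpu_relax();
}

The begin/end helpers would just wrap the existing irq-disabled section in gup_fast(), so the generic code change stays small.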

> And the hypercall could result in no Xen-level IPIs at all, so it could be very quick by comparison to an IPI-based Linux implementation, in which case the flag polling would be particularly harsh.

Maybe we could bring these optimizations into Linux as well. The only thing Xen knows that Linux doesn't is if a vcpu is not scheduled; all other information is shared.


>> Also, the straightforward implementation of "poll until all target cpus' flags are clear" may never make progress, so you'd have to "scan flags, remove busy cpus from set, repeat until all cpus done".

> All annoying because this race is pretty unlikely, and it seems a shame to slow down all tlb flushes to deal with it. Some kind of global "doing gup_fast" counter would let flush_tlb_others bypass the check, at the cost of putting a couple of atomic ops around the outside of gup_fast.

The nice thing about local_irq_disable() is that it scales so well.
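
(For comparison, the global-counter bypass you describe would be roughly the sketch below -- untested, invented names, reusing the per-cpu doing_gup flag from the earlier sketch. The shared counter is exactly the kind of cross-cpu cache-line traffic that local_irq_disable() avoids.)

#include <asm/atomic.h>

/* untested sketch: global "anyone in gup_fast?" counter, invented name */
static atomic_t gup_fast_count = ATOMIC_INIT(0);

/*
 * gup_fast_begin()/gup_fast_end() from the previous sketch would also
 * atomic_inc()/atomic_dec() gup_fast_count -- that's the pair of atomic
 * ops (and the bouncing cache line) being paid for around gup_fast.
 */

/* flush side: skip the per-cpu scan entirely in the common case */
static void wait_for_gup_flags(const struct cpumask *mask)
{
        int cpu;

        if (!atomic_read(&gup_fast_count))
                return;                 /* nobody anywhere is in gup_fast */

        for_each_cpu(cpu, mask)
                while (per_cpu(doing_gup, cpu))
                        cpu_relax();
}

In the common case the flush path then does a single read of a (probably shared-clean) cache line instead of one per target cpu.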


>> You could use the irq enabled flag; it's available and what native spins on (but also means I'll need to add one if I implement this).

> Yes, but then we'd end up spuriously polling on cpus which happened to disable interrupts for any reason. And if the vcpu is not running then we could end up polling for a long time. (Same applies for things in gup_fast, but I'm assuming that's a lot less common than disabling interrupts in general).

Right.
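
(For completeness, the irq-flag variant on the Xen side would be roughly the sketch below -- untested, helper name made up; vcpu_info is the per-vcpu structure Xen already shares with the guest, and evtchn_upcall_mask is its virtual interrupt-disable flag. The comment spells out the two problems above.)

#include <linux/percpu.h>
#include <xen/interface/xen.h>

/* per-cpu pointer to the shared vcpu_info, as set up by the PV setup code */
DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);

/*
 * Untested sketch: wait until a remote vcpu has virtual interrupts
 * enabled again.  evtchn_upcall_mask != 0 means "interrupts disabled",
 * so this spins for *any* irq-disabled region, not just gup_fast, and
 * can spin for a long time if the target vcpu isn't scheduled.
 */
static void xen_wait_for_irq_enable(int cpu)
{
        const struct vcpu_info *v = per_cpu(xen_vcpu, cpu);

        while (v->evtchn_upcall_mask)
                cpu_relax();
}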

--
error compiling committee.c: too many arguments to function


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
