[Xen-devel] Re: Question about x86/mm/gup.c's use of disabled interrupts
Jeremy Fitzhardinge wrote:
Avi Kivity wrote:
And the hypercall could result in no Xen-level IPIs at all, so it
could be very quick by comparison to an IPI-based Linux
implementation, in which case the flag polling would be particularly
harsh.
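(For reference, the guest side of that pv flush boils down to roughly the
following; this is from memory of the interface, with the multicall
batching and the vcpu-vs-pcpu numbering glossed over, so treat the details
as approximate:)

	static void flush_others_sketch(struct cpumask *mask)
	{
		struct mmuext_op op = { .cmd = MMUEXT_TLB_FLUSH_MULTI };

		/* one multicast flush hypercall replaces N flush IPIs */
		op.arg2.vcpumask = (void *)cpumask_bits(mask);
		HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF);
	}

Xen can flush the vcpus that are currently running right away and deal
with the descheduled ones however it likes, which is where the potential
win over IPIs comes from.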
Maybe we could bring these optimizations into Linux as well. The
only thing Xen knows that Linux doesn't is if a vcpu is not
scheduled; all other information is shared.
I don't think there's a guarantee that just because a vcpu isn't
running now, it won't need a tlb flush. If a pcpu runs vcpu 1 ->
idle -> vcpu 1, then there's no need for it to do a tlb flush, but the
hypercall can force a flush when it reschedules vcpu 1 (if the
tlb hasn't already been flushed by some other means).
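Concretely, the deferral amounts to something like this on the hypervisor
side (invented structures and helper names, not actual Xen or KVM code,
and locking is omitted):

	struct vcpu_sketch {
		int	running;	/* currently on a physical cpu? */
		int	flush_pending;	/* owes a tlb flush before running again */
	};

	/* guest asked for a remote vcpu's tlb to be flushed */
	static void request_flush(struct vcpu_sketch *v)
	{
		if (v->running)
			send_flush_ipi(v);	/* hypothetical: interrupt its pcpu */
		else
			v->flush_pending = 1;	/* no IPI: defer to reschedule */
	}

	/* scheduler is about to put the vcpu back on a physical cpu */
	static void schedule_in(struct vcpu_sketch *v)
	{
		if (v->flush_pending) {
			flush_local_tlb();	/* hypothetical flush primitive */
			v->flush_pending = 0;
		}
		v->running = 1;
	}

If something else (a cr3 load, say) has already flushed the tlb in the
meantime, flush_pending can be cleared there instead, which is the "by
some other means" case above.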
That's what I assumed you meant. Also, if a vcpu has a different cr3
loaded, the flush can be elided. Looks like Linux does this
(s/vcpu/process/).
(I'm not sure to what extent Xen implements this now, but I wouldn't
want to over-constrain it.)
Well, kvm does this.
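The check is roughly the following, done before sending the IPIs (sketch
only; loaded_mm_on() and send_flush_ipi_to() are made-up helpers standing
in for the real cpumask bookkeeping):

	static void flush_tlb_others_sketch(const struct cpumask *cpus,
					    struct mm_struct *mm)
	{
		int cpu;

		for_each_cpu(cpu, cpus) {
			/*
			 * A cpu with a different cr3 loaded can't be using
			 * stale user entries for this mm; it picks up a clean
			 * tlb when it next switches back to it, so skip the
			 * IPI.
			 */
			if (loaded_mm_on(cpu) != mm)
				continue;
			send_flush_ipi_to(cpu);
		}
	}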
The nice thing about local_irq_disable() is that it scales so well.
Right. But it effectively puts the burden on the tlb-flusher to check
the state (implicitly, by trying to send an interrupt). Putting an
explicit poll in gets the same effect, but it's pure overhead just to
deal with the gup race.
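For comparison, the two schemes as sketches (walk_page_tables() stands in
for the real lockless walk, and the per-cpu flag version is the
hypothetical explicit poll, not anything in the tree):

	/* current scheme: the walker just disables interrupts */
	static int gup_fast_irq_sketch(unsigned long start, int nr, struct page **pages)
	{
		unsigned long flags;
		int ret;

		local_irq_save(flags);
		/* a racing flusher is stuck in its IPI until we re-enable */
		ret = walk_page_tables(start, nr, pages);
		local_irq_restore(flags);
		return ret;
	}

	/* flag-polling alternative: every flusher pays to check the flag */
	static DEFINE_PER_CPU(int, in_gup);

	static int gup_fast_flag_sketch(unsigned long start, int nr, struct page **pages)
	{
		int ret;

		preempt_disable();
		this_cpu_write(in_gup, 1);
		smp_mb();			/* order flag set vs. the walk */
		ret = walk_page_tables(start, nr, pages);
		smp_mb();
		this_cpu_write(in_gup, 0);
		preempt_enable();
		return ret;
	}

	/* what a hypercall-based flusher would have to do before freeing */
	static void wait_for_gup_walkers(void)
	{
		int cpu;

		for_each_online_cpu(cpu)
			while (per_cpu(in_gup, cpu))
				cpu_relax();	/* the pure-overhead poll */
	}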
I guess it hopes the flushes are much rarer. Certainly for threaded
databases doing O_DIRECT stuff, I'd expect lots of gup_fast()s and no tlb flushes.
--
error compiling committee.c: too many arguments to function
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel