This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Re: One (possible) x86 get_user_pages bug

To: Kaushik Barde <kbarde@xxxxxxxxxx>
Subject: [Xen-devel] Re: One (possible) x86 get_user_pages bug
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Mon, 31 Jan 2011 14:10:11 -0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, 'Kenneth Lee' <liguozhu@xxxxxxxxxx>, 'Peter Zijlstra' <a.p.zijlstra@xxxxxxxxx>, 'Marcelo Tosatti' <mtosatti@xxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, 'Jan Beulich' <JBeulich@xxxxxxxxxx>, wangzhenguo@xxxxxxxxxx, 'Xiaowei Yang' <xiaowei.yang@xxxxxxxxxx>, 'linqaingmin' <linqiangmin@xxxxxxxxxx>, fanhenglong@xxxxxxxxxx, 'Avi Kivity' <avi@xxxxxxxxxx>, 'Wu Fengguang' <fengguang.wu@xxxxxxxxx>, 'Nick Piggin' <npiggin@xxxxxxxxx>
Delivery-date: Mon, 31 Jan 2011 14:13:14 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <003301cbc182$da3affc0$8eb0ff40$@com>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4D416D9A.9010603@xxxxxxxxxx> <4D419416020000780002ECB7@xxxxxxxxxxxxxxxxxx> <4D41B90D.5000305@xxxxxxxx> <4D456139.4090508@xxxxxxxxxx> <001801cbc0cc$00d98d70$028ca850$@com> <4D46F9AE.80606@xxxxxxxx> <003301cbc182$da3affc0$8eb0ff40$@com>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20101209 Fedora/3.1.7-0.35.b3pre.fc14 Lightning/1.0b3pre Thunderbird/3.1.7
On 01/31/2011 12:10 PM, Kaushik Barde wrote:
> << I'm not sure I follow you here.  The issue with TLB flush IPIs is that
> the hypervisor doesn't know the purpose of the IPI and ends up
> (potentially) waking up a sleeping VCPU just to flush its tlb - but
> since it was sleeping there were no stale TLB entries to flush.>>
> That's what I was trying to understand: what is "sleep" here? Is it ACPI sleep
> or some internal scheduling state? If vCPUs are asynchronous to pCPUs in
> terms of ACPI sleep state, then they need to be synced up. That's where the
> entire ACPI model needs to be considered, and that's where KVM may not see
> this issue. Maybe I am missing something here.

No, nothing to do with ACPI.  Multiple virtual CPUs (VCPUs) can be
multiplexed onto a single physical CPU (PCPU), in much the same way as
tasks are scheduled onto CPUs (identically, in KVM's case).  If a VCPU
is not currently running - either because it is simply descheduled, or
because it is blocked (what I slightly misleadingly called "sleeping"
above) in a hypercall, then it is not currently using any physical CPU
resources, including the TLBs.  In that case, there's no need to flush
that VCPU's TLB entries, because there are none.

> << A "few hundred uSecs" is really very slow - that's nearly a
> millisecond.  It's worth spending some effort to avoid those kinds of
> delays.>>
> Actually, I just checked: IPIs are usually 1000-1500 cycles long (comparable
> to a VMEXIT). My point is that the ideal solution is one where virtual
> platform behavior is closer to bare metal for interrupts, memory, CPU state,
> etc. How to do it? Well, that's what needs to be figured out :-)

The interesting number is not the raw cost of an IPI, but the overall
cost of the remote TLB flush.

