This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Re: [PATCH] Fixed legacy issues when extends number of vcpus > 32

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
Subject: RE: [Xen-devel] Re: [PATCH] Fixed legacy issues when extends number of vcpus > 32
From: "Li, Xin" <xin.li@xxxxxxxxx>
Date: Mon, 17 Aug 2009 14:40:27 +0800
Accept-language: en-US
Acceptlanguage: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 16 Aug 2009 23:42:09 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C6AD9633.12289%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <706158FABBBA044BAD4FE898A02E4BC201C0553F3B@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C6AD9633.12289%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcoeF6u/T7x3fDZwRTGhuLTahDvILQAL2GrBAAHwiDAAAjqxHAAoiqng
Thread-topic: [Xen-devel] Re: [PATCH] Fixed legacy issues when extends number of vcpus > 32
>> Keir Fraser wrote:
>>> Let me think about these. For patch 1 I think we can perhaps do more
>>> work in the loop which matches vlapic identifiers, and thus avoid
>>> needing a "temporary cpumask" to remember matches. For patch 2 I've
>>> been intending to throw away the VMX VPID logic and share the SVM
>>> logic, as it flushes TLBs no more than the VMX logic and doesn't
>>> suffer the same problems with VPID/ASID exhaustion.
>> We have 2^16 VPIDs after removing the limit, so it should support 65535 vcpus
>> running concurrently in a system, so we don't need to consider the exhaustion
>> case from this point of view?
>Why have two sets of logic when one is superior to the other? It doesn't
>make sense. I'll take a look at your patch and apply it for now, however.

On the hardware side, the key difference is that the VMX VPID space is very
large: 2^16 VPIDs, with 0 reserved for VMX root mode. So 65535 VPIDs can be
assigned to VMX vCPUs. We use a global bitmap to manage the VMX VPID space:
freed VPIDs are reclaimed and reused later.
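The global bitmap scheme described above can be sketched roughly as follows
(a minimal illustration, not the actual Xen code; all names here are
hypothetical):

```c
/* Sketch of managing the 16-bit VMX VPID space with a global bitmap:
 * VPID 0 is reserved for VMX root mode, freed VPIDs are reclaimed and
 * handed out again later. Names are illustrative, not Xen's. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define VPID_BITS     16
#define NR_VPIDS      (1u << VPID_BITS)        /* 65536 IDs, ID 0 reserved */
#define BITS_PER_LONG (8 * sizeof(unsigned long))

static unsigned long vpid_bitmap[NR_VPIDS / BITS_PER_LONG];

static void vpid_init(void)
{
    memset(vpid_bitmap, 0, sizeof(vpid_bitmap));
    vpid_bitmap[0] |= 1ul;                     /* reserve VPID 0 */
}

/* Returns a free VPID in 1..65535, or 0 on exhaustion. */
static uint16_t vpid_alloc(void)
{
    for (unsigned i = 0; i < NR_VPIDS / BITS_PER_LONG; i++) {
        if (vpid_bitmap[i] == ~0ul)            /* word full, skip */
            continue;
        for (unsigned b = 0; b < BITS_PER_LONG; b++) {
            if (!(vpid_bitmap[i] & (1ul << b))) {
                vpid_bitmap[i] |= 1ul << b;    /* claim the bit */
                return (uint16_t)(i * BITS_PER_LONG + b);
            }
        }
    }
    return 0;                                  /* space exhausted */
}

static void vpid_free(uint16_t vpid)
{
    assert(vpid != 0);                         /* VPID 0 is never allocated */
    vpid_bitmap[vpid / BITS_PER_LONG] &= ~(1ul << (vpid % BITS_PER_LONG));
}
```

The bitmap for the full 2^16 space costs 8 KB, which is cheap once globally
but adds up if duplicated per LP, as noted below.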

If I understand correctly, Xen manages SVM ASIDs per LP, so it needs to
allocate a new ASID on the target LP after each vCPU migration. To speed up
ASID allocation after migration, Xen doesn't use a bitmap to reclaim freed
ASIDs; instead, when ASID exhaustion happens on a LP, it performs a TLB flush
and forces each vCPU on that LP to regenerate an ASID.
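The per-LP generation scheme described above might be sketched like this
(a simplified illustration under my reading of the design, not Xen's actual
code; `MAX_ASID` is kept tiny here just to make exhaustion easy to see):

```c
/* Sketch of the per-LP ASID scheme: each logical processor hands out
 * ASIDs sequentially; on exhaustion it bumps a generation counter
 * (logically a full TLB flush), so every vCPU on that LP lazily
 * regenerates its ASID. Names are illustrative, not Xen's. */
#include <assert.h>
#include <stdint.h>

#define MAX_ASID 8   /* tiny for illustration; real hardware has far more */

struct lp_asid_state {
    uint64_t generation;     /* bumped on each "TLB flush" */
    uint32_t next_asid;      /* next ASID to hand out, 1..MAX_ASID */
};

struct vcpu_asid {
    uint64_t generation;     /* generation this ASID was taken in */
    uint32_t asid;           /* 0 means "never assigned" */
};

/* Ensure the vCPU has a valid ASID on this LP; returns nonzero when the
 * LP's ASID space was exhausted and a TLB flush was needed. */
static int asid_ensure(struct lp_asid_state *lp, struct vcpu_asid *v)
{
    int flushed = 0;

    /* Already holds a valid ASID for the current generation. */
    if (v->generation == lp->generation && v->asid != 0)
        return 0;

    if (lp->next_asid > MAX_ASID) {  /* exhausted: flush, new generation */
        lp->generation++;
        lp->next_asid = 1;
        flushed = 1;
    }
    v->asid = lp->next_asid++;       /* sequential grab, no bitmap scan */
    v->generation = lp->generation;
    return flushed;
}
```

Note that allocation is O(1) with no bitmap scan or reclaim bookkeeping; the
whole cost of exhaustion is paid at once by the flush, which is why the scheme
wins when exhaustion is rare.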

I'd agree that, as the TLB is per LP, VPIDs don't need to be managed globally.
But if we manage such a big VPID space using a bitmap on each LP, it will
require extra memory and be inefficient for VPID allocation and reclaim. So
probably we can apply the current ASID allocation approach to VPID, assuming
VPID exhaustion will be much rarer.

On the other hand, I can't understand why we need to consider the overflow of


Xen-devel mailing list