xen-ia64-devel

Re: [Xen-ia64-devel] RE: Xen/ia64 - global or per VP VHPT

To: "Munoz, Alberto J" <alberto.j.munoz@xxxxxxxxx>
Subject: Re: [Xen-ia64-devel] RE: Xen/ia64 - global or per VP VHPT
From: Matt Chapman <matthewc@xxxxxxxxxxxxxxx>
Date: Sun, 1 May 2005 12:21:00 +1000
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, ipf-xen <ipf-xen@xxxxxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 01 May 2005 02:21:07 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <01EF044AAEE12F4BAAD955CB750649430372FF02@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <01EF044AAEE12F4BAAD955CB750649430372FF02@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040523i
(I'm coming in late here, so apologies if I'm missing something.)

> > No, multiple page sizes are supported, though there does have
> > to be a system-wide minimum page size (e.g. if this were defined
> > as 16KB, a 4KB-page mapping request from a guestOS would be rejected).

Of course, if necessary, smaller page sizes could be supported in
software at a performance cost, as suggested in the ASDM (2.II.5.5,
Subpaging).
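
Something along these lines is what the ASDM seems to suggest (a rough
sketch only; the names and structures are made up for illustration, not
actual Xen code): track per-subpage rights in software, insert a
hardware translation only when all subpages of the minimum hardware
page agree, and fall back to emulation otherwise:

    #define HW_PAGE_SHIFT   14      /* 16KB minimum hardware page size */
    #define SUBPAGE_SHIFT   12      /* 4KB guest page */
    #define SUBPAGES        (1 << (HW_PAGE_SHIFT - SUBPAGE_SHIFT))

    struct hw_page_meta {
        unsigned char rights[SUBPAGES];   /* access rights per subpage */
    };

    static void insert_tlb_entry(unsigned long va, int page_shift,
                                 unsigned char rights)
    {
        /* real code would issue itc.i/itc.d here */
        (void)va; (void)page_shift; (void)rights;
    }

    static void emulate_access(unsigned long va, unsigned char need)
    {
        /* slow path: decode and emulate the single faulting access */
        (void)va; (void)need;
    }

    void handle_guest_tlb_miss(struct hw_page_meta *m,
                               unsigned long vaddr, unsigned char need)
    {
        int i, uniform = 1;

        /* Insert the full 16KB translation only if every 4KB subpage
         * carries identical rights; otherwise take the slow path. */
        for (i = 1; i < SUBPAGES; i++)
            if (m->rights[i] != m->rights[0])
                uniform = 0;

        if (uniform && (m->rights[0] & need))
            insert_tlb_entry(vaddr & ~((1UL << HW_PAGE_SHIFT) - 1),
                             HW_PAGE_SHIFT, m->rights[0]);
        else
            emulate_access(vaddr, need);
    }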

> In my opinion this is a moot point because in order to provide the
> appropriate semantics for physical mode emulation (PSR.dt, PSR.it, or
> PSR.rt == 0) it is necessary to support a 4K page size as the minimum
> (unless you special case translations for physical mode emulation).

Can you explain why this is the case?  Surely the granularity of the
metaphysical->physical mapping can be arbitrary?
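
That is, a lookup table at whatever chunk size the hypervisor chooses
ought to suffice.  A sketch under that assumption (p2m and CHUNK_SHIFT
are made-up names, not Xen structures); nothing here forces the chunks
to be 4KB:

    #define CHUNK_SHIFT 16          /* e.g. 64KB metaphysical chunks */

    unsigned long metaphysical_to_machine(const unsigned long *p2m,
                                          unsigned long mpaddr)
    {
        unsigned long chunk  = mpaddr >> CHUNK_SHIFT;
        unsigned long offset = mpaddr & ((1UL << CHUNK_SHIFT) - 1);

        /* p2m[] holds the machine chunk number for each metaphysical
         * chunk; the offset within the chunk is preserved unchanged. */
        return (p2m[chunk] << CHUNK_SHIFT) | offset;
    }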

> Also in
> terms of machine memory utilization, it is better to have smaller pages (I
> know this functionality is not yet available in Xen, but I believe it will
> become important once people are done working on the basics).

Below you say "Memory footprint is really not that big a deal for these
large machines" ;)  As it is, just about everyone runs Itanium Linux
with a 16KB page size, so 16KB memory granularity is obviously not a
big deal.

Since the mappings inserted by the hypervisor are limited to this
granularity (at least, without some complicated superpage logic to
allocate and map pages sequentially), I like the idea of using a larger
granularity in order to increase TLB coverage.
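
To put rough numbers on it: with, say, 128 data TLB entries (as on
Itanium 2), 16KB mappings cover at most 2MB, whereas 256KB mappings
cover 32MB.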

> > Purging is definitely expensive but there may be ways to
> > minimize that.  That's where the research comes in.
> 
> It is not just purging. Having a global VHPT is, in general, really bad for
> scalability. Every time the hypervisor wants to modify anything in the VHPT,
> it must guarantee that no other processors are accessing that VHPT (this is
> a fairly complex thing to do in TLB miss handlers).

I think there are more than two options here?  From what I gather, you
are comparing a single global lVHPT with a per-domain lVHPT.  There are
also the options of a per-physical-CPU lVHPT and a per-domain,
per-virtual-CPU lVHPT.

When implementing the lVHPT in Linux I decided on a per-CPU VHPT for
the scalability reasons that you cite.  One drawback, as Dan says, is
that it may be difficult to find a large enough chunk of contiguous
free physical memory to bring up a new processor (or domain, in the
per-domain case).
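
The awkward part is that the long-format VHPT must be a power-of-two
sized block of contiguous memory aligned to its own size (the PTA
register requires it), so late allocation can simply fail on a
fragmented machine.  A toy sketch of the constraint, with
aligned_alloc() merely standing in for whatever allocator the
hypervisor uses:

    #include <stdlib.h>

    #define VHPT_SHIFT 24           /* e.g. a 16MB table per CPU */

    void *alloc_percpu_vhpt(void)
    {
        size_t size = (size_t)1 << VHPT_SHIFT;

        /* PTA wants the table naturally aligned, so the block must be
         * contiguous *and* aligned to its own (power-of-two) size;
         * a fragmented allocator may have no suitable block left. */
        return aligned_alloc(size, size);
    }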

> Another important thing is hashing into the VHPT. If you have a single VHPT
> for multiple guests (and those guests are the same, e.g., same version of
> Linux) then you are depending 100% on having a good RID allocator (per
> domain) otherwise the translations for different domains will start
> colliding in your hash chains and thus reducing the efficiency of your VHPT.
> The point here is that guest OSs (that care about this type of stuff) are
> designed to spread RIDs such that they minimize their own hash chain
> collisions, but they are not designed to avoid colliding with other
> guests'.
> Also, the fact that the hash algorithm is implementation specific makes this
> problem even worse.

RID allocation is certainly an issue, but I think it's an issue even
with a per-domain VHPT.  If you have a guest that uses the short VHPT,
such as Linux by default, it may not produce good RID allocation even
with just one domain.  For best performance one would need either to
modify the guest or to virtualise RIDs completely, in which case a global
or per-physical-CPU VHPT can be made to work well too.
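
By virtualising RIDs I mean something like giving each domain a
disjoint block of machine RID space and folding guest RIDs into it, so
identical guests no longer share hash chains.  A rough sketch (all
names and the folding scheme are made up for illustration):

    #define RID_BLOCK_SHIFT 18      /* 2^18 machine RIDs per domain */

    struct domain_rids {
        unsigned long rid_base;     /* start of this domain's block */
    };

    unsigned long virtualise_rid(const struct domain_rids *d,
                                 unsigned long guest_rid)
    {
        unsigned long mask = (1UL << RID_BLOCK_SHIFT) - 1;

        /* Fold the high guest RID bits back in so a guest that uses
         * only a few distinct RIDs still spreads across the block
         * (and hence across VHPT hash chains). */
        unsigned long folded = (guest_rid ^ (guest_rid >> RID_BLOCK_SHIFT))
                               & mask;

        return d->rid_base + folded;
    }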

Matt


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
