WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-ia64-devel] RE: Xen/ia64 - global or per VP VHPT

To: "Munoz, Alberto J" <alberto.j.munoz@xxxxxxxxx>, "Yang, Fred" <fred.yang@xxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Subject: [Xen-ia64-devel] RE: Xen/ia64 - global or per VP VHPT
From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
Date: Sun, 1 May 2005 11:41:54 -0700
Cc: ipf-xen <ipf-xen@xxxxxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 01 May 2005 18:41:45 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVKfesR741jQGkzQvWmdNAbNvskDgAAPPdQAAnH97AAJJJm8AAGJlmgACr730AABzFgcAAYTImwABTis7AAAN6T8AAIbpPQAAEKV1AAP8dSsAAHW6aAABkvMiA=
Thread-topic: Xen/ia64 - global or per VP VHPT
> > Yes, that's basically what I am saying.  I understand why a
> > VTi implementation needs to handle every possible situation
> > because silicon rolls are very expensive.  It's not nearly
> > as important for paravirtualization.  For example, VMware didn't
> > support Linux 2.6 until their "next release" (I don't remember
> > what the release number was).
> 
> I am not sure how the VMware example is relevant here (it 
> certainly has
> nothing to do with VHPTs).

Sorry, what I meant is that when an OS needs to run on Xen/ia64,
if it uses some feature that isn't supported on Xen/ia64 today,
it just isn't supported until that feature gets added to the
next version of Xen/ia64.  I.e., it's not as necessary in Xen
design as it is in CPU design to ensure that every possible crazy
thing that a guest MIGHT do is handled properly in the first
release of Xen/ia64.  Thus iterative design is much more
acceptable for Xen than for CPU design.

Translating that into the current VHPT discussion, I'm using
it as a reason to try both.  If per-domain VHPT proves to be
much better than global VHPT, let the best code win; global
VHPT will go unused and eventually be pulled out.  If one
design is better for some workloads, and the other is better
for other workloads, we should allow both.

> Please let's talk about specifics 
> and how they
> relate to the issues of:
> 
> - Scalability (additional contention in a Global VHPT)

I see your lock contention argument.  Is the contention any
worse for 10 domains contending for a global VHPT than for an
existing 10-way SMP OS (e.g., HP-UX, not virtualized) contending
for an lVHPT?

If a global VHPT is bad for this environment, it simply
shouldn't be used there.

> - The need to minimize guest interference (one guest/domain having the
> ability to interfere with another through a shared resource)

I see your argument here too.  Some of this will be mitigated by
the hash algorithm.  And I suspect what's left is pretty
far down on the list of performance isolation and DoS issues
in Xen.
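For what it's worth, one way a VHPT hash can mitigate cross-domain interference is to fold the domain id into the index. Here is a minimal sketch; the function name, signature, and mixing constants are assumptions for illustration, not the actual Xen/ia64 hash:

```c
#include <stdint.h>

#define VHPT_NUM_ENTRIES (1u << 20)   /* hypothetical table size: 1M entries */

/* Illustrative hash for a global VHPT.  Folding the domain id into the
 * index spreads different guests' translations across the table, so one
 * guest thrashing its working set is less likely to evict another
 * guest's hot entries.  Constants are arbitrary for this sketch. */
static inline uint32_t
vhpt_hash(uint16_t domain_id, uint64_t rid, uint64_t vaddr, unsigned page_shift)
{
    uint64_t h = (vaddr >> page_shift)          /* virtual page number */
               ^ (rid << 3)                     /* region id           */
               ^ ((uint64_t)domain_id << 17);   /* domain separation   */
    h ^= h >> 33;                               /* cheap avalanche mix */
    h *= 0xff51afd7ed558ccdULL;
    h ^= h >> 33;
    return (uint32_t)(h & (VHPT_NUM_ENTRIES - 1));
}
```

With a mix like this, two domains touching the same guest virtual address tend to land in different buckets instead of fighting over one.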

> I have not seen this. Would you mind sending me a pointer to 
> this. I tend to
> follow these discussions sporadically, so I missed that one email.

http://lists.xensource.com/archives/html/xen-ia64-devel/2005-04/msg00012.html

Please note this is just a couple weeks' work (based on experience
from vBlades), so please ask questions rather than shoot
bullets at it.  It's definitely a work in progress.

 
> I don't doubt that everything you mention above is possible. 
> All I am saying
> is that it would be very useful to specify exactly what 
> paravirtualization
> is doing before making claims that certain issues will not be 
> relevant in a
> paravirtualized environment.

Fair enough.

> If your domains can grow/shrink (the ballooning case you 
> mention above) to
> use 4GB - 64 GB of memory, then in the case of a single VHPT 
> it is OK to
> just allocate X, although this is wasteful if you are not 
> using all 64 GB
> (e.g., you are running two domains each using 4GB of memory), 
> but you do not
> have a choice (other than dynamically growing/shrinking the 
> VHPT).

That's the point I was trying to make.  Wasteful is not
strong enough though... if you have 64 such domains, all
of memory is used for VHPTs.  So I think some mechanism
for growing/shrinking per-domain VHPTs needs to be part of the
design or a lot of "utility computing" flexibility is lost.
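To put rough numbers on that waste (the page size, entry size, and one-entry-per-page density below are assumptions for illustration, not figures from this discussion):

```c
#include <stdint.h>

/* Back-of-the-envelope VHPT sizing.  Assumed: 16KB pages, 32-byte
 * long-format VHPT entries, one entry per guest physical page. */
static uint64_t vhpt_bytes(uint64_t guest_mem_bytes)
{
    const uint64_t page_size  = 16 * 1024;   /* 16KB pages (assumed)        */
    const uint64_t entry_size = 32;          /* long-format entry (assumed) */
    return (guest_mem_bytes / page_size) * entry_size;
}
```

Under these assumptions, a VHPT statically sized for a 64GB guest costs 128MB; 64 such domains tie up 8GB in VHPTs alone, even if each guest actually uses only 4GB (which would need just 8MB of VHPT apiece). That gap is exactly what grow/shrink support would reclaim.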

> What I think I said is that having collision chains from the VHPT is
> critical to avoiding forward progress issues. The problem is 
> that IPF may
> need up to 3 different translations for a single instruction. 
> If you do not
> have collision chains and the translations required for a 
> single instruction
> (I-side, D-side and RSE) happen to hash to the same VHPT 
> entry, you may get
> into a situation in which the entries keep colliding with 
> each other and the
> guest makes no forward progress (it enters a state in which 
> it alternates
> the I-side, D-side and RSE faults). By the way, this is not just
> theoretical, I have seen it happen in two different 
> implementations of IPF
> virtual MMUs.

Yes, I've seen it happen even in the current Xen/ia64 implementation.
I fixed it with a single entry vITLB and a single entry vDTLB
which must be kept consistent with the VHPT.
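A sketch of what that fix can look like; the structure, names, and fields are hypothetical, not the actual Xen/ia64 code:

```c
#include <stdint.h>

/* Single-entry software I-side and D-side TLBs that survive VHPT hash
 * collisions.  Even when the I-side, D-side and RSE translations for
 * one instruction all hash to the same VHPT slot and keep evicting one
 * another, the last-used I and D translations remain available here,
 * so the guest can complete the instruction and make forward progress. */

struct vtlb_entry {
    uint64_t vpn;      /* virtual page number */
    uint64_t pte;      /* translated mapping  */
    int      valid;
};

static struct vtlb_entry vitlb, vdtlb;   /* exactly one entry each */

static void vtlb_insert(struct vtlb_entry *t, uint64_t vpn, uint64_t pte)
{
    t->vpn = vpn;
    t->pte = pte;
    t->valid = 1;
}

/* Returns 1 and fills *pte on a hit, 0 on a miss. */
static int vtlb_lookup(const struct vtlb_entry *t, uint64_t vpn, uint64_t *pte)
{
    if (t->valid && t->vpn == vpn) {
        *pte = t->pte;
        return 1;
    }
    return 0;
}

/* Any guest purge that touches the VHPT must also invalidate these
 * entries -- the "kept consistent with the VHPT" requirement. */
static void vtlb_flush(void)
{
    vitlb.valid = 0;
    vdtlb.valid = 0;
}
```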

I'm glad it is not a problem for per-domain VHPT also... I didn't
think it was, but I wanted to clarify in case I misunderstood
what you were saying.
 
> You keep on making this differentiation between full and 
> paravirtualization
> (but I don't think that is very relevant to what I am saying), please
> explain how in a paravirtualized guest the example I 
> presented above of 10
> UP VMs having to synchronize updates to the VHPT is not an issue. 

You are likely correct.  But it is a small matter of coding
to add the synchronization.  Then if performance is poor, we
tell system administrators that the per-domain VHPT
may be preferable on highly-scalable systems -- at the loss
of some flexibility in dynamic domain migration/ballooning.
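That synchronization could be as coarse or fine as measurement justifies; per-bucket locks are one obvious shape. A hypothetical sketch using C11 atomics (Xen's real spinlock primitives and VHPT layout differ):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative per-bucket locking for a shared (global) VHPT: locking
 * each hash bucket instead of the whole table keeps synchronization
 * cost low when several UP domains insert translations concurrently. */

#define VHPT_BUCKETS 1024u            /* hypothetical bucket count */

struct vhpt_bucket {
    atomic_uint lock;                 /* 0 = free, 1 = held */
    uint64_t    tag;                  /* translation tag    */
    uint64_t    pte;                  /* cached mapping     */
};

static struct vhpt_bucket vhpt[VHPT_BUCKETS];

static void bucket_lock(struct vhpt_bucket *b)
{
    while (atomic_exchange_explicit(&b->lock, 1u, memory_order_acquire))
        ;                             /* spin until the bucket is free */
}

static void bucket_unlock(struct vhpt_bucket *b)
{
    atomic_store_explicit(&b->lock, 0u, memory_order_release);
}

static void vhpt_insert(uint32_t idx, uint64_t tag, uint64_t pte)
{
    struct vhpt_bucket *b = &vhpt[idx % VHPT_BUCKETS];
    bucket_lock(b);                   /* serialize writers per bucket */
    b->tag = tag;
    b->pte = pte;
    bucket_unlock(b);
}
```

Two domains only contend when their translations hash to the same bucket, which is the measurable cost the global-vs-per-domain comparison would need to quantify.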

And if it turns out that per-domain VHPT works "better" for
ALL workloads, then I will admit I was wrong and pull the
support for global VHPT.  But until then it should be left
as an option (for non-VT domains).

Dan


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel