[Xen-ia64-devel] RE: Xen/ia64 - global or per VP VHPT

To: "Munoz, Alberto J" <alberto.j.munoz@xxxxxxxxx>, "Yang, Fred" <fred.yang@xxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Subject: [Xen-ia64-devel] RE: Xen/ia64 - global or per VP VHPT
From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
Date: Sat, 30 Apr 2005 20:09:19 -0700
Cc: ipf-xen <ipf-xen@xxxxxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sun, 01 May 2005 03:08:53 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVKfesR741jQGkzQvWmdNAbNvskDgAAPPdQAAnH97AAJJJm8AAGJlmgACr730AABzFgcAAYTImwABTis7AAAN6T8AAIbpPQAAEKV1AAP8dSsA==
Thread-topic: Xen/ia64 - global or per VP VHPT
> > In my opinion, performance when emulating physical mode is
> > a moot point.  
> 
> Linux IPF TLB miss handlers turn off PSR.dt. This is very performance
> sensitive.

Um, true, but that's largely irrelevant to the discussion of
VHPT capability/performance, isn't it?
 
> The way I see you applying this argument here is a bit different,
> though: there are things that Linux does today that will cause
> trouble with this particular design choice, but all I have to do is
> to make sure these troublesome things get designed out of the
> paravirtualized OS.

Yes, that's basically what I am saying.  I understand why a
VTi implementation needs to handle every possible situation
because silicon rolls are very expensive.  It's not nearly
as important for a paravirtualized guest.  For example, VMware
didn't support Linux 2.6 until their "next release" (I don't
remember what the release number was).
 
> In any case, I think it is critical to define exactly what an IPF
> paravirtualized guest is (maybe this has already been done and I
> missed it) before making assumptions as to what the guest will and
> will not do (especially when those things are done by native guests
> today). I don't think it is quite the same as an x86 XenoLinux, as a
> number of the hypercalls are very specific to addressing x86
> virtualization holes, which do not have equivalents in IPF.

There is a paravirtualized design and XenLinux implementation
available. (See an earlier posting.) It's still a work in
progress but it's proceeding nicely.

> I know that there have been attempts at paravirtualizing (actually
> more like dynamically patching) IPF Linux before (e.g., vBlades, you
> may be familiar with it :-), but I am not sure if the Xen project for
> IPF has decided exactly what an IPF paravirtualized XenoLinux will
> look like. I am also not sure if it has been decided that no native
> IPF guests (no binary patching) will be supported.

An entirely paravirtualized guest (no patching) is certainly
feasible.  I could have it running in a couple of weeks' time,
but I haven't considered it high on the priority list.

Another interesting case (I think suggested by Arun) is a
"semi-paravirtualized" system where some paravirtualization
is done but priv-sensitive instructions are handled by
hardware (VT) rather than binary patching.  Or perhaps that's
what you meant?

In any case, there are a lot of interesting possibilities here
and, although there are many opinions about which is best,
I think we should preserve the option of trying/implementing
as many as possible.  I'm not "black and white"... I'm more
of an RGB kinda guy :-)
 
> Let's define "big" in an environment where there are multiple 
> cores per die...

I'm not sure what your point is.  Yes, SMP systems are becoming
more common, but that doesn't mean that every system is
going to be running Oracle or data mining.  In other words,
it may be better to model a "big" system as an independent
collection of small systems (e.g. a utility data center).

> > E.g., assume an administrator automatically configures all domains
> > with a nominal 4GB but ability to dynamically grow up to 64GB.  The
> > per-guest VHPT would need to pre-allocate a shadow VHPT for the
> > largest of these (say 1% of 64GB) even if each of the domains never
> > grew beyond the 4GB, right?  (Either that or some kind of VHPT
> > resizing might be required whenever memory is "hot-plugged"?)
> 
> I am not sure I understand your example. As I said in my previous
> posting, experience has shown that the optimal size of the VHPT (for
> performance) is dependent on the number of physical pages it supports
> (not how many domains, but how many total pages those domains will be
> using). In other words, the problem of having a VHPT support more
> memory is independent of whether it represents one domain or multiple
> domains. It depends on how many total memory pages are being
> supported.

OK, let me try again.  Let's assume a system has 64GB and (by whatever
means) we determine that a 1GB VHPT is the ideal size for a 64GB
system.  Now let's assume an environment where the "load" (as measured
by the number of active guests competing for a processor) is widely
variable... say a national lab where one or two hardy all-nighters
run their domains during the night but 16 or more run during the day.
Assume also that all those domains are running apps that are heavily
memory-intensive, that is, they will use whatever memory is made
available but can operate as necessary with less memory.  So
when only one domain is running, it balloons up to a full 64GB,
but when many are running, they chug along with 4GB or less each.

With a global VHPT, 1GB is allocated once at Xen boot time.

How big a VHPT do you allocate for each of the 16 domains?  Surely
not 1GB each?  Or are you ballooning the VHPT size along with memory
size?
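
To make the arithmetic concrete, here is a rough back-of-the-envelope
sketch (plain C, purely illustrative: the 1/64 sizing ratio and the
domain counts below are just the hypothetical numbers from this
example, not anything measured):

/*
 * Rough VHPT sizing arithmetic for the scenario above.
 * ASSUMPTION: a VHPT of roughly 1/64 of the memory it covers
 * (1GB per 64GB), with 16 daytime domains that are nominally 4GB
 * each but can balloon to 64GB.
 */
#include <stdio.h>

#define GB (1ULL << 30)

int main(void)
{
    unsigned long long ratio       = 64;       /* VHPT ~= covered memory / 64 */
    unsigned long long phys_mem    = 64 * GB;  /* machine memory              */
    unsigned long long ndomains    = 16;       /* daytime load                */
    unsigned long long dom_max     = 64 * GB;  /* each may balloon to 64GB    */
    unsigned long long dom_nominal = 4 * GB;   /* typical daytime share       */

    unsigned long long global_vhpt   = phys_mem / ratio;                 /*  1GB */
    unsigned long long per_dom_max   = ndomains * (dom_max / ratio);     /* 16GB */
    unsigned long long per_dom_small = ndomains * (dom_nominal / ratio); /*  1GB */

    printf("global VHPT:                         %llu MB\n", global_vhpt >> 20);
    printf("16 per-domain VHPTs, sized for 64GB: %llu MB\n", per_dom_max >> 20);
    printf("16 per-domain VHPTs, sized for 4GB:  %llu MB\n", per_dom_small >> 20);
    return 0;
}

If each per-domain VHPT is pre-sized for the 64GB maximum, the 16
daytime domains together spend 16GB on VHPTs; sized for the 4GB
nominal case they need only 1GB total, which is why the question of
resizing the VHPT along with memory matters.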

On a related note, I think you said something about needing to
guarantee forward progress.  In your implementation, is the VHPT
required for this?  If so, what happens when a domain migrates
at the exact point where the VHPT is needed to guarantee forward
progress?  Or do you plan on moving the VHPT as part of migration?
 
> I see it a bit more black and white than you do.

Black and white invariably implies a certain set of assumptions.
I'm not questioning your position given your set of assumptions;
I'm questioning your assumptions -- as those assumptions may
make good sense in some situations (e.g. VT has to implement
all possible cases) but less so in others (e.g. paravirtualized).

Dan

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel