xen-ia64-devel

[Xen-ia64-devel] RE: rid virtualization

To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Subject: [Xen-ia64-devel] RE: rid virtualization
From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
Date: Sat, 3 Sep 2005 13:24:43 -0700
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sat, 03 Sep 2005 20:22:25 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcWvMTOFrA1NTBYWSFqi8GJ1sqb6yAACBR0wABR/zTAAEUNDQAAUbbwgACh2a5A=
Thread-topic: rid virtualization
> > First question: Will the VHPT distribution problem still exist when
> > we are running multiple domains?
>
> I think so. For the global VHPT case, multiple domains impose
> entries from different guests.
> The only difference is that each guest's rids differ in the high bits.

Exactly my point.  Don't the high rid bits participate in
the hash (especially after mangling), so that more guests would
use more of the VHPT?
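
To illustrate what I mean, here is a toy model of a rid-keyed VHPT
hash.  This is not the real thash algorithm (which is
implementation-specific) and every name below is made up; the point is
only that if the (mangled) rid is folded in, rids differing in their
high bits land in different buckets:

  /* Toy model of a long-format VHPT hash; NOT the hardware thash.
   * If the (mangled) rid participates, guests whose rids differ
   * only in the high bits still spread across the table. */
  unsigned long toy_vhpt_hash(unsigned long rid, unsigned long vaddr,
                              unsigned long vhpt_size /* power of 2 */)
  {
      unsigned long h = (rid << 8) ^ (vaddr >> 12);
      return (h ^ (h >> 17)) & (vhpt_size - 1);
  }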
 
> > Second question: Can the problem be fixed by improving the
> > "mangling" code?  (I picked up this code from vBlades, but never
> > really did a thorough analysis of whether it provided a good
> > distribution.)
> The VTI code already tried this with different swap algorithms,
> but there was no significant difference;
> they all stay in the 20-30% range.

This seems very counter-intuitive.  What is the hardware hash algorithm?
Surely there is a way to "mangle" rid bits to match this
algorithm and use more of the VHPT?
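
For reference, my recollection of the current mangling (inherited from
vBlades) is a byte swap on the region register value that exchanges the
low and high bytes of the 24-bit rid field.  This is a sketch from
memory, not the exact source:

  #include <stdint.h>

  /* Byte-swap rid mangling, sketched from memory of the
   * vBlades-derived code; not guaranteed to match the tree.
   * The rid occupies bytes 1-3 of the region register value,
   * so swapping bytes 1 and 3 reverses its low and high bytes. */
  static uint32_t mangle_rid(uint32_t rrval)
  {
      union { unsigned char b[4]; uint32_t u; } t;
      unsigned char tmp;

      t.u = rrval;
      tmp = t.b[1];
      t.b[1] = t.b[3];
      t.b[3] = tmp;
      return t.u;
  }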

> > Third question: If we go to a new "random rid distribution" model,
> > can this be designed to use very little memory while ensuring
> > that "garbage collection" is efficient when domains migrate very
> > dynamically?  I'd be concerned if, for example, we kept a 2^24 map
> > of what domain owns what rid.
> Yes, memory consumption is a concern; there is no free lunch.
> The exact size of the g2m_rid_map will depend on the VHPT size:
> the number of entries in the VHPT and in the g2m_rid_map
> should be the same.
> Different approaches exist for the g2m_rid_map: we can choose a
> global map, a per-domain map, or a per-VP map, and rid recycling
> can be eager or lazy. For a global map, vcpu migration
> has no impact, but for a per-VP g2m_rid_map with an
> eager rid-reuse policy, vcpu migration
> needs to recycle all the rids used by that VP.
> To address your concern, a global g2m_rid_map may be the first
> choice, although our design should
> cover more complicated situations.

This all seems very complicated if it is unnecessary.  I would
like to understand first why a different "mangling" algorithm
can't be made to use more of the VHPT.  If it can, then
using a different mangling algorithm is just fixing a bug.
If it can't, then we need to understand exactly why, as
the same results may occur even with a more complicated
(and memory-consuming) design.

I'm a big fan of Occam's razor.
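
(That said, to make sure we are picturing the same structure, here is
roughly what I understand the global map to be; all names and sizes
below are hypothetical:)

  #include <stdint.h>

  /* Hypothetical global g2m_rid_map: one entry per machine rid,
   * sized to match the number of VHPT entries as described above.
   * All names here are made up for illustration. */
  #define G2M_RID_ENTRIES (1UL << 18)

  struct g2m_rid_entry {
      uint16_t domain_id;   /* owning domain, or a "free" sentinel */
      uint32_t guest_rid;   /* 24-bit guest rid mapped to this slot */
  };

  static struct g2m_rid_entry g2m_rid_map[G2M_RID_ENTRIES];

  /* With a global map and lazy recycling, vcpu migration costs
   * nothing; a per-VP map with eager reuse would have to walk and
   * free every rid owned by the migrating VP. */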

> > Also, I'm fairly sure that the code to walk the collision
> > chains in assembly has never been enabled?
> It was previously enabled in the VTI branch; do you want us to
> move that to the non-VTI branch too?

Sure, please submit a patch.  (Ideally it should be tied
to an ifdef so we can measure what the performance difference
is... I seem to recall from vBlades that it didn't make
much difference, but it seems as though it should, so
I'd like to measure.)
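
(For context, the walk in question is the long-format collision-chain
lookup.  In C it is something like the following, with hypothetical
field names; the assembly version would do the same thing inline in
the TLB-miss handler:)

  /* C sketch of a long-format VHPT collision-chain walk; field and
   * function names are hypothetical. */
  struct vhpt_entry {
      unsigned long ti_tag;      /* translation tag */
      unsigned long page_flags;
      unsigned long itir;
      struct vhpt_entry *next;   /* collision chain */
  };

  static struct vhpt_entry *
  vhpt_chain_lookup(struct vhpt_entry *bucket, unsigned long tag)
  {
      struct vhpt_entry *v;

      for (v = bucket; v != NULL; v = v->next)
          if (v->ti_tag == tag)
              return v;          /* hit */
      return NULL;               /* miss: take the slow path */
  }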

Dan

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
