[Xen-devel] RE: Xen/ia64 - global or per VP VHPT
To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "Yang, Fred" <fred.yang@xxxxxxxxx>
Subject: [Xen-devel] RE: Xen/ia64 - global or per VP VHPT
From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
Date: Sat, 30 Apr 2005 20:16:13 -0700
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, ipf-xen <ipf-xen@xxxxxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Moving to xen-ia64-devel only...
> -----Original Message-----
> From: Dong, Eddie [mailto:eddie.dong@xxxxxxxxx]
> Sent: Friday, April 29, 2005 8:11 PM
> To: Magenheimer, Dan (HP Labs Fort Collins); Yang, Fred
> Cc: ipf-xen; Xen-devel; xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: Xen/ia64 - global or per VP VHPT
>
> Hi, Dan:
> See my comments below; note that there is a 15-hour time
> difference between us.
>
> Magenheimer, Dan (HP Labs Fort Collins) wrote:
> >>> Per-domain VHPT will have its disadvantages too, namely a large
> >>> chunk of memory per domain that is not owned by the domain.
> >>> Perhaps this is not as much of a problem on VT which will be
> >>> limited to 16 domains, but I hope to support more non-VT domains
> >>> (at least 64, maybe more).
> >> The quick answer: we are using a fixed partition of the RID
> >> space to get 16 domains for a start, enough to reach domainN.
> >> But that is just to get the basic code working. The scheme can
> >> be switched to dynamic RID partitioning to support >64 domains.
> >
> > But only with a full TLB purge on every domain switch, correct?
> >
> Actually, we have designed the RID virtualization
> mechanism, but it is not in this implementation yet. In this
> area there is no real difference between your approach
> (a starting_rid/ending_rid per domain) and using the high 4
> bits to indicate the domain ID; merging the two is quite easy.
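
As an illustration only, here is roughly what a fixed RID partition that reserves the high 4 bits for the domain looks like. The constants and the machine_rid() helper are invented for this sketch, not taken from either implementation, and a 24-bit ia64 region ID is assumed:

#include <assert.h>

/* ia64 region IDs are 24 bits wide; reserve the high 4 bits for the
 * domain, leaving 20 bits of RID space per domain (16 domains total). */
#define RID_BITS        24
#define DOM_BITS         4
#define GUEST_RID_BITS  (RID_BITS - DOM_BITS)           /* 20 */
#define GUEST_RID_MASK  ((1UL << GUEST_RID_BITS) - 1)

/* Map (domain id, guest RID) to a unique machine RID.  Because every
 * domain's translations carry distinct machine RIDs, a domain switch
 * needs no TLB purge; purges happen only when RIDs are recycled. */
static unsigned long machine_rid(unsigned long domid, unsigned long guest_rid)
{
    assert(domid < (1UL << DOM_BITS));
    return (domid << GUEST_RID_BITS) | (guest_rid & GUEST_RID_MASK);
}

A dynamic partition would replace the fixed 4-bit shift with a per-domain starting_rid/ending_rid range, which is why the two approaches merge easily.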
>
> In our implementation, a full TLB purge happens only when
> the machine TLB is exhausted and the HV decides to recycle all
> machine TLB entries (as current Linux does). A domain switch has
> no extra requirement beyond switching the machine
> PTA (to point to the per-domain VHPT).
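
For concreteness, a minimal sketch of what that domain-switch path could look like with a per-domain VHPT. The struct and the set_pta() stub are invented names, not Xen/ia64 code; the PTA field layout (ve in bit 0, size in bits 7:2, base in the high bits) follows the ia64 architecture definition:

/* Per-domain MMU state: where this domain's VHPT lives.  The base
 * must be aligned to the VHPT size so the low PTA bits stay clear. */
struct vhpt_info {
    unsigned long base;      /* machine address of the VHPT      */
    unsigned long log_size;  /* log2 of the VHPT size in bytes   */
};

/* Stand-in for "mov cr.pta = val"; a real hypervisor would use the
 * privileged instruction or an intrinsic here. */
static unsigned long fake_cr_pta;
static void set_pta(unsigned long val) { fake_cr_pta = val; }

/* MMU side of a domain switch: no TLB purge, just repoint the VHPT
 * walker at the incoming domain's table. */
static void switch_vhpt(const struct vhpt_info *next)
{
    unsigned long pta = next->base            /* PTA.base               */
                      | (next->log_size << 2) /* PTA.size (bits 7:2)    */
                      | 1UL;                  /* PTA.ve: walker enabled */
    set_pta(pta);
}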
>
> > All this just says that a global VHPT may not be good for a
> > big machine. This may be true. I'm not suggesting that
> > Xen/ia64 support ONLY a global VHPT or even necessarily that
> > it be the default, just that we preserve the capability to
> > configure either (or even both).
> I am afraid supporting both solutions is an extremely
> high burden, as the vMMU is too fundamental a thing. For
> example: how do we support hypercall information passing between
> guest and HV? You are using the poor man's exception handler now,
> which is OK as a temporary debugging effort, but as we discussed
> it has critical problems/limitations.
> The way we solve that in our vMMU is that we keep
> all guest TLBs in an HV-internal data structure, and we have
> defined separate TLB section types in the vTLB, such as
> ForeignMap (the term in x86 Xen) and the hypercall shared page.
> Xenolinux, the device model, or others can insert special maps
> of that type. Sections of this type are not automatically
> purged when the collision chain is full, so the guest never
> sees a TLB miss when the HV does a "uaccess" to guest data.
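
A small sketch of that idea, with invented type and field names rather than the actual vTLB structures: each cached guest translation carries a section type, and the recycling path skips pinned ForeignMap / hypercall-shared-page entries so the HV's accesses to guest data never hit a purged mapping:

#include <stddef.h>

enum vtlb_section {
    VTLB_GUEST,         /* ordinary guest translation: may be recycled */
    VTLB_FOREIGN_MAP,   /* foreign map (x86 Xen term): pinned          */
    VTLB_HCALL_SHARED,  /* hypercall shared page: pinned               */
};

struct vtlb_entry {
    unsigned long      vaddr;     /* guest virtual address */
    unsigned long      pte;       /* cached translation    */
    enum vtlb_section  section;
    struct vtlb_entry *next;      /* collision chain       */
};

/* When a collision chain is full, pick a victim to recycle: only
 * ordinary guest entries qualify; pinned sections are never evicted. */
static struct vtlb_entry *vtlb_pick_victim(struct vtlb_entry *chain)
{
    for (struct vtlb_entry *e = chain; e != NULL; e = e->next)
        if (e->section == VTLB_GUEST)
            return e;
    return NULL;    /* the chain holds only pinned entries */
}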
> How would that be solved with a global VHPT? I am afraid
> it is really hard. Why should we spend more time discarding the
> existing approach to investigate an unproven direction?
>
> BTW, how do you support MMIO maps for domain N if
> domain N is an unmodified Linux? I am afraid a global VHPT
> will also eventually need a similar vTLB data structure to support that.
>
> > Is the per-domain VHPT the same size as whatever the domain
> > allocates for its own VHPT (essentially a shadow)? Aren't there
> > purge performance problems with this too?
> In our vMMU implementation, the per-domain VHPT is only
> used to assist the software data structure (the per-domain vTLB),
> so it is not actually a shadow.
>
> Eddie
>
>
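
To make the "assists, not shadows" distinction concrete, a toy sketch under assumed names and 16KB pages: the per-domain machine VHPT behaves as a hash cache refilled from the software vTLB on a miss, independent of whatever VHPT the guest allocates for itself:

#define PAGE_SHIFT   14                       /* 16KB pages assumed */
#define VHPT_ENTRIES (1UL << 16)

struct vhpt_slot { unsigned long tag, pte; };
static struct vhpt_slot vhpt[VHPT_ENTRIES];   /* per-domain machine VHPT */

/* Stand-in for the per-domain software vTLB lookup; the real version
 * would walk the vTLB collision chains the HV keeps for the domain. */
static int vtlb_lookup(unsigned long vaddr, unsigned long *pte)
{
    (void)vaddr; (void)pte;
    return 0;                     /* pretend there is no translation */
}

/* VHPT-miss path: refill the hash cache from the vTLB, or report a
 * genuine miss so it can be reflected to the guest. */
static int vhpt_miss(unsigned long vaddr)
{
    unsigned long pte;
    if (!vtlb_lookup(vaddr, &pte))
        return 0;                                /* reflect to guest */
    struct vhpt_slot *s = &vhpt[(vaddr >> PAGE_SHIFT) % VHPT_ENTRIES];
    s->tag = vaddr >> PAGE_SHIFT;
    s->pte = pte;
    return 1;
}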