xen-ia64-devel

Re: [Xen-ia64-devel] [PATCH] [RFC] paravirt_alt

On Wed, Jul 18, 2007 at 05:22:18PM -0600, Alex Williamson wrote:

>    This is a very interesting patch, and quite well done.  Nice work.
> If I understand correctly, the performance benefit of ENTRY=n is that
> br.cond.sptk.many is overwritten with a nop, so we fall through to the
> xen code without the branch penalty that the generic paravirt_entry code
> causes, correct?

Yes.
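
Just to make the mechanism concrete, here is a rough sketch of the
boot-time patching step. The names (paravirt_patch_site,
ia64_nop_bundle, paravirt_patch_branches) are made up for this
example, and real ia64 patching has to rewrite whole 128-bit
instruction bundles, which the sketch glosses over:

/* hypothetical patch-site table, collected via the linker script;
 * memcpy() and flush_icache_range() are the usual kernel helpers */
struct paravirt_patch_site {
        unsigned long addr;     /* address of the br.cond.sptk.many */
        unsigned long len;      /* bytes covered by the patch site */
};

extern struct paravirt_patch_site __start_paravirt_sites[];
extern struct paravirt_patch_site __stop_paravirt_sites[];
extern const unsigned char ia64_nop_bundle[16]; /* assumed nop encoding */

static void paravirt_patch_branches(void)
{
        struct paravirt_patch_site *p;

        for (p = __start_paravirt_sites; p < __stop_paravirt_sites; p++) {
                /* overwrite the branch with a nop so execution falls
                 * through to the xen code with no branch penalty */
                memcpy((void *)p->addr, ia64_nop_bundle, p->len);
                flush_icache_range(p->addr, p->addr + p->len);
        }
}

When not running on Xen, the sites would simply be left unpatched, so
the branch is still taken as in the generic paravirt_entry path.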


>  For a xenlinux kernel running on bare metal, it seems
> there's no performance difference between ENTRY=y/n.  I assume ENTRY=y
> would be necessary to support a non-Xen PV technology, correct?

That's right.
Another idea is to patch all of those entry call sites using
relocation entries, so in theory the runtime overhead would be zero,
with the relocation cost paid once at boot time.
However, I haven't found a good way to obtain the necessary relocation
entries, except by writing a custom tool that extracts them from the
full set of relocation entries produced by the --emit-relocs ld option.
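
In case a concrete starting point helps, below is a rough sketch of
such a tool (plain libc, hypothetical, not part of the patch). It just
walks the SHT_RELA sections that --emit-relocs leaves in the output
and prints every entry; filtering those down to only the paravirt
entry call sites is the part I have no good answer for, so it is
omitted here:

#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
        FILE *f;
        long size;
        unsigned char *buf;
        Elf64_Ehdr *eh;
        Elf64_Shdr *sh;
        int i;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <vmlinux>\n", argv[0]);
                return 1;
        }
        f = fopen(argv[1], "rb");
        if (!f) {
                perror("fopen");
                return 1;
        }
        fseek(f, 0, SEEK_END);
        size = ftell(f);
        rewind(f);
        buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) {
                fprintf(stderr, "read error\n");
                return 1;
        }

        eh = (Elf64_Ehdr *)buf;
        if (memcmp(eh->e_ident, ELFMAG, SELFMAG) != 0) {
                fprintf(stderr, "not an ELF file\n");
                return 1;
        }
        sh = (Elf64_Shdr *)(buf + eh->e_shoff);
        for (i = 0; i < eh->e_shnum; i++) {
                Elf64_Rela *r;
                size_t j, n;

                if (sh[i].sh_type != SHT_RELA)
                        continue;
                r = (Elf64_Rela *)(buf + sh[i].sh_offset);
                n = sh[i].sh_size / sizeof(*r);
                /* print offset, relocation type and symbol index */
                for (j = 0; j < n; j++)
                        printf("off %#llx type %llu sym %llu\n",
                               (unsigned long long)r[j].r_offset,
                               (unsigned long long)ELF64_R_TYPE(r[j].r_info),
                               (unsigned long long)ELF64_R_SYM(r[j].r_info));
        }
        fclose(f);
        free(buf);
        return 0;
}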


>   I haven't found a significant performance difference with the patch,
> but the potential certainly seems to exist for it.  My tests may not be
> producing enough memory pressure to really see an improvement from
> removing the running_on_xen memory reference.
> 
>    Would the long-term plan for the paravirt_*.c files (excluding
> paravirt_xen.c) be to move them to arch/ia64/kernel, or maybe
> arch/ia64/kernel/paravirt?  For now the xen directory may be an
> appropriate place since there isn't another caller.

Yes.

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
