xen-devel
[Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation
To: |
Zachary Amsden <zach@xxxxxxxxxx> |
Subject: |
[Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation |
From: |
Chris Wright <chrisw@xxxxxxxxxxxx> |
Date: |
Tue, 14 Mar 2006 13:27:42 -0800 |
Cc: |
Andrew Morton <akpm@xxxxxxxx>, Joshua LeVasseur <jtl@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Pratap Subrahmanyam <pratap@xxxxxxxxxx>, Wim Coekaerts <wim.coekaerts@xxxxxxxxxx>, Jack Lo <jlo@xxxxxxxxxx>, Dan Hecht <dhecht@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxxxx>, Christopher Li <chrisl@xxxxxxxxxx>, Chris Wright <chrisw@xxxxxxxxxxxx>, Virtualization Mailing List <virtualization@xxxxxxxxxxxxxx>, Linus Torvalds <torvalds@xxxxxxxx>, Anne Holler <anne@xxxxxxxxxx>, Jyothy Reddy <jreddy@xxxxxxxxxx>, Kip Macy <kmacy@xxxxxxxxxxx>, Ky Srinivasan <ksrinivasan@xxxxxxxxxx>, Leendert van Doorn <leendert@xxxxxxxxxxxxxx>, Dan Arai <arai@xxxxxxxxxx> |
Delivery-date: |
Wed, 15 Mar 2006 10:17:20 +0000 |
Envelope-to: |
www-data@xxxxxxxxxxxxxxxxxxx |
In-reply-to: |
<4416078C.4030705@xxxxxxxxxx> |
List-help: |
<mailto:xen-devel-request@lists.xensource.com?subject=help> |
List-id: |
Xen developer discussion <xen-devel.lists.xensource.com> |
List-post: |
<mailto:xen-devel@lists.xensource.com> |
List-subscribe: |
<http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe> |
List-unsubscribe: |
<http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe> |
References: |
<200603131759.k2DHxeep005627@xxxxxxxxxxxxxxxxxxx> <20060313224902.GD12807@xxxxxxxxxxxxxxxxxx> <4416078C.4030705@xxxxxxxxxx> |
Sender: |
xen-devel-bounces@xxxxxxxxxxxxxxxxxxx |
User-agent: |
Mutt/1.4.2.1i |
* Zachary Amsden (zach@xxxxxxxxxx) wrote:
> Pushing up the stack with a higher level API is a serious consideration,
> but only if you can show serious results from it. I'm not convinced
> that you can actually hone in on anything /that isn't already a
> performance problem on native kernels/. Consider, for example, that we
> don't actually support remote TLB shootdown IPIs via VMI calls. Why is
> this a performance problem? Well, very likely, those IPI shootdowns are
> going to be synchronous. And if you don't co-schedule the CPUs in your
> virtual machine, you might just have issued synchronous IPIs to VCPUs
> that aren't even running. A serious performance problem.
>
> Is it? Or is it really, just another case where the _native_ kernel can
> be even more clever, and avoid doing those IPI shootdowns in the
> firstplace? I've watched IPI shootdown in Linux get drastically better
> in the 2.6 series of kernels, and see (anecdotal number quoting) maybe 4
> or 5 of them in the course of a kernel compile. There is no longer a
> giant performance boon to be gained here.
>
> Similarly, you can almost argue the same thing with spinlocks - if you
> really are seeing performance issues because of the wakeup of a
> descheduled remote VPU, maybe you really need to think about moving that
> lock off a hot path or using a better, lock free synchronization method.
>
> I'm not arguing against these features - in fact, I think they can be
> done in a way that doesn't intrude too much inside of the kernel. After
> all, locks and IPIs tend to be part of the lower layer architecture
> anyways. And they definitely do win back some of the background noise
> introduced by virtualization. But if you decide to make the interface
> more complicated, you really need to have an accurate measure of exactly
> what you can gain by it to justify that complexity.
Yes, I completely agree. Without specific performance numbers it's just
hand waving. To make it more concrete, I'll work on a compare/contrast
of the interfaces so we have specifics to discuss.
> >Included just for completeness can be beginning of API bloat.
>
> The design impact of this bloat is zero - if you don't want to implement
> virtual methods for, say, debug register access - then you don't need to
> do anything. You trap and emulate by default. If on the other hand,
> you do want to hook them, you are welcome to. The hypervisor is free to
> choose the design costs that are appropriate for their usage scenarios,
> as is the kernel - it's not in the spec, but certainly is open for
> debate that certain classes of instructions such as these need not even
> be converted to VMI calls. We did implement all of these in Linux for
> performance and symmetry.
Yup. Just noting that API without clear users is the type of thing that
is regularly rejected from Linux.
> >Many of these will look the same on x86-64, but the API is not
> >64-bit clean so has to be duplicated.
>
> Yes, register pressure forces the PAE API to be slightly different from
> the long mode API. But long mode has different register calling
> conventions anyway, so it is not a big deal. The important thing is,
> once the MMU mess is sorted out, the same interface can be used from C
> code for both platforms, and the details about which lock primitives are
> used can be hidden. The cost of which lock primitives to use differs on
> 32-bit and 64-bit platforms, across vendor, and the style of the
> hypervisor implementation (direct / writable / shadowed page tables).
My mistake, it makes perfect sense from ABI point of view.
> >Is this the batching, multi-call analog?
>
> Yes. This interface needs to be documented in a much better fashion.
> But the idea is that VMI calls are mapped into Xen multicalls by
> allowing deferred completion of certain classes of operations. That
> same mode of deferred operation is used to batch PTE updates in our
> implementation (although Xen uses writable page tables now, this used to
> provide the same support facility in Xen as well). To complement this,
> there is an explicit flush - and it turns out this maps very nicely,
> getting rid of a lot of the XenoLinux changes around mmu_context.h.
Are these valid differences? Or did I misunderstand the batching
mechanism?
1) can't use stack based args, so have to allocate each data structure,
which could conceivably fail unless it's some fixed buffer.
2) complicates the rom implementation slightly where implementation of
each deferrable part of the API needs to have switch (am I deferred or
not) to then build the batch, or make direct hypercall.
3) flushing in smp, have to be careful to manage simulataneous defers
and flushes from potentially multiple cpus in guest.
Doesn't seem these are showstoppers, just differences worth noting.
There aren't as many multicalls left in Xen these days anyway.
thanks,
-chris
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
|
<Prev in Thread] |
Current Thread |
[Next in Thread>
|
- [Xen-devel] [RFC, PATCH 1/24] i386 Vmi documentation, Zachary Amsden
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Chris Wright
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Zachary Amsden
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation,
Chris Wright <=
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Zachary Amsden
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Chris Wright
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Zachary Amsden
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Daniel Arai
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Chris Wright
- [Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Eli Collins
[Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Rik van Riel
[Xen-devel] Re: [RFC, PATCH 1/24] i386 Vmi documentation, Andi Kleen
|
|
|