Re: [Xen-devel] [PATCH] Std VGA Performance
On 24/10/07 22:36, "Ben Guthro" <bguthro@xxxxxxxxxxxxxxx> wrote:
> This patch improves the performance of Standard VGA,
> the mode used during Windows boot and by the Linux
> splash screen.
>
> It does so by buffering all the stdvga programmed output ops
> and memory mapped ops (both reads and writes) that are sent to QEMU.
How much benefit comes from immediate servicing of PIO input ops versus the
massive increase in buffered-io slots? Removing the former optimisation
would certainly make the patch a lot smaller!
What happens across save/restore? The hypervisor's state cache will go away,
won't it? I suppose it's okay if the guest is in SVGA LFB mode at that point
(actually, that's another thing - do you correctly handle hand-off between
VGA and SVGA modes?), but I don't know that we want to rely on that.
-- Keir
> We maintain locally essential VGA state so we can respond
> immediately to input and read ops without waiting for
> QEMU. We snoop output and write ops to keep our state
> up-to-date.
>
> PIO input ops are satisfied from cached state without
> bothering QEMU.
>
> PIO output and mmio ops are passed through to QEMU, including
> mmio read ops. This is necessary because mmio reads
> can have side effects.
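
For illustration, a rough sketch of the scheme described above. The names
(stdvga_state, stdvga_snoop_outb, stdvga_cached_inb, stdvga_dispatch) and the
cut-down register set are assumptions for the sketch, not the structures from
the actual patch: PIO reads of shadowed ports are answered from the local
cache, PIO writes are snooped before being forwarded, and MMIO accesses
(including reads, which can have side effects) always go to QEMU.

/* Illustrative sketch only: hypothetical names and a cut-down register
 * set, not the structures from the actual patch. */
#include <stdint.h>
#include <stdbool.h>

#define VGA_SR_INDEX 0x3c4            /* sequencer index port */
#define VGA_SR_DATA  0x3c5            /* sequencer data port  */

struct stdvga_state {
    uint8_t sr_index;                 /* last sequencer index written */
    uint8_t sr[8];                    /* shadowed sequencer registers */
    /* ... other shadowed VGA registers (GR, CRTC, attribute, ...)   */
};

/* Snoop a PIO write so the shadow stays current; the write itself is
 * still buffered and sent on to QEMU. */
static void stdvga_snoop_outb(struct stdvga_state *s, uint16_t port,
                              uint8_t val)
{
    if ( port == VGA_SR_INDEX )
        s->sr_index = val;
    else if ( port == VGA_SR_DATA && s->sr_index < sizeof(s->sr) )
        s->sr[s->sr_index] = val;
}

/* Satisfy a PIO read locally if the port is shadowed. */
static bool stdvga_cached_inb(const struct stdvga_state *s, uint16_t port,
                              uint8_t *val)
{
    if ( port == VGA_SR_INDEX )
    {
        *val = s->sr_index;
        return true;
    }
    if ( port == VGA_SR_DATA && s->sr_index < sizeof(s->sr) )
    {
        *val = s->sr[s->sr_index];
        return true;
    }
    return false;
}

/* PIO reads of shadowed ports are answered here; PIO writes are snooped
 * and then forwarded; MMIO accesses always go to QEMU. */
enum io_action { SERVE_LOCALLY, FORWARD_TO_QEMU };

static enum io_action stdvga_dispatch(struct stdvga_state *s, bool is_mmio,
                                      bool is_read, uint16_t port,
                                      uint8_t val, uint8_t *out)
{
    if ( !is_mmio && is_read && stdvga_cached_inb(s, port, out) )
        return SERVE_LOCALLY;
    if ( !is_mmio && !is_read )
        stdvga_snoop_outb(s, port, val);  /* keep the shadow up to date  */
    return FORWARD_TO_QEMU;               /* queued on the buffered ring */
}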
>
> I have changed the format of the buffered_iopage.
> It used to contain 80 elements of type ioreq_t (48 bytes each).
> Now it contains 672 elements of type buf_ioreq_t (6 bytes each).
> Being able to pipeline 8 times as many ops improves
> VGA performance by a factor of 8.
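
For concreteness, one way a 6-byte buffered request could be packed so that
672 slots plus the ring pointers fit in a single 4 KiB shared page. The field
names and bit widths below are illustrative guesses, not the exact layout in
the patch.

#include <stdint.h>

/* One buffered request: 6 bytes instead of the 48-byte ioreq_t.
 * Field names and bit widths are illustrative only. */
struct buf_ioreq {
    uint16_t type:2;      /* PIO or MMIO                            */
    uint16_t dir:1;       /* 0 = write, 1 = read                    */
    uint16_t size:2;      /* log2 of the access size                */
    uint16_t addr:11;     /* offset into the VGA port/memory range  */
    uint32_t data;        /* the value written (or to be read back) */
} __attribute__((packed));  /* 2 + 4 = 6 bytes */

/* The shared ring lives in one 4 KiB page:
 * 672 * 6 = 4032 bytes of slots, leaving room for the ring pointers. */
#define IOREQ_BUFFER_SLOT_NUM 672

struct buffered_iopage {
    uint32_t read_pointer;                            /* consumer (QEMU) */
    uint32_t write_pointer;                           /* producer (Xen)  */
    struct buf_ioreq buf_ioreq[IOREQ_BUFFER_SLOT_NUM];
};  /* 8 + 4032 = 4040 bytes, fits within a 4 KiB page */

The factor-of-8 claim follows directly from the capacity: the same page now
holds roughly eight times as many pending requests before the guest must
stall waiting for QEMU to drain the ring.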
>
> I changed hvm_buffered_io_intercept to use the same
> registration and callback mechanism as hvm_portio_intercept
> rather than the hacky hardcoding it used before.
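
A hypothetical sketch of what a portio-style registration/callback table for
buffered I/O might look like; the type and function names here are invented
for illustration and do not match the Xen sources.

/* Hypothetical sketch of a portio-style registration/callback table for
 * buffered I/O; names are invented and do not match the Xen sources. */
#include <stdint.h>
#include <stdbool.h>

struct io_request {
    uint64_t addr;
    uint32_t size;
    uint32_t dir;            /* 0 = write, 1 = read */
    uint64_t data;
};

/* A handler claims an address range and returns true if it consumed
 * the request. */
typedef bool (*buffered_io_action_t)(struct io_request *req, void *opaque);

struct buffered_io_handler {
    uint64_t start, length;
    buffered_io_action_t action;
    void *opaque;
};

#define MAX_BUFFERED_IO_HANDLERS 8
static struct buffered_io_handler handlers[MAX_BUFFERED_IO_HANDLERS];
static unsigned int nr_handlers;

/* Registration, analogous to how portio handlers are registered. */
static void register_buffered_io_handler(uint64_t start, uint64_t length,
                                         buffered_io_action_t action,
                                         void *opaque)
{
    if ( nr_handlers < MAX_BUFFERED_IO_HANDLERS )
    {
        handlers[nr_handlers].start  = start;
        handlers[nr_handlers].length = length;
        handlers[nr_handlers].action = action;
        handlers[nr_handlers].opaque = opaque;
        nr_handlers++;
    }
}

/* Intercept: walk the table and let the first matching handler deal
 * with the request, instead of hardcoding the VGA ranges. */
static bool buffered_io_intercept(struct io_request *req)
{
    unsigned int i;

    for ( i = 0; i < nr_handlers; i++ )
        if ( req->addr >= handlers[i].start &&
             req->addr + req->size <= handlers[i].start + handlers[i].length )
            return handlers[i].action(req, handlers[i].opaque);

    return false;   /* nobody claimed it: fall back to the normal path */
}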
>
> In platform.c, I fixed send_timeoffset_req() to set its
> ioreq size to 8 (rather than 4), and its count to 1 (which
> was missing).
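
The fix described amounts to filling in the request's size and count before
it is sent; a minimal sketch, using a cut-down stand-in for ioreq_t and
placeholder constants.

#include <stdint.h>

/* Cut-down stand-in for ioreq_t with only the fields relevant here;
 * names and constant values are placeholders. */
struct ioreq {
    uint8_t  type;
    uint8_t  dir;
    uint32_t size;           /* bytes per rep   */
    uint32_t count;          /* number of reps  */
    uint64_t data;
};

#define IOREQ_TYPE_TIMEOFFSET 7   /* placeholder */
#define IOREQ_WRITE           0   /* placeholder */

/* The time offset is a 64-bit quantity, so the request must say
 * size = 8, and exactly one rep (count = 1) is sent. */
static void send_timeoffset_req(struct ioreq *p, int64_t offset)
{
    p->type  = IOREQ_TYPE_TIMEOFFSET;
    p->dir   = IOREQ_WRITE;
    p->size  = 8;            /* previously 4           */
    p->count = 1;            /* previously left unset  */
    p->data  = (uint64_t)offset;
    /* ... queue the request for QEMU ... */
}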