Hi,
At 16:10 +0800 on 28 Jul (1185639053), Huang, Xinmei wrote:
> With the current accelerated VGA for qemu-dm, the guest can access the
> LFB directly; however, qemu-dm is unaware of these accesses. The
> accompanying task is to determine the range of the LFB to be redrawn
> in the guest display window. The current qemu-dm maintains a copy of
> the LFB and computes the LFB dirty-bitmap via memcmp. This patch
> adopts another way to get the LFB dirty-bitmap: a single hypercall
> instructs the hypervisor to fill in the dirty-bitmap. The hypervisor
> checks the D-bit of the PTEs and updates the dirty-bitmap.
Thanks for this -- those numbers look very good!
The shadow-code modifications seem to have a lot of moving parts,
though. Since we expect that the guest will have a single, contiguous,
kernel-mode mapping of the LFB, we should be able to do this with less
administration:
- Figure out the VA of the writable mapping of the LFB.
- When asked for the bitmap, walk the shadow linear page tables of the
area, recording and clearing the _PAGE_DIRTY bits. If you see a PTE
pointing at the wrong place, back off and tell qemu to try the slow
way. If you see an LFB mfn with a writable count > 1, either give
up or assume it's dirty. (If you take a page-fault, then the guest
has marked its writable mapping of the LFB non-writable at a higher
level -- probably just back off at that point.)
- When a shadow PTE pointing at the LFB is made or cleared, set the bit
in the bitmap.
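The hook in that last step might look roughly like this -- purely an
illustrative userspace sketch, not real shadow code; lfb_mfns,
lfb_dirty_bitmap and lfb_note_mapping_change() are all invented names:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch of the shadow_set_l1e() hook described above.  When a shadow
 * l1e mapping an LFB mfn is installed or torn down, mark that frame
 * dirty: the dirty-bit walk cannot see writes made through a mapping
 * that has since disappeared, nor guess what a brand-new mapping will
 * be used for.  The mfn list and bitmap layout are made up here. */

static const uint64_t lfb_mfns[] = { 0x1000, 0x1001, 0x1003 };
static uint8_t lfb_dirty_bitmap[1];   /* one bit per LFB page */

static void lfb_note_mapping_change(uint64_t mfn)
{
    for ( size_t i = 0; i < sizeof(lfb_mfns)/sizeof(lfb_mfns[0]); i++ )
        if ( lfb_mfns[i] == mfn )
            lfb_dirty_bitmap[i / 8] |= 1u << (i % 8);
}
```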
That involves a single equality test in sh_page_fault() to spot the VA,
a few lines in shadow_set_l1e() to spot new/departing mappings, and
almost everything else can happen in one routine that reads/writes the
linear pagetables with a single for() loop.
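As a rough shape for that one routine -- again a userspace sketch, with
PTEs modelled as plain 64-bit words rather than real linear-pagetable
accesses, and lfb_update_dirty_bitmap() an invented name:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define _PAGE_PRESENT  (1UL << 0)
#define _PAGE_DIRTY    (1UL << 6)   /* D bit, as on x86 */

/* Walk the (simulated) shadow l1es covering the LFB mapping, record
 * each dirty bit in the caller's bitmap, and clear it so the next scan
 * sees fresh dirtying.  Returns 0 on success, -1 if a PTE looks wrong
 * (caller falls back to the slow memcmp path). */
static int lfb_update_dirty_bitmap(uint64_t *ptes, size_t nr_ptes,
                                   uint8_t *bitmap)
{
    for ( size_t i = 0; i < nr_ptes; i++ )
    {
        if ( !(ptes[i] & _PAGE_PRESENT) )
            return -1;               /* unexpected mapping: back off */
        if ( ptes[i] & _PAGE_DIRTY )
        {
            bitmap[i / 8] |= 1u << (i % 8);
            ptes[i] &= ~_PAGE_DIRTY; /* one bit per word: a plain RMW
                                      * under the shadow lock, no
                                      * locked op needed */
        }
    }
    /* In the real thing, flush TLBs here so the D bits get set again
     * on the next guest write. */
    return 0;
}
```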
A few other points:
- The assumption that the LFB is MFN-contiguous is not valid. You do
work around the debug=y allocator's habit of handing out pages
backwards, but that's there to alert you to the more general
problem of discontiguous mfns.
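Concretely: instead of testing pte_mfn == lfb_base_mfn + i, the walk
has to compare each PTE's mfn against the actual list of LFB frames.
A sketch, with invented names:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* The LFB's frames need not be contiguous (or even in order), so keep
 * the list of mfns that backs the LFB and test membership, rather than
 * assuming mfn(page i) == base_mfn + i.  Returns which LFB page a
 * frame backs, or -1 if it is not an LFB frame (wrong mapping). */
static int lfb_page_index(uint64_t mfn, const uint64_t *lfb_mfns,
                          size_t nr_mfns)
{
    for ( size_t i = 0; i < nr_mfns; i++ )
        if ( lfb_mfns[i] == mfn )
            return (int)i;
    return -1;
}
```

A linear scan is fine for the few thousand frames of an LFB; a sorted
array or small hash would do if it ever showed up in profiles.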
- Since the dirty bits are only one per word, they can be atomically
cleared without needing locked operations to protect their
neighbours. That means that you don't need to pause the domain: the
shadow lock will be enough to keep the operation safe.
- After clearing the dirty bits, you need to flush TLBs to make sure
they'll get set again. VMX guests get their TLBs flushed on every
VMEXIT at the moment, but that's not true on SVM on some hardware,
and won't be true on VMX when Intel processors get tagged TLBs.
Cheers,
Tim.
--
Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>, XenSource UK Limited
Registered office c/o EC2Y 5EB, UK; company number 05334508
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel