> On 19/03/2009 14:42, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>>>> It is ok for us to use an arbitrary new mfn, and then do the
>>>> update_entry. But what happen if this process failed and we want to turn
>>>> back to the old page? We still need this mechanism at that situation.
>>> If what failed? The update_entry? How could that happen?
>> As discussed before, when the page is granted to another domain, then
>> even after we update all the entries, there will still be references left.
> Hmmm I don't really understand.
The basic idea to offline a page is:
1) Mark the page offline pending.
2) If the page is owned by an HVM domain, the user has to live-migrate it.
3) If the page is owned by a PV domain, we will try to exchange the offline
pending page for a new one and free the old page. (This is the target of this
series.)
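The dispatch above can be sketched as follows; the enum and function names are illustrative assumptions for this mail, not identifiers from the patch series:

```c
/* Sketch of the per-owner offline policy described above.  An HVM
 * guest's pages cannot be individually remapped from the tools, so the
 * whole guest is live-migrated; a PV guest's page tables can be
 * rewritten by user space tools, so the single page is exchanged. */
enum offline_action {
    OFFLINE_LIVE_MIGRATE,   /* HVM owner: user must live-migrate */
    OFFLINE_EXCHANGE_PAGE,  /* PV owner: exchange the pending page */
};

enum domain_kind { DOM_HVM, DOM_PV };

static enum offline_action offline_action_for(enum domain_kind kind)
{
    return kind == DOM_HVM ? OFFLINE_LIVE_MIGRATE : OFFLINE_EXCHANGE_PAGE;
}
```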
The method to exchange the offline pending page for a PV domain is:
1) Suspend the guest.
2) Allocate a new page for the guest.
3) Get a copy of the content.
4) User space tools will scan all page table pages to see if any of them
reference the offending page; if so, they will make a hypercall to Xen to
replace each such entry with one pointing to the new page. (Through the
mmu_*ops)
5) After updating all page tables, user space tools will try to exchange the
old page with the new page. If the old mfn has no references anymore (i.e.
count_info & count_mask == 1), the exchange will update the m2p and return
success; otherwise it will return failure. (The page may still be referenced
by another domain, e.g. through a grant table entry or a foreign mapping.)
6) If step 5 succeeds, user space tools will update the content of the new
page and the p2m table; otherwise they will try to undo step 4 to revert the
page tables.
7) Resume the guest.
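The step 5 check can be sketched like this; the mask width and the helper name are assumptions for illustration (the real PGC_count_mask lives in Xen's asm/mm.h), not code from the tree:

```c
/* Assumed stand-in for Xen's PGC_count_mask: the low bits of
 * count_info hold the page's reference count. */
#define PGC_count_mask ((1UL << 26) - 1)

/* The exchange in step 5 may proceed only when the owner's allocation
 * reference is the sole remaining reference to the old page; a page
 * that is still grant-mapped or foreign-mapped has count > 1 and the
 * exchange must fail so the tools can undo step 4. */
static int old_page_exchangeable(unsigned long count_info)
{
    return (count_info & PGC_count_mask) == 1;
}
```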
This requires that we allocate the new page before the exchange call, and that
we pass both old_mfn and new_mfn to the exchange in step 5. However, the
current hypercall always allocates a new page itself to replace the old one.
Currently I am trying to add a new hypercall for this purpose.
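A possible argument layout for such a dedicated hypercall might look like the sketch below; every field name here is an assumption for illustration, not the interface actually proposed:

```c
/* Hypothetical argument block for an "exchange offline pending page"
 * hypercall: unlike XENMEM_exchange, the caller supplies both frames. */
struct offline_page_exchange {
    unsigned long old_mfn;  /* offline-pending machine frame (checked in step 5) */
    unsigned long new_mfn;  /* replacement frame allocated in step 2 */
    unsigned long gpfn;     /* guest pfn whose m2p/p2m entries are updated */
};
```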
Maybe we can instead enhance the current XENMEM_exchange to accept a mem_flags
bit: when that flag is set, exch.out.extent_start will hold the new_mfn
instead of the gpfn, and the gpfn will always be the same as the corresponding
gpfn in exch.in.extent_start. But I do think this is a bit tricky, since it
changes the meaning of exch.out.extent_start and of how the gpfn is passed
down.
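The flag-based alternative amounts to something like the sketch below; the flag name and bit position are made up for this mail, not part of the public interface:

```c
/* Hypothetical mem_flags bit: when set, the entries behind
 * exch.out.extent_start are read as caller-chosen new mfns rather
 * than gpfns, and the gpfns are taken from exch.in.extent_start. */
#define XENMEMF_out_is_mfn (1U << 20)

/* Returns 1 when out.extent_start entries should be treated as mfns. */
static int out_entries_are_mfns(unsigned int mem_flags)
{
    return (mem_flags & XENMEMF_out_is_mfn) != 0;
}
```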
Xen-devel mailing list