Re: [Xen-devel] HVM save/restore issue
On 20/3/07 08:46, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx> wrote:
>> Out of interest: why would you do this? I glanced upon the code you are
>> referring to in xc_hvm_restore.c yesterday, and it struck me as particularly
>> gross. All three PFNs (ioreq, bufioreq, xenstore) could be saved in the
>> store after building the domain and then saved/restored as part of the
>> Python-saved data. The situation is easier than for a PV guest because PFNs
>
> Saving all the PFNs directly is a good idea. I have this code to keep the
> create and restore processes similar.
> I'd like to save/restore all the PFNs directly in xc_hvm_{save,restore}. Is
> this what you want?
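For concreteness, saving the three special PFNs directly in the image stream
might look something like the following. This is only an untested sketch: the
struct layout and helper names are illustrative, not actual
xc_hvm_{save,restore} code.

/* Untested sketch: carry the three special PFNs in the save image itself,
 * rather than overloading parameters on the restore side.  Names and
 * layout are illustrative only. */
#include <stdint.h>
#include <unistd.h>

struct hvm_magic_pfns {
    uint64_t ioreq_pfn;     /* shared ioreq page    */
    uint64_t bufioreq_pfn;  /* buffered ioreq page  */
    uint64_t xenstore_pfn;  /* xenstore ring page   */
};

/* Save side: emit the record after the guest memory image.
 * (Real code would loop on short writes and handle EINTR.) */
static int save_magic_pfns(int io_fd, const struct hvm_magic_pfns *p)
{
    return (write(io_fd, p, sizeof(*p)) == (ssize_t)sizeof(*p)) ? 0 : -1;
}

/* Restore side: read the record back and hand the values to the device
 * model / xenstore setup, so nothing has to be guessed at restore time. */
static int restore_magic_pfns(int io_fd, struct hvm_magic_pfns *p)
{
    return (read(io_fd, p, sizeof(*p)) == (ssize_t)sizeof(*p)) ? 0 : -1;
}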
Some other thoughts on xc_hvm_restore as it stands, and its use/abuse of the
'store_mfn' parameter to pass in memory_static_min. I think that can
reasonably be got rid of:
1. Do the setmaxmem hypercall in Python. There's no reason to be doing it
in xc_hvm_save().
2. Instead of preallocating the HVM memory, populate the physmap on demand
as we do now in xc_linux_restore. I'd do this by having an 'allocated
bitmap', indexed by guest pfn, where a '1' means that page is already
populated. Alternatively we might choose to avoid needing the bitmap by
always doing populate_physmap() whenever we see a pfn, and have Xen
guarantee that to be a no-op if RAM is already allocated at that pfn.
If we go the bitmap route I'd just make it big enough for a 4GB guest up
front (only 128kB required) and then realloc() it to be twice as big
whenever we go off the end of the current bitmap (see the sketch below).
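To make the bitmap route concrete, something along these lines would do. Again
this is only a rough, untested sketch; the identifiers, and the populate step
mentioned in the trailing comment, are illustrative rather than proposed code.

/* Untested sketch of the 'allocated bitmap': size it for a 4GB guest up
 * front (1M pfns == 128kB of bitmap) and double it with realloc() whenever
 * a pfn falls off the end. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static uint8_t *alloc_bitmap;
static unsigned long bitmap_size;   /* in bytes */

static int pfn_is_allocated(unsigned long pfn)
{
    return ( (pfn / 8) < bitmap_size ) &&
           ( alloc_bitmap[pfn / 8] & (1 << (pfn % 8)) );
}

static int mark_allocated(unsigned long pfn)
{
    if ( alloc_bitmap == NULL )
    {
        bitmap_size  = (1UL << 20) / 8;   /* enough for a 4GB guest */
        alloc_bitmap = calloc(1, bitmap_size);
        if ( alloc_bitmap == NULL )
            return -1;
    }

    while ( (pfn / 8) >= bitmap_size )
    {
        uint8_t *n = realloc(alloc_bitmap, bitmap_size * 2);
        if ( n == NULL )
            return -1;
        memset(n + bitmap_size, 0, bitmap_size);
        alloc_bitmap  = n;
        bitmap_size  *= 2;
    }

    alloc_bitmap[pfn / 8] |= 1 << (pfn % 8);
    return 0;
}

/* The restore loop would then be roughly:
 *     if ( !pfn_is_allocated(pfn) )
 *     {
 *         populate the physmap at pfn (populate_physmap memory op),
 *         then mark_allocated(pfn);
 *     }
 * or, if Xen guarantees populate_physmap is a no-op on an already
 * populated pfn, the bitmap can be dropped entirely. */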
-- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel