Jan Beulich wrote:
>>>> "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> 17.09.09 11:05 >>>
>> Can you elaborate a bit? For example, considering a system with the
>> following memory layout: 1G~3G, 1024G~1028G, 1056G~1060G, I didn't
>> catch your algorithm :$
>
> That would be (assuming it really starts at 0)
>
> 0000000000000000-00000000bfffffff
> 0000010000000000-00000100ffffffff
> 0000010800000000-00000108ffffffff
>
> right? The common non-top zero bits are 36-39, which would reduce the
> virtual address space needed for the 1:1 mapping and frame table
> approximately by a factor of 16 (with the remaining gaps dealt with by
> leaving holes in these tables' mappings).
>
> Actually, this tells me that I shouldn't simply use the first range of
> non-top zero bits, but the largest one (currently, I would use bits
> 32-34).
>
> But, to be clear, for the purposes of memory hotplug, the SRAT is what
> the parameters get determined from, not the E820 (since these
> parameters, other than the upper boundaries, must not change
> post-boot).
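Just to make sure I follow, here is a rough C sketch of the folding you
describe (made-up names, not actual Xen code; I'm assuming the hole is only
searched for above bit 31):

#include <stdint.h>

static uint64_t used_bits;                /* OR of all addresses seen */
static uint64_t bottom_mask, top_mask;    /* bits kept below/above the hole */
static unsigned int hole_shift;           /* width of the folded hole */

/* Accumulate the address bits actually used by a RAM (or SRAT) range. */
static void account_range(uint64_t start, uint64_t end)
{
    used_bits |= start | (end - 1);
}

/* Pick the largest run of zero bits below the topmost used bit. */
static void choose_hole(void)
{
    unsigned int top = 63, bit, run = 0, best_len = 0, best_start = 0;

    while ( top && !(used_bits & (1ULL << top)) )
        top--;

    for ( bit = 32; bit < top; bit++ )
    {
        if ( used_bits & (1ULL << bit) )
            run = 0;
        else if ( ++run > best_len )
        {
            best_len = run;
            best_start = bit - run + 1;
        }
    }

    hole_shift = best_len;
    bottom_mask = (1ULL << best_start) - 1;
    top_mask = ~((1ULL << (best_start + best_len)) - 1);
}

/* Fold the hole out of an address when indexing the 1:1 map / frame table. */
static uint64_t addr_to_idx(uint64_t addr)
{
    return (addr & bottom_mask) | ((addr & top_mask) >> hole_shift);
}

static uint64_t idx_to_addr(uint64_t idx)
{
    return (idx & bottom_mask) | ((idx << hole_shift) & top_mask);
}

With the three ranges above, used_bits ends up as 0x108ffffffff, so the
largest run is bits 36-39 and the index space shrinks by the factor of 16
you mention.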
Hmm, this method is difficult for hotplug. I'm not sure whether all new memory
will be reported in the SRAT.
Also, did you change mfn_valid()? Otherwise, the holes in the frame table and
M2P table will corrupt the hypervisor.
Currently I'm using an array to keep track of memory population. Each entry
guards something like 32G of memory; if an entry is empty, the MFN is invalid.
I think we could simply keep the virtual address as the entry value, but I
suspect that's not efficient enough.
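As an illustration only (the names and the assumed 52-bit machine address
width are placeholders; the 32G chunk size is what I use today), what I have
in mind is roughly:

#include <stdint.h>

#define PAGE_SHIFT     12
#define CHUNK_ORDER    (35 - PAGE_SHIFT)      /* pages per 32G chunk */
#define MAX_CHUNKS     (1UL << (52 - 35))     /* assuming 52-bit MA space */

/*
 * One entry per 32G chunk of machine memory.  A zero entry means the chunk
 * has no memory populated (and no frame table / M2P mapping); a non-zero
 * entry holds the virtual address of that chunk's piece of the frame table.
 */
static unsigned long chunk_table[MAX_CHUNKS];

/* Called from the hotplug path once the new chunk's tables are mapped. */
static void mark_chunk_populated(unsigned long mfn, unsigned long frametable_va)
{
    chunk_table[mfn >> CHUNK_ORDER] = frametable_va;
}

/* mfn_valid() would need this extra lookup to avoid touching the holes. */
static int chunk_mfn_valid(unsigned long mfn, unsigned long max_page)
{
    return mfn < max_page &&
           (mfn >> CHUNK_ORDER) < MAX_CHUNKS &&
           chunk_table[mfn >> CHUNK_ORDER] != 0;
}

The extra table lookup on every mfn_valid() call is what makes me doubt the
efficiency.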
--jyh