On 22/03/2011 12:36, "Keshav Darak" <keshav_darak@xxxxxxxxx> wrote:
> Keir,
> We are aware of it, and we use the 'opt_allow_superpages' boolean flag in
> our implementation too. But when the 'superpages' flag is used in the
> domain configuration file, the entire domain boots on hugepages
> (superpages), and if the memory specified for the domain is not available
> as hugepages, the domain does not boot at all.
> In our implementation we instead give the domain only as many hugepages as
> it actually requires (via the "hugepage_num" option in the config file),
> so the entire domain need not be booted on hugepages.
> This lets a domain boot with 4 KB pages and still use hugepages, which
> greatly reduces the number of hugepages needed just for the domain to boot.
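> For illustration, the intended config-file usage looks roughly like this
> (the values are examples only; "hugepage_num" is the option our patch adds):
>
>     memory       = 1024
>     name         = "guest1"
>     hugepage_num = 16    # up to 16 2MB hugepages guaranteed to the guest
>
> The rest of the domain's memory is still backed by ordinary 4 KB pages.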
Okay, I don't see why that would need further changes in the hypervisor
itself, however.
-- Keir
> --- On Mon, 3/21/11, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
>>
>> From: Keir Fraser <keir.xen@xxxxxxxxx>
>> Subject: Re: [Xen-devel] PATCH: Hugepage support for Domains booting with 4KB
>> pages
>> To: "Keshav Darak" <keshav_darak@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: jeremy@xxxxxxxx
>> Date: Monday, March 21, 2011, 9:31 PM
>>
>> Keshav,
>>
>> There is already optional support for superpage allocations and mappings for
>> PV guests in the hypervisor and toolstack. See the opt_allow_superpages
>> boolean flag in the hypervisor, and the 'superpages' domain config option
>> that can be specified when creating a new domain via xend/xm.
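>>
>> For reference, the existing mechanism is used roughly like this (a sketch
>> from memory, so double-check the exact names against your tree): boot Xen
>> with the command-line option that sets opt_allow_superpages, then add
>>
>>     superpages = 1
>>
>> to the xm/xend domain config, which makes the toolstack allocate and map
>> the whole guest with 2MB superpages.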
>>
>> -- Keir
>>
>> On 21/03/2011 21:01, "Keshav Darak" <keshav_darak@xxxxxxxxx> wrote:
>>
>>> I have corrected a few mistakes in the previously attached xen patch file.
>>> Please review it.
>>>
>>> --- On Sun, 3/20/11, Keshav Darak <keshav_darak@xxxxxxxxx> wrote:
>>>>
>>>> From: Keshav Darak <keshav_darak@xxxxxxxxx>
>>>> Subject: [Xen-devel] PATCH: Hugepage support for Domains booting with 4KB
>>>> pages
>>>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>>>> Cc: jeremy@xxxxxxxx, keir@xxxxxxx
>>>> Date: Sunday, March 20, 2011, 10:34 PM
>>>>
>>>> We have implemented hugepage support for guests in the following manner.
>>>>
>>>> In our implementation we added a parameter, hugepage_num, which is
>>>> specified in the config file of the DomU. It is the number of hugepages
>>>> the guest is guaranteed to receive whenever the kernel asks for them,
>>>> either via its boot-time parameter or by reserving them after boot (e.g.
>>>> echo XX > /proc/sys/vm/nr_hugepages). During creation of the domain we
>>>> reserve MFNs for these hugepages and store them in a list whose list head
>>>> lives in the domain structure as "hugepage_list". While the domain is
>>>> booting, the memory seen by the kernel is the allocated memory less the
>>>> amount reserved for hugepages. The function reserve_hugepage_range is
>>>> called as an initcall; before it runs, xen_extra_mem_start points to this
>>>> apparent end of memory. In this function we reserve the PFN range for the
>>>> hugepages the kernel will later allocate, by incrementing
>>>> xen_extra_mem_start, and we keep these PFNs as pages on "xen_hugepfn_list"
>>>> in the kernel.
>>>>
>>>> Before the kernel requests any hugepages, it makes a HYPERVISOR_memory_op
>>>> hypercall to get the count of hugepages allocated to it and reserves the
>>>> PFN range accordingly. Then, whenever the kernel requests a hugepage, it
>>>> makes another HYPERVISOR_memory_op hypercall to obtain a preallocated
>>>> hugepage and sets up the corresponding p2m mapping on both sides (Xen as
>>>> well as the kernel).
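>>>>
>>>> A simplified sketch of the kernel-side sequence (XENMEM_get_hugepage_count,
>>>> XENMEM_get_hugepage, struct xen_get_hugepage and the variable names below
>>>> are placeholders for what our patch adds; HYPERVISOR_memory_op and
>>>> set_phys_to_machine are the existing interfaces):
>>>>
>>>>     unsigned long pfn, count, i, j;
>>>>
>>>>     /* Ask Xen how many hugepages were preallocated for this domain and
>>>>      * walk the matching guest PFN range (kept on xen_hugepfn_list). */
>>>>     count = HYPERVISOR_memory_op(XENMEM_get_hugepage_count, NULL);
>>>>     pfn = first_reserved_hugepage_pfn;   /* from xen_hugepfn_list */
>>>>
>>>>     for (i = 0; i < count; i++) {
>>>>         struct xen_get_hugepage op = { .domid = DOMID_SELF };
>>>>
>>>>         if (HYPERVISOR_memory_op(XENMEM_get_hugepage, &op))
>>>>             break;
>>>>
>>>>         /* Establish the guest-side p2m entries for this 2MB range; Xen
>>>>          * installs its half of the mapping when handing the page out. */
>>>>         for (j = 0; j < HPAGE_SIZE / PAGE_SIZE; j++)
>>>>             set_phys_to_machine(pfn + j, op.first_mfn + j);
>>>>         pfn += HPAGE_SIZE / PAGE_SIZE;
>>>>     }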
>>>>
>>>> The approach is explained in more detail in the attached presentation.
>>>>
>>>> --
>>>> Keshav Darak
>>>> Kaustubh Kabra
>>>> Ashwin Vasani
>>>> Aditya Gadre
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel