Current Xen supports 2MB super pages for NPT/EPT. The
attached patches extend this feature to support 1GB pages. The PoD (populate-on-demand)
code introduced by George Dunlap makes P2M modification harder, so I tried to
preserve the existing PoD design by introducing a 1GB PoD cache list.
Note that 1GB PoD support can be dropped if we do not care about 1GB pages
when PoD is enabled. In that case, we can simply split each 1GB PDPE into 512
2MB PDE entries and grab pages from the PoD super-page list, which would make
1gb_p2m_pod.patch largely unnecessary.
Any comments or suggestions on the design are appreciated.
The following is a description of each patch:
=== 1gb_tools.patch ===
Extends the existing setup_guest() function. It tries to
allocate 1GB pages whenever possible; if that fails, it falls back to 2MB, and
if both fail, 4KB pages are used.
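The fallback can be sketched roughly as follows; `alloc_with_fallback` and `mock_alloc` are illustrative names for this sketch, not the actual tools code, and the order values (18 = 1GB, 9 = 2MB, 0 = 4KB, in 4KB frames) follow x86 page sizes:

```c
#include <stddef.h>

/* Stand-in for the real hypercall-backed allocator, which may fail
 * for large orders when no contiguous memory is available. */
typedef int (*alloc_fn_t)(unsigned int order);

/* Try 1GB, then 2MB, then 4KB; return the order that succeeded,
 * or -1 if even a 4KB allocation fails. */
static int alloc_with_fallback(alloc_fn_t alloc_fn)
{
    static const unsigned int orders[] = { 18, 9, 0 };
    size_t i;

    for (i = 0; i < sizeof(orders) / sizeof(orders[0]); i++)
        if (alloc_fn(orders[i]))
            return (int)orders[i];
    return -1;
}

/* Example allocator for which only 2MB (order 9) and 4KB succeed. */
static int mock_alloc(unsigned int order)
{
    return order <= 9;
}
```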
=== 1gb_p2m.patch ===
Check the PSE bit of the L3 page table entry. If a 1GB page
is found (PSE=1), split it into 512 2MB pages.
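The split can be sketched as computing the 512 2MB base mfns from the 1GB base; this is an illustrative model, not the actual p2m code:

```c
#include <stdint.h>

#define L2_PER_L3     512  /* 2MB entries covered by one 1GB entry */
#define PAGES_PER_2MB 512  /* 4KB frames per 2MB page */

/* A 1GB p2m entry with PSE=1 maps mfns [base, base + 512*512).
 * Splitting yields 512 2MB entries whose base mfns advance by
 * 512 frames each. */
static void split_1gb_to_2mb(uint64_t base_mfn, uint64_t l2_mfns[L2_PER_L3])
{
    int i;

    for (i = 0; i < L2_PER_L3; i++)
        l2_mfns[i] = base_mfn + (uint64_t)i * PAGES_PER_2MB;
}
```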
Set the PSE bit of the L3 P2M entry when page order == 18 (1GB).
Add support for the 1GB case when doing gfn-to-mfn translation.
When the L3 entry is marked as POPULATE_ON_DEMAND, we call
p2m_pod_demand_populate(). Otherwise, we do the regular address translation.
This is similar to p2m_gfn_to_mfn(). When the L3 entry is marked
as POPULATE_ON_DEMAND, it demands a populate using p2m_pod_demand_populate().
Otherwise, it does a normal translation that takes 1GB pages into account.
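The 1GB translation case boils down to indexing into the superpage with the low bits of the gfn. A minimal sketch, assuming the usual x86 layout (2^18 4KB frames per 1GB page); `gfn_to_mfn_1gb` is a hypothetical name:

```c
#include <stdint.h>

/* When the L3 entry maps a 1GB superpage, the mfn is the superpage
 * base plus the gfn's offset within the 1GB region. */
static uint64_t gfn_to_mfn_1gb(uint64_t l3_base_mfn, uint64_t gfn)
{
    const uint64_t mask_1gb = (1ULL << 18) - 1; /* 4KB frames per 1GB page */

    return l3_base_mfn + (gfn & mask_1gb);
}
```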
Request a 1GB page.
Support 1GB pages while auditing the p2m table.
Deal with 1GB pages when changing the global page type.
=== 1gb_p2m_pod.patch ===
Minor changes to deal with PoD. The super page cache list is
separated into 2MB and 1GB lists, and the last swept gpfn is recorded
separately for 2MB and 1GB.
Check the page order and add 1GB super pages to the PoD 1GB cache list.
Grab a page from the cache list. If the 2MB PoD list is empty,
a 1GB page is broken into 512 2MB pages; similarly, 4KB pages can be obtained
by breaking super pages. The breaking order is 2MB first, then 1GB.
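The breaking order can be modeled with simple counters standing in for the real page lists; `pod_cache_get_4k` is an illustrative sketch, not p2m_pod_cache_get() itself:

```c
/* Counts model the three PoD cache lists. */
struct pod_cache {
    long count_4k, count_2m, count_1g;
};

/* Grab one 4KB page: take from the 4KB list if possible; otherwise
 * break one 2MB page into 512 4KB pages, breaking a 1GB page into
 * 512 2MB pages first if the 2MB list is also empty.
 * Returns 1 on success, 0 if every list is empty. */
static int pod_cache_get_4k(struct pod_cache *c)
{
    if (c->count_4k == 0 && c->count_2m == 0 && c->count_1g > 0) {
        c->count_1g--;           /* break 1GB -> 512 x 2MB */
        c->count_2m += 512;
    }
    if (c->count_4k == 0 && c->count_2m > 0) {
        c->count_2m--;           /* break 2MB -> 512 x 4KB */
        c->count_4k += 512;
    }
    if (c->count_4k == 0)
        return 0;                /* all lists empty */
    c->count_4k--;
    return 1;
}
```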
This function sets the PoD cache size. To increase the PoD
target, we first try to allocate 1GB pages from the Xen domheap. If this fails,
we try 2MB; if both fail, we fall back to 4KB, which is guaranteed to work.
To decrease the target, we use a similar approach: first try
to free pages from the 1GB PoD cache list; if that fails, try the 2MB list,
and then the 4KB list.
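The shrink direction can be sketched the same way, freeing from the largest list first; `pod_shrink` and the counter struct are illustrative, with sizes counted in 4KB frames:

```c
/* Counts model the 1GB, 2MB, and 4KB PoD cache lists. */
struct pod_lists {
    long n_1g, n_2m, n_4k;
};

/* Free up to 'pages' 4KB frames worth of cache, largest pages first;
 * returns the number of frames actually freed. */
static long pod_shrink(struct pod_lists *l, long pages)
{
    long freed = 0;

    /* 1GB pages first (512*512 frames each) */
    while (freed + 512L * 512 <= pages && l->n_1g > 0) {
        l->n_1g--;
        freed += 512L * 512;
    }
    /* then 2MB pages (512 frames each) */
    while (freed + 512 <= pages && l->n_2m > 0) {
        l->n_2m--;
        freed += 512;
    }
    /* finally single 4KB frames */
    while (freed < pages && l->n_4k > 0) {
        l->n_4k--;
        freed += 1;
    }
    return freed;
}
```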
This adds a new function to zero-check a 1GB page, similar to
p2m_pod_zero_check_superpage_2mb().
We add a new function to sweep 1GB pages from guest memory,
working the same way as the existing 2MB sweep.
The trick in this function is to remap and retry when
p2m_pod_cache_get() fails. On failure, the p2m table entry is split into
smaller entries (e.g. 1GB ==> 2MB, or 2MB ==> 4KB), which
guarantees that populate demands always succeed.
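The retry loop can be sketched as follows; `demand_populate`, `cache_get_t`, and `empty_super_cache` are illustrative stand-ins for the real p2m_pod_demand_populate() path, with orders 18/9/0 for 1GB/2MB/4KB:

```c
/* Stand-in for p2m_pod_cache_get(): may fail for large orders. */
typedef int (*cache_get_t)(unsigned int order);

/* If the cache cannot supply a page of the demanded order, split the
 * p2m entry to the next smaller order (1GB -> 2MB -> 4KB) and retry.
 * Since order-0 requests always succeed per the description above, the
 * demand eventually completes. Returns the order that succeeded. */
static int demand_populate(cache_get_t cache_get, unsigned int order)
{
    for (;;) {
        if (cache_get(order))
            return (int)order;          /* populate succeeded */
        if (order == 0)
            return -1;                  /* should not happen by design */
        order = (order == 18) ? 9 : 0;  /* split the entry, then retry */
    }
}

/* Example cache for which only 4KB requests succeed. */
static int empty_super_cache(unsigned int order)
{
    return order == 0;
}
```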