On Mon, 2011-03-14 at 16:58 +0000, Jan Beulich wrote:
> >>> On 14.03.11 at 17:33, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> > On Mon, 2011-03-14 at 16:22 +0000, Jan Beulich wrote:
> >> >>> On 14.03.11 at 17:03, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> >> > On Mon, 2011-03-14 at 15:55 +0000, Jan Beulich wrote:
> >> >> >>> On 14.03.11 at 16:19, Gianni Tedesco <gianni.tedesco@xxxxxxxxxx>
> >> >> >>> wrote:
> >> >> >
> >> >> > This permits suspend/resume to work with 32bit dom0/tools. AFAICT the
> >> >> > limit to MACH2PHYS_COMPAT_NR_ENTRIES is redundant since that refers to
> >> >> > a limit in 32bit guest compat mappings under 64bit hypervisors, not
> >> >> > userspace where there may be gigabytes of useful virtual space
> >> >> > available for this.
> >> >> >
> >> >> > Suggested-by: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
> >> >> > Signed-off-by: Gianni Tedesco <gianni.tedesco@xxxxxxxxxx>
> >> >> >
> >> >> > diff -r 8b5cbccbc654 xen/arch/x86/x86_64/compat/mm.c
> >> >> > --- a/xen/arch/x86/x86_64/compat/mm.c Mon Mar 14 14:59:27 2011 +0000
> >> >> > +++ b/xen/arch/x86/x86_64/compat/mm.c Mon Mar 14 15:17:59 2011 +0000
> >> >> > @@ -161,9 +161,7 @@ int compat_arch_memory_op(int op, XEN_GU
> >> >> > if ( copy_from_guest(&xmml, arg, 1) )
> >> >> > return -EFAULT;
> >> >> >
> >> >> > - limit = (unsigned long)(compat_machine_to_phys_mapping +
> >> >> > - min_t(unsigned long, max_page,
> >> >> > - MACH2PHYS_COMPAT_NR_ENTRIES(current->domain)));
> >> >> > + limit = (unsigned long)(compat_machine_to_phys_mapping + max_page);
> >> >>
> >> >> While doing this shouldn't hurt (except slightly for performance of
> >> >> the hypercall), I don't see why it's useful: For slots past
> >> >> MACH2PHYS_COMPAT_NR_ENTRIES(current->domain) you
> >> >> wouldn't read non-null page table entries anyway (up to
> >> >> RDWR_COMPAT_MPT_VIRT_END), so I don't see why the tools
> >> >> couldn't equally well do with what we have currently (after all
> >> >> they get told how many slots were filled).
> >> >
> >> > In order to be able to migrate any guest the tools in domain 0 need to
> >> > see the entirety of the host M2P, not just the subset which the kernel sees
> >> > mapped into its hypervisor hole (which is what
> >> > MACH2PHYS_COMPAT_NR_ENTRIES represents).
> >> >
> >> > The hypercall reads from the global compat M2P mapping, not the guest
> >> > kernel mapping of it, so it should read valid entries all the way up to
> >> > RDWR_COMPAT_MPT_VIRT_END, AFAICT.
> >>
> >> But RDWR_COMPAT_MPT_VIRT_END still doesn't necessarily
> >> cover all of the memory the machine may have (after all, the
> >> range is much smaller than the RDWR_MPT_VIRT_{START,END} range).
> >
> > It's 1GB which is enough to cover 1TB of host memory, which AFAIK is all
> > we support these days. It certainly buys us time compared with currently
> > failing at 160GB.
>
> 1Tb of *contiguous* host memory. And that's certainly not the largest
> configuration Xen has been run on, and Xen itself is set up to handle
> 5Tb, which I'm already seeing exceeded on experimental(?) systems...
Cool, I stand corrected.
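(For the record, the back-of-the-envelope arithmetic behind my 1TB figure --
just a throwaway sanity check, nothing from the tree, assuming 4-byte compat
M2P entries and 4k frames:

    /* Rough sizing of a 1GiB compat M2P window. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long window  = 1ULL << 30;      /* 1GiB of compat M2P VA      */
        unsigned long long entries = window / 4;      /* one 32-bit MFN per entry   */
        unsigned long long covered = entries << 12;   /* each entry maps a 4k frame */

        printf("%llu entries -> %llu GiB covered\n", entries, covered >> 30);
        /* 268435456 entries -> 1024 GiB, i.e. 1TiB; 5TiB of RAM would need
         * a ~5GiB table, which the compat range obviously can't hold. */
        return 0;
    }

so contiguous 5Tb hosts do indeed blow straight past it.)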
> And while I agree that failing at 1Tb is better than failing at 160Gb,
> I favor fixing this once and completely over papering over the problem
> a little now, only to have to debug the same issue again later.
Unfortunately it's a bit late to be doing that for 4.1.0 :-(
> >> If that's the goal, then the patch as presented isn't suitable,
> >> as there's not even a compat table set up for all of the
> >> memory.
> >
> > paging_init seems to do the right thing and setup the compat M2P up to a
> > maximum of RDWR_COMPAT_MPT_VIRT_END.
>
> With 1Gb being the theoretical limit of what a 32-bit guest can
> see and access, that's all a guest could ever sensibly ask for (a
> [hypothetical] domain could ask for a larger-than-default hole,
> with more of the table mapped in).
The size of a domain's hypervisor hole and how much of the M2P it can
map via XENMEM_machphys_mfn_list have no relationship, though -- that's
the point of this change.
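To make that concrete: for a migration the (32-bit) toolstack wants to map
the *host* M2P in its own address space, by asking Xen for the MFNs of the
2MB extents backing the table and then foreign-mapping them. Hand-wavy
sketch only -- the two helpers below are placeholders, not the real
libxc/privcmd entry points:

    #include <stdint.h>
    #include <stdlib.h>

    #define M2P_CHUNK_SHIFT 21                 /* each returned extent is 2MB */

    typedef uint64_t xen_pfn_t;                /* local stand-in for the real type */

    /* Placeholder for XENMEM_machphys_mfn_list: fills mfns[] with the base
     * MFNs of the 2MB extents backing the compat M2P and returns how many
     * slots it actually filled. */
    extern unsigned int hcall_machphys_mfn_list(unsigned int max, xen_pfn_t *mfns);

    /* Placeholder for the privcmd foreign-mapping path. */
    extern void *map_foreign_frames(const xen_pfn_t *mfns, unsigned int nr);

    static uint32_t *map_host_m2p(unsigned long max_mfn)
    {
        /* Enough 2MB extents to cover max_mfn 4-byte entries. */
        unsigned int nr = (max_mfn * sizeof(uint32_t) +
                           (1UL << M2P_CHUNK_SHIFT) - 1) >> M2P_CHUNK_SHIFT;
        xen_pfn_t *mfns = calloc(nr, sizeof(*mfns));
        unsigned int filled = hcall_machphys_mfn_list(nr, mfns);
        void *m2p = map_foreign_frames(mfns, filled);

        /* Without the patch 'filled' stops at the caller's compat hole
         * coverage; with it the whole host table (up to the compat range)
         * is reachable, which is what the save side needs. */
        free(mfns);
        return m2p;
    }

The kernel's own hole size never enters into it.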
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel