Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure for hypercall safe data buffers
On Tue, 2010-09-07 at 18:23 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure
> for hypercall safe data buffers"):
> > It's not clear what phase 2 actually is (although phase 3 is clearly
> > profit), I don't think any existing syscalls do what we need. mlock
> > (avoiding the stack) gets pretty close and so far the issues with mlock
> > seem to have been more potential than hurting us in practice, but it
> > pays to be prepared e.g. for more aggressive page migration/coalescing
> > in the future, I think.
>
> Ian and I discussed this extensively on IRC, during which conversation
> I became convinced that mlock() must do what we want. Having read the
> code in the kernel I'm now not so sure.
After our discussion I had another conversation (I forget where and with
whom) which made me pretty sure we were wrong as well.
> The ordinary userspace access functions are all written to cope with
> pagefaults and retry the access. So userspace addresses are not in
> general valid in kernel mode even if you've called functions to try to
> test them.
Correct. The difference between a normal userspace access function and a
hypercall is that it is possible to inject (and handle) a page fault in
the former case, whereas we cannot inject a page fault into a VCPU while
it is processing a hypercall.
(Maybe it is possible in principle to make all hypercalls restartable,
such that we could return to the guest in order to inject page faults,
but that's not the case right now and I suspect it would be an enormous
amount of work to make it so.)
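
To make the pattern at issue concrete, here is a minimal sketch (not
actual libxc code; do_privcmd_hypercall() is a hypothetical stand-in
for the ioctl on the privcmd device that libxc really uses):

#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical stand-in for issuing the hypercall; in reality this
 * is an ioctl on /dev/xen/privcmd. */
static int do_privcmd_hypercall(void *arg)
{
    (void)arg;
    return 0;
}

int fragile_hypercall(void *arg, size_t len)
{
    int ret;

    if (mlock(arg, len) != 0)       /* guarantees residency only */
        return -1;

    /* If any page of arg is unmapped or write-protected between
     * here and Xen's access (NUMA migration, CoW, compaction), the
     * guest cannot take a page fault mid-hypercall and the call
     * fails with a spurious -EFAULT. */
    ret = do_privcmd_hypercall(arg);

    munlock(arg, len);
    return ret;
}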
> It's not clear what mlock prevents; does it prevent NUMA
> page migration ? If not then I think indeed the page could be made
> not present by one VCPU editing the page tables while another VCPU is
> entering the hypercall, so that the 2nd VCPU will get a spurious
> EFAULT.
I think you are right: these kinds of page faults are possible.
It seems that mlock is only specified to prevent major page faults (i.e.
those requiring I/O to service) but specifies nothing regarding minor
page faults. It ensures that the data is resident in RAM, but not
necessarily that it is continuously mapped into your virtual address
space, nor that it remains writable.
Minor page faults could be caused by NUMA migration (as you say), by CoW
mappings, or by the kernel trying to consolidate free memory in order to
satisfy a higher-order allocation (Linux has recently gained exactly
this functionality in the form of memory compaction, I believe). I'm
sure there are a host of other potential causes too...
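
As a hedged, Linux-specific illustration of the CoW case: after a
fork() the parent's anonymous pages are write-protected for
copy-on-write even if they are mlocked, so the parent's next write
takes a minor fault despite the page never having left RAM. A small
demonstration, assuming getrusage()'s ru_minflt counter is precise
enough to see the single fault:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t len = 4096;
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    struct rusage before, after;

    if (buf == MAP_FAILED || mlock(buf, len) != 0)
        return 1;
    memset(buf, 0, len);     /* fault the page in while locked */

    if (fork() == 0)
        _exit(0);            /* child exits immediately */
    wait(NULL);

    getrusage(RUSAGE_SELF, &before);
    buf[0] = 1;              /* CoW break: minor fault, mlock or not */
    getrusage(RUSAGE_SELF, &after);

    printf("minor faults taken by the write: %ld\n",
           after.ru_minflt - before.ru_minflt);
    return 0;
}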
It's possible that historically most of these potential minor fault
causes were either not implemented in the kernels we were using for
domain 0 (e.g. consolidation is pretty new) or not likely to hit in
practice (e.g. perhaps libxc's usage patterns make it likely that any
CoW mappings are already dealt with by the time the hypercall happens).
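
For instance, the sort of idiom which would make that true is touching
every page right after mlock() and before the hypercall, so that any
CoW fault is taken in userspace where it can be serviced (touch_pages()
is a hypothetical helper, not something libxc actually has):

#include <stddef.h>
#include <unistd.h>

/* Hypothetical helper: force each page of a locked buffer to be
 * mapped and writable right now, taking any CoW fault in userspace
 * rather than during the hypercall. This narrows the race but does
 * not close it: the kernel may still unmap the page afterwards. */
static void touch_pages(void *buf, size_t len)
{
    size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
    volatile char *p = buf;
    size_t off;

    for (off = 0; off < len; off += pgsz)
        p[off] = p[off];     /* read and write back one byte */
}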
Going forward I think it's likely that NUMA migration and memory
consolidation and the like will become more widespread.
> OTOH: there must be other things that work like Xen - what about user
> mode device drivers of various kinds ? Do X servers not mlock memory
> and expect to be able to tell the video card to DMA to it ? etc.
DMA would require physical (or, more strictly, DMA) addresses rather
than virtual addresses, so locking the page into a particular virtual
address space doesn't matter all that much from a DMA point of view. I
don't think pure user-mode device drivers can do DMA; there is always
some sort of kernel stub required.
In any case, the kernel has been moving away from privileged X servers
with direct access to hardware in favour of KMS for a while now, so I'm
not sure an appeal to any similarity we may have with that case helps
us much.
> I think if linux-kernel think that people haven't assumed that mlock()
> actually pins the page, they're mistaken - and it's likely to be not
> just us.
Unfortunately, I think we're reasonably unique.
Ian.