>>> On 11.03.11 at 10:45, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
> On 11/03/2011 09:25, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
>
>>>>> On 09.03.11 at 12:07, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
>>> It seems unfortunate to propagate this to guests. Perhaps we should be
>>> making a memory pool for Xen's 1:1 mappings, big enough to allow a 4kB
>>> mapping of every page of RAM in the system, and allocate/free pagetables to
>>> that pool? The overhead of this would be no more than 0.2% of system memory,
>>> which seems reasonable to avoid an error case that is surely hard for a
>>> guest to react to or fix.
>>
>> Before starting to look into eventual Linux side changes - do you
>> then have plans to go that pool route (which would make guest
>> side recovery attempts pointless)?
>
> Not really. I was thinking about having a Linux-style mempool for making
> allocations more likely to succeed, but it's all a bit ugly really. It'll be
> interesting to see what you can do Linux-side, and whether it can pass
> muster for the Linux maintainers. You might at least be able to make the io
> remappings from device drivers failable (and maybe they are already).
ioremap() in general can fail, but a failure to write the page
table entries gets propagated to the caller only on legacy
kernels, iirc (due to the pv-ops accessors lacking a return
value).
The problem at hand, however, is with the vm_insert_...()
functions, which use set_pte_at(), which again has no return
value, so it will need to be the accessors themselves that
(a) never utilize the writeable page tables feature on any path
that can alter cache attributes, and
(b) handle -ENOMEM from HYPERVISOR_update_va_mapping()
and HYPERVISOR_mmu_update() (without knowing much about
the context they're being called in).
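For concreteness, (b) would have to look roughly like the sketch
below - not the in-tree code, just the shape of an accessor that
has to absorb the hypercall error itself because set_pte_at()
gives it no way to report one; xen_recover_from_enomem() is a
hypothetical placeholder for whatever recovery policy gets chosen:

/*
 * Sketch only, not the actual implementation: a pv-ops
 * set_pte_at()-style accessor cannot return an error to callers
 * such as the vm_insert_...() paths, so an -ENOMEM from the
 * hypercall has to be dealt with right here.
 */
#include <linux/mm.h>
#include <asm/pgtable.h>
#include <asm/xen/hypercall.h>

static void sketch_set_pte_at(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep, pte_t pteval)
{
	int rc = HYPERVISOR_update_va_mapping(addr, pteval, 0);

	if (rc == -ENOMEM)
		/*
		 * Hypothetical helper: recover (or fail in a
		 * controlled way) without being able to tell the
		 * caller anything.
		 */
		xen_recover_from_enomem(mm, addr, ptep, pteval);
}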
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel