Re: [Xen-devel] big local array in routine in hypervisor
On 27/01/2009 08:58, "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx> wrote:
>> So you want us to wastefully pre-reserve some space for you, but call it the
>> 'stack' to assuage your guilt? ;-) It's common practice not to have very
>> large stacks in kernels, since pre-reservation is wasteful, dynamic growth
>> is not necessarily feasible to implement, and kernel code doesn't tend to
>> need lots of local storage or recursion. In Linux you'd be limited to 4kB,
>> and there's a lot more code there living under that stricter regime.
>
> I noticed that the p2m populate-on-demand code also allocates a lot (10kB) of
> stack (in fact this is a bug since the stack is only 8kB!). If these new stack
> users aren't easy to implement in other ways, and definitely aren't reentrant
> and don't execute in interrupt context (so we know there's only one such big
> allocation at a time), we could perhaps double the primary stack size to 16kB,
> or even to 32kB.
>
> It's a slippery slope though, determining how much stack is enough and how big
> a local array is too big. I generally end up having to check and fix big stack
> frames from time to time, and I'm not sure that even doubling the stack a few
> times would avoid that job!
George,

In the PoD case I think only p2m_pod_zero_check_superpage() needs to be
changed. I'm not actually clear on why a separate sweep needs to be done for
guest superpage mappings; couldn't they be handled by p2m_pod_zero_check()?
Can you not PoD-reclaim a subset of the MFNs backing a guest super-GFN?

In any case it could map the individual pages one after the other and check
them. That would reduce pressure on map_domain_page() too (currently it could
probably fail and crash the system on x86_32), and then the 512-entry arrays
would not be needed.
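
Roughly what I mean, as a sketch only (superpage_is_zero() is a made-up name
and I'm assuming the usual map_domain_page()/unmap_domain_page() interface
with an unsigned long MFN, not the actual PoD code):

/*
 * Sketch: iterate over the 512 constituent 4kB pages of a 2MB superpage,
 * mapping and checking one page at a time, so no 512-entry local array is
 * needed and at most one map_domain_page() mapping is live at any moment.
 */
static int superpage_is_zero(unsigned long superpage_mfn)
{
    int i;
    unsigned int j;

    for ( i = 0; i < 512; i++ )
    {
        unsigned long *p = map_domain_page(superpage_mfn + i);
        int zero = 1;

        for ( j = 0; j < PAGE_SIZE / sizeof(*p); j++ )
            if ( p[j] != 0 )
            {
                zero = 0;
                break;
            }

        unmap_domain_page(p);   /* never more than one page mapped */

        if ( !zero )
            return 0;           /* found non-zero data; give up early */
    }

    return 1;                   /* whole superpage is zero */
}

The early bail-out also means the common (non-zero) case touches far fewer
mappings than building the whole array up front would.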
-- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel