[Xen-changelog] [xen-3.4-testing] Make sure the minimum shadow allocation is never zero.
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1263909529 0
# Node ID b1d32e0ced2d9a967acfa477b6183f63ae33ee22
# Parent 3c2a1d2c4111a412a6832c4aa4591aa9e2492e78
Make sure the minimum shadow allocation is never zero.
Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
xen-unstable changeset: 20808:db8a882f5515
xen-unstable date: Thu Jan 14 14:11:25 2010 +0000
---
xen/arch/x86/mm/shadow/common.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff -r 3c2a1d2c4111 -r b1d32e0ced2d xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c Tue Jan 19 13:57:53 2010 +0000
+++ b/xen/arch/x86/mm/shadow/common.c Tue Jan 19 13:58:49 2010 +0000
@@ -1237,10 +1237,11 @@ int shadow_cmpxchg_guest_entry(struct vc
* instruction, we must be able to map a large number (about thirty) VAs
* at the same time, which means that to guarantee progress, we must
* allow for more than ninety allocated pages per vcpu. We round that
- * up to 128 pages, or half a megabyte per vcpu. */
+ * up to 128 pages, or half a megabyte per vcpu, and add 1 more vcpu's
+ * worth to make sure we never return zero. */
static unsigned int shadow_min_acceptable_pages(struct domain *d)
{
- u32 vcpu_count = 0;
+ u32 vcpu_count = 1;
struct vcpu *v;
for_each_vcpu(d, v)
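
For illustration, here is a minimal standalone sketch of the allocation floor the
comment above describes, assuming the remainder of the function (not shown in the
truncated hunk) simply counts vcpus and multiplies by 128; the function and value
names below are hypothetical, not taken from the Xen tree:

/* Standalone sketch of the minimum shadow allocation described in the
 * comment above.  The 128-pages-per-vcpu multiplier comes from that
 * comment; reducing the vcpu loop to a plain count is an assumption. */
#include <stdio.h>

static unsigned int min_shadow_pages(unsigned int vcpu_count)
{
    /* Start from 1 rather than 0 (the fix above), so a domain that has
     * not yet created any vcpus still gets a non-zero floor. */
    return (1 + vcpu_count) * 128;
}

int main(void)
{
    /* Before the fix a domain with no vcpus had a floor of 0 pages;
     * with one extra vcpu's worth it is 128 pages (half a megabyte). */
    printf("0 vcpus -> %u pages\n", min_shadow_pages(0));  /* 128 */
    printf("4 vcpus -> %u pages\n", min_shadow_pages(4));  /* 640 */
    return 0;
}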