Re: [Xen-devel] Latency spike during page_scrub_softirq
Keir Fraser wrote:
> On 02/07/2009 15:47, "Chris Lalancette" <clalance@xxxxxxxxxx> wrote:
>
>> There are a couple of solutions that I can think of:
>> 1) Just clear the pages inside free_domheap_pages(). I tried this with a
>> 64GB guest as mentioned above, and I didn't see any ill effects from doing
>> so. It seems like this might actually be a valid way to go, although then
>> a single CPU is doing all of the work of freeing the pages (might be a
>> problem on UP systems).
>
> Now that domain destruction is preemptible all the way back up to libxc, I
> think the page-scrub queue is not so much required. And it seems it never
> worked very well anyway! I will remove it.
>
> This may make 'xm destroy' operations take a while, but actually this may be
> more sensibly handled by punting the destroy hypercall into another thread
> at dom0 userspace level, rather than doing the shonky 'scheduling' we
> attempt in Xen itself right now.
Yep, agreed, and I see you've committed it as c/s 19886. Except...
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
...
@@ -1247,10 +1220,7 @@ void free_domheap_pages(struct page_info
for ( i = 0; i < (1 << order); i++ )
{
page_set_owner(&pg[i], NULL);
- spin_lock(&page_scrub_lock);
- page_list_add(&pg[i], &page_scrub_list);
- scrub_pages++;
- spin_unlock(&page_scrub_lock);
+ scrub_one_page(&pg[i]);
}
}
}
This hunk actually needs to free the page as well, with free_heap_pages().
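Something along these lines is what I have in mind (just a sketch against the
same loop context as the hunk above, not a tested patch):

    for ( i = 0; i < (1 << order); i++ )
    {
        page_set_owner(&pg[i], NULL);
        scrub_one_page(&pg[i]);
    }
    free_heap_pages(pg, order);   /* the call the committed hunk is missing */

Otherwise the scrubbed pages of a dying domain never make it back onto the
heap.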
--
Chris Lalancette