Hi,
I tested the patch. With the patch applied, guest domains no longer
existed in the hypervisor after they were destroyed with the xm destroy
command. Without the patch, guest domains still existed in the
hypervisor afterwards, as shown below:
(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to
DOM0)
(XEN) 'q' pressed -> dumping domain info (now=0x69:8D16B138)
<<snip>>
(XEN) General information for domain 1:
(XEN) refcnt=1 nr_pages=-5 xenheap_pages=5 dirty_cpus={}
(XEN) handle=0c83ca66-94a2-9d78-fbe2-dc363be859eb vm_assist=00000000
(XEN) Rangesets belonging to domain 1:
(XEN) Interrupts { }
(XEN) I/O Memory { }
(XEN) I/O Ports { }
(XEN) dump_pageframe_info not implemented
(XEN) VCPU information and callbacks for domain 1:
(XEN) VCPU0: CPU15 [has=F] flags=0 upcall_pend = 00, upcall_mask = 00
dirty_cpus={} cpu_affinity={0-63}
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU1: CPU5 [has=F] flags=0 upcall_pend = 00, upcall_mask = 00
dirty_cpus={} cpu_affinity={0-63}
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU2: CPU10 [has=F] flags=0 upcall_pend = 00, upcall_mask = 00
dirty_cpus={} cpu_affinity={0-63}
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
(XEN) VCPU3: CPU2 [has=F] flags=0 upcall_pend = 00, upcall_mask = 00
dirty_cpus={} cpu_affinity={0-63}
(XEN) No periodic timer
(XEN) Notifying guest (virq 1, port 0, stat 0/0/0)
Best regards,
Kan
Thu, 31 Jan 2008 14:01:04 +0900, Isaku Yamahata wrote:
>Fix the domain reference counting bug caused by pages allocated from the
>domheap for the shared page and the hyperregister page.
>Calling share_xen_page_with_guest() with a domain heap page is wrong: it
>increments domain->xenheap_pages, which is never decremented. Thus the
>domain refcount never drops to 0, so destroy_domain() is never called.
>This patch makes the allocation come from the xenheap again.
>
>The other way to fix it would be to work around domain->xenheap_pages and
>the page reference count somehow, but that would be very ugly. The right
>way would be to enhance the Xen page allocator to be aware of this kind of
>page in addition to the xenheap and domheap, but we don't want to touch
>the common code.
>And given that the xenheap limitation on xen/ia64 is much relaxed,
>it is probably unnecessary to be so careful about not allocating those
>pages from the xenheap.
>If it ever becomes necessary to allocate those pages from the domheap,
>we can address it at that time. For now, just allocate them from the
>xenheap.
>
>--
>yamahata
>
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel