WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-devel] suspending a domain in the ngio world

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] suspending a domain in the ngio world
From: Kip Macy <kmacy@xxxxxxxxxxx>
Date: Sat, 15 May 2004 11:10:02 -0700 (PDT)
Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Delivery-date: Sat, 15 May 2004 19:12:25 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: <E1BP3Bv-0006zQ-00@xxxxxxxxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <E1BP3Bv-0006zQ-00@xxxxxxxxxxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
The dd is running in DOM1. The OOM killer is getting run in DOM0.
There is clearly a memory leak in the block I/O path.

DOM0 is curly and DOM1 is xen-vm0.

A large amount of memory has already been leaked:

kmacy@curly cat /proc/meminfo
        total:    used:    free:  shared: buffers:  cached:
Mem:  262565888 205619200 56946688        0 23339008 28123136
==
[root@xen-vm0 ~]$ dd if=/dev/zero of=/tmp/bwout bs=1024k count=256
==
kmacy@curly cat /proc/meminfo
        total:    used:    free:  shared: buffers:  cached:
Mem:  262565888 214687744 47878144        0 23339008 28123136
==
[root@xen-vm0 ~]$ dd if=/dev/zero of=/tmp/bwout count=256 bs=1024k
256+0 records in
256+0 records out
==
kmacy@curly cat /proc/meminfo | head -3
        total:    used:    free:  shared: buffers:  cached:
Mem:  262565888 223727616 38838272        0 23339008 28123136
==
[root@xen-vm0 ~]$ dd if=/dev/zero of=/tmp/bwout count=256 bs=1024k
256+0 records in
256+0 records out
==
kmacy@curly cat /proc/meminfo | head -2
        total:    used:    free:  shared: buffers:  cached:
Mem:  262565888 232873984 29691904        0 23339008 28123136

So roughly 9 MB is leaked per 256 MB dd run, i.e. ~35-40 MB for every 1 GB transferred.
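The leak-rate estimate above can be checked directly from the "used:" figures in the meminfo snapshots. The sketch below hardcodes the four values quoted in this mail (a minimal illustration, not anything from the Xen tree; the 2.4-era /proc/meminfo "Mem:" line reports bytes):

```python
# Sketch: derive the per-GiB leak rate from the "used:" column of the
# 2.4-style /proc/meminfo snapshots quoted in this thread.
# Each dd run between snapshots transfers 256 MiB (bs=1024k count=256).

SNAPSHOTS = [205619200, 214687744, 223727616, 232873984]  # "used:" bytes
TRANSFER_BYTES = 256 * 1024 * 1024  # bytes moved by one dd run

def leak_per_gib(used_before: int, used_after: int,
                 transferred: int = TRANSFER_BYTES) -> float:
    """Bytes of DOM0 memory lost per GiB pushed through the block path."""
    leaked = used_after - used_before
    return leaked * (1 << 30) / transferred

deltas = [leak_per_gib(a, b) for a, b in zip(SNAPSHOTS, SNAPSHOTS[1:])]
avg_mib = sum(deltas) / len(deltas) / (1 << 20)
print(f"average leak: ~{avg_mib:.0f} MiB per GiB transferred")
```

Running this on the three intervals gives an average of about 35 MiB leaked per GiB, consistent with the rough figure above.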

I can give you a stack backtrace of the memory allocation failure in
DOM0 if you like, but as far as I can tell the horse has long since left
the barn at that point.

> This is within DOM1 (i.e., not DOM0) right? If so, I guess that doing
> this 'dd' test within DOM0 doesn't get you similar messages?
>
> This is rather unexpected -- if you could add a stack backtrace to the
> out-of-memory path in the page allocator (page_alloc.c in Xenolinux)
> and post me that with the kernel image (vmlinux) then I'll see what I
> can work out. I guess I haven't tested all that hard so there might be
> a memory leak.

On a side note - I don't need suspend/restore; I just need coredump and,
almost immediately after that, PTRACE_STOP. As long as I can stop the
domain long enough to write out its state, I have what I need.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel