WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

Re: [Xen-devel] memory question

To: Felix Krohn <felix.krohn@xxxxxxx>
Subject: Re: [Xen-devel] memory question
From: Steve Ofsthun <sofsthun@xxxxxxxxxxxxxxx>
Date: Wed, 23 Apr 2008 09:44:16 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20080423105500.GE18021@xxxxxxxxxx>
References: <20080423105500.GE18021@xxxxxxxxxx>
User-agent: Thunderbird 2.0.0.12 (X11/20080226)
Felix Krohn wrote:

If I boot the dom0 limiting the memory to 64M:
        -> 2GB server boots up and works fine
        -> 4GB server panics while booting: "Out of low memory"

limiting to 128M:
        -> 2GB server boots up and works fine
        -> 4GB boots, but I get many segfaults (even more if I run
                mkfs.ext3 on a large partition)

10:44:23: python[2649]: segfault at 00000000005f8920 rip 00000000005f8920 rsp 00000000414007c0 error 15
10:46:31: python[4518]: segfault at 00000000006781f0 rip 00000000006781f0 rsp 00007fffaf0460c0 error 15
10:50:01: grep[4567]: segfault at ffffffff998b94d8 rip 00002aaaaacd81f2 rsp 00007fff3d330840 error 4
10:51:54: python[4572]: segfault at 0000000000000008 rip 0000000000000008 rsp 00007fff2dbc0a28 error 14
10:52:27: rtm[4586]: segfault at 0000000000000000 rip 00002aaaab205890 rsp 00007ffffa453108 error 6
11:06:33: rtm[4970]: segfault at 0000000000000000 rip 0000000000000000 rsp 00007fffae93bc40 error 14
11:07:15: rtm[4976]: segfault at 00000000000000f0 rip 00002aaaaac4b3ed rsp 00007fffad391da0 error 4
11:09:13: rtm[4994]: segfault at 0000000000000008 rip 00002aaaaac5b237 rsp 00007fff23134410 error 4
11:10:14: rtm[4982]: segfault at 000000000000001f rip 00002aaaaac29d73 rsp 00007fff556cde38 error 4
11:10:17: cron[2709]: segfault at 0000000000000000 rip 0000000000000000 rsp 00007fff9d32d5f0 error 14
11:10:31: rtm[4956]: segfault at 0000000000000001 rip 00002aaaaac5bc15 rsp 00007fffd8b9c2a0 error 4
11:10:57: ls[5027]: segfault at 0000000000000000 rip 0000000000000000 rsp 00007fffbfc0a050 error 14
11:10:59: ls[5025]: segfault at 0000000000000000 rip 00002aaaaaaaf8cb rsp 00007fffb447e700 error 4
11:11:10: rtm[4953]: segfault at 0000000000000000 rip 00002aaaaac50070 rsp 00007fff3c5faf10 error 4
11:18:42: fsck.ext3[2361]: segfault at 00002aaaad838010 rip 00002aaaad838010 rsp 00007fff60d5d500 error 15
11:20:05: init[1]: segfault at 00007fffd41f22d8 rip 00007fffd41f22d8 rsp 00007fffd41f22b0 error 15
11:20:37: init[1]: segfault at 00007fffd41f22d8 rip 00007fffd41f22d8 rsp 00007fffd41f22b0 error 15
11:21:50: raid.pl[3141]: segfault at 0000000000a37c58 rip 00002aaaab1fc879 rsp 00007ffffd0b5120 error 4
11:21:52: usage.pl[3143]: segfault at 0000000000a37c58 rip 00002aaaab1fc879 rsp 00007fff54997140 error 4
11:21:52: smart.pl[3140]: segfault at 0000000000a37c58 rip 00002aaaab1fc879 rsp 00007fff549982b0 error 4
11:21:52: hddinfo.pl[3145]: segfault at 0000000000a37c58 rip 00002aaaab1fc879 rsp 00007fffa794b070 error 4
11:21:52: rtm[3139]: segfault at 00000000007149d0 rip 00000000007149d0 rsp 00007fff277a6930 error 15

What can I do to find the source of this behaviour? Is it Linux, or is
it Xen?

Do you have a swap file configured?  (swapon -s)
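
For reference, a quick way to check (the sample output below is only illustrative, not from your machine):

    # List active swap devices; an empty table means no swap is configured.
    $ swapon -s
    Filename        Type       Size     Used  Priority
    /dev/sda2       partition  1048572  0     -1

    # Overall memory picture, including buffers/cache.
    $ free -m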

mkfs.ext3 will generate a significant buffer cache footprint proportional to 
the filesystem size.
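
One way to watch this happen while mkfs.ext3 runs (a sketch; the device is whatever you are formatting):

    # In a second terminal, watch buffer/cache usage grow in /proc/meminfo
    # while mkfs.ext3 writes out the new filesystem.
    $ watch -n 1 'grep -E "^(MemFree|Buffers|Cached)" /proc/meminfo'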

Can you boot the dom0 environment with a native Linux kernel, limiting the 
memory using mem=XXXM on the kernel boot line?  This will confirm whether your 
Linux setup will run in the constrained memory environment you want.
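
For example, with GRUB's menu.lst (the kernel versions, device names and paths below are placeholders, not taken from your setup):

    # Xen boot: cap dom0 memory with dom0_mem on the hypervisor line.
    title Xen (dom0_mem=128M)
            root (hd0,0)
            kernel /boot/xen.gz dom0_mem=128M
            module /boot/vmlinuz-2.6.18-xen root=/dev/sda1 ro console=tty0
            module /boot/initrd-2.6.18-xen.img

    # Native boot with the same memory cap, for comparison.
    title Linux (native, mem=128M)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.18 root=/dev/sda1 ro mem=128M
            initrd /boot/initrd-2.6.18.img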

When you are seeing the segfaults, are you also monitoring the kernel message 
log (dmesg or /var/log/messages)?
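
For example, leave a follow running while reproducing the problem (the log path varies by distribution):

    # Follow the system log live; kernel messages about the faults land here.
    $ tail -f /var/log/messages

    # Or inspect the kernel ring buffer directly after a segfault.
    $ dmesg | tail -n 40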

Steve

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
