WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
[Xen-bugs] [Bug 675] page allocation failures

To: xen-bugs@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-bugs] [Bug 675] page allocation failures
From: bugzilla-daemon@xxxxxxxxxxxxxxxxxxx
Date: Sat, 4 Apr 2009 15:26:25 -0700
Delivery-date: Sat, 04 Apr 2009 15:26:31 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <bug-675-3@xxxxxxxxxxxxxxxxxxxxxxxxxxx/bugzilla/>
List-help: <mailto:xen-bugs-request@lists.xensource.com?subject=help>
List-id: Xen Bugzilla <xen-bugs.lists.xensource.com>
List-post: <mailto:xen-bugs@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=unsubscribe>
Reply-to: bugs@xxxxxxxxxxxxxxxxxx
Sender: xen-bugs-bounces@xxxxxxxxxxxxxxxxxxx
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=675





------- Comment #3 from marc@xxxxxxxxxxxxxxxxxxxx  2009-04-04 15:26 -------
I am also seeing this in the Debian Lenny stock kernel and hypervisor
(3.2.1-amd64):

Apr  4 15:45:12 node00002 kernel: [1095515.054069] __ratelimit: 32 messages
suppressed
Apr  4 15:45:13 node00002 kernel: [1095515.054113] xen-utils-versi: page
allocation failure. order:0, mode:0x20
Apr  4 15:45:13 node00002 kernel: [1095515.054149] Pid: 5548, comm:
xen-utils-versi Not tainted 2.6.26-1-xen-amd64 #1
Apr  4 15:45:13 node00002 kernel: [1095515.054201]
Apr  4 15:45:13 node00002 kernel: [1095515.054202] Call Trace:
Apr  4 15:45:13 node00002 kernel: [1095515.054251]  <IRQ>  [<ffffffff80269875>]
__alloc_pages_internal+0x399/0x3b2
Apr  4 15:45:14 node00002 kernel: [1095515.054309]  [<ffffffffa023ad5a>]
:bridge:br_dev_queue_push_xmit+0x0/0x79
Apr  4 15:45:14 node00002 kernel: [1095515.054349]  [<ffffffff80287514>]
cache_alloc_refill+0x29f/0x55a
Apr  4 15:45:14 node00002 kernel: [1095515.054388]  [<ffffffff80287221>]
kmem_cache_alloc+0x5f/0xb3
Apr  4 15:45:16 node00002 kernel: [1095515.054430]  [<ffffffffa026ec05>]
:nf_conntrack:nf_conntrack_alloc+0xa2/0x110
Apr  4 15:45:21 node00002 kernel: [1095515.054488]  [<ffffffffa026f534>]
:nf_conntrack:nf_conntrack_in+0x1db/0x4fb
Apr  4 15:45:28 node00002 kernel: [1095515.054531]  [<ffffffff803e0130>]
nf_iterate+0x41/0x7d
Apr  4 15:45:33 node00002 kernel: [1095515.054570]  [<ffffffffa023f1d5>]
:bridge:br_nf_pre_routing_finish+0x0/0x29a
Apr  4 15:45:35 node00002 kernel: [1095515.054623]  [<ffffffff803e01c9>]
nf_hook_slow+0x5d/0xbe
Apr  4 15:45:36 node00002 kernel: [1095515.054659]  [<ffffffffa023f1d5>]
:bridge:br_nf_pre_routing_finish+0x0/0x29a
Apr  4 15:45:41 node00002 kernel: [1095515.054719]  [<ffffffffa023ffad>]
:bridge:br_nf_pre_routing+0x5f0/0x617
Apr  4 15:45:55 node00002 kernel: [1095515.054755]  [<ffffffff803e0130>]
nf_iterate+0x41/0x7d
Apr  4 15:46:00 node00002 kernel: [1095515.054793]  [<ffffffffa023b72d>]
:bridge:br_handle_frame_finish+0x0/0x13e
Apr  4 15:46:03 node00002 kernel: [1095515.054829]  [<ffffffff803e01c9>]
nf_hook_slow+0x5d/0xbe
Apr  4 15:46:03 node00002 kernel: [1095515.054870]  [<ffffffffa023b72d>]
:bridge:br_handle_frame_finish+0x0/0x13e
Apr  4 15:46:03 node00002 kernel: [1095515.054912]  [<ffffffffa023ba0f>]
:bridge:br_handle_frame+0x1a4/0x1c9
Apr  4 15:46:03 node00002 kernel: [1095515.054949]  [<ffffffff803c5c3e>]
netif_receive_skb+0x2e1/0x3f8
Apr  4 15:46:04 node00002 kernel: [1095515.054984]  [<ffffffffa005fc50>]
:atl1:atl1_intr+0xc93/0xcb9
Apr  4 15:46:04 node00002 kernel: [1095515.055020]  [<ffffffff803c8510>]
process_backlog+0x115/0x145
Apr  4 15:46:04 node00002 kernel: [1095515.055054]  [<ffffffff803c7fb3>]
net_rx_action+0xd9/0x24c
Apr  4 15:46:05 node00002 kernel: [1095515.055089]  [<ffffffff80231c9c>]
__do_softirq+0x77/0x103
Apr  4 15:46:05 node00002 kernel: [1095515.055096]  [<ffffffff8020c13c>]
call_softirq+0x1c/0x28
Apr  4 15:46:05 node00002 kernel: [1095515.055096]  [<ffffffff8020e08a>]
do_softirq+0x55/0xbb
Apr  4 15:46:05 node00002 kernel: [1095515.055096]  [<ffffffff8020e16d>]
do_IRQ+0x7d/0x9a
Apr  4 15:46:11 node00002 kernel: [1095515.055096]  [<ffffffff8037d6a4>]
evtchn_do_upcall+0x13c/0x1fc
Apr  4 15:46:14 node00002 kernel: [1095515.055096]  [<ffffffff8020bbde>]
do_hypervisor_callback+0x1e/0x30
Apr  4 15:46:14 node00002 kernel: [1095515.055096]  <EOI>  [<ffffffff80313007>]
clear_page_c+0x7/0x10
Apr  4 15:46:14 node00002 kernel: [1095515.055096]  [<ffffffff802693be>]
get_page_from_freelist+0x40f/0x518
Apr  4 15:46:15 node00002 kernel: [1095515.055096]  [<ffffffff802695b2>]
__alloc_pages_internal+0xd6/0x3b2
Apr  4 15:46:38 node00002 kernel: [1095515.055096]  [<ffffffff8021c0df>]
pte_alloc_one+0x14/0x49
Apr  4 15:46:39 node00002 kernel: [1095515.055096]  [<ffffffff80273aab>]
__pte_alloc+0x12/0x25e
Apr  4 15:46:49 node00002 kernel: [1095515.055096]  [<ffffffff80276291>]
handle_mm_fault+0x1d5/0xc46
Apr  4 15:46:52 node00002 kernel: [1095515.055096]  [<ffffffff80218197>]
do_page_fault+0xb69/0xf46
Apr  4 15:47:01 node00002 kernel: [1095515.055096]  [<ffffffff803115ef>]
__up_write+0x21/0x10e
Apr  4 15:47:17 node00002 kernel: [1095515.055096]  [<ffffffff80436587>]
error_exit+0x0/0x69
Apr  4 15:47:21 node00002 kernel: [1095515.055096]
Apr  4 15:47:25 node00002 kernel: [1095515.055096] Mem-info:
Apr  4 15:47:28 node00002 kernel: [1095515.055096] DMA per-cpu:
Apr  4 15:47:40 node00002 kernel: [1095515.055096] CPU    0: hi:    0, btch:  
1 usd:   0
Apr  4 15:47:40 node00002 kernel: [1095515.055096] DMA32 per-cpu:
Apr  4 15:47:40 node00002 kernel: [1095515.055096] CPU    0: hi:   90, btch: 
15 usd:  26
Apr  4 15:47:40 node00002 kernel: [1095515.055096] Active:22723 inactive:1559
dirty:22 writeback:0 unstable:0
Apr  4 15:47:40 node00002 kernel: [1095515.055096]  free:437 slab:18504
mapped:1509 pagetables:0 bounce:0
Apr  4 15:47:40 node00002 kernel: [1095515.055096] DMA free:1020kB min:124kB
low:152kB high:184kB active:120kB inactive:0kB present:16160kB pages_scanned:0
all_unreclaimable? no
Apr  4 15:47:40 node00002 kernel: [1095515.055096] lowmem_reserve[]: 0 244 244
244
Apr  4 15:47:40 node00002 kernel: [1095515.055096] DMA32 free:728kB min:1936kB
low:2420kB high:2904kB active:90772kB inactive:6236kB present:250480kB
pages_scanned:0 all_unreclaimable? no
Apr  4 15:47:46 node00002 kernel: [1095515.055096] lowmem_reserve[]: 0 0 0 0
Apr  4 15:47:49 node00002 kernel: [1095515.055096] DMA: 1*4kB 1*8kB 1*16kB
1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 1020kB
Apr  4 15:47:49 node00002 kernel: [1095515.055096] DMA32: 0*4kB 1*8kB 1*16kB
0*32kB 1*64kB 1*128kB 0*256kB 1*512kB 0*1024kB 0*2048kB 0*4096kB = 728kB
Apr  4 15:47:50 node00002 kernel: [1095515.055096] 16241 total pagecache pages
Apr  4 15:47:50 node00002 kernel: [1095515.055096] Swap cache: add 8316829,
delete 8304293, find 5231792/6525939
Apr  4 15:47:50 node00002 kernel: [1095515.055096] Free swap  = 721088kB
Apr  4 15:47:50 node00002 kernel: [1095515.055096] Total swap = 979832kB
Apr  4 15:47:51 node00002 kernel: [1095515.055096] 67584 pages of RAM
Apr  4 15:47:51 node00002 kernel: [1095515.055096] 19019 reserved pages
Apr  4 15:47:51 node00002 kernel: [1095515.055096] 3964 pages shared
Apr  4 15:47:51 node00002 kernel: [1095515.055096] 12536 pages swap cached

It repeated many times and ultimately left the dom0 quite unresponsive. All
domUs were OK; this may have been caused by one domU thrashing heavily and
running at a very high load (30-80).
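
For reference, mode:0x20 on this 2.6.26-era kernel is GFP_ATOMIC, and the
trace shows the failing allocation on the bridge/conntrack receive path in
softirq context, where the allocator may not sleep or reclaim; with DMA32
free:728kB below min:1936kB in the Mem-info dump, an atomic request simply
fails. A rough, hypothetical sketch of such a call site (not the actual
nf_conntrack code):

/*
 * Hypothetical kernel-style sketch of an atomic allocation on a
 * packet-receive path.  In softirq/IRQ context the allocator cannot
 * sleep or enter direct reclaim, so once the zone free pages drop
 * below the watermarks the request fails and the rate-limited
 * "page allocation failure" message above is printed.
 */
#include <linux/slab.h>
#include <linux/gfp.h>

struct track_entry {
        unsigned long key[4];           /* hypothetical payload */
};

static struct track_entry *track_alloc_atomic(void)
{
        /* GFP_ATOMIC (0x20 on these kernels): high priority, no sleeping. */
        struct track_entry *e = kmalloc(sizeof(*e), GFP_ATOMIC);

        if (!e)
                return NULL;            /* caller must drop the packet */
        return e;
}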


-- 
Configure bugmail: 
http://bugzilla.xensource.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
