This has come up on the mailing list a few
times before and I have seen it too on my system.
It happened to me because I had no limit
on the Dom0 memory and this squeezed the memory available for the Xen
hypervisor to use. The solution (or at least, one of the solutions) is to
modify the GRUB / LILO invocation to include a memory allowance for Dom0. In GRUB
that means adding a dom0_mem option (value in kB, or with an M suffix) to the
hypervisor line, something like:

title Xen 3.0
kernel /boot/xen-3.0.gz dom0_mem=262144
module /boot/vmlinuz-2.6-xen root=/dev/hda1 ro
This stops Dom0 from grabbing all the unused
memory and leaves the rest for the Xen hypervisor:
15:47:11   Xen 3.0.2-2
5 domains: 1 running, 3 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 2088508k total, 1359456k used, 729052k free    CPUs: 1 @ 2693MHz
     NAME  STATE  CPU(sec) CPU(%)  MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) SSID
                              0.0  262012   12.5                          1    1   968744  3329318     0
                              0.0  283704   13.6   no limit       n/a     1    8  2298855  1518669     0
                              0.0  261976   12.5                          1    1   408774  2551772     0
  octopus  ------      6784   0.0  261600   12.5                          1    3  4466856  4630781     0
           --b---     13707   0.0  261924   12.6                          1    1   535467
This stopped the problems immediately, but
it does require a full reboot…
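After the reboot you can double-check what the hypervisor has kept back for
itself with xm (field names here are what my 3.0.2 box prints; values are in MB):

  xm info | grep -E 'total_memory|free_memory'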
Hope this helps,
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Scott Moe
Sent: 11 January 2007 15:23
Subject: [Xen-users] Memory squeeze in netback driver
I have had servers run Xen 2 without rebooting Dom0 for most of a year.
I am working now on a very similar machine with Xen 3.0.3 and Debian etch.
My network setup is slightly customized. I have 3 IPs and use network-route in
xend-config.sxp. I specify vif-route as the vif script in the DomU config for 2
of my VMs.
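Roughly, the relevant pieces look like this (addresses made up). In
xend-config.sxp:

  (network-script network-route)

and in the DomU config for the two routed guests:

  vif = [ 'ip=192.0.2.10, script=vif-route' ]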
I also create a bridge in /etc/network/interfaces. I have two other VMs that
only interface to this bridge, and the VMs with their own IPs have a second
vif on the bridge. I specify the vif-bridge script in the vif config for these VMs.
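The bridge itself is nothing special; with bridge-utils installed it is along
these lines in /etc/network/interfaces (name and address invented):

  auto xenbr0
  iface xenbr0 inet static
      address 10.0.0.1
      netmask 255.255.255.0
      bridge_ports none
      bridge_stp off

and the bridge-only guests get something like:

  vif = [ 'bridge=xenbr0, script=vif-bridge' ]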
Everything boots and runs great, but only for several hours. Yesterday, for example, the
machine was rebooted in the morning. It ran for 11 hours without errors, and then
kern.log started to fill up with this message:
xen-net: memory squeeze in netback driver
This went on for 13 hours at a rate of something like 5 to 10 messages per
second. Then the log shows each vif on the bridge entering the disabled state. The
vifs with dedicated IPs were not mentioned in the log, but this morning only
the dom0 IP was responsive. When I ssh into dom0 and try to ping the IPs
routed to the VMs, I get "no route to host".
I looked at the output of route and everything was listed correctly.
I hope someone has seen a similar problem and can give me some insight. My
experience is limited, but to me this smells like a memory leak. Perhaps there
are updates that will fix this problem but have not made it into the Debian