[Xen-users] memory squeeze in netback driver
I am running CentOS 5.6 with the native version of Xen for that distro
(it claims to be 3.0.3, but I seem to remember it's actually newer than
that). I am also running a two-node high-availability cluster with
heartbeat and pacemaker. In normal operation, the DomUs are split
between the two servers. When one server is taken down for maintenance,
all the DomUs run on the remaining server. This really shouldn't be a
problem; in this mode, "xm top" shows that Dom0 still has 46% of the
memory, so it does not appear to be short of RAM.
At first all looks good, and as long as I don't mess with it, the DomUs
all function fine in this mode. I do have a couple of VMs set up that
normally do not run; they exist only to be cloned. But as such, I have
to keep them updated, which means they have to be started manually now
and then to apply updates. This is where the trouble starts.
When the DomUs are split between both servers, there is no issue. But
when the DomUs are all on one node, then as soon as I start one of
these extra DomUs, I start seeing messages like these:
Apr 23 09:31:57 vmx2 kernel: xen_net: Memory squeeze in netback driver.
Apr 23 09:31:59 vmx2 kernel: xenbr0: port 20(vif49.0) entering disabled state
Once this happens, all the VMs become unreachable, both from outside
the cluster and from Dom0. Stopping the extra DomU restores service
immediately. As noted above, the host system does not appear to be
short of RAM.
Does anyone know what causes this or what I could do to fix it? I am
hoping there is just some config parameter that I can specify to give
more of the available RAM to the netback driver (whatever that is) to
alleviate this.
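To be concrete, the kind of knob I have in mind is something like fixing
Dom0's memory so it can't be ballooned down when another guest starts.
The values below are just placeholders, and I am not sure these exact
options apply to the CentOS 5.6 Xen packages, but roughly:

    # /boot/grub/menu.lst -- give Dom0 a fixed amount of memory at boot
    # (2048M is a placeholder, not a recommendation)
    kernel /boot/xen.gz dom0_mem=2048M

    # /etc/xen/xend-config.sxp -- keep xend from ballooning Dom0 below this
    (dom0-min-mem 2048)

If something along those lines (or a netback-specific tunable) is the
right fix here, I'd appreciate a pointer.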
Thanks in advance,
--Greg
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users