On Fri, Nov 11, 2011 at 10:28 AM, Benjamin Weaver <benjamin.weaver@xxxxxxxxxxxxx> wrote:
I am running Xen 4.0.1 on Debian Squeeze (kernel: Linux 2.6.32-5-xen-amd64, Debian 2.6.32-38). Below is output indicating the problem.
My VMs are Ubuntu (Lucid). I cannot save and restore them properly. A Lucid VM works fine when first created by xm create, but when I save it (xm save hostname filename) and then restore it from that file (xm restore filename), I get a VM that lets me log in and then freezes at the prompt.
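For reference, here is the exact sequence (the domain name, config path, and save-file location are mine; adjust as needed):

# reproduce the hang
xm create /etc/xen/lucidxentest.cfg               # VM boots and behaves normally
xm save lucidxentest /var/save/lucidxentest.chk   # domain is suspended to disk
xm restore /var/save/lucidxentest.chk             # login works, then the shell freezes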
This problem with Lucid VMs surfaced only a few weeks ago, before which I was running linux-base 2.6.32-35. The problem appears related to Debian bug #644604 (
http://lists.debian.org/debian-kernel/2011/10/msg00183.html).
I had gotten some good suggestions on how (and whether) to compile a kernel newer than Squeeze's, but I ran into difficulties compiling, and in any event I would rather run my VMs on a stable release.
MY GUESS AT THE PROBLEM: I have since come to suspect a communication problem between the Xen virtual block devices (VBDs), the Xen frontend/backend drivers, and the dom0 drivers, which I would have thought fixable.
Please confirm whether that guess is plausible; any suggestions on how to fix this problem would be greatly appreciated!
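In case it helps pinpoint where the frontend/backend handshake goes wrong, here is what I can check from dom0 (a sketch; N stands for the restored domain's numeric ID):

# inspect both sides of the VBD connection in xenstore
xm list                                    # note the restored domain's ID, N
xenstore-ls /local/domain/0/backend/vbd/N  # backend side of each VBD
xenstore-ls /local/domain/N/device/vbd     # frontend side
# in both trees, state = 4 (XenbusStateConnected) means the device
# reconnected; any other value after restore suggests a stalled handshake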
Output appears below. I notice a couple of things:
1. a. When the Lucid VM is first created, a df shows only xvda2 as a mounted filesystem; b. an lsmod shows only xen_blkfront and xen_netfront. This contrasts with the output of the same commands on a Hardy or Lenny VM: there, df shows several active filesystems, and lsmod shows several modules (ipv6, jbd, etc.), in fact several things, but neither xen_blkfront nor xen_netfront.
2. After the Lucid VM is saved, no reads or writes are being done to its VBDs (details at the end of the output below).
# df command on lenny
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda2             2064208    392684   1566668  21% /
varrun                  262252        28    262224   1% /var/run
varlock                 262252         0    262252   0% /var/lock
udev                    262252        12    262240   1% /dev
devshm                  262252         0    262252   0% /dev/shm
root@lucidxentest3:~#
# df command on lucid
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda2             2064208    545596   1413756  28% /
none                    240380       120    240260   1% /dev
none                    252152         0    252152   0% /dev/shm
none                    252152        28    252124   1% /var/run
none                    252152         0    252152   0% /var/lock
none                    252152         0    252152   0% /lib/init/rw
root@lucidxentest:~#
# lsmod on lucid vm
Module                  Size  Used by
xen_netfront           17890  0
xen_blkfront           10665  2
root@lucidxentest:~#
# lsmod on hardy vm
Module                  Size  Used by
ipv6                  313960  10
evdev                  15360  0
ext3                  149520  1
jbd                    57256  1 ext3
mbcache                11392  1 ext3
root@lucidxentest3:~#
Before xm save, the VBDs on the Lucid VM show read/write activity with non-zero values.
After save, xm top shows the Lucid VM's VBDs with zeroed-out read/write counters, i.e., values of 0 under the following columns of xm top output:
VBD_OO   VBD_RD   VBD_WR  VBD_RSECT  VBD_WSECT
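For anyone reproducing this, the counters can be captured non-interactively with xentop in batch mode (the domain name and save-file path are again mine):

# snapshot VBD counters before and after the save/restore cycle
xentop -b -i 1 | grep lucidxentest         # VBD_RD/VBD_WR non-zero before save
xm save lucidxentest /var/save/lucidxentest.chk
xm restore /var/save/lucidxentest.chk
xentop -b -i 1 | grep lucidxentest         # all VBD columns now read 0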