WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-users

Re: [Xen-users] Hotplug scripts not working / problem with loopback driver

To: Craig Webster <craig@xxxxxxxxxx>
Subject: Re: [Xen-users] Hotplug scripts not working / problem with loopback driver
From: Kyrre M Begnum <kyrre@xxxxxxxxx>
Date: Mon, 6 Feb 2006 15:55:44 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 06 Feb 2006 15:06:33 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <0307B0F0-FE09-4E9C-9055-5B11E5449BAC@xxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <0307B0F0-FE09-4E9C-9055-5B11E5449BAC@xxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx

Craig,

I have a similar experience using Xen 3.0.1 and a pristine Ubuntu 5.04. First I need to modprobe loop in order to get anywhere.

I simply tried to boot and shut down the same VM over and over again. Each boot/shutdown cycle left one more [loop] process lying around. After some iterations I had the following processes:

root      6190  0.0  0.0      0     0 ?        S<   15:41   0:00 [loop0]
root      6479  0.0  0.0      0     0 ?        S<   15:42   0:00 [loop1]
root      6628  0.0  0.0      0     0 ?        S<   15:42   0:00 [loop2]
root      6998  0.0  0.0      0     0 ?        S<   15:44   0:00 [loop3]
root      7090  0.0  0.0      0     0 ?        S<   15:44   0:00 [loop4]
root      7490  0.0  0.0      0     0 ?        S<   15:45   0:00 [loop5]
root      7609  0.1  0.0      0     0 ?        S<   15:45   0:00 [loop6]
root      8050  0.0  0.0      0     0 ?        S<   15:46   0:00 [loop7]

And from here on I get the following error:

Using config file "/opt/mln/projects/root/ugo/ubuntu_xen.cfg".
Error: Device 770 (vbd) could not be connected. Backend device not found.

I cannot "rmmod loop" either because it says it is in use. I can reboot and do the same thing over again. Is there a problem releasing the loopbacks? Do you see the same accumulation of processes?
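For what it's worth, one way to check whether the loop devices are still bound to their backing files after the domain is destroyed is with losetup (a sketch using the standard util-linux tool; whether the detach succeeds depends on whether Xen has really released the device):

```shell
# Show which loop devices are still attached to backing files.
losetup -a

# Try to detach each of the first eight loop devices; devices that
# are already free or still held busy are skipped silently.
for dev in /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 \
           /dev/loop4 /dev/loop5 /dev/loop6 /dev/loop7; do
    losetup -d "$dev" 2>/dev/null || true
done
echo "done"
```

If losetup -d reports the device busy even with no domains running, that would point at the backend not releasing the loopback, which matches the accumulation above.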

Regards

On Feb 5, 2006, at 12:39 PM, Craig Webster wrote:

Hi list,

I'm having a bit of an annoying problem which, being new to Xen and the loopback device, I have no idea how to fix.

Everything was running along fine with 4 VMs, I attempted to add a 5th VM and it couldn't connect to the backend devices. A quick Google suggested that increasing the max_loop parameter for the kernel (loopback driver is compiled in, not a module) would fix this so I added that param to my grub.conf and rebooted.
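For reference, with a compiled-in loop driver the parameter is passed on the dom0 Linux kernel line in grub.conf, which under Xen is the first "module" line rather than the "kernel" line (the paths, versions, and value below are illustrative, not taken from the actual config):

```
title Xen 3.0 / Linux 2.6 dom0
    root (hd0,0)
    kernel /boot/xen-3.0.gz
    module /boot/vmlinuz-2.6-xen0 root=/dev/sda3 ro max_loop=64
    module /boot/initrd-2.6-xen0.img
```

If the driver were built as a module instead, the equivalent would be a max_loop option for the loop module in modprobe's configuration.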

When I tried to start any of the previously working VMs I now get this happening:

  saturn vm # xm create subversion.cfg -c
  Using config file "subversion.cfg".
Error: Device 769 (vbd) could not be connected. Hotplug scripts not working.
  saturn vm # xm destroy subversion
  saturn vm # xm create subversion.cfg -c
  Using config file "subversion.cfg".
Error: Device 770 (vbd) could not be connected. Backend device not found.
  saturn vm # xm destroy subversion
  saturn vm # xm create subversion.cfg -c
  Using config file "subversion.cfg".
Error: Device 770 (vbd) could not be connected. Backend device not found.

I have since tried removing the max_loop param from my grub.conf and rebooting but the same problem keeps coming up. Google suggested removing the memory limit on dom0 but that didn't make any difference; I still got the same error.

Looking at the logs it appears that there's something wrong with my loopbacks or block hotplug script, but I don't know enough to know what to Google for next.

Commenting out the disk parameter in my vm config file allows the boot process to get much further (until it tries to mount the disks).

These are the log entries:
  saturn vm # tail /var/log/xen-hotplug.log
mkdir: cannot create directory `/var/run/xen-hotplug/block': File exists
mkdir: cannot create directory `/var/run/xen-hotplug/block': File exists
  [... repeated lots ...]
mkdir: cannot create directory `/var/run/xen-hotplug/block': File exists
  ioctl: LOOP_SET_FD: Device or resource busy
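The repeated "File exists" messages look like the hotplug block script's mkdir-based lock directory was left behind by an earlier failed run. One thing that may be worth trying (an assumption on my part, not something confirmed in this thread) is clearing the stale lock while no domains are running:

```shell
# With all domUs shut down, remove the stale hotplug lock directory
# so the block script's mkdir-based locking can succeed again.
rm -rf /var/run/xen-hotplug/block
```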

  saturn vm # tail /var/log/xend.log
      return self.dom.waitForDevices()
File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 1343, in waitForDevices
      self.waitForDevices_(c)
File "/usr/lib64/python2.4/site-packages/xen/xend/XendDomainInfo.py", line 971, in waitForDevices_
      return self.getDeviceController(deviceClass).waitForDevices()
File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py", line 135, in waitForDevices
      return map(self.waitForDevice, self.deviceIDs())
File "/usr/lib64/python2.4/site-packages/xen/xend/server/DevController.py", line 151, in waitForDevice
      raise VmError("Device %s (%s) could not be connected. "
      raise VmError("Device %s (%s) could not be connected. "
VmError: Device 770 (vbd) could not be connected. Backend device not found.

My subversion.cfg looks like this:

  saturn vm # cat subversion.cfg
  kernel = "/var/vm/vmlinuz-2.6-xenU"
  memory = 64
  ip = "aaa.bbb.ccc.ddd" # This is a public IP in the cfg file
  netmask = "255.255.255.192"
  gateway = "aaa.bbb.ccc.ddd" # As is this
  vif = ['bridge=xenbr0']
  name = "subversion"
  disk = ['file:/var/vm/subversion-hd.img,sda1,w','file:/var/vm/subversion-swap.img,sda2,w']
  root = "/dev/sda1 ro"

If you have any suggestions which could help me get these VMs running again they would be much appreciated.

Cheers,
Craig

PS: apologies for the length of the post -- wanted to include as much information as possible.
--
Craig Webster | t: +44 (0)131 516 8595 | e: craig@xxxxxxxxxx
Xeriom.NET    | f: +44 (0)131 661 0689 | w: http://xeriom.net



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

