To: "Yu, Ping Y" <ping.y.yu@xxxxxxxxx>
Subject: RE: [Xen-devel] Daily Xen-HVM Builds: cs9226
From: Daniel Stekloff <dsteklof@xxxxxxxxxx>
Date: Wed, 15 Mar 2006 22:31:16 -0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Rick Gonzalez <rcgneo@xxxxxxxxxx>
Delivery-date: Thu, 16 Mar 2006 06:32:27 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <2BF508F394C196468CCBEC031320DCDFA9DF19@pdsmsx405>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <2BF508F394C196468CCBEC031320DCDFA9DF19@pdsmsx405>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Thu, 2006-03-16 at 13:25 +0800, Yu, Ping Y wrote:
> Daniel,
> 
> Currently, HVM supports multiple disks in the QEMU configuration; you can
> add extra disks via options in "disk", for example:
> disk = [ 'file:/var/images/min-el3-i386.img,ioemu:hda,w',
>          'file:/var/images/min-el3-i386_2.img,ioemu:hdb,w' ]
> Does that meet your requirement?


My requirement for what? I know HVM domains can support more than one
disk image; the idea is to get xm-test to automate creating disk images
for testing HVM domains. My plan is eventually to use device-mapper to
present a read-only root image that all the xm-test HVM test domains
will share, and then add writable partitions to test domains as needed,
roughly as sketched below.
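For illustration only, a minimal sketch of that device-mapper setup,
assuming a shared base image base.img and a per-domain copy-on-write
file cow-1.img (both names hypothetical, untested):

    # attach the shared read-only base and a per-domain COW store
    losetup -r /dev/loop0 base.img
    losetup /dev/loop1 cow-1.img
    # snapshot target: writes go to the COW device, reads fall
    # through to the base (p = persistent, 8-sector chunks)
    SECTORS=$(blockdev --getsz /dev/loop0)
    echo "0 $SECTORS snapshot /dev/loop0 /dev/loop1 p 8" | \
        dmsetup create xmtest-root-1
    # each test domain would then get phy:/dev/mapper/xmtest-root-1
    # as its root disk in the xm config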


> The current problem is that a strict check was added to VBD that
> forbids using one image for multiple HVM domains, so all those test
> cases in xm-test fail; see the information below:
> 
> [dom0] Running `xm create /tmp/xm-test.conf'
> Using config file "/tmp/xm-test.conf".
> Error: Device 768 (vbd) could not be connected.
> File /opt/vmm/control_panel/xm-test/ramdisk/disk.img is loopback-mounted 
> through /dev/loop0,
> which is mounted in a guest domain,
> and so cannot be mounted now.
> Failed to create test domain because:
> Using config file "/tmp/xm-test.conf".
> Error: Device 768 (vbd) could not be connected.
> File /opt/vmm/control_panel/xm-test/ramdisk/disk.img is loopback-mounted 
> through /dev/loop0,
> which is mounted in a guest domain,
> and so cannot be mounted now.
> 
> REASON: Failed to create domain


The vbd issue wasn't that only one image could be loaded for one HVM
domain, if that's what you're saying. The issue was exceeding the number
of loopback devices on the system. Qemu-dm loads disk images through
loopback devices, so the number of disk images that can be mounted is
limited to the number of configured loopback devices.
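You can check and raise that limit yourself; a quick sketch, assuming
the standard loop driver (exact values depend on your setup):

    # list the loop devices currently in use
    losetup -a
    # if loop is built into the kernel: boot with max_loop=256
    # if loop is built as a module: reload it with a higher limit
    modprobe -r loop && modprobe loop max_loop=256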

There was a bug in 11_create_concurrent_pos.py in xm-test: it creates
as many concurrent domains as possible based on available memory, with
a cutoff of 50. This is fine for paravirt, but broke for HVM because of
the loopback device limit. I have patched the test and it should work
for you. I have run 11_create_concurrent_pos.py on my x366 with the
kernel option max_loop=256 and was able to load 50 disk images, all
backed by the same image file.
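For reference, setting that option just means appending it to the dom0
kernel line in GRUB; the paths below are hypothetical and will differ
on your box:

    title Xen 3.0 / dom0
        kernel /boot/xen.gz
        module /boot/vmlinuz-2.6-xen0 root=/dev/sda1 ro max_loop=256
        module /boot/initrd-2.6-xen0.img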

Thanks,

Dan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel