
Re: [Xen-users] xen storage options - please advise



> Uhm, that was supposed to say "XCP can do VHD snapshots with LVM storage,
> shared across all cluster nodes".
> 
> XCP doesn't use LVM snapshots.
> 
> -- Pasi
> 
> > XCP doesn't use CLVM, but it uses other methods to share the LVM volumes
> > across all hosts/nodes in the cluster.
> > 
> > -- Pasi
> > 
> > 

Pasi, 

Can I ask you some questions about XCP? I know XCP and XenServer are
closely related. I've tried XenServer and for the most part I was pretty
happy with it, at least for windows domUs. Converting my sles domUs
seemed problematic. With xen on sles it's very easy to create sles
domUs, but they are not built with pygrub. 

With windows domUs I used clonezilla to backup and restore them. This
seemed to work fairly well. I tried some of the conversion tools, but
they took a lot longer and didn't work very well. 

Besides converting, I also have to resize most of these domUs, or I'm
wasting a ton of disk space. Disk space has been discussed in this
thread. In XCP you can't use growable sparse files, correct? So if I
use LVM and start the shared storage at a certain size, is it easy to
increase or grow the storage later?
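For reference, here's what I'm guessing the workflow looks like, based on
the xe CLI command names I've seen. This is untested on my side, and the
UUIDs and VM name below are just placeholders - please correct me if the
resize path is different for LVM-backed storage:

```shell
# Untested sketch - placeholders throughout. My understanding is that
# with an LVM-based storage repository, a virtual disk (VDI) can be
# grown offline with the xe CLI.

# Find the UUID of the VDI attached to the domU
xe vbd-list vm-name-label=my-sles-domu params=vdi-uuid

# Grow it (the domU should be shut down first)
xe vdi-resize uuid=<vdi-uuid> disk-size=40GiB
```

If that's roughly right, then I'd still need to grow the partition and
filesystem inside the domU afterwards, same as with plain LVM.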

Does XCP include high availability which costs extra on XenServer?

One other thing I couldn't figure out on XenServer was how to copy an
existing VM. Right now if I want to copy one, I just shut it down, copy
the disk image, and create a new domU based on the copied image. I
couldn't see how to do the same thing on XenServer. How would you do
that on XCP?
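To make the question concrete, this is the sort of thing I'm hoping
exists. The command names are taken from the xe CLI, but I haven't tried
them myself, and the names/UUIDs are placeholders:

```shell
# Untested sketch - placeholders throughout. From the xe CLI docs,
# there appear to be two variants:

# Fast clone (copy-on-write, stays on the same storage repository)
xe vm-clone vm=my-vm new-name-label=my-vm-copy

# Full copy, optionally onto a different storage repository
xe vm-copy vm=my-vm new-name-label=my-vm-copy sr-uuid=<sr-uuid>
```

If either of those does what my manual shut-down-and-copy workflow does,
that would answer my question.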

Maybe I'll download and try XCP and see what happens. 

Besides my questions above, any general suggestions are appreciated. 

Thanks,
James




_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

