WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-users

Re: [Xen-users] iscsi vs nfs for xen VMs

Rudi Ahlers <Rudi@xxxxxxxxxxx> wrote:

On Thu, Jan 27, 2011 at 1:04 PM, Adi Kriegisch <adi@xxxxxxxxxxxxxxx> wrote:


What do you then do if you want redundancy, between 2 client PC's, i.e
similar to RAID1 ?
Oh well, there are several ways to achieve this, I guess:
* Use dm mirroring on top of clvm (I tested this once personally but did
 not need it for production then -- will probably look into it again some
 time).
 I think this is just the way to go, although it might be a little slower
 than running a RAID in the domU.
* Give two LVs to the virtual machines and let them do the mirroring with
 software RAID.
 I think this option offers the greatest performance while being robust. The
 only disadvantage I see is that in case of failure you have to recreate
 all the software RAIDs in your domUs. In some hosting environments this
 might be an issue.
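As a sketch of that second option (all VG/LV names and device paths below are hypothetical, not taken from this thread), the dom0 would export one LV from each storage path and the domU would mirror them itself with mdadm:

```shell
# Hypothetical example -- volume group, LV, and device names are
# illustrative only.
# In the domU config file, export one LV from each storage path:
#   disk = ['phy:/dev/vg_a/guest-disk0,xvda,w',
#           'phy:/dev/vg_b/guest-disk1,xvdb,w']

# Inside the domU, mirror the two virtual disks with software RAID1:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvda /dev/xvdb
mkfs.ext4 /dev/md0

# The drawback mentioned above: after replacing a failed LV, each domU
# has to re-add the new disk to its own array by hand, e.g.:
#   mdadm --manage /dev/md0 --add /dev/xvdb
```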

Why not just give the 2 LVs to the dom0 and build the RAID in the dom0
instead? Then the domUs still use the "local storage" as before and
they won't know about it.
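A minimal sketch of that dom0-side variant (again with hypothetical names): the mirror is assembled in the dom0, and only the resulting md device is handed to the guest:

```shell
# Hypothetical example -- names are illustrative only.
# In the dom0, mirror the two LVs first:
mdadm --create /dev/md10 --level=1 --raid-devices=2 \
    /dev/vg_a/guest-disk0 /dev/vg_b/guest-disk1

# Then give the guest a single disk backed by the mirror:
#   disk = ['phy:/dev/md10,xvda,w']
```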


With that setup, are you able to do live migration?


* Use glusterfs/drbd/... Performance-wise, and in terms of reliability and
 stability, I do not see any issues here. But to use those you actually do
 not need a SAN as a backend. A SAN always adds a performance penalty due
 to the increase in latency; local storage always has an advantage over a
 SAN in this respect. So in case you plan to use glusterfs, drbd or
 something like that, you should reconsider the SAN issue. This might save
 a lot of money as well... ;-)

I would prefer not to use DRBD. Every layer you add brings more
complication at the end of the day.

And we already have this expensive EMC SAN, so I would like to utilize
it somehow, but with better redundancy.


--
Pierre

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users