Re: [Xen-users] Xen + SAN
I have personally set up two different types of SAN
for use with Xen:
The first was FC fabrics using a NetApp device:
- Performs well even with several thousand IOPS, and NetApp embeds a lot of
tools for disk administration (if you don't like them you can still use LVM).
- The Xen host sees the LUNs as local disks, and if, like I did, you set up
several fabrics to the same target(s), you can configure multipathd to
ensure path redundancy. It is quite a mess to set up but truly useful
(see the multipath.conf sketch after this list)!
- You export LUNs created with the NetApp tools, so there is nothing
like a volume group. Be aware that it is preferable to use the
clustered version of LVM2 (clvm) when using LVM on top of your
exported LUNs. Unfortunately, some basic features of LVM (like
snapshots) do not work in clustered mode (at least they didn't
when I last tried).
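For illustration, a minimal /etc/multipath.conf along the lines of the setup
above might look like this (the WWID and alias are made-up placeholders, not
values from my setup; get the real WWID with "multipath -ll"):

    # /etc/multipath.conf - minimal sketch, adjust to your own LUNs
    defaults {
        user_friendly_names yes
    }
    multipaths {
        multipath {
            # hypothetical NetApp LUN WWID, replace with the real one
            wwid  360a98000572d44614a2f426c35385744
            alias xen_vm_lun0
        }
    }

The multipathed LUN then shows up as /dev/mapper/xen_vm_lun0 on each Xen host
and can be used as a PV for (c)LVM.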
The second was AoE:
- Performs well too, provided you meet the Ethernet network requirements.
- A lot cheaper! Many can't afford a NetApp SAN device with Brocade
switches and QLogic HBAs (for example).
- Has native path redundancy if your Ethernet switches are redundant.
Path redundancy is a lot easier to set up and maintain than with
iSCSI.
- You can export raw devices, partitions or LVs (but of course no
volume group), or even files, but... (see the vblade example after this list).
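To illustrate the export point, a minimal sketch using vblade/aoetools (the
shelf/slot numbers, interface name and LV path are only examples):

    # on the storage host: export an LV as AoE shelf 0, slot 1 over eth1
    vbladed 0 1 eth1 /dev/vg_san/vm01-disk

    # on the Xen host (aoe module loaded): rescan and list targets
    aoe-discover
    aoe-stat
    # the exported LV appears as /dev/etherd/e0.1 and can be handed to the guest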
IMHO AoE is a really good alternative from an admin point of view.
Unfortunately my AoE setup never reached the number of VMs I had on
the FC setup, so I am not able to say how many IOPS it would
support. However, if you plan to deploy a 48-disk SAN using a
RAID 10 or even RAID 50 array, you can expect good IO performance.
Regards.

On 23/06/2011 22:32, Shaun Reitan wrote:
I'm
just curious what some of you guys out there are using for remote
storage with Xen. We are currently a service provider using Xen
for our customers' virtual servers. Right now each server is
deployed with a RAID controller and 4 disks in a RAID 10
configuration. The RAID controller + BBU are not cheap and add an
extra expense to the server. Not only that, but disk IO is what
causes us to deploy a new host. For the most part these servers
end up with a lot of unused RAM, CPU, and disk space. What we are
considering doing is setting up a SAN, something like a 48-disk
RAID 10 array that the hosts can be attached to somehow.
I'm curious what some of you guys out there are doing and/or
using. Our virtual servers right now are PVMs (paravirtualized
guests) with logical volumes attached. I've been looking at iSCSI,
but the problem I'm seeing with iSCSI is that the disks exported
to the initiator just pop up as /dev/sd devices, and there seems
to be no simple way to map those devices to the guests using our
automation system. I've also been looking a little into AoE but
am not sure if that would work. If we used disk-based images the
solution for the most part would be easy, but from what I've read,
LVs attached to the guest perform a lot better than a raw disk
image.
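A purely illustrative aside on the /dev/sd naming issue: udev also creates
persistent links for iSCSI LUNs under /dev/disk/by-path and /dev/disk/by-id,
which automation can key off instead of the unstable /dev/sd names (the IP
and target IQN below are made up):

    ls -l /dev/disk/by-path/
    # ip-192.168.1.10:3260-iscsi-iqn.2011-06.com.example:vm01-lun-0 -> ../../sdb
    ls -l /dev/disk/by-id/
    # scsi-360a98000572d44614a2f426c35385744 -> ../../sdb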
Hopefully some of you can pass on your experience! Thanks!
~Shaun