WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-users

Re: [Xen-users] Shared Storage

Has anyone successfully been able to share an InfiniBand
card with one or more domUs?  We have the ConnectX-2 cards
from Mellanox, which are claimed to have a shared-channel capability
that can be shared with one or more virtual machines.

Also, has anyone been able to hook up IB-dedicated storage
to a Xen solution, dom0 or domU? If so, what make and model?

Steve Timm



On Thu, 28 Apr 2011, Joseph Glanville wrote:

I am not 100% familiar with the internals of XCP, but after taking a glance
I believe it is based on a CentOS 5.4 kernel, which is OFED-compatible.
You could simply install the OFED RPMs and have full InfiniBand support.
IPoIB is fine for iSCSI-based storage and the like.
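For what it's worth, attaching iSCSI storage over IPoIB is just the normal open-iscsi workflow pointed at an IPoIB address. A minimal dry-run sketch (the portal address is a made-up example, and the function echoes the commands rather than running them):

```shell
# Dry-run sketch of the open-iscsi discovery/login sequence over IPoIB.
# The portal address below is a made-up example; swap in the IPoIB
# address of your storage host. Commands are echoed, not executed.
discover_and_login() {
    portal="$1"
    echo "iscsiadm -m discovery -t sendtargets -p $portal"
    echo "iscsiadm -m node -p $portal --login"
}

discover_and_login 192.168.100.10   # hypothetical IPoIB address
```

Drop the echoes to run it for real; nothing about iscsiadm changes just because the IP happens to sit on an IPoIB interface.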

Joseph.

On 27 April 2011 22:44, <admin@xxxxxxxxxxx> wrote:

 I really like InfiniBand.  However, it is not supported with XCP.



-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:
xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Joseph Glanville
Sent: Wednesday, April 27, 2011 6:28 AM
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Shared Storage



SR is the Xen Cloud Platform storage manager daemon, I think.


But yes, cLVM is not required to build a setup where a single large LUN is
exported to multiple hypervisors, as long as you manage the LVM metadata on a
single host. If you need to manage it on multiple hosts, make sure you script
an lvscan run on the other hosts to switch the logical volumes to active.
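A minimal sketch of that idea, assuming passwordless ssh between hypervisors. Host names and the LV path are hypothetical, and the function echoes the remote command instead of running it, so it reads as a dry run:

```shell
# Dry-run sketch: after changing LVM metadata on the "master" host,
# make the other hypervisors rescan and activate the logical volume.
# Host names and the LV path are hypothetical; commands are echoed,
# not executed.
refresh_lv() {
    host="$1"; lv="$2"
    echo "ssh $host 'lvscan > /dev/null; lvchange -ay $lv'"
}

for h in hv2 hv3; do          # hypothetical peer hypervisors
    refresh_lv "$h" /dev/vg_guests/vm01-disk
done
```

Hook something like this into whatever creates or resizes LVs on the master, and the other hosts will see consistent metadata before any guest there touches the volume.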

If you are running a RHEL environment, I highly suggest looking into cLVM
and the rest of the RHEL cluster suite, as it makes a lot of what you are
trying to do much easier. If not, there is plenty of room left for hackery.
:)

For those that have suggested InfiniBand, I would also put my vote behind
it. Our solutions are developed on InfiniBand and are some of the fastest in
the world (or the fastest, in the case of cloud storage), and we have yet to
saturate the bandwidth of bonded DDR, which is 40 Gbit/s. Price per port it is
not that far from 10GbE, but it is much more useful and resilient.

Joseph.



On 27 April 2011 03:01, John Madden <jmadden@xxxxxxxxxxx> wrote:

Am I missing something here? Is it possible to do live migrations
using SR type LVMoiSCSI? The reason I ask is because the discussion
made me think it would not be possible.



I don't know what "SR" means, but yes, you can do live migrations even
without cLVM.  All cLVM does is ensure consistency in LVM metadata changes
across the cluster.  Xen itself prevents the source and destination dom0s
from trashing the disk during live migration.  Other multi-node trashing is
left to lock managers like OCFS2, GFS, not-being-dumb, etc.



John





--
John Madden
Sr UNIX Systems Engineer / Office of Technology
Ivy Tech Community College of Indiana
jmadden@xxxxxxxxxxx

_______________________________________________

Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users



--

Kind regards,

Joseph.

Founder | Director

Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 | Mobile: 0428 754 846











--
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@xxxxxxxx  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.

