Re: [Xen-users] Shared Storage


What I mean by IB-dedicated storage is a hardware storage array that I could plug into my existing IB switch without buying an IB-to-Fibre Channel hybrid switch or having a machine to bridge between them. Having the SRP drivers actually work well enough to read it would be a plus too.

Steve


On Thu, 28 Apr 2011, Joseph Glanville wrote:

Hi Steven

What do you mean by IB-dedicated storage?
We use IB to power shared storage for our VMs; you can see an overview of our technology stack here:
http://orionvm.com.au/cloud-services/Our-Technology/
If you mean SRP or iSER then yes and yes, but there are very few good SRP initiators and even fewer good targets.
Keep an eye on the LIO (Linux-iSCSI.org) project, as it is merging into mainline in the 2.6.38 or .39 merge window and will provide SRP target support out of the box.
The SRP initiator included in the OFED stack is somewhat suboptimal if you need to dynamically manage LUNs on dom0s, but it is not too bad if you have a single LUN with many LVs, as most of the setups in this thread entail.

In terms of Channel I/O Virtualisation, we are looking into this, and if I am successful I will post a howto on our tech blog and forward it to the list.

Joseph.


On 28 April 2011 03:29, Steven Timm <timm@xxxxxxxx> wrote:

Has anyone successfully been able to share the InfiniBand card with one or more domUs? We have the ConnectX-2 cards from Mellanox, which are claimed to have a shared-channel capability that lets the card be shared with one or more virtual machines.

Also, has anyone been able to hook up IB-dedicated storage
to a Xen solution, dom0 or domU--if so, what make and model?

Steve Timm




On Thu, 28 Apr 2011, Joseph Glanville wrote:

I am not 100% familiar with the internals of XCP, but after taking a glance I believe it is based on a CentOS 5.4 kernel, which is OFED compatible.
You could simply install the OFED RPM and have full InfiniBand support.
IPoIB is fine for iSCSI-based storage and the like.
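
For example, once the IPoIB interface is up on the dom0, attaching it to an iSCSI target over that interface is just the usual open-iscsi discovery and login; something like the following, where the portal address and IQN are made up purely for illustration:

  # discover targets on the storage box via its IPoIB address
  iscsiadm -m discovery -t sendtargets -p 192.168.10.1
  # log in to the LUN you want to carve up with LVM
  iscsiadm -m node -T iqn.2011-04.au.com.example:storage.lun1 -p 192.168.10.1 --login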

Joseph.

On 27 April 2011 22:44, <admin@xxxxxxxxxxx> wrote:

  I really like InfiniBand.  However, it is not supported with XCP.



-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx On Behalf Of Joseph Glanville
Sent: Wednesday, April 27, 2011 6:28 AM
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Shared Storage



SR is the Xen Cloud Platform storage manager daemon, I think.


But yes, cLVM is not required to build a setup where a single large LUN is exported to multiple hypervisors, as long as you manage the LVM metadata on a single host. If you need to manage it on multiple hosts, make sure you script running an lvscan on the other hosts to switch the logical volumes to active.
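
A rough sketch of what such a script might look like, in Python; the peer host names and the volume group name are placeholders, not taken from any real setup:

#!/usr/bin/env python
# Rough sketch: after creating or resizing an LV on the dom0 that owns the
# LVM metadata, have the other dom0s re-read the shared LUN's metadata and
# activate anything new. Hosts and VG below are placeholders.
import subprocess

OTHER_DOM0S = ["dom0-b", "dom0-c"]   # hypervisors that did not make the change
SHARED_VG = "vg_guests"              # volume group living on the shared LUN

for host in OTHER_DOM0S:
    # re-scan LVM metadata on the peer host
    subprocess.check_call(["ssh", host, "lvscan"])
    # activate any logical volumes it has not activated yet
    subprocess.check_call(["ssh", host, "vgchange", "-a", "y", SHARED_VG])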

If you are running a RHEL environment, I highly suggest looking into cLVM and the rest of the RHEL cluster suite, as it makes a lot of what you are trying to do a lot easier. If not, there is plenty of room left for hackery. :)

For those that have suggested InfiniBand, I would also put my vote behind it. Our solutions are developed on InfiniBand and are some of the fastest in the world (or the fastest, in the case of cloud storage), and we have yet to saturate the bandwidth of bonded DDR, which is 40 Gbit/s. Per port, the price is not that far from 10GbE, but it is much more useful and resilient.

Joseph.



On 27 April 2011 03:01, John Madden <jmadden@xxxxxxxxxxx> wrote:

Am I missing something here? Is it possible to do live migrations using the LVMoiSCSI SR type? The reason I ask is that the discussion made me think it would not be possible.



I don't know what "SR" means but yes, you can do live migrations even
without cLVM.  All cLVM does is ensure consistency in LVM metadata
changes
across the cluster.  Xen itself prevents the source and destination
dom0's
from trashing the disk during live migration.  Other multi-node trashings
are left to lock managers like OCFS2, GFS, not-being-dumb, etc.
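
To make that concrete, a minimal sketch (the domain name, volume group and target host are hypothetical): as long as both dom0s can see the same logical volume under the same path, the guest config can be identical on both and a plain live migration works.

# fragment of the domU config file, identical on source and destination dom0
name = "vm01"
disk = ['phy:/dev/vg_guests/vm01-disk,xvda,w']

Then, from the source dom0 (with xend's relocation server enabled on the destination): xm migrate --live vm01 dest-dom0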



John





--
John Madden
Sr UNIX Systems Engineer / Office of Technology
Ivy Tech Community College of Indiana
jmadden@xxxxxxxxxxx


--

Kind regards,

Joseph.

Founder | Director

Orion Virtualisation Solutions | www.orionvm.com.au | Phone: 1300 56 99 52 | Mobile: 0428 754 846




--
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@xxxxxxxx  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
