xen-users

Re: [Xen-users] Shared Storage

To: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
Subject: Re: [Xen-users] Shared Storage
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Sun, 24 Apr 2011 22:30:03 +0200
Cc: xen-users@xxxxxxxxxxxxxxxxxxx, Jonathan Dye <jdye@xxxxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 24 Apr 2011 13:33:05 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <46C13AA90DB8844DAB79680243857F0F0AFFF2@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <1162000549.239932.1303675286200.JavaMail.root@mail> <4DB48268.3080602@xxxxxxxxxx> <46C13AA90DB8844DAB79680243857F0F0AFFF2@xxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.14) Gecko/20110221 SUSE/3.1.8 Thunderbird/3.1.8
There is no such thing as "Linux iSCSI"; there are several implementations. I dare say IET is the best known one, but not the fastest. That would be SCST.
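
For illustration, a minimal /etc/ietd.conf sketch for IET, following the
one-target-per-node, one-LUN-per-DomU layout discussed below (the IQN
and LV paths are made-up examples):

    # /etc/ietd.conf -- one target, one LUN per DomU
    Target iqn.2011-04.com.example:storage.xen1
        # blockio does direct block I/O, bypassing the target's page cache
        Lun 0 Path=/dev/vg0/domu1,Type=blockio
        Lun 1 Path=/dev/vg0/domu2,Type=blockio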

B.


On 04/24/11 22:24, Jonathan Tripathy wrote:
Well, I'm very familiar with LVM and with shrinking and extending LVs
and filesystems; I've been doing this for ages.

I would like to use Openfiler; however, I'd like to script this, so
maybe plain Linux is still the best option?

And just to confirm: Linux iSCSI will be OK with hundreds of LUNs?
Assume the network and spindle hardware are adequate.
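
If scripting is the goal, provisioning one more DomU volume can be a
couple of lines with LVM plus IET's ietadm (a sketch only; the VG name,
size, target ID and LUN number are hypothetical):

    # carve out a new LV and publish it as an extra LUN on a running target
    lvcreate -L 20G -n domu51 vg0
    ietadm --op new --tid=3 --lun=51 --params Path=/dev/vg0/domu51,Type=blockio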

Thanks



-----Original Message-----
From: Bart Coninckx [mailto:bart.coninckx@xxxxxxxxxx]
Sent: Sun 24/04/2011 21:04
To: Jonathan Dye
Cc: Jonathan Tripathy; xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Shared Storage

I concur: in terms of performance, Linux-based iSCSI might not be the
fastest, but in terms of what you are familiar with and what is
flexible, it might again be a good choice.

It might also be worth looking into AoE (ATA over Ethernet). It's not
popular, but I'm told it is fast as hell.
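
Exporting a block device over AoE is a one-liner with vblade from the
aoetools suite; a sketch (the shelf/slot numbers, interface and LV path
are arbitrary examples):

    # storage server: export the LV as AoE shelf 0, slot 1 on eth0
    vbladed 0 1 eth0 /dev/vg0/domu1
    # Xen node: load the initiator; the disk appears as /dev/etherd/e0.1
    modprobe aoe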

B.



On 04/24/11 22:01, Jonathan Dye wrote:
 > So, Linux storage servers then. If I might interject again, I would
 > suggest you try Nexenta or Solaris 11 Express. If not, try a NAS
 > appliance like FreeNAS or Openfiler - one of those purpose-built
 > appliances is likely to have done a better job than you will
 > attempting to reproduce it. If you're brave, try clustered storage
 > with Ceph, since that's the way everything is headed anyway (i.e. the
 > way of Isilon, Lustre, GPFS and the like). After all reasonable
 > options fail, roll your own with LVM. IMO, making a storage server out
 > of Linux is inferior because the volume management, filesystem, and
 > RAID are stratified instead of engineered together. If you use any
 > modern Solaris-kernel-based distribution, like the ones named above,
 > with ZFS, then I think you'll find that it can fill your network
 > connection with storage traffic without tweaking. The downside is that
 > you have to be careful about hardware selection.
 >
 > - Jonathan
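
As a rough sketch of that zvol-per-DomU approach on Solaris 11 Express
(assuming the COMSTAR services are enabled; the pool name is invented
and the GUID below stands in for the one create-lu actually prints):

    # create a 20 GB zvol to back one DomU
    zfs create -V 20G tank/domu1
    # register the zvol as a SCSI logical unit with COMSTAR
    stmfadm create-lu /dev/zvol/rdsk/tank/domu1
    # expose the LU to initiators, using the GUID printed by create-lu
    stmfadm add-view 600144f0aabbccdd00004db4000a0001
    # create an iSCSI target (COMSTAR generates the IQN)
    itadm create-target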
 >
 > ----- Original Message -----
 > From: "Jonathan Tripathy"<jonnyt@xxxxxxxxxxx>
 > To: "Bart Coninckx"<bart.coninckx@xxxxxxxxxx>,
xen-users@xxxxxxxxxxxxxxxxxxx
 > Sent: Sunday, April 24, 2011 1:43:46 PM
 > Subject: RE: [Xen-users] Shared Storage
 >
 > Thanks Bart. Very helpful info
 >
 > I agree with you about the LVM PV issue. It is indeed very
 > uncomfortable. I am looking into CLVM (Clustered LVM), though this
 > isn't very well documented.
 >
 > So the current idea is one target per Xen node (hence one target per
 > RAID array on the storage server), and one LUN per DomU. Is it easy
 > enough to expand and shrink LUNs? This was the advantage of LVM that I
 > loved. I guess I would run LVM on the storage server and export the
 > LVs?
 >
 > Thanks
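
Resizing is straightforward on the storage-server side when each LUN is
backed by an LV: growing is safe online, while shrinking must only
follow a filesystem shrink inside the DomU. A sketch (device names are
examples):

    # storage server: grow the LV backing the LUN by 5 GB
    lvextend -L +5G /dev/vg0/domu1
    # Xen node: rescan the iSCSI sessions so the new size is picked up
    iscsiadm -m session -R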
 >
 > -----Original Message-----
 > From: Bart Coninckx [mailto:bart.coninckx@xxxxxxxxxx]
 > Sent: Sun 24/04/2011 20:40
 > To: Jonathan Tripathy
 > Cc: Jonathan Dye; Xen List
 > Subject: Re: [Xen-users] Shared Storage
 >
 > I think you had better take one target and then several LUNs on it
 > (one per DomU); that would make more sense. If you don't do that and
 > use just one LUN for several DomUs, you need to turn the newly created
 > disk into an LVM PV and create LVs for each DomU on the hypervisor
 > side, which does not really sound comfortable. You would also close
 > off any path to HA, unless you introduce some locking system, since
 > every hypervisor would be trying to write to the same LUN.
 >
 > B.
 >
 > On 04/24/11 21:35, Jonathan Tripathy wrote:
 >> Hi Guys,
 >>
 >> Please forget the "thousands" number. We would have thousands of DomUs,
 >> but this would be spread over multiple storage servers, so never mind
 >> about that scale.
 >>
 >> If I were exporting "one big LUN" per Xen node, it would contain at
 >> most 80 DomU LVs (in real-world usage, closer to 50). Furthermore,
 >> each LUN would be exported from a separate RAID array. Each storage
 >> server would contain x RAID arrays, where x equals the number of Xen
 >> nodes and also the number of exported LUNs.
 >>
 >> Of course, if I went with one LUN per DomU, then each storage server
 >> would contain 80x LUNs (closer to 50x though).
 >>
 >> With these numbers, any idea which is better?
 >>
 >> Thanks
 >>
 >>
 >> -----Original Message-----
 >> From: Bart Coninckx [mailto:bart.coninckx@xxxxxxxxxx]
 >> Sent: Sun 24/04/2011 19:36
 >> To: Jonathan Tripathy
 >> Cc: Jonathan Dye; Xen List
 >> Subject: Re: [Xen-users] Shared Storage
 >>
 >> That is completely dependent on your hardware specs and your DomUs'
 >> properties. It sounds like a lot, though. I seem to remember that
 >> some time ago you also stated you wanted to run at least 100 DomUs on
 >> one hypervisor; maybe this is again pushing it.
 >> With a decent RAID and 10GbE or InfiniBand you can go a long way,
 >> though. You should also consider using SCST instead of IET, as it is
 >> faster.
 >>
 >> B.
 >>
 >>
 >>
 >> On 04/24/11 20:31, Jonathan Tripathy wrote:
 >> > We're talking hundreds, if not thousands, of DomUs here. Will iSCSI
 >> > on Linux scale to such large numbers?
 >> >
 >> > Thanks
 >> >
 >> >
 >> > On 24/04/2011 19:13, Jonathan Dye wrote:
 >> >> Why not create one iSCSI LUN per VM disk instead of carving them
 >> >> up on the hypervisor? That's more typical, and a more typical
 >> >> state of affairs in Linux is your friend. Also, if you exported
 >> >> one big PV you would have just one LUN queue, instead of one LUN
 >> >> queue per vbd, and that becomes a problem at scale.
 >> >>
 >> >> - Jonathan
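
With one LUN per DomU, the guest config can point straight at the
imported device; a sketch of a domU disk line using open-iscsi's
persistent by-path naming (the portal IP and IQN are invented):

    # /etc/xen/domu1.cfg -- the LUN shows up on the Xen node as a block device
    disk = [ 'phy:/dev/disk/by-path/ip-192.168.0.10:3260-iscsi-iqn.2011-04.com.example:storage.xen1-lun-0,xvda,w' ]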
 >> >>
 >> >> ----- Original Message -----
 >> >> From: "Jonathan Tripathy"<jonnyt@xxxxxxxxxxx>
 >> >> To: "Xen List"<xen-users@xxxxxxxxxxxxxxxxxxx>
 >> >> Sent: Sunday, April 24, 2011 11:25:38 AM
 >> >> Subject: [Xen-users] Shared Storage
 >> >>
 >> >> Hi Everyone,
 >> >>
 >> >> I am considering a setup where I export an iSCSI target to a Xen
 >> >> node. The Xen node will then use the iSCSI block device as an LVM
 >> >> PV, and create lots of LVs for DomU use.
 >> >>
 >> >> I was wondering if anyone could make me aware of any special
 >> >> consideration I would need to take. I've posted a similar question to
 >> >> the LVM list to ask for further tips more specific to LVM.
 >> >>
 >> >> Am I barking up the wrong tree here? I know it would be very easy
 >> >> to just use an NFS server and image files, but this will be for
 >> >> large-scale DomU hosting, so that isn't really an option.
 >> >> Additionally, if I wanted to make the LVM VG visible to multiple
 >> >> Xen nodes, is it just a matter of running CLVM on each Xen node?
 >> >> Please keep in mind that only one Xen node will be using a given
 >> >> LV at any one time (so there should be no need for GFS, I believe).
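
For reference, the node-side plumbing for that setup is short with
open-iscsi and LVM2 (the portal address, IQN and all names below are
placeholders):

    # Xen node: discover and log in to the target
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T iqn.2011-04.com.example:storage.xen1 -p 192.168.0.10 --login
    # turn the imported disk (here /dev/sdb) into a PV/VG, then carve out DomU LVs
    pvcreate /dev/sdb
    vgcreate vg_domu /dev/sdb
    lvcreate -L 10G -n domu1 vg_domu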
 >> >>
 >> >> Any help or tips would be appreciated
 >> >>
 >> >> Thanks
 >> >>
 >> >
 >>
 >
 >


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
