This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] Shared Storage

To: Xen List <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Shared Storage
From: Jonathan Tripathy <jonnyt@xxxxxxxxxxx>
Date: Sun, 24 Apr 2011 18:25:38 +0100
Delivery-date: Sun, 24 Apr 2011 10:26:58 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv: Gecko/20101125 Thunderbird/3.0.11
Hi Everyone,

I am considering a setup where I export an iSCSI target to a Xen node. The Xen node will then use the iSCSI block device as an LVM PV, and create lots of LVs for DomU use.
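A minimal sketch of that setup, assuming Open-iSCSI and the LVM2 tools; the portal address, IQN, device node, and volume names below are placeholders, not values from this thread:

```shell
# Discover and log in to the iSCSI target (portal and IQN are placeholders)
iscsiadm -m discovery -t sendtargets -p 192.0.2.10
iscsiadm -m node -T iqn.2011-04.com.example:xenstore -p 192.0.2.10 --login

# Assuming the new block device shows up as /dev/sdb, make it an LVM PV
pvcreate /dev/sdb
vgcreate vg_xen /dev/sdb

# Carve out one LV per DomU
lvcreate -L 10G -n domu01-disk vg_xen

# Then reference the LV in the DomU config, e.g.:
#   disk = [ 'phy:/dev/vg_xen/domu01-disk,xvda,w' ]
```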

I was wondering if anyone could make me aware of any special consideration I would need to take. I've posted a similar question to the LVM list to ask for further tips more specific to LVM.

Am I barking up the wrong tree here? I know it would be very easy to just use an NFS server with image files, but this will be for large-scale DomU hosting, so that isn't really an option. Additionally, if I wanted to make the LVM VG visible to multiple Xen nodes, is it just a matter of running CLVM on each Xen node? Please keep in mind that only one Xen node will be using a given LV at any one time (so no need for GFS, I believe).
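For the multi-node case, the usual CLVM approach is roughly the following sketch. It assumes a working cluster stack underneath (CLVM requires one, e.g. cman/corosync on distributions of this era); the VG name and device are placeholders:

```shell
# Switch LVM to clustered locking (sets locking_type = 3 in /etc/lvm/lvm.conf)
lvmconf --enable-cluster

# Start the clustered LVM daemon on every Xen node
service clvmd start

# Create (or mark) the VG as clustered so all nodes see consistent metadata
vgcreate -c y vg_xen /dev/sdb
```

Since only one node activates a given LV at a time, CLVM's metadata locking should be enough on its own; a cluster filesystem such as GFS is only needed when the same filesystem is mounted on multiple nodes simultaneously.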

Any help or tips would be appreciated.


Xen-users mailing list
