xen-users

[Xen-users] XEN+CLVM+GFS

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] XEN+CLVM+GFS
From: Gémes Géza <geza@xxxxxxxxxxxxxxxxxxx>
Date: Wed, 26 Apr 2006 12:13:07 +0200
Delivery-date: Wed, 26 Apr 2006 03:13:44 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5 (Windows/20051201)
Hi,

I'm in the process of planning a SAN-based redundant solution with two Dom0s.
I haven't got the equipment yet to do any testing.
What I would like to achieve:
Have a set of failover DomUs. Normally Dom0_0 would run DomU_0, DomU_2, ...
and Dom0_1 would run DomU_1, DomU_3, ... These domains need access to some
data, which could be common to some of them (e.g. a webserver and a
fileserver). If I keep that data on the SAN on a CLVM LV formatted as GFS,
I can access it from one DomU of each Dom0, so two DomUs in total (or will
Xen allow me to export an LV as a partition to more than one DomU?). This
is more of a problem in the failover case, when all DomUs are running on
one Dom0.
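
Just to make the idea concrete, here is a minimal sketch of what I imagine a
DomU config would look like (all the names below are made up, and I'm assuming
Xen's 'w!' disk mode really does allow attaching the same LV writable to more
than one domain):

# /etc/xen/domU_0 -- hypothetical config, the other DomUs would be similar
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 256
name   = "domU_0"
# private root LV plus the shared CLVM LV that carries the GFS filesystem
disk   = [ 'phy:/dev/vg_san/domU_0_root,xvda,w',
           'phy:/dev/vg_san/shared_gfs,xvdb,w!' ]  # 'w!' = writable, sharing allowed
vif    = [ '' ]

If I read the docs correctly, the '!' suffix tells Xen to skip the
exclusive-use check, so the same LV could be attached to DomU_0 on Dom0_0 and
to DomU_1 on Dom0_1 (or, after a failover, to both DomUs on the same Dom0),
with GFS arbitrating the concurrent writes. Am I understanding that correctly?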
I would welcome any ideas on this.

Thanks in advance.

Geza

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
