xen-users

[Xen-users] Re: [rhelv5-list] shared storage manual remount ...

To: "Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list" <rhelv5-list@xxxxxxxxxx>
Subject: [Xen-users] Re: [rhelv5-list] shared storage manual remount ...
From: Pasi Kärkkäinen <pasik@xxxxxx>
Date: Thu, 4 Feb 2010 13:20:15 +0200
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
In-reply-to: <96b40cab1002031640t7d02c5bct631c37de00885f48@xxxxxxxxxxxxxx>
References: <96b40cab1002031640t7d02c5bct631c37de00885f48@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.18 (2008-05-17)
On Thu, Feb 04, 2010 at 01:40:26AM +0100, Zoran Popović wrote:
>    I am wondering if there is a way to solve the following problem. I know
>    the usual approach is a distributed file system with a locking
>    mechanism, as with GFS and Red Hat Cluster Suite or similar, but I am
>    interested in doing some of this manually and ONLY with raw devices (no
>    file system), or simply in learning the general principles.
>
>    The case: I have a VLUN (on an FC SAN) presented to two servers but in
>    use on only one host at a time - to be more precise, used by a Xen HVM
>    guest as a raw physical phy: drive. I shut this guest down and bring it
>    up manually on the second host - there it sees the changed disk
>    contents and can make changes to the presented disks. Then I shut it
>    down there and bring it up again on the first host - BUT THEN this
>    guest (or host) does not see the changes made by the second system; it
>    still sees the disk exactly as it left it.
>
>    Or, an even clearer case: if I bring the HVM guest up on a host, shut
>    it down, restore its disks on the storage array (an HP EVA8400,
>    restoring the original disk from a snapshot - it has redundant
>    controllers, but their caches are certainly kept in sync), and then
>    bring the guest up again, it still sees the disks as they were before
>    the restore. But if I _RESTART_ the host, it sees the restored disks
>    correctly. I am wondering why this happens, and whether it is possible
>    to resync with the storage without a restart (I would not want that in
>    production! and on our Windows systems it is possible). I have tried
>    sync (but that only flushes the buffer cache), and I have not yet tried
>    echo 3 > /proc/sys/vm/drop_caches after that (I have just come across
>    some articles about it), so I am not sure whether that would really
>    invalidate the cache and help me. What is the right way of doing this?
>    Please, help ...
>    ZP.

Exactly which changes in the guest are you talking about (the ones that
are not visible after switching hosts)?

There was a pygrub caching bug in Xen in EL5, but that shouldn't affect
HVM guests, since they don't use pygrub.

If you use the phy: backend for the disks, there should be no caching in
dom0. Please paste your /etc/xen/hvmguest config file.
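
For reference, an EL5-style HVM guest config using the phy: backend looks
roughly like the sketch below - the name and device path are made-up
placeholders, not your actual values:

    # /etc/xen/hvmguest - minimal HVM example with a phy: disk backend
    name    = "hvmguest"
    builder = "hvm"
    kernel  = "/usr/lib/xen/boot/hvmloader"
    memory  = 2048
    vcpus   = 2
    # whole LUN passed through as a raw physical device
    disk    = [ 'phy:/dev/mapper/vlun1,hda,w' ]
    vif     = [ 'type=ioemu, bridge=xenbr0' ]
    boot    = "c"

The important part is the disk line: with phy: the blkback driver does I/O
directly against the device, so dom0's page cache should not be involved.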

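As for re-reading the device after a SAN-side restore: assuming the guest
is shut down and nothing else in dom0 holds the device open, you can flush
dom0's cached buffers for just that one device with blockdev(8), instead of
dropping every cache in the system:

    # flush the buffer cache of this one device (path is a placeholder)
    blockdev --flushbufs /dev/mapper/vlun1

    # or, more heavy-handed: write out dirty data, then drop all clean
    # page cache, dentries and inodes system-wide
    sync
    echo 3 > /proc/sys/vm/drop_caches
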
-- Pasi


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users