WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-users

[Xen-users] Re: [rhelv5-list] shared storage manual remount ...

To: "Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list" <rhelv5-list@xxxxxxxxxx>
Subject: [Xen-users] Re: [rhelv5-list] shared storage manual remount ...
From: Pasi Kärkkäinen <pasik@xxxxxx>
Date: Tue, 9 Feb 2010 12:44:53 +0200
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 09 Feb 2010 02:45:28 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <96b40cab1002080746k5c691d87td1debbc92b04343f@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <96b40cab1002031640t7d02c5bct631c37de00885f48@xxxxxxxxxxxxxx> <A3F763A5459FCC4EBC1C9DB41B646FD403530566@xxxxxxxxxxxxxxxxxxxxxxxx> <96b40cab1002080746k5c691d87td1debbc92b04343f@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.18 (2008-05-17)
On Mon, Feb 08, 2010 at 04:46:59PM +0100, Zoran Popović wrote:
>    Tell me what you would like to know about my environment - I was
>    trying to give all the relevant information, at least concerning this issue.
>    And, btw, echo 1 > /proc/sys/vm/drop_caches does the work I needed -
>    if I do this I get the results I need (and, for example, if I don't do it
>    after a snapshot restore on the storage, my HVM Windows guest usually starts
>    chkdsk during boot).

So you're saying you need drop_caches *with* phy for the other host to see the 
disk contents? 

-- Pasi
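
For reference, a minimal sketch of the sequence being discussed, assuming the
shared VLUN appears in dom0 as /dev/mapper/vlun1 (the device path is
hypothetical). It would be run in dom0 on the host that is about to start the
guest, while the guest is shut down everywhere:

  sync                                    # write out any dirty buffers first
  blockdev --flushbufs /dev/mapper/vlun1  # flush the buffer cache for this block device (util-linux)
  echo 3 > /proc/sys/vm/drop_caches       # drop clean cached pages (1 = page cache only, 3 also dentries/inodes)

drop_caches only discards clean pages, which is why sync comes first; Zoran
reports that echo 1 was already enough in his case.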

>    ZP.
> 
>    2010/2/8 Zavodsky, Daniel (GE Capital) <[1]daniel.zavodsky@xxxxxx>
> 
>      Hello,
>          I have tried this and it works here... caching is not used for phy:
>      devices, only buffering, but that is flushed frequently, so it is not a
>      problem. Maybe you should post some more info about your setup?
> 
>      Regards,
>          Daniel
> 
>    --------------------------------------------------------------------------
> 
>      From: [2]rhelv5-list-bounces@xxxxxxxxxx
>      [mailto:[3]rhelv5-list-bounces@xxxxxxxxxx] On Behalf Of Zoran Popović
>      Sent: Thursday, February 04, 2010 1:40 AM
>      To: Red Hat Enterprise Linux 5 (Tikanga) discussion mailing-list;
>      [4]xen-users@xxxxxxxxxxxxxxxxxxx
>      Subject: [rhelv5-list] shared storage manual remount ...
>      I am wondering if there is a way to solve the following problem. I
>      suppose the usual way is to set up a distributed file system with
>      locking mechanisms, as is possible with GFS and Red Hat Cluster
>      Suite or similar, but I am interested in doing some of this manually and
>      ONLY with raw devices (no file system), or simply in knowing some
>      general principles.
> 
>      The case: I have a VLUN (on an FC SAN) presented to two
>      servers, but mounted on only one host - to be more precise, used by a
>      Xen HVM guest as a raw physical phy: device. Then I shut this
>      guest down and bring it up manually on the second host - it can see the
>      changed images and make changes to the presented disks. Then I shut it
>      down there and bring it up again on the first host - BUT THEN this guest
>      (or host) doesn't see the changes made by the second system; it still
>      sees the disks exactly as they were when it left them.
> 
>      Or even better: if I bring the HVM guest up on a host, then shut it down,
>      restore its disks on the storage (I am using an HP EVA8400, restoring the
>      original disk from a snapshot - it does have redundant controllers, but
>      their caches must be in sync for sure), and then bring it up - it still
>      sees the disks as they were before the restore. But if I _RESTART_ the
>      host, it sees the restored disks correctly. Now, I am wondering why this
>      is happening, and whether it is possible to resync with the storage
>      somehow without a restart (I wouldn't like that in production! and on our
>      Windows systems this is possible). I've tried sync (but that just flushes
>      the buffer cache), and I haven't yet tried echo 3 > /proc/sys/vm/drop_caches
>      after that (I've only just come across some articles about it), so I am
>      not sure whether that would really invalidate the cache and help me. What
>      is the right way of doing this? Please, help ...
>      ZP.
>      _______________________________________________
>      rhelv5-list mailing list
>      [5]rhelv5-list@xxxxxxxxxx
>      [6]https://www.redhat.com/mailman/listinfo/rhelv5-list
> 
> References
> 
>    Visible links
>    1. mailto:daniel.zavodsky@xxxxxx
>    2. mailto:rhelv5-list-bounces@xxxxxxxxxx
>    3. mailto:rhelv5-list-bounces@xxxxxxxxxx
>    4. mailto:xen-users@xxxxxxxxxxxxxxxxxxx
>    5. mailto:rhelv5-list@xxxxxxxxxx
>    6. https://www.redhat.com/mailman/listinfo/rhelv5-list
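
For context, the phy: setup described in the original post corresponds roughly
to a classic xm domain configuration like the sketch below; the domain name and
device path are hypothetical, and only the disk line reflects what the thread
discusses:

  # hypothetical HVM guest config, e.g. /etc/xen/winguest (other HVM settings omitted)
  name    = "winguest"
  builder = "hvm"
  memory  = 2048
  # the whole VLUN handed to the guest as a raw block device via the phy: backend
  disk    = [ 'phy:/dev/mapper/vlun1,hda,w' ]

The question in the thread is whether data for that device can still sit stale
in dom0's cache when the guest is moved between hosts or the LUN is restored
from a snapshot underneath it.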



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users