WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

xen-users

Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/

To: DOGUET Emmanuel <Emmanuel.DOGUET@xxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Re: Xen Disk I/O performance vs native performance: Xen I/O is definitely super super super slow
From: Matthieu Patou <mat+Informatique.xen@xxxxxxxxx>
Date: Sat, 28 Feb 2009 15:32:22 +0300
Cc:
Delivery-date: Sat, 28 Feb 2009 04:33:01 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <7309E5BCEDC4DC4BA820EF9497269EAD0461B2AC@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <7309E5BCEDC4DC4BA820EF9497269EAD0461B244@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902120602t1be864acm684fbe6b8f0f18aa@xxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B246@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902122011x541c63eewe33fe0ef922cd0c9@xxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B24D@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7309E5BCEDC4DC4BA820EF9497269EAD0461B24F@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx><7207d96f0902132122n409d71ceg2c19e3ec70f52f45@xxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B292@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B295@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B2A3@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <E4953B65D9E5054AA6C227B410C56AA9D133@xxxxxxxxxxxxxxxxxxxxxxx> <7309E5BCEDC4DC4BA820EF9497269EAD0461B2AC@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1b3pre) Gecko/20090224 Shredder/3.0b3pre
On 02/27/2009 04:03 PM, DOGUET Emmanuel wrote:
             I have received my battery pack and the 512MB upgrade; now it's fine:



                                      Hardware RAID 5
                                         512MB cache
                                    Write cache (25/75)
                                          8 x 146GB

dom0 (1024MB, 1 cpu)      213 MB/s
domU ( 512MB, 1 cpu)      192 MB/s
domU (4096MB, 2 cpu)      249 MB/s

Many hardware RAID cards refuse to use the write cache when no battery is present, because they consider it unsafe (which is not wrong, to my mind).

There is also a filesystem issue, e.g. XFS vs ext3: XFS uses write barriers (if the underlying device supports them, which rules out LVM at least), so if you benchmark XFS against ext3 it will look light-years behind; remounting XFS with -o nobarrier gives the performance back. In short, barriers ensure that your metadata is *really* written to disk before anything is done to the real data, which seems to be an assumption journalled filesystems rely on. Last year there were some articles on LWN about this, and about whether barriers should be turned on by default for ext3.
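For reference, a minimal sketch of the remount and a quick sequential-write check with dd (the /mnt/data mount point is a placeholder, and the remount needs root and a real XFS filesystem; conv=fdatasync forces a flush to the device so the reported throughput is not just page-cache speed):

```shell
# Disable barriers on an XFS filesystem (placeholder mount point, needs root):
#   mount -o remount,nobarrier /mnt/data

# Minimal sequential-write benchmark; conv=fdatasync flushes data to the
# device before dd reports throughput, so the number reflects real disk speed:
dd if=/dev/zero of=/tmp/xen-disk-test bs=1M count=64 conv=fdatasync
rm /tmp/xen-disk-test
```

Run the same dd line inside dom0 and inside a domU to compare, as in the figures above.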

As a rule of thumb: always buy the battery if you intend to use the cache on your RAID controller.

Matthieu.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
