xen-users

Re: [Xen-users] Differences in performance between file and LVM based images

To: "Alex Iribarren" <Alex.Iribarren@xxxxxxx>
Subject: Re: [Xen-users] Differences in performance between file and LVM based images
From: "Andrew Warfield" <andrew.warfield@xxxxxxxxxxxx>
Date: Thu, 24 Aug 2006 09:51:16 -0700
Cc: xen-users@xxxxxxxxxxxxxxxxxxx, Julian Chesterfield <julian.chesterfield@xxxxxxxxxxxx>

Hi Alex,

  The reason that you are getting very fast throughput from file
backends is that the loopback driver buffers writes in the page cache
and will acknowledge them as complete to domU before they actually hit
the disk.  This is obviously unsafe, given that the guest ends up
believing that the disk is in a different state than it really is.  A
second issue with the loopback driver is that it combines poorly with
NFS, and can lead to a situation under heavy write load in which most
of dom0's memory becomes full of dirty pages and the Linux OOM killer
goes berserk and starts killing off random processes.
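
  You can see the effect from dom0 with a quick dd comparison on a
loopback-backed mount (the paths here are just placeholders):

# returns as soon as the data is sitting in dom0's page cache
time dd if=/dev/zero of=/mnt/loop/scratch bs=1M count=256
# conv=fsync makes dd call fsync() before exiting, so this one
# reflects the time for the data to actually reach the disk
time dd if=/dev/zero of=/mnt/loop/scratch bs=1M count=256 conv=fsync

The first command will report a rate far beyond what the disk can
sustain; the gap between the two is exactly the buffering described
above.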

  I seem to remember someone saying that they were looking into the
loopback safety issues, but I haven't heard anything directly with
regard to this in a while -- the heavy interactions with the Linux
virtual memory system make this a bit of a challenging one to sort
out. ;)

  You might want to take a look at the blktap driver code in the
unstable tree.  It's basically a userspace implementation of the block
backend driver, so file accesses are made from where the kernel
expects them to come -- above the VFS interface and associated with a
running process.  It probably won't be as fast as the loopback results
that you are seeing, but it should be reasonably high-performance and
safe.  If you turn up any bugs, we're happy to sort them out for you
-- the testing would be very welcome.
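
  With blktap, the only change to the VM config should be the disk
line: the 'file:' prefix becomes 'tap:aio:'.  Something like this, if
I remember the unstable-tree syntax correctly (reusing the image file
from your config below):

disk = [ 'tap:aio:/mnt/floppy/testdisk,sdc,w' ];

The 'phy:' entries are unaffected.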

Thanks!
a.

On 8/24/06, Alex Iribarren <Alex.Iribarren@xxxxxxx> wrote:
Hi all,

Nobody seems to want to do these benchmarks, so I went ahead and did
them myself. The results were pretty surprising, so keep reading. :)

-- Setup --
Hardware: 2x 3GHz Intel Woodcrest (dual core), Intel S5000PAL, 1x SATA
Western Digital WD1600YD-01N 160GB, 8GB RAM (dom0 using 2G)
Dom0 and DomU: Gentoo/x86/2006.0, gcc-3.4.6, glibc-2.3.6-r4,
2.6.16.26-xen i686, LVM compiled as a module
IOZone version: 3.242
Contents of VM config file:
name    = "gentoo";
memory  = 1024;
vcpus   = 4;

kernel  = "/boot/vmlinuz-2.6.16.26-xenU";
builder = "linux";

disk = [ 'phy:/dev/xenfs/gentoo,sda1,w', 'phy:/dev/xenfs/test,sdb,w',
'file:/mnt/floppy/testdisk,sdc,w' ];
root = "/dev/sda1 rw";

#vif = [ 'mac=aa:00:3e:8a:00:61' ];
vif = [ 'mac=aa:00:3e:8a:00:61, bridge=xenbr0' ];
dhcp = "dhcp";


-- Procedure --
I created a partition, an LVM volume and a file, each approx. 1GB, and
created ext3 filesystems on them with the default settings. I then ran
IOZone from dom0 on all three "devices" to get the reference values.
Next I booted my domU with the LVM volume and the file exported and
reran IOZone. All filesystems were recreated before running the
benchmark. Dom0 was idle while domU was running the benchmark, and no
VMs were running while I ran the benchmark on dom0.
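
For reference, the volume and backing file were created roughly like
this (names as in the config above; sizes and flags reconstructed from
memory, so treat this as a sketch):

lvcreate -L 1G -n test xenfs          # -> /dev/xenfs/test
dd if=/dev/zero of=/mnt/floppy/testdisk bs=1M count=1024
mkfs.ext3 /dev/xenfs/test
mkfs.ext3 -F /mnt/floppy/testdisk     # -F: target is a regular file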

IOZone was run with the following command line:
iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f <file to test>
This basically means that we want to run the test on a 900MB file using
256k as the record size. We want to test sequential write and rewrite
(-i0), sequential read and reread (-i1) and random write and read (-i2).
We want to get some random accesses (-K) during testing to make this a
bit more real-life. Also, we want to use synchronous writes (-o) and
take into account buffer flushes (-M).

-- Results --
The first three entries (* control) are the results for the benchmark
from dom0, so they give an idea of expected "native" performance (Part.
control) and the performance of using LVM or loopback devices. The last
two entries are the results as seen from within the domU.

"Device"          Write        Rewrite         Read           Reread
dom0 Part.    32.80 MB/s    35.92 MB/s    2010.32 MB/s    2026.11 MB/s
dom0 LVM      43.42 MB/s    51.64 MB/s    2008.92 MB/s    2039.40 MB/s
dom0 File     55.25 MB/s    65.20 MB/s    2059.91 MB/s    2052.45 MB/s
domU Part.    31.29 MB/s    34.85 MB/s    2676.16 MB/s    2751.57 MB/s
domU LVM      40.97 MB/s    47.65 MB/s    2645.21 MB/s    2716.70 MB/s
domU File    241.24 MB/s    43.58 MB/s    2603.91 MB/s    2684.58 MB/s

"Device"       Random read    Random write
dom0 Part.    2013.73 MB/s      26.73 MB/s
dom0 LVM      2011.68 MB/s      32.90 MB/s
dom0 File     2049.71 MB/s     192.97 MB/s
domU Part.    2723.65 MB/s      25.65 MB/s
domU LVM      2686.48 MB/s      30.69 MB/s
domU File     2662.49 MB/s      51.13 MB/s

According to these numbers, file-backed images are generally the
fastest of the three alternatives. I'm having a hard time understanding
how this can possibly be true, so I'll let the more knowledgeable
members of the mailing list enlighten us. My guess is that the extra
layers (LVM/loopback drivers/Xen) are caching writes and ignoring
IOZone when it asks to write synchronously. Regardless, it seems like
file-backed images are the way to go. Too bad, I prefer LVM...
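
A quick way to test that theory would be to time a synchronous write
from inside the domU on each device and compare it against the raw
dom0 partition numbers above (mount point hypothetical):

time dd if=/dev/zero of=/mnt/sdc/syncfile bs=256k count=400 oflag=sync

400 x 256k is 100MB; if the file-backed device reports much more than
the ~33 MB/s the partition managed from dom0, the O_SYNC semantics are
being absorbed by a cache somewhere along the stack.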

Cheers,
Alex



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
