Alex,
First, let me say "Thanks for doing this, and for sharing".
More comments below.
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> Alex Iribarren
> Sent: 24 August 2006 14:58
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] Differences in performance between
> file and LVM based images
>
> Hi all,
>
> Nobody seems to want to do these benchmarks, so I went ahead and did
> them myself. The results were pretty surprising, so keep reading. :)
>
> -- Setup --
> Hardware: 2x 3GHz Intel Woodcrest (dual core), Intel S5000PAL, 1x SATA
> Western Digital WD1600YD-01N 160GB, 8GB RAM (dom0 using 2G)
> Dom0 and DomU: Gentoo/x86/2006.0, gcc-3.4.6, glibc-2.3.6-r4,
> 2.6.16.26-xen i686, LVM compiled as a module
> IOZone version: 3.242
> Contents of VM config file:
> name = "gentoo";
> memory = 1024;
> vcpus = 4;
>
> kernel = "/boot/vmlinuz-2.6.16.26-xenU";
> builder = "linux";
>
> disk = [ 'phy:/dev/xenfs/gentoo,sda1,w', 'phy:/dev/xenfs/test,sdb,w',
> 'file:/mnt/floppy/testdisk,sdc,w' ];
> root = "/dev/sda1 rw";
>
> #vif = [ 'mac=aa:00:3e:8a:00:61' ];
> vif = [ 'mac=aa:00:3e:8a:00:61, bridge=xenbr0' ];
> dhcp = "dhcp";
>
>
> -- Procedure --
> I created a partition, an LVM volume and a file, all of approx. 1GB,
> and created ext3 filesystems on them with the default settings. I then
> ran IOZone from dom0 on all three "devices" to get the reference
> values. I booted my domU with the LVM and file exported and reran
> IOZone. All filesystems were recreated before running the benchmark.
> Dom0 was idle while domU was running the benchmark, and there were no
> VMs running while I ran the benchmark on dom0.
>
> IOZone was run with the following command line:
> iozone -KoMe -s900m -r256k -i0 -i1 -i2 -f <file to test>
> This means we run the test on a 900MB file with a 256KB record size,
> testing sequential write and rewrite (-i0), sequential read and reread
> (-i1), and random read and write (-i2). We also mix in some random
> accesses (-K) during testing to make this a bit more real-life, use
> synchronous writes (-o), and include flushes in the timings (-e); -M
> just records the machine's uname in the output.
>
> -- Results --
> The first three entries (dom0 *) are the results for the benchmark run
> from dom0, so they give an idea of expected "native" performance
> (dom0 Part.) and the performance of using LVM or loopback devices. The
> last three entries are the results as seen from within the domU.
>
> "Device" Write Rewrite Read Reread
> dom0 Part. 32.80 MB/s 35.92 MB/s 2010.32 MB/s 2026.11 MB/s
> dom0 LVM 43.42 MB/s 51.64 MB/s 2008.92 MB/s 2039.40 MB/s
> dom0 File 55.25 MB/s 65.20 MB/s 2059.91 MB/s 2052.45 MB/s
> domU Part. 31.29 MB/s 34.85 MB/s 2676.16 MB/s 2751.57 MB/s
> domU LVM 40.97 MB/s 47.65 MB/s 2645.21 MB/s 2716.70 MB/s
> domU File 241.24 MB/s 43.58 MB/s 2603.91 MB/s 2684.58 MB/s
>
> "Device" Random read Random write
> dom0 Part. 2013.73 MB/s 26.73 MB/s
> dom0 LVM 2011.68 MB/s 32.90 MB/s
> dom0 File 2049.71 MB/s 192.97 MB/s
> domU Part. 2723.65 MB/s 25.65 MB/s
> domU LVM 2686.48 MB/s 30.69 MB/s
> domU File 2662.49 MB/s 51.13 MB/s
>
> According to these numbers, file-based filesystems are generally the
> fastest of the three alternatives. I'm having a hard time
> understanding how this can possibly be true, so I'll let the more
> knowledgeable members of the mailing list enlighten us. My guess is
> that the extra layers (LVM/loopback drivers/Xen) are caching writes
> and ignoring IOZone when it asks for synchronous writes. Regardless,
> it seems like file-based filesystems are the way to go. Too bad, I
> prefer LVM...
Yes, you'll probably get file caching in Dom0 with the file-based
setup, which doesn't happen with the other setups.
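A quick way to check how much of this is Dom0's page cache, sketched
assuming a 2.6.16+ kernel (that's when /proc/sys/vm/drop_caches
appeared):

```shell
# How much page cache is Dom0 using right now?
grep '^Cached:' /proc/meminfo
# Flush dirty pages to disk (safe to run unprivileged):
sync
# As root, drop the clean page cache, dentries and inodes, then rerun
# IOZone; the read figures should fall back toward real disk speed:
#   echo 3 > /proc/sys/vm/drop_caches
```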
The following would also be interesting to test:
1. Test with a noticeably larger test area (say 10GB or so, well
   beyond Dom0's 2GB of RAM).
2. Test multiple domains simultaneously to see if the file-based
   approach is still the fastest in that scenario.
3. Test the new (unstable) blktap model.
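For point 3, only the disk line in the config file should need to
change; something like this (the tap:aio: prefix is my assumption
about the current xen-unstable syntax, and it needs the blktap driver
loaded in Dom0):

```
disk = [ 'phy:/dev/xenfs/gentoo,sda1,w', 'phy:/dev/xenfs/test,sdb,w',
         'tap:aio:/mnt/floppy/testdisk,sdc,w' ];
```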
--
Mats
>
> Cheers,
> Alex
>
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users