On Fri, Jul 30, 2010 at 12:28:58PM +0200, Felix Botner wrote:
> Hi
>
> some more information about the Xen DRBD performance problem (two servers;
> each server has two DRBD devices (protocol C, ext4) and is primary for one of
> them). All tests were run without a memory restriction for dom0. The network
> connection between the DRBD servers limits the throughput to ~230 MB/sec for
> each DRBD device (2 x 1000 Mbps cards, bond mode 0).
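>
> (For reference, the bond is set up roughly like this in
> /etc/network/interfaces; the interface names and address below are
> placeholders, not the real ones:
>
>   auto bond0
>   iface bond0 inet static
>       address 10.0.0.1
>       netmask 255.255.255.0
>       bond-slaves eth1 eth2
>       bond-mode balance-rr
>       bond-miimon 100
>
> bond mode 0 is balance-rr.)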
>
> The local drive is a hardware RAID10 with 12 x 300 GB (3 Gbps, 10k RPM SAS)
> hard disks. The kernel is the latest Debian/squeeze based kernel
> 2.6.32-5-amd64 (with
> http://svn.debian.org/wsvn/kernel/?op=comp&compare[]=%2Fdists%2Fsid@15998&compare[]=%2Fdists%2Fsid@16001)
>
> bonnie on the local drive with hypervisor
> ~ 400 MB/sec
>
> bonnie on the connected DRBD device with hypervisor
> ~ 170 MB/sec (bonnie on one DRBD device/server)
> ~ 110 MB/sec (bonnie on both DRBD devices/servers)
>
> bonnie on the connected DRBD device without hypervisor
> ~ 230 MB/sec (bonnie on one DRBD device/server)
> ~ 200 MB/sec (bonnie on both DRBD devices/servers)
>
> bonnie on the disconnected DRBD device with hypervisor
> ~ 300 MB/sec
>
> bonnie on the disconnected DRBD device without hypervisor
> ~ 360 MB/sec
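>
> For reference, each bonnie run looked roughly like this (mount point and
> user are placeholders; -s 60000 matches the 60000M in the result lines
> further down):
>
>   bonnie++ -d /mnt/drbd0 -s 60000 -u root -m server1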
>
> What interests me is the throughput when writing on both servers at once.
> With the Xen hypervisor I get 110 MB/sec on each DRBD device (220 MB/sec I/O
> throughput on each server, because DRBD writes both locally and remotely).
> Without the hypervisor I get 200 MB/sec on each DRBD device (400 MB/sec I/O
> throughput on each server, the maximum the I/O backend allows).
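>
> In other words, per server in the connected case with the hypervisor:
>
>   local bonnie writes:     110 MB/sec
>   incoming replica writes: 110 MB/sec
>   total backend I/O:       220 MB/sec
>
> versus 200 + 200 = 400 MB/sec per server without the hypervisor.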
>
> But even with only one DRBD device, the I/O is much better without the
> hypervisor (230 MB/sec without vs. 170 MB/sec with the hypervisor).
>
> The strange thing is, a DRBD resync gives me ~230 MB/sec with or without the
> hypervisor. And when I start one server without the hypervisor, bonnie gives
> me ~230 MB/sec on that server and on the remote server with the hypervisor
> (because DRBD also writes remotely).
>
> Any hints?
>
I guess DRBD performance mainly depends on the latency between the hosts,
since protocol C has to wait for the peer to acknowledge every write.
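
You could compare the round-trip times on the replication link with and
without the hypervisor, e.g. (the peer address below is just a placeholder):

  ping -c 100 -s 1400 10.0.0.2
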
Have you tried pinning the dom0 vcpus?
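For example (a sketch; which physical CPUs you pin to is your choice):

  # at boot time, appended to the xen line in menu.lst:
  #   dom0_max_vcpus=2 dom0_vcpus_pin
  # or at runtime, for a dom0 with two vcpus:
  xm vcpu-pin Domain-0 0 0
  xm vcpu-pin Domain-0 1 1
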
-- Pasi
> Thanks,
> Felix
>
> xm info
> host : samla
> release : 2.6.32-ucs9-xen-amd64
> version : #1 SMP Thu Jul 22 04:32:22 UTC 2010
> machine : x86_64
> nr_cpus : 8
> nr_nodes : 1
> cores_per_socket : 4
> threads_per_core : 1
> cpu_mhz : 2133
> hw_caps : bfebfbff:28100800:00000000:00000340:009ce3bd:00000000:00000001:00000000
> virt_caps : hvm
> total_memory : 24564
> free_memory : 22230
> node_to_cpu : node0:0-7
> node_to_memory : node0:22230
> xen_major : 3
> xen_minor : 4
> xen_extra : .3
> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler : credit
> xen_pagesize : 4096
> platform_params : virt_start=0xffff800000000000
> xen_changeset : unavailable
> cc_compiler : gcc version 4.3.2 (Debian 4.3.2-1.1.13.200909082302)
> cc_compile_by : root
> cc_compile_domain : [unknown]
> cc_compile_date : Thu Jun 17 14:54:34 UTC 2010
> xend_config_format : 4
>
> On Tuesday 20 July 2010 15:54:21, Felix Botner wrote:
> > On Tuesday 20 July 2010 14:35:30, Pasi Kärkkäinen wrote:
> > > On Tue, Jul 20, 2010 at 10:46:34AM +0200, Felix Botner wrote:
> > > > Hi everyone,
> > > >
> > > > I have two servers installed with a Debian/lenny based OS (64 bit), a
> > > > Debian/sid based kernel 2.6.32-xen-amd64 and Xen 3.4.3-4. Each server
> > > > has two DRBD devices (protocol C, formatted with ext4) and is primary
> > > > for one of them. Each DRBD pair has a dedicated network interface (a
> > > > bond mode 0 interface with two 1000 Mbps cards).
> > >
> > > <snip>
> > >
> > > > The I/O performance on the connected DRBD devices is significantly
> > > > worse if I boot the kernel with the Xen hypervisor (with "kernel
> > > > /boot/xen-3.4.3.gz"). Without the hypervisor (but with the same
> > > > kernel) the systems are about 50% faster.
> > >
> > > Are you measuring from dom0 or from a guest?
> >
> > From dom0; there are no guests at the moment.
> >
> > > > Why is there such a difference?
> > > > Can I optimize my xend (I already added dom0_mem=2048M dom0_max_vcpus=2
> > > > dom0_vcpus_pin as boot parameters, with no effect)?
> > > > Are there any known issues using Xen and bonding/DRBD?
> > > >
> > > > Feel free to ask for more information about the system or the setup.
> > >
> > > How much memory does your server have?
> > > I.e. how much RAM do you have when you boot it bare metal, without Xen?
> >
> > ~20 GB without Xen. Now I removed the hypervisor parameter dom0_mem=2048M
> > from menu.lst (in xend-config.sxp I set (dom0-min-mem 196) and
> > (enable-dom0-ballooning yes)), rebooted, and as far as I know there should
> > be no memory restriction for dom0 anymore. "xm list" shows the complete
> > memory for dom0:
> >
> > server1-> xm list
> > Name        ID   Mem VCPUs State  Time(s)
> > Domain-0     0 18323     2 r----- 1785.0
> >
> > server2-> xm list
> > Name        ID   Mem VCPUs State  Time(s)
> > Domain-0     0 22375     2 r----- 1754.2
> >
> > But bonnie++ still gives me bad results:
> >
> > server1,60000M,,,113444,30,55569,13,,,231555,23,622.6,0,16,25634,70,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
> > server2,60000M,,,114014,31,53864,13,,,243541,27,617.4,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
> >
> > So I don't think caching is the issue, right?
> >
> > bye
> >
> > Felix
> >
> > > Remember, all the free memory of the host/dom0 will be used by the
> > > Linux page cache. So if you limit dom0 to 2 GB, it'll have less cache
> > > than in the bare-metal case.
> > >
> > > -- Pasi
>
>
> --
> Felix Botner
>
> Open Source Software Engineer
>
> Univention GmbH
> Linux for your business
> Mary-Somerville-Str.1
> 28359 Bremen
> Tel. : +49 421 22232-0
> Fax : +49 421 22232-99
>
> botner@xxxxxxxxxxxxx
> http://www.univention.de
>
> Managing Director: Peter H. Ganten
> Commercial register: HRB 20755, Amtsgericht Bremen
> Tax no.: 71-597-02876
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users