WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-devel

RE: [Xen-devel] Weird network performance behaviour?

To: "Santos, Jose Renato G" <joserenato.santos@xxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Ben Guthro <bguthro@xxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Weird network performance behaviour?
From: "Fischer, Anna" <anna.fischer@xxxxxx>
Date: Mon, 21 Apr 2008 19:47:52 +0000
Accept-language: en-US
Cc: "'xen-devel@xxxxxxxxxxxxxxxxxxx'" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 21 Apr 2008 12:49:40 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C7B67062D31B9E459128006BAAD0DC3D07F4E00D0A@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <480CCBB5.3000308@xxxxxxxxxxxxxxx> <C4328B9D.16E07%keir.fraser@xxxxxxxxxxxxx> <C7B67062D31B9E459128006BAAD0DC3D07F4E00CA4@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C7B67062D31B9E459128006BAAD0DC3D07F4E00D0A@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acij1BuCWgvkHg/HEd2FCAAWy6hiGQAAS91QAACmeDAABAYbAA==
Thread-topic: [Xen-devel] Weird network performance behaviour?

Yes, very true. When explicitly pinning all DomUs to the second CPU, all guests achieve the same (better) performance. Very well spotted, many thanks!

 

Anna

 

From: Santos, Jose Renato G
Sent: 21 April 2008 18:50
To: Santos, Jose Renato G; Keir Fraser; Ben Guthro; Fischer, Anna
Cc: 'xen-devel@xxxxxxxxxxxxxxxxxxx'
Subject: RE: [Xen-devel] Weird network performance behaviour?

 

Anna,

 

Looking closely at the xm output, it seems you are using the sedf scheduler. This scheduler is not capable of moving VCPUs to different CPUs, and this is the reason for your bad performance. You probably have VM1, VM3 and VM5 mapped to CPU1, and VM2, VM4 and VM6 mapped to CPU0, which is shared with dom0.

You can either pin your VMs to the right CPU or upgrade to a more recent version of Xen that supports the "credit" scheduler, which can move VCPUs across physical CPUs.
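For example, the pinning can be done from Dom0 roughly like this (a sketch only, assuming the domain names vm1-vm6 from the configs below, that dom0 should stay on CPU 0, and the Xen 3.0 syntax "xm vcpu-pin <domain> <vcpu> <cpu>"):

# keep dom0's VCPUs on CPU 0 (assumption: dom0 stays off the guests' CPU)
xm vcpu-pin Domain-0 0 0
xm vcpu-pin Domain-0 1 0
# pin each guest's VCPU 0 to the second physical CPU (CPU 1)
for d in vm1 vm2 vm3 vm4 vm5 vm6; do
    xm vcpu-pin $d 0 1
done
# verify the resulting placement (look at the CPU column)
xm vcpu-list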

 

Regards

 


From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Santos, Jose Renato G
Sent: Monday, April 21, 2008 10:37 AM
To: Keir Fraser; Ben Guthro; Fischer, Anna
Cc: 'xen-devel@xxxxxxxxxxxxxxxxxxx'
Subject: RE: [Xen-devel] Weird network performance behaviour?

Hmm! Not sure if this is the reason for Anna's problem. She says she only has traffic for one VM at a time, so the priority for handling events should not matter too much, as there should be only one event at a time when the other VMs are idle (unless there is some other workload on the VMs that we are not aware of).

The fact that the CPU utilization is 98% in the bad cases is suspicious. It seems to indicate that dom0 and the guest are sharing the same CPU and cannot make use of the second CPU; not sure though what would cause this, as the credit scheduler should be able to move the VCPUs if it finds a CPU idle.

It may be worth checking the mapping of VCPUs to physical CPUs to see if they are running on the same CPU for some unknown reason.
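One quick way to check that from Dom0 (a sketch; assumes this Xen build's xm supports vcpu-list):

# show which physical CPU each VCPU is currently running on
xm vcpu-list
# repeat while the netperf test is running to see whether anything migrates
watch -n 1 xm vcpu-list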

 

Renato

 

 


From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Keir Fraser
Sent: Monday, April 21, 2008 10:21 AM
To: Ben Guthro; Fischer, Anna
Cc: 'xen-devel@xxxxxxxxxxxxxxxxxxx'
Subject: Re: [Xen-devel] Weird network performance behaviour?

There was an issue with dom0 servicing event channels always from port 0 upwards, rather than doing something more fair. That is fixed in linux-2.6.18-xen.hg, which does round-robin servicing of event channels.

There were also some scheduler tweaks suggested in the work that you are referencing, but the event-channel servicing order in dom0 was the biggest win by quite some margin.
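A quick way to check what a given box is actually running (a sketch; the relevant fields also appear in the xm info output further down this thread):

# dom0 kernel version (the round-robin servicing is in linux-2.6.18-xen.hg)
uname -r
# hypervisor version and changeset
xm info | grep -E 'release|xen_major|xen_minor|xen_extra|xen_changeset'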

 -- Keir

On 21/4/08 18:15, "Ben Guthro" <bguthro@xxxxxxxxxxxxxxx> wrote:

Older versions of Xen suffered from an interrupt-starving problem, I believe. Others please correct me if I've misunderstood this.
I believe there was a talk about this at the last Xen summit.

That is, domains that were started first were serviced first when Xen went to service the interrupts.
As a result, systems stacked with many guests tended to get starved for both disk and network requests under high-load scenarios.
A different scheduler was later introduced, though I'm not sure when (sometime between 3.0.4 and 3.1, perhaps).

It's strange that you are seeing it on odd/even numbers like this... but perhaps it has something to do with this?

Fischer, Anna wrote:


I'm recording TCP network throughput performance between the Dom0 IP stack and my guest domains (currently 6). All guests are running simultaneously, but I only ever transmit between Dom0 and one of the DomUs while the other five are idle.

When running a netperf TCP_STREAM test (netperf on Dom0, netserver on DomU), I record the following performance numbers, with CPU % as reported by xentop (average of Dom0+DomU when transmitting packets):

VM 1: ~725Mbit/s, ~177 % CPU utilization (Dual Core CPU, so max is 200%)
VM 3: ~713Mbit/s, ~176 % CPU utilization (Dual Core CPU, so max is 200%)
VM 5: ~726Mbit/s, ~175 % CPU utilization (Dual Core CPU, so max is 200%)

VM 2: ~543Mbit/s, ~98 % CPU utilization (Dual Core CPU, so max is 200%)
VM 4: ~491Mbit/s, ~99 % CPU utilization (Dual Core CPU, so max is 200%)
VM 6: ~485Mbit/s, ~98 % CPU utilization (Dual Core CPU, so max is 200%)

You can see that VMs 1, 3 and 5 achieve higher throughput than VMs 2, 4 and 6, but use more CPU while doing so. All VMs have exactly the same configuration, and all VM VIFs are configured in the same way. There's no packet filtering or rate limiting set. I use Xen 3.0.2 (x86_64) in bridged mode; the Dom0 kernel is a 2.6.16.13 SLES 10 Linux kernel, and the DomU runs a PV 2.6.16.13 kernel. I haven't pinned VM VCPUs to CPUs and haven't specified any additional scheduling options.
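For reference, the kind of commands involved in each measurement (a sketch, not necessarily the exact invocation used; DOMU_IP and the 60-second test length are placeholders):

# in the DomU: start the netperf receiver
netserver
# in Dom0: run one TCP_STREAM test against a single guest at a time
DOMU_IP=192.0.2.10   # placeholder; substitute the DomU's address
netperf -H $DOMU_IP -t TCP_STREAM -l 60
# in a second Dom0 shell: watch per-domain CPU usage while the test runs
xentop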

Can anyone explain to me why I'm seeing different behaviours across these VMs that are all configured in the same way? For some more configuration details, please see the attached output below.

Many thanks,
Anna


--------------------------------------------------

cat vm1.cfg
kernel = "/boot/vmlinuz-domU"
ramdisk = "/boot/initrd-domU"
root = "/dev/sda1 ro"
memory = "384"
name = "vm1"
vif = [ 'vifname=v1, mac=00:50:56:88:89:90' ]
disk = [ 'phy:/dev/vg_2_1/xen_9,sda1,w' ]


ifconfig
eth0      Link encap:Ethernet  HWaddr 00:13:21:1F:9D:03

          inet addr:16.25.159.80  Bcast:16.25.159.95  Mask:255.255.255.224

          inet6 addr: fe80::213:21ff:fe1f:9d03/64 Scope:Link

          UP BROADCAST NOTRAILERS RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:75660694 errors:0 dropped:0 overruns:0 frame:0

          TX packets:176286583 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:4995773877 (4764.3 Mb)  TX bytes:259365749835 (247350.4 Mb)



eth1      Link encap:Ethernet  HWaddr 00:13:21:1F:9D:04

          inet addr:16.25.165.100  Bcast:16.25.165.127  Mask:255.255.255.224

          inet6 addr: fe80::213:21ff:fe1f:9d04/64 Scope:Link

          UP BROADCAST NOTRAILERS RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:494 errors:0 dropped:0 overruns:0 frame:0

          TX packets:216 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:38343 (37.4 Kb)  TX bytes:20781 (20.2 Kb)

          Interrupt:17



lo        Link encap:Local Loopback

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:16436  Metric:1

          RX packets:312 errors:0 dropped:0 overruns:0 frame:0

          TX packets:312 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:22504 (21.9 Kb)  TX bytes:22504 (21.9 Kb)



peth0     Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link

          UP BROADCAST RUNNING NOARP MULTICAST  MTU:1500  Metric:1

          RX packets:236112 errors:0 dropped:0 overruns:0 frame:0

          TX packets:2423885 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:16925277 (16.1 Mb)  TX bytes:2436117866 (2323.2 Mb)

          Interrupt:18



v1        Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:27711731 errors:0 dropped:0 overruns:0 frame:0

          TX packets:56427200 errors:0 dropped:2654 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:1830275087 (1745.4 Mb)  TX bytes:80349454921 (76627.2 Mb)



v2        Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:4616724 errors:0 dropped:0 overruns:0 frame:0

          TX packets:14384939 errors:0 dropped:2014632 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:304703509 (290.5 Mb)  TX bytes:21488056045 (20492.6 Mb)



v3        Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:10927648 errors:0 dropped:0 overruns:0 frame:0

          TX packets:23017367 errors:0 dropped:2854 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:721559417 (688.1 Mb)  TX bytes:32445403841 (30942.3 Mb)



v4        Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:18927006 errors:0 dropped:0 overruns:0 frame:0

          TX packets:54413371 errors:0 dropped:2102067 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:1249197313 (1191.3 Mb)  TX bytes:82154159961 (78348.3 Mb)



v5        Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:9124320 errors:0 dropped:0 overruns:0 frame:0

          TX packets:18253603 errors:0 dropped:2614 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:602303117 (574.4 Mb)  TX bytes:26133009866 (24922.3 Mb)



v6        Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:4119062 errors:0 dropped:0 overruns:0 frame:0

          TX packets:12586116 errors:0 dropped:914780 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:271863493 (259.2 Mb)  TX bytes:19006089826 (18125.6 Mb)



vif0.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:176286596 errors:0 dropped:0 overruns:0 frame:0

          TX packets:75660701 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:259365787844 (247350.4 Mb)  TX bytes:4995774405 (4764.3 Mb)



xenbr0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF

          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:124686 errors:0 dropped:0 overruns:0 frame:0

          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:6268168 (5.9 Mb)  TX bytes:468 (468.0 b)


xm info
release                : 2.6.16.13-4-xen
version                : #1 SMP Wed May 3 04:53:23 UTC 2006
machine                : x86_64
nr_cpus                : 2
nr_nodes               : 1
sockets_per_node       : 2
cores_per_socket       : 1
threads_per_core       : 1
cpu_mhz                : 2605
hw_caps                : 078bfbff:e3d3fbff:00000000:00000010:00000001
total_memory           : 8024
free_memory            : 2
max_free_memory        : 5413
xen_major              : 3
xen_minor              : 0
xen_extra              : .2_09656-4
xen_caps               : xen-3.0-x86_64
platform_params        : virt_start=0xffff800000000000
xen_changeset          : 09656
cc_compiler            : gcc version 4.1.0 (SUSE Linux)
cc_compile_by          : abuild
cc_compile_domain      : suse.de
cc_compile_date        : Tue May  2 11:18:44 UTC 2006


xm dmesg

 http://www.cl.cam.ac.uk/netos/xen

 University of Cambridge Computer Laboratory



 Xen version 3.0.2_09656-4 (abuild@xxxxxxx) (gcc version 4.1.0 (SUSE Linux)) Tue May  2 11:18:44 UTC 2006

 Latest ChangeSet: 09656



(XEN) Command line: /xen.gz  noreboot

(XEN) Physical RAM map:

(XEN)  0000000000000000 - 000000000009f400 (usable)

(XEN)  000000000009f400 - 00000000000a0000 (reserved)

(XEN)  00000000000f0000 - 0000000000100000 (reserved)

(XEN)  0000000000100000 - 00000000f57f6800 (usable)

(XEN)  00000000f57f6800 - 00000000f5800000 (ACPI data)

(XEN)  00000000fdc00000 - 00000000fdc01000 (reserved)

(XEN)  00000000fdc10000 - 00000000fdc11000 (reserved)

(XEN)  00000000fec00000 - 00000000fec01000 (reserved)

(XEN)  00000000fec10000 - 00000000fec11000 (reserved)

(XEN)  00000000fec20000 - 00000000fec21000 (reserved)

(XEN)  00000000fee00000 - 00000000fee10000 (reserved)

(XEN)  00000000ff800000 - 0000000100000000 (reserved)

(XEN)  0000000100000000 - 00000001fffff000 (usable)

(XEN) System RAM: 8023MB (8216144kB)

(XEN) Xen heap: 14MB (14348kB)

(XEN) Using scheduler: Simple EDF Scheduler (sedf)

(XEN) found SMP MP-table at 000f4fa0

(XEN) DMI 2.3 present.

(XEN) Using APIC driver default

(XEN) ACPI: RSDP (v002 HP                                    ) @ 0x00000000000f4f20

(XEN) ACPI: XSDT (v001 HP     A02      0x00000002  0x0000162e) @ 0x00000000f57f6be0

(XEN) ACPI: FADT (v003 HP     A02      0x00000002  0x0000162e) @ 0x00000000f57f6c60

(XEN) ACPI: MADT (v001 HP     00000083 0x00000002  0x00000000) @ 0x00000000f57f6900

(XEN) ACPI: SPCR (v001 HP     SPCRRBSU 0x00000001  0x0000162e) @ 0x00000000f57f69e0

(XEN) ACPI: SRAT (v001 HP     A02      0x00000001  0x00000000) @ 0x00000000f57f6a60

(XEN) ACPI: DSDT (v001 HP         DSDT 0x00000001 MSFT 0x02000001) @ 0x0000000000000000

(XEN) ACPI: Local APIC address 0xfee00000

(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)

(XEN) Processor #0 15:5 APIC version 16

(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)

(XEN) Processor #1 15:5 APIC version 16

(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] disabled)

(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] disabled)

(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] disabled)

(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] disabled)

(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)

(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)

(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])

(XEN) ACPI: IOAPIC (id[0x04] address[0xfec00000] gsi_base[0])

(XEN) IOAPIC[0]: apic_id 4, version 17, address 0xfec00000, GSI 0-23

(XEN) ACPI: IOAPIC (id[0x05] address[0xfec10000] gsi_base[24])

(XEN) IOAPIC[1]: apic_id 5, version 17, address 0xfec10000, GSI 24-27

(XEN) ACPI: IOAPIC (id[0x06] address[0xfec20000] gsi_base[28])

(XEN) IOAPIC[2]: apic_id 6, version 17, address 0xfec20000, GSI 28-31

(XEN) ACPI: IOAPIC (id[0x07] address[0xfdc00000] gsi_base[32])

(XEN) IOAPIC[3]: apic_id 7, version 17, address 0xfdc00000, GSI 32-35

(XEN) ACPI: IOAPIC (id[0x08] address[0xfdc10000] gsi_base[36])

(XEN) IOAPIC[4]: apic_id 8, version 17, address 0xfdc10000, GSI 36-39

(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)

(XEN) ACPI: IRQ0 used by override.

(XEN) ACPI: IRQ2 used by override.

(XEN) Enabling APIC mode:  Flat.  Using 5 I/O APICs

(XEN) Using ACPI (MADT) for SMP configuration information

(XEN) Initializing CPU#0

(XEN) Detected 2605.971 MHz processor.

(XEN) CPU0: AMD Flush Filter disabled

(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)

(XEN) CPU: L2 Cache: 1024K (64 bytes/line)

(XEN) Intel machine check architecture supported.

(XEN) Intel machine check reporting enabled on CPU#0.

(XEN) CPU0: AMD Opteron(tm) Processor 252 stepping 01

(XEN) Booting processor 1/1 eip 90000

(XEN) Initializing CPU#1

(XEN) CPU1: AMD Flush Filter disabled

(XEN) CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)

(XEN) CPU: L2 Cache: 1024K (64 bytes/line)

(XEN) AMD: Disabling C1 Clock Ramping Node #0

(XEN) AMD: Disabling C1 Clock Ramping Node #1

(XEN) Intel machine check architecture supported.

(XEN) Intel machine check reporting enabled on CPU#1.

(XEN) CPU1: AMD Opteron(tm) Processor 252 stepping 01

(XEN) Total of 2 processors activated.

(XEN) ENABLING IO-APIC IRQs

(XEN)  -> Using new ACK method

(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=0 pin2=0

(XEN) checking TSC synchronization across 2 CPUs: passed.

(XEN) Platform timer is 1.193MHz PIT

(XEN) Brought up 2 CPUs

(XEN) Machine check exception polling timer started.

(XEN) Using IPI Shortcut mode

(XEN) *** LOADING DOMAIN 0 ***

(XEN) Domain 0 kernel supports features = { 0000000f }.

(XEN) Domain 0 kernel requires features = { 00000000 }.

(XEN) PHYSICAL MEMORY ARRANGEMENT:

(XEN)  Dom0 alloc.:   000000000e000000->0000000010000000 (1984486 pages to be allocated)

(XEN) VIRTUAL MEMORY ARRANGEMENT:

(XEN)  Loaded kernel: ffffffff80100000->ffffffff80464088

(XEN)  Init. ramdisk: ffffffff80465000->ffffffff80be9200

(XEN)  Phys-Mach map: ffffffff80bea000->ffffffff81b1df30

(XEN)  Start info:    ffffffff81b1e000->ffffffff81b1f000

(XEN)  Page tables:   ffffffff81b1f000->ffffffff81b30000

(XEN)  Boot stack:    ffffffff81b30000->ffffffff81b31000

(XEN)  TOTAL:         ffffffff80000000->ffffffff81c00000

(XEN)  ENTRY ADDRESS: ffffffff80100000

(XEN) Dom0 has maximum 2 VCPUs

(XEN) Initrd len 0x784200, start at 0xffffffff80465000

(XEN) Scrubbing Free RAM: ..................................................................................done.

(XEN) Xen trace buffers: disabled

(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen).

(XEN) mtrr: type mismatch for f6000000,800000 old: uncachable new: write-combining

(XEN) mtrr: type mismatch for f6000000,800000 old: uncachable new: write-combining

  
 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel