To: Steven Smith <sos22-xen@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Paravirtualised drivers for fully virtualised domains, rev9
From: Steve Dobbelstein <steved@xxxxxxxxxx>
Date: Thu, 10 Aug 2006 16:48:51 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, sos22@xxxxxxxxxxxxx, xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20060810110838.GA4114@xxxxxxxxx>
Steven Smith <sos22-xen@xxxxxxxxxxxxx> wrote on 08/10/2006 06:08:38 AM:

> I just put a new version of the PV-on-HVM patches up at
> http://www.cl.cam.ac.uk/~sos22/pv-on-hvm/rev9 .  These are against
> 10968:51c227428166, as before.  Hopefully, the problems some people
> have been having with network access from paravirtualised domains and
> domains becoming zombies are now fixed.
>
> Thanks to everyone who submitted bug reports on these.

Hi, Steve.

Thought I'd share my findings so far with rev9.

The good news is that I don't get zombies anymore.  The bad news is that
I'm still getting very poor network performance running netperf, worse than
that of a fully virtualized domain.  I thought something was wrong with my
test setup when I was testing rev8, but the setup checks out and the
results are repeatable.

Here is what I have found so far in trying to chase down the cause of the
slowdown.

The qemu-dm process is consuming 99.9% of a CPU in dom0.  I ran Xenoprof
to see which functions are chewing up the most time.  Here are the first
several lines of output from the report:

samples  %        image name                app name                  symbol name
1316786  17.1956  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up system_call
1243487  16.2385  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up do_select
492967    6.4376  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up do_gettimeofday
467692    6.1075  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up sys_select
376844    4.9211  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up fget
330483    4.3157  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up sys_clock_gettime
291153    3.8021  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up ktime_get_ts
291098    3.8014  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up memset
249732    3.2612  xen-unstable-syms         xen-unstable-syms         write_cr3
195102    2.5478  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up fget_light
190663    2.4898  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up __kmalloc
183748    2.3995  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up tty_poll
152136    1.9867  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up copy_user_generic
129317    1.6887  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up tun_chr_poll
115066    1.5026  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up getnstimeofday
94228     1.2305  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up wait_for_completion_interruptible
85598     1.1178  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up copy_from_user
83495     1.0903  qemu-dm                   qemu-dm                   qemu_run_timers
82606     1.0787  xen-unstable-syms         xen-unstable-syms         syscall_enter
82507     1.0774  xen-unstable-syms         xen-unstable-syms         FLT2
76960     1.0050  qemu-dm                   qemu-dm                   main_loop_wait
71759     0.9371  xen-unstable-syms         xen-unstable-syms         toggle_guest_mode
47744     0.6235  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up sys_read
44890     0.5862  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up pipe_poll
40506     0.5290  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up pty_chars_in_buffer
40210     0.5251  librt-2.4.so              librt-2.4.so              clock_gettime
37866     0.4945  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up normal_poll
35160     0.4591  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up tty_paranoia_check
34715     0.4533  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up poll_initwait
34225     0.4469  xen-unstable-syms         xen-unstable-syms         test_guest_events
32643     0.4263  xen-unstable-syms         xen-unstable-syms         restore_all_guest
31101     0.4061  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up posix_ktime_get_ts
29352     0.3833  qemu-dm                   qemu-dm                   DMA_run
27741     0.3623  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up vfs_read
27443     0.3584  papps1-syms               papps1-syms               (no symbols)
26663     0.3482  qemu-dm                   qemu-dm                   main_loop
26283     0.3432  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up kfree
24446     0.3192  xen-unstable-syms         xen-unstable-syms         __copy_from_user_ll
23117     0.3019  xen-unstable-syms         xen-unstable-syms         do_iret
22559     0.2946  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up fput
20354     0.2658  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up __wake_up_common
19516     0.2549  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up tty_ldisc_deref
19290     0.2519  xen-unstable-syms         xen-unstable-syms         test_all_events
18499     0.2416  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up rw_verify_area
17759     0.2319  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up __wake_up
13282     0.1734  xen-unstable-syms         xen-unstable-syms         create_bounce_frame
11968     0.1563  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up hypercall_page
11211     0.1464  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up set_normalized_timespec
11127     0.1453  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up sysret_check
10494     0.1370  xen-unstable-syms         xen-unstable-syms         FLT131
10467     0.1367  pxen1-syms                pxen1-syms                vmx_asm_vmexit_handler
9478      0.1238  libpthread-2.4.so         libpthread-2.4.so         __pthread_disable_asynccancel
9260      0.1209  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up copy_to_user
9222      0.1204  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up sync_buffer
8616      0.1125  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up tty_ldisc_ref_wait
7985      0.1043  oprofiled                 oprofiled                 odb_insert
6862      0.0896  libpthread-2.4.so         libpthread-2.4.so         __read_nocancel
6806      0.0889  xen-unstable-syms         xen-unstable-syms         FLT3
6676      0.0872  qemu-dm                   qemu-dm                   cpu_get_clock
6576      0.0859  pxen1-syms                pxen1-syms                vmx_load_cpu_guest_regs
6450      0.0842  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up evtchn_poll
6349      0.0829  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up pty_write_room
5978      0.0781  qemu-dm                   qemu-dm                   qemu_get_clock
5906      0.0771  pxen1-syms                pxen1-syms                resync_all
5745      0.0750  oprofiled                 oprofiled                 opd_process_samples
5738      0.0749  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up tty_ldisc_try
4943      0.0645  oprofiled                 oprofiled                 sfile_find
4803      0.0627  xen-unstable-syms         xen-unstable-syms         pit_read_counter
4194      0.0548  xen-unstable-syms         xen-unstable-syms         copy_from_user
4007      0.0523  pxen1-syms                pxen1-syms                vmx_store_cpu_guest_regs
3838      0.0501  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up n_tty_chars_in_buffer
3507      0.0458  xen-unstable-syms         xen-unstable-syms         FLT6
3501      0.0457  xen-unstable-syms         xen-unstable-syms         FLT4
3474      0.0454  oprofiled                 oprofiled                 pop_buffer_value
3436      0.0449  xen-unstable-syms         xen-unstable-syms         FLT11
3283      0.0429  vmlinux-2.6.16.13-xen0-up vmlinux-2.6.16.13-xen0-up poll_freewait
3260      0.0426  pxen1-syms                pxen1-syms                vmx_vmexit_handler

xen-unstable-syms is the Xen hypervisor running on behalf of dom0.
pxen1-syms is the Xen hypervisor running on behalf of the HVM domain.
vmlinux-2.6.16.13-xen0-up is the kernel running in dom0.

It appears that a lot of time is spent running timers and getting the
current time.  Not being familiar with the code, I am now crawling through
it to see how timers are handled and how the xen-vnif PV driver uses them.
I'm also looking for differences between rev2 and rev8, since the network
performance of rev2 was roughly equal to that of a PV domain.  Since you
know the code, you may well have a solution before I find the problem.
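For anyone else following along, here is a rough C sketch of the pattern I
believe is at work (illustrative only, not qemu's actual code; names like
sketch_timer are mine): a select()-based loop with a short timeout that
reads the clock and polls a timer list on every wakeup, which would account
for the heavy system_call/do_select/do_gettimeofday/sys_clock_gettime
counts above.

    /* Illustrative sketch only -- not qemu's actual code.  A select()-based
     * event loop that polls a timer list on every wakeup, roughly the shape
     * of main_loop_wait()/qemu_run_timers() in qemu's vl.c as I read it. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <sys/time.h>

    struct sketch_timer {             /* "sketch_timer" is my name, not qemu's */
        int64_t expire_ms;            /* absolute deadline */
        void (*cb)(void *opaque);
        void *opaque;
        struct sketch_timer *next;    /* sorted, soonest first */
    };

    static struct sketch_timer *active_timers;

    static int64_t clock_ms(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);      /* the do_gettimeofday hits */
        return (int64_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
    }

    static void run_expired_timers(void)
    {
        while (active_timers && active_timers->expire_ms <= clock_ms()) {
            struct sketch_timer *t = active_timers;
            active_timers = t->next;
            t->cb(t->opaque);         /* a callback may re-arm itself */
        }
    }

    static void demo_tick(void *opaque)
    {
        int *count = opaque;
        printf("tick %d\n", ++*count);
    }

    int main(void)
    {
        int count = 0;
        struct sketch_timer t = { clock_ms() + 50, demo_tick, &count, NULL };
        active_timers = &t;
        while (count < 1) {
            fd_set rfds;
            struct timeval timeout = { 0, 10 * 1000 };  /* 10 ms tick */
            FD_ZERO(&rfds);
            /* real code adds tap/tty/event-channel fds to rfds here */
            select(0, &rfds, NULL, NULL, &timeout);     /* the do_select hits */
            run_expired_timers();
        }
        return 0;
    }

Even an idle loop like this wakes up 100 times a second and reads the clock
at least once per wakeup; multiply that by the number of timers and file
descriptors qemu-dm watches and the profile above starts to make sense.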

Steve D.

P.S.  This just in from a test that was running while I typed the above.  I
noticed that qemu will start a "gui_timer" when VNC is not used.  I
normally run without graphics (nographic=1 in the domain config file).  I
changed the config file to use VNC, and the qemu-dm CPU utilization in dom0
dropped below 10%.  Network performance improved from 0.19 Mb/s to
9.75 Mb/s (still less than the 23.07 Mb/s of a fully virtualized domain).
It appears there is some interaction between the xen-vnif driver and the
qemu timer code.  I'm still exploring.
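For reference, the gui_timer pattern I'm referring to looks roughly like
this (again an illustrative C sketch under my assumptions, not qemu's
actual code): a refresh callback that re-arms itself on every firing, so if
the re-arm interval ever works out to nearly zero the timer degenerates
into a busy loop.

    /* Illustrative sketch of a self-re-arming refresh timer, loosely
     * modelled on gui_update() in qemu's vl.c, which, as I read it, re-arms
     * with qemu_mod_timer(gui_timer,
     *                     GUI_REFRESH_INTERVAL + qemu_get_clock(rt_clock)).
     * If the interval works out to ~0, the callback fires back-to-back. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/time.h>

    #define REFRESH_INTERVAL_MS 30    /* my stand-in for GUI_REFRESH_INTERVAL */

    static int64_t now_ms(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return (int64_t)tv.tv_sec * 1000 + tv.tv_usec / 1000;
    }

    static int64_t gui_deadline;      /* stands in for qemu's gui_timer */
    static int interval_ms = REFRESH_INTERVAL_MS;

    static void gui_update(void)
    {
        /* ... refresh the display here ... */
        gui_deadline = now_ms() + interval_ms;  /* re-arm; 0 => busy loop */
    }

    int main(void)
    {
        gui_deadline = now_ms();
        for (int fired = 0; fired < 3; fired++) {   /* demo: three refreshes */
            while (now_ms() < gui_deadline)
                ;                     /* a real loop would sleep in select() */
            gui_update();
            printf("refresh %d at %lld ms\n", fired, (long long)now_ms());
        }
        return 0;
    }

Set interval_ms to 0 and the demo pegs a CPU, which is the behaviour I'm
seeing from qemu-dm.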


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
