Re: [Xen-devel] open/stat64 syscalls run faster on Xen VM than standard Linux
BTW, the Xen I used is compiled from the 2.0-testing source tree. The changeset
shown in "xm dmesg" is:
# xm dmesg | more
[ASCII-art "Xen 2.0-testing" banner]
http://www.cl.cam.ac.uk/netos/xen
University of Cambridge Computer Laboratory
Xen version 2.0-testing (root@xxxxxxxxxxx) (gcc version 3.3.5 (Debian 1:3.3.5-13)) Sat Nov 26
20:01:57 CST 2005
Latest ChangeSet: Sat Aug 27 21:43:33 2005
9d3927f57bb21707d4b6f04ff2d8a4addc6f7d71
Thanks.
Xuehai
xuehai zhang wrote:
Anthony Liguori wrote:
This may just be the difference of having the extra level of block caching
that comes from using a loopback device.
Try running the same benchmark on a domain that uses an actual partition.
While the syscalls may appear to be faster, I imagine it's because the cost
of pulling in a block has already been paid, so the overall workload is
unaffected.
I created a new domU using a physical partition instead of a loopback file as
the backend of its VBDs, and I reran the "strace -c /bin/sh -c /bin/echo foo"
benchmark inside it. The results follow. Compared with the results for the
domU backed by loopback files that I reported in the previous email (quoted
below), the average times of the open/stat64 syscalls are very similar, and
still much smaller than the values for standard Linux. If open/stat64 run
faster on a domU with loopback-file VBDs because of the extra level of block
caching from the loopback device, why do they still run similarly fast on a
domU whose VBDs are physical partitions, where there is no such extra level
of caching? (A direct-timing sketch for cross-checking these numbers follows
the table below.)
XenLinux (physical partition as VBDs)
root@cctest1:~/c# strace -c /bin/sh -c /bin/echo foo
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 39.56    0.001955        1955         1           write
 18.94    0.000936         936         1           execve
  7.65    0.000378          24        16           old_mmap
  7.57    0.000374          42         9         2 open
  6.27    0.000310          52         6           read
  5.10    0.000252          84         3           munmap
  4.92    0.000243           9        26           brk
  1.92    0.000095          14         7           close
  1.78    0.000088           8        11           rt_sigaction
  1.40    0.000069          10         7           fstat64
  1.01    0.000050           8         6           rt_sigprocmask
  0.93    0.000046          23         2           access
  0.79    0.000039          13         3           uname
  0.69    0.000034          17         2           stat64
  0.38    0.000019          19         1           ioctl
  0.16    0.000008           8         1           getppid
  0.16    0.000008           8         1           getpgrp
  0.14    0.000007           7         1           time
  0.14    0.000007           7         1           getuid32
  0.14    0.000007           7         1           getgid32
  0.12    0.000006           6         1           getpid
  0.12    0.000006           6         1           getegid32
  0.10    0.000005           5         1           geteuid32
------ ----------- ----------- --------- --------- ----------------
100.00    0.004942                   109         2 total
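One way to take strace itself out of the measurement is to time the syscalls
directly with gettimeofday(), since ptrace stops add overhead to every traced
call and that overhead is not necessarily the same on XenLinux as on native
Linux. A minimal sketch follows; the path and iteration count are arbitrary
choices, and running it once under strace would also confirm whether glibc's
stat() enters the kernel as stat or stat64 on these systems.

/* time stat() and open()/close() in a loop, without strace, and report
 * the per-call average on each system for comparison. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/time.h>

#define ITERS 1000

int main(void)
{
    struct timeval t0, t1;
    struct stat st;
    long us;
    int i, fd;

    gettimeofday(&t0, NULL);
    for (i = 0; i < ITERS; i++)
        stat("/bin/echo", &st);             /* arbitrary test path */
    gettimeofday(&t1, NULL);
    us = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
    printf("stat : %ld usecs / %d calls = %ld usecs/call\n",
           us, ITERS, us / ITERS);

    gettimeofday(&t0, NULL);
    for (i = 0; i < ITERS; i++) {
        fd = open("/bin/echo", O_RDONLY);
        if (fd >= 0)
            close(fd);
    }
    gettimeofday(&t1, NULL);
    us = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
    printf("open : %ld usecs / %d calls = %ld usecs/call\n",
           us, ITERS, us / ITERS);
    return 0;
}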
Thanks.
Xuehai
xuehai zhang wrote:
Dear all,
When I debugged the execution performance of an application using strace, I
found that some system calls, such as open and stat64, run faster on XenLinux
than on standard Linux. The following is the output of running "strace -c
/bin/sh -c /bin/echo foo" on both systems. An open call takes 109 usecs on
average on standard Linux but only 41 usecs on XenLinux; a stat64 call takes
75 usecs on standard Linux but only 19 usecs on XenLinux.
The Xen VM runs on the same physical machine as the standard Linux, and it
uses loopback files in dom0 as the backends of its VBDs.
Any insight is highly appreciated. (A sketch for checking the cost of the
timer itself follows the two tables below.)
Thanks.
Xuehai
XenLinux:
# strace -c /bin/sh -c /bin/echo foo
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 39.05    0.001972        1972         1           write
 19.35    0.000977         977         1           execve
  7.74    0.000391          24        16           old_mmap
  7.23    0.000365          41         9         2 open
  6.06    0.000306          51         6           read
  5.17    0.000261          10        26           brk
  4.93    0.000249          83         3           munmap
  1.98    0.000100          14         7           close
  1.90    0.000096           9        11           rt_sigaction
  1.52    0.000077          11         7           fstat64
  1.01    0.000051           9         6           rt_sigprocmask
  0.95    0.000048          24         2           access
  0.81    0.000041          14         3           uname
  0.75    0.000038          19         2           stat64
  0.38    0.000019          19         1           ioctl
  0.18    0.000009           9         1           time
  0.18    0.000009           9         1           getppid
  0.16    0.000008           8         1           getpgrp
  0.16    0.000008           8         1           getuid32
  0.14    0.000007           7         1           getgid32
  0.12    0.000006           6         1           getpid
  0.12    0.000006           6         1           geteuid32
  0.12    0.000006           6         1           getegid32
------ ----------- ----------- --------- --------- ----------------
100.00    0.005050                   109         2 total
Standard Linux:
# strace -c /bin/sh -c /bin/echo foo
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 22.90    0.000982         109         9         2 open
 22.85    0.000980         980         1           execve
 10.87    0.000466          29        16           old_mmap
 10.45    0.000448         448         1           write
  7.06    0.000303          51         6           read
  6.67    0.000286          10        30           brk
  3.61    0.000155          78         2           access
  3.50    0.000150          75         2           stat64
  2.91    0.000125          42         3           munmap
  2.24    0.000096          14         7           close
  2.12    0.000091          13         7           fstat64
  1.84    0.000079           7        11           rt_sigaction
  1.03    0.000044           7         6           rt_sigprocmask
  0.72    0.000031          10         3           uname
  0.19    0.000008           8         1           geteuid32
  0.16    0.000007           7         1           time
  0.16    0.000007           7         1           getppid
  0.16    0.000007           7         1           getpgrp
  0.16    0.000007           7         1           getuid32
  0.14    0.000006           6         1           getpid
  0.14    0.000006           6         1           getgid32
  0.12    0.000005           5         1           getegid32
------ ----------- ----------- --------- --------- ----------------
100.00    0.004289                   112         2 total
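One more variable worth checking: every usecs/call figure above ultimately
rests on gettimeofday()-style timestamps, and the cost and granularity of the
timer itself may differ between XenLinux and native Linux. A minimal sketch
for comparing the timer on both systems; the iteration count is an arbitrary
choice.

/* measure the per-call cost of gettimeofday() and how often its reported
 * time actually advances between consecutive reads. */
#include <stdio.h>
#include <sys/time.h>

#define ITERS 100000

int main(void)
{
    struct timeval t0, t1, prev, cur;
    long us, changes;
    int i;

    /* average cost of one gettimeofday() call */
    gettimeofday(&t0, NULL);
    for (i = 0; i < ITERS; i++)
        gettimeofday(&cur, NULL);
    gettimeofday(&t1, NULL);
    us = (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec);
    printf("gettimeofday: ~%ld nsec/call\n", (us * 1000L) / ITERS);

    /* how often the reported time advances between consecutive reads */
    changes = 0;
    gettimeofday(&prev, NULL);
    for (i = 0; i < ITERS; i++) {
        gettimeofday(&cur, NULL);
        if (cur.tv_usec != prev.tv_usec || cur.tv_sec != prev.tv_sec)
            changes++;
        prev = cur;
    }
    printf("time advanced on %ld of %d consecutive reads\n", changes, ITERS);
    return 0;
}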
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel