xen-devel
Re: [Xen-devel] a question about popen() performance on domU
waitpid is called by pclose, as shown in the glibc source
code, so my original post questioning the performance of
popen should have taken pclose into account as well. A more
accurate statement of the question is: popen+pclose executes
faster on my VM than on my physical machine. The popen/pclose
benchmark I ran narrows the problem down to waitpid, which
somehow performs worse on the physical machine.
So I did a follow-up experiment to test fork and waitpid
performance on both machines. The program is a loop of a fork
call followed by a waitpid call. The source of the program
and the strace results are available at
http://people.cs.uchicago.edu/~hai/tmp/gt2gram/strace-fork/strace.txt.
The strace results confirm that waitpid costs more time on
the physical machine (154 usec/call) than on the VM (56
usec/call).
However, the program runs faster on the physical machine
(unlike the popen/pclose program), and the results suggest
that the fork syscall used on the VM costs more time than the
clone syscall used on the physical machine. This raises a
question: why does the physical machine use the clone syscall
rather than the fork syscall for the same program?
Because it's using the same glibc source! glibc says to use
_IO_fork(), which calls the fork syscall. Clone would probably do
the same thing, but for whatever good or bad reason, the author(s) of
this code chose to use fork. There may be good reasons, or no reason at
all, to do it this way; I couldn't say. I don't think it makes a whole
lot of difference whether the command executed by popen is actually
"doing something" rather than just returning immediately.
Mats,
I am not sure about your comment in the last sentence. Are you
suggesting that the command passed to popen should have no significant
effect on popen's performance?
Thanks.
Xuehai
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel