Christian Limpach wrote:
> Another possibility would be to integrate blktap/tapdisk into qemu,
> which would provide asynchronous completion events and hide the
> immediate AIO interaction from qemu. This should also make using qemu
> inside a stub domain easier.
Sounds like a very good idea indeed.
Do you fancy looking into this?
Unfortunately we've got some nasty blocker bugs left for
Fedora Core 6 which we're trying to track down first...
> The current bottleneck seems to be that MAX_MULT_COUNT is only 16.
Upon closer inspection of the code, this does not seem to be the
case for LBA48 transfers.
Any other ideas what could be the bottleneck then?
Probably scheduling latency. I'm running 2 VT domains on this
system, and both qemu-dm processes are taking up to 25% of the
CPU each, on a 3GHz system.
When running top inside the VT guest, a lot of CPU time is spent
in "hi" and "si" time, which is irq code being emulated by qemu-dm.
Of course, with qemu-dm taking this much CPU time, it'll have a
lower CPU priority and will not get scheduled immediately. Still
fast enough to have 10000+ context switches/second, but apparently
not quite fast enough for the VT guest to have decent performance
under heavy network traffic...
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel