Andy,
I put some profiling around calls to GrantAccess and EndAccess, and have
the following results:
XenNet TxBufferGC Count = 108351, Avg Time = 227989
XenNet TxBufferFree Count = 0, Avg Time = 0
XenNet RxBufferAlloc Count = 108353, Avg Time = 17349
XenNet RxBufferFree Count = 0, Avg Time = 0
XenNet ReturnPacket Count = 65231, Avg Time = 1106
XenNet RxBufferCheck Count = 108353, Avg Time = 124069
XenNet Linearize Count = 129024, Avg Time = 29333
XenNet SendPackets Count = 129024, Avg Time = 67107
XenNet SendQueuedPackets Count = 237369, Avg Time = 73055
XenNet GrantAccess Count = 194325, Avg Time = 25878
XenNet EndAccess Count = 194261, Avg Time = 27181
The time spent in GrantAccess and EndAccess is, I think, quite
significant in the scheme of things, especially as TxBufferGC and
RxBufferCheck (the two largest times) will each make multiple calls to
GrantAccess and EndAccess.
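For reference, the figures were gathered with a simple count/accumulate
wrapper around each call site. A minimal sketch of that kind of
instrumentation is below; the XenNet_GrantAccess name and signature are
stand-ins rather than the real code, and the counters are deliberately
unsynchronised (good enough for rough averages):

/* Sketch of the instrumentation behind the Count / Avg Time figures.
 * XenNet_GrantAccess and its signature are stand-ins. */
#include <ntddk.h>

typedef struct {
  ULONG Count;          /* number of calls */
  LONGLONG TotalTicks;  /* sum of performance-counter ticks */
} PROFILE_ENTRY;        /* Avg Time = TotalTicks / Count */

static PROFILE_ENTRY ProfGrantAccess;

ULONG XenNet_GrantAccess(PVOID context, PFN_NUMBER frame,
  BOOLEAN readonly); /* assumed prototype of the unprofiled routine */

static ULONG
Profiled_GrantAccess(PVOID context, PFN_NUMBER frame, BOOLEAN readonly)
{
  LARGE_INTEGER start, end;
  ULONG ref;

  start = KeQueryPerformanceCounter(NULL);
  ref = XenNet_GrantAccess(context, frame, readonly);
  end = KeQueryPerformanceCounter(NULL);

  /* not atomic; close enough for rough per-call averages */
  ProfGrantAccess.Count++;
  ProfGrantAccess.TotalTicks += end.QuadPart - start.QuadPart;
  return ref;
}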
What I'd like to do is implement a compromise between my previous buffer
management approach (lots of memory, but no allocate/grant per packet)
and your approach (minimal memory, but an allocate/grant per packet).
We would maintain a pool of packets and buffers, and grow and shrink
the pool dynamically, as follows (a rough sketch in code follows the
list):
. Create a freelist of packets and buffers.
. When we need a new packet or buffer and there is none on the
freelist, allocate it and grant the buffer.
. When we are done with them, put them back on the freelist.
. Track the minimum size each freelist reaches. If a freelist has
stayed above some threshold (32?) for some period (5 seconds?), then
free half of the items on it.
. Maybe keep a freelist per processor too, to avoid the need for
spinlocks where we are running at DISPATCH_LEVEL.
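Here's the rough sketch of what I have in mind. Names, thresholds and
the grant/ungrant helpers (AllocateAndGrantBuffer, UngrantAndFreeBuffer)
are placeholders, not existing XenNet functions:

#include <ndis.h>

typedef ULONG grant_ref_t;   /* stand-in for the Xen grant ref type */

typedef struct _BUF_ENTRY {
  struct _BUF_ENTRY *next;
  PVOID vaddr;               /* buffer virtual address */
  grant_ref_t gref;          /* granted once, reused while pooled */
} BUF_ENTRY;

typedef struct {
  NDIS_SPIN_LOCK lock;
  BUF_ENTRY *head;
  ULONG count;
  ULONG min_count;           /* low-water mark since last trim pass */
} BUF_FREELIST;

BUF_ENTRY *AllocateAndGrantBuffer(void);      /* placeholder */
void UngrantAndFreeBuffer(BUF_ENTRY *entry);  /* placeholder */

/* Get a buffer: reuse a pooled one if possible, else allocate+grant. */
static BUF_ENTRY *GetBuffer(BUF_FREELIST *fl)
{
  BUF_ENTRY *entry;

  NdisAcquireSpinLock(&fl->lock);
  entry = fl->head;
  if (entry) {
    fl->head = entry->next;
    fl->count--;
    if (fl->count < fl->min_count)
      fl->min_count = fl->count;
  }
  NdisReleaseSpinLock(&fl->lock);

  return entry ? entry : AllocateAndGrantBuffer();
}

/* Done with a buffer: put it back; the grant stays in place. */
static void PutBuffer(BUF_FREELIST *fl, BUF_ENTRY *entry)
{
  NdisAcquireSpinLock(&fl->lock);
  entry->next = fl->head;
  fl->head = entry;
  fl->count++;
  NdisReleaseSpinLock(&fl->lock);
}

/* Run from a periodic (5 second?) timer: if the list never dropped
 * below the threshold (32?) since the last pass, free half of it. */
static void TrimFreelist(BUF_FREELIST *fl)
{
  BUF_ENTRY *entry, *doomed = NULL;
  ULONG to_free;

  NdisAcquireSpinLock(&fl->lock);
  if (fl->min_count > 32) {
    for (to_free = fl->count / 2; to_free; to_free--) {
      entry = fl->head;
      fl->head = entry->next;
      fl->count--;
      entry->next = doomed;
      doomed = entry;
    }
  }
  fl->min_count = fl->count;
  NdisReleaseSpinLock(&fl->lock);

  /* ungrant and free outside the lock */
  while (doomed) {
    entry = doomed;
    doomed = doomed->next;
    UngrantAndFreeBuffer(entry);
  }
}

A per-processor variant would drop the spinlock entirely, since each
CPU would only touch its own list while at DISPATCH_LEVEL.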
I think that gives us a pretty good compromise between memory usage and
calls to allocate/grant/ungrant/free.
I was going to look at getting rid of the Linearize, but if we don't
Linearize then we have to call GrantAccess on the kernel-supplied
buffers, and I think a (max) 1500 byte memcpy is going to be cheaper
than a call to GrantAccess...
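To make the tradeoff concrete, the linearize path being discussed
amounts to something like the following (illustrative only, not the
actual XenNet Linearize; "dest" is assumed to be a buffer we have
already granted):

#include <ndis.h>

/* Copy an outgoing packet's NDIS buffer chain into one pre-granted,
 * contiguous buffer, so no per-packet GrantAccess of the
 * kernel-supplied buffers is needed. Returns bytes copied, 0 on
 * failure. */
static ULONG
LinearizePacket(PNDIS_PACKET packet, PUCHAR dest, ULONG dest_size)
{
  PNDIS_BUFFER buffer;
  PVOID va;
  UINT len, total;
  ULONG offset = 0;

  NdisGetFirstBufferFromPacketSafe(packet, &buffer, &va, &len, &total,
    NormalPagePriority);
  while (buffer) {
    NdisQueryBufferSafe(buffer, &va, &len, NormalPagePriority);
    if (!va || offset + len > dest_size)
      return 0;                    /* can't linearize; caller handles */
    NdisMoveMemory(dest + offset, va, len); /* the (max) ~1500 byte copy */
    offset += len;
    NdisGetNextBuffer(buffer, &buffer);
  }
  return offset;
}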
Thoughts?
James