This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] more profiling

To: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>, "Andy Grover" <andy.grover@xxxxxxxxxx>
Subject: RE: [Xen-devel] more profiling
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Sat, 1 Mar 2008 00:44:38 +1100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 29 Feb 2008 05:46:12 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <AEC6C66638C05B468B556EA548C1A77D0131AF31@trantor>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D0131AF31@trantor>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Ach6wj+maafveOiCSzitY54bCV1UyQAFngGQ
Thread-topic: [Xen-devel] more profiling
> What I'd like to do is implement a compromise between my previous
> management approach (used lots of memory, but no allocate/grant per
> packet) and your approach (uses minimum memory, but allocate/grant per
> packet). We would maintain a pool of packets and buffers, and grow and
> shrink the pool dynamically, as follows:
> . Create a freelist of packets and buffers
> . When we need a new packet or buffer, and there are none on the
> freelist, allocate them and grant the buffer.
> . When we are done with them, put them on the freelist
> . Keep a count of the minimum size of the freelists. If the free list
> has been greater than some value (32?) for some time (5 seconds?) then
> free half of the items on the list.
> . Maybe keep a freelist per processor too, to avoid the need for
> spinlocks where we are running at DISPATCH_LEVEL
> I think that gives us a pretty good compromise between memory usage
> and calls to allocate/grant/ungrant/free.
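
The pooling scheme described above can be sketched roughly as follows. This is only an illustration, not xennet code: the `alloc_and_grant` stub, the `pool_t` layout, and the trim constants are hypothetical stand-ins; in the real driver the allocate/grant step would go through the grant-table interface, and the trim would presumably run from a timer.

```c
#include <assert.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical stand-in for the real allocate+grant operation; in the
 * driver this would allocate a page and grant it to the backend. */
typedef struct buffer {
    struct buffer *next;
    int grant_ref;              /* pretend grant reference */
} buffer_t;

static int grants_issued;       /* counts allocate/grant operations */

static buffer_t *alloc_and_grant(void)
{
    buffer_t *b = malloc(sizeof(*b));
    b->next = NULL;
    b->grant_ref = grants_issued++;  /* stub for the grant call */
    return b;
}

typedef struct pool {
    buffer_t *freelist;
    int free_count;
    int min_free;               /* lowest free_count this interval */
    time_t last_trim;
} pool_t;

/* Get a buffer: reuse a pre-granted one from the freelist if
 * available, otherwise fall back to allocate+grant. */
static buffer_t *pool_get(pool_t *p)
{
    buffer_t *b = p->freelist;
    if (b) {
        p->freelist = b->next;
        p->free_count--;
        if (p->free_count < p->min_free)
            p->min_free = p->free_count;
        return b;
    }
    return alloc_and_grant();
}

/* Done with a buffer: back on the freelist, grant left intact. */
static void pool_put(pool_t *p, buffer_t *b)
{
    b->next = p->freelist;
    p->freelist = b;
    p->free_count++;
}

/* Periodic trim: if the freelist never dipped below the threshold
 * during the whole interval, free half of it (each free would also
 * end the corresponding grant in the real driver). The 32 / 5 s
 * values are the ones suggested above. */
#define TRIM_THRESHOLD 32
#define TRIM_INTERVAL  5        /* seconds */

static void pool_maybe_trim(pool_t *p, time_t now)
{
    if (now - p->last_trim < TRIM_INTERVAL)
        return;
    if (p->min_free > TRIM_THRESHOLD) {
        int to_free = p->free_count / 2;
        while (to_free-- > 0) {
            buffer_t *b = p->freelist;
            p->freelist = b->next;
            p->free_count--;
            free(b);            /* stub: would ungrant here too */
        }
    }
    p->min_free = p->free_count;
    p->last_trim = now;
}
```

Tracking the interval *minimum* (rather than the current size) is what keeps a brief dip from defeating the shrink heuristic, and a per-processor instance of `pool_t` would let `pool_get`/`pool_put` run lock-free at DISPATCH_LEVEL as suggested.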

I have implemented something like the above: a 'page pool', which is a
list of pre-granted pages. This drops the time spent in TxBufferGC and
SendQueuedPackets by 30-50%. A good start, I think, although there
doesn't appear to be much improvement in the iperf results, maybe only
a marginal one.

It's time for sleep now, but when I get a chance I'll add the same logic
to the receive path, and clean it up so xennet can unload properly
(currently it leaks and/or crashes on unload).

