

[Xen-devel] Re: RX_COPY_THRESHOLD in netfront

To: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: RX_COPY_THRESHOLD in netfront
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Sat, 12 Aug 2006 10:43:18 +0100
Cc: Xen Development Mailing List <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sat, 12 Aug 2006 02:52:05 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20060812002919.GA17238@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Aca98712/CCE4CnmEduxlQANk04WTA==
Thread-topic: RX_COPY_THRESHOLD in netfront
User-agent: Microsoft-Entourage/
On 12/8/06 1:29 am, "Herbert Xu" <herbert@xxxxxxxxxxxxxxxxxxx> wrote:

> I do concede that we're wasting effort in repeatedly initialising skbs
> that we throw away in the case of jumbo packets.  We can remove that
> waste by maintaining our own list of unused skbs that we can simply
> put back on the ring in network_alloc_rx_buffers without going through
> alloc_skb again.
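
A minimal sketch of the recycle list Herbert describes might look like the
following. The helper names and the standalone free list are hypothetical
(in practice the list would live in the per-device netfront_info, be set up
once with skb_queue_head_init(), and the unlocked __skb_queue helpers
assume the caller serialises access, as the refill path does):

#include <linux/skbuff.h>

#define RX_COPY_THRESHOLD 256           /* netfront's copy-break size */

/* Initialise once with skb_queue_head_init(&rx_free_list). */
static struct sk_buff_head rx_free_list;

/* Receive path: this skb came off the rx ring but will not be passed
 * up (e.g. it was part of a discarded jumbo packet); park it instead
 * of freeing it. */
static void recycle_rx_skb(struct sk_buff *skb)
{
        __skb_queue_tail(&rx_free_list, skb);
}

/* Refill path (network_alloc_rx_buffers): prefer a recycled skb and
 * fall back to a fresh allocation only when the free list is empty. */
static struct sk_buff *get_rx_skb(void)
{
        struct sk_buff *skb = __skb_dequeue(&rx_free_list);

        if (skb == NULL)
                skb = alloc_skb(RX_COPY_THRESHOLD, GFP_ATOMIC);
        return skb;
}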

If we're successfully allocating *whole pages* in
network_alloc_rx_buffers(), I'd be surprised if we couldn't allocate a
256-byte skbuff in netif_poll(). On the other hand, just putting the skbuffs
on the rx_batch queue is an easier change to make.
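
The simpler change amounts to something like the sketch below, assuming
the rx_batch field of the 2006 driver's netfront_info (the sk_buff_head
that network_alloc_rx_buffers already drains onto the ring); the helper
name is hypothetical:

/* Hypothetical helper: instead of dev_kfree_skb(), return the skb to
 * the batch queue so the next refill pushes it onto the ring without
 * another trip through alloc_skb. */
static void requeue_rx_skb(struct netfront_info *np, struct sk_buff *skb)
{
        __skb_queue_tail(&np->rx_batch, skb);
}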

 -- Keir

