WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

Re: [Xen-devel] [PATCH] Fix stdvga performance for 32bit ops

To: Ben Guthro <bguthro@xxxxxxxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Fix stdvga performance for 32bit ops
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Wed, 31 Oct 2007 21:59:04 +0000
Cc: Robert Phillips <rphillips@xxxxxxxxxxxxxxx>
Delivery-date: Wed, 31 Oct 2007 14:54:27 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <4728E583.4040304@xxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcgcCUBaftuvPof8EdyjmQAWy6hiGQ==
Thread-topic: [Xen-devel] [PATCH] Fix stdvga performance for 32bit ops
User-agent: Microsoft-Entourage/11.3.6.070618

511 entries? 511*8 + 8 bytes for the read/write pointers == 4096?

 -- Keir

On 31/10/07 20:28, "Ben Guthro" <bguthro@xxxxxxxxxxxxxxx> wrote:

Corrected a bug in the stdvga code where it did not properly handle 32-bit operations. The buf_ioreq_t can now store 32 bits of data. Because this increases its size to 8 bytes, only 510 elements fit in the buffered_iopage (down from 672 elements).
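[Archive editor's note: for reference, a minimal sketch of the sizing arithmetic both messages work through. With two 4-byte ring pointers and 8-byte entries, one 4096-byte page holds (4096 - 8) / 8 = 511 slots, the figure Keir arrives at above. The field layout below is an illustrative assumption standing in for the actual packed bitfields, not the committed patch.]

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout only: the real buf_ioreq_t packs type, direction,
 * size, and address into bitfields. What matters for the sizing argument
 * is that widening the data field to 32 bits grows the struct to 8 bytes. */
typedef struct buf_ioreq {
    uint8_t  type;      /* I/O request type */
    uint8_t  dir_size;  /* direction and operand size, packed (assumed) */
    uint16_t addr;      /* low bits of the port/address (assumed) */
    uint32_t data;      /* now wide enough for a full 32-bit operand */
} buf_ioreq_t;          /* sizeof == 8, up from 4 */

int main(void)
{
    const size_t page = 4096;                     /* one shared page */
    const size_t pointers = 2 * sizeof(uint32_t); /* read + write pointer */
    const size_t slots = (page - pointers) / sizeof(buf_ioreq_t);

    /* 511*8 + 8 == 4096, per Keir's reply. */
    printf("%zu slots of %zu bytes\n", slots, sizeof(buf_ioreq_t));
    return 0;
}

Compiled and run, this prints "511 slots of 8 bytes", suggesting 511 rather than 510 entries fit alongside the two pointers.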

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel