Re: [Xen-devel] [PATCH] x86: consolidate/enhance TLB flushing interface

To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] x86: consolidate/enhance TLB flushing interface
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Wed, 17 Oct 2007 08:15:40 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C33AA791.16EF9%Keir.Fraser@xxxxxxxxxxxx>
References: <46CB1FF2.76E4.0078.0@xxxxxxxxxx> <C33AA791.16EF9%Keir.Fraser@xxxxxxxxxxxx>
>>> Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> 16.10.07 18:38 >>>
>On 21/8/07 16:25, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>> Fold the TLB flushing code into a single local handler and a single
>> SMP multiplexor, and add the capability to also flush caches through
>> the same interfaces (a subsequent patch will make use of this).
>> 
>> While changing cpuinfo_x86 anyway, this patch also removes several
>> unused fields apparently inherited from Linux.
>
>Applied at last. I just changed the names of a few functions and added
>a few comments. Also, I don't know whether you empirically evaluated
>CLFLUSH versus WBINVD, but your CLFLUSH loop was actually broken
>because 'sz' was in pages rather than bytes. Hence you did not CLFLUSH
>a big enough area (by a large margin), and would therefore vastly
>underestimate the cost of the CLFLUSH approach.
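
For reference, a loop of the shape being discussed might look like this
(a hypothetical sketch with made-up names, not the actual patch code;
the point is the pages-to-bytes conversion that was missing):

/*
 * Hypothetical sketch -- not the actual patch code.  CLFLUSH flushes
 * only the single cache line containing the byte address it is given,
 * so the loop must walk the region in bytes; bounding it by a page
 * count flushes far too small an area.
 */
static void clflush_area(const void *va, unsigned long sz /* in pages */)
{
    const char *p = va;
    unsigned long bytes = sz << PAGE_SHIFT;      /* pages -> bytes */
    unsigned long i;

    /* clflush_line_size: assumed cached from CPUID leaf 1 (EBX[15:8] * 8). */
    for ( i = 0; i < bytes; i += clflush_line_size )
        asm volatile ( "clflush %0" : : "m" (p[i]) );
}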

Oh, good you caught this. But no, I didn't do any measurements; I just
wanted to cut off at the point where it is sufficiently certain that
using WBINVD wouldn't be slower than looping over CLFLUSH, which I
estimated to be the point where more data needs flushing than the cache
can hold. Of course, if what is being flushed hasn't been referenced
recently, this may still be wrong; but on the other hand, potentially
flushing hundreds of megabytes in a loop seemed wasteful. The L2 (or
L3, if present) cache size is likely already larger than the real
cutoff point, which in turn cannot reasonably be determined
empirically, as it likely depends on how many hits the CLFLUSHes would
get.
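
Expressed in code, the intended cutoff is roughly the following (again
a hypothetical sketch rather than the actual implementation;
llc_size_bytes stands in for whatever L2/L3 size CPUID reports, and
clflush_line_size is as in the sketch above):

/*
 * Hypothetical illustration of the cutoff described above -- not
 * actual Xen code.  Once the region exceeds what the last-level cache
 * can hold, a single WBINVD should not be slower than CLFLUSHing
 * every line in the region.
 */
static void flush_cache_range(const void *va, unsigned long bytes)
{
    const char *p = va;
    unsigned long i;

    if ( bytes >= llc_size_bytes )       /* llc_size_bytes: from CPUID */
        asm volatile ( "wbinvd" : : : "memory" );
    else
        for ( i = 0; i < bytes; i += clflush_line_size )
            asm volatile ( "clflush %0" : : "m" (p[i]) );
}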

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
