xen-devel

[Xen-devel] Re: [GIT PULL RFC] pvclock cleanups and pvclock vsyscall support

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: [Xen-devel] Re: [GIT PULL RFC] pvclock cleanups and pvclock vsyscall support
From: Avi Kivity <avi@xxxxxxxxxx>
Date: Sun, 18 Oct 2009 17:23:12 +0900
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, kurt.hackel@xxxxxxxxxx, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Glauber de Oliveira Costa <gcosta@xxxxxxxxxx>, Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, Chris Mason <chris.mason@xxxxxxxxxx>
Delivery-date: Sun, 18 Oct 2009 01:24:42 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4ADACF48.9020907@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1255548516-15260-1-git-send-email-jeremy.fitzhardinge@xxxxxxxxxx> <4AD6C679.3000001@xxxxxxxxxx> <4AD77C21.2050506@xxxxxxxx> <4ADAB8F4.6090502@xxxxxxxxxx> <4ADACF48.9020907@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.1) Gecko/20090814 Fedora/3.0-2.6.b3.fc11 Thunderbird/3.0b3
On 10/18/2009 05:18 PM, Jeremy Fitzhardinge wrote:
> On 10/18/09 15:43, Avi Kivity wrote:
>> On 10/16/2009 04:46 AM, Jeremy Fitzhardinge wrote:
>>> Care to cook up a patch to implement the kvm bits to make sure it all
>>> works OK for you?
>>
>> I've started to do that, but it occurs to me that we're missing out on
>> NUMA placement by forcing all clocks to be on the same page.  OTOH, if
>> the clocks are heavily used, they'll stay in cache, and if not, who
>> cares.
>
> Yes, I'd say so.  I'd expect the data to be very close to read-only, so
> the lines should be shared pretty efficiently.


There wouldn't be any sharing since each clock is on its own cache line. But the point is valid regardless.
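Concretely, the layout I have in mind is something like the sketch below;
the struct and macro names are illustrative only, not from an actual patch:

    /*
     * Sketch only: one pvclock per vcpu, each padded out to its own
     * cache line so readers on different cpus never contend on a line;
     * the whole array spans however many pages NR_CPUS requires.
     */
    #include <linux/kernel.h>       /* DIV_ROUND_UP */
    #include <linux/threads.h>      /* NR_CPUS */
    #include <linux/cache.h>        /* SMP_CACHE_BYTES */
    #include <asm/page.h>           /* PAGE_SIZE */
    #include <asm/pvclock-abi.h>    /* struct pvclock_vcpu_time_info */

    struct pvclock_vsyscall_time_info {
            struct pvclock_vcpu_time_info pvti;
    } __attribute__((__aligned__(SMP_CACHE_BYTES)));

    #define PVTI_SIZE       sizeof(struct pvclock_vsyscall_time_info)
    #define PVCLOCK_VSYSCALL_NR_PAGES \
            DIV_ROUND_UP(NR_CPUS * PVTI_SIZE, PAGE_SIZE)

With each entry padded to SMP_CACHE_BYTES, a cpu's line stays private to
it, so the only cost of packing everything onto shared pages is the NUMA
placement question above.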

> On the other hand, there's nothing to stop us from moving to multiple
> pages in future (either to support NUMA placement, or just more than 64
> cpus).

I'm already allocating multiple pages, so we'd just need to adjust the fixmap.
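The fixmap side would look roughly like this, building on the struct and
page-count macro sketched above; PVCLOCK_FIXMAP_BEGIN and PAGE_KERNEL_VVAR
are assumed names for the purposes of this sketch, not real code:

    /*
     * Sketch only: map the pvclock pages into consecutive fixmap slots
     * so the vsyscall code can read any cpu's clock at a fixed virtual
     * address.  Assumes PVCLOCK_FIXMAP_BEGIN slots are reserved in
     * asm/fixmap.h and that a user-readable, read-only pgprot
     * (PAGE_KERNEL_VVAR here) is available.
     */
    #include <linux/init.h>
    #include <linux/errno.h>
    #include <asm/fixmap.h>         /* __set_fixmap */
    #include <asm/page.h>           /* __pa, PAGE_SIZE */

    int __init pvclock_init_vsyscall(struct pvclock_vsyscall_time_info *i,
                                     int size)
    {
            int idx;

            if (size != PVCLOCK_VSYSCALL_NR_PAGES * PAGE_SIZE)
                    return -EINVAL;

            /* One fixmap slot per backing page. */
            for (idx = 0; idx < PVCLOCK_VSYSCALL_NR_PAGES; idx++)
                    __set_fixmap(PVCLOCK_FIXMAP_BEGIN + idx,
                                 __pa(i) + idx * PAGE_SIZE,
                                 PAGE_KERNEL_VVAR);

            return 0;
    }

Going past 64 cpus, or spreading clocks across nodes, then just means
reserving more fixmap slots.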

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel