This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Benchmarking Xen (results and questions)

To: "Andrew Theurer" <habanero@xxxxxxxxxx>, <David_Wolinsky@xxxxxxxx>
Subject: RE: [Xen-devel] Benchmarking Xen (results and questions)
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Fri, 5 Aug 2005 00:55:59 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 04 Aug 2005 23:54:19 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcWZLheY7mU9/w7eRbCjvnFS4hEx/AAIR5Jg
Thread-topic: [Xen-devel] Benchmarking Xen (results and questions)
> This is much better.  I think my email client just didn't 
> format your first (html?) message.  For JBB, I suspect the 
> degradation is mostly cache thrashing, and the increased 
> timeslice = better cache warmth.  Perhaps there is a lot of 
> overhead in the domain context switch as well.  What is the 
> cpu cache size?

Timeslices over 50ms won't yield much benefit -- it doesn't take a great
deal of time to warm a typical 1MB cache. The explicit cost of
performing a context switch is measured in microseconds.
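A rough back-of-envelope estimate supports this. The figures below (64-byte cache lines, ~150 ns miss penalty to main memory) are assumed ballpark numbers for x86 hardware of that era, not measurements from the thread:

```python
# Back-of-envelope: time to refill a cold 1MB cache vs. a 50ms timeslice.
# All hardware figures are assumptions, not measured values.
CACHE_SIZE = 1 * 1024 * 1024   # 1 MB cache (as in the discussion above)
LINE_SIZE = 64                 # bytes per cache line (assumed)
MISS_PENALTY_NS = 150          # ns per miss to main memory (assumed)

lines = CACHE_SIZE // LINE_SIZE                # cache lines to refill
warm_up_ms = lines * MISS_PENALTY_NS / 1e6     # worst case: every line misses
timeslice_ms = 50

print(f"cache lines to refill: {lines}")
print(f"worst-case warm-up: {warm_up_ms:.2f} ms")
print(f"fraction of a {timeslice_ms} ms slice: {warm_up_ms / timeslice_ms:.1%}")
```

Even in the pessimistic case where every line is refilled from memory, warm-up is a few milliseconds -- a small fraction of a 50 ms slice, so longer slices buy little extra cache warmth.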

James Bulpin's PhD thesis provides a lot of hard data on stuff like this
for modern x86 CPUs.

