WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-users

[Xen-users] GPLPV benchmark results

To: James Harper <james.harper@xxxxxxxxxxxxxxxx>
Subject: [Xen-users] GPLPV benchmark results
From: Sandro Sigala <sandro@xxxxxxxxxxxx>
Date: Mon, 14 Jul 2008 13:23:06 +0200
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 30 Jul 2008 02:00:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.14 (Windows/20080421)
Hi James,

I tested (as I wrote a few days ago) the latest PV drivers release, and they
seem to work correctly apart from the "Safely remove Xen Net Device Driver" tray
icon glitch.

I ran a series of benchmarks with PassMark's PerformanceTest on six identical
VMs running concurrently with the PV drivers installed, both with the /GPLPV
switch on and off, and on a single VM, again in both configurations.
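For context, the /GPLPV switch referred to here is a boot option appended to a Windows boot.ini entry; it tells the GPLPV drivers to take over the disk and network devices for that boot, while an entry without the switch falls back to the emulated (qemu) devices. A minimal sketch of such a dual-entry boot.ini (the disk path and entry names are placeholders, not taken from this message):

```ini
[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows (qemu)" /noexecute=optout /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows (GPLPV)" /noexecute=optout /fastdetect /gplpv
```

Keeping both entries lets the same VM be benchmarked with and without the PV drivers active simply by picking an entry at boot time.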

In particular I tested disk performance, and I noticed a decrease in I/O
throughput, accompanied at the same time by a significant decrease in VM CPU
utilization.

The test results can be downloaded from:

http://www.roxantis.com/xen/gplpv_bench.zip

Best Regards,
Sandro



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
