[Xen-devel] Stability report GPLPV 0.11.0.308

To: James Harper <james.harper@xxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Stability report GPLPV 0.11.0.308
From: Andreas Kinzler <ml-xen-devel@xxxxxx>
Date: Mon, 05 Sep 2011 12:13:51 +0200
Cc:
Delivery-date: Mon, 05 Sep 2011 03:14:35 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) Gecko/20110812 Thunderbird/6.0

Hello James,

I am doing quite rigorous torture tests with Xen and GPLPV. Let me first repeat the test setup:

Use Xen 4.1.1 and kernel 2.6.32.36 (commit ae333e9).
Configure 2 HVMs called VM1 and VM2 as follows (per HVM): 2 VCPUs, 2 virtual disks, 1024 MB RAM, viridian=1. Install Windows 2008 R2 SP1 and install everything twice from scratch - never clone. Install GPLPV, iometer 2006.07.27, prime95 26.6 x64, ActiveState Perl 5.12.4 x64, wget for Windows and the attached perl script.
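
For reference, the per-HVM configuration looks roughly like the sketch below (VM1 shown; disk paths, bridge name and device model path are placeholders, not my exact values):

# sketch of the xm domain config for one of the HVMs
kernel       = "/usr/lib/xen/boot/hvmloader"
builder      = "hvm"
device_model = "/usr/lib/xen/bin/qemu-dm"
name         = "VM1"
memory       = 1024
vcpus        = 2
viridian     = 1
# two virtual disks: one for the OS, one separate disk for the iometer workload
disk         = [ 'phy:/dev/vg0/vm1-disk0,hda,w',
                 'phy:/dev/vg0/vm1-disk1,hdb,w' ]
vif          = [ 'type=ioemu, bridge=xenbr0' ]
boot         = 'c'
vnc          = 1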

Run iometer with 2 workers, both targeting the same second virtual disk (separate from the system disk), with a queue depth of 4 per worker and the access specification "All in one". Run the prime95 torture test with "In-place large FFTs". On VM1 use the task manager to set the affinity to VCPU2, on VM2 set the affinity to VCPU1. Run the perl script to fetch a good mix of a few large (50-500 MB) and many small (a few KB) files from a high-performance FTP server on the LAN (I use vsftpd).
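
Just to illustrate the idea of the fetch workload (this is only a rough sketch, not the attached test-via-wget.pl; the server address and file names are placeholders):

#!/usr/bin/perl
# Sketch only: endlessly fetch a mix of large and small files from a LAN FTP
# server via wget, discarding the downloaded data.
use strict;
use warnings;

my $server = "ftp://192.168.0.10";                          # placeholder vsftpd box
my @large  = map { "$server/large/file$_.bin" } 1 .. 5;     # 50-500 MB files
my @small  = map { "$server/small/file$_.txt" } 1 .. 200;   # small files, a few KB

while (1) {
    # one large file, then a burst of small ones
    system("wget", "-q", "-O", "NUL", $large[ int rand @large ]);
    for (1 .. 20) {
        system("wget", "-q", "-O", "NUL", $small[ int rand @small ]);
    }
}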

This generates quite some load as vmstat shows:
virt5620 ~ # vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo    in    cs us sy id wa
 0  0      0 723408   6860  33860    0    0 82113 82132 22503 30252  2 12 84  0
 0  0      0 723408   6860  33860    0    0 80117 82913 23109 30776  1 13 83  0
 4  0      0 723408   6860  33860    0    0 92555 87013 28411 33283  2 12 84  0
 4  0      0 723408   6860  33860    0    0 82678 85775 26228 31739  1 13 83  0
 5  0      0 723408   6860  33860    0    0 82252 84837 24180 29723  1 14 82  0

With GPLPV 0.11.0.308 it worked perfectly, and with very good performance, for over 9 days, but when I then wanted to check the status I was no longer able to connect via remote desktop. When examining the file systems of the HVMs I found that somehow even the prime95 processes had stopped.

Any ideas? Could c/s 948 make any difference? The network worked perfectly for 9 days, so I ask myself whether the count from c/s 948 is used at all?

Regards,
Andreas

Attachment: test-via-wget.pl
Description: Text document

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel