Re: [Xen-devel] [Patch 2 of 2]: PV-domain SMP performance Linux-part

To: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [Patch 2 of 2]: PV-domain SMP performance Linux-part
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxxxxxxx>
Date: Mon, 12 Jan 2009 13:55:47 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
In-reply-to: <de76405a0812190715r7f105f75rf4cbcede1fdd86e3@xxxxxxxxxxxxxx>
Organization: Fujitsu Siemens Computers
References: <494B7892.76E4.0078.0@xxxxxxxxxx> <C5712033.206AD%keir.fraser@xxxxxxxxxxxxx> <de76405a0812190715r7f105f75rf4cbcede1fdd86e3@xxxxxxxxxxxxxx>
George Dunlap wrote:
> The general idea seems interesting.  I think we've kicked it around
> internally before, but ended up sticking with a "yield after spinning
> for a while" strategy just for simplicity.  However, as Juergen says,
> this flag could, in principle, save all of the "spin for a while"
> time-wasting in the first place.
> 
> As for misuse: If we do things right, a guest shouldn't be able to
> get an advantage from setting the flag when it doesn't need to.  If we
> add the ability to preempt it after 1ms, and deduct the extra credits
> from the VM for the extra time run, then it will only run a little
> longer, and then have to wait longer to be scheduled again.  (I
> think the more accurate credit accounting part of Naoki's patches is
> sure to be included in the scheduler revision.)  If it doesn't yield
> after the critical section is over, it risks being preempted at the
> next critical section.
> 
> The thing to test would be concurrent kernel builds and dbench, with
> multiple domains, each domain vcpus == pcpus.
> 
> Would you mind coding up a yield-after-spinning-awhile patch, and
> comparing the results to your "don't-deschedule-me" patch, for the
> kernel build at least, and possibly dbench?  I'm including some
> patches that should be applied when testing the "yield after spinning
> awhile" patch, otherwise nothing interesting will happen.  They're a
> bit hackish, but seem to work pretty well for their purpose.
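
(For anyone reading the archive without the rest of the thread: the
scheme George describes above amounts to something like the sketch
below.  All names, the grace period handling and the credit conversion
are made up for illustration; this is not actual Xen scheduler code.)

#include <stdint.h>

/* Illustrative only: honour a guest's "don't deschedule me" hint for at
 * most a short grace period and bill the borrowed time against its
 * credits, so setting the hint without need gains the guest nothing. */
#define NO_DESCHED_GRACE_NS 1000000ULL          /* 1 ms upper bound */

struct sketch_vcpu {
    int      no_desched_hint;   /* set by the guest around critical sections */
    uint64_t hint_seen_ns;      /* when the scheduler first deferred preemption */
    int64_t  credits;           /* credit-scheduler style account */
};

/* Made-up conversion: one credit per microsecond of borrowed time. */
static int64_t credits_for_ns(uint64_t ns)
{
    return (int64_t)(ns / 1000);
}

/* Called where the scheduler would normally preempt the vcpu.
 * Returns 1 to let it run a little longer, 0 to preempt as usual. */
static int may_defer_preemption(struct sketch_vcpu *v, uint64_t now_ns)
{
    if (!v->no_desched_hint)
        return 0;                       /* no hint set: preempt as usual */

    if (v->hint_seen_ns == 0)
        v->hint_seen_ns = now_ns;       /* start the grace period */

    if (now_ns - v->hint_seen_ns < NO_DESCHED_GRACE_NS)
        return 1;                       /* defer preemption for now */

    /* Grace period used up: charge the extra time and preempt anyway. */
    v->credits -= credits_for_ns(now_ns - v->hint_seen_ns);
    v->hint_seen_ns = 0;
    return 0;
}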

It took some time (other problems, as always ;-) ), but here are the results:

Hardware: 4 cpu x86_64 machine, 8 GB memory.
Domain 0 with 4 vcpus, 8 other domains with 1 vcpu each, spinning to force vcpu
scheduling.
8 parallel Xen hypervisor builds in domain 0, plus scp from another machine to
generate some network load.
Additional test with dbench after the build jobs.
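
The "Yield in spinlock" variant tested below is roughly of the following
shape on the guest side (a simplified sketch, not the code actually
measured; the spin threshold is arbitrary, the header names are
approximate, and the real change hooks into the kernel's existing
spinlock slow path instead of open-coding a lock):

#include <xen/interface/sched.h>    /* SCHEDOP_yield */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_sched_op() */
#include <asm/processor.h>          /* cpu_relax() */

#define SPIN_THRESHOLD 1024         /* iterations to busy-wait before yielding */

static void sketch_spin_lock(volatile int *lock)
{
    unsigned int spins = 0;

    while (__sync_lock_test_and_set(lock, 1)) {
        if (++spins >= SPIN_THRESHOLD) {
            /* The lock holder is probably descheduled: give up the pcpu
             * instead of burning the rest of our timeslice. */
            HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
            spins = 0;
        }
        cpu_relax();
    }
}

static void sketch_spin_unlock(volatile int *lock)
{
    __sync_lock_release(lock);
}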

Results with patched system (no deschedule):
--------------------------------------------
Domain 0 consumed 581.2 seconds, the other domains each about 535 seconds.
While the builds were running, 60 scp jobs finished.
Real time for the build was between 1167 and 1214 seconds (av. 1192 seconds).
Summed user time was 562.77 seconds, system time 12.17 seconds.
dbench: Throughput 141.764 MB/sec 10 procs
System reaction to shell commands: okay

Original system:
----------------
Domain 0 consumed 583.8 seconds, the other domains each about 540 seconds.
While the builds were running, 60 scp jobs finished.
Real time for the build was between 1181 and 1222 seconds (av. 1204 seconds).
Summed user time was 563.02 seconds, system time 12.65 seconds.
dbench: Throughput 133.249 MB/sec 10 procs
System reaction to shell commands: slower than patched system

Yield in spinlock:
------------------
Domain 0 consumed 582.2 seconds, the other domains each about 555 seconds.
While the builds were running, 50 scp jobs finished.
Real time for the build was between 1226 and 1254 seconds (av. 1244 seconds).
Summed user time was 563.43 seconds, system time 12.63 seconds.
dbench: Throughput 145.218 MB/sec 10 procs
System reaction to shell commands: sometimes "hiccups" for up to 30 seconds
George's hypervisor patches were included in this run.


Conclusion:
-----------
The differences are not really big, but my "no deschedule" patch had the lowest
elapsed time for the build jobs, while scp was able to transfer the same amount
of data as in the slower original system.
The "Yield in spinlock" patch had slightly better dbench performance, but
interactive shell commands were sometimes a pain! I suspect a problem in
George's patches under low system load is the main reason for this behaviour.
Without George's patches, "Yield in spinlock" behaved very similarly to the
original system.
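
Conceptually, the guest side of my "no deschedule" approach marks
critical sections roughly like this (sketch only, with made-up names;
the real interface is in the posted patches):

/* Hypothetical per-vcpu field in memory shared with the hypervisor, so
 * the hint costs only a store on the fast path -- no hypercall. */
struct sketch_shared_hint {
    volatile int no_desched;
};

static struct sketch_shared_hint *this_vcpu_hint;  /* mapped at boot, not shown */

#define compiler_barrier() __asm__ __volatile__("" ::: "memory")

static inline void no_desched_enter(void)
{
    this_vcpu_hint->no_desched = 1;   /* "please don't deschedule me now" */
    compiler_barrier();               /* keep the store before the critical work */
}

static inline void no_desched_exit(void)
{
    compiler_barrier();
    this_vcpu_hint->no_desched = 0;
    /* As George notes, a guest that doesn't give back borrowed time
     * promptly only hurts itself: the extra credits are deducted and it
     * risks being preempted in its next critical section. */
}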


Juergen

-- 
Juergen Gross                             Principal Developer
IP SW OS6                      Telephone: +49 (0) 89 636 47950
Fujitsu Siemens Computers         e-mail: juergen.gross@xxxxxxxxxxxxxxxxxxx
Otto-Hahn-Ring 6                Internet: www.fujitsu-siemens.com
D-81739 Muenchen         Company details: www.fujitsu-siemens.com/imprint.html
