[Xen-devel] Revisiting XenD / XenStored performance / scalability issues

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Revisiting XenD / XenStored performance / scalability issues
From: "Daniel P. Berrange" <berrange@xxxxxxxxxx>
Date: Wed, 25 Apr 2007 18:20:47 +0100
Delivery-date: Wed, 25 Apr 2007 10:19:35 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Reply-to: "Daniel P. Berrange" <berrange@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.1i

Way back at the end of the 3.0.3 dev cycle I brought up the issue of XenD
running far too many xenstore transactions per request:

http://lists.xensource.com/archives/html/xen-devel/2006-10/msg00487.html

Short summary:  

   # nc -U /var/lib/xend/xend-socket 
   GET /xend/domain/test

This resulted in approximately 16 xenstore transactions for a domain with
one disk and one NIC, and the count increases as the number of devices
increases.

Since major XenAPI work was about to be done which would refactor a large
portion of the XenD code, it was anticipated that this situation would
improve. I've just tested again with Xen 3.0.5 rc2, but things seem to have
got worse: we're now doing approximately 30 xenstore transactions for a
domain with one disk and one NIC. Again, this figure of 30 increases as you
add more devices to the guest.

As a test case I'm using

  time for i in `seq 1 1000` ; do virsh list > /dev/null  ; done

This takes approximately 1m 45s to complete. During the test run, the CPU
usage shown by top gives xenstored about 70% utilization and xend about 15%.
As noted before, xenstored is bottlenecked on disk I/O, since each
transaction requires a copy of the tdb database - 30 transactions * 200 KB
=~ 6 MB of I/O per request.
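
To put rough numbers on it (my back-of-envelope, using the ~200 KB tdb size
above): 30 transactions * 200 KB is roughly 6 MB of copying per request, and
over the 1000 iterations of the test that is on the order of 6 GB of data
shuffled around just to answer 'virsh list' repeatedly.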

As a quick 'hack' I changed the xend init script to run:

     mkdir /dev/shm/xenstored
     mount --bind /dev/shm/xenstored /var/lib/xenstored

This puts xenstored's database on tmpfs (i.e. in RAM), and it reduced the
runtime of the test to 55 seconds on average. OProfile still showed that
most of xenstored's time was spent doing I/O - even though that I/O was
going to a RAM disk, there was still the data copying overhead between
kernel and userspace. This validated that reducing xenstored's I/O overhead
is the way to address the performance problem.

The core problem is that XenD does lots of 'singleton' transactions - i.e.
it wraps each individual xenstore read in its own transaction. So I've put
together a proof-of-concept patch which pulls the transactions up the call
stack in several key places inside XenD. With this patch applied, a single
'GET /xend/domain/test' now does only 2 transactions, regardless of how
many devices exist in the guest.
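
To make the before/after concrete, here's a minimal sketch of the difference
(this is not the actual patch - the helper names and xenstore paths are made
up for illustration, and it assumes the Read()/read()/commit() semantics of
xstransact in 3.0.5):

    from xen.xend.xenstore.xstransact import xstransact

    # Singleton style (what XenD does today): every convenience call is a
    # complete transaction, so one HTTP request fans out into dozens of
    # transactions inside xenstored.
    def get_vbd_info_singleton(backpath):
        mode   = xstransact.Read(backpath, 'mode')    # transaction 1
        dev    = xstransact.Read(backpath, 'dev')     # transaction 2
        params = xstransact.Read(backpath, 'params')  # transaction 3
        return (mode, dev, params)

    # Pulled-up style (the idea behind the patch): the caller owns a single
    # transaction and passes it down the call chain, so all the reads for
    # the request share one commit.
    def get_vbd_info(t, devid):
        # read() with several subpaths returns a list of values, all read
        # within the caller's transaction.
        return t.read('%d/mode' % devid,
                      '%d/dev' % devid,
                      '%d/params' % devid)

    def get_all_vbd_info(backdir, devids):
        while True:
            t = xstransact(backdir)
            try:
                info = [get_vbd_info(t, d) for d in devids]
                if t.commit():      # commit() returns False if the
                    return info     # transaction raced and must be retried
            except:
                t.abort()
                raise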

This reduced the runtime of the test from 1m 45s to an average of 30s -
noticeably better than even the tmpfs results. CPU usage from top now shows
that xenstored is taking < 1% CPU time during the test, while XenD is taking
about 25%.

The one remaining puzzle is why I can't get XenD to max out a single CPU.
This is a dual core box, so I'd expect XenD's CPU time to hit 50%, but it
never went above 25%. I can only imagine there is some kind of 'sleep' state
or synchronization overhead hiding in the code somewhere, because there was
no I/O wait time reported and no other process had any CPU time against it.

Finally, this test case is obviously using the legacy SEXPR API, so a simple
'virsh list' should be much faster with XenAPI - however, there do seem to
be a number of places where even the new XenAPI code ends up doing huge
numbers of 'singleton' transactions. 'xm create' is one - about 80
transactions to create a single domain.

I'd really like to see all the 'convenience' methods in
xen.xend.xenstore.xstransact removed, and have the caller be responsible for
managing transactions. The convenience APIs make it very unclear just where
the overhead is coming from, since there are a number of call chains which
can ultimately trigger these transactions.
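
As an aside, one way to see exactly which call chains are responsible (this
is just a debugging idea, not part of the attached patch, and it assumes the
3.0.5 module layout) would be to wrap the xstransact constructor so every
transaction start logs a short stack trace in the xend log:

    import logging
    import traceback

    from xen.xend.xenstore import xstransact as xsmod

    log = logging.getLogger('xstransact-trace')

    _orig_init = xsmod.xstransact.__init__

    def _traced_init(self, path=""):
        # Record who started this transaction, so singleton-heavy call
        # chains stand out when reading the log.
        log.debug("transaction start for %r from:\n%s",
                  path, "".join(traceback.format_stack(limit=6)))
        _orig_init(self, path)

    xsmod.xstransact.__init__ = _traced_init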

I'm attaching the patch against 3.0.5/unstable for reference.

Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=| 

Attachment: xen-xs-transactions.patch
Description: Text document
