[Xen-users] Performance issues when serving a file from one domain to another (same host) on xen 3.4

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Performance issues when serving a file from one domain to another (same host) on xen 3.4
From: Joe Whitney <jwhitney@xxxxxxxxxxxxxx>
Date: Tue, 26 Jul 2011 17:31:56 -0400
Hello,

I have consistently seen poor performance when one domain serves a file from a locally attached storage device over the xenbridge "network" to another, client domain on the same host.  I have reduced the problem to the following very simple scenario involving two domUs: one client and one server.  For my purposes, the only difference between them is that the server has an SSD mounted (as a block device) at /mnt.  Each has 1 vcpu and 512 MB RAM on a 4-hyperthreaded-core machine (which shows up as 8 "cores" in dom0).

server: eth1 IP address 192.168.62.110

client: eth1 IP address 192.168.62.202

(in the following, I am executing "echo 3 > /proc/sys/vm/drop_caches" on dom0 before each command shown)


First, to test the speed of reading straight through a random gigabyte of data I put there for this purpose:
server# time cat /mnt/randgig > /dev/null
~4s (times here are averages over several runs, dropping caches between runs)

Now let's test the speed of the "network" between client and server without any interference from the disk (262144 blocks of 4096 bytes = 1 GiB of zeros, the same amount of data as randgig):

server# dd if=/dev/zero bs=4096 count=262144 | nc -lvv -p 3500 -q0
client# time nc 192.168.62.110 3500 > /dev/null
~3.5s

Finally, let's actually transfer data from the disk to the client:
server# dd if=/mnt/randgig bs=4096 | nc -lvv -p 3500 -q0
client# time nc 192.168.62.110 3500 > /dev/null
~18.8s

So you see, it is much slower to both read from disk and transfer over the (xenbridge) network than to do either alone, even though (in theory) I have enough processors (4 or 8 depending on how you count) to do all the work.
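
To put rough numbers on it (using the sizes from the dd lines above): 1 GiB in ~4s is about 256 MiB/s off the SSD, and ~3.5s is about 290 MiB/s across the bridge, but ~18.8s for the combined case works out to only about 55 MiB/s.  If the read and the send were overlapping properly, I would expect the combined transfer to be limited by the slower of the two steps, i.e. to take something close to 4s, not nearly five times that.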

If I move the client to a different (identically configured) host attached by 1Gbit ethernet through a switch, I get these revised times:

transfer a gig of /dev/zero from server to client: 9.5s instead of 3.5s
transfer a gig of /mnt/randgig from server to client: 14.2s instead of 18.8s!!
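
(For what it's worth, 1 GiB over gigabit ethernet takes at least ~8.6s at wire speed, so the 9.5s /dev/zero transfer is close to line rate.  The striking part is that reading from the SSD and crossing a physical switch is still faster than doing the same transfer entirely inside one host.)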

This further confirms that there is some bad interaction between disk and network I/O scheduling, presumably in the dom0 backend drivers, but I am not sure how to verify that.

I have tried every combination of number of vCPUs, vCPU pinning, etc. on both the domUs and dom0.  I have also tried the experiment with dom0 as the server; the main difference is that performance is worse in all cases, but it is still better when the client is on a different host.
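
For the record, the pinning I mean is along these lines ("server" and "client" are just the shorthand names I use above, and the exact CPU numbers varied from run to run):

dom0# xm vcpu-pin Domain-0 0 0
dom0# xm vcpu-pin server 0 2
dom0# xm vcpu-pin client 0 4
dom0# xm vcpu-list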

So in summary my questions are:

1) Why is it so much slower to transfer a file from disk over the xenbridge network than it is either to read from the disk or to send bytes over the network alone?
2) What can I do about it?

I have searched in vain for any hint of this problem, except that the Xen documentation says somewhere that I should pin and fix the number of dom0 CPUs when doing I/O-intensive work in the guests, but I have tried this to no avail.
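
(By fixing the number of dom0 CPUs I mean a boot parameter along the lines of dom0_max_vcpus=1 on the xen.gz line in grub, if I have the option name right, combined with pinning Domain-0 via xm vcpu-pin as above.)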

I would appreciate any insights.

Best,

Joe Whitney

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users