xen-users

Re: [Xen-users] Xen and networking.

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Xen and networking.
From: tmac <tmacmd@xxxxxxxxx>
Date: Tue, 1 Jan 2008 08:16:22 -0500
Delivery-date: Tue, 01 Jan 2008 05:17:18 -0800
In-reply-to: <47778AE1.2060104@xxxxxxxxxxxx>
References: <73699aa30712281228j694f9fb5l17860901fad8cd77@xxxxxxxxxxxxxx> <47778AE1.2060104@xxxxxxxxxxxx>
Ok, well I have it working....

I used the following NFS mount options:
hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600
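
(As a sketch of how those options fit into /etc/fstab - the server name,
export path and mount point below are placeholders, not my actual setup:)

# example fstab entry - substitute your own server:/export and mount point
nfsserver:/export/data  /nfs  nfs  hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600  0 0

The equivalent one-off mount would be:

mount -t nfs -o hard,intr,vers=3,tcp,rsize=32768,wsize=32768,timeo=600 nfsserver:/export/data /nfs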

Here are the changes to the /etc/sysctl.conf file on the guests
(on the host, the last line, sunrpc.tcp_slot_table_entries, is not available,
so remove it there; see the quick check after the list):

net.core.netdev_max_backlog = 3000
net.core.rmem_default = 256960
net.core.rmem_max = 16777216
net.core.wmem_default = 256960
net.core.wmem_max = 16777216
# note: the four lines below repeat the rmem/wmem default and max settings
# with smaller values; since they come later in the file, they are the ones
# that take effect when the file is applied
net.core.rmem_default = 65536
net.core.wmem_default = 65536
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_mem = 4096 4096 4096
sunrpc.tcp_slot_table_entries = 128
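
(To check whether the sunrpc knob exists on a given machine before keeping
that last line - just a sketch, using the standard procfs path:)

# prints the current value where sunrpc is exposed; where it fails (e.g. on
# the host), drop the sunrpc line from that machine's sysctl.conf
cat /proc/sys/sunrpc/tcp_slot_table_entries 2>/dev/null || echo "sunrpc not available here"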

Also, add "/sbin/sysctl -p" as the first entry in /etc/init.d/netfs to
make sure that the setrtings get read before any NFS mounts take
place.
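
Roughly like this (a sketch only - the rest of the stock RHEL netfs script
is left untouched):

#!/bin/bash
# ... existing netfs header comments unchanged ...
/sbin/sysctl -p   # apply the buffer and sunrpc settings before any NFS mounts run
# ... rest of the original netfs script follows ...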

For the record, I get 95-102 MB/sec from each guest with a simple dd.

--tmac

On Dec 30, 2007 7:11 AM, Riccardo Veraldi <Riccardo.Veraldi@xxxxxxxxxxxx> wrote:
>
> If you want to get Gigabit performance on your domU (using HVM
> virtualization), you MUST compile the Xen unmodified_drivers (in
> particular netfront) and load those drivers as kernel modules on your
> domU. Then you must change the guest's Xen config file to use netfront
> instead of ioemu for the network interface. I have written a page on how
> to do it, but it is in Italian. Anyway, if you follow the instructions
> you should be able to understand them by looking at the bare commands.
>
> https://calcolo.infn.it/wiki/doku.php?id=network_overbust_compilare_e_installare_il_kernel_module_con_il_supporto_netfront
>
> Of course, the Xen source code depends on the Xen version you are using
> on your dom0. I was not satisfied with the Xen 3.0.2 used on RHEL5, so we
> built RPMs for Xen 3.1.2 and we are using those now.
>
> Rick
>
>
> tmac wrote:
>
> > I have a beefy machine
> > (Intel dual quad-core, 16 GB memory, 2 x GigE)
> >
> > I have loaded RHEL5.1-xen on the hardware and have created two logical
> > guests, each with:
> > 4 CPUs, 7.5 GB memory, 1 x GigE
> >
> > Following RHEL guidelines, I have it set up so that eth0->xenbr0 and
> > eth1->xenbr1
> > Each of the two RHEL5.1 guests uses one of the interfaces and this is
> > verified at the
> > switch by seeing the unique MAC addresses.
> >
> > If I do a crude test from one guest over nfs,
> > dd if=/dev/zero of=/nfs/test bs=32768 count=32768
> >
> > This yields almost always 95-100MB/sec
> >
> > When I run two simultaneously, I cannot seem to get above 25 MB/sec from
> > each.
> > It starts off with a large burst, as if each could do 100 MB/sec, but then
> > within a couple of seconds it tapers off to 15-40 MB/sec until the dd
> > finishes.
> >
> > Things I have tried (applied on both the host and the guests):
> >
> >  net.core.rmem_max = 16777216
> >  net.core.wmem_max = 16777216
> >  net.ipv4.tcp_rmem = 4096 87380 16777216
> >  net.ipv4.tcp_wmem = 4096 65536 16777216
> >
> >  net.ipv4.tcp_no_metrics_save = 1
> >  net.ipv4.tcp_moderate_rcvbuf = 1
> >  # recommended to increase this for 1000 BT or higher
> >  net.core.netdev_max_backlog = 2500
> >  sysctl -w net.ipv4.tcp_congestion_control=cubic
> >
> > Any ideas?
> >
> >
> >
>
>
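
For anyone following Riccardo's netfront suggestion above, the config change
he describes amounts to something like the following in the guest's Xen file
(a sketch only - the file name, MAC and bridge are placeholders, and the
exact vif syntax may vary with your Xen version):

# /etc/xen/guest1 - before: emulated NIC handled by qemu-dm
vif = [ 'type=ioemu, mac=00:16:3e:00:00:01, bridge=xenbr0' ]

# after: paravirtualised NIC using the netfront module loaded in the domU
vif = [ 'type=netfront, mac=00:16:3e:00:00:01, bridge=xenbr0' ]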



-- 
--tmac

RedHat Certified Engineer #804006984323821 (RHEL4)
RedHat Certified Engineer #805007643429572 (RHEL5)

Principal Consultant, RABA Technologies

