WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

[Xen-users] XCP Linux Para virtualization 10 GigE interfaces

To: <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] XCP Linux Para virtualization 10 GigE interfaces
From: "Hoot Thompson" <hoot@xxxxxxxxxx>
Date: Tue, 19 Apr 2011 10:04:52 -0400
Delivery-date: Tue, 19 Apr 2011 07:06:43 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acv+msCM2jbV2DM+TE2FgozJvzs9NQ==
My test system consists of two XCP 1.0 nodes plus an Ubuntu frontend running
OpenXenManager. Both nodes have GigE and 10 GigE interfaces, but the 10 GigE
is what matters here. My goal is to get wire speed on the 10 GigE link
between guests running on the two nodes. So far I have built a CentOS 5.6
guest on each node and brought up the 10 GigE interface between them, but
throughput is only around 20% of wire speed.
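
For context, by throughput I mean a raw TCP test between the two guests; an
iperf run along these lines is the sort of measurement in question (the
address and options below are only examples):

    # on one guest, run iperf as the server
    iperf -s

    # on the other guest, target the server's 10 GigE address (example IP),
    # running for 30 seconds with 4 parallel streams
    iperf -c 192.168.10.2 -t 30 -P 4
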
My assumption is that I need to run paravirtualized drivers, so that's my
question: how do I set them up? So far, on each of the CentOS guests, I've
installed kmod-xenpv (yum install kmod-xenpv*). Good so far? What next?
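
As a sanity check, is something like the following the right way to confirm
the PV network driver is actually in use? (The module and interface names
are my guesses based on the kmod-xenpv package.)

    # list loaded Xen modules (kmod-xenpv should provide xen-vnif / xen-vbd)
    lsmod | grep -i xen

    # show which driver backs the 10 GigE interface (eth1 is an example)
    ethtool -i eth1
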

Thanks in advance!

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
