
[Xen-users] Xen network configuration

To: <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Xen network configuration
From: "Glen Davis" <gldavis@xxxxxxxxxx>
Date: Tue, 19 Dec 2006 13:42:49 -0700
Delivery-date: Tue, 19 Dec 2006 12:43:01 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx

Can anyone tell me what they would do for a network configuration in this scenario?


I want to create 8 VMs on a single host, and I have 5 network cards.  All the VMs will be on the same network, talking to other physical machines on the lab network.  I have chosen to create a single bridge per network card and assign 1-2 VMs to each bridge, for a total of 5 bridges.  This seems to work.  Other than using PCI pass-through to give the VMs direct access to the NICs, is there a better configuration for getting the best performance?  I have tried bonding several NICs together for load balancing (on SLES10) and then using a single bridge, but that does not perform as well.  What do other users/customers typically do in this situation?
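For context, a minimal sketch of the one-bridge-per-NIC setup described above, assuming Xen 3.x with the stock network-bridge script (the wrapper name network-multi-bridge and the eth0-eth4 / xenbr0-xenbr4 names are illustrative assumptions, not necessarily what is in use here):

    #!/bin/sh
    # /etc/xen/scripts/network-multi-bridge  (assumed name)
    # Call the stock network-bridge script once per physical NIC so each
    # card ends up behind its own bridge.
    dir=$(dirname "$0")
    "$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
    "$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
    "$dir/network-bridge" "$@" vifnum=2 netdev=eth2 bridge=xenbr2
    "$dir/network-bridge" "$@" vifnum=3 netdev=eth3 bridge=xenbr3
    "$dir/network-bridge" "$@" vifnum=4 netdev=eth4 bridge=xenbr4

Point xend at the wrapper in /etc/xen/xend-config.sxp:

    (network-script network-multi-bridge)
    (vif-script vif-bridge)

and pin each guest to its bridge in the domU config file, 1-2 guests per bridge, e.g.:

    # /etc/xen/vm1  (guest config; the MAC address is made up)
    vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr0' ]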


Thanks,
Glen

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users