I'm trying to get some Xen VMs working on my CentOS 5.3 server. The VMs
are installed and everything is working except the network. I believe the
problem is with the bridge. I have two Ethernet adapters:
eth0 - external NIC running routable IP
eth1 - internal NIC running 10.0.xxx.xxx network
The internal machines behind the server are all NATed through eth1, which
has an IP of 10.0.0.1. I have a vanilla install of CentOS 5.3 with the
Xen bits. Here is a list of the installed Xen and support packages:
[root@cerberus ~]# rpm -qa |grep -i xen
kernel-xen-2.6.18-92.1.17.el5
kernel-xen-2.6.18-92.1.22.el5
xen-libs-3.0.3-80.el5_3.2
xen-3.0.3-80.el5_3.2
kernel-xen-2.6.18-128.1.10.el5
[root@cerberus scripts]# rpm -qa |grep -i libvirt
libvirt-0.3.3-14.el5_3.1
libvirt-python-0.3.3-14.el5_3.1
[root@cerberus init.d]# rpm -qa |grep -i dnsmasq
dnsmasq-2.45-1.el5_2.1
I'm running the 2.6.18-128.1.10.el5xen kernel:
[root@cerberus init.d]# uname -r
2.6.18-128.1.10.el5xen
I only have one static routable IP address, and will be port-forwarding on
the firewall (iptables) to the VMs for the services they will be running.
My hope is to have the VMs running on 10.0.2.xxx, but that's not a
requirement. It seems that the CentOS distro is set up for
192.168.122.xxx, so if that's needed, I'll deal with it.
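The kind of forwarding I have in mind is roughly this (a sketch only; the
guest address 10.0.2.10 and port 80 are placeholder values, not my actual
setup):

```sh
# Hypothetical example: forward inbound HTTP arriving on the public NIC
# to a VM at 10.0.2.10 (placeholder address).
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.2.10:80
# Allow the forwarded traffic through the FORWARD chain.
iptables -A FORWARD -p tcp -d 10.0.2.10 --dport 80 -j ACCEPT
```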
Here is the routing table:
[root@cerberus ~]# route
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref  Use Iface
66.14.92.0      *                255.255.255.0   U     0      0      0 eth0
192.168.122.0   *                255.255.255.0   U     0      0      0 vnet0
169.254.0.0     *                255.255.0.0     U     0      0      0 eth0
10.0.0.0        *                255.0.0.0       U     0      0      0 eth1
default         L408.AUSTTX-DSL  0.0.0.0         UG    0      0      0 eth0
I tried to change the bridge over by changing all the eth0 references in
the /etc/xen/scripts directory to eth1:
[root@cerberus scripts]# grep eth1 *
network-bridge-bonding:netdev=${netdev:-eth1}
network-nat:netdev=${netdev:-eth1}
vif-common.sh: local nd=${netdev:-eth1}
but this isn't working. :( I have Dom1 and Dom2 up and running, set up as
192.168.122.2 and 192.168.122.3, but they are unable to ping or connect
to 192.168.122.1. Dom0 can see and connect to vnet0:
[root@cerberus scripts]# ping -c2 192.168.122.1
PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.127 ms
64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.108 ms
--- 192.168.122.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.108/0.117/0.127/0.014 ms
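If editing the scripts directly is the wrong approach, my understanding
from the Xen docs (a sketch, not something I've verified on this box) is
that the bridged device is normally selected in /etc/xen/xend-config.sxp
instead:

```
# /etc/xen/xend-config.sxp -- tell network-bridge to use eth1
# (sketch from the Xen 3.0.3 docs; with netdev=eth1 the script
#  should create a bridge named xenbr1)
(network-script 'network-bridge netdev=eth1')
(vif-script vif-bridge)
```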
The last time I used these VMs was back when this was still a CentOS 5.0
(perhaps 5.1) box, but somewhere along the way the RPM updates that
brought it up to 5.3 changed something, and now I'm unable to get things
working. :(
I had some problems with named, dhcpd, dnsmasq, and libvirtd conflicting,
such that dnsmasq wouldn't start because its ports were already in use,
but I set named and dhcpd to listen only on eth0 and eth1, and dnsmasq to
listen only on vnet0, so all four services are now functional (as near as
I can tell).
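For reference, this is roughly how I pinned the services to their
interfaces (option names from memory, and <eth0-ip> stands in for the
public address, so treat this as a sketch):

```
# /etc/dnsmasq.conf -- bind dnsmasq to the libvirt bridge only
interface=vnet0
bind-interfaces

# /etc/named.conf (options block) -- keep named off vnet0
listen-on { <eth0-ip>; 10.0.0.1; };
```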
I've tried setting the DomUs up with static addresses: no joy. I've tried
using the dnsmasq DHCP, also to no avail. I spent the weekend searching
the archives, googling my brains out, and trying experiment after
experiment to get one of my DomUs to connect to either 192.168.122.1 or
10.0.0.1.
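In case it helps diagnose this, the check I keep coming back to is whether
the DomU vifs are actually enslaved to the right bridge:

```sh
# List bridges and their member interfaces; the DomU vifs
# (vif1.0, vif2.0, ...) should appear under the bridge in use.
brctl show
# Confirm the bridge is up and carrying 192.168.122.1.
ifconfig vnet0
```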
Any assistance here would be GREATLY appreciated... even if it were just
an example working configuration that allows a DomU to connect to an
internal private network on eth1.
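Even just an example vif line would help. For what it's worth, my guess at
what a working eth1-bridged guest config would contain is something like
this ("xenbr1" being the bridge I believe network-bridge creates for
netdev=eth1; I'd adjust it to whatever brctl show actually reports, and
the path and MAC are illustrative):

```
# /etc/xen/domu1 (illustrative path) -- attach the guest to the eth1 bridge
vif = [ 'mac=00:16:3e:00:00:01, bridge=xenbr1' ]
```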
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users