On Fri, 24 Sep 2004 21:11:54 +0100
Ian Pratt <Ian.Pratt@xxxxxxxxxxxx> wrote:
> > Failed to obtain physical IRQ 4
>
> Physical IRQ 4 is normally used by serial UARTs.
>
> Have you enabled support for e.g. the 16550 uart in your dom0
> kernel config, but have also told xen to use com1 ? It's
> generally best to let xen have com1, then the virtual xencons
> driver will enable Linux to share the serial line with xen. This
> is what happens with the default configuration.
That was it, thanks! I wouldn't have figured that out on my own (and the
cluster admins are traveling :). The console server log noise went away.
I was also unable to get a shell through the console server before now,
so this has let me tackle the networking problem:
>
> > 2. I don't know if this is related, but after I'm logged in, then "xend
> > start" cuts off all network access. I assume this has something to do
> > with the bridging code, but I can't figure out what. A look at
> > xen.xend.* didn't help me much. The brctl tools seem to be working
> > normally. I feel like I'm missing something obvious..
>
> Odd. Add 'bash -x' at the top of the /etc/xen/scripts/network
> and then you should be able to see what the script is doing.
>
> It creates a bridge and then attempts to transfer the original
> network setup over to the bridge. Positing the output of
> "ip link show" and "ip route show" before and after should help
> figure out what's going wrong.
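Ian's capture-and-compare suggestion can be scripted in a few lines (a sketch; the script path is the default Xen install location, and the /tmp filenames are my own):

```shell
#!/bin/sh
# Snapshot interface and routing state before the Xen network
# script runs, so the before/after states can be diffed.
ip link show  > /tmp/links-before.txt
ip route show > /tmp/routes-before.txt

# 'sh -x' traces every command the script executes, which is
# equivalent to adding 'bash -x' to the script's first line.
sh -x /etc/xen/scripts/network start 2> /tmp/network-trace.txt

ip link show  > /tmp/links-after.txt
ip route show > /tmp/routes-after.txt

# Show what the script changed.
diff -u /tmp/links-before.txt  /tmp/links-after.txt
diff -u /tmp/routes-before.txt /tmp/routes-after.txt
```

This must run as root on the dom0 host, and the trace in /tmp/network-trace.txt shows exactly where the script stalls or fails.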
Adding 'bash -x' didn't produce any output, but having the console back
let me see the problem: NFS was hanging. Problem solved. Sorry, I should
have connected the dots; there was a message about this earlier...
>
> > 3. This seems related to the first problem, I get this error when trying
> > to add the tun module:
> >
> > # modprobe tun
> > /lib/modules/2.4.27-xen0/kernel/drivers/net/tun.o: init_module:
> > Input/output error
> > Hint: insmod errors can be caused by incorrect module parameters, including
> > invalid IO or IRQ parameters
> > /lib/modules/2.4.27-xen0/kernel/drivers/net/tun.o: insmod
> > /lib/modules/2.4.27-xen0/kernel/drivers/net/tun.o failed
> > /lib/modules/2.4.27-xen0/kernel/drivers/net/tun.o: insmod tun failed
>
> Odd. Are you sure you installed the modules that were built for
> this kernel version?
They are. As an experiment, I wiped the /lib/modules/2.4.27-xen0
directory and redid 'make world' and 'make install' (after recompiling
to fix the serial IRQ 4 issue), and I still get the same error. Earlier
I neglected to include this message from the kernel ring buffer:
"Universal TUN/TAP device driver 1.5 (C)1999-2002 Maxim Krasnyansky
tun: Can't register misc device 200"
(Note that I can create the tun character device file just fine;
/dev/net/tun was created with "mknod /dev/net/tun c 10 200".)
Does Xen do something curious with special character files in domains?
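One quick check (a guess at the cause, not a confirmed diagnosis): "Can't register misc device 200" suggests something may already hold misc minor 200, which /proc/misc lists, so it's worth looking before and after the failed modprobe:

```shell
# List the registered misc-device minors; on a working system the
# tun driver claims minor 200 ("200 tun").
cat /proc/misc

# Check whether minor 200 is already taken by another driver.
grep '^200 ' /proc/misc

# Verify the device node itself: it should be a character device
# with major 10 (misc) and minor 200.
ls -l /dev/net/tun
```

If another entry already owns minor 200, that module would need to be unloaded (or the conflict resolved) before tun can register.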
> Out of interest, what do you want to use tun for? Hosting other
> UML instances from within xen guest instances? I guess this
> should work, though I've never tried it.
I am using tun with OpenVPN; my whole architecture relies on the tun
module. I assumed it would work with Xen because it is only used in
dom0 (so I could use the native Linux driver).
The objective is to start VMs on remote grid nodes and L2-bridge all of
their network traffic to another network, where they are used as the
backend for a grid node, i.e., a completely portable, custom
environment to run jobs in.
The VMs themselves have no hand in the bridging (by design). I make a
tap interface on the host resource for each VM and bridge the VM
directly to that tap interface (the tap interface is one end of the L2
tunnel). The VMs boot and get DHCP leases on the other end of the
tunnels, in a special /30 subnet (private addresses). This is all done
to create a completely isolated network.
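The per-VM setup on the host looks roughly like this (a sketch with hypothetical names: 'vmbr0' for the per-VM bridge, 'tap0' for the tunnel endpoint, 'vif1.0' for the VM's backend interface; the real names depend on the Xen config):

```shell
# Create a persistent tap interface to serve as the local end of
# the L2 tunnel; OpenVPN attaches to it with '--dev tap0'.
openvpn --mktun --dev tap0

# Bridge the VM's backend interface directly to the tap interface,
# so all VM traffic goes into the tunnel and nowhere else.
brctl addbr vmbr0
brctl addif vmbr0 tap0
brctl addif vmbr0 vif1.0

# Bring everything up; note the bridge carries no IP address,
# keeping the host off the VM's isolated network.
ip link set tap0 up
ip link set vmbr0 up
```

Because each VM gets its own bridge and tap pair, traffic from different VMs on the same host is never bridged together locally.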
On the other end of the tunnel there's a firewall controller that
dynamically routes traffic to certain ports on the VM (there are of
course grid tools involved to start jobs in the VM).
This is all done to sandbox the VMs, to make it reasonable for a site
administrator to host VMs (i.e., not letting VM traffic onto their
network), to introduce VMs to grids without needing a public IP for
each, and to allow open network connections to continue after a VM
migrates to an entirely different site.
If you're curious (without being too lengthy): I don't use one LAN with
ebtables on the virtual-LAN host because 1) each resource can tunnel
many VMs, which would all be bridged together before hitting the second
bridge (the tunnel), and avoiding that would require a setup very
similar to the current one; 2) this scheme avoids keeping track of
Distinguished Name->IP->MAC associations when DN->IP is all we really
want; 3) it avoids any MAC-spoofing issues; and 4) it avoids needing
ebtables+iptables when I could just learn iptables :)
P.S. I don't know how this type of bridging might be useful to others
outside grid computing, but I will forward my paper (a month or two
out) if anyone is interested in using Xen in a similar way.
Thanks for all the help, most appreciated!!
>
> Ian
>
>
> -------------------------------------------------------
> This SF.Net email is sponsored by: YOU BE THE JUDGE. Be one of 170
> Project Admins to receive an Apple iPod Mini FREE for your judgement on
> who ports your project to Linux PPC the best. Sponsored by IBM.
> Deadline: Sept. 24. Go here: http://sf.net/ppc_contest.php
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxxxx
> https://lists.sourceforge.net/lists/listinfo/xen-devel
>