On Fri, Nov 07, 2003 at 10:03:41PM +0000, Ian Pratt wrote:
> > First, I noted that xen_nat_enable was *not* built along with the
> > other tools in xeno-clone/install/bin. Is this still needed (per the
> > README.CD instructions, for a NAT-based virtual host, rather than
> > IP-based)?
>
> It's a script rather than a binary.
Yes. I was just worried about versioning, since we've been
warned about keeping the xi_ programs in sync.
> The current 'loop through domain0' approach to NAT is not the
> long term solution (we're adding NAT to Xen).
>
> I'm afraid I'm not entirely surprised that xen_nat_enable doesn't
> play well with your firewall.
I'll do a little more diagnosis later. What I think happened,
though, is that the nat* rules the script adds somehow discarded my
existing filter* rules. I was also getting complaints about the
mangle* table needing an iptables module that could not be found
(this was while I was trying to re-add my default rules).
iptables is a big pain no matter what, but adding nat* rules
(especially when there were none there in the first place) seems
like it should just work.
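Next time I'll snapshot the tables before and after running the script,
so I can see exactly what gets flushed. Just a sketch with the stock
iptables tools (nothing Xen-specific, and the file names are arbitrary):

$ iptables-save > /root/rules.before     # dump filter/nat/mangle as they are now
$ sh ./xen_nat_enable                    # the script copied from the CD
$ iptables-save > /root/rules.after
$ diff -u /root/rules.before /root/rules.after
$ iptables-restore < /root/rules.before  # if it did eat the filter rules, this rolls everything back (NAT included)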
> Are you short of IP addresses? I'd certainly recommend using one
> IP per guest for the moment unless you really have to use NAT. Of
> course, you don't need to use NAT if you only want to do
> inter-guest communication (you can use the 169.254.1.X addresses
> directly).
1) Yes, I'll have IP addresses to play with, but I don't have them yet as of today.
2) Hmmm -- this does not work for me. Any quick guess at what to try fixing?
$ xenctl domain list
id: 0 (Domain-0)
  processor: 0
  has cpu: true
  state: 0 active
  mcu advance: 10
  total pages: 192000
id: 2 (XenoLinux)
  processor: 0
  has cpu: false
  state: 1 stopped
  mcu advance: 10
  total pages: 24576
$ ifconfig eth0:0
eth0:0    Link encap:Ethernet  HWaddr 00:B0:D0:DF:FA:ED
          inet addr:169.254.1.0  Bcast:169.254.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
$ ping -c1 169.254.1.0
PING 169.254.1.0 (169.254.1.0) from 169.254.1.0 : 56(84) bytes of data.
64 bytes from 169.254.1.0: icmp_seq=1 ttl=64 time=0.083 ms
--- 169.254.1.0 ping statistics ---
1 packets transmitted, 1 received, 0% loss, time 0ms
rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
$ ping -c1 169.254.1.1
PING 169.254.1.1 (169.254.1.1) from 169.254.1.0 : 56(84) bytes of data.
[times out]
--- 169.254.1.1 ping statistics ---
1 packets transmitted, 0 received, 100% loss, time 0ms
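A few more plain diagnostics I plan to run after the reboot, in case
they narrow down where the second ping dies (standard tools, nothing
Xen-specific):

$ route -n | grep 169.254          # is there a connected route for the link-local net?
$ arp -n 169.254.1.1               # is there an ARP entry for the guest, or is it incomplete?
$ tcpdump -i eth0 -n arp or icmp   # does the echo request (and any ARP) actually hit the interface?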
I saw nat* rules for this, and also some for port 2201. But then
I re-ran xen_nat_enable and locked myself out again. I'll reboot
and look some more, but meanwhile maybe you can tell me:
- Should 169.254.1.1 be ping-able from 169.254.1.0?
- Which of these should I use: "ssh -p2201 root@xxxxxxxxxxx",
  "ssh -p2201 root@xxxxxxxxxxx", "ssh -p2201 root@localhost",
  or the machine's native IP address (137. ...)?
Finally, and this concludes today's confusion: I seem unable to
get any console output after the kernel boots. I redirected with
"console=xencons0", but even after setting up NAT I don't get
anything, and /dev/tty0 didn't look any different.
I'd *really* like to redirect the output to a file, or to the
physical console (tty0). I suspect this is another firewall
issue with the console messages going over UDP through NAT, but if
there is a workaround that gets them into a file I'd greatly prefer it.
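Until there's a better answer, my plan is just to confirm whether the
console packets are reaching domain 0 at all, and to get them onto disk
if they are. A sketch with tcpdump (assuming nothing beyond the console
messages being UDP on the link-local net):

$ tcpdump -i eth0 -n udp and net 169.254.0.0/16                              # are console packets arriving?
$ tcpdump -i eth0 -n -w /tmp/xeno-console.pcap udp and net 169.254.0.0/16    # stash the raw packets in a file

That's only a capture, not a readable log, but at least it would tell me
whether the firewall/NAT is eating the packets. If there is a console-reader
tool in install/bin (I haven't looked yet), redirecting its stdout to a file
would be the obvious cleaner workaround.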
OK, one more random question: I noted that the new domain
wanted an initrd.gz, but I did not get a new one in the
xeno-clone tree. "mkinitrd -k image.gz -i initrd.gz" failed
("couldn't find modules"). I copied the initrd from the
CD and it seems to work, but that may not keep working forever.
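My guess is that mkinitrd failed because the modules for the xenolinux
kernel were never installed under /lib/modules. Something like this might
be the real fix (a sketch only; I'm guessing at the tree name and the
ARCH= value from the build docs, and the image/initrd paths are placeholders):

$ cd xenolinux-2.4.22                # or whatever the built tree is called in xeno-clone
$ make ARCH=xeno modules
$ make ARCH=xeno modules_install     # populates /lib/modules/<version> so mkinitrd can find them
$ mkinitrd -k /boot/xenolinux.gz -i /boot/initrd-xeno.gz

If that works, I can stop borrowing the initrd from the CD.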
> > I copied & ran the xen_nat_enable from the CD, and immediately was
> > unable to access my machine to/from the network (I had already run
> > "ifconfig eth0:0 169.254.1.0 up").
> >
> > What I found was that the SuSEfirewall default configuration did not
> > get along well with whatever changes to iptables were made by
> > xen_nat_enable. My solution, which needs to be tuned later, was to
> > edit /etc/sysconfig/SuSEfirewall2 to greatly loosen the firewall. I
> > then restarted it:
>
> Another thing to watch out for is that some distributions
> 'helpfully' create random link-local 169.254.x.x addresses for
> all interfaces automatically. This doesn't play well with our use
> of link-local addresses. e.g. you have to nail this in RH9 with ZEROCONF=NO
> in ifcfg-eth0
I'm using SuSE, which doesn't seem to do this. However, the
SuSE iptables setup is *really* much more convoluted than
Red Hat's: they bury it under multiple layers of scripts and
config files...
-- Greg