To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Error: Kernel image does not exist: /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
From: nbitspoken <nbitspoken@xxxxxxxxxxx>
Date: Fri, 18 Aug 2006 10:39:20 -0400
Delivery-date: Fri, 18 Aug 2006 07:38:21 -0700
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.5 (X11/20060719)

Hello,

I am trying to boot an RHAS 4 release 3 guest kernel (the binary
release available through Red Hat Network) from a SLES 10 dom0 host
(the binary release available through SUSE's software distribution
channel). Both systems are installed as physical partitions in a
standard multiboot configuration on a recent-vintage HP Pavilion
(zd7380) notebook PC with a single built-in 5400 RPM hard drive and
2 GB of RAM (more like 1.5 GB according to 'free'). I have been
struggling with this problem for several days, following an otherwise
uneventful boot into the SLES 10 domain 0 kernel (the one exception
being the proprietary nvidia driver, which I had to uninstall). The
problem is that I cannot get beyond xm's catatonic retort:

       Error: Kernel image does not exist:
/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1

whenever I try to boot the guest domain mentioned above using the
command line (per the Xen 3.0 user manual):

       xm create [-c] /etc/xen/vm/rhas4,

where rhas4 is the name of the configuration file for the guest domain
(see below).  To offset the natural suspicion that I have simply got the
path wrong, I submit the following transcript of my 'path verification'
procedure (executed from a running Dom0):

reproach:~ # mount /dev/hda7 /mnt/hda7

# Verify that the desired device has been exported to the guest domain:
reproach:~ # grep phy /etc/xen/vm/rhas4
# Each disk entry is of the form phy:UNAME,DEV,MODE
disk = [ 'phy:hda7,hda7,w' ]
# disk = [ 'phy:vg1/orabase1,/oracle/orabase1,w' ]
# disk = [ 'phy:vg1/oas1,vg1,/oracle/oas1,w' ]

# Verify that the /etc/fstab file in the guest domain agrees with the
# exported name of the desired device:
reproach:~ # grep hda7 /mnt/hda7/etc/fstab
/dev/hda7                /                       ext3    defaults        1 1

# Compare the kernel and ramdisk lines from the config file with the
# paths, relative to the exported device, of the files to which these
# lines purport to refer:

reproach:~ # cat /etc/xen/vm/rhas4 | grep "kernel ="
                          kernel = "/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1"
reproach:~ #   ls  /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
                         /mnt/hda7/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1

reproach:~ # cat /etc/xen/vm/rhas4 | grep "ramdisk ="
                     ramdisk = "/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img"
reproach:~ # ls  /mnt/hda7/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img
                       /mnt/hda7/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img

I have indented all output lines to facilitate visual comparison of the
relevant lines.

# Now attempt to boot the guest domain using the xm command-line utility:
reproach:~ # umount /dev/hda7
reproach:~ # xm create -c /etc/xen/vm/rhas4
Using config file "/etc/xen/vm/rhas4".
Error: Kernel image does not exist: /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1
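
One thought, in case my mental model is wrong: I have been assuming
that the kernel = and ramdisk = paths are resolved against the
exported guest disk, but if xm instead resolves them against dom0's
own file system, then the check that matters would be the one below (a
hypothetical check, to be run from dom0 with /dev/hda7 unmounted):

# Does dom0's own /boot (not the mounted guest partition) contain
# the kernel and ramdisk named in the config file?
reproach:~ # ls /boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1 \
                /boot/initrd-2.6.16-xen3_86.1_rhel4.1.img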

Some help would be very much appreciated. BTW, the RHAS 4.3
installation is the original partition. It is not critical, but I
would prefer not to destroy it, because I have invested considerable
time installing and configuring its contents: there is, for instance,
an Oracle 10g Enterprise database, plus app servers, IDEs, etc. I have
little doubt that I am putting that system at some risk, but how much
risk, assuming that I don't allow other domains write access to the
guest file system? Also, what will happen if I try to boot the guest
partition outside of Xen (i.e. natively from GRUB) after running it as
a Xen domain (assuming I ever get beyond the "kernel image does not
exist" stage)?
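
For what it is worth, the safeguard I have in mind for the
write-access question is just the MODE field of the disk line
documented in the config file below, i.e. exporting the partition
read-only to any domain other than the guest itself (a sketch using my
current device names):

# Hypothetical read-only export of the guest partition for a second
# domain: MODE 'r' instead of 'w' should prevent writes to hda7.
disk = [ 'phy:hda7,hda7,r' ]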

Having said that, I don't want to let the focus slip to safety issues.
My first priority is just to get off the ground with booting
the guest domain.

TIA,

nb

#  -*- mode: python; -*-
#============================================================================
# Python configuration setup for 'xm create'.
# This script sets the parameters used when a domain is created using
# 'xm create'.
# You use a separate script for each domain you want to create, or
# you can set the parameters for the domain on the xm command line.
#============================================================================

#----------------------------------------------------------------------------
# Kernel image file.
kernel = "/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1"

# Optional ramdisk.
ramdisk = "/boot/initrd-2.6.16-xen3_86.1_rhel4.1.img"

# The domain build function. Default is 'linux'.
#builder='linux'

# Initial memory allocation (in megabytes) for the new domain.
memory = 512

# A name for your domain. All domains must have different names.
name = "rhas1"

# List of which CPUs this domain is allowed to use; by default Xen picks.
#cpus = ""         # leave to Xen to pick
#cpus = "0"        # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5

# Number of Virtual CPUS to use, default is 1
#vcpus = 1

#----------------------------------------------------------------------------
# Define network interfaces.

# By default, no network interfaces are configured.  You may have one
# created with sensible defaults using an empty vif clause:
#
# vif = [ '' ]
#
# or optionally override backend, bridge, ip, mac, script, type, or vifname:
#
# vif = [ 'mac=00:16:3e:00:00:11, bridge=xenbr0' ]
#
# or more than one interface may be configured:
#
# vif = [ '', 'bridge=xenbr1' ]

# vif = [ '' ]
vif = [ 'mac=00:16:3e:17:b9:d8' ]
#----------------------------------------------------------------------------
# Define the disk devices you want the domain to have access to, and
# what you want them accessible as.
# Each disk entry is of the form phy:UNAME,DEV,MODE
# where UNAME is the device, DEV is the device name the domain will see,
# and MODE is r for read-only, w for read-write.
disk = [ 'phy:hda7,hda7,w' ]
# disk = [ 'phy:vg1/orabase1,/oracle/orabase1,w' ]
# disk = [ 'phy:vg1/oas1,vg1,/oracle/oas1,w' ]

#----------------------------------------------------------------------------
# Define to which TPM instance the user domain should communicate.
# The vtpm entry is of the form 'instance=INSTANCE,backend=DOM'
# where INSTANCE indicates the instance number of the TPM the VM
# should be talking to and DOM provides the domain where the backend
# is located.
# Note that no two virtual machines should try to connect to the same
# TPM instance. The handling of all TPM instances does require
# some management effort in so far that VM configuration files (and thus
# a VM) should be associated with a TPM instance throughout the lifetime
# of the VM / VM configuration file. The instance number must be
# greater or equal to 1.
#vtpm = [ 'instance=1,backend=0' ]

#----------------------------------------------------------------------------
# Set the kernel command line for the new domain.
# You only need to define the IP parameters and hostname if the domain's
# IP config doesn't set them itself, e.g. in ifcfg-eth0 or via DHCP.
# You can use 'extra' to set the runlevel and custom environment
# variables used by custom rc scripts (e.g. VMID=, usr= ).

# Set if you want dhcp to allocate the IP address.
dhcp="dhcp"
# Set netmask.
netmask="255.255.255.0"
# Set default gateway.
gateway="192.168.1.1"
# Set the hostname.
# hostname= "vm%d" % vmid
hostname = "absolute"
# Set root device.
root = "/dev/hda7 ro"

# Root device for nfs.
#root = "/dev/nfs"
# The nfs server.
#nfs_server = '169.254.1.0'
# Root directory on the nfs server.
#nfs_root   = '/full/path/to/root/directory'

# Sets runlevel 5.
# extra = "5"
extra = 'TERM=xterm'
#----------------------------------------------------------------------------
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
