WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-devel] virtual disk/block-device problem

On Fri, 2003-12-19 at 18:23, Mike Hibler wrote:
> I am trying to write a script to handle setup of domains using virtual
> block devices for their root FS and have been unable to get the virtual
> devices to work with any sort of consistency.  This is using xen-1.1.bk
> on Redhat 7.3 (I had to rebuild all the tool binaries to run on 7.3).

<-- snip -->

Everything looks good so far, but my memory is foggy...

> Write access is given to domain0 (-n0) so I can initialize it.  My assumption
> here is that the <vdb_num> given to the -v option translates into /dev/xvda
> for -v0, /dev/xvdb for -v1, etc.  Is that correct?

Yes that is correct.
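For illustration, that assumed index-to-letter mapping can be sketched as follows (a minimal shell sketch; the rule is inferred from the answer above, not taken from the xenctl source):

```shell
# Sketch of the assumed -v index to device-node mapping (inferred, not
# verified against the tools): index 0 -> /dev/xvda, 1 -> /dev/xvdb, ...
letters=abcdefghijklmnopqrstuvwxyz
for n in 0 1 2; do
  letter=$(echo "$letters" | cut -c $((n + 1)))
  echo "-v$n -> /dev/xvd$letter"
done
```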

> 
> BTW, do I need to create a distinct virtual device which grants access to
> the domain whose kernel is going to use the virtual disk as its root?
> Currently I do not, I just set root=/dev/xvdN in xi_build where xvdN is the
> device I create/use here for dom0 initialization.  Do I need to do a
> "xenctl physical grant" for either the virtual block device or the
> partition on which the virtual disks reside?


See below.

> Moving on:
> 
>       xen_refresh_dev /dev/xvda
>       xen_refresh_dev /dev/xvdb
>       xen_refresh_dev /dev/xvdc
> 
> I read in the mailing list archive that this refresh is needed...
> Now I run "fdisk -l" on each of them and get:
> 
>       Disk /dev/xvda: 255 heads, 63 sectors, 16 cylinders
>       Disk /dev/xvdb: 255 heads, 63 sectors, 32 cylinders
>       Disk /dev/xvdc: 255 heads, 63 sectors, 48 cylinders
> 
> Note the ever increasing number of cylinders.  This makes mkfs think that
> xvdb and xvdc are larger than they really are.  mkfs does succeed, but you
> get a lot of:
> 

Ah yes, this looks like a 1.0 problem I had. I haven't tried 1.1
myself, but Ian tells me there are people using the 1.1 VD/VBD stuff. I
can confirm the behavior you are seeing: /dev/xvdb's extent is computed
from offset 0 instead of from sizeof(xvda), so each device's reported
size includes the devices that precede it.
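To put numbers on the symptom: fdisk reports capacity as heads x sectors-per-track x cylinders x 512 bytes, so the inflated cylinder counts translate directly into inflated sizes (a quick arithmetic sketch using the 255-head, 63-sector geometry from the fdisk output quoted above):

```shell
# fdisk capacity = heads * sectors/track * cylinders * 512 bytes,
# using the geometry quoted above (255 heads, 63 sectors/track).
heads=255; sectors=63
size_a=$(( heads * sectors * 16 * 512 ))   # xvda: 16 cylinders
size_b=$(( heads * sectors * 32 * 512 ))   # xvdb: 32 cylinders (inflated)
echo "xvda: $size_a bytes"
echo "xvdb: $size_b bytes"
# If xvdb is really the same size as xvda, mkfs will lay out a filesystem
# twice as large as the device, hence the errors once it writes past the end.
```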

Some additional information about my experiences in 1.0/1.1: 

If you set root=/dev/sda5 and then do a physical grant on sda5, you'll
get some interesting results when you write to the filesystem. What I
believe happens is that the writes are executed twice in the 1.0 tree.
This doesn't seem to happen in the 1.1 tree.

To answer your question about granting physical access: the way it is
supposed to work (I believe) is that root=/dev/XXXX implies a physical
grant, but that is not the case in the 1.0 tree. In the 1.1 tree it
seems to work, but only on the first disk. I could never get the
XenoLinux domains to see root=/dev/sdb* unless I did a physical grant in
their startup script.
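A sketch of that startup-script workaround, as it might look (hypothetical: the exact xenctl arguments, the domain number, and the sda6 partition name are all assumptions, not taken from a working configuration):

```shell
# Hypothetical workaround sketch: explicitly grant the new domain physical
# access to its root partition instead of relying on root= to imply it.
# (Exact xenctl argument syntax is assumed; check your tree's xenctl usage.)
xenctl physical grant -n1 -psda6 -w   # give domain 1 write access to sda6
xi_build ... root=/dev/sda6           # then point the domain's kernel at it
```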


I suppose I should test under -unstable but I have my own unstable BK
tree that I experiment on.

I hope all that is accurate; a lot of my devel time happened between
11 PM and 7 AM (GMT-5), so it's a little fuzzy unless I'm sitting at that
workstation.


Stephen Evanchik



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel