[Xen-devel] Further problems with HyperSCSI and vbds...

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Further problems with HyperSCSI and vbds...
From: sven.kretzschmar@xxxxxx
Date: Tue, 14 Oct 2003 00:41:49 +0200 (MEST)
Cc: Keir.Fraser@xxxxxxxxxxxx
After applying the new patches/changesets from the Xen project team
(thanks again :), vbds and vds are working with local hard disks
(e.g. /dev/hda) as expected.
I am now also able to load and use the HyperSCSI module...
...but only in a rather restricted way :-(

As long as there is no vbd involved, everything works as expected:

*) In domain0 I can fdisk /dev/sda  (which is "emulated" by the HyperSCSI
    kernel module)
*) I can put a filesystem on /dev/sdaX and mount it in domain0.

But as soon as I use a vbd to access it (either by attaching a physical
/dev/sdaX partition to the vbd, or by attaching a vd that uses
a /dev/sdaX partition), it no longer works.
(Even when using xen_refresh_dev.)

fdisk cannot open /dev/xvda in this case (unable to read).
mkfs.ext2 starts, but then complains about a "short read" on
block 0. It continues to write the filesystem, but I found out
that it does not really access the physical disk via HyperSCSI
at all. It seems it does not even access the local "fake" /dev/sda
device, because there is no network traffic from the client to the
server where the physical disk is located.
Trying to mount /dev/xvda then again results in "short read on block 0"
and failure to read the superblock, etc.

I think the problem here is that HyperSCSI attaches /dev/sda
without really knowing anything about Xen ;-)
Xen, in turn, knows nothing about this "faked" physical SCSI
device on /dev/sda; only xenolinux does, because of the loaded
HyperSCSI kernel module driver.

So, perhaps the virtual block driver in xenolinux tries to access the
faked physical /dev/sda device via Xen, but since Xen does not know about
it, this somehow does not really work. (Btw: shouldn't this result in some
printk() error messages from the xenolinux virtual block driver?)
The virtual block driver in xenolinux should instead recognize that
this is not a physical device registered with Xen and should
forward these disk requests and ioctls directly to the /dev/sda(X) device,
instead of sending them to Xen.
Of course, this should probably only be allowed for devices (or device
drivers) loaded in domain0.
Of course these are only assumptions and thinking out loud ... ;-)
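
Just to make the fall-through idea a bit more concrete, here is a very
rough toy sketch in plain userspace C. None of this is real xl_block.c
code; the structure layout and the helper names (xen_knows_device(),
forward_to_local_driver(), send_to_xen()) are made up purely to
illustrate the dispatch I have in mind:

/* Toy model of the proposed fall-through -- NOT real xl_block.c code.
 * All structure layouts and helper names below are invented. */
#include <stdio.h>
#include <stdbool.h>

struct blk_request {
    unsigned int  device;    /* major/minor of the target device */
    unsigned long sector;    /* first sector of the request      */
    unsigned long nsectors;  /* number of sectors to transfer    */
    void         *buffer;    /* data buffer                      */
};

/* Stub: would ask Xen whether it registered this physical device at boot. */
static bool xen_knows_device(unsigned int device) { return false; }

/* Stub: domain0 only -- hand the request to the local (HyperSCSI) driver. */
static int forward_to_local_driver(struct blk_request *req)
{
    printf("forwarding dev %#x, sector %lu to local driver\n",
           req->device, req->sector);
    return 0;
}

/* Stub: normal path -- queue the request on the ring shared with Xen. */
static int send_to_xen(struct blk_request *req)
{
    printf("sending dev %#x, sector %lu to Xen\n", req->device, req->sector);
    return 0;
}

/* Proposed dispatch: devices Xen never probed (e.g. HyperSCSI's faked
 * /dev/sda) stay inside the domain0 kernel instead of going via Xen. */
static int dispatch_request(struct blk_request *req, bool is_domain0)
{
    if (!xen_knows_device(req->device) && is_domain0)
        return forward_to_local_driver(req);
    return send_to_xen(req);
}

int main(void)
{
    struct blk_request req = { .device = 0x0801, .sector = 0,
                               .nsectors = 8, .buffer = NULL };
    return dispatch_request(&req, true);
}

In the real driver this decision would of course have to live in the
request function itself, and the "is this domain0?" and "does Xen know
this device?" checks would have to come from Xen rather than from stubs.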

I think one would at least have to change some code in xl_block.c and
xl_scsi.c to reach that goal.
Perhaps one could try to register the SCSI devices provided by the
HyperSCSI module as xenolinux virtual SCSI block devices?
(The code in xlscsi_init(xen_disk_info_t *xdi) in xl_scsi.c makes me
think this could perhaps work...)
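
To illustrate that second idea, here is an equally rough toy model of
feeding the HyperSCSI-provided disks into something like
xlscsi_init(xen_disk_info_t *xdi). I do not know the real layout of
xen_disk_info_t, so the fields below are only guesses for illustration:

/* Toy illustration only -- the real xen_disk_info_t layout is NOT
 * reproduced here, and this xlscsi_init() is just a stand-in. */
#include <stdio.h>

#define MAX_DISKS 16

typedef struct {
    unsigned short device;    /* device number (major/minor), assumed field */
    unsigned long  capacity;  /* size in sectors, assumed field             */
} xen_disk_t;

typedef struct {
    int        count;              /* number of disks in the array */
    xen_disk_t disk[MAX_DISKS];
} xen_disk_info_t;

/* Stand-in for the real xlscsi_init(): just report what it was given. */
static void xlscsi_init(xen_disk_info_t *xdi)
{
    for (int i = 0; i < xdi->count; i++)
        printf("registering virtual SCSI disk %#x (%lu sectors)\n",
               (unsigned)xdi->disk[i].device, xdi->disk[i].capacity);
}

int main(void)
{
    /* Pretend HyperSCSI exported one remote disk as /dev/sda (8,0). */
    xen_disk_info_t xdi = { .count = 1 };
    xdi.disk[0].device   = 0x0800;
    xdi.disk[0].capacity = 2097152;   /* ~1 GB at 512-byte sectors */

    xlscsi_init(&xdi);
    return 0;
}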

I know this might violate Xen's design principle of being the only
component with direct access to the hardware.
However, the /dev/sd* devices from HyperSCSI are not really local
hardware; they are only "faked" physical disks.

I would be interested in some thoughts on this from the Xen project
team and list readers, because I consider HyperSCSI an important
feature for xenolinux domains.
It would allow storing the complete filesystems of many domains, from
several physical machines running xen/xenolinux, on one big fileserver.
As HyperSCSI is a very fast and efficient protocol and implementation,
this would be considerably quicker and more efficient than using NFS
for the same task.
HyperSCSI can also export not only SCSI devices (disks, tapes, etc.) but
also IDE devices such as IDE disks and IDE CD writers as real physical
devices accessed over the LAN ( http://nst.dsi.a-star.edu.sg/mcsa/hyperscsi ).
Sorry for the little bit of HyperSCSI hype; I only wanted to explain my
interest in HyperSCSI in connection with Xen.

I hope there is a not-too-complicated solution to this problem.


Regards,

Sven





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel