This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] vscsi 2TB patches

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] vscsi 2TB patches
From: Samuel Kvasnica <bugreports@xxxxxxxxxxxxxx>
Date: Mon, 3 Jan 2011 21:11:29 +0100
Delivery-date: Mon, 03 Jan 2011 12:07:03 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: IMS AG
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv: Gecko/20101207 Lightning/1.0b2 Thunderbird/3.1.7
Hello xen developers,

the current Xen vscsi driver implementation has a nasty >2TB limitation.
Both the backend and frontend drivers need a patch - included in the
attachments.

Basically, for the frontend, just max_cmd_len needs to be set correctly.
For the backend, at least the READ_16 and WRITE_16 SCSI commands
were missing.
I also enabled/added some more SCSI commands to allow tape drives to
work properly.

Could somebody here please take care of adding this to the mainline code?
The SuSE people were not really interested, and the original author is
not really known, i.e. "Copyright by Fujitsu Limited". I'm really sick
of patching every new kernel over and over...

best regards,


Attachment: scsiback_2TB_fix.patch
Description: Text document

Attachment: scsifront_2TB_fix.patch
Description: Text document
