WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-users

Re: [Xen-users] iSCSI initiator problems accessing Infortrend storage

To: Werner Kuballa <wkuballa@xxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] iSCSI initiator problems accessing Infortrend storage
From: Pasi Kärkkäinen <pasik@xxxxxx>
Date: Thu, 27 Aug 2009 23:35:46 +0300
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 27 Aug 2009 13:36:24 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A965E3A02000066000108C5@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4A965E3A02000066000108C5@xxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)
On Thu, Aug 27, 2009 at 10:21:46AM -0700, Werner Kuballa wrote:
> 
> I am trying to use two Infortrend iSCSI systems (A16E-G2130-4, each with 
> 16x750GB in Raid 6) for our Oracle VM environment (three big Dell R900 
> servers).  When I copy files on the Infortrend I receive SCSI errors, 
> resulting in loss of connection.  I read some postings to this user group 
> indicating that others are successfully using Infortrend storage systems.  
> However, I have tried pretty much everything I can think of, but cannot get 
> the Infortrend working reliably with Dom0.  This is what I have tried: 
> 
> - simple iSCSI network connection: just one server connected to the 
> Infortrend over a single 1Gb Ethernet NIC (exclusively; no other connections 
> on the Infortrend) 
> 
> - when installing Oracle Enterprise Linux 5.3 natively on the R900, the 
> Infortrend works fine in all network configurations (single connection, 
> bonded NICs, etc.) 
> 

What is your dom0 OS? Oracle Enterprise Linux 5.3 and the Xen included
in it? 

Or something else? 

> - accessing an EMC AX4 iSCSI system works without any problems 
> 

That's weird.

> No matter what I tried, when utilizing the Infortrend in Dom0 I always get 
> errors as shown below.  And it gets worse when I am using ocfs2 (which I 
> eventually have to use) - ocfs2 fences off the system when these errors 
> occur and reboots the server. 
> 
> Infortrend Support has no solution for this problem; they maintain that Xen 
> is not a supported platform. 
> 
> Any suggestions would be appreciated. 
> 

Does this happen when only dom0 is running, i.e. with no domUs? 
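
The "ping timeout of 5 secs expired" lines followed by conn error (1011) in
the log come from open-iscsi's NOP-Out keepalive: the initiator pings the
target and drops the connection when no reply arrives in time. If dom0 is
only briefly stalled, relaxing those timeouts in /etc/iscsi/iscsid.conf can
mask short hiccups while you look for the real cause. A sketch only; the
values below are assumptions, not something verified against this setup:

```
# /etc/iscsi/iscsid.conf -- keepalive tuning (example values, adjust to taste)
# How often the initiator sends a NOP-Out ping (seconds)
node.conn[0].timeo.noop_out_interval = 10
# How long to wait for the NOP-In reply before declaring a conn error (1011)
node.conn[0].timeo.noop_out_timeout = 30
# How long to queue I/O before failing it up to multipath (seconds)
node.session.timeo.replacement_timeout = 120
```

Note this only buys headroom; with ocfs2 fencing in play, overly long
timeouts can delay failover, so treat it as a diagnostic knob.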

If it happens when you have load on the server, try giving dom0 more
weight (and dedicating a CPU core to it) so that it can always get the
CPU time needed for iSCSI processing.
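
On the Xen 3.x tooling that ships with OEL 5.3, that would look roughly
like the following (a sketch, assuming the credit scheduler and an 8-core
box; the domU name "myguest" is a placeholder, check yours with `xm list`):

```shell
# Raise dom0's credit-scheduler weight above the default 256 so it wins
# CPU time under contention
xm sched-credit -d Domain-0 -w 512

# Pin dom0's vCPU 0 to physical CPU 0 ...
xm vcpu-pin Domain-0 0 0

# ... and keep guests off CPU 0 (repeat for each domU / vCPU)
xm vcpu-pin myguest all 1-7
```

These settings do not survive a reboot, so you would re-apply them from an
init script or, for the pinning, via dom0 boot parameters.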

-- Pasi

> Regards, 
> Werner 
> 
> 
> 
> 
> Aug 27 05:16:24 ovm3 multipathd: IFT110: load table [0 8787689472 multipath 1 
> queue_if_no_path 0 1 1 round-robin 0 1 1 8:48 100] 
>   
> Aug 27 05:16:24 ovm3 multipathd: IFT110: event checker started  
> Aug 27 05:16:24 ovm3 multipathd: dm-0: add map (uevent)  
> Aug 27 05:16:24 ovm3 multipathd: dm-0: devmap already registered  
> Aug 27 05:16:24 ovm3 multipathd: dm-1: add map (uevent)  
> Aug 27 05:16:24 ovm3 multipathd: dm-1: devmap already registered  
> Aug 27 05:16:25 ovm3 iscsid: received iferror -38 
> Aug 27 05:16:25 ovm3 last message repeated 2 times 
> Aug 27 05:16:25 ovm3 iscsid: connection1:0 is operational now 
> Aug 27 05:18:08 ovm3 kernel: kjournald starting.  Commit interval 5 seconds 
> Aug 27 05:18:08 ovm3 kernel: EXT3 FS on dm-1, internal journal 
> Aug 27 05:18:08 ovm3 kernel: EXT3-fs: mounted filesystem with ordered data 
> mode. 
> Aug 27 05:19:16 ovm3 kernel: ping timeout of 5 secs expired, last rx 13940, 
> last ping 15190, now 16440 
> Aug 27 05:19:16 ovm3 kernel:  connection1:0: iscsi: detected conn error 
> (1011) 
> Aug 27 05:19:17 ovm3 kernel: sd 4:0:0:1: SCSI error: return code = 0x00020000 
> Aug 27 05:19:17 ovm3 kernel: end_request: I/O error, dev sdd, sector 49529536 
> Aug 27 05:19:17 ovm3 kernel: device-mapper: multipath: Failing path 8:48. 
> Aug 27 05:19:17 ovm3 kernel: sd 4:0:0:1: SCSI error: return code = 0x00020000 
> Aug 27 05:19:17 ovm3 kernel: end_request: I/O error, dev sdd, sector 
> 1569783144 
> Aug 27 05:19:17 ovm3 kernel: sd 4:0:0:1: SCSI error: return code = 0x00020000 
> Aug 27 05:19:17 ovm3 kernel: end_request: I/O error, dev sdd, sector 
> 1569784168 
> Aug 27 05:19:17 ovm3 kernel: sd 4:0:0:1: SCSI error: return code = 0x00020000 
> Aug 27 05:19:17 ovm3 kernel: end_request: I/O error, dev sdd, sector 
> 1569785192 
> Aug 27 05:19:17 ovm3 kernel: sd 4:0:0:1: SCSI error: return code = 0x00020000 
> Aug 27 05:19:17 ovm3 kernel: end_request: I/O error, dev sdd, sector 
> 1569786216 
> Aug 27 05:19:17 ovm3 kernel: sd 4:0:0:1: SCSI error: return code = 0x00020000 
> Aug 27 05:19:17 ovm3 kernel: end_request: I/O error, dev sdd, sector 
> 1569786224 
> Aug 27 05:19:17 ovm3 kernel: sd 4:0:0:1: SCSI error: return code = 0x00020000 
> Aug 27 05:19:17 ovm3 kernel: end_request: I/O error, dev sdd, sector 
> 1569787248 
> Aug 27 05:19:17 ovm3 iscsid: Kernel reported iSCSI connection 1:0 error 
> (1011) state (3) 
> Aug 27 05:19:17 ovm3 kernel: sd 4:0:0:1: SCSI error: return code = 0x00020000 
> Aug 27 05:19:17 ovm3 kernel: end_request: I/O error, dev sdd, sector 
> 1569788272 
> 
> 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
