xen-devel

[Xen-devel] userspace block backend / gntdev problems

To: Derek Murray <Derek.Murray@xxxxxxxxxxxx>
Subject: [Xen-devel] userspace block backend / gntdev problems
From: Gerd Hoffmann <kraxel@xxxxxxxxxx>
Date: Fri, 04 Jan 2008 14:48:21 +0100
Cc: Xen Development Mailing List <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 04 Jan 2008 05:49:02 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.9 (X11/20071115)
  Hi,

I'm running into trouble over and over again with my userspace block
backend daemon (blkbackd) developed as part of the xenner project.

The first problem is the fixed limit of 128 grant slots in gntdev.  The
frontend submits up to 32 requests, with up to 11 grants each.  Together
with the shared ring this sums up to 353 grants per block device.  When
blkbackd is running in aio mode, many requests are in flight at the same
time and thus many grants are mapped at the same time, so the 128 limit
is easily reached.  I don't even need to stress the disk with bonnie or
the like; just booting the virtual machine is enough.  Any chance to
replace the fixed-size array with a list to remove that hard-coded
limit?  Or at least to raise the limit to -- say -- 1024 grants?
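
Just to spell out the arithmetic, here is where the 353 comes from (a
sketch only, not blkbackd code; BLKIF_MAX_SEGMENTS_PER_REQUEST is the
constant from the public blkif header, and the 32 ring entries assume a
single-page shared ring):

  #include <xen/io/blkif.h>   /* BLKIF_MAX_SEGMENTS_PER_REQUEST == 11 */

  #define BLKIF_RING_ENTRIES 32   /* requests a one-page ring can hold */

  /* 32 requests * 11 data grants + 1 grant for the shared ring = 353,
   * well above the current 128-slot gntdev limit. */
  #define GRANTS_PER_VBD \
      (BLKIF_RING_ENTRIES * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1)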

The second problem is that batched grant mappings (using
xc_gnttab_map_grant_refs) don't work reliably.  The symptoms I see are
random failures with ENOMEM for no obvious reason (the 128 grant limit
is *far* away), and also host kernel crashes (kernel
2.6.21-2952.fc8xen).
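
For reference, the failing path looks roughly like this (a simplified
sketch, not the actual blkbackd code; map_request_batched,
frontend_domid and the missing error handling are placeholders,
xcg_handle comes from xc_gnttab_open(), and the signatures are from
memory of the current libxc interface):

  #include <sys/mman.h>      /* PROT_READ, PROT_WRITE */
  #include <xenctrl.h>       /* xc_gnttab_map_grant_refs() */
  #include <xen/io/blkif.h>  /* blkif_request_t */

  /* Map all data segments of one blkif request with a single batched
   * xc_gnttab_map_grant_refs() call.  This is the variant that randomly
   * fails with ENOMEM or takes the host kernel down here. */
  static void *map_request_batched(int xcg_handle, uint32_t frontend_domid,
                                   blkif_request_t *req)
  {
      uint32_t domids[BLKIF_MAX_SEGMENTS_PER_REQUEST];
      uint32_t refs[BLKIF_MAX_SEGMENTS_PER_REQUEST];
      int i;

      for (i = 0; i < req->nr_segments; i++) {
          domids[i] = frontend_domid;
          refs[i]   = req->seg[i].gref;
      }
      return xc_gnttab_map_grant_refs(xcg_handle, req->nr_segments,
                                      domids, refs,
                                      PROT_READ | PROT_WRITE);
  }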

When using xc_gnttab_map_grant_ref only (no batching) and limiting the
number of requests in flight to 8 (so we stay below the 128 grant
limit), everything works nicely though.
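
The working variant, again as a simplified sketch under the same
assumptions as above; the caller is expected to throttle submissions to
MAX_INFLIGHT_REQUESTS, which keeps us at 8 * 11 + 1 = 89 grants, below
the 128 limit:

  /* Cap on requests mapped at the same time; 8 * 11 data grants plus
   * the shared ring is 89 grants, safely below the 128-slot limit. */
  #define MAX_INFLIGHT_REQUESTS 8

  /* Map each segment with its own xc_gnttab_map_grant_ref() call
   * instead of one batched call.  addrs[] must hold nr_segments
   * entries; on failure the caller unmaps whatever was already mapped. */
  static int map_request_unbatched(int xcg_handle, uint32_t frontend_domid,
                                   blkif_request_t *req, void **addrs)
  {
      int i;

      for (i = 0; i < req->nr_segments; i++) {
          addrs[i] = xc_gnttab_map_grant_ref(xcg_handle, frontend_domid,
                                             req->seg[i].gref,
                                             PROT_READ | PROT_WRITE);
          if (addrs[i] == NULL)
              return -1;
      }
      return 0;
  }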

cheers,
  Gerd

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel