[Xen-devel] Re: userspace block backend / gntdev problems
Hi Gerd,
On 4 Jan 2008, at 13:48, Gerd Hoffmann wrote:
The first problem is the fixed limit of 128 slots. The frontend submits up to 32 requests, with up to 11 grants each. Together with the shared ring this sums up to 353 grants per block device. When blkbackd is running in aio mode, many requests are in flight at the same time, and thus many grants are mapped at the same time, so the 128 limit is easily reached. I don't even need to stress the disk with bonnie or something; just booting the virtual machine is enough. Any chance of replacing the fixed-size array with a list to remove that hard-coded limit? Or at least raising the limit to -- say -- 1024 grants?
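
For reference, the arithmetic behind the 353-grant figure, as a minimal C sketch (the macro names are illustrative; the per-request segment count of 11 corresponds to BLKIF_MAX_SEGMENTS_PER_REQUEST in the blkif protocol):

#define RING_PAGES        1   /* one grant for the shared ring     */
#define MAX_REQUESTS     32   /* requests the frontend may submit  */
#define SEGS_PER_REQUEST 11   /* grants (segments) per request     */

/* 32 * 11 + 1 = 353 grants per block device, well past gntdev's
 * fixed 128-slot array. */
#define GRANTS_PER_VBD (MAX_REQUESTS * SEGS_PER_REQUEST + RING_PAGES)
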
The 128-grant limit is fairly arbitrary, and I wanted to see how
people were using gntdev before changing this. The reason for using a
fixed-size array is that it gives us O(1)-time mapping and unmapping
of single grants, which I anticipated would be the most frequently-
used case. I'll prepare a patch that enables the configuration of
this limit when the device is opened.
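
To sketch what such an open-time configuration might look like (purely hypothetical: the ioctl name, struct, and request number below did not exist at the time of writing and are illustrative only):

#include <stdint.h>
#include <sys/ioctl.h>

struct ioctl_gntdev_set_max_grants {
    uint32_t count;   /* requested size of the grant slot array */
};
/* Hypothetical ioctl number; 'G' and 3 are placeholders. */
#define IOCTL_GNTDEV_SET_MAX_GRANTS \
    _IOC(_IOC_NONE, 'G', 3, sizeof(struct ioctl_gntdev_set_max_grants))

/* Would be called once on a freshly opened /dev/gntdev fd, before
 * any grants are mapped through it. */
static int gntdev_set_max_grants(int fd, uint32_t count)
{
    struct ioctl_gntdev_set_max_grants op = { .count = count };
    return ioctl(fd, IOCTL_GNTDEV_SET_MAX_GRANTS, &op);
}
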
The second problem is that batched grant mappings (using xc_gnttab_map_grant_refs) don't work reliably. The symptoms I see are random failures with ENOMEM for no obvious reason (the 128-grant limit is *far* away).
If it's failing with ENOMEM, a possible reason is that the address
space for mapping grants within gntdev (the array I mentioned above)
is becoming fragmented. Are you combining the mapping of single
grants and batches within the same gntdev instance? A possible
workaround would be to use separate gntdev instances for mapping the
single grants, and for mapping the batches. That way, the
fragmentation should not occur if the batches are all of the same size.
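
As a minimal sketch of that workaround, assuming the libxc grant-table calls of this era (xc_gnttab_open, xc_gnttab_map_grant_ref and xc_gnttab_map_grant_refs; error handling omitted for brevity):

#include <stdint.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Keep single-grant maps and fixed-size batches on separate gntdev
 * instances so that neither instance's slot array fragments. */
void *map_ring_and_segs(uint32_t domid, uint32_t ring_ref,
                        uint32_t nr_segs, uint32_t *seg_domids,
                        uint32_t *seg_refs, void **segs_out)
{
    int h_single = xc_gnttab_open();  /* instance for single grants */
    int h_batch  = xc_gnttab_open();  /* instance for batches only  */

    /* The shared ring is a single grant: map it via the first handle. */
    void *ring = xc_gnttab_map_grant_ref(h_single, domid, ring_ref,
                                         PROT_READ | PROT_WRITE);

    /* Data segments arrive in same-sized batches: map them via the
     * second handle, so a freed batch always leaves a hole that the
     * next batch of the same size can reuse. */
    *segs_out = xc_gnttab_map_grant_refs(h_batch, nr_segs, seg_domids,
                                         seg_refs,
                                         PROT_READ | PROT_WRITE);
    return ring;
}
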
I also see host kernel crashes (kernel 2.6.21-2952.fc8xen).
When does this happen? Could you post the kernel OOPS?
When using xc_gnttab_map_grant_ref only (no batching) and limiting the number of requests in flight to 8 (8 * 11 + 1 = 89 grants, so we stay below the 128-grant limit), everything works nicely though.
That's good to know, thanks!
Regards,
Derek Murray.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel