WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] Re: blkback/2.6.38: Use 'vzalloc' for page arrays ...

To: Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Subject: [Xen-devel] Re: blkback/2.6.38: Use 'vzalloc' for page arrays ...
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Sat, 12 Mar 2011 17:30:23 -0500
Cc: konrad@xxxxxxxxxx, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sat, 12 Mar 2011 14:31:21 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1299898639.11681.227.camel@xxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1299898639.11681.227.camel@xxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.18 (2008-05-17)
On Fri, Mar 11, 2011 at 06:57:19PM -0800, Daniel Stodden wrote:
> commit ef19ebb7c4fe3e647cbc8d5bd6601a27bc6ab408
> Author: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> Date:   Tue Mar 1 16:26:10 2011 -0500
>     Previously we would allocate the array for pages using 'kmalloc',
>     which we can just as easily do with 'vzalloc'.
> 
> Sure? Vmalloc goes in numbers of pages. The overhead from that switch is
> not huge (except for that xen_blkbk allocation), but the default vector
> sizes don't justify vm area construction work either.
> 
> Mind if I push back with a couple of kmallocs?

Not at all! This is great - I haven't yet completely gone through the commits
(there is a TODO in one of them - for the flush/barrier stuff), and you
are already reviewing it. Thank you!

Will definitely update it to use kmallocs.

> 
>     The pre-allocation of pages
>     was done a bit differently in the past - it used to be that
>     the balloon driver would export "alloc_empty_pages_and_pagevec",
>     which would have, in one function, created an array, allocated
>     the pages, ballooned the pages out (so the memory behind those
>     pages would be non-present), and provided us those pages.
> 
> -       for (i = 0; i < mmap_pages; i++)
> +       for (i = 0; i < mmap_pages; i++) {
>                 blkbk->pending_grant_handles[i] = BLKBACK_INVALID_HANDLE;
> -
> +               blkbk->pending_pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);
> 
> This is broken if CONFIG_HIGHMEM is actually set, because the current
> code won't bother mapping that page:

Ooops.
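[Editorial sketch, not from the thread.] One straightforward way out of the highmem problem Daniel describes is to drop __GFP_HIGHMEM, so the page always lands in lowmem where it has a permanent kernel mapping. Kernel-style fragment, not a tested or stand-alone-runnable patch:

    for (i = 0; i < mmap_pages; i++) {
            blkbk->pending_grant_handles[i] = BLKBACK_INVALID_HANDLE;
            /* GFP_KERNEL without __GFP_HIGHMEM: the page is always
             * kernel-mapped, so nothing needs to kmap() it before the
             * grant machinery touches the frame. */
            blkbk->pending_pages[i] = alloc_page(GFP_KERNEL);
            if (!blkbk->pending_pages[i])
                    goto out_of_memory;
    }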
> 
> (XEN) mm.c:3795:d0 Could not find L1 PTE for address 8fb51000
> 
> Second, the memory overhead of not ballooning that frame out anymore is
> admittedly not gigantic, but doesn't look so sweet either.
> 
> Why are we doing this? We still need a page struct for starting I/O on
> that foreign frame. The new M2P path won't touch those matters, unless
> I've been missing some important pieces. It only covers our way back
> from ptes to the mfn.
> 
>     This was OK as those pages were shared with the other guest and
>     the only thing we needed was to "swizzle" the MFN of those pages
>     to point to the other guest's MFN. We can still "swizzle" the MFNs
>     using the M2P (and P2M) override API calls, but for the sake of
>     simplicity we are dropping the balloon API calls. We can return
>     to those later on.
> 
> So it's just transient for balloon.c maintenance? If so, the old
> get_empty_pages_and_pagevec always carried a and_pagevec too many. :o)
> Should just take a caller-side vector, so coming up with a new call
> would actually be a nice opportunity imho.

<nods> Right.

I didn't want to do that until the patches from Daniel Kiper and David de Graaf
were completed - as they do some surgery on the balloon code.

Perhaps later on when all of that settles.
> 
> Cheers,
> Daniel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
