
[Xen-devel] Re: blktap: Sync with XCP, dropping zero-copy.

To: Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Subject: [Xen-devel] Re: blktap: Sync with XCP, dropping zero-copy.
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Wed, 17 Nov 2010 10:00:51 -0800
Cc: "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <1289942932.11102.802.camel@xxxxxxxxxxxxxxxxxxxxxxx>
References: <1289604707-13378-1-git-send-email-daniel.stodden@xxxxxxxxxx> <4CDDE0DA.2070303@xxxxxxxx> <1289620544.11102.373.camel@xxxxxxxxxxxxxxxxxxxxxxx> <4CE17B80.7080606@xxxxxxxx> <1289898792.23890.214.camel@ramone> <4CE2C5B1.1050806@xxxxxxxx> <1289942932.11102.802.camel@xxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.12) Gecko/20101027 Fedora/3.1.6-1.fc13 Lightning/1.0b3pre Thunderbird/3.1.6
On 11/16/2010 01:28 PM, Daniel Stodden wrote:
>> What's the problem?  If you do nothing then it will appear to the kernel
>> as a bunch of processes doing memory allocations, and they'll get
>> blocked/rate-limited accordingly if memory is getting short.  
> The problem is that just letting the page allocator work through
> allocations isn't going to scale.
>
> The worst case memory requested under load is <number-of-disks> * (32 *
> 11 pages). As a (conservative) rule of thumb, the disk count N will be
> 200 or more.
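
(For scale: with N = 200 disks, that worst case is 200 * 32 * 11 =
70,400 pages, i.e. roughly 275 MiB, assuming 4 KiB pages.)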

Under what circumstances would you end up needing to allocate that many
pages?

> The number of I/O actually in-flight at any point, in contrast, is
> derived from the queue/sg sizes of the physical device. For a simple
> disk, that's about a ring or two.

Wouldn't that be the worst case?

>> There's
>> plenty of existing mechanisms to control that sort of thing (cgroups,
>> etc) without adding anything new to the kernel.  Or are you talking
>> about something other than simple memory pressure?
>>
>> And there's plenty of existing IPC mechanisms if you want them to
>> explicitly coordinate with each other, but I'd tend to think that's
>> premature unless you have something specific in mind.
>>
>>> Also, I was absolutely certain I once saw VM_FOREIGN support in gntdev..
>>> Can't find it now, what happened? Without, there's presently still no
>>> zero-copy.
>> gntdev doesn't need VM_FOREIGN any more - it uses the (relatively
>> new-ish) mmu notifier infrastructure which is intended to allow a device
>> to sync an external MMU with usermode mappings.  We're not using it in
>> precisely that way, but it allows us to wrangle grant mappings before
>> the generic code tries to do normal pte ops on them.
> The mmu notifiers were for safe teardown only. They are not sufficient
> for DIO, which needs gup() (get_user_pages()) to work. If you want
> zero-copy on gntdev, we'll need to back those VMAs with page structs.
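
(For concreteness, the notifier-based teardown gntdev relies on has
roughly the following shape; the names are illustrative, not the actual
gntdev code:)

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

struct gnt_priv {
        struct mmu_notifier mn;
        /* ... grant-mapping bookkeeping ... */
};

/* Called before the core mm invalidates [start, end): unmap any grant
 * mappings in that range first, so generic pte ops never touch a
 * foreign mapping. */
static void gnt_invalidate_range_start(struct mmu_notifier *mn,
                                       struct mm_struct *mm,
                                       unsigned long start,
                                       unsigned long end)
{
        /* issue GNTTABOP_unmap_grant_ref for every granted pte here */
}

static const struct mmu_notifier_ops gnt_mn_ops = {
        .invalidate_range_start = gnt_invalidate_range_start,
};

/* At mmap time:
 *        priv->mn.ops = &gnt_mn_ops;
 *        mmu_notifier_register(&priv->mn, vma->vm_mm);
 */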

The pages will have struct page, because they're normal kernel pages
which happen to be backed by mapped granted pages.  Are you talking
about the #ifdef CONFIG_XEN code in the middle of __get_user_pages()? 
Isn't that just there to cope with the nested-IO-on-the-same-page
problem that the current blktap architecture provokes?  If there's only
a single IO on each page - the one initiated by usermode - then it
shouldn't be necessary, right?
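
(For reference, the direct-I/O path pins user pages roughly as below,
using the 2.6.3x-era get_user_pages() signature; the wrapper itself is
illustrative:)

#include <linux/mm.h>
#include <linux/sched.h>

/* Pin nr_pages of a user buffer for DIO.  This succeeds only when the
 * VMA is backed by normal struct pages, which is the crux of the
 * zero-copy question above. */
static int pin_user_buffer(unsigned long uaddr, int nr_pages,
                           struct page **pages)
{
        int ret;

        down_read(&current->mm->mmap_sem);
        ret = get_user_pages(current, current->mm, uaddr & PAGE_MASK,
                             nr_pages, 1 /* write */, 0 /* force */,
                             pages, NULL);
        up_read(&current->mm->mmap_sem);
        return ret;
}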

>   Or bounce again (gulp, just
> mentioning it). As with the blktap2 patches, note there is no difference
> in the dom0 memory bill; either way it takes page frames.

(And perhaps actual pages to substitute for the granted pages.)

> I guess we've been meaning the same thing here, unless I'm
> misunderstanding you. Any pfn will do, and the balloon pagevec
> allocations indeed default to order-0 entries. Sorry, you're right,
> that's not a 'range'. With a pending re-xmit, the backend can find that
> some (or all) of the request frames have count > 1. It can flip and
> abandon those as normal memory. But it will need those lost memory
> slots back, straight away or the next time it runs out of frames. As
> order-0 allocations.

Right.  GFP_KERNEL order-0 allocations are pretty reliable; they only
fail if the system is under extreme memory pressure.  They also have the
nice property that if the allocations block or fail, they rate-limit IO
ingress from domains rather than letting the backend be crushed by
memory pressure (ie, the classic problem of allocating memory in the
writeout path).
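
(A minimal sketch of such a refill path, with a hypothetical pool of
frame slots; blocking inside alloc_page() is exactly what provides the
back-pressure:)

#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* Refill nr empty slots with order-0 pages.  GFP_KERNEL may sleep, so
 * a pool running dry throttles the datapath rather than crashing it;
 * NULL only comes back under extreme pressure. */
static int pool_refill(struct page **slots, int nr)
{
        int i;

        for (i = 0; i < nr; i++) {
                slots[i] = alloc_page(GFP_KERNEL);
                if (!slots[i])
                        return -ENOMEM;
        }
        return 0;
}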

Also, the cgroup mechanism looks like an extremely powerful way to
control the allocations of a process or group of processes and stop
them from dominating the whole machine.
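
(Illustrative only, assuming a v1 memory controller already mounted and
a group directory created for the tapdisk processes; the paths here are
hypothetical:)

#include <stdio.h>

/* Cap a process's memory via the (v1) memory cgroup: write the limit,
 * then move the pid into the group's tasks file. */
static int cap_memory(const char *cgdir, long limit_bytes, int pid)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "%s/memory.limit_in_bytes", cgdir);
        if (!(f = fopen(path, "w")))
                return -1;
        fprintf(f, "%ld\n", limit_bytes);
        fclose(f);

        snprintf(path, sizeof(path), "%s/tasks", cgdir);
        if (!(f = fopen(path, "w")))
                return -1;
        fprintf(f, "%d\n", pid);
        fclose(f);
        return 0;
}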

    J

