To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>, <eak@xxxxxxxxxx>
Subject: RE: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)
From: "Krysan, Susan" <KRYSANS@xxxxxxxxxx>
Date: Fri, 7 Dec 2007 07:20:59 -0600
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
I tested this changeset on a Unisys ES7000 with 256G RAM and 64 processors, and it works:

xentop - 06:30:59   Xen 3.2-unstable
1 domains: 1 running, 0 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 268172340k total, 7669456k used, 260502884k free    CPUs: 64 @ 3400MHz
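
(As a quick check on the total: 268172340 KiB / 2^20 ≈ 255.7 GiB, so Xen sees essentially all of the machine's 256G.)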

I will be running our full test suite on this configuration today.

Thanks,
Sue Krysan
Linux Systems Group
Unisys Corporation
 

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Keir Fraser
Sent: Thursday, December 06, 2007 8:40 AM
To: eak@xxxxxxxxxx
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: eliminating 166G limit (was Re: [Xen-devel] Problem with nr_nodes on large memory NUMA machine)

Try xen-unstable changeset 16548.
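
(For example, in a clone of the main xen-unstable.hg tree, where the local revision numbers match upstream: hg pull && hg update -r 16548.)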

 -- Keir

On 3/12/07 19:49, "beth kon" <eak@xxxxxxxxxx> wrote:

> Has there been any more thought on this subject? The discussion seems
> to have stalled, and we're hoping to find a way past this 166G limit...
>
> Jan Beulich wrote:
> 
>>>>> Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> 27.11.07 10:21 >>>
>>> On 27/11/07 09:00, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>>>
>>>>> I don't get how your netback approach works. The pages we transfer
>>>>> do not originate from netback, so it has little control over them.
>>>>> And, even if it did, when we allocate pages for network receive we
>>>>> do not know which domain's packet will end up in each buffer.
>>>>>
>>>> Oh, right, I mixed up old_mfn and new_mfn in netbk_gop_frag().
>>>> Nevertheless netback could take care of this by doing the copying
>>>> there, as at that point it already knows the destination domain.
>>>>
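
For illustration, a minimal C sketch of that idea (all names here are invented; this is not the actual netback code, just the shape of "copy once the destination domain is known"):

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u

/*
 * Hypothetical sketch of the copy-in-netback idea: once the
 * destination domain is known, a received frag that lives in a
 * machine frame above that domain's reachable limit is bounced
 * through a freshly allocated low frame instead of being handed
 * over directly.
 */
static void *copy_if_unreachable(void *frag, uint64_t frag_mfn,
                                 uint64_t dest_max_mfn,
                                 void *(*alloc_page_below)(uint64_t))
{
    if (frag_mfn <= dest_max_mfn)
        return frag;                   /* frame is directly reachable */

    void *low = alloc_page_below(dest_max_mfn);
    if (low != NULL)
        memcpy(low, frag, PAGE_SIZE);  /* bounce via a low page */
    return low;                        /* NULL => allocation failed */
}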
>>> You may not know constraints on that domain's max_mfn though. We
>>> could add an interface to Xen to interrogate that, but generally it's
>>> not something we probably want to expose outside of Xen and the guest
>>> itself.
>>>
>>
>> What constraints other than the guest's address size influence its
>> max_mfn? Of course, if there's anything beyond the address size, then
>> having a way to obtain the constraint explicitly would be desirable.
>> But otherwise (and as a fallback) using 37 bits (128G) seems quite
>> reasonable.
>>
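
For reference, the arithmetic behind the 128G figure, assuming the usual 4 KiB pages: 2^37 bytes = 128 GiB, i.e. a fallback max_mfn of 2^37 / 2^12 = 2^25 machine frames.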
>>>>> Personally I think doing it in Xen is perfectly good enough for
>>>>> supporting this very out-of-date network receive mechanism.
>>>>>
>>>> I'm not just concerned about netback here. The interface exists,
>>>> and other users might show up and/or exist already. Whether it would
>>>> be acceptable for them to do allocation and copying is unknown.
>>>> You'd therefore either need a way to prevent future users of the
>>>> transfer mechanism, or set proper requirements on its use. I think
>>>> that placing extra requirements on the user of the interface is
>>>> better than introducing extra (possibly hard to
>>>> reproduce/recognize/debug) possibilities of failure.
>>>>
>>> The interface is obsolete.
>>>
>> 
>> Then it should be clearly indicated as such, e.g. by a mechanism
>> similar to deprecated_irq_flag() in Linux 2.6.22. And as a result, its
>> use in netback should then probably be conditional upon an extra
>> config option, which could at once be used to provide a note to Xen
>> that the feature isn't being used, so that the function could return
>> -ENOSYS and the clipping could be avoided/reverted.
>>
>> Jan
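
For illustration, a minimal C sketch of the gating Jan describes (the config option and function name are invented; this is not actual Xen code):

#include <errno.h>

/*
 * Hypothetical sketch: the (obsolete) page-transfer path is compiled
 * behind a config option. When it is compiled out, the handler
 * reports -ENOSYS, signalling that the feature is unused so the MFN
 * clipping done on its behalf could be avoided or reverted.
 * CONFIG_NET_PAGE_TRANSFER and transfer_page() are invented names.
 */
static long transfer_page(unsigned long mfn, unsigned short domid)
{
#ifndef CONFIG_NET_PAGE_TRANSFER
    (void)mfn;
    (void)domid;
    return -ENOSYS;   /* mechanism not configured in */
#else
    /* the real transfer path would go here */
    return 0;
#endif
}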
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
