xen-devel

RE: [Xen-devel] MPI benchmark performance gap between native linux anddomU

To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
Subject: RE: [Xen-devel] MPI benchmark performance gap between native linux anddomU
From: "Santos, Jose Renato G (Jose Renato Santos)" <joserenato.santos@xxxxxx>
Date: Tue, 5 Apr 2005 08:23:30 -0700
Cc: "Turner, Yoshio" <yoshio_turner@xxxxxx>, Xen-devel@xxxxxxxxxxxxxxxxxxx, Aravind Menon <aravind.menon@xxxxxxx>, xuehai zhang <hai@xxxxxxxxxxxxxxx>, G John Janakiraman <john@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 05 Apr 2005 15:23:33 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcU5vU5jbyzZ1hx8TQW/nkk0VU47awANJOhA
Thread-topic: [Xen-devel] MPI benchmark performance gap between native linux anddomU
  Keir,

  In which version was the 'truesize' field changed to report less than
a page?
  We were using 2.0.3 when we found this problem.
  I agree this trick will prevent the early overflow of the receive
buffer.
  However, I am wondering whether there are other side effects of lying
to the kernel about the true size of the buffer.
  Could bad things happen if the kernel believes it is using less memory
than it really is?
  For example, would it be possible for the kernel to exhaust memory for
a network-intensive application with a large number of open connections?
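
  (For concreteness, below is a minimal standalone model of the
accounting I am worried about. The field names are simplified stand-ins
for sk->sk_rmem_alloc, sk->sk_rcvbuf and skb->truesize; this is only an
illustration, not actual kernel code.)

/*
 * Toy model: the socket charges each received buffer by its reported
 * 'truesize', but the buffer may really be backed by a whole 4KB page.
 */
#include <stdio.h>

struct sock_model {
    unsigned int rmem_alloc;  /* bytes the kernel thinks are queued   */
    unsigned int rcvbuf;      /* accounting limit for the socket      */
    unsigned int real_bytes;  /* memory actually backing the buffers  */
};

/* Queue one buffer: charge 'truesize', but really consume 'alloc_size'. */
static int rx_skb(struct sock_model *sk, unsigned int truesize,
                  unsigned int alloc_size)
{
    if (sk->rmem_alloc + truesize > sk->rcvbuf)
        return -1;                       /* accounting limit reached */
    sk->rmem_alloc += truesize;
    sk->real_bytes += alloc_size;
    return 0;
}

int main(void)
{
    struct sock_model sk = { 0, 65536, 0 };

    /* Report ~1600 bytes per buffer while each really takes a 4KB page. */
    while (rx_skb(&sk, 1600, 4096) == 0)
        ;

    printf("accounted: %u bytes, actually used: %u bytes\n",
           sk.rmem_alloc, sk.real_bytes);
    return 0;
}

  With a 64KB rcvbuf this reports roughly 64000 accounted bytes backed by
163840 bytes of real memory per socket, which is the kind of gap I am
asking about when many connections are open.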

  Renato


>> -----Original Message-----
>> From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx] 
>> Sent: Tuesday, April 05, 2005 1:59 AM
>> To: Santos, Jose Renato G (Jose Renato Santos)
>> Cc: Aravind Menon; Turner, Yoshio; 
>> Xen-devel@xxxxxxxxxxxxxxxxxxx; xuehai zhang; G John Janakiraman
>> Subject: Re: [Xen-devel] MPI benchmark performance gap 
>> between native linux anddomU
>> 
>> 
>> 
>> On 5 Apr 2005, at 03:07, Santos, Jose Renato G (Jose Renato Santos) 
>> wrote:
>> 
>> >  Here is a brief explanation of the problem we found and the
>> > solution that worked for us.
>> >   Xenolinux allocates a full page (4KB) to store socket buffers
>> > instead of using just MTU bytes as in traditional linux. This is
>> > necessary to enable page exchanges between the guest and the I/O
>> > domains. The side effect of this is that memory space used for
>> > socket buffers is not very efficient.
>> 
>> This is true, but these days we lie to the network stack about how
>> big the skb data area is. The 'truesize' field, which is what I think
>> is used for socket buffer accounting, will be around 1600 bytes, not
>> 4096. So I would expect the old trick of reducing the receive windows
>> not to work: but if it does then that is very interesting!
>> 
>>   -- Keir
>> 
>> 
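
(Illustrative only: a rough sketch of the under-reporting described
above, assuming the receive buffer is backed by a whole page while
'truesize' is derived from an MTU-sized data area. The structure and
helper below are hypothetical stand-ins, not the actual netfront or skb
code.)

#include <stdio.h>

#define RX_PAGE_SIZE 4096u
#define MTU_BUFSIZE  1500u

struct fake_skb {
    unsigned int truesize;    /* what socket accounting will be told  */
    unsigned int page_bytes;  /* memory genuinely backing the buffer  */
};

/* Hypothetical receive-buffer allocation: page-backed, MTU-sized report. */
static void alloc_rx_skb(struct fake_skb *skb)
{
    skb->page_bytes = RX_PAGE_SIZE;                /* full page backs it */
    skb->truesize   = sizeof(*skb) + MTU_BUFSIZE;  /* ~1.5KB reported    */
}

int main(void)
{
    struct fake_skb skb;
    unsigned int rcvbuf = 65536;

    alloc_rx_skb(&skb);
    printf("reported truesize %u, real backing %u bytes\n",
           skb.truesize, skb.page_bytes);
    /* How many buffers fit under the accounting limit either way. */
    printf("buffers per %u-byte rcvbuf: %u honest vs %u as reported\n",
           rcvbuf, rcvbuf / skb.page_bytes, rcvbuf / skb.truesize);
    return 0;
}

With an honest truesize of a full page only 16 MTU-sized packets fit in
a 64KB receive buffer before the accounting limit is hit, versus about
43 with the smaller reported value, which is why the early-overflow
problem goes away.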

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
