Subject: Re: [Xen-devel] MPI benchmark performance gap between native linux and domU
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Tue, 5 Apr 2005 16:47:00 +0100
To: "Santos, Jose Renato G (Jose Renato Santos)" <joserenato.santos@xxxxxx>
Cc: "Turner, Yoshio" <yoshio_turner@xxxxxx>, Xen-devel@xxxxxxxxxxxxxxxxxxx, Aravind Menon <aravind.menon@xxxxxxx>, xuehai zhang <hai@xxxxxxxxxxxxxxx>, G John Janakiraman <john@xxxxxxxxxxxxxxxxxxx>

On 5 Apr 2005, at 16:23, Santos, Jose Renato G (Jose Renato Santos) wrote:

> In which version was the 'truesize' field changed to report less than a
> page? We were using 2.0.3 when we found this problem.
>
> I agree this trick will prevent the early overflow of the receive
> buffer. However, I am wondering whether there are other side effects of
> lying to the kernel about the true size of the buffer. Would bad things
> happen if the kernel believes it is using less memory than it really
> is? For example, could the kernel exhaust memory for a
> network-intensive application with a large number of open connections?
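[Editor's note: for context, the mechanism under discussion is the kernel
charging each received skb's 'truesize' (payload plus allocation overhead)
against the socket's receive buffer limit. The sketch below is a simplified
illustration with made-up names, not the real kernel functions; it shows why a
page-sized truesize makes the buffer look full early, and why under-reporting
it avoids that.]

/*
 * Simplified sketch of socket receive-buffer accounting.
 * Names and structures are illustrative only.
 */
struct sk_buff_sketch {
    unsigned int len;        /* bytes of actual packet payload        */
    unsigned int truesize;   /* total memory attributed to this skb   */
};

struct sock_sketch {
    unsigned int rmem_alloc; /* memory currently charged to the socket */
    unsigned int rcvbuf;     /* receive buffer limit (e.g. ~64 KB)     */
};

/* Returns 0 if the skb fits, -1 if the receive buffer would overflow.
 * Note that the check uses truesize, not len: if every ~1500-byte packet
 * is charged a full 4 KB page, the limit is hit after far fewer packets. */
static int queue_rcv_skb_sketch(struct sock_sketch *sk,
                                struct sk_buff_sketch *skb)
{
    if (sk->rmem_alloc + skb->truesize > sk->rcvbuf)
        return -1;            /* dropped: buffer considered "full" early */
    sk->rmem_alloc += skb->truesize;
    return 0;
}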
I guess it would be easier to provoke trouble, but in any case the
default advertised window and socket buffer allocation are not affected
dynamically by system-wide memory pressure. Per-sockbuf limits are set
to a 'suitable default' at boot time according to the amount of RAM
detected, but after that they have to be manually reset by the user.
So I don't think we are breaking any carefully-tuned,
dynamically-balanced memory allocation algorithms here. :-)
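[Editor's note: the "manually reset by the user" above is typically done
per-socket with SO_RCVBUF, or system-wide via the net.core.rmem_default /
net.ipv4.tcp_rmem sysctls. The snippet below is a hedged illustration of the
per-socket route; the 256 KB value is an arbitrary example, not a
recommendation.]

#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int rcvbuf = 256 * 1024;          /* request a 256 KB receive buffer */
    socklen_t len = sizeof(rcvbuf);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt");

    /* Read back the effective value (the kernel may double the request
     * to account for bookkeeping overhead). */
    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    printf("effective SO_RCVBUF: %d bytes\n", rcvbuf);
    return 0;
}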
By setting the true size (4kB) we are far more likely to throw network
performance off, as the TCP stack will not have been tuned with such
large packet overheads in mind.
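[Editor's note: rough arithmetic to illustrate the overhead Keir is pointing
at. The 64 KB buffer size and the 2 KB "typical" truesize for a 1500-byte skb
are assumptions chosen for illustration, not measured values.]

#include <stdio.h>

int main(void)
{
    const unsigned rcvbuf  = 64 * 1024; /* assumed per-socket receive buffer    */
    const unsigned payload = 1500;      /* typical Ethernet MTU payload         */
    const unsigned page    = 4096;      /* truesize if a full page is charged   */
    const unsigned typical = 2048;      /* assumed truesize of a normal skb     */

    /* With page-sized truesize: 16 packets (~24 KB of payload) fill the buffer.
     * With typical truesize:    32 packets (~48 KB of payload) fit instead.   */
    printf("packets before overflow, page-sized truesize: %u (%u payload bytes)\n",
           rcvbuf / page, (rcvbuf / page) * payload);
    printf("packets before overflow, typical truesize:    %u (%u payload bytes)\n",
           rcvbuf / typical, (rcvbuf / typical) * payload);
    return 0;
}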
-- Keir