Re: [Xen-devel] n/w performance degradation
> > Ah, I expect I know what's going on here.
> >
> > Linux sizes the default socket buffers based on how much 'system'
> > memory it has.
> >
> > With a 128MB domU it probably defaults to just 64KB. For 256MB it
> > probably steps up to 128KB. You can verify this by setting
> > /proc/sys/net/core/{r,w}mem_{max,default}.
>
> For TCP sockets, don't forget you'll also have to bump up
> net/ipv4/tcp_rmem[1,2] and net/ipv4/tcp_wmem[1,2].
>
> For low-memory systems, the default size of the TCP read buffer
> (tcp_rmem[1]) is 43689, and the max size (tcp_rmem[2]) is
> 2*43689, which is really too low for heavy network lifting.
Just as an aside, I wanted to point out that my dom0s were running with
the exact same configuration (memory, socket sizes) as the VM, and I
can mostly saturate a gig link from dom0. So while socket sizes may
certainly have an impact, there are still additional bottlenecks that
need to be tracked down and tuned.
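(If anyone wants to reproduce this kind of measurement, a plain
bulk-transfer tool such as iperf works: run "iperf -s" on the receiver
and "iperf -c <receiver>" on the sender.)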
--
Web/Blog/Gallery: http://floatingsun.net