RE: [Xen-devel] n/w performance degradation
> > For low mem systems, the default size of the tcp read buffer
> > (tcp_rmem[1]) is 43689, and the max size (tcp_rmem[2]) is 2*43689,
> > which is really too low to do network heavy lifting.
>
> Just as an aside, I wanted to point out that my dom0s were running
> with the exact same configuration (memory, socket sizes) as the VM,
> and I can mostly saturate a gig link from dom0. So while socket sizes
> might certainly have an impact, there are still additional bottlenecks
> that need to be fine-tuned.
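
(As an aside on the numbers quoted above: the receive-buffer limits
actually in effect can be read straight out of /proc. A minimal sketch,
assuming a stock Linux dom0/domU with the usual /proc/sys layout --
tcp_rmem holds three byte values, min/default/max:

    #!/usr/bin/env python3
    # Print the TCP receive-buffer autotuning limits currently in effect.
    # The 43689 / 2*43689 figures quoted above are the low-memory defaults.
    with open("/proc/sys/net/ipv4/tcp_rmem") as f:
        rmem_min, rmem_default, rmem_max = (int(v) for v in f.read().split())
    print(f"tcp_rmem: min={rmem_min} default={rmem_default} max={rmem_max}")

)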
Xen is certainly going to be more sensitive to small socket buffer sizes
when you're trying to run dom0 and the guest on the same CPU thread. If
you're running a single TCP connection, the socket buffer size basically
determines how frequently you're forced to switch between domains.
Switching every ~43KB at 1Gb/s amounts to thousands of domain switches a
second, which burns CPU. Doubling the socket buffer size halves the rate
of domain switches; under Xen that would be a more sensible default.
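
A rough back-of-the-envelope version of that arithmetic, assuming (as a
simplification) one dom0<->domU switch per socket buffer's worth of data
on a single TCP stream -- the real I/O path batches differently, so treat
the numbers as order-of-magnitude only:

    # Estimate domain switches per second for a single TCP stream.
    LINK_BPS = 1_000_000_000            # 1 Gb/s link
    BUF_BYTES = 43689                   # default tcp_rmem[1] on low-mem systems

    bytes_per_sec = LINK_BPS / 8                     # ~125 MB/s
    print(round(bytes_per_sec / BUF_BYTES))          # ~2861 switches/s
    print(round(bytes_per_sec / (2 * BUF_BYTES)))    # ~1431 with a doubled buffer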
Ian