RE: [Xen-devel] vif interface dropping 80% packets
You are probably using jumbo packets for native, right?
What is the MTU when you get 9.88 Gb/s? I was thinking MTU=1500 when I
said unrealistic. Not sure why you cannot get a higher MTU to work in
Xen. I have not tried MTU > 1500 bytes in Xen myself, but I believe
Herbert Xu has done that successfully.
If you get a larger MTU to work, keep in mind that in Xen
there is an extra data copy to transfer each packet from dom0 to the guest,
which will have a higher impact on performance for larger MTUs when
compared to native Linux.
Renato
Most likely your dom0 CPU is saturated. It is
unrealistic to expect full throughput of a 10 Gig NIC. I would be surprised
if even Linux could keep up with a receive rate of 10 Gb/s.
Did you try the same experiment on native Linux? In my experiments, on
a 4-way 2.8 GHz Xeon, dom0 consumes ~75% of one CPU for processing receive
packets while the guest consumes ~55% of another
CPU.
With native->native or even native->dom0, I get
9.88 Gb/s with 21% utilization of the receive-side CPU. It's a nice machine
and we make nice NICs ;-) So, it's not unrealistic.
It is expected that your dual-CPU
system saturates at a rate slightly above 1 Gb/s. (You are also
consuming extra cycles to drop packets.)
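Just to illustrate the kind of estimate I mean, here is a sketch of the
arithmetic; every input below (cycles per byte, clock speed, CPU share)
is a made-up placeholder, not a measurement from this thread:

    # If receive processing costs roughly a fixed number of CPU cycles
    # per byte, throughput is capped by the cycles the box can spend on
    # it.  All three inputs are hypothetical placeholders.
    CYCLES_PER_BYTE = 10.0   # assumed cost of the dom0 + guest rx path
    CPU_HZ = 1.0e9           # assumed 1 GHz CPU clock
    CPU_SHARE = 1.3          # e.g. ~130% of a CPU spent on rx processing

    max_bytes_per_sec = CPU_SHARE * CPU_HZ / CYCLES_PER_BYTE
    print(f"Estimated ceiling: ~{max_bytes_per_sec * 8 / 1e9:.2f} Gb/s")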
You could confirm that the CPU is
saturating by running xentop alongside your
experiments.
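For example, something like the following pulls the per-domain CPU
numbers out of xentop's batch mode (assuming your xentop supports
-b/-i/-d; the column layout can differ between Xen versions, so treat
the parsing as a sketch rather than something to rely on):

    # Quick check of per-domain CPU usage via xentop batch mode.
    # Two iterations are requested because CPU(%) needs two samples;
    # the first iteration's percentages are often zero.
    import subprocess

    out = subprocess.run(["xentop", "-b", "-i", "2", "-d", "1"],
                         capture_output=True, text=True, check=True).stdout

    for line in out.splitlines():
        fields = line.split()
        if not fields or fields[0] == "NAME":
            continue  # skip header rows
        name, cpu_pct = fields[0], fields[3]  # CPU(%) assumed to be column 4
        print(f"{name:>12}: {cpu_pct}% CPU")

If dom0 shows close to 100% of a CPU while the drops occur, that would
confirm the saturation.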
You could avoid dropping packets and
get a more realistic experiment by running a TCP test, as Ian
suggested.
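For what it's worth, a minimal TCP throughput test can be as small as
the sketch below (port, duration, and buffer size are arbitrary choices;
normally you would reach for a tool like iperf, and a pure-Python sender
will itself become a bottleneck well before 10 Gb/s):

    # Minimal TCP throughput test: run "recv" in the guest, "send" on
    # the other host.  Port, duration, and buffer size are arbitrary.
    import socket, sys, time

    PORT, DURATION, BUFSIZE = 5001, 10, 256 * 1024

    def receiver():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(BUFSIZE)
            if not data:
                break
            total += len(data)
        elapsed = time.time() - start
        print(f"received {total * 8 / elapsed / 1e9:.2f} Gb/s over {elapsed:.1f}s")

    def sender(host):
        sock = socket.create_connection((host, PORT))
        buf = b"\x00" * BUFSIZE
        end = time.time() + DURATION
        while time.time() < end:
            sock.sendall(buf)
        sock.close()

    if __name__ == "__main__":
        # usage: python tcptest.py recv            (on the receiving guest)
        #        python tcptest.py send <host-ip>  (on the sending host)
        if sys.argv[1] == "recv":
            receiver()
        else:
            sender(sys.argv[2])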
I have a pending note on that. A TCP test is actually
what I tried first, and the performance was terrible due to the same packet
loss. More info to come.
-reese
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel