Tim Post wrote:
>>> And a handy place to stick SNORT and others. I've tried this kind of
>>> setup but it's been 'choppy' at best. I'm also rather new to ebtables,
>>> I'm assuming you would use ebtables to craft this; do you have some
>>> scripts you'd like to share?
>> Why choppy? It works on my side. BTW, ebtables is not needed to achieve
>> traffic shaping: you can stick your tc rules inside Dom0, on the
>> vif interfaces of the gateway domain. It is nothing more than tc magic ;-)
>>
>
> ebtables + tc seemed to produce the best results for others,
> according to the research I did, so that's the route I've been taking.
>
> When I/O on the guests is normal/nominal there is no degradation of
> network performance; however, when they really begin accessing and
> working their VBDs, networking gets .. 'choppy', for lack of a simpler
> description. This is not the case when I use plain old bridged
> networking.
I have done a quick test on my home system, using the gateway domain and
one additional, traffic-shaped domU.
I generated I/O load inside the domU and the gateway domain:
# while true; do dd if=/dev/urandom of=file.img bs=1M count=100; done
With that, I cannot reproduce your problem. Traffic shaping works fine:
the throughput jumps around the desired rate when measured over a 1s
interval, but hits the desired rate quite exactly when measured over a
10s interval.
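In case you want to reproduce the measurement, here is a minimal sketch
that samples the byte counter of the shaped interface once per second
(vif1.0 is only an example; change sleep 1 to sleep 10 for the 10s
interval):

IF=vif1.0
PREV=$(cat /sys/class/net/$IF/statistics/tx_bytes)
while sleep 1; do
    CUR=$(cat /sys/class/net/$IF/statistics/tx_bytes)
    # bytes per second -> kbit per second
    echo "$(( (CUR - PREV) * 8 / 1000 )) kbit/s"
    PREV=$CUR
done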
Ping reply latency goes up when the load is generated in Dom0 or the
gateway domain. However, the reason for that is probably the CPU
utilization inside Dom0, resp. xengate.
When the I/O load is generated inside the domU, no noticeable change in
ping reply latency could be observed.
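A plain ping against the shaped domU is enough to watch this; the target
address below is only an example:

ping -c 60 10.0.0.2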
I used Xen 3.0.4 for the tests.
Have you set the mtu/burst/cburst parameters on your tc rules? They
are necessary because of a giant-packet problem I stumbled across some
time ago.
E.g.:
tc class add dev eth0 parent 1:0 \
    classid 1:101 htb \
    rate 1000kbit mtu 16000 cburst 16000 burst 16000
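For completeness, a minimal sketch of the surrounding setup on a vif
interface (vif1.0, the handle and the classid are only illustrative
values, not taken from a real config):

# root HTB qdisc; unclassified traffic falls into class 1:101
tc qdisc add dev vif1.0 root handle 1: htb default 101
# shaping class with mtu/burst/cburst raised to cover giant packets
tc class add dev vif1.0 parent 1:0 classid 1:101 htb \
    rate 1000kbit mtu 16000 burst 16000 cburst 16000

With "default 101" every packet ends up in the shaped class, so no tc
filter is needed for this simple case.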
Or do you use HVM domains? HVM domains are quite unusable in their
current state because of the heavy I/O overhead of the qemu drivers.
Greetings,
-timo
--
Timo Benk - Jabber ID: fry@xxxxxxxxxxxx - ICQ ID: #414944731
PGP Public Key: http://m28s01.vlinux.de/timo_benk_gpg_key.asc