Yes, the dropped packets are probably related to the OS socket buffer size, but
users can enlarge that via the proc interface.
Latency and event channel frequency are a tradeoff, which is why we provide the
coalesce interface in the second solution, so users can balance the two
themselves.
I think the large chunks of packets you received are caused by the
low-resolution timer in Windows; 10ms is really too long for a timer slot.
The tunable figures you mentioned, for example the maximum time, are set by the
standard coalesce interface in netback. For the maximum number of packets to
notify at, we calculate the number of packets in the ring, and if it exceeds
half the ring size, netback issues a notification. As for the maximum time
since the last packet, I think it is a bit hard to measure and perhaps
unnecessary, because we already have a time parameter (the timer frequency),
which can adjust the tradeoff between latency and notification frequency.
Thanks very much for the comments!
Best Regards,
-- Dongxiao
-----Original Message-----
From: James Harper [mailto:james.harper@xxxxxxxxxxxxxxxx]
Sent: Thursday, September 10, 2009 5:48 PM
To: Xu, Dongxiao; Keir Fraser; xen-devel@xxxxxxxxxxxxxxxxxxx
Cc: Dong, Eddie; Yang, Xiaowei
Subject: RE: [Xen-devel][PATCH][RFC] Using data polling mechanism in netfront
to replace event notification between netback and netfront
>
> Here the w/ FE patch means that applying the first solution patch
attached in
> my last mail. w/ BE patch means applying the second solution patch
attached in
> this mail.
>
I think you also need to measure dropped packets (which presumably
happened due to buffer overflow), latency, and maybe jitter. The latter
two might be hard to measure with enough resolution to be significant
though.
What I saw when I was testing this sort of thing under Windows was that
instead of receiving a constant stream of a few packets at a time, I
received less frequent but much larger chunks of packets, which caused
more work to be done per DPC.
Does Xen give Windows any high resolution timers to play with? The best
I could find (I didn't look that hard) was the standard Windows timer
which has >10ms resolution.
I think it might be good, as you have suggested, to push all the smarts
back to the back end. Have some tunable figures (either via xenbus or
via ring comms) to give the parameters of when a notify is needed, e.g.:
. maximum time since last notification
. maximum time since last packet
. number of packets to notify at, regardless of timeout (this is the
event setting in the ring, although maybe not using that and having a
separate backend driven auto-scaling algorithm might be worthwhile)
Sounds like a bit of work though...
James
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel