Re: [Xen-devel] [PATCH] remus: fix check for installed qdiscs on ifb
On Mon, Mar 21, 2011 at 7:51 AM, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx> wrote:
Shriram Rajagopalan writes ("[Xen-devel] [PATCH] remus: fix check for installed qdiscs on ifb"):
> remus: fix check for installed qdiscs on ifb
Thanks.
> current check includes ingress and pfifo_fast.
> Add mq to the list of allowed qdiscs already installed
> on ifb. This patch fixes cases where remus fails to start,
> due to an mq qdisc already present on the vif.
Forgive me for being dense, but I don't understand this at all. What
is the problem caused by pre-existing qdiscs that the code is trying
to avoid, and why are these particular qdiscs OK?
Sorry, my bad. It is not the pre-existing qdiscs that cause the issue; it is the dummy "mq" qdisc that the kernel adds by default. The original code checks for the presence of only an ingress or pfifo_fast qdisc, and punts if anything else is present. In this case "mq" is present (added by default) and causes remus to fail.
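(For concreteness, a rough sketch of the kind of check in question; the names and structure below are illustrative, not the actual remus source:)

    # Illustrative sketch only -- not the actual remus code.
    # Qdiscs that may already be installed on the ifb/vif without
    # blocking remus.  "mq" is the dummy multiqueue scheduler that
    # recent kernels attach by default, so it has to be tolerated
    # alongside ingress and pfifo_fast.
    ALLOWED_QDISCS = ('ingress', 'pfifo_fast', 'mq')

    def check_installed_qdiscs(kinds):
        """kinds: list of qdisc kind strings found on the device."""
        for kind in kinds:
            if kind not in ALLOWED_QDISCS:
                raise RuntimeError('unexpected qdisc %r already installed'
                                   % kind)

With "mq" missing from that allowed set, a default-configured vif trips the check and remus refuses to start.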
This is what I understood from the kernel packet scheduler code & docs.
From net/sched/sch_generic.c:
    void dev_activate(struct net_device *dev)
    {
            /* No queueing discipline is attached to device;
               create default one i.e. pfifo_fast for devices,
               which need queueing and noqueue_qdisc for
               virtual interfaces */

            if (dev->qdisc == &noop_qdisc)
                    attach_default_qdiscs(dev);
            ...
    }

    static void attach_default_qdiscs(struct net_device *dev)
    {
            ...
            if (!netif_is_multiqueue(dev) || dev->tx_queue_len == 0) {
                    netdev_for_each_tx_queue(dev, attach_one_default_qdisc, NULL);
                    ...
            } else {
                    qdisc = qdisc_create_dflt(dev, txq, &mq_qdisc_ops, TC_H_ROOT);
                    ...
            }
    }
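(One way to see from user space what the kernel attached is to parse "tc qdisc show". A naive sketch, assuming the iproute2 tc binary is on PATH; exact output varies across versions:)

    import subprocess

    def installed_qdiscs(dev):
        # "tc qdisc show dev DEV" prints one line per qdisc, e.g.
        #     qdisc mq 0: root
        # so the second whitespace-separated field is the qdisc kind.
        out = subprocess.check_output(['tc', 'qdisc', 'show', 'dev', dev])
        return [line.split()[1]
                for line in out.decode().splitlines() if line.strip()]

On a multiqueue device on a recent kernel this reports "mq" at the root even though nothing was configured by hand, which is exactly what the original check rejected.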
sch_mq is a "Classful multiqueue dummy scheduler" and, according to the multiqueue semantics in Section 2 of Documentation/networking/multiqueue.txt:
"Currently two qdiscs are optimized for multiqueue devices. The first is the
default pfifo_fast qdisc. This qdisc supports one qdisc per hardware queue. A new round-robin qdisc, sch_multiq also supports multiple hardware queues."
shriram