Jeremy Fitzhardinge wrote:
> On 12/09/09 19:29, Xu, Dongxiao wrote:
>>> Also, is it worth making it a tunable? Presumably it needn't scale
>>> exactly with the number of dom0 cpus; if you only have one or two
>>> gbit interfaces, then you could saturate that pretty quickly with a
>>> small number of cpus, regardless of how many domains you have.
>> How many CPUs serve the NIC interface is determined by how the
>> interrupt is delivered. If the system only has two gbit interfaces, and
>> they deliver interrupts to CPU0 and CPU1, then the result is: two
>> CPUs handle two tasklets, and the other CPUs are idle. The group_nr
>> just defines the max number of tasklets; it doesn't decide which CPU
>> handles a given tasklet.
> So does this mean that a given vcpu will be used to handle the
> interrupt if it happens to be running on a pcpu with affinity for the
> device? Or that particular devices will be handled by particular
If the NIC device is owned by Dom0, then its interrupt affinity is tied
to Dom0's *VCPUs* (I think it's not the PCPUs). Which VCPU handles
the device interrupt is determined by the interrupt affinity, which is
either set manually with commands such as
"echo XXX > /proc/irq/irq_num/smp_affinity", or adjusted
automatically by the irqbalance daemon.
>>> I've pushed this out in its own branch:
>>> xen/dom0/backend/netback-tasklet; please post any future patches
>>> against this branch.
>> What's my next step for merging this netback-tasklet tree into
> Hm, well, I guess:
> * I'd like to see some comments from Keir/Ian(s)/others that this is
> basically the right approach. It looks OK to me, but I don't
> have much experience with performance in the field.
> o does nc2 make nc1 obsolete?
> * Testing to make sure it really works. Netback is clearly
> critical functionality, so I'd like to be sure we're not
> introducing big regressions.
I will do another round of testing on this patch, and will give you a reply.
Xen-devel mailing list