xen-devel

RE: [Xen-devel][PV-ops][PATCH] Netback: Fix PV network issue for netback multiple threads patchset

To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Subject: RE: [Xen-devel][PV-ops][PATCH] Netback: Fix PV network issue for netback multiple threads patchset
From: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Date: Fri, 25 Jun 2010 15:31:12 +0800
Accept-language: en-US
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "djmagee@xxxxxxxxxxxx" <djmagee@xxxxxxxxxxxx>, Fantu <fantonifabio@xxxxxxxxxx>
Delivery-date: Fri, 25 Jun 2010 00:33:43 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1277368404.19091.37455.camel@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <D5AB6E638E5A3E4B8F4406B113A5A19A1F205536@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <1276248930.19091.2870.camel@xxxxxxxxxxxxxxxxxxxxxx> <D5AB6E638E5A3E4B8F4406B113A5A19A1F372F07@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <1277368404.19091.37455.camel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsTd+xN5/QqDKbTSj6iXTZwtFIvawABPN8Q
Thread-topic: [Xen-devel][PV-ops][PATCH] Netback: Fix PV network issue for netback multiple threads patchset

Ian Campbell wrote:
> On Thu, 2010-06-17 at 09:16 +0100, Xu, Dongxiao wrote:
>> Ian,
>> 
>> Sorry for the late response; I was on vacation for the past few days.
> 
> I was also on vacation, so sorry for _my_ late reply ;-)
> 
>> Ian Campbell wrote:
>>> On Thu, 2010-06-10 at 12:48 +0100, Xu, Dongxiao wrote:
>>>> Hi Jeremy,
>>>> 
>>>> The attached patch should fix the PV network issue after applying
>>>> the netback multiple threads patchset.
>>> 
>>> Thanks for this, Dongxiao. Do you think this crash was a potential
>>> symptom of this issue? It does seem to go away if I apply your
>>> patch. 
>> 
>> Yes, the phenomenon on my side is the same without the fix patch.
> 
> Great, thanks.
> 
>>> On an unrelated note, do you have any plans to make the number of
>>> groups react dynamically to CPU hotplug? Not necessarily while
>>> there are actually active VIFs (might be tricky to get right) but
>>> perhaps only when netback is idle (i.e. when there are no VIFs
>>> configured), since often the dynamic adjustment of VCPUs happens at
>>> start of day to reduce the domain 0 VCPU allocation from the total
>>> number of cores in the machine to something more manageable.
>> 
>> I'm sorry, I am currently busy with some other tasks and may not have
>> time to take this on.
> 
> I understand.
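
(Purely as an illustration of the idea discussed above, a minimal
sketch of an idle-time resize hook using the CPU hotplug notifier API
of this kernel generation. netbk_nr_active_vifs() and
netbk_resize_groups() are hypothetical helper names, not functions in
the patchset.)

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/notifier.h>

static int netbk_cpu_callback(struct notifier_block *nfb,
                              unsigned long action, void *hcpu)
{
        switch (action) {
        case CPU_ONLINE:
        case CPU_DEAD:
                /* Resizing is only safe while no VIFs are configured. */
                if (netbk_nr_active_vifs() == 0)
                        netbk_resize_groups(num_online_cpus());
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block netbk_cpu_notifier = {
        .notifier_call = netbk_cpu_callback,
};

/* At init time: register_hotcpu_notifier(&netbk_cpu_notifier); */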
> 
>> But if the case is reducing the dom0 VCPU count, keeping the group
>> number unchanged will not impact performance: the group count
>> reflects the tasklet/kthread number, which has no direct association
>> with dom0's VCPU count.
> 
> Yes, that mitigates the issue to a large degree. I was just concerned
> about e.g. 64 threads competing for 4 VCPUs or similar, which seems
> wasteful in terms of some resource or other...
> 
> For XCP (which may soon switch from 1 to 4 domain 0 VCPUs in the
> unstable branch) I've been thinking of the following patch. I wonder
> if it might make sense in general? 4 is rather arbitrarily chosen, but
> I think even on a 64-core machine you wouldn't want to dedicate more
> than some fraction of it to netback activity, and if you do then it is
> configurable.

Basically I am OK with it.
One concern: when the system is equipped with a 10G NIC, from my
previous experience 4 dom0 vCPUs may not be enough to saturate the
bandwidth.

Thanks,
Dongxiao

> 
> Ian.
> 
> 
> netback: allow configuration of maximum number of groups to use
>
> Limit to 4 by default.
> 
> Signed-off-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> 
> diff -r 7692c6381e1a drivers/xen/netback/netback.c
> --- a/drivers/xen/netback/netback.c   Fri Jun 11 08:44:25 2010 +0100
> +++ b/drivers/xen/netback/netback.c   Fri Jun 11 09:31:48 2010 +0100
> @@ -124,6 +124,10 @@
>  static int MODPARM_netback_kthread = 1;
>  module_param_named(netback_kthread, MODPARM_netback_kthread, bool, 0);
>  MODULE_PARM_DESC(netback_kthread, "Use kernel thread to replace tasklet");
> +
> +static unsigned int MODPARM_netback_max_groups = 4;
> +module_param_named(netback_max_groups, MODPARM_netback_max_groups, uint, 0);
> +MODULE_PARM_DESC(netback_max_groups, "Maximum number of netback groups to allocate");
> 
>  /*
>   * Netback bottom half handler.
> @@ -1748,7 +1752,7 @@
>       if (!is_running_on_xen())
>               return -ENODEV;
> 
> -     xen_netbk_group_nr = num_online_cpus();
> +     xen_netbk_group_nr = min(num_online_cpus(), MODPARM_netback_max_groups);
>       xen_netbk = (struct xen_netbk *)vmalloc(sizeof(struct xen_netbk) *
>                                               xen_netbk_group_nr);
>       if (!xen_netbk) {
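
(A usage note, for illustration: with a patch along these lines the
group limit becomes tunable at load time, e.g. "modprobe netbk
netback_max_groups=8", or netbk.netback_max_groups=8 on the kernel
command line if netback is built in. The module name "netbk" is an
assumption here; it varies by tree.)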


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel