WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-devel

Re: [Xen-devel] [Patch] continue_hypercall_on_cpu rework using tasklets

To: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Subject: Re: [Xen-devel] [Patch] continue_hypercall_on_cpu rework using tasklets
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Fri, 16 Apr 2010 08:55:51 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Yu, Ke" <ke.yu@xxxxxxxxx>
Delivery-date: Thu, 15 Apr 2010 23:56:45 -0700
In-reply-to: <789F9655DD1B8F43B48D77C5D30659731D79778B@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Organization: Fujitsu Technology Solutions
References: <789F9655DD1B8F43B48D77C5D30659731D73CECE@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C7ECB1CE.115BF%keir.fraser@xxxxxxxxxxxxx> <789F9655DD1B8F43B48D77C5D30659731D79778B@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mozilla-Thunderbird 2.0.0.24 (X11/20100329)
Jiang, Yunhong wrote:
> 
>> -----Original Message-----
>> From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx]
>> Sent: Thursday, April 15, 2010 7:07 PM
>> To: Jiang, Yunhong; Juergen Gross
>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Yu, Ke
>> Subject: Re: [Xen-devel] [Patch] continue_hypercall_on_cpu rework using 
>> tasklets
>>
>> On 15/04/2010 10:59, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx> wrote:
>>
>>>> Actually that's a good example because it now won't work, but for other
>>>> reasons! The hypercall continuation can interrupt another vcpu's execution,
>>>> and then try to synchronously pause that vcpu. Which will deadlock.
>>>>
>>>> Luckily I think we can re-jig this code to freeze_domains() before doing 
>>>> the
>>>> continue_hypercall_on_cpu(). I've cc'ed one of the CPU RAS guys. :-)
>>> Hmm, I have cc'ed one of the PM guys because it is enter_state :-)
>>> Can we add a check in vcpu_sleep_sync() for current? It is meaningless to
>>> cpu_relax for the current vcpu in that situation, especially if we are
>>> not in irq context.
>>> I'm not sure why freeze_domains only checks dom0's vcpus for current,
>>> instead of those of all domains.
>> Well actually pausing any vcpu from within the hypercall continuation is
>> dangerous. The softirq handler running the hypercall continuation may have
>> interrupted some running VCPU X. And the VCPU Y that the continuation is
>> currently trying to pause may itself be trying to pause X. So we can get a
>> deadlock that way. The freeze_domains() *has* to be pulled outside of the
>> hypercall continuation.
>>
>> It's a little bit similar to the super-subtle stop_machine_run deadlock
>> possibility I just emailed to you a second ago. :-)
> 
> Thanks for pointing out the stop_machine_run deadlock issue.
> 
> After more consideration and internal discussion, it seems the key point is 
> that the tasklet softirq is something like holding a lock on the current 
> vcpu's state (i.e. no one else can change that state until this softirq has 
> finished). So any blocking action in softirq context, not just 
> vcpu_pause_sync, is dangerous and should be avoided (we can't hold a lock 
> and then block, per my understanding).
> This is because a vcpu's state can only be changed by the schedule softirq 
> (am I right on this?), while the schedule softirq can't preempt other 
> softirqs. So, more generally, anything that is updated by a softirq context 
> and synchronously/blockingly waited for in xen's vcpu context is in fact an 
> implicit lock held by the softirq.
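The implicit-lock argument can be illustrated with a simplified model (hypothetical, not actual Xen code): the vcpu's state only advances when the schedule softirq runs, and softirqs do not preempt one another, so a spin wait from tasklet context can never observe the change it is waiting for:

```c
#include <stdbool.h>

enum vcpu_state { VCPU_RUNNING, VCPU_PAUSED };

struct vcpu {
    enum vcpu_state state;
    bool pause_requested;
};

/* Runs only from the schedule softirq: the sole place the state changes. */
static void schedule_softirq(struct vcpu *v)
{
    if (v->pause_requested)
        v->state = VCPU_PAUSED;
}

/* Called from tasklet (softirq) context.  Returns true if the vcpu
 * reached VCPU_PAUSED within 'budget' spins; in real code this would
 * be an unbounded cpu_relax() loop, i.e. a deadlock. */
static bool vcpu_sleep_sync_from_softirq(struct vcpu *v, int budget)
{
    v->pause_requested = true;
    while (v->state != VCPU_PAUSED && budget-- > 0)
        ;  /* schedule_softirq() can never run here: no preemption */
    return v->state == VCPU_PAUSED;
}
```

The bounded budget stands in for the real unbounded spin; the point is that no amount of spinning helps until the schedule softirq gets to run.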
> 
> As for the tricky bug in stop_machine_run(), I think it is in fact similar 
> to the cpu_add_remove_lock case. stop_machine_run() is a blocking action, 
> so we must make sure no one will block trying to get a lock that 
> stop_machine_run() already holds. Back then, we changed all components that 
> take the cpu_add_remove_lock to use try_lock.
> 
> The change caused by the tasklet is that a new implicit lock is added, i.e. 
> the vcpu's state.
> The straightforward method is like the cpu_add_remove_lock: make everything 
> that waits for a vcpu state change process softirqs between checks. Maybe 
> the cleaner way is your previous suggestion, that is, to run 
> stop_machine_run() in the idle vcpu; another way is to turn back to the 
> original method, i.e. do it in the schedule_tail.
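The idle-vcpu alternative could look roughly like the following sketch (hypothetical API, not actual Xen code): softirq context only queues the blocking work, and the idle vcpu executes it later in a context where blocking is safe because no guest vcpu state is implicitly held:

```c
#include <stddef.h>

typedef void (*work_fn_t)(void *);

struct deferred_work {
    work_fn_t fn;
    void *data;
    int pending;
};

static struct deferred_work idle_work;

/* Softirq context: just record the request, never block here. */
static void defer_to_idle(work_fn_t fn, void *data)
{
    idle_work.fn = fn;
    idle_work.data = data;
    idle_work.pending = 1;
}

/* One pass of the idle vcpu's loop: a safe context for blocking
 * operations such as stop_machine_run(). */
static void idle_loop_once(void)
{
    if (idle_work.pending) {
        idle_work.pending = 0;
        idle_work.fn(idle_work.data);
    }
}

/* Demo payload standing in for the real blocking work. */
static int done_count;
static void demo_work(void *data)
{
    (void)data;
    done_count++;
}
```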
> 
> Also, we are not sure why continue_hypercall_on_cpu was changed to use 
> tasklets. What is the benefit? At least I think it is quite confusing, 
> because per our understanding a hypercall is usually assumed to execute in 
> the current context, while this change breaks that assumption. So any 
> hypercall that uses this _c_h_o_c, and any function called by that 
> hypercall, has to be aware of this. I'm not sure this is really correct; at 
> least it may cause trouble if someone uses it without realizing the 
> limitation. From Juergen Gross's mail it seems to be for cpupools, but I 
> have no idea of the cpupool :-(

Cpupools introduce something like "scheduling domains" in Xen. Each cpupool
owns a set of physical cpus and has its own scheduler. Each domain is a
member of a cpupool.

It is possible to move cpus or domains between pools, but a domain is always
limited to the physical cpus in its cpupool.

This limitation makes it impossible to use continue_hypercall_on_cpu on an
arbitrary physical cpu without changing it. My original solution temporarily
moved the target cpu into the cpupool of the issuing domain, but this was
regarded as an ugly hack.
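A rough sketch of the restriction (hypothetical structures, not the real cpupool implementation): a domain's vcpus may only run on physical cpus owned by its pool, so a hypercall continuation cannot simply be scheduled on an arbitrary cpu:

```c
#include <stdbool.h>

#define NR_CPUS 8

struct cpupool {
    bool cpu_valid[NR_CPUS];   /* which physical cpus belong to this pool */
};

struct domain {
    struct cpupool *pool;
};

/* A vcpu of 'd' may only be scheduled on 'cpu' if d's pool owns it.
 * A continue_hypercall_on_cpu() targeting a cpu outside the pool would
 * violate this invariant, hence the temporary pool move (or a per-cpu
 * tasklet) is needed. */
static bool domain_may_run_on(const struct domain *d, int cpu)
{
    return cpu >= 0 && cpu < NR_CPUS && d->pool->cpu_valid[cpu];
}
```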


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
