WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-devel] Cpu pools discussion

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Cpu pools discussion
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Thu, 30 Jul 2009 14:51:54 +0200
Cc: Tim Deegan <Tim.Deegan@xxxxxxxxxxxxx>, George Dunlap <dunlapg@xxxxxxxxx>, Zhigang Wang <zhigang.x.wang@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 30 Jul 2009 05:52:28 -0700
In-reply-to: <C69718AB.10ED3%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Fujitsu Technology Solutions
References: <C69718AB.10ED3%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla-Thunderbird 2.0.0.22 (X11/20090707)
Keir Fraser wrote:
> On 30/07/2009 06:46, "Juergen Gross" <juergen.gross@xxxxxxxxxxxxxx> wrote:
> 
>>> Another alternative might be to create a 'hypervisor thread', either
>>> dynamically, or a per-cpu worker thread, and do the work in that. Of course
>>> that has its own complexities and these threads would also have their own
>>> interactions with cpu pools to keep them pinned on the appropriate physical
>>> cpu. I don't know whether this would really work out simpler.
>> There should be an easy solution for this: what you are suggesting here
>> sounds like a "hypervisor domain" similar to the idle domain, but with high
>> priority and normally all vcpus blocked.
>>
>> The interactions of this domain with cpupools would be the same as for the
>> idle domain.
>>
>> I think this approach could be attractive, but the question is if the pros
>> outweigh the cons. OTOH such a domain could open interesting opportunities.
> 
> I think especially if cpupools are added into the mix then this becomes more
> attractive than the current approach. The other alternative is to modify the
> two existing problematic callers to work okay from softirq context (or not
> need continue_hypercall_on_cpu() at all, which might be possible at least in
> the case of CPU hotplug). I would be undecided between these two just now --
> it depends on how easily those two callers can be fixed up.

I'll try to set up a patch to add a hypervisor domain. Given all the problems
I ran into when switching cpus between pools (avoiding running on the cpu
being switched, etc.), this solution could make life much easier.

And George would be happy to see all the borrow cpu stuff vanish :-)


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 636 47950
Fujitsu Technology Solutions               e-mail: juergen.gross@xxxxxxxxxxxxxx
Otto-Hahn-Ring 6                        Internet: ts.fujitsu.com
D-81739 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel