This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] [PATCH] Fix hvm vcpu hotplug bug

To: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Fix hvm vcpu hotplug bug
From: "Liu, Jinsong" <jinsong.liu@xxxxxxxxx>
Date: Wed, 18 Aug 2010 12:57:06 +0800
Accept-language: en-US
Cc: "Li, Xin" <xin.li@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Delivery-date: Tue, 17 Aug 2010 21:58:23 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <19554.43920.11785.97567@xxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <BC00F5384FCFC9499AF06F92E8B78A9E0B0007FD20@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <19554.43920.11785.97567@xxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acs5XLcWVGs7WFKXQdCv6amkH6j5qQFMR1vA
Thread-topic: [Xen-devel] [PATCH] Fix hvm vcpu hotplug bug
Ian Jackson wrote:
> Liu, Jinsong writes ("[Xen-devel] [PATCH] Fix hvm vcpu hotplug bug"):
>> When hot-plugging hvm vcpus with the 'xm vcpu-set' command, if many
>> vcpus are added/removed by one 'xm vcpu-set' command, there is a bug:
>> not all of the vcpus that should be added/removed actually are.
>> This patch fixes the bug. It delays triggering the SCI until all
>> xenstore cpu node statuses have been watched.
> This patch seems to arrange to take multiple CPU hot-add/remove events
> and coalesce them into a single event.  It is obvious how this avoids
> triggering a race, but I'm not convinced that it's a correct fix.

It is used to avoid inconsistency in the cpu status map (producer: qemu 
watching the xenstore cpu nodes; consumer: the SCI \_L02 control method), so it 
delays triggering the SCI until all cpu nodes have been watched.
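The idea behind the delay can be sketched roughly as follows (plain C with hypothetical names, not the actual qemu-xen code): apply the whole batch of cpu-node changes first, then raise the SCI once, so the \_L02 method never observes a half-updated map.

```c
#include <assert.h>
#include <stdint.h>

static uint8_t cpu_state_map;   /* bit i = vcpu i online (hypothetical map) */
static int sci_pulses;          /* counts SCI pulses, for illustration only */

static void trigger_sci(void)
{
    sci_pulses++;
}

/* Apply every xenstore cpu-node update in the batch, then trigger the
 * SCI a single time, instead of pulsing once per vcpu. */
static void vcpu_set_batch(const uint8_t online[], int n)
{
    for (int i = 0; i < n; i++) {
        if (online[i])
            cpu_state_map |= (uint8_t)(1u << i);
        else
            cpu_state_map &= (uint8_t)~(1u << i);
    }
    trigger_sci();
}
```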

> The core problem seems to be that somehow the SCI IRQ is lost ?
> Perhaps the real problem is this code:
>         qemu_set_irq(sci_irq, 1);
>         qemu_set_irq(sci_irq, 0);
> I'm not familiar with the way SCI is supposed to work but clearing the
> irq in the qemu add/remove function seems wrong.  Surely the host
> should clear the interrupt when it has serviced the interrupt.
> Can you explain what I'm missing ?
> Ian.

Yes, you are right.
In fact, it puzzled me how and when to drop sci_irq back to 0.

According to the ACPI spec, there are two levels of logic: GPE and SCI.
1) GPE_EN and GPE_STS are ANDed to trigger a GPE event (like pci-hotplug);
2) multiple GPE events are ORed to trigger the SCI, which is now wired to i8259[9].

The current qemu-xen implements the GPE logic and wires the GPE directly to i8259[9].
Since qemu-xen currently only supports the pci-hotplug event, it can work that way.
However, if we want to support multiple hotplug events, qemu-xen does not yet 
have the 'GPE events ORed to trigger SCI' logic.
I think qemu-xen should add this logic level, so that it can support more GPE 
events in the future.
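The missing level could look something like this (an illustrative sketch assuming 16-bit GPE registers; the names and widths are assumptions, not qemu-xen's actual code):

```c
#include <stdint.h>

/* Per ACPI, each GPE fires when its status AND enable bits are both set;
 * the SCI line level is the OR of all pending, enabled GPEs.
 * Register width and names here are assumptions for illustration. */
static int gpe_sci_level(uint16_t gpe_sts, uint16_t gpe_en)
{
    return (gpe_sts & gpe_en) != 0;
}
```

With such a level function, any GPE source (pci-hotplug, cpu-hotplug, ...) would recompute the SCI line from the combined register state rather than driving the IRQ directly.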

BTW, qemu-kvm does the same as our current patch:
         qemu_set_irq(sci_irq, 1);
         qemu_set_irq(sci_irq, 0);
it triggers an SCI pulse and works fine.
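Why the pulse works can be illustrated with a toy edge-triggered input (a hypothetical model, not the real i8259 emulation): the rising edge latches the interrupt, so clearing the line immediately afterwards does not lose it.

```c
#include <assert.h>

struct irq_line {
    int level;    /* current line level */
    int pending;  /* latched interrupt request */
};

/* An edge-triggered input latches a request on the rising edge only;
 * lowering the line afterwards leaves the latched request intact. */
static void set_irq(struct irq_line *l, int level)
{
    if (!l->level && level)
        l->pending = 1;
    l->level = level;
}
```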

Xen-devel mailing list