This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Re: [Qemu-devel] [PATCH 01/13] Handle terminating signals.

To: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [Qemu-devel] [PATCH 01/13] Handle terminating signals.
From: Anthony Liguori <anthony@xxxxxxxxxxxxx>
Date: Tue, 26 Aug 2008 10:36:18 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, qemu-devel@xxxxxxxxxx, Gerd Hoffmann <kraxel@xxxxxxxxxx>
Delivery-date: Tue, 26 Aug 2008 08:37:42 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <18612.8502.305043.233934@xxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1219336054-15919-1-git-send-email-kraxel@xxxxxxxxxx> <1219336054-15919-2-git-send-email-kraxel@xxxxxxxxxx> <m2n.s.1KWGbo-001Nr9@xxxxxxxxxxxxxxxxxxxxxx> <18611.56975.584280.471257@xxxxxxxxxxxxxxxxxxxxxxxx> <48B3F411.2020306@xxxxxxxxxx> <18611.63711.631859.280983@xxxxxxxxxxxxxxxxxxxxxxxx> <48B4027C.1000008@xxxxxxxxxxxxx> <18612.1900.73781.314743@xxxxxxxxxxxxxxxxxxxxxxxx> <48B41B7E.40708@xxxxxxxxxxxxx> <18612.7267.832361.270651@xxxxxxxxxxxxxxxxxxxxxxxx> <48B41F55.1000909@xxxxxxxxxxxxx> <18612.8502.305043.233934@xxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (X11/20080723)
Ian Jackson wrote:
> Anthony Liguori writes ("Re: [Xen-devel] Re: [Qemu-devel] [PATCH 01/13] Handle terminating signals."):
>> Then you need a pipe per signal, and you can't accurately simulate signalfd() (since you can't communicate the siginfo).

> No, you don't need a pipe per signal.  You only need multiple pipes if
> you have multiple different event loops which are each capable of
> handling only a subset of the signals, _and_ you're unwilling to use
> an atomic flag variable (e.g. cpu_interrupt) or its equivalent.
>
> But we already have cpu_interrupt, and of course the aio completion
> system has its own record of what's going on.  So that does not
> apply to qemu.
>
> All that's needed is a reliable, race-free way of avoiding spuriously
> blocking in a syscall when an event occurs between checking the state
> variables (aio_error, the cpu interrupt check) and the call to select().
> One fd is sufficient for that.
>
> Just to make that concrete: even if we extended my patch to use its
> mechanism for SIGINT et al., I don't think a second pipe would be
> needed.

In KVM, we do use the signal number to determine the action. We could use globals, but since we're multi-threaded, that gets pretty nasty. The same would apply to a threaded QEMU.

>> I don't see threads as a problem.  Are you concerned about mini-OS?

> Minios certainly doesn't currently have any threads, and it would
> probably be a severe pain to introduce them.  That's just one example
> of a portability problem.

We're definitely not going to avoid threads forever in QEMU. KVM requires threads to support multiple VCPUs. Threads are also needed to support true SMP with TCG.

And right now, implementing a thread pool is the only sane way to get reasonable disk IO in userspace.

I share your concerns about threading, which is why we have to use threads very carefully. My signalfd() patch uses them in an isolated way that is pretty easy to verify.

You could always add proper signalfd() support to minios, and you can certainly implement a signalfd() emulation that uses pipe(). signalfd() is really the right solution to this problem (and it doesn't require threads by default).


Anthony Liguori


