

Re: [Xen-devel] VIRQ_CON_RING

To: Jan Beulich <JBeulich@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] VIRQ_CON_RING
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Thu, 12 Nov 2009 14:51:36 +0000
Delivery-date: Thu, 12 Nov 2009 06:51:51 -0800
In-reply-to: <4AFC2784020000780001F54F@xxxxxxxxxxxxxxxxxx>
On 12/11/2009 14:19, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:

> Is there any real user for this vIRQ?

xenconsoled, when started with --log=hv or --log=all.
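
For context, a minimal sketch (not the actual xenconsoled source) of how a
consumer of this vIRQ is wired up: bind VIRQ_CON_RING to an event channel,
then drain the hypervisor console ring whenever the event fires. The helper
names below are placeholders; the real daemon goes through libxc (calls along
the lines of xc_evtchn_bind_virq() and xc_readconsolering(), whose exact
signatures vary by release).

#include <stdio.h>
#include <stdbool.h>

#define VIRQ_CON_RING 8  /* value as in xen/include/public/xen.h; check your tree */

/* Placeholder: bind the vIRQ, returning an event-channel "port". */
static int bind_con_ring_virq(void) { return 1; }

/* Placeholder: block until the port fires; returns false to end this demo. */
static bool wait_for_event(int port) { (void)port; return false; }

/* Placeholder: read whatever Xen appended to its console ring since last time. */
static void drain_console_ring(void)
{
    printf("new hypervisor console output\n");
}

int main(void)
{
    int port = bind_con_ring_virq();

    /* Each time the hypervisor's printk adds to the ring it raises
     * VIRQ_CON_RING, which arrives here as an event-channel notification. */
    while (wait_for_event(port))
        drain_console_ring();

    return 0;
}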

> While I realize that for compatibility reasons (even in the case of there
> not being a current user) it may not be possible to drop this vIRQ
> altogether, I wonder whether it would be possible to avoid scheduling
> the tasklet when the vIRQ has no handler and/or is already pending.

So this is due to you adding a really noisy printk in the middle of a
hypercall? I don't think you should expect goodness to result from that.
Seems to me the issue is as much the extreme load you put on printk as it is
printk's overhead.
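
For reference, a minimal sketch (not actual Xen code) of the kind of guard
suggested above: only schedule the console-ring notifier when the vIRQ has a
consumer and a notification is not already outstanding. All names here are
hypothetical; in the hypervisor the real pieces live around
xen/drivers/char/console.c (the notifier tasklet discussed in this thread)
and the VIRQ_CON_RING delivery path.

#include <stdio.h>
#include <stdbool.h>
#include <stdatomic.h>

static atomic_bool notify_pending;   /* set while a notification is queued       */
static bool virq_has_consumer;       /* e.g. xenconsoled has bound VIRQ_CON_RING */

/* Stand-in for tasklet_schedule() on the console notifier tasklet. */
static void schedule_notifier_tasklet(void)
{
    printf("tasklet scheduled\n");
}

/* Called from the printk path after appending to the console ring. */
static void maybe_notify_con_ring(void)
{
    if (!virq_has_consumer)
        return;                      /* nobody listening: skip the work */

    /* Only the first caller since the last delivery queues the tasklet;
     * the flag would be cleared once the vIRQ is actually sent. */
    if (!atomic_exchange(&notify_pending, true))
        schedule_notifier_tasklet();
}

int main(void)
{
    virq_has_consumer = true;
    maybe_notify_con_ring();         /* queues the notifier */
    maybe_notify_con_ring();         /* skipped: already pending */
    return 0;
}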

 -- Keir

