This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] bug # 477

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] bug # 477
From: Florian Kirstein <xenlist@xxxxxxxxxxxxxx>
Date: Thu, 30 Mar 2006 00:39:10 +0200
Cc: Ewan Mellor <ewan@xxxxxxxxxxxxx>
Delivery-date: Wed, 29 Mar 2006 22:40:43 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20060329095603.GC31336@xxxxxxxxxxxxxxxxxxxxxx>; from ewan@xxxxxxxxxxxxx on Wed, Mar 29, 2006 at 10:56:03AM +0100
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <Pine.LNX.4.63.0603172155560.27119@localhost> <20060329072401.A26981@xxxxxxxxxxx> <20060329095603.GC31336@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/

OK, I found at least a kludge to work around this; see below. I'm not sure
it qualifies as a clean solution, but it has worked for me so far.
I added it as a comment in Bugzilla, hope that's OK.

> When xenconsoled gets into this state, spinning using 100% CPU, could you use
> gdb to find out where it is spinning?  We've not managed to reproduce this
Oh, and I thought it reproduced easily :) But now I even had difficulty
getting it into the "really hung" state; the "hung for 30 seconds" state
was enough for a first analysis, though:

I used strace to see what xenconsoled is doing while consuming 100% CPU,
and it calls select() over and over:

select(20, [16 18 19], [], NULL, NULL)  = 1 (in [19])
select(20, [16 18 19], [], NULL, NULL)  = 1 (in [19])
select(20, [16 18 19], [], NULL, NULL)  = 1 (in [19])
select(20, [16 18 19], [], NULL, NULL)  = 1 (in [19])
select(20, [16 18 19], [], NULL, NULL)  = 1 (in [19])
select(20, [16 18 19], [], NULL, NULL)  = 1 (in [19])

Using gdb I identified this as the select in
tools/console/daemon/io.c, line 572, in handle_io(void):
    ret = select(max_fd + 1, &readfds, &writefds, 0, NULL);
after which xenconsoled seems to iterate through the domains
to handle the input, or something like that.

My idea was that the select might return before the domU has really made
the data available, and that xenconsoled, by running in a tight select
loop, then slows the machine down even further, so it takes even longer
for the data to become available. These are just wild guesses; I haven't
looked into the details of the console code :) So I simply added a short
usleep() after the select in io.c to slow down the select loop and give
the machine time to do other things. Possibly this is why you can't
reproduce it: your machines aren't slow enough? :)

The result is satisfying: the console accepts pastes of even large blocks
more or less immediately, I now can't get xenconsoled to consume any
relevant amount of CPU, and I could not reproduce the soft-irq kernel
message either. Of course this patch slows the consoles down a bit, but
I'm thinking of using as little as 1000 in the usleep; 1ms should be a
fair response time for a console, and it prevents users from stealing
Dom0 CPU by flooding the console :)

Possibly there's a nicer fix for this possible race condition, but I
don't have the insight into the inner workings of the console
mechanism for that (yet :).

Oh, and for the record: I could never really crash xenconsoled in my
setup (just hang it at 100% CPU), so I'm not sure whether this also fixes
the initial problem Alex Kelly had in Bug #477 - possibly he could test this?

(:ul8er, r@y
