xen-devel

Re: [Xen-devel] intermittent problems with legacy xmlrpc server in 3.0.4

To: John Levon <levon@xxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] intermittent problems with legacy xmlrpc server in 3.0.4
From: "Daniel P. Berrange" <berrange@xxxxxxxxxx>
Date: Wed, 17 Jan 2007 01:29:59 +0000
Cc: atse@xxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 16 Jan 2007 17:29:34 -0800
In-reply-to: <20070117010401.GA4503@xxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20070117010401.GA4503@xxxxxxxxxxxxxxxxxxxxxxx>
Reply-to: "Daniel P. Berrange" <berrange@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.1i
On Wed, Jan 17, 2007 at 01:04:01AM +0000, John Levon wrote:
> So for some reason the server is trying to process a request before xm has
> sent it, and the EWOULDBLOCK is causing the EPIPE, it seems.
> 
> changeset 12062:5fe8e9ebcf5c made this change:
> 
> +        try:
> +            self.server.socket.settimeout(1.0)
> +            while self.running:
> +                self.server.handle_request()
> 
> which places xmlrpc.sock in non-blocking mode. SocketServer.py actually
> does this on init:

So from reading that changeset, it looks as if the socket is being put
into non-blocking mode so that when XenD shuts down it doesn't wait forever
for active clients to finish. An alternative way to do this would be to
simply mark all the client connection handling threads as daemon
threads and not bother calling join() on them at all - just rely on
the automatic thread cleanup. This means that the main process can just
quit & any outstanding client handling threads will simply be killed
off without delay.
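
Roughly the kind of thing I mean - just an illustrative sketch, not the
actual XendServer code, and the names (handle_client, serve) are made up:

    import threading

    def handle_client(conn):
        # Service one client connection; runs in its own thread.
        try:
            data = conn.recv(4096)
            # ... parse and dispatch the XML-RPC request here ...
        finally:
            conn.close()

    def serve(listener):
        while True:
            conn, addr = listener.accept()   # plain blocking accept
            t = threading.Thread(target=handle_client, args=(conn,))
            # Daemon threads go away automatically when the main thread
            # exits, so shutdown never has to join() them.
            t.setDaemon(True)
            t.start()

When XenD's main thread exits, any handler threads still running are just
torn down with the process.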

>     def __init__(self, request, client_address, server):
>         self.request = request
>         self.client_address = client_address
>         self.server = server
>         try:
>             self.setup()
>             self.handle()
>             self.finish()
> 
> This self.handle() ends up as the recv() that craps itself when it gets
> EAGAIN. This doesn't always happen, presumably the race is between
> creating the request thread in SocketServer and xm writing the data.
> 
> I've hacked up SocketServer a bit to handle EAGAIN, but this obviously
> isn't a good fix. Suggestions welcome, I'm not really familiar with all
> this server code.

Having had a cursory glance at the code, as you say, none of it is expecting
the socket to be in non-blocking mode, so it easily breaks. You'd probably
see the same thing if network congestion caused a data stall of > 1 second.
IMHO the sockets should be put back into blocking mode & we should find
another way of dealing with any possible shutdown issues.
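
For example (again only a sketch with invented names, assuming a UNIX
socket like the legacy server uses): leave accept() and the per-connection
sockets fully blocking, and have the shutdown path wake the accept loop
instead of putting a timeout on the listening socket:

    import socket
    import threading

    class BlockingRPCServer:
        def __init__(self, path):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.bind(path)
            self.sock.listen(5)
            self.running = True

        def serve_forever(self):
            while self.running:
                try:
                    # No settimeout() here, so accepted connections stay
                    # in blocking mode and recv() never sees EAGAIN.
                    conn, addr = self.sock.accept()
                except socket.error:
                    break           # woken up by shutdown() below
                t = threading.Thread(target=self.handle, args=(conn,))
                t.setDaemon(True)
                t.start()

        def shutdown(self):
            self.running = False
            try:
                # On Linux, shutdown() on the listening socket kicks a
                # thread blocked in accept() out of its wait.
                self.sock.shutdown(socket.SHUT_RDWR)
            except socket.error:
                pass
            self.sock.close()

        def handle(self, conn):
            try:
                data = conn.recv(4096)
                # ... process the request ...
            finally:
                conn.close()

That keeps SocketServer's assumptions intact (blocking sockets everywhere)
while still letting XenD shut down promptly.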

Regards,
Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=| 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
