This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH 2/2] reap the blktapctl thread and notify the tapdisk backend driver to release resource like memory..

To: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 2/2] reap the blktapctl thread and notify the tapdisk backend driver to release resource like memory..
From: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Date: Fri, 7 May 2010 18:32:14 +0100
Cc: Jim Fehlig <JFEHLIG@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, James Song <JSong@xxxxxxxxxx>
Delivery-date: Fri, 07 May 2010 10:33:20 -0700
In-reply-to: <4BE3DB5A0200007800001BB7@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Newsgroups: chiark.mail.xen.devel
References: <4BE170FE0200002000085C39@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <19426.59365.615882.575878@xxxxxxxxxxxxxxxxxxxxxxxx> <4BE3DB5A0200007800001BB7@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Jan Beulich writes ("Re: [Xen-devel] [PATCH 2/2] reap the blktapctl thread and 
notify the tapdisk backend driver to release resource like memory.."):
> Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx> 06.05.10 18:01
> >Reading the message you refer to, surely it should be the job of the
> >toolstack (xend or libxl) to ensure that the backends are instructed
> >to do all necessary releasing ?
> No (or not only): The cleanup done here is to close open file handles
> and/or mmap-s associated with blktap. You may have seen the kernel
> side patches to allow the system as a whole to recover from that
> state (particularly when qemu-dm crashes), but in general I consider
> it bad practice for an application to keep huge amounts of mapped
> memory open when it is terminated in an orderly fashion.

Uh?  I can't see anything at all wrong with letting the kernel clean
up the memory mapped by, and the fds held by, qemu.

The kernel already needs to have that code, and if it's wrong or
incomplete (which you don't seem to be suggesting) then the system is
already broken; whereas if it's correct and complete, there is no
need for qemu to do anything.

In fact, however, there is allegedly some bug somewhere which this
patch is supposed to deal with, but I can't really see the connection.

> "Orderly" in the qemu-dm case unfortunately means being terminated
> by a signal, hence the signal should be intercepted by qemu;
> otherwise (i.e. in the current state) the design seems broken to me.

I think in general we should be aiming for crash-only software.
It's much, much more reliable, and it means we need to write less
code (and thus have fewer bugs).

> Having said that doesn't mean that I agree with the blktap-centric
> approach taken by the patch. Imo global cleanup should be
> performed by qemu-dm upon being terminated - the question just is
> whether such code already exists (and just needs to be hooked up),
> or whether that part is missing altogether and needs to be written
> from scratch.

I can't see that there is anything that qemu should be relied upon to
do on its own termination.  If it can't be relied on to do it, then we
need code elsewhere to do it (which we already have), and then there
is no need for qemu to have any code for it.


Xen-devel mailing list