This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Reconciling multiple Xen flavored development streams

To: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Reconciling multiple Xen flavored development streams
From: Mike Dickson <mike.dickson@xxxxxx>
Date: Wed, 01 Apr 2009 19:56:54 -0500
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 01 Apr 2009 17:57:26 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4FA716B1526C7C4DB0375C6DADBC4EA34172EC1B84@xxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: BladeSystem infrastructure R&D
References: <1238596413.3625.25.camel@xxxxxxxxxxxxxxxxxxxxx> <4FA716B1526C7C4DB0375C6DADBC4EA34172EC1B84@xxxxxxxxxxxxxxxxxxxxxxxxx>
Reply-to: mike.dickson@xxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Wed, 2009-04-01 at 22:02 +0000, Ian Pratt wrote:
> Not sure who said kxen would make 3.4 -- it's clearly missed the window. I 
> think Christian said he'd re-base the code as soon as 3.4 was released. The 
> code is certainly a good candidate to get merged to xen-unstable post branch.

That makes more sense. It's certainly difficult to stabilize something
against xen-unstable, so having the 3.4 release as a base is sensible.

> The XenClient repo [ http://xenbits.xen.org/xenclient/ ] contains more than 
> just the core hypervisor and is a full reference implementation for 
> virtualization on x86 client devices, including a modern xen kernel (soon to 
> be pvops based), tiny uclibc/busybox/buildroot based filesystem, and the 
> 'xenvm/xenops' embedded xen toolstack.

This is where I've spent my time recently. I've built the tree and
booted it on a couple of systems.  I like the more modular approach to
building the components, which is of course necessary given the use of
uClibc and buildroot.  I'm curious how the build system and the
modularity fit with the current server tree.  Also, how does the ocaml
work get reconciled with the current python tools approach in the main
xen tree?  The current ocaml stuff is more minimalist, which makes it a
nice fit for the client hypervisor or an embedded approach.  Do the
tool stacks live side by side, so that when I do a build I configure
which stack to use?  Or do you anticipate this work will be kept
separate?
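To make the question concrete: what I'm imagining is something like the
sketch below. To be clear, the TOOLSTACK variable and its values are
purely hypothetical, not actual options in either tree today; this just
illustrates the kind of build-time switch I mean.

```shell
#!/bin/sh
# Hypothetical build-time toolstack selection; neither variable nor
# value is a real Xen build option, just an illustration of the idea.
TOOLSTACK="${TOOLSTACK:-xenvm}"   # "xenvm" (ocaml) or "xend" (python)

case "$TOOLSTACK" in
    xenvm)
        echo "selected: ocaml xenvm/xenops embedded toolstack"
        ;;
    xend)
        echo "selected: python xend toolstack from the main xen tree"
        ;;
    *)
        echo "unknown toolstack: $TOOLSTACK" >&2
        exit 1
        ;;
esac
```

i.e. one unified tree where a single knob picks which stack gets built
and installed, rather than two separate forks.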

> The hypervisor tree in the XenClient repo is currently based on a 
> xen-unstable snapshot plus some additional client-specific patches that 
> aren't clean enough to go into mainline xen yet. The plan is to keep 
> re-basing that to newer xen versions, and feeding the patches into 
> xen-unstable as they're ready. Ultimately all the client-specific patches 
> should be merged into mainline xen-unstable.

That's what I suspected.  So is the likely outcome that the work
converges on a single version of Xen (as the patches mature, the
XenClient and kXen work gets merged into xen-unstable and therefore
into a future release)?  I can see a similar convergence around the
qemu-dm code.  Do the toolstacks stay separate, with me selecting
which stack I need when building this unified Xen?

It's great to see all the healthy development activity addressing
different use cases.  I was just curious whether the plan was to keep
these as separate forks or to merge them into "core Xen" as it becomes
practical.  Thanks for clarifying.


Xen-devel mailing list