WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

Re: [Xen-users] Open SSI paravirtualized kernel available for xen 3?

To: "Christopher G. Stach II" <cgs@xxxxxxxxx>
Subject: Re: [Xen-users] Open SSI paravirtualized kernel available for xen 3?
From: Tim Post <tim.post@xxxxxxxxxxxxxxx>
Date: Fri, 19 Jan 2007 12:36:49 +0800
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 18 Jan 2007 20:36:49 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <45B04463.2080807@xxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: Net Kinetics
References: <1169132461.13111.25.camel@xxxxxxxxxxxxxxxxxxxxx> <45AFF540.6060602@xxxxxxxxx> <1169178840.13177.57.camel@xxxxxxxxxxxxxxxxxxxxx> <45B04463.2080807@xxxxxxxxx>
Reply-to: tim.post@xxxxxxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Fri, 2007-01-19 at 02:09 -0200, Christopher G. Stach II wrote:
> Tim Post wrote:
> > On Thu, 2007-01-18 at 20:31 -0200, Christopher G. Stach II wrote:
> >> If you want something that you have to
> >> babysit, and can crash really easily, and use FC3,
> > 
> > I try to avoid Fedora (or yum distros), I much prefer GNU. I was
> > considering trying FC3/SSI as a last resort / HVM guest just to get a
> > proof of concept up for something. For something like Open SSI, a
> > paravirtualized kernel would be ideal.
> 
> There's also a Debian branch for it, and you're a big fan.  I really
> mentioned FC3 since that's a decent approximation of how current their
> kernel patches are.  I probably wasn't clear.
> 
> >>  I still don't think
> >> anyone is actively working on it.  It hardly feels like anyone is even
> >> working on OpenSSI these days.
> > 
> > I noticed that too. I try not to complain about projects not releasing
> > (heck I'm a Debian user, I'm used to it) .. but I do get a little
> > irritated if it starts to look like a project should be passed on to
> > others, but isn't being passed on. I think the developers are working on
> > much cooler things now (professionally); going back to OpenSSI for
> > them would be like going back to Duplo after using Legos.
> > Understandable.
> 
> I dunno.  I think RDMA is pretty cool to work on. :)
> 
> >>   A few people have mentioned it, and I'm
> >> interested in it, but you'd probably have to roll your own.
> > 
> > I'll keep digging in the off chance I find someone who picked it up and
> > has gotten somewhere. I'd rather help someone else finish rather than
> > start from square-0 needlessly. But if I can't find someone else mucking
> > with it, I'll probably do it myself. When the Xen / IB stuff is done /
> > stable / working, Mosix (and clusters using some kind of Mosix plugin)
> > will get new life, I think. It's a worthwhile effort to at least get
> > started on paravirtualizing one of them up to 3.0.4 so other efforts can
> > be dropped into place once completed.
> 
> I'd love nothing more than to have PV SSI nodes on Xen for scalable HA
> SSI clusters on only a few bigass machines.  I'm afraid it's just not
> time. :(
> 

My eventual goal (and dream) is PV SSI guests running from something
like freesan... I've already written an interconnect and SSI-ish
utilities for Xen, e.g. determining the fastest node available to run
any given guest and migrating it there.
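
A minimal sketch of that idea, assuming loads are already collected from
each host (the host names, load figures, and load-gathering mechanism
here are made up for illustration, not part of the interconnect
described above); the `xm migrate --live` invocation is the standard
Xen 3.x toolstack command:

```python
#!/usr/bin/env python
# Hypothetical sketch: pick the least-loaded Xen host and build the
# "xm migrate --live" command to send a guest there. In reality the
# load figures would come from the interconnect, not a literal dict.

def fastest_node(loads):
    """Return the host name with the lowest reported load."""
    return min(loads, key=loads.get)

def migrate_command(guest, loads):
    """Build the xm live-migration command for the best target host."""
    target = fastest_node(loads)
    return ["xm", "migrate", "--live", guest, target]

if __name__ == "__main__":
    # Example load figures (assumed values for illustration only).
    loads = {"xenhost1": 0.72, "xenhost2": 0.15, "xenhost3": 0.40}
    print(" ".join(migrate_command("pvnode3", loads)))
```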

So in essence, you have processes migrating to the fastest node in the
virtualized cluster, and a similar process going on with the nodes
themselves. It keeps the true meaning of "off the shelf" well intact by
utilizing mixed hardware well, and brings in some neat management
possibilities.

I really like how, in OpenSSI, you can just type "on node xyz command",
"on class abc command", "on fastest [command]", etc. So I did something
similar for a Xen farm. Centralizing that further, "on fastest guest
[command]" run from dom-0 on any Xen server in the hive would start a
process on the fastest PV guest in the cluster. Or "migrate pvnode3
fastest", meaning send the paravirtualized node 3 to the Xen server that
is best equipped to power it in its current state. We developed our own
interconnect for that purpose.
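
An "on fastest guest" dispatcher could be sketched roughly like this
(the guest names, load figures, and the use of ssh as the transport are
all assumptions for illustration; the real utilities would go over the
custom interconnect):

```python
# Hypothetical sketch of an "on fastest guest [command]" dispatcher run
# from dom-0: pick the least-loaded PV guest in the cluster and run the
# command on it over ssh.
import subprocess

def on_fastest_guest(command, guest_loads, run=subprocess.call):
    """Run `command` on the guest with the lowest reported load.

    `run` is injectable so the selection logic can be exercised
    without a live cluster; it defaults to actually invoking ssh.
    Returns the command's exit status.
    """
    guest = min(guest_loads, key=guest_loads.get)
    return run(["ssh", guest, command])

if __name__ == "__main__":
    # Assumed load figures; a stub runner shows which guest is chosen.
    loads = {"pvguest1": 0.9, "pvguest2": 0.2}
    on_fastest_guest("uptime", loads, run=lambda argv: print(argv) or 0)
```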

If for no other reason, any college could run 15 completely isolated SSI
clusters for teaching... or, if the performance was good, many more. It
takes web hosting to a whole different level and lets researchers have
one cluster per project instead of sharing a single instance. There are
many benefits to it; I wish the idea would attract more attention.

There are *many* things that would need to be worked out prior to even
hoping for any real performance out of the kernel... all it needs to do
is 'work' at this point for more development to take place :)

I think it's time ;). I just wish someone a bit better at making squares
fit in circular holes with C would rant about it as much as I do... but,
like I said, I'll give it a try; it will just take quite a while longer.

Best,
--Tim



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
