xen-users

Re: [Xen-users] XCP My projects and todo list

To: matt@xxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] XCP My projects and todo list
From: Vern Burke <vburke@xxxxxxxx>
Date: Tue, 16 Feb 2010 20:30:41 -0500
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 16 Feb 2010 17:31:44 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1cf90bfc29c5f7fd3bbf37a7e9434432.squirrel@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4B7AC97A.10602@xxxxxxxx> <20100216183920.GI2861@xxxxxxxxxxx> <4B7AF397.50807@xxxxxxxx> <20100216195053.GL2861@xxxxxxxxxxx> <4B7AFBA1.2040503@xxxxxxxx> <1cf90bfc29c5f7fd3bbf37a7e9434432.squirrel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.7) Gecko/20100111 Thunderbird/3.0.1
Matthew:
The XCP cloud is currently 5 servers (2x Opteron single core with 4GB each, soon to get a rolling upgrade to dual core Opterons and more memory). We're running about 40 virtual machines (all CentOS 5.4); the practical limit is around 12 per server. We leave the pool master open as extra capacity should a slave server croak.

Networking is the standard XCP configuration using Open vSwitch. Public facing ports have static addresses; ports facing the storage area network get private addresses over DHCP.

Back end storage is a pair of the most rock solid NFS servers I could put together (2x Opteron, 4GB memory, 500GB drives on hardware RAID, soon to be upgraded to 1TB drives, oversize redundant power supply, etc.) on a private Gigabit Ethernet storage network. I know this isn't the most bulletproof redundant storage configuration, but the MTBFs will hold us until I find something I'm happy with that doesn't require Rube Goldberg to make it work :).

Problems were mostly confined to various bugaboos with XCP, such as a critical file being left off the last distribution ISO. All in all, I think XCP is one heckuva package; the bugs just come with the territory of being out on the front edge :).
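For anyone curious what that setup looks like at the command line, the addressing and the NFS back end described above map onto a handful of xe CLI calls. This is a minimal sketch only; the UUID placeholders, addresses, and export path are hypothetical, and your values will differ:

  # Pin a static address on the public-facing PIF (addresses are made up)
  xe pif-reconfigure-ip uuid=<public-pif-uuid> mode=static \
     IP=203.0.113.10 netmask=255.255.255.0 gateway=203.0.113.1

  # Let the SAN-facing PIF take a private address over DHCP
  xe pif-reconfigure-ip uuid=<storage-pif-uuid> mode=dhcp

  # Attach one of the NFS servers as a shared SR for the whole pool
  xe sr-create name-label="NFS VM storage" shared=true type=nfs \
     content-type=user device-config:server=192.168.1.10 \
     device-config:serverpath=/export/vms

With shared=true, every host in the pool (including the master held in reserve) sees the same storage, which is what lets VMs come up on another host when one croaks.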

Vern Burke

SwiftWater Telecom
http://www.swiftwatertel.com
ISP/CLEC Engineering Services
Data Center Services
Remote Backup Services

On 2/16/2010 7:13 PM, Matthew Law wrote:

> On Tue, February 16, 2010 8:10 pm, Vern Burke wrote:
>> When all is said and done, most of it will be PHP.
>>
>> I'm developing in kind of a mishmash right now; I guess you could say
>> I'm freestyling :D.
>>
>> It's actually working well enough now that I have it in production on my
>> own XCP cloud (live customers and all!). Most of the work left on the
>> working sections is to push static settings off to a common config file.
>>
>> It's so nice to be able to put my feet up in front of the TV and handle
>> the cloud from my BlackBerry :D.
>
> Sounds cool, Vern.
>
> Can you tell us a bit more? How many domUs on how many dom0s? What kind
> of storage, and what problems, if any, did you have to overcome?
>
> Are the domUs bridged or routed, and are their IPs statically configured
> or assigned by DHCP?
>
> Enquiring minds wanna know! ;-)
>
> We are quite late to the Xen party and have very little invested in our
> own tools. This, and the fact that XCP looks so cool, has got us thinking
> of creating our own XCP frontend in Ruby together with Sinatra, or maybe
> Rails 3 when it comes out soon.
>
> The only worry is the cost of capable and redundant shared storage. We're
> currently looking at 30 or more domUs across a few dom0s, and our small
> company couldn't stretch to a NetApp cluster or anything like it. Nor
> could we afford to move up from gbit switching...
>
> Thanks,
>
> Matt.
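On the frontend idea above: whatever the language, a tool like that ends up wrapping the same XenAPI operations the xe CLI already exposes, so the CLI is a handy way to prototype what the web frontend will need. A rough sketch of the obvious calls (the UUID is a placeholder, not a real value):

  # The fields a dashboard page would show
  xe vm-list params=uuid,name-label,power-state,resident-on

  # Lifecycle operations a frontend would expose as buttons
  xe vm-start uuid=<vm-uuid>
  xe vm-shutdown uuid=<vm-uuid>
  xe vm-reboot uuid=<vm-uuid>

A Ruby (or PHP) frontend would issue the equivalent XML-RPC calls against the pool master rather than shelling out, but the object model is the same.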



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users