xen-users

RE: [Xen-users] My future plan

Thanks Rob for the tip on the NICs! This will come in handy. My main area of concern was using Ethernet/Software iSCSI for my setup, but all seems ok!
 
I'll remember to ask Broadberry about the new backplane and RAID card for the storage server.
 
Do you think I'll be alright using just SATA disks for my setup? I guess I could always change the disks if it became a problem...


From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Wed 09/06/2010 09:36
To: Jonathan Tripathy
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

 

There is nothing wrong with your plan; just make sure you get the SAS 6G backplane and card for the storage. The cost difference should be little or nothing, and you don't want to be bandwidth-constrained later by the RAID card if you choose to upgrade to 10Gbit for storage.

 

I could not see the dual-port Pro 1000 ET copper card among Dell's options, so I was just pricing those separately at about £100 each: http://www.google.co.uk/products/catalog?q=E1G42ET+Intel&cid=12126864948002960902&ei=b1EPTPKmFZ622ASGxYnUBA&sa=title&ved=0CAcQ8wIwADgA#p

 

The ET ones are the latest with multi-queue support so are the ones to get IMHO.

 

Rob

 

 

From: Jonathan Tripathy [mailto:jonnyt@xxxxxxxxxxx]
Sent: 09 June 2010 09:20
To: Robert Dunkley; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] My future plan

 

Hi Rob,

 

Thanks for the link. I think very highly of Supermicro gear as well as their staff. However, since we wish to build up the solution slowly, we can really only afford to start with the R210s. Once the initial 3 or 4 R210s generate some revenue, we could look into some beefier servers (it would be much cheaper in the long run, as we could run more guests per node).

 

Please let me know if you think my plan is flawed from the outset.

 

When you were spec'ing the R210, which NIC were you looking at? Just the two on-board ones?

 

Thanks

 

Jonathan

 


From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Wed 09/06/2010 09:13
To: Jonathan Tripathy; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

 

 

I was going to say buy duals, but then I saw the price of the R210s; tough call.

 

Good example:

http://www.supermicro.com/products/system/1U/6016/SYS-6016T-URF4_.cfm?UIO=N

 

18 DIMM slots and 4 of the latest Intel GbE ports (supporting the multi-queue feature used in Xen 4.0). Just add some quad- or hex-core Xeons and as much RAM as you need; no need for additional NICs. Depending on internal policy, on-site rapid-response support may be less of an issue when you have a redundant-node architecture.

 

I have to admit the R210s are a good price though, and it's a tough choice:

R210 with 2.40GHz QC, 8GB RAM and a Pro 1000 ET dual-port NIC – about £600

Supermicro above with dual 2.26GHz QC and 24GB RAM – about £1700

 

The dual option gives you full-screen iKVM and redundant PSUs, along with 12 spare memory slots (6 used), as opposed to no spare slots on the R210. A lot of this depends on your RAM requirements and spare rack space, I suppose. It would be interesting to hear the opinions of others.

 

 

Rob

 

 

 

From: Jonathan Tripathy [mailto:jonnyt@xxxxxxxxxxx]
Sent: 08 June 2010 16:20
To: Robert Dunkley; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] My future plan

 

Hi Rob,

 

Since this is just an idea at this stage, and we are just starting out, we want to build up our rack over time. The Dell R210 is the best we can afford at the minute. Maybe, after the first 4 or 5 R210s, I could look into getting servers with dual CPUs in them so more guests can run. Initially, each server will handle its own storage using RAID1.

 

The Dell R210s do come with dual on-board NICs; however, I need one of them for the internet connection, unless of course I used VLANs and just used the on-board NICs?
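If I did go the VLAN route, something like the following is what I have in mind on each node. This is only a rough sketch using CentOS-style ifcfg files; the VLAN IDs, interface names and addresses are made up for illustration, and the switch port would need to carry both VLANs tagged:

  # /etc/sysconfig/network-scripts/ifcfg-eth0.10  (hypothetical "internet" VLAN 10)
  DEVICE=eth0.10
  VLAN=yes
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=203.0.113.11
  NETMASK=255.255.255.0

  # /etc/sysconfig/network-scripts/ifcfg-eth0.20  (hypothetical "storage" VLAN 20)
  DEVICE=eth0.20
  VLAN=yes
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=10.0.20.11
  NETMASK=255.255.255.0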

 

I'm very confused about the RAID cards. I've never really worked with these before, so all advice is appreciated. With the total number of VMs running at around 100, do you think I'll notice much of a difference between SATA and SAS?

 

Thanks

 


From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Tue 08/06/2010 15:56
To: Jonathan Tripathy; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

 

The NAS is using good components; make sure you get the IPMI option if this is going in a rack more than 5 minutes away from where you work. Ask Broadberry if they can supply the newer SAS 6G expander version of that chassis and the newer 9260-4i 6G RAID card (I'm pretty sure it's a Supermicro-approved card for that chassis); with 16 drives, 6G SAS may remove a potential bottleneck at the expander. Also, consider 15K SAS for your high-IO database and mail servers; a mix of 15K SAS and 7K SATA arrays might be appropriate.

 

Anything but LSI cards often has issues with the LSI-based expanders in those Supermicro chassis. Areca cards do work with the SAS1 expander as long as SAF-TE is disabled, but considering the expander, I think LSI is the only advisable card brand.

 

Any reason you aren't considering 1U servers with integrated Intel NICs for the nodes? Often the best bang per buck for nodes is a 1U dual Xeon E55XX quad-core or one of the new Opteron octo/dodeca-core systems.

 

Rob

 

 

 

From: Jonathan Tripathy [mailto:jonnyt@xxxxxxxxxxx]
Sent: 08 June 2010 15:38
To: Robert Dunkley; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] My future plan

 

Hi Rob,

 

Do you have any links or anything for cards that you suggest? I'm just a start-up, so low cost is very much a good thing here :) But then again, so is having my cake and eating it as well!!

The RAID card is what came as standard with this server that I was looking at: http://www.broadberry.co.uk/iscsi-san-nas-storage-servers/cyberstore-316s-wss

 

That's a fantastic idea about the PXE booting! The only thing, though, is that Dell supply their servers with a minimum of a single HDD as standard, so there would be no cost saving there. Also, all the servers would have to be the same.

 

My idea is that if this were to work out properly, I would get servers better than the R210, as these are limited to a maximum of 16GB of RAM.

 

Thanks

 

Jonathan

 


From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Tue 08/06/2010 15:36
To: Jonathan Tripathy; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] My future plan

 

Hi Jonathan,

 

 

It might be worth considering a different RAID card. Even with simple RAID 1, I did not get proper RAID 1 random-read interleaving performance with an LSI 1068-based controller (assuming the 1078 is very similar), whereas an IOP-based Areca card behaved properly (only a 30% improvement over a single drive with the LSI, but 80% better with the Areca, in simple Bonnie testing). I was using CentOS 5.2 at the time (integrated drivers).
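For anyone wanting to repeat that sort of comparison, a simple bonnie++ run along these lines is all I mean; the mount point is a placeholder, and the size should be roughly twice the machine's RAM so the page cache doesn't hide the disks:

  # run against the mounted array under test, as an unprivileged user
  bonnie++ -d /mnt/raid-test -s 16g -n 0 -u nobody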

 

If you are feeling brave, maybe PXE booting could work to save the need for any system drives on the nodes.
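If you did try it, the core of a PXE setup is just DHCP pointing the nodes at a TFTP server that carries the boot image; a minimal sketch of the dhcpd.conf side, with made-up addresses and the stock PXELINUX filename:

  # /etc/dhcpd.conf fragment (hypothetical subnet and TFTP server)
  subnet 10.0.30.0 netmask 255.255.255.0 {
      range 10.0.30.100 10.0.30.150;
      next-server 10.0.30.10;      # TFTP server holding the boot files
      filename "pxelinux.0";       # PXELINUX bootloader from syslinux
  }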

 

 

Rob

 

From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jonathan Tripathy
Sent: 08 June 2010 13:56
To: Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] My future plan

 

My future plan currently looks like this for my VPS hosting solution, so any feedback would be appreciated:

 

Each Node:

Dell R210 Intel X3430 Quad Core 8GB RAM

Intel PT 1Gbps server dual-port NIC using Linux bonding (see the sketch after this list)

Small pair of HDDs for OS (Probably in RAID1)

Each node will run about 10 - 15 customer guests
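For the bonding, something like this is what I have in mind on each node. It is only a rough sketch with made-up interface names and addresses, and the mode would need to match the switch trunks mentioned further down (802.3ad/LACP here):

  # /etc/sysconfig/network-scripts/ifcfg-bond0  (storage-facing bond)
  DEVICE=bond0
  ONBOOT=yes
  BOOTPROTO=none
  IPADDR=10.0.20.21
  NETMASK=255.255.255.0
  BONDING_OPTS="mode=802.3ad miimon=100"
  # (on older releases the mode/miimon options may need to go in /etc/modprobe.conf instead)

  # /etc/sysconfig/network-scripts/ifcfg-eth1  (repeated for the second port of the dual-port NIC)
  DEVICE=eth1
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none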

 

 

Storage Server:

Some Intel Quad Core Chip

2GB RAM (Maybe more?)

LSI 8704EM2 RAID controller (I think this controller does 3Gbps)

Battery backup for the above RAID controller

4 X RAID10 Arrays (4 X 1.5TB disks per array, 16 disks in total)

Each RAID10 array will connect to 2 nodes (8 nodes per storage server)

Intel PT 1Gbps Quad port NIC using Linux bonding

Exposes 8 x 1.5TB iSCSI targets (each node will use one of these; see the iSCSI sketch below)

 

HP ProCurve 1800-24G switch to create 1 x 4-port trunk (for the storage server) and 8 x 2-port trunks (for the nodes)
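To make the iSCSI part concrete, roughly what I have in mind for one target/initiator pair, assuming something like IET on the storage server and open-iscsi on the nodes; the IQN, device path and addresses are made up for illustration:

  # Storage server: /etc/ietd.conf fragment (one of the eight targets)
  Target iqn.2010-06.uk.example:storage.node1
      Lun 0 Path=/dev/md1,Type=blockio

  # Node: discover the target and log in with open-iscsi
  iscsiadm -m discovery -t sendtargets -p 10.0.20.10
  iscsiadm -m node -T iqn.2010-06.uk.example:storage.node1 -p 10.0.20.10 --login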

 

What do you think? Any tips?

 

Thanks

 

The SAQ Group

Registered Office: 18 Chapel Street, Petersfield, Hampshire GU32 3DZ
SAQ is the trading name of SEMTEC Limited. Registered in England & Wales
Company Number: 06481952

 

http://www.saqnet.co.uk AS29219

SAQ Group Delivers high quality, honestly priced communication and I.T. services to UK Business.

Broadband : Domains : Email : Hosting : CoLo : Servers : Racks : Transit : Backups : Managed Networks : Remote Support.

 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users