Firstly, thanks to those who responded. I'll take the gigabit Ethernet
suggestion for sure.
The objective here is NOT to use Coraid at all, but to use the PCs already
on the LAN for storage purposes and carry on. The HPC cluster which we are
building (or aiming to build) focuses more on other resources like CPUs and
the network, and doesn't place much emphasis on RAID. So I am looking for an
off-the-shelf, no-cost, unified storage solution (sans any extra hardware,
aka Coraid), i.e. just use the vblade and aoe modules over a simple gigabit
LAN. It doesn't have to be highly sophisticated like a SAN or have multiple
RAID levels (I could use DRBD if such a need arises). A minimal sketch of
what I mean is below.
---Again, the emphasis here is not on enterprise-type scenarios with
high-availability storage and data integrity.
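
Roughly, all I expect to need per machine is something like this (assuming
/dev/sdb is the spare disk on the exporting host and eth1 is the dedicated
gigabit NIC; names are placeholders, not a tested recipe):

# on the machine exporting its disk (AoE shelf 0, slot 0):
modprobe aoe
vbladed 0 0 eth1 /dev/sdb

# on a machine consuming it:
modprobe aoe
aoe-discover
mkfs.ext3 /dev/etherd/e0.0
mkdir -p /mnt/aoe
mount /dev/etherd/e0.0 /mnt/aoe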
thanks
shriram
On 8:20 am 01/01/08 "Mike Bailey" <mike@xxxxxxxxxxxxx> wrote:
> The Coraid SR1521 allows you to use two Ethernet ports. I would suggest
> you give that a try with 2 x 1Gb NICs.
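>
> To drive both ports from a Linux host, round-robin bonding is one option;
> a rough sketch only (interface names are just examples):
>
> # bond the two storage NICs into a single round-robin link
> modprobe bonding mode=balance-rr miimon=100
> ifconfig bond0 up
> ifenslave bond0 eth1 eth2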
>
> They provide a tool called ddt to do tests with.
>
> Here are some results I got a couple of weeks ago.
>
> - Mike
>
> #
> # Xen guest on sm02 with an LVM filesystem on a 4-disk RAID5
> #
>
> root@x4:~/ddt-6# time ddt -t 8g /
> Writing to /ddt.3159 ... syncing ... done.
> sleeping 10 seconds ... done.
> Reading from /ddt.3159 ... done.
> 8192 MiB    KiB/s   CPU%
> Write       52009      7
> Read       105424     1
>
> real 4m26.415s
> user 0m0.044s
> sys 0m11.017s
>
> On Jan 1, 2008 10:57 PM, Ofek Doron [Ofek BIZ] <doron@xxxxxxxx> wrote:
> >
> > Hi shriram,
> >
> > You can find a few benchmarks that I did in the past.
> >
> > I used a 1 Gbps Ethernet card (on-board in an IBM ThinkPad T42, and an
> > Intel PCI card).
> >
> > I used regular Linux distros (Fedora, RHEL, SLES and openSUSE), and I
> > worked with the basic Coraid SR1521 (http://coraid.com/products1.html).
> >
> > My information is in Hebrew, but you don't need Hebrew to read the
> > results.
> >
> > If you have any questions you can e-mail me directly.
> >
> > - doron
> >
> > Oops, the link for the benchmarks:
> >
> > http://www.ofek.biz/WiKi/doku.php?id=%D7%91%D7%93%D7%99%D7%A7%D7%95%D7%AA_%D7%91%D7%99%D7%A6%D7%95%D7%A2%D7%99%D7%9D_%D7%A9%D7%9C_coraid_%D7%A2%D7%9D_%D7%9E%D7%A2%D7%A8%D7%9B%D7%95%D7%AA_%D7%9C%D7%99%D7%A0%D7%95%D7%A7%D7%A1
> >
> > XenoCrateS wrote:
> > Hi all,
> > Forgive me if this question has already been answered (I couldn't find
> > anything :( )...
> >
> > I am working on a Xen-based cluster where we are thinking of using AoE
> > for storage management - rather than pinning a DomU's data partition to
> > the same host (the same thing everybody does).
> >
> > Here comes my question:
> > I have read in several places that AoE is sleek and fast under certain
> > conditions, etc. But I couldn't find any real Xen-AoE combination
> > (or at least Linux AoE) benchmarks on the internet.
> >
> > I managed to stumble upon several Coraid benchmarks - but they aren't
> > useful at all, because the Coraid disks are connected via a Myrinet
> > NIC (I guess a 1/10 gigabit NIC card), and it's their implementation.
> >
> > I am looking at benchmarks for a simple 100 Mbps Ethernet LAN (or a
> > 1 Gbps Ethernet LAN) - dedicated to AoE, based on the vblade/aoe
> > kernel modules. There are no racks or special storage servers; it's
> > just a set of commodity machines in the cluster. Each machine has a
> > dedicated NIC for AoE tasks, while the other is used for cluster
> > communication / communication with the internet (a rough sketch of
> > what I mean follows the note below).
> > -----Note that these machines also perform computationally
> > intensive tasks at times.
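> >
> > For concreteness, roughly this on each machine (device, interface and
> > shelf numbers are just placeholders):
> >
> > # export the local data disk over the dedicated storage NIC
> > vbladed 1 0 eth1 /dev/sdb1
> > # on a consuming host, hand the AoE device to a DomU via the
> > # guest config, e.g.:
> > disk = [ 'phy:/dev/etherd/e1.0,xvda1,w' ]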
> >
> > -----Some things I would definitely like to know are:
> > --If a host A is busy, does it significantly affect host B's AoE data
> > fetches from host A's hard drive?
> > --What kind of bandwidth (at least a ballpark number) can I expect
> > from remote AoE-based disk access, compared with access to an in-host
> > hard disk? Even numbers from a crude test like the one below would help.
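> >
> > # purely illustrative sequential-read comparison (paths are examples):
> > # local disk:
> > dd if=/dev/sda of=/dev/null bs=1M count=1024
> > # same test against the remote AoE device:
> > dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=1024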
> >
> >
> > If such benchmarks are available anywhere, would somebody please
> > point me to the links, or to any other source I can dig them up from?
> >
> > thanks in advance.
> >
> > cheers
> > shriram
> >
> >
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users