On Sun, 23 Mar 2008, Andy Smith wrote:
> server).
>
> If you want to put 100 general purpose servers on 1 piece of
> hardware I suggest you look into a lot more than 4 disks, and
> probably look at 10kRPM 2.5" SAS as well, as opposed to what I am
> guessing are commodity 7200RPM 3.5" SATA disks.
That's an interesting point. 1U cases for 2.5" drives are not common, but
Intel's SR1550 line of servers (I have one and have tested it, but have
not put it into production yet) will fit EIGHT 2.5" drives. The active
SAS backplane uses the megaraid_sas driver, which has been supported in
RHEL since the late 4.x releases (4.4?).
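If you want to check whether your kernel already has that driver before
committing to the hardware, a quick sanity check (just my habit, not
something from the SR1550 docs):

  modinfo megaraid_sas | head -3   # is the module available?
  lsmod | grep megaraid_sas        # is it actually loaded?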
> Check the iowait % in your domains and dom0 - if it is more than a
> few percent then it's IO you are needing i.e. more disks. I always
> run out of IO before CPU or RAM too, it's pretty common.
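(For a quick live view of that, vmstat prints iowait in the "wa"
column, and iostat -x from the sysstat package breaks it down per
disk:

  vmstat 5      # sample every 5 seconds, watch the "wa" column
  iostat -x 5   # per-device view, if sysstat is installed

Nothing Xen-specific about either, but they're the fastest first look.)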
Another good suggestion. If you want %cpu stats from boot, they are in
/proc/stat (which is almost certainly where top gets them, and then
shows you the relative changes). The counters are in jiffies, usually
1/100th of a second, though the tick rate can vary by kernel and
architecture.
[root@copper /home/virtuals/html/bmd]# head -1 /proc/stat
cpu 24113961 21788047 10244198 240697056 28184021 178598 92713 0
user nice system idle iowait irq softirq steal
[root@copper /usr/src]# uptime
11:52:25 up 37 days, 15:39, 9 users, load average: 1.19, 1.14, 1.12
OK, let's convert the approximate uptime to jiffies:
(37 * 24 + 15) * 3600 * 100 = 325080000 jiffies
240697056 / 325080000 * 100 = 74.0% idle
28184021 / 325080000 * 100 = 8.7% iowait
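If you'd rather not do the arithmetic by hand, an awk one-liner along
these lines should work (a sketch; it sums all the fields on the cpu
line rather than estimating uptime * HZ, which also keeps it honest on
SMP boxes where the aggregate cpu line counts jiffies per CPU):

  awk '/^cpu / { t = 0; for (i = 2; i <= NF; i++) t += $i;
       printf "%.1f%% idle, %.1f%% iowait\n", $5*100/t, $6*100/t
  }' /proc/stat

Same numbers as above, within rounding, and no need to know the tick
rate.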
Interesting. I bet it's the morning indexing job that kicks up the
average I/O wait. This machine rebuilds a swish index of over 450,000
email conversations (our tech support logs) every morning.

Hmm, and the load average is up due to a looping email message.
Gotta go! :)
-Tom