In my opinion, invest in a big drive array. A box with 15 inexpensive SATA
disks in RAID 5 or RAID 10 should give you plenty of disk bandwidth for a few
domUs. SSDs are nice, and perhaps essential if your application is really
sensitive to latency, but they are expensive and not necessary just to get more
disk bandwidth. (The appliances we use run around $5k full of disks, and can
give close to 200 MB/s read/write throughput.)
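(If you stay on ZFS for the pool, the RAID 10 equivalent is a striped set of
mirrors; the pool and device names below are only placeholders:

    # striped mirrors ("RAID 10"): each "mirror a b" pair is one vdev
    zpool create datapool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0

Each additional mirror pair adds spindles to stripe across, which is where the
aggregate bandwidth comes from.)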
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Dot Yet
Sent: Friday, October 30, 2009 2:17 PM
To: xen-discuss@xxxxxxxxxxxxxxx; xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Re: [xen-discuss] I/O performance concerns. (Xen 3.3.2)
Indeed... you are correct. Learning something every day.
So, how do I get the most out of it? (Or is this the most I can get out of it?)
In reality, the prime task is to run two Linux PV VMs, each configured
similarly to the one I am testing with, and use them as database servers. They
do not have to be "real" enterprise grade, but they should still be able to
keep up if I am pumping a few million rows into them every day. I did the
initial testing with DB2 9.7 for Linux, and what I noticed was that after a few
hours of activity I tend to have a rather sluggish system. If I stop the
connecting applications, the disks still keep showing heavy activity for up to
10 to 15 minutes. That's what led me to believe something was wrong with the
way I was doing it.
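For reference, I am watching the activity with roughly the following (the pool
name is just a placeholder for mine):

    # in dom0: per-vdev view of the pool, refreshed every 5 seconds
    zpool iostat -v tank 5

    # inside the domU: per-device utilisation
    iostat -x 5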
This is just a lab setup which I use for my own interest, but I would want it
to be a little bit faster than it is right now. I would also want the data
stored on it to be safe from a durability point of view.
What would you suggest to get past this bottleneck?
If required, on the hardware side I am thinking along the lines of adding some
more similarly sized disks to the primary pool, plus three SSDs: one for the
ZFS cache (L2ARC) and the other two for a mirrored ZIL. Or one SSD for the ZIL
and two 10K RPM Raptors for the cache. Although, I don't really know whether
the database's I/O will cause any synchronous writes to the ZFS pool. Running
dd with oflag=sync under dom0 does show ZIL usage (I used the zilstat script),
but running the same dd under the domU does not show any synchronous activity
on ZFS, which makes me think that an SSD for the ZIL may not be of much help.
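For the record, the sync-write test was along these lines (file paths and pool
name are placeholders, and the zilstat invocation depends on where the script
lives):

    # in dom0: force synchronous writes and watch the ZIL
    dd if=/dev/zero of=/tank/syncfile bs=8k count=10000 oflag=sync
    ./zilstat.ksh 1

    # same dd repeated inside the domU against its own filesystem
    dd if=/dev/zero of=/mnt/test/syncfile bs=8k count=10000 oflag=sync

And if the SSDs do turn out to help, attaching them would be something like:

    zpool add tank log mirror c2t0d0 c2t1d0    # mirrored slog (ZIL)
    zpool add tank cache c2t2d0                # L2ARC cache device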
Plus, I am also thinking about moving the disks from the AOC-SAT2-MV8 PCI-X
card to an AOC-USAS-L8i LSI 1068E UIO SAS/SATA PCIe card, just in case PCI-X is
adding to the slowness.
From what I have read on the net so far, the "real" enterprise-grade SSDs are
pricey, mostly because of the capacitor backup feature or SLC technology.
Can someone also guide me on which SSD would be reasonable for this task, and
whether I really should opt for one?
Thanks a lot, I do appreciate your help.
Regards,
dot.yet
On Fri, Oct 30, 2009 at 12:45 PM, Stu Maybee <Stuart.Maybee@xxxxxxx> wrote:
Since it's a Linux domU I would expect to see caching in the domU; IIRC ext3
(or whatever you're using) is a heavily caching filesystem.
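One quick way to confirm that is to force the domU to flush its page cache
right after the load stops and watch how long the flush takes (standard Linux
knobs; exact behaviour varies a bit by kernel):

    # inside the domU
    sync                                  # push out the dirty pages now
    echo 3 > /proc/sys/vm/drop_caches     # then drop the clean caches

    # how much dirty data the domU is allowed to hold back
    sysctl vm.dirty_ratio vm.dirty_background_ratio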
Stu
Mark Johnson wrote:
Gary Pennington wrote:
Some answers below...
The above command returns in under 2.5 seconds; however, iostat on the domU
AND zpool iostat on dom0 both continue to show write I/O activity for up to
30 to 40 seconds:
If the domU is showing IO activity, then it must be caching too.
When you write the data to the zvol, most of the writes are cached in memory
and the writes to disk are performed later, when ZFS flushes the memory cache
to disk. It seems that your logging device is capable of writing about 40 MB/s
(see your figure below), so it takes at least 800/40 = 20 seconds to write
those 800 MB back to disk.
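A rough way to see how much ZFS is holding in memory at any point is the ARC
kstats in dom0:

    kstat -m zfs -n arcstats | grep size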
You can do an 'iostat -x 1' and watch the stats for the slog device.
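e.g. (pool name is a placeholder; 'zpool status' tells you which device is the
slog, then watch its kw/s and %b columns in the iostat output):

    zpool status tank
    iostat -xn 1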
MRJ