WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-users

[Xen-users] Re: [xen-discuss] I/O performance concerns. (Xen 3.3.2)

To: Dot Yet <dot.yet@xxxxxxxxx>
Subject: [Xen-users] Re: [xen-discuss] I/O performance concerns. (Xen 3.3.2)
From: Gary Pennington <Gary.Pennington@xxxxxxx>
Date: Mon, 2 Nov 2009 17:05:18 +0000
Cc: xen-discuss@xxxxxxxxxxxxxxx, xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 03 Nov 2009 08:41:27 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <93bc4af40910301116j525efb9k68feab9133c6986b@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: Solaris Kernel Development; Sun Microsystems, Inc.
References: <93bc4af40910292155n5170035fsf7f939bf5a609765@xxxxxxxxxxxxxx> <20091030113749.GA27725@xxxxxxxxxxxxxxxxx> <4AEADAD9.2010504@xxxxxxx> <4AEB181B.9020308@xxxxxxx> <93bc4af40910301116j525efb9k68feab9133c6986b@xxxxxxxxxxxxxx>
Reply-to: Gary Pennington <gary.pennington@xxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6i
Hi,

You would be better off addressing your questions about SSDs or alternative
disks to the zfs-discuss mailing list, since they are generic ZFS questions
and not specific to xVM or any other type of virtualisation.

My opinion, though...

An SSD for the ZIL should provide some performance improvement.
However, if you are continually streaming data onto the system, then you
will need to size your SSD accordingly so that it has the capacity to
keep accepting write requests. Otherwise you'll just hit the same
problem that you have with main memory when you run out of cache space.
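
For what it's worth, attaching an SSD as a separate log device (and,
optionally, another as a cache device) is just a zpool operation. A rough
sketch, assuming a pool called "tank" and placeholder device names:

    # add a single SSD as a separate log (ZIL) device
    zpool add tank log c2t0d0

    # or mirror the log across two SSDs
    zpool add tank log mirror c2t0d0 c2t1d0

    # add a further SSD as an L2ARC cache device
    zpool add tank cache c2t2d0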

I can't advise you on the best size, since it will be entirely dependent
on the quantity of data that you are writing. I also can't advise you on
which SSD to buy. Maybe someone on the zfs-discuss list can help there.
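
On the question of whether your database generates synchronous writes at
all: repeating the dd test you describe below from inside the domU, while
watching the ZIL from dom0, should tell you. Roughly something like this,
assuming GNU dd in the Linux domU and Richard Elling's zilstat script in
dom0 (the exact zilstat arguments may vary with the version you have):

    # in the domU: force synchronous writes
    dd if=/dev/zero of=/var/tmp/syncfile bs=128k count=4096 oflag=sync

    # in dom0: watch ZIL activity while the test runs
    ./zilstat.ksh 1 30

    # in dom0: watch per-device throughput to see whether the slog is used
    iostat -xn 1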

Gary

On Fri, Oct 30, 2009 at 02:16:41PM -0400, Dot Yet wrote:
> Indeed... you are correct, learning something every day.
> 
> So, how do I get the most out of it? (Or is this the most I can get out of
> it?)
> 
> In reality, the prime task is to run two Linux PV VMs, each configured
> similarly to the one I am testing with, and use them as the database servers.
> They do not have to be "real" enterprise grade, but they should still be
> able to keep up if I am pumping a few million rows into them every day. I did
> the initial testing with DB2 9.7 on Linux, and what I noticed was that after a
> few hours of activity, I tend to have a rather sluggish system. If I stop
> the connecting applications, the disks still keep showing heavy
> activity for up to 10 to 15 minutes. That's what led me to believe something
> was wrong in the way I was working on it.
> 
> This is just a lab setup which I use for my own purposes, but I would want
> it to be a little bit faster than it is right now. Also, I would want the
> data stored in it to be safe from a durability point of view.
> 
> What would you suggest to get past this bottleneck?
> 
> If required, on the hardware side, I am thinking along the lines of adding
> some more similar-sized disks to the primary pool, plus three SSDs: one for
> the ZFS cache and the other two for a mirrored ZIL. Or one SSD for the ZIL
> and two 10Krpm Raptors for cache. However, I don't really know if the
> database's I/O will cause any synchronous writes to the ZFS pool. Running dd
> with oflag=sync under dom0 does show ZIL usage (using the zilstat script),
> but running the same dd under domU does not show any synchronous usage on
> ZFS, which makes me think that an SSD for the ZIL may not be of much help.
> 
> Plus, I am also thinking about moving the disks from the AOC-SAT2-MV8 PCI-X
> card to an AOC-USAS-L8i LSI 1068E UIO SAS/SATA PCIe card, just in case PCI-X
> is adding to the slowness.
> 
> From what I have read on the net so far, the "real" enterprise-grade SSDs
> are pricey, mostly because of the capacitor/backup feature or SLC technology.
> 
> Can someone also guide me on which SSD would be reasonable enough for this
> task, and if I really should opt for one?
> 
> Thanks a lot, do appreciate your help.
> 
> Regards,
> dot.yet
> 
> 
> On Fri, Oct 30, 2009 at 12:45 PM, Stu Maybee <Stuart.Maybee@xxxxxxx> wrote:
> 
> >
> > Since it's a Linux domU, I would expect to see caching in the domU; IIRC,
> > ext3 (or whatever) is a heavily caching fs.
> >
> > Stu
> >
> > Mark Johnson wrote:
> >
> >>
> >>
> >> Gary Pennington wrote:
> >>
> >>> Some answers below...
> >>>
> >>>>  The above command returns in under 2.5 seconds; however, iostat on domU
> >>>> AND zpool iostat on dom0 both continue to show write I/O activity for up
> >>>> to 30 to 40 seconds:
> >>>>
> >>>
> >> If the domU is showing IO activity, then it must be caching too.
> >>
> >>
> >>> When you write the data to the zvol, most of the writes are cached
> >>> in memory and the writes to disk are performed later, when ZFS writes
> >>> the memory cache to disk. It seems that your logging device is capable
> >>> of writing about 40MB/s (see your figure below), so it takes at least
> >>> 800/20 seconds to write this to disk.
> >>>
> >>
> >> You can do an "iostat -x 1" and see the stats for the
> >> slog device.
> >>
> >>
> >>
> >>
> >> MRJ
> >>
> >> _______________________________________________
> >> xen-discuss mailing list
> >> xen-discuss@xxxxxxxxxxxxxxx
> >>
> >
> >

> _______________________________________________
> xen-discuss mailing list
> xen-discuss@xxxxxxxxxxxxxxx


-- 
Gary Pennington
Solaris Core OS
Sun Microsystems
Gary.Pennington@xxxxxxx

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
