xen-users

Re: [Xen-users] lots of cycles in i/o wait state

Subject: Re: [Xen-users] lots of cycles in i/o wait state
From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
Date: Mon, 07 Jun 2010 06:58:03 -0400
Cc: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 07 Jun 2010 04:00:45 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100607080830.GY17817@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4C0AD6E7.1000809@xxxxxxxxxxxxxxxx> <20100607080830.GY17817@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.1.9) Gecko/20100317 SeaMonkey/2.0.4
Pasi Kärkkäinen wrote:
> On Sat, Jun 05, 2010 at 06:59:51PM -0400, Miles Fidelman wrote:
>> Hi Folks,
>>
>> I've been doing some experimenting to see how far I can push some old
>> hardware into a virtualized environment - partially to see how much use
>> I can get out of the hardware, and partially to learn more about the
>> behavior of, and interactions between, software RAID, LVM, DRBD, and Xen.
>>
> Is your disk/partition alignment properly set up? Doing it wrong could
> cause bad performance. It's easy to mess it up with VMs.
can you say a little more about what you mean by "properly set up" vs. not properly set up?
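(Aside, for anyone else reading this in the archives: one quick way to sanity-check partition alignment from dom0 is to look at where each partition starts, or to ask parted directly. /dev/sdX and the partition number below are placeholders for the actual disk and partition.)

   # list partitions with start/end positions in sectors; starts that are
   # a multiple of 8 sectors (4 KiB) are generally safe on modern disks
   fdisk -lu /dev/sdX

   # recent versions of parted can check alignment for you
   parted /dev/sdX align-check optimal 1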
>> As I've started experimenting with adding additional domUs, in various
>> configurations, I've found that my mail server can get into a state
>> where it's spending almost all of its cycles in an i/o wait state (95%
>> and higher as reported by top).  This is particularly noticeable when I
>> run a backup job (essentially a large tar job that reads from the root
>> volume and writes to the backup volume).  The domU grinds to a halt.
> Is that iowait measure in the guest, or in dom0?
iowait is high ONLY in the guest

when I run stress tests, iowait (in the guest) jumps considerably when:
- running a benchmark (bonnie++) in dom0, on either host (to be expected, given that dom0 gets priority)
- running bonnie++ in the guest with iowait problems

running bonnie++ in another guest does not impact the iowaits
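(For anyone wanting to reproduce this kind of stress test: a typical bonnie++ run looks roughly like the line below. The directory, size, and user are placeholders, and -s should normally be at least twice the machine's RAM so results aren't served from cache.)

   # sequential/random read-write benchmark: 4 GB of data under /mnt/test, run as root
   bonnie++ -d /mnt/test -s 4096 -u root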
Again run "iostat 1" in both the domU and dom0, and compare the results.
Also run "xm top" in dom0 to monitor the overall CPU usage.
very little CPU load
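(To spell out the comparison: run the same monitors side by side in dom0 and in the affected domU and watch the iowait-related columns.)

   # in dom0 and in the domU, in separate terminals
   iostat -x 1   # %iowait plus per-device await and %util
   vmstat 1      # the "wa" column is CPU time spent waiting on I/O

   # in dom0 only
   xm top        # per-domain CPU usage and VBD (block I/O) counters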

iostat (and vmstat) are what really helped me track things down. After doing a lot of googling on "performance tuning" and "iowait" I came across the suggestion to add "noatime" to my mount options --- that brought my iowait times way down, and sped up performance.
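(Concretely, "adding noatime to the mount options" means editing the relevant lines in the guest's /etc/fstab and remounting. The device, filesystem type, and mount point below are just examples.)

   # /etc/fstab - example root filesystem line for a domU
   /dev/xvda1   /   ext3   defaults,noatime   0   1

   # apply to the running system without a reboot
   mount -o remount,noatime /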

you learn something new every day :-)

Thanks again, to all,

Miles Fidelman




--
In theory, there is no difference between theory and practice.
In<fnord>  practice, there is.   .... Yogi Berra



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
