Re: [Xen-users] Xen and I/O Intensive Loads
On Wed, Aug 26, 2009 at 12:07:55PM -0600, Nick Couchman wrote:
>
> Doesn't really seem to make a difference which way I do it...I still see
> pretty intense disk I/O.
>
> Here is some sample output from iostat in the domU:
>
> Device:  rrqm/s  wrqm/s      r/s    w/s    rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
> xvdb      12.20    0.00  1217.40  26.20  9197.60  530.80     15.65     29.66  23.47   0.80 100.00
> xvdb      18.40    0.00  1121.20  19.60  8737.60  691.50     16.53     32.97  29.13   0.88 100.00
> xvdb      27.80    0.00  1241.40  29.20  8158.40  377.90     13.44     42.59  33.73   0.79 100.00
> xvdb      31.60    0.00  1256.60  35.00  9426.40  424.00     15.25     42.06  32.44   0.77 100.00
> xvdb      57.68    0.00  1250.50  17.76  8588.42  352.99     14.10     51.36  40.60   0.79  99.80
>
> The avgqu-sz is anywhere from 11 to 75, the await is anywhere from 20 to 50,
> and %util is always around 100.
>
Well.. it seems your SAN LUN is the problem. Have you checked the load
reported by the FC storage array?
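
A quick host-side cross-check is to watch the same LUN from dom0 and compare
its latency with what the domU reports; something along these lines (the
device name and interval are only placeholders, adjust for your setup):

    # In dom0: extended stats for the LUN backing the domU's xvdb
    # (replace /dev/sdX with the actual SCSI or dm-multipath device)
    iostat -xk 5 /dev/sdX

If dom0 sees the same await/avgqu-sz numbers, the queueing is happening below
the Xen layer rather than in the domU.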
Or the problem could be in your FC HBA. Have you verified that the FC link is
running at full speed?
Are the FC switches OK?
Do you have an up-to-date HBA driver in dom0? Are the HBA/switch/storage
firmwares up to date?
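
For the link-speed and driver questions, a rough sketch of the dom0-side
checks (the host numbers, the qla2xxx module name, and the sysstat/sysfsutils
tools are assumptions; substitute whatever your HBA actually uses):

    # Negotiated link speed and port state for each FC HBA port
    grep . /sys/class/fc_host/host*/speed
    grep . /sys/class/fc_host/host*/port_state

    # HBA driver version (qla2xxx is only an example; use your HBA's module)
    modinfo -F version qla2xxx

    # Dump all fc_host attributes; firmware/option ROM versions usually show up here
    systool -c fc_host -v

If the link has negotiated a lower speed, or the port state is anything other
than Online, that alone can explain the queueing you are seeing.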
--
Pasi
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users