The above command returns in under 2.5 seconds; however, both iostat on the domU and zpool iostat on the dom0 continue to show write I/O activity for up to 30 to 40 seconds:
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00   24.81    0.00   75.19

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
xvda              0.00  4989.00    0.00 1133.00     0.00    23.06    41.68   146.08  107.49   0.88 100.00
xvda1             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
xvda2             0.00  4989.00    0.00 1133.00     0.00    23.06    41.68   146.08  107.49   0.88 100.00
dm-0              0.00     0.00    0.00 6113.00     0.00    23.88     8.00   759.28  100.24   0.16 100.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00   24.94    0.00   75.06

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
xvda              0.00  4989.00    0.00 1153.00     0.00    23.91    42.47   146.32  146.37   0.87 100.40
xvda1             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
xvda2             0.00  4989.00    0.00 1153.00     0.00    23.91    42.47   146.32  146.37   0.87 100.40
dm-0              0.00     0.00    0.00 6143.00     0.00    24.00     8.00   751.75  143.43   0.16 100.40
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dom0 zpool iostat:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vmdisk       188G  1.18T    287  2.57K  2.24M  19.7M
  mirror    62.7G   401G    142    890  1.12M  6.65M
    c8t2d0      -      -     66    352   530K  6.49M
    c9t0d0      -      -     76    302   612K  6.65M
  mirror    62.7G   401G     83    856   670K  6.39M
    c8t3d0      -      -     43    307   345K  6.40M
    c9t1d0      -      -     40    293   325K  6.40M
  mirror    62.7G   401G     60    886   485K  6.68M
    c8t4d0      -      -     50    373   402K  6.68M
    c9t4d0      -      -     10    307  82.9K  6.68M
  c8t5d0        0   464G      0      0      0      0
cache           -      -      -      -      -      -
  c9t5d0    77.1G   389G    472     38  3.86M  3.50M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vmdisk       188G  1.18T     75  3.52K   594K  27.1M
  mirror    62.7G   401G     30  1.16K   239K  8.89M
    c8t2d0      -      -     10    464  86.6K  8.89M
    c9t0d0      -      -     19    350   209K  8.89M
  mirror    62.7G   401G      0  1.18K      0  9.10M
    c8t3d0      -      -      0    510      0  9.10M
    c9t1d0      -      -      0    385      0  9.10M
  mirror    62.7G   401G     45  1.18K   355K  9.11M
    c8t4d0      -      -     37    469   354K  9.11M
    c9t4d0      -      -      7    391  57.7K  9.11M
  c8t5d0        0   464G      0      0      0      0
cache           -      -      -      -      -      -
  c9t5d0    77.1G   389G    514    157  4.14M  17.4M
----------  -----  -----  -----  -----  -----  -----
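For anyone wanting to reproduce the sampling, invocations along these lines produce output in the formats shown above (the intervals are arbitrary values I've picked here, not necessarily the ones I used):

# On the domU: extended per-device stats in MB/s, one-second samples
iostat -xm 1

# On the dom0: per-vdev stats for the pool, five-second samples
zpool iostat -v vmdisk 5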
Can you tell me why this happens? Is this behavior coming from Linux, Xen, or ZFS? I do notice that iostat reports an iowait of about 25%, but I don't know which of them is causing this bottleneck.
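In case it helps narrow this down, I could re-run the test so that the flush to disk is included in dd's own timing, or bypass the domU page cache entirely (the output path below is just a placeholder for my actual test file):

# Time the write including a final flush of the file data
time dd if=/dev/zero of=/mnt/test/ddtest.bin bs=1M count=800 conv=fdatasync

# Same write, but bypassing the domU page cache with direct I/O
time dd if=/dev/zero of=/mnt/test/ddtest.bin bs=1M count=800 oflag=direct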
If I write an 8 GB file instead of an 800 MB one, the performance is very poor (40 MB/s or so), and again there is a long period of iowait after the dd command returns.
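If it is relevant, this is roughly how I would check the domU's writeback settings and how much dirty data is queued while the test runs (these are the standard Linux sysctls and /proc/meminfo fields; the one-second watch interval is arbitrary):

# Writeback thresholds on the domU
sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_expire_centisecs vm.dirty_writeback_centisecs

# Dirty/writeback page counts during and after the dd run
watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'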
Any help would be really appreciated.
Kind regards,
dot.yet