Re[2]: [Xen-users] poor IO performance
Hello, Mats.
You wrote on 30 May 2007 at 16:22:54:
>
>> -----Original Message-----
>> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
>> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
>> Vitaliy Okulov
>> Sent: 30 May 2007 13:11
>> To: xen-users@xxxxxxxxxxxxxxxxxxx
>> Subject: [Xen-users] poor IO performance
>>
>> Hello, xen-users.
>>
>> I just tested domU IO performance:
>>
>> sda1 is configured via phy:/dev/sdb1. Benchmarking with dbench
>> (dbench -D /usr/src -s 10 -t 120) gives 102 MB/s.
>>
>> The native system (mount /dev/sdb1 /mnt && dbench -D /mnt -s 10 -t 120) gives
>> 140 MB/s.
>>
>> How can I speed up dbench?
> Probably not that easy. If you have multiple disk-controllers (that
> is, multiple devices according to, for example, "lspci"), you can give
> one of them to the guest, which should give the same performance as
> native (assuming nothing else interferes with the DomU - if two domains
> share the same CPU, for example, it would of course not match native
> performance).
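> Roughly, as a sketch (the PCI ID below is made up - use whatever lspci
> shows for your second controller): hide the controller from Dom0 with
> pciback, then pass it through in the domU config:
>
>   # Dom0 kernel command line (pciback compiled in):
>   pciback.hide=(0000:02:01.0)
>
>   # domU config file:
>   pci = [ '0000:02:01.0' ]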
I tested the native Linux kernel 2.6.18-4 and got 140 MB/s.
I tested Xen 2.6.18-4-xen dom0 and got 127 MB/s.
I tested one Xen 2.6.18-4-xen domU and got 102 MB/s.
I think that's very bad; according to the official Xen benchmarks, IO should be
close to native.
Same controller (Adaptec 2130SLP), same SCSI HDD.
Also, when I back sda1 in the domU with a file, I get 140 MB/s.
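For reference, the two domU disk configurations I compared look roughly like
this (the image path is just an example):

  disk = [ 'phy:/dev/sdb1,sda1,w' ]           # phy: backend  - 102 MB/s
  disk = [ 'file:/var/xen/domu.img,sda1,w' ]  # file: backend - 140 MB/s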
> The disk-IO request goes through Dom0 even if the device is "phy:",
> as the device that is connected to "/dev/sdb1" is on a
> disk-controller owned by Dom0, so there will be some latency
> overhead, and unless the "queue" is of infinite length, that latency will
> affect the transfer rate.
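> As a rough illustration (all numbers invented just to show the effect):
> with 64 KB requests and a queue depth of 1, a disk that completes a request
> in 0.4 ms sustains about 160 MB/s; add 0.1 ms of round trip through Dom0 per
> request and you are down to 64 KB / 0.5 ms, i.e. about 128 MB/s, even though
> the disk itself got no slower.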
> You have to understand that any form of virtualization adds
> overhead - a bit like how raw disk-write performance is (or should
> be) higher than writing to the disk through a file-system - but I
> don't think anyone would prefer to refer to their e-mails or
> documents by saying "please give me blocks 12313287, 12241213 and
> 12433823" instead of "/usr/doc/mytext.doc" - so the overhead is
> "accepted" because it makes the system more usable. In the
> virtualization case, there is usually a REASON for wishing to use
> virtualization, typically that the system is underutilized, meaning
> its CPU and IO capacity isn't used to full potential. Merging
> two systems that each run at about 20-30% utilization would still leave
> some "spare" for expansion as well as for the virtualization overhead.
> --
> Mats
>>
>> --
>> Best regards,
>> Vitaliy mailto:vitaliy.okulov@xxxxxxxxx
--
Best regards,
Vitaliy mailto:vitaliy.okulov@xxxxxxxxx
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users