> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Rami Rosen
> Sent: 07 June 2006 19:31
> To: M S, Rajanish
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] VTx enabled + xen 3.0 stable IO
> performance...
>
> Hi,
>
> This question is very interesting indeed; it has bothered me as well.
>
> I googled for official benchmarks from Intel/AMD on the performance
> of Xen running on VT-x or AMD SVM and could not find any.
>
> Have you looked at the following thread on xen-devel,
> "HVM network performance"?
> http://lists.xensource.com/archives/html/xen-devel/2006-05/msg00183.html
>
>
> Though it discusses network performance (rather than disk I/O),
> I think it may be relevant.
>
> You said:
>
> > Xen document says that the performance should be close-to-native.
> As I understand it, this refers to non-VT processors.
Yes, absolutely. And in virtualization terms I would call 75% of native
performance (75,000 of 100,000 IOPs) "close" in this case.
>
> It seems to me that using QEMU in HVM
> may cause slower performance than on non-VT
> processors. (On non-VT processors Xen does not use QEMU but uses
> frontend/backend virtual device drivers, which seems more efficient.)
>
> Can anybody give I/O performance results for AMD
> SVM processors (these processors also have virtualization extensions)?
I can't publish (post) any benchmark results at the moment, but I would
expect the behaviour to be almost identical to the Intel results posted
in the link above. We have a slightly better memory controller than
Intel does, which helps with the MANY memory accesses that happen as a
result of the intercepts and task-switching that go on as part of
processing IO operations. The overall control flow is near enough
identical, so it's only differences such as the memory controller, or
perhaps how the processor's memory management unit (MMU) behaves, that
may make a little difference - and I emphasize, A LITTLE difference.
The reason is that ALL hardware (for HVM) is emulated in QEMU, except
for timer and APIC accesses, which are emulated in the hypervisor
itself.
So any IO to disk or the network goes through the hypervisor to QEMU,
generally in MANY steps. For example, a hard-disk read/write operation
takes at least 6 IO intercepts, and each of those costs several hundred
clock cycles. Then there is the overhead of going through Dom0 for the
ACTUAL file access to the disk, which of course adds a little more
overhead on top of the intercepts.
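To make "at least 6 IO intercepts" concrete, here is a rough sketch of
the port-IO sequence a guest's IDE driver issues for a single PIO
sector read. This is an illustrative fragment, not Xen or QEMU source;
the port numbers are the standard primary-IDE taskfile ones. Under HVM,
every outb() below causes a VM exit that Xen has to decode and forward
to QEMU's emulated IDE controller:

/* Illustrative only: one 28-bit LBA PIO sector read as a guest IDE
 * driver would program it.  Each outb() is a separate intercept. */
#define ATA_BASE 0x1F0                    /* primary IDE taskfile base */

static inline void outb(unsigned short port, unsigned char val)
{
    __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
}

static void ata_read_sector(unsigned int lba)
{
    outb(ATA_BASE + 2, 1);                           /* intercept 1: sector count */
    outb(ATA_BASE + 3, lba & 0xFF);                  /* intercept 2: LBA low      */
    outb(ATA_BASE + 4, (lba >> 8) & 0xFF);           /* intercept 3: LBA mid      */
    outb(ATA_BASE + 5, (lba >> 16) & 0xFF);          /* intercept 4: LBA high     */
    outb(ATA_BASE + 6, 0xE0 | ((lba >> 24) & 0x0F)); /* intercept 5: drive/head   */
    outb(ATA_BASE + 7, 0x20);                        /* intercept 6: READ SECTORS */
    /* ...then the driver polls the status port and does 256 16-bit
     * reads from the data port - every one of them another intercept. */
}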
I did a QUICK test, using "hdparm -t /dev/hda" on real hardware and in
SuSE 10.1 running on HVM. There is at least an order of magnitude
difference in performance. This is NOT an official benchmark, just a
quick test!
Writing a pseudo-device driver for the disk (or network) in the guest
would be the best way around this problem; performance would then be
similar to the para-virtual solution. But it would of course require
that the user loads this device driver... which brings a heap of
interesting logistical problems: will Microsoft supply this driver for
the Windows version? Could we get WHQL certification for it? Etc., etc.
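To illustrate what such a pseudo-driver buys you: instead of trapping
on every register write, the guest queues whole requests on a
shared-memory ring and notifies the backend once per batch. The sketch
below is a simplified illustration of that idea - the struct layout and
names here are invented for this example, NOT the actual blkif protocol
from Xen's public headers:

/* Simplified sketch of a paravirtual block ring.  NOT the real blkif
 * interface; fields and names are invented for illustration. */
#include <stdint.h>

#define RING_SIZE 32                  /* entries; a power of two */

struct pv_blk_request {
    uint64_t id;                      /* guest tag, echoed in the response */
    uint64_t sector;                  /* starting sector on the vdisk      */
    uint32_t nr_sectors;              /* transfer length                   */
    uint32_t grant_ref;               /* shared page holding the buffer    */
    uint8_t  write;                   /* 0 = read, 1 = write               */
};

struct pv_blk_ring {
    uint32_t req_prod;                /* written by the guest   */
    uint32_t rsp_prod;                /* written by the backend */
    struct pv_blk_request ring[RING_SIZE];
};

/* Guest side: queue one request and kick the backend once.  The whole
 * submission costs a single event-channel notification instead of six
 * or more trapped port accesses. */
static void submit_read(struct pv_blk_ring *r, uint64_t sector,
                        uint32_t nr_sectors, uint32_t grant_ref)
{
    struct pv_blk_request *req = &r->ring[r->req_prod % RING_SIZE];
    req->id = r->req_prod;
    req->sector = sector;
    req->nr_sectors = nr_sectors;
    req->grant_ref = grant_ref;
    req->write = 0;
    __sync_synchronize();             /* publish the request first */
    r->req_prod++;
    /* here we would notify Dom0 via an event channel */
}

That one-notification-per-batch cost model is exactly what the
para-virtual drivers exploit.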
--
Mats
>
> Regards,
> Rami Rosen
>
> On 6/7/06, M S, Rajanish <MS.Rajanish@xxxxxx> wrote:
> >
> > Hi,
> >
> > Are there any IO performance results for Xen 3.0.2 stable +
> > VT-enabled full virtualization? Our tests show 100,000 IOPs on
> > native Linux (2.6.16) vs 75,000 IOPs in domain 0 for the same kernel
> > version at a 512B IO size. Is this expected behavior? The Xen
> > document says that the performance should be close-to-native.
> >
> > Also, it would be helpful if you could point us to any performance
> > results for Xen 3.0.
> >
> > Thanks.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel