> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Liang Yang
> Sent: 09 February 2007 19:18
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Cc: Xen devel list
> Subject: [Xen-users] Does VT-d make live migration of Xen
> more difficult by reducing the size of abstraction layer?
>
> Hi,
>
> I'm just thinking about the pros and cons of VT-d. On one side, it
> improves guest domain performance by giving more direct access to the
> hardware, bypassing the hypervisor; on the other side, it also reduces
> the hypervisor's abstraction layer, which could make live migration
> more difficult.
Ehm, that's a bit "wrong". VT (or in the case I know better, AMD-V)
doesn't directly allow the guest to access hardware, because of the
complications of memory addressing. The guest has an illusory view of
physical memory: a guest with 256MB of memory, for example, believes
that its memory starts at 0 and ends at 256MB. That can of course only
be true for one guest (at most), and that privilege is usually reserved
for Xen+Dom0. Other guests are loaded at some other address and given a
"false" statement from the hypervisor about where their memory is,
because most OS's don't grasp the concept of being loaded at some random
address in memory (never mind the fact that the guest's memory may be
non-contiguous). Because the guest is completely unaware of the
machine-physical addresses, it cannot correctly tell a piece of hardware
where the actual data is located (within the MACHINE's memory).
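To make that concrete, here is a small stand-alone C sketch (not Xen
code; the p2m table and all numbers are made up for illustration) of the
kind of physical-to-machine translation the hypervisor keeps per guest,
and why an address the guest considers "physical" is useless to a
DMA-capable device until somebody has translated it:

/* Illustrative sketch only, not Xen code: a toy physical-to-machine (p2m)
 * table mapping a guest's idea of "physical" frame numbers onto the real
 * machine frames.  The guest believes its memory starts at 0; in reality
 * the hypervisor has scattered it across non-contiguous machine frames. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define GUEST_PAGES 8          /* tiny guest: 8 pages, 32KB */

/* Hypothetical p2m table: guest frame number -> machine frame number. */
static const uint64_t p2m[GUEST_PAGES] = {
    0x12340, 0x12341, 0x0a0f0, 0x0a0f1, 0x55502, 0x55503, 0x09990, 0x09991
};

/* Translate a guest-physical address into the machine-physical address. */
static uint64_t guest_to_machine(uint64_t gpa)
{
    uint64_t gfn    = gpa >> PAGE_SHIFT;
    uint64_t offset = gpa & ((1u << PAGE_SHIFT) - 1);
    return (p2m[gfn] << PAGE_SHIFT) | offset;
}

int main(void)
{
    /* The guest would hand 0x2008 to a device as a DMA address; the data
     * actually lives at a completely different machine address. */
    uint64_t dma_buffer_gpa = 0x2008;
    printf("guest-physical 0x%llx -> machine-physical 0x%llx\n",
           (unsigned long long)dma_buffer_gpa,
           (unsigned long long)guest_to_machine(dma_buffer_gpa));
    return 0;
}

With VT-d, that translation is what the IOMMU applies in hardware for
device DMA; without it, the PV driver backend or the emulated device
model has to do it in software.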
The memory abstraction for HVM (fully virtualized) domains is not
particularly different from that of PV domains. It differs in the sense
that we trap into the hypervisor in a different way, and we have to
"reverse engineer" the operations the kernel is doing rather than "know
from the source code" what's going on, plus a few other complications.
But in essence the hypervisor knows all about the memory layout and the
hardware settings the guest has made, so the HVM case is not much more
difficult to handle than the PV case.
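Roughly, the difference looks like this. A stand-alone toy in C follows;
every function name in it is invented purely for illustration and is not
taken from the Xen source. The PV kernel hands us its intent directly
via a hypercall, while for HVM we trap the guest's ordinary page-table
write and have to decode it ourselves:

/* Minimal sketch, not Xen source; the function names are invented purely
 * to illustrate the two paths by which the hypervisor learns of a guest
 * page-table update. */
#include <stdint.h>
#include <stdio.h>

/* PV path: the modified guest kernel tells the hypervisor explicitly, so
 * the intent ("map guest frame G at virtual address V") arrives ready-made. */
static void pv_map_page(uint64_t va, uint64_t gfn)
{
    printf("PV:  hypercall mmu_update(va=0x%llx, gfn=0x%llx)\n",
           (unsigned long long)va, (unsigned long long)gfn);
    /* the hypervisor validates it and installs the real machine mapping */
}

/* HVM path: the unmodified guest writes a PTE as if on bare metal.  The
 * write is trapped, and the hypervisor has to decode the instruction and
 * the PTE to reconstruct the same intent. */
static void hvm_pte_write_intercept(uint64_t pte_addr, uint64_t pte_val)
{
    uint64_t gfn = pte_val >> 12;        /* recover the target frame */
    printf("HVM: trapped PTE write at 0x%llx, decoded gfn=0x%llx\n",
           (unsigned long long)pte_addr, (unsigned long long)gfn);
    /* the hypervisor mirrors the change into the shadow page tables */
}

int main(void)
{
    pv_map_page(0xffff800000100000ULL, 0x1a2b3);
    hvm_pte_write_intercept(0x7f0000, (0x1a2b3ULL << 12) | 0x067);
    return 0;
}

Either way, the hypervisor ends up with the full picture of the guest's
memory layout, which is what matters here.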
Whilst there is overhead in accessing hardware in the PV case, the
overhead is actually greater in the HVM case: the number of intercepts
("traps to hypervisor") for any emulated piece of hardware is most
likely larger than the single trap to the HV from a PV guest. Only
really trivial hardware can get away with a single memory operation to
complete a HW access; an IDE access, for example, consists of several
IO-writes followed by IO-read/write operations for the data.
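To put a rough number on it, here is a stand-alone C sketch (not the
real device model; it assumes the guest touches the data port one word
at a time and skips the status-register polling a real driver would also
do) counting the intercepts for one emulated IDE PIO sector read:

/* Rough sketch, not the real device model: counts VM exits for one
 * emulated IDE PIO sector read.  The port numbers are the standard
 * legacy IDE ports; the exit counting is the illustrative part. */
#include <stdint.h>
#include <stdio.h>

static unsigned long vmexits;

/* Each of these stands in for one trapped port-I/O instruction. */
static void outb(uint16_t port, uint8_t val) { (void)port; (void)val; vmexits++; }
static uint16_t inw(uint16_t port)           { (void)port; vmexits++; return 0; }

static void ide_pio_read_sector(uint32_t lba)
{
    outb(0x1F2, 1);                           /* sector count   */
    outb(0x1F3, lba & 0xFF);                  /* LBA low        */
    outb(0x1F4, (lba >> 8) & 0xFF);           /* LBA mid        */
    outb(0x1F5, (lba >> 16) & 0xFF);          /* LBA high       */
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F)); /* drive/head     */
    outb(0x1F7, 0x20);                        /* READ SECTORS   */

    for (int i = 0; i < 256; i++)             /* 512 bytes = 256 words */
        (void)inw(0x1F0);                     /* data port      */
}

int main(void)
{
    ide_pio_read_sector(0);
    printf("one 512-byte sector read -> %lu intercepts\n", vmexits);
    return 0;
}

That's a couple of hundred traps for half a kilobyte, versus a PV block
frontend that batches whole requests on a shared ring and notifies the
backend with a single event channel kick.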
>
> Could someone here give me a balanced view on when to consider VT-d
> for Xen guest domains?
The balanced view is: if a PV kernel is available, use PV. If there is
no PV kernel (and it's non-trivial to produce one), then use HVM. The
latter is the case for Windows and other "closed-source" OS's, as well
as for OS's where the kernel patches supplied by XenSource aren't
available (Linux kernel 2.4.x, for example).
--
Mats
>
> Thanks,
>
> Liang
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel