> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> Christian Anton
> Sent: 04 September 2006 18:35
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] hvm questions (hvm vs. patched DomU kernel)
>
>
> Hi folks,
>
> I have been doing some tests with Xen over the last few weeks, first
> only with patched kernels in the DomUs, and now using the hvmloader
> too.
>
> I think using the hardware virtualization capabilities of the new
> Intel dual-core CPUs is not yet very well documented, so I want to
> ask some questions here.
>
> 1. hvmloader vs. vmxloader
> In the official documentation of Xen 3.0 they talk about using the
> vmxloader to run unmodified DomU kernels. Is this an error in the
> documentation?
> http://www.cl.cam.ac.uk/Research/SRG/netos/xen/readmes/user/user.html#SECTION04300000000000000000
> I am using hvmloader for my tests, as I think that vmxloader belongs
> to the 2.x version and has been replaced by hvmloader in version 3.
Actually, vmxloader was the name from before "hvm" existed. When AMD
started introducing the code for the AMD SVM processors, there was a
need to merge some of the functionality. Thanks to some work from IBM,
Intel and AMD, we came up with "hvm". Before that it was just "vmx",
which was Intel's name for full virtualization. The documentation is
out of date, and if you wish to file a bug, it would probably be a good
thing to fix it...
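For reference, a minimal HVM guest config looks roughly like the sketch
below; the paths and the LVM volume name are just examples and may
differ between distributions and Xen builds:

    # Minimal HVM guest config sketch (paths and names are illustrative)
    kernel  = "/usr/lib/xen/boot/hvmloader"   # firmware loader for HVM guests
    builder = 'hvm'                           # use the HVM domain builder
    memory  = 256
    name    = "hvm-test"
    disk    = [ 'phy:/dev/vg0/hvm-test,ioemu:hda,w' ]  # whole disk only for HVM
    vnc     = 1

You start it with "xm create <configfile>" just like a para-virtual
guest.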
>
> 2. Handling of unmodified guests
> I enjoyed the handling of patched guest OSes very much, because the
> xm console (the virtual serial console) made it very comfortable to
> set up networking and services in a DomU.
>
> Is it possible to use this with unmodified guests too? Do I have to
> configure a serial port in the virtual machine and do some
> configuration to create a virtual serial port to connect to with xm
> console / minicom? Is this documented anywhere?
>
> Is it possible to start a virtual machine with a fixed ID? In the
> config file I give the machine a name, but when I reboot it, it gets
> a new integer ID, which means that the VNC server for this machine is
> listening on another port; and as my test box is only reachable via
> ssh, I must open a new ssh connection with another port forwarding
> every time I reboot a virtual machine.
Can't answer this...
>
> 3. Disks (LVM, Files...)
> I use LVM logical volumes for my DomUs with patched kernels, passing
> them to the DomUs as partitions, such as hda1, hda2... The big
> advantage of this is that I can copy a virtual machine with rsync,
> then configure the necessary things using a chroot, leave, and power
> up the machine. Great! Doing backups using LVM snapshots would be
> something cool, too. Using the hvmloader it seems that this is not
> possible at all; I can only pass LVM volumes or files as an entire
> disk (hda, hdb), and the guest operating system must partition it. So
> I am not able to access the files on the virtual disk when the
> machine is not running, right? Is there any chance to use a similar
> setup as I use for modified guests?
>
> Also I have noticed that I get a disk I/O performance of 57 MByte/sec
> buffered disk reads with hdparm -tT in a patched DomU, nearly the
> value I get in Dom0, but in an unmodified DomU I get poor results,
> just about 9 MByte/sec. What is the reason for this?
That's because the modified DomU uses a single hypervisor call to
communicate with Dom0 to get the data from "disk". In full
virtualization, the process is much more complex, since it's emulating
a standard disk interface (IDE or SCSI), which takes several operations
to form one full command (write the sector, cylinder and head numbers,
then which command you want performed...). Each operation is
"intercepted" by the hypervisor, passed to QEMU-DM and processed there
before execution is returned to the guest. Each intercept costs a few
thousand clock-cycles or so... Once the command is completed, QEMU-DM
asks for the "disk" data, and when it's done, DomU is signaled with a
virtual interrupt, and the interrupt handler of the guest performs
another IO operation that gets intercepted by the hypervisor and sent
to QEMU-DM. All of this is clearly slower than sending a single command
from DomU to Dom0 (each intercepted IO operation needs the same type of
DomU-to-Dom0 communication that a para-virtual guest needs for an
entire disk operation). Adding para-virtual drivers would remove this
bottleneck, but it would require new drivers for the guest. There has
been work on this for Linux, but as far as I know, no work has yet been
done on Windows drivers of this type.
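To put rough numbers on it, here is a back-of-envelope sketch; the
cycle counts and the number of intercepts per request are assumptions
for illustration only, not measured values:

    # Rough cost comparison: emulated IDE vs. para-virtual block I/O.
    # All figures are illustrative assumptions, not measurements.
    CYCLES_PER_INTERCEPT   = 3000  # assumed cost of one VM exit + QEMU-DM round trip
    INTERCEPTS_PER_REQUEST = 10    # assumed port accesses per emulated IDE command
    CYCLES_PER_HYPERCALL   = 3000  # assumed cost of one para-virtual block request

    emulated_cost = INTERCEPTS_PER_REQUEST * CYCLES_PER_INTERCEPT
    pv_cost       = CYCLES_PER_HYPERCALL

    print("emulated IDE request: ~%d cycles" % emulated_cost)  # ~30000
    print("para-virtual request: ~%d cycles" % pv_cost)        # ~3000

Whatever the exact figures are on real hardware, the ratio between the
two paths is what hurts.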
>
> 4. Recommendation
> Is it recommended to use Xen with unmodified DomUs when using Linux
> as the virtual machines' operating system? I thought it would be
> faster than patched kernels because the CPU does the hard job instead
> of the kernel patches, but so far it does not seem so. I have not yet
> done CPU performance tests, so the only thing I have noticed until
> now is the poor disk I/O.
The hard work is still done in the hypervisor, so in most cases it's
actually slower to use fully-virtualized guests than para-virtual
guests. There are also optimizations that can be done in para-virtual
guests, since the OS knows what's going on better than with full
virtualization. Example: a process in the guest allocates 4MB of
memory, which means that the guest needs to write 1024 page-table
entries [one entry per 4KB page]. In para-virt, we can send one request
to the hypervisor saying "Please set up page-table entries for these
1024 pages". In full virtualization, the hypervisor write-protects the
page-table behind the OS's back, and when a write-fault happens, the
hypervisor sees that it's a "page-table write". But it sees one write
at a time, so every page-table write takes some thousands of cycles or
so.
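The same kind of back-of-envelope sketch for the page-table example;
again, the per-fault and per-hypercall cycle counts are assumed figures
purely for illustration:

    # 4MB allocated with 4KB pages -> 1024 page-table entries to write.
    entries = (4 * 1024 * 1024) // (4 * 1024)   # = 1024

    CYCLES_PER_TRAPPED_WRITE     = 3000  # assumed cost of one write-fault intercept
    CYCLES_PER_BATCHED_HYPERCALL = 3000  # assumed cost of one batched para-virt request

    print("full virt: ~%d cycles" % (entries * CYCLES_PER_TRAPPED_WRITE))  # ~3072000
    print("para-virt: ~%d cycles" % CYCLES_PER_BATCHED_HYPERCALL)          # ~3000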
The real advantage of hardware virtualization is that there's no need
to patch the OS kernel, and it's a little bit easier to intercept
(capture/catch) the things we need to intercept in the guest OS without
causing other "interesting" problems elsewhere. Naturally, developing a
patch for an OS that doesn't have source code readily available is a
bit of a problem too... Although it is possible to get MS to license
the source code, they don't appreciate distribution of copies produced
from that source code, and they don't have much interest in accepting
patches. Nor is it likely that you could openly distribute a patch-set
to the few people that actually have the source code. So Windows
virtualization requires "full" virtualization.
>
> 5. qemu
> In my tests I have noticed that when I am using hvmloader, the
> virtualization job is done by a process named "qemu". Is this the
> qemu I know? Was it adapted to work with Xen, or is it a Xen
> development that accidentally has the same name as
> http://freshmeat.net/projects/qemu/?
It is an adaptation of the QEMU project linked above. It is used to
make hardware devices available to the guest OS, since an unmodified
OS can't use the para-virtual backend/frontend driver pairs [in
particular, such drivers would probably not be part of the standard
distribution, even if they did exist for all OSes].
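The guest config points to it via the device_model entry; the path
below is the usual location, but it may differ on your installation:

    # qemu-dm emulates the IDE disk, NIC, VGA etc. for the HVM guest
    device_model = '/usr/lib/xen/bin/qemu-dm'

That is the process you see running in Dom0, one instance per HVM
guest.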
>
> It would be really great if you could answer my questions. I want to
> use Xen instead of the VMware GSX Server we have now, virtualizing
> some Windows but mostly Linux servers.
I have tried to answer as best as I can. I hope this helps.
--
Mats
>
> In VMware we have I/O issues with our servers, and the VMware
> environment produces high load on the host system "without doing
> anything".
>
>
> Greets
>
> Christian Anton
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users