Re: [Xen-users] Hi: Newbie intro.
Ken wrote:
2. Dom0 strategy:
I'm a bit confused as to the entire role of dom0. Is this supposed to
be just a minimal command-line distro for drivers and Xen admin, or is
it a full-blown distro which also has Xen admin? Do I want flexibility
or stability? Rolling release or conservative non-rolling release?
Admin-only or do I live here?
Dom0 is to Xen a bit like the root user is to Unix-style systems. Xen
loads first and sits on top of the hardware, but Xen itself doesn't
really give you any way to interact with it. It loads Dom0 as the first
guest OS (and yes, even Dom0 is a virtual machine, I believe) and
gives it special privileges, such as being able to communicate with
the hypervisor and control it. Dom0 also gets direct access to all the
hardware by default.
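You can see this from within Dom0 itself - the management tools list
Dom0 as just another domain, always with ID 0. Something along these
lines (xm shown as the classic toolstack, newer installs use xl with
the same subcommand; the numbers below are made up):

  # xm list
  Name            ID   Mem VCPUs  State   Time(s)
  Domain-0         0   512     2  r-----    123.4
  guest1           1  1024     1  -b----     56.7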
It is customary for Dom0 to be a "light" install - just containing what
you need to run the machine. It doesn't have to be; you can load up
all your GUI shells, user apps and so on - but it's usual to keep Dom0
light because of its privileged role and the fact that if Dom0 is
compromised then all the guests are compromised.
So if you have a machine that is your desktop, it's quite OK to
run Xen on it, use Dom0 as your "desktop machine" and fire up some
other guests as required. You just have to accept that if your
desktop machine is compromised, then the other guests are compromised
too. It would be a good way to get started and experiment - just not
a good way to run production services.
3. Partitioning:
I'm currently using RAID as per Linux-only schemes. Does Xen have its
own requirements and abilities for that, or is that entirely handled
by Dom0?
Do I assign logical volumes directly to the DomUs with the proper
partitioning scheme or do I store everything on XFS in a big file and
let the DomU partition that file?
Does it make sense still to segment the system out to different
partition types for performance, or what? What's the strategy?
Again, this is largely a matter of personal preference. In terms of
performance, unless you take steps to segregate things (eg
keeping different bits of data on different drives), the I/O
from all your guests shares the same disk I/O bottlenecks. In some
ways it could be said to be worse, since you will typically have the
virtual disks for different machines spread across the same disks,
causing lots of head seeks.
Xen does not handle any filesystems on its own; whatever containers
you use are transparent to the hypervisor. What type of container you
use is again a matter of preference.
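For example, in a classic xm-style guest config the two approaches
differ only in the disk line - a file-backed image versus a raw block
device (paths and names here are purely illustrative):

  # virtual disk stored as a file in a Dom0 filesystem
  disk = [ 'file:/var/xen/guest1.img,xvda,w' ]

  # LVM logical volume handed to the guest as a block device
  disk = [ 'phy:/dev/vg0/guest1root,xvda1,w' ]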
At one extreme, you can build one big filesystem for Dom0 and use
files in it to store the virtual disks for the guests. Personally I use
block devices and LVM - I create one logical volume per filesystem in
each DomU, and I don't partition them inside the DomUs. This has the
advantage that each filesystem can be mounted in Dom0 without any
hassles (ie you can just "mount /dev/vg0/guest1root /mnt") and get
access to the files on the guest's disks (but you must shut down the
guest first).
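As a rough sketch of that layout (the volume group vg0 and guest name
guest1 are just example names), setting up a guest in Dom0 might look
like:

  # one LV per guest filesystem, no partition table inside them
  lvcreate -L 10G -n guest1root vg0
  lvcreate -L 2G  -n guest1swap vg0
  mkfs.ext3 /dev/vg0/guest1root
  mkswap    /dev/vg0/guest1swap

  # in the guest config each LV appears as its own device
  disk = [ 'phy:/dev/vg0/guest1root,xvda1,w',
           'phy:/dev/vg0/guest1swap,xvda2,w' ]

  # with the guest shut down, the filesystem mounts directly in Dom0
  mount /dev/vg0/guest1root /mnt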
If you create one virtual disk and partition it inside the guest, then
the filesystems can still be mounted elsewhere, but there's an extra
step or two involved.
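The extra step is typically mapping the partitions inside the virtual
disk before you can mount them, eg with kpartx (guest1disk is just an
example name for a whole-disk LV, and the exact /dev/mapper names vary
a little between versions):

  # in Dom0, with the guest shut down
  kpartx -av /dev/vg0/guest1disk    # creates /dev/mapper/guest1disk1, ...
  mount /dev/mapper/guest1disk1 /mnt
  umount /mnt
  kpartx -dv /dev/vg0/guest1disk    # remove the mappings again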
One LV per filesystem also makes resizing filesystems a doddle -
shut down the guest, shrink the filesystem first if reducing the size,
resize the LV, then expand the filesystem if growing. There is talk of
it being possible to resize (expand) the LV, trigger some signal to
make the increased size visible to the running DomU, and then
live-expand the filesystem.
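Spelled out for the common case of growing an ext3 filesystem offline
(shrinking works much the same way, except you must shrink the
filesystem before shrinking the LV):

  # guest shut down first
  lvextend -L +5G /dev/vg0/guest1root
  e2fsck -f /dev/vg0/guest1root
  resize2fs /dev/vg0/guest1root    # grows to fill the enlarged LV
  # then start the guest again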
As mentioned above, many of the same performance issues arise, but
with some added complications because you are no longer considering
just one "machine". If you do have a heavy I/O application, then you
may still want to take the usual steps of keeping that data on its own
set of spindles and so on - Xen will let you do that, it really
doesn't care what you do.
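In practice that just means putting the busy data on an LV in a volume
group built from its own drives and exporting it to the guest alongside
its other disks, eg (vg_fast being a hypothetical volume group on a
separate set of spindles):

  disk = [ 'phy:/dev/vg0/guest1root,xvda1,w',
           'phy:/dev/vg_fast/guest1data,xvdb1,w' ]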
Dunno about the other questions.
--
Simon Hobson
Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.