xen-devel

Re: [Xen-devel] Xen as a kernel module

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Xen as a kernel module
From: Tobias Hunger <tobias@xxxxxxxxxxx>
Date: Thu, 27 Jan 2005 16:05:15 +0100
Delivery-date: Fri, 28 Jan 2005 18:12:26 +0000
Envelope-to: xen+James.Bulpin@xxxxxxxxxxxx
In-reply-to: <41F8257E.608@xxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <E1Ctijw-0004J6-00@xxxxxxxxxxxxxxxxx> <41F8257E.608@xxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.7.2

On Thursday 27 January 2005 00:19, Jacob Gorm Hansen wrote:
> Sorry if this came out sounding as a bit of a troll, anyway, my
> suggested setup would look like this:
>
>        [ VM1 ]   [ VM2 ] ....   [ VMN ]
>
>            [ Xen + linux kernel ]
>
>                 [ hardware ]

I am interested in fast VMs. Assuming that you want dom0 to be '[ Xen + linux 
kernel ]', I fail to see how your proposed architecture helps there.

> Right now Xen is mapped somewhere in top of memory, I am not sure how
> domains are kept out of there, but I suppose it has to do with segments.

As I understand it, Xen runs in ring 0 and pushes the guest kernels one ring 
up into ring 1, then uses traps to let the guest OSes call into the 
hypervisor as necessary.
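
Roughly, that trap path looks like this (a sketch only: the int 0x82 vector 
and the register convention match my reading of the 32-bit Xen sources of 
this era, but take the details with a grain of salt):

    /* Sketch: how a ring-1 guest kernel calls into the ring-0
     * hypervisor on 32-bit x86. A hypercall is a software interrupt
     * (int 0x82) with the hypercall number in EAX and arguments in
     * EBX, ECX, ... The gate's DPL is 1, so the guest kernel may use
     * it but ring-3 applications may not. */
    static inline long hypercall1(unsigned int nr, unsigned long arg1)
    {
        long ret;
        __asm__ __volatile__ (
            "int $0x82"            /* trap: ring 1 -> ring 0 */
            : "=a" (ret)           /* result comes back in EAX */
            : "a" (nr), "b" (arg1) /* number in EAX, first arg in EBX */
            : "memory");
        return ret;
    }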

> The good thing about that is that hypercalls are cheap, and in Xen1.x
> I/O was cheap as well.

Cheap where? In dom0 or the VMs?

> My suggestion/question was a) why don't we just put a full Linux up
> there, including drivers, and b) can we provide the Xen hypercall
> interface on top of other OSes as well?

a) Because that significantly impacts security and robustness. Security and 
robustness are the two attributes I want most in software, especially in a 
kernel. This is even more true for a hypervisor.

b) I don't understand what you are getting at in b).

> > What aspects of performance under Xen are you finding unacceptable?
>
> I generally find performance acceptable, but as I said there are cases
> where there appears to be some friction against the goals of Xen (driver
> isolation) and the goals of the application (throughput, low latency).

Hmmm... adding a layer of abstraction rarely improves throughput or latency; 
you do it anyway to gain flexibility. I do not like your proposal, as it 
sacrifices flexibility I want for throughput/latency in a place I don't care 
about.

> > Well isolation (both security and performance) are two explicit
> > design goals of Xen. If you want to have the illusion of multiple
> > kernels without these properties, you can use linux vservers or
> > BSD jail.

Please keep those goals!

> I would argue that you could get the same level of isolation (except
> from driver isolation) if you merge the two, while achieving the same IO
> performance as the monolithic model, and still be able to reuse existing
> driver code.

I fail to see where the monolithic kernel comes into this... I assume you are 
referring to a kernel running on a real machine instead of a virtual one.

Your proposal would force me to have all network traffic pass through dom0, 
the one domain able to halt every VM on the machine! I'd feel extremely 
nervous with such a setup (OK, I am paranoid ;-).

You do that to improve IO performance in dom0, which is the one virtual 
machine where I do not need IO performance: on my systems dom0 is meant to 
set up VMs and nothing more (currently I use 16MiB of RAM for that domain). 
All work is done in the other domains. With your proposal those domains could 
no longer access hardware directly, so it would actually hurt IO performance 
for me.
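
For reference, keeping dom0 that small is just a boot parameter (a sketch; 
the kernel and path names are from my setup, and dom0_mem takes kilobytes 
here, so 16384 is 16MiB):

    # GRUB entry for a deliberately small dom0
    title Xen, 16MiB dom0
        kernel /boot/xen.gz dom0_mem=16384
        module /boot/vmlinuz-2.6-xen0 root=/dev/hda1 ro console=tty0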

> It would still be interesting to reuse existing Xen guestOS ports on top
> of different hypervisor implementations.

The critical part of the hypervisor (from the perspective of a guest OS) is 
the interface. That seems well defined and wouldn't need to change with your 
proposal, so you should not need to modify domU OSes.
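
To make that concrete: from domU's point of view the hypervisor is little 
more than a numbered set of entry points plus some shared-memory structures. 
A sketch (the names and numbers below are illustrative, not copied from the 
Xen headers):

    /* Illustrative view of the guest-visible ABI: as long as a
     * reimplemented hypervisor keeps these numbers and their
     * semantics, an existing domU kernel should run unmodified. */
    enum hypercall_nr {
        HYPERCALL_SET_TRAP_TABLE = 0, /* register guest exception handlers */
        HYPERCALL_MMU_UPDATE     = 1, /* batched page-table updates */
        HYPERCALL_SCHED_OP       = 2, /* yield/block the virtual CPU */
        /* ... */
    };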

Your needs seem to be a fast dom0 from which you only occasionally spin off 
small sessions, doing most of the work in dom0. Mine are several servers 
sharing one piece of hardware: dom0 is mostly idle while the other domains do 
the heavy lifting. My hope is to see more proposals about reducing the 
coupling of Xen and dom0 (like being able to reboot dom0 without affecting 
the other domains). I have no relation to the Xen project apart from using 
it, so I can only offer my impressions.

-- 
Gruss,
Tobias

------------------------------------------------------------
Tobias Hunger           The box said: 'Windows 95 or better'
tobias@xxxxxxxxxxx                     So I installed Linux.
------------------------------------------------------------

Attachment: pgp1F1CRRv7Vl.pgp
Description: PGP signature
