Does the Xen hypervisor take a VM exit for CLTS, or does it still delegate
CLTS to ring 0? How does the Xen hypervisor distinguish whether an
instruction comes from a para-virtualized domain or from a fully-virtualized
domain? Does Xen have to replace all problematic instructions with
hypercalls in a para-virtualized domain (even CLTS)? Why does Xen need
different strategies in a para-virtualized domain for handling CLTS
(delegation to ring 0) and the other problematic instructions (hypercalls)?
My second question:
It seems each processor has its own exception bitmap. If I have multiple
processors (VT-x enabled), does the Xen hypervisor use the same exception
bitmap on all processors, or does Xen allow each processor to have its own
(possibly different) exception bitmap?
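(To make the question concrete: by "its own exception bitmap" I mean one
bitmap per CPU rather than a single shared one, roughly as in the toy sketch
below. The structures and names there are invented purely to illustrate the
question; they are not Xen's actual data structures.)

/* Illustration only: one exception bitmap per CPU, as opposed to a
 * single global one.  Everything here is made up for the question;
 * it is not Xen code. */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS         4
#define TRAP_NO_DEVICE  7    /* #NM, the exception CLTS handling cares about */
#define TRAP_PAGE_FAULT 14   /* #PF */

struct cpu_state {
    uint32_t exception_bitmap; /* bit n set => exception n causes a VM exit */
};

int main(void)
{
    struct cpu_state cpu[NR_CPUS];

    /* The same interception policy on every CPU...                   */
    for (int i = 0; i < NR_CPUS; i++)
        cpu[i].exception_bitmap = 1u << TRAP_NO_DEVICE;

    /* ...or, in principle, a different policy on one particular CPU. */
    cpu[1].exception_bitmap |= 1u << TRAP_PAGE_FAULT;

    for (int i = 0; i < NR_CPUS; i++)
        printf("cpu%d exception bitmap = 0x%08x\n",
               i, (unsigned)cpu[i].exception_bitmap);
    return 0;
}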
Best regards,
Liang
-----Original Message-----
From: M.A. Williamson [mailto:maw48@xxxxxxxxxxxxxxxx] On Behalf Of Mark Williamson
Sent: Tuesday, March 20, 2007 5:37 PM
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Cc: Liang Yang; Petersson, Mats
Subject: Re: [Xen-devel] Does Dom0 always get interrupts first before they
are delivered to other guest domains?
Hi,
> First, you once gave another excellent explanation about the communication
> between an HVM domain and the HV (15 Feb 2007). Here I quote part of it:
> "...Since these IO events are synchronous in a real processor, the
> hypervisor will wait for a "return event" before the guest is allowed to
> continue. Qemu-dm runs as a normal user-process in Dom0..."
> My question is about those synchronous I/O events. Why can't we make them
> asynchronous? E.g. whenever the I/O is done, we can interrupt the HV again
> and let the HV resume I/O processing. Is there any specific limitation that
> forces the Xen hypervisor to do I/O in synchronous mode?
Was this talking about IO port reads / writes?
The problem with IO port reads is that the guest expects the hardware to
have responded to an IO port read and for the result to be available as soon
as the inb (or whatever) instruction has finished... Therefore in a virtual
machine, we can't return to the guest until we've figured out (by emulating
using the device model) what that read should return.
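To make that concrete, here is a minimal, self-contained sketch of the
synchronous shape of the problem. All the structures and function names
below are invented for illustration; the real Xen/qemu-dm ioreq path is a
lot more involved.

/* Sketch of why an emulated IO-port read is synchronous: the vCPU
 * cannot resume until the device model has produced the value that
 * the guest's 'inb' instruction is supposed to return.  All names
 * here are placeholders, not the real Xen/qemu-dm interfaces. */
#include <stdint.h>
#include <stdio.h>

struct ioreq {
    uint16_t port;   /* guest IO port, e.g. 0x1F7 (IDE status) */
    uint8_t  size;   /* 1, 2 or 4 bytes */
    uint32_t data;   /* filled in by the device model */
};

/* Stand-in for qemu-dm running in dom0: emulate the device and
 * produce the value the real hardware would have returned. */
static void device_model_emulate(struct ioreq *req)
{
    if (req->port == 0x1F7)
        req->data = 0x50;        /* IDE status: ready, not busy */
    else
        req->data = 0xFF;        /* unmodelled port */
}

/* Called on a VM exit caused by a guest 'inb' on the given port.
 * In real Xen this hands the request to dom0 and blocks the vCPU;
 * here the "wait" is simply the synchronous function call. */
static uint32_t handle_guest_port_read(uint16_t port, uint8_t size)
{
    struct ioreq req = { .port = port, .size = size, .data = 0 };

    device_model_emulate(&req);  /* must finish before the guest resumes */
    return req.data;             /* ends up in the guest's EAX */
}

int main(void)
{
    printf("inb(0x1F7) -> 0x%02x\n",
           (unsigned)handle_guest_port_read(0x1F7, 1));
    return 0;
}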
Consecutive writes can potentially be batched, I believe, and there has been
talk of implementing that.

I don't see any reason why other VCPUs shouldn't keep running in the
meantime, though.
> Second, you just mentioned there is a big difference between the number of
> HV-to-domain0 events for the device model and the split driver model. Could
> you elaborate on how the split driver model can reduce HV-to-domain0 events
> compared with the qemu device model?
The PV split drivers are designed to minimise events: they'll queue up a
load of IO requests in a batch and then notify dom0 that the IO requests are
ready.

In contrast, the FV device emulation can't do this: we have to consult dom0
for the emulation of any device operations the guest does (e.g. each IO port
read the guest does) so the batching is less efficient.
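As a rough illustration of that batching (this is not the real
blkfront/blkback code; the ring layout and the notify_backend() helper are
simplified placeholders), the frontend queues a whole batch of requests into
a shared ring and then raises a single event for the lot:

/* Simplified sketch of the split-driver idea: queue many requests in
 * a shared ring, then send ONE notification to the backend.  Not the
 * real blkfront/blkback code. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 32

struct blk_request {
    uint64_t sector;
    uint32_t nr_sectors;
};

struct shared_ring {
    struct blk_request req[RING_SIZE];
    unsigned int prod;   /* producer index, advanced by the frontend */
    unsigned int cons;   /* consumer index, advanced by the backend  */
};

/* Placeholder for an event-channel notification to dom0. */
static int notifications;
static void notify_backend(void) { notifications++; }

/* Queue a batch of requests, then notify the backend exactly once. */
static void submit_batch(struct shared_ring *ring,
                         const struct blk_request *batch, unsigned int n)
{
    for (unsigned int i = 0; i < n; i++)
        ring->req[ring->prod++ % RING_SIZE] = batch[i];

    notify_backend();    /* one event covers the whole batch */
}

int main(void)
{
    struct shared_ring ring = { .prod = 0, .cons = 0 };
    struct blk_request batch[8];

    for (unsigned int i = 0; i < 8; i++)
        batch[i] = (struct blk_request){ .sector = i * 8, .nr_sectors = 8 };

    submit_batch(&ring, batch, 8);
    printf("queued 8 requests, sent %d notification(s)\n", notifications);
    return 0;
}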
Cheers,
Mark
> Have a wonderful weekend,
>
> Liang
>
> ----- Original Message -----
> From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
> To: "Liang Yang" <multisyncfe991@xxxxxxxxxxx>;
> <xen-devel@xxxxxxxxxxxxxxxxxxx>
> Sent: Friday, March 16, 2007 10:40 AM
> Subject: RE: [Xen-devel] Does Dom0 always get interrupts first before they
> are delivered to other guest domains?
>
> > -----Original Message-----
> > From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> > [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Liang Yang
> > Sent: 16 March 2007 17:30
> > To: xen-devel@xxxxxxxxxxxxxxxxxxx
> > Subject: [Xen-devel] Does Dom0 always get interrupts first
> > before they are delivered to other guest domains?
> >
> > Hello,
> >
> > It seems that if HVM domains access devices in emulation mode with the
> > device model in domain0, the Xen hypervisor will send the interrupt event
> > to domain0 first, and then the device model in domain0 will send an event
> > to the HVM domains.
>
> Ok, so let's see if I've understood your question first:
> If we do a disk-read (for example), the actual disk-read operation itself
> will generate an interrupt, which goes into the Xen HV where it's converted
> to an event that goes to Dom0, which in turn wakes up the pending call to
> read (in this case) that was requesting the disk IO, and then when the
> read-call is finished an event is sent to the HVM DomU. Is this the
> sequence of events that you're talking about?
>
> If that's what you are talking about, it must be done this way.
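> Roughly, the chain can be pictured as in the sketch below. Every name in it
> is a placeholder made up for illustration; the real path goes through event
> channels, qemu-dm and the ioreq machinery.
>
> /* Sketch of the completion path for an emulated disk read:
>  * device interrupt -> Xen -> event to Dom0 -> the pending read()
>  * completes -> event to the HVM DomU.  All names are placeholders. */
> #include <stdio.h>
>
> static void send_event_to_domU(void)
> {
>     puts("4. event delivered to the HVM DomU, guest vCPU resumes");
> }
>
> static void dom0_complete_pending_read(void)
> {
>     puts("3. Dom0: pending read() returns, device model finishes the IO");
>     send_event_to_domU();
> }
>
> static void xen_convert_irq_to_event(void)
> {
>     puts("2. Xen: convert the interrupt into an event bound to Dom0");
>     dom0_complete_pending_read();
> }
>
> int main(void)
> {
>     puts("1. hardware: disk read completes, controller raises an IRQ");
>     xen_convert_irq_to_event();
>     return 0;
> }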
>
> > However, if I'm using the split driver model and I only run the BE driver
> > in domain0, does domain0 still get the interrupt first (assuming this
> > interrupt is not owned by the Xen hypervisor, e.g. the local APIC timer),
> > or will the Xen hypervisor send the event directly to the HVM domain,
> > bypassing domain0, in the split driver model?
>
> Not in the above type of scenario. The interrupt must go to the
> driver-domain (normally Dom0) to indicate that the hardware is ready to
> deliver the data. This will wake up the user-mode call that waited for the
> data, and then the data can be delivered to the guest domain from there
> (which in turn is awakened by the event sent from the driver domain).
>
> There is no difference in the number of events in these two cases.
>
> There is however a big difference in the number of hypervisor-to-dom0
> events that occur: the HVM model will require something in the order of 5
> writes to the IDE controller to perform one disk read/write operation.
> Each of those will incur one event to wake up qemu-dm, and one event to
> wake the domu (which will most likely just run one or two instructions
> forward to hit the next write to the IDE controller).
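> For a feel for where those writes come from, below is roughly what a guest
> IDE driver does for a single PIO sector read. The port numbers follow the
> standard ATA task-file layout; outb() is just a stub so the sketch is
> self-contained. In an HVM guest each of these accesses traps out to
> qemu-dm in dom0.
>
> /* Sketch: the port writes a guest IDE driver issues for one PIO
>  * sector read.  In an HVM guest each outb() below causes a VM exit
>  * and a round trip through qemu-dm.  outb() is a stub here. */
> #include <stdint.h>
> #include <stdio.h>
>
> static int traps;
> static void outb(uint16_t port, uint8_t val)
> {
>     traps++;   /* in the emulated case, each access traps to dom0 */
>     printf("outb(0x%03X, 0x%02X)\n", port, val);
> }
>
> static void ide_pio_read_sector(uint32_t lba)
> {
>     outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F)); /* drive/head, LBA 24-27 */
>     outb(0x1F2, 1);                           /* sector count = 1      */
>     outb(0x1F3, lba & 0xFF);                  /* LBA bits 0-7          */
>     outb(0x1F4, (lba >> 8) & 0xFF);           /* LBA bits 8-15         */
>     outb(0x1F5, (lba >> 16) & 0xFF);          /* LBA bits 16-23        */
>     outb(0x1F7, 0x20);                        /* command: READ SECTORS */
>     /* ...then the driver polls the status port and reads 256 words
>      * from the data port, each of which is another trap when
>      * emulated. */
> }
>
> int main(void)
> {
>     ide_pio_read_sector(1234);
>     printf("%d trapping port writes before any data moves\n", traps);
>     return 0;
> }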
>
> > Another question: for interrupt delivery, does Xen treat a
> > para-virtualized domain differently from an HVM domain, considering the
> > device model versus the split driver model?
>
> Not in interrupt delivery, no. Except for the fact that HVM domains
> obviously have full hardware interfaces for interrupt controllers etc.,
> which adds a little bit of overhead (because each interrupt needs to be
> acknowledged/cancelled on the interrupt controller, for example).
>
> --
> Mats
>
> > Thanks a lot,
> >
> > Liang
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel