xen-devel

Re: [Xen-devel] [PATCH][v2] Hybrid extension support in Xen

To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH][v2] Hybrid extension support in Xen
From: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
Date: Tue, 2 Feb 2010 22:07:10 +0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
In-reply-to: <1265118764.2965.23102.camel@xxxxxxxxxxxxxxxxxxxxxx>
Organization: Intel Opensource Technology Center
References: <201002021616.19189.sheng@xxxxxxxxxxxxxxx> <201002022106.42451.sheng@xxxxxxxxxxxxxxx> <1265118764.2965.23102.camel@xxxxxxxxxxxxxxxxxxxxxx>
On Tuesday 02 February 2010 21:52:44 Ian Campbell wrote:
> On Tue, 2010-02-02 at 13:06 +0000, Sheng Yang wrote:
> > On Tuesday 02 February 2010 19:26:55 Ian Campbell wrote:
> > > On Tue, 2010-02-02 at 08:16 +0000, Sheng Yang wrote:
> > > > +static hvm_hypercall_t *hvm_hypercall_hybrid64_table[NR_hypercalls] = {
> > > > +    [ __HYPERVISOR_memory_op ] = (hvm_hypercall_t *)hvm_memory_op,
> > > > +    [ __HYPERVISOR_grant_table_op ] = (hvm_hypercall_t *)hvm_grant_table_op,
> > > > +    HYPERCALL(xen_version),
> > > > +    HYPERCALL(console_io),
> > > > +    HYPERCALL(vcpu_op),
> > > > +    HYPERCALL(sched_op),
> > > > +    HYPERCALL(event_channel_op),
> > > > +    HYPERCALL(hvm_op),
> > > > +};
> > >
> > > Why not just expand the existing hvm hypercall table to incorporate
> > > these new hypercalls?
> >
> > I was just afraid that a normal HVM guest calling these hypercalls would
> > result in some chaos, so I added a limitation (hybrid only) here. (I admit
> > it doesn't improve security much against a malicious guest...)
> 
> I don't think this limitation adds any meaningful security or reduction
> in chaos. A non-hybrid aware HVM guest simply won't make those
> hypercalls.

Um, yes... I will update this in the next version.
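
For the next version, here is a minimal sketch of what that merge could look
like, assuming the existing hvm_hypercall64_table and HYPERCALL() macro in
xen/arch/x86/hvm/hvm.c (which entries the current table already carries is
left to the actual source):

/* Sketch only: fold the hybrid hypercalls into the existing 64-bit HVM
 * table instead of keeping a separate hvm_hypercall_hybrid64_table.
 * A non-hybrid HVM guest simply never issues the extra hypercalls, so
 * no hybrid-only guard is needed. */
static hvm_hypercall_t *hvm_hypercall64_table[NR_hypercalls] = {
    [ __HYPERVISOR_memory_op ] = (hvm_hypercall_t *)hvm_memory_op,
    [ __HYPERVISOR_grant_table_op ] = (hvm_hypercall_t *)hvm_grant_table_op,
    HYPERCALL(xen_version),
    HYPERCALL(console_io),
    HYPERCALL(vcpu_op),
    HYPERCALL(sched_op),
    HYPERCALL(event_channel_op),
    HYPERCALL(hvm_op),
};
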
> 
> > > In fact, why is hybrid mode considered a separate mode by the
> > > hypervisor at all? Shouldn't it just be an extension to regular HVM
> > > mode which guests may choose to use? This seems like it would eliminate
> > > a bunch of the random conditionals.
> >
> > There are still some differences from a normal HVM guest. For example, to
> > use the PV timer, we have to clear the TSC offset in the HVM domain; and
> > for event delivery, we use a predefined VIRQ rather than the emulated
> > IOAPIC/APIC. These code paths are mutually exclusive, so we need them
> > wrapped with a flag (which we called "hybrid"). The word "mode" here may
> > be inaccurate; "extension" would be more proper. I will change the
> > phrasing next time.
> 
> But the old mechanisms (emulated IOAPIC etc) are still present until the
> enable_hybrid HVMOP is called, aren't they? Why can't you perform the
> switch at the point at which the new feature is requested by the guest
> e.g. when the VIRQ is configured?

Yes, they are there until the enable_hybrid HVMOP is called. But sorry, I
don't quite understand the point about "when the VIRQ is configured".
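
To make the current behaviour concrete, a sketch of the guest-side switch as
the patch does it today: one explicit HVMOP after boot. HVMOP_enable_hybrid
is the sub-op under discussion; the argument structure and flag names below
are hypothetical, not from the patch:

/* Hypothetical illustration of the one-shot switch; the real argument
 * layout is defined by the patch and not reproduced here. */
struct xen_hvm_hybrid_flags {
    uint64_t flags;             /* e.g. PV timer, PV event delivery */
};

static int enable_hybrid_extension(uint64_t flags)
{
    struct xen_hvm_hybrid_flags arg = { .flags = flags };

    /* HYPERVISOR_hvm_op() is the usual guest wrapper for HVMOP_* calls;
     * until this returns, the emulated IOAPIC/APIC paths stay active. */
    return HYPERVISOR_hvm_op(HVMOP_enable_hybrid, &arg);
}
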
> 
> It looks like you are using evtchn's for all interrupt injection,
> including any emulated or passthrough devices which may be present.
> Using evtchn's for PV devices obviously makes sense but I think this
> needs to coexist with emulated interrupt injection for non-PV devices so
> the IOAPIC/APIC should not be mutually exclusive with using PV evtchns.

Yes, similar to dom0 (maybe more like pv_ops dom0). Currently passthrough
devices have an issue with it, due to the lack of MSI/MSI-X support (which is
our next step and can be shared with pv_ops dom0). The INTx method also can't
be used, because event channels are edge-triggered while INTx is
level-triggered. Passthrough devices can still work with some hacking, if
MSI2INTX translation works.
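
The edge/level mismatch in one illustrative sketch (none of this code is from
the patch): an event channel latches a single pending bit, so it can only
convey "something happened", never "the line is still asserted".

#include <stdbool.h>

/* Illustrative only: edge semantics of an event channel pending bit. */
struct evtchn {
    bool pending;   /* set on notify, cleared when the guest services it */
};

static void notify(struct evtchn *ch)
{
    ch->pending = true;     /* a second notify before the guest clears
                             * the bit is coalesced: no count, no level */
}

static bool guest_consume(struct evtchn *ch)
{
    if (!ch->pending)
        return false;
    ch->pending = false;    /* a level-triggered INTx line would have to
                             * be re-sampled here until it de-asserts,
                             * which a one-shot bit cannot express */
    return true;
}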

But letting the IOAPIC/APIC coexist with event channels is not our target. As
we know, the overhead they bring, notably the EOI through the LAPIC, hurts
performance badly. We want event channels because they have much less
overhead than the IOAPIC/LAPIC: a completely virtualization-aware solution
that eliminates all the unnecessary overhead. That's how we want Xen guests
to benefit.

-- 
regards
Yang, Sheng
