On Thu, 21 Jul 2011 07:35:00 +0100
Keir Fraser <keir.xen@xxxxxxxxx> wrote:
> On 21/07/2011 02:30, "Mukesh Rathor" <mukesh.rathor@xxxxxxxxxx> wrote:
>
> > Hi,
> >
> > This is a bit confusing. This is for the PVOPS kernel; I've not looked at
> > older PV kernels to see what they do yet. But the VCPU starts with
> > evtchn_upcall_mask set and eflags.IF enabled. However, during kernel-boot
> > memory mapping, a lot of faults are getting fixed up by Xen in:
> >
> > fixup_page_fault():
> >     /* No fixups in interrupt context or when interrupts are disabled. */
> >     if ( in_irq() || !(regs->eflags & X86_EFLAGS_IF) )   <------
> >         return 0;
>
> A PV guest never has EF.IF=0, so the early exit should never be
> triggered by a guest fault.
>
> Your best bet is to fake this out in your HVM container wrapper. Just
> write an EFLAGS into the saved regs that has EF.IF=1, as would always
> be the case for a normal PV guest. Rather that than fragile
> is_hvm_pv() checks scattered around.
Ok. In my prototype I've got the check, but I'll do the wrapper. I realize
now the above check is more about the hypervisor not taking a fault with
interrupts disabled than about the guest doing so.
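
(To make the wrapper idea concrete, a minimal sketch is below. Only
fixup_page_fault(), struct cpu_user_regs, X86_EFLAGS_IF and
guest_cpu_user_regs() are existing Xen names taken from the snippet above;
the helper name and the call site are hypothetical, not the actual patch.)

static void hvm_pv_fixup_guest_regs(struct cpu_user_regs *regs)
{
    /*
     * Hypothetical helper: normalise EFLAGS.IF in the saved guest
     * registers inside the HVM-container wrapper, so the common PV
     * fault-fixup path never sees IF=0 from a hybrid guest, just as a
     * plain PV guest always has IF=1.
     */
    regs->eflags |= X86_EFLAGS_IF;
}

/* ... then in the hybrid #PF path, before the common PV fixup code: */
    hvm_pv_fixup_guest_regs(guest_cpu_user_regs());
    if ( fixup_page_fault(addr, guest_cpu_user_regs()) )
        return;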
> The setting of EF.IF shouldn't matter much for your guest as you'll
> be doing PV event delivery anyway, but I wonder how it ends up with
> EF.IF=0 -- is that deliberate?
Yeah, I set IF=0 initially to make sure events are not delivered until the
guest is ready and enables irqs. For PV, the vcpu's evtchn_upcall_mask=1
ensures this. Unlike PV, the hybrid guest also toggles IF in its irq
enable/disable paths to make "interrupt window exiting" work, BTW.
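
(For reference, a minimal guest-side sketch of such an enable/disable pair,
assuming a Linux pvops kernel. The hybrid_* names are hypothetical; struct
vcpu_info, evtchn_upcall_mask/evtchn_upcall_pending, the per-cpu xen_vcpu
pointer and xen_force_evtchn_callback() are existing names in the Linux Xen
code. Not the actual patch.)

static void hybrid_irq_disable(void)
{
	struct vcpu_info *vcpu = this_cpu_read(xen_vcpu);

	vcpu->evtchn_upcall_mask = 1;		/* PV-style masking, as before */
	barrier();
	asm volatile("cli" : : : "memory");	/* also close the interrupt window */
}

static void hybrid_irq_enable(void)
{
	struct vcpu_info *vcpu = this_cpu_read(xen_vcpu);

	vcpu->evtchn_upcall_mask = 0;		/* unmask PV event delivery */
	barrier();
	asm volatile("sti" : : : "memory");	/* open the interrupt window */
	if (vcpu->evtchn_upcall_pending)
		xen_force_evtchn_callback();	/* catch anything already pending */
}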
thanks,
Mukesh
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel