WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
From: "Guy Zana" <guy@xxxxxxxxxxxx>
Date: Fri, 10 Aug 2007 06:10:56 -0400
Cc: Alex Novik <alex@xxxxxxxxxxxx>
Delivery-date: Fri, 10 Aug 2007 03:20:42 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <D470B4E54465E3469E2ABBC5AFAC390F013B20AC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcfarSF45lhECFFTQSiE3oBAG7IKmQASGmqAAA5CKCA=
Thread-topic: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
Thanks Kevin for all of your comments, I agree with them all.
First, most of the work here was done by Alex Novik, not me :)

More comments below...

Thanks,
Guy.

> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx] 
> Sent: Friday, August 10, 2007 5:59 AM
> To: Guy Zana; xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: Alex Novik
> Subject: RE: [Xen-devel] [RFC] Pass-through Interdomain 
> Interrupts Sharing(HVM/Dom0)
> 
> Hi, Guy,
>       Thanks for very good description.
> 
>       Basically I think this should work, but with following concerns:
> 
> - How to choose the timeout value?
>       Small timeout may result more spurious injection and 
> performance penalty, while large timeout may not satisfy 
> driver expectation to high-speed device.
> 

That's a good point. The spurious-vs-starving trade-off is exactly opposite 
between the HVM and dom0. For an HVM that holds a vline, a large timeout value 
will result in more spurious interrupts, since the line is held asserted.

The timeout value could be adaptive: increased (made slower) any time the timer 
fires and decides to do nothing, and decreased any time it takes action. This 
may complicate things even further.
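To make the idea concrete, here is a minimal sketch of such an adaptive timeout 
in C. All names and bounds here are hypothetical (nothing below exists in the 
Xen tree); it only illustrates the multiplicative back-off/speed-up policy 
described above.

```c
#include <assert.h>

/* Hypothetical bounds and step factors, chosen only for illustration. */
#define TIMEOUT_MIN_NS    100000ULL   /* 100us: fastest we would ever poll  */
#define TIMEOUT_MAX_NS  10000000ULL   /* 10ms: slowest acceptable reaction  */

struct ptirq_timer {
    unsigned long long timeout_ns;    /* current firing interval */
};

/* Timer fired but there was nothing to do: back off (slow down). */
static void timer_was_spurious(struct ptirq_timer *t)
{
    t->timeout_ns *= 2;
    if (t->timeout_ns > TIMEOUT_MAX_NS)
        t->timeout_ns = TIMEOUT_MAX_NS;
}

/* Timer fired and took a decision: speed up to serve the device sooner. */
static void timer_took_action(struct ptirq_timer *t)
{
    t->timeout_ns /= 2;
    if (t->timeout_ns < TIMEOUT_MIN_NS)
        t->timeout_ns = TIMEOUT_MIN_NS;
}
```

The doubling/halving factors are arbitrary; any monotone adjustment with 
clamping would express the same trade-off.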

Does the IOAPIC have a timeout value for firing an interrupt when the line is 
held asserted? If so, is using that value feasible?
Freezing the timer is logically the same as masking the IOAPIC pin.

> - How to cope with existing irq sharing mechanism for PV 
> driver domain?
>       Existing approach between PV driver domain and dom0 is 
> based on some trigger point, i.e, guest EOI. Keep insertion 
> count and track guest response. Timeout mechanism is 
> different, and I guess two paths are difficult to share logic.
> 
>       How about a mixed sharing case, say among dom0/PV 
> domain/ HVM domain?

Sharing is problematic between multiple domains, at least when an HVM is 
involved. I guess it is infrequent that you'll want to assign two or more 
devices sharing the same line to different domains other than dom0; I look at 
the M devices left to dom0 more as a nuisance.

Didn't give a lot of thought to that, but you could probably allow PV domains 
in the proposed shared interdomain ISR chain: inject the interrupt to all of 
the PV domains and dom0 (simultaneously), OR their handling-status results, and 
take action based on that value. Sharing a line between two or more HVMs is 
much more difficult to solve.
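The ORing step could be sketched as below. This is a hypothetical helper, not 
existing Xen code; it only shows that the line counts as handled if at least 
one domain in the chain claimed the interrupt.

```c
#include <stdbool.h>

/* Per-domain handling status reported back after a simultaneous injection. */
enum irq_status { IRQ_UNHANDLED = 0, IRQ_HANDLED = 1 };

/* OR together the results from dom0 and every PV domain on the line.
 * Returns true if any domain handled the interrupt. */
static bool line_was_handled(const enum irq_status results[], int n)
{
    bool handled = false;
    for (int i = 0; i < n; i++)
        handled |= (results[i] == IRQ_HANDLED);
    return handled;
}
```

If the OR comes back false, the line is presumed to belong to the HVM (or to 
be spurious), and the timeout path above takes over.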

> 
> - interrupt delay within HVM may be exaggerated under some 
> special condition, if HVM is not ready to handle the 
> injection at D.3 (like blocked in I/O emulation) while later 
> D.4 will cancel previous injection at next timeout. Then only 
> at next D.3 HVM gets re-injection again and it may or may not 
> be delayed again upon status at that time.

I'm not sure I understood -

In a D.3 -> D.4 -> D.3 event cycle, the HVM's vline stays asserted. Dom0 
always gets a chance to check whether the interrupt is its own, but the vline 
stays asserted until dom0 handles it or until the pline is deasserted.

The HVM will be ready when it unmasks the IOAPIC's pin and its VCPU is 
executing.
It doesn't matter whether you choose to assert or deassert its vline; in the 
meantime the timer will fire, and that will eventually create spurious 
interrupts in dom0. But an assumption we made is that we can't avoid spurious 
interrupts, and we would rather get them in dom0.

> 
>       Did you run some heavy workload and observe any complains?

We haven't implemented it yet :-)

Thanks for the great comments!

Guy.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel