xen-devel

Re: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)

To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, Guy Zana <guy@xxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
From: Keir Fraser <keir@xxxxxxxxxxxxx>
Date: Fri, 10 Aug 2007 09:16:28 +0100
Cc: Alex Novik <alex@xxxxxxxxxxxx>
Delivery-date: Fri, 10 Aug 2007 01:17:05 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <D470B4E54465E3469E2ABBC5AFAC390F013B20B3@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcfarSF45lhECFFTQSiE3oBAG7IKmQAbyytWAAAcBIUAADDsMAAA8Q6JAAAVuSAAAUjAmQ==
Thread-topic: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
User-agent: Microsoft-Entourage/11.3.3.061214
On 10/8/07 09:02, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

> Considering sharing between a high-speed device and a low-speed device,
> a simple move-to-back policy (on each EOI) is not the most efficient. At
> the least, we could also take interrupt frequency into account as a
> priority factor.

My assumption would be that any given interrupt is due to only one device,
and that in this case the interrupting device is almost always the
high-speed one. Whenever a low-speed device interrupt does occur, things
will slow down: we deliver to the high-speed driver first, wait for
unmask/EOI, see that the line is still asserted, move the high-speed
device to the back, and re-deliver to the low-speed device. Furthermore,
on the next interrupt we will deliver to the low-speed device first even
though it is most likely a high-speed device interrupt. Clearly we could
be smarter here (for example, only move-to-back after N failures). I'm not
convinced the extra complexity is worth it though; I think this kind of
scenario is rare enough. I'd like to see a simple sharing method measured
and found wanting before adding extra heuristics.
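
For concreteness, here is a minimal sketch in C of the simple policy
described above. All names (struct shared_irq, line_asserted(),
deliver_to_guest(), and so on) are made up for illustration and are not
Xen's actual interfaces:

/* Sketch of the simple move-to-back policy discussed above.  All names
 * here are illustrative, not Xen's real interfaces. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_SHARERS 8

struct shared_irq {
    int guests[MAX_SHARERS];   /* priority order: guests[0] is tried first */
    int nr;
};

/* Hypothetical hooks into the rest of the system, stubbed out here. */
static bool line_asserted(void)            { return true; }
static void deliver_to_guest(int guest_id) { printf("deliver to guest %d\n", guest_id); }

/* Physical interrupt: deliver to the guest at the head of the list. */
static void shared_irq_fire(struct shared_irq *irq)
{
    if (irq->nr > 0)
        deliver_to_guest(irq->guests[0]);
}

static void move_head_to_back(struct shared_irq *irq)
{
    int head = irq->guests[0];
    for (int i = 1; i < irq->nr; i++)
        irq->guests[i - 1] = irq->guests[i];
    irq->guests[irq->nr - 1] = head;
}

/* Unmask/EOI from the guest at the head of the list. */
static void shared_irq_guest_eoi(struct shared_irq *irq)
{
    if (!line_asserted())
        return;   /* head guest serviced its device; keep the ordering */

    /* Line still asserted: the head was the wrong guest.  Demote it and
     * re-deliver to the next sharer.  (The refinement mentioned above
     * would demote only after N consecutive misses.) */
    move_head_to_back(irq);
    shared_irq_fire(irq);
}

int main(void)
{
    struct shared_irq irq = { .guests = { 10, 20 }, .nr = 2 };
    shared_irq_fire(&irq);        /* delivered to guest 10 first */
    shared_irq_guest_eoi(&irq);   /* line still asserted: demote, try 20 */
    return 0;
}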

> My question, though, is about the efficiency of the timeout under
> different conditions. Say an HVM domain is at the top of the list and
> has its vRTE masked (driver unloaded, or the previous injection still
> being handled); in that case we may not want to inject now and wait
> the same 'reasonable time' for no response, when moving it to the back
> would take effect immediately.

Okay, yes, the driver-unloaded case at least needs to be handled. But it
seems to me that the timeout here could be in the hundreds of milliseconds,
minimum. It should be extremely rare for the timeout to be needed at all.
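
A rough sketch of how such a timeout fallback might look, again with
made-up names (struct shared_irq, guest_vrte_masked(), start_timer_ms(),
and so on) rather than Xen's actual interfaces. It also skips a head guest
whose virtual RTE is masked, per the case raised in the quoted mail, so the
timeout is reserved for genuinely unresponsive guests:

/* Sketch of the timeout fallback discussed above.  All names are
 * illustrative, not Xen's real interfaces. */
#include <stdbool.h>
#include <stdio.h>

#define EOI_TIMEOUT_MS 300          /* coarse: hundreds of milliseconds */
#define MAX_SHARERS 8

struct shared_irq {
    int guests[MAX_SHARERS];        /* priority order: guests[0] first */
    int nr;
};

/* Hypothetical hooks into the rest of the system, stubbed out here. */
static bool guest_vrte_masked(int guest_id) { return guest_id == 1; }
static void deliver_to_guest(int guest_id)  { printf("deliver to guest %d\n", guest_id); }
static void start_timer_ms(int ms, void (*fn)(void *), void *data)
{
    (void)ms; (void)fn; (void)data;  /* would arm a one-shot timer */
}

static void move_head_to_back(struct shared_irq *irq)
{
    int head = irq->guests[0];
    for (int i = 1; i < irq->nr; i++)
        irq->guests[i - 1] = irq->guests[i];
    irq->guests[irq->nr - 1] = head;
}

/* Timer fired with no EOI from the head guest (e.g. driver unloaded):
 * demote it and try the next sharer. */
static void eoi_timeout(void *data)
{
    struct shared_irq *irq = data;
    move_head_to_back(irq);
    deliver_to_guest(irq->guests[0]);
    start_timer_ms(EOI_TIMEOUT_MS, eoi_timeout, irq);
}

static void shared_irq_deliver(struct shared_irq *irq)
{
    /* If the head guest has its virtual RTE masked, an EOI can never
     * arrive, so skip it up front rather than waiting out the timeout
     * (the case raised in the quoted mail). */
    for (int i = 0; i < irq->nr && guest_vrte_masked(irq->guests[0]); i++)
        move_head_to_back(irq);

    deliver_to_guest(irq->guests[0]);
    start_timer_ms(EOI_TIMEOUT_MS, eoi_timeout, irq);
}

int main(void)
{
    struct shared_irq irq = { .guests = { 1, 2 }, .nr = 2 };
    shared_irq_deliver(&irq);   /* guest 1 is masked, so guest 2 gets it */
    return 0;
}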

>> The timeout isn't part of this method's normal operation. The usual case
>> will be that we deliver to just one guest -- at the front of our priority
>> list -- and it was the correct single guest to deliver the interrupt to. In
> 
> That is hard to tell, since there is no way to check whether it was the
> right one, given the randomness of interrupt arrival.

Well, yes. My interest here is in working well for one active device at a
time (i.e. other devices are basically quiescent). Or, if there are multiple
devices active at a time, only one is delivering a really significant number
of interrupts. If you have multiple high-speed devices and want maximum
performance, I think people know to avoid shared interrupts for those
devices if possible, by shuffling PCI cards and so on.

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
