WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

RE: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)

To: "Keir Fraser" <keir@xxxxxxxxxxxxx>, "Guy Zana" <guy@xxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Fri, 10 Aug 2007 16:41:38 +0800
Cc: Alex Novik <alex@xxxxxxxxxxxx>
Delivery-date: Fri, 10 Aug 2007 01:42:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <C2E1DD6C.13F99%keir@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <D470B4E54465E3469E2ABBC5AFAC390F013B20B3@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C2E1DD6C.13F99%keir@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcfarSF45lhECFFTQSiE3oBAG7IKmQAbyytWAAAcBIUAADDsMAAA8Q6JAAAVuSAAAUjAmQAAbvFw
Thread-topic: [Xen-devel] [RFC] Pass-through Interdomain Interrupts Sharing(HVM/Dom0)
>From: Keir Fraser [mailto:keir@xxxxxxxxxxxxx]
>Sent: 10 August 2007 16:16
>
>On 10/8/07 09:02, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
>rare enough. I'd like to see a simple sharing method measured and found
>wanting before adding extra heuristics.

Sure, and let's start from the simple approach first. Just a reminder: for 
drivers that run a timeout check on an expected interrupt delivery, the 
slower delivery may increase the chance of such complaints, though that 
problem is not solved when not sharing either.
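
To make that concrete, below is the kind of driver pattern I have in mind. 
It is a made-up example (fake_dev and friends are invented names, not any 
real driver): the driver starts an operation and treats a late completion 
interrupt as an error, so a delayed injection surfaces as a complaint even 
though the interrupt eventually arrives.

/* Hypothetical driver, purely for illustration; not taken from real code. */
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/printk.h>

struct fake_dev {
    struct completion done;        /* signalled by the completion interrupt */
};

static irqreturn_t fake_dev_isr(int irq, void *data)
{
    struct fake_dev *dev = data;

    complete(&dev->done);          /* interrupt finally arrived */
    return IRQ_HANDLED;
}

static int fake_dev_do_io(struct fake_dev *dev)
{
    reinit_completion(&dev->done);
    /* ... program the hardware and start the transfer ... */

    /* The "expected interrupt" timeout check: if injection is delayed
     * because the shared line was offered to another guest first, this
     * path fires and the driver complains / gives up on the request. */
    if (!wait_for_completion_timeout(&dev->done, msecs_to_jiffies(500))) {
        pr_warn("fake_dev: no completion interrupt within 500ms\n");
        return -ETIMEDOUT;
    }

    return 0;
}

With such a driver in dom0, even an occasional few-hundred-millisecond 
detour of the shared line through another guest is enough to hit that 
warning path.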

>
>> My question, though, is about how well the timeout works under different
>> conditions. Say an HVM domain is at the top of the list at the time, and
>> that domain has its vRTE masked (driver unloaded, or the previous
>> injection is still being handled). In that case we may not want to inject
>> now and wait the same 'reasonable time' for no response; instead, moving
>> it to the back can take effect immediately.
>
>Okay, yes, the driver-unloaded case at least needs to be handled. But it
>seems to me that the timeout here could be in the hundreds of milliseconds,
>minimum. It should be an extremely occasional event that the timeout is
>needed.

I can agree with 'occasional' but not 'extremely occasional'. :-) An HVM 
domain at the head of the list may be blocked waiting for Qemu to respond, 
while Qemu in turn is waiting on a driver (e.g. for disk r/w) and that 
driver is waiting on an interrupt. In that condition the first injection 
into the HVM domain will time out anyway, and only the next injection can 
be handled, after dom0 gets its interrupt. Such an inter-domain dependency 
may make the case worse...
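
Roughly the delivery policy we are debating, in sketch form. The names 
(shared_irq, guest_binding, pick_target, on_delivery_timeout) are invented 
here for illustration and do not match the real Xen pass-through code:

#include <stdbool.h>
#include <stddef.h>

struct guest_binding {
    struct guest_binding *next;
    bool vrte_masked;            /* guest has its virtual RTE masked */
    int domid;
};

struct shared_irq {
    struct guest_binding *head;  /* front of the delivery priority list */
};

/* Unlink 'g' and append it at the tail of the priority list. */
static void move_to_back(struct shared_irq *irq, struct guest_binding *g)
{
    struct guest_binding **pp;

    for (pp = &irq->head; *pp; pp = &(*pp)->next) {
        if (*pp == g) {
            *pp = g->next;       /* unlink */
            break;
        }
    }
    g->next = NULL;
    for (pp = &irq->head; *pp; pp = &(*pp)->next)
        ;
    *pp = g;                     /* re-append at the tail */
}

/* Choose the injection target.  Guests that clearly cannot take the
 * interrupt right now (masked vRTE) are demoted immediately instead of
 * burning a full timeout on them. */
static struct guest_binding *pick_target(struct shared_irq *irq)
{
    size_t n = 0;
    struct guest_binding *g;

    for (g = irq->head; g; g = g->next)
        n++;                     /* bound the demotion loop */

    while (n-- && irq->head && irq->head->vrte_masked)
        move_to_back(irq, irq->head);

    if (irq->head && !irq->head->vrte_masked)
        return irq->head;
    return NULL;                 /* nobody can take it right now */
}

/* The guest at the head failed to EOI within the timeout: demote it so
 * the next binding (e.g. dom0) gets the line on the next attempt. */
static void on_delivery_timeout(struct shared_irq *irq)
{
    if (irq->head)
        move_to_back(irq, irq->head);
}

The point above is that when the head of the list is an HVM domain stuck 
behind Qemu and dom0, pick_target() cannot see that: the vRTE looks 
unmasked, so the first injection still has to ride out the full timeout in 
on_delivery_timeout() before dom0 is tried.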

>
>>> The timeout isn't part of this method's normal operation. The usual case
>>> will be that we deliver to just one guest -- at the front of our priority
>>> list -- and it was the correct single guest to deliver the interrupt to. In
>>
>> This is hard to tell, since there is no way to check whether it was the
>> right one, given the randomness of interrupt occurrence.
>
>Well yes. My interest here is in working well for one active device at a
>time (i.e. other devices are basically quiescent). Or, if there are multiple
>devices active at a time, only one is delivering a really significant number
>of interrupts. If you have multiple high-speed devices and want maximum
>performance, I think people know to avoid shared interrupts for those
>devices if possible, by shuffling PCI cards and so on.
>

If we are clear that we will keep that assumption, then the simplest 
approach is best, after warning the user. :-)

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
