
To: "Tristan Gingold" <Tristan.Gingold@xxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "Magenheimer, Dan \(HP Labs Fort Collins\)" <dan.magenheimer@xxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-ia64-devel] Event channel vs current scheme speed [was vIOSAPIC and IRQs delivery]
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Fri, 10 Mar 2006 04:02:53 +0800
Delivery-date: Thu, 09 Mar 2006 20:04:14 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcZDVMW/1CjWm9F3S4i+a4Dx56SvqwATuCew
Thread-topic: [Xen-ia64-devel] Event channel vs current scheme speed [was vIOSAPIC and IRQs delivery]
Anyway, good discussion so far, though there is still some way to go toward 
consensus. :-)

Maybe we should look at this from another angle: fairness.

On a native system, the OS may receive hardware interruptions, which include 
exceptions, traps, faults, and interrupts. (Copying from the SDM:) Interruptions 
are events that occur during instruction processing, causing control flow 
to be passed to an interruption handling routine. Interrupts, in turn, are 
the events managed by the interrupt controller.

Now let's look at the major difference between the two mechanisms under 
debate (event channel vs. current interrupt injection): 
        whether to expose "xen events" as a new type of interruption (not 
interrupt) to the guest.

Let's first look at the current xen/ia64 behavior. All the "xen events" are 
bound to one hard-coded per-cpu vector (0xE9). From the guest's point of 
view, "xen events" are simply an interrupt. The xen event dispatcher is 
registered as an irq_action bound to vector 0xE9, and __do_IRQ finally 
jumps to that dispatcher to handle "xen events".
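
To make that wiring concrete, here is a minimal sketch in 2.6-era kernel C. 
The function and variable names (xen_event_dispatcher, setup_xen_event_irq) 
are illustrative only; of the identifiers below, only register_percpu_irq, 
struct irqaction and __do_IRQ come from the real ia64 Linux code:

/*
 * Minimal sketch of the current scheme (illustrative names, not the
 * literal xen/ia64 sources): every "xen event" funnels through one
 * hard-coded per-cpu vector, with a single dispatcher registered on
 * that vector like any other interrupt handler.
 */
#include <linux/interrupt.h>
#include <asm/hw_irq.h>          /* register_percpu_irq() on ia64 */

#define XEN_EVENT_VECTOR 0xE9    /* the hard-coded per-cpu vector */

/* Hypothetical dispatcher that demultiplexes all pending "xen events". */
static irqreturn_t xen_event_dispatcher(int irq, void *dev_id,
                                        struct pt_regs *regs)
{
        /* ...scan the shared info page and handle each pending event... */
        return IRQ_HANDLED;
}

static struct irqaction xen_event_irqaction = {
        .handler = xen_event_dispatcher,
        .name    = "xen-event",
};

void __init setup_xen_event_irq(void)
{
        /* After this, __do_IRQ(XEN_EVENT_VECTOR, regs) lands in the
         * dispatcher above. */
        register_percpu_irq(XEN_EVENT_VECTOR, &xen_event_irqaction);
}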

The event channel mechanism, by contrast, exposes "xen events" as a new 
interruption to xenlinux, with a specific callback handler as the direct 
resume point. It is then the xen event dispatcher that invokes __do_IRQ 
when the event is a device interrupt.
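
The callback itself would look roughly like the x86 xenlinux 
evtchn_do_upcall(); the sketch below is modeled on that routine, with the 
shared-info field layout and the evtchn_to_irq[] table taken from the x86 
side, so treat the ia64 details as assumptions:

/*
 * The hypervisor resumes the guest directly in this callback instead of
 * injecting an interrupt. The callback demultiplexes the two-level
 * pending bitmap and hands device interrupts to __do_IRQ().
 */
void xen_event_callback(struct pt_regs *regs)
{
        shared_info_t *s = HYPERVISOR_shared_info;
        vcpu_info_t *v = &s->vcpu_info[smp_processor_id()];
        unsigned long l1, l2;
        unsigned int l1i, l2i, port;
        int irq;

        v->evtchn_upcall_pending = 0;
        l1 = xchg(&v->evtchn_pending_sel, 0);     /* first-level bits */
        while (l1 != 0) {
                l1i = __ffs(l1);
                l1 &= ~(1UL << l1i);
                while ((l2 = s->evtchn_pending[l1i] &
                             ~s->evtchn_mask[l1i]) != 0) {
                        l2i = __ffs(l2);
                        port = (l1i * BITS_PER_LONG) + l2i;
                        clear_bit(l2i, &s->evtchn_pending[l1i]);
                        /* device interrupts take the normal irq path */
                        if ((irq = evtchn_to_irq[port]) != -1)
                                __do_IRQ(irq, regs);
                }
        }
}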

Regarding the current model, there seems to be a fairness issue between 
physical interrupts and "xen events". Take the current vector 0xE9 as an 
example: its priority is lower than the timer's but higher than every 
external device interrupt's. This means "xen events" will always preempt 
device interrupts, which is unfair and not what we want.
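
For reference, this follows from the ia64 vector layout, where hardware 
priority rises with the vector number. A sketch with the conventional 
Linux/ia64 values (treat the exact numbers as illustrative):

/* On ia64, external-interrupt priority rises with the vector number
 * (16 priority classes of 16 vectors each). With the usual Linux/ia64
 * layout, 0xE9 sits below the timer but above every device vector: */
#define IA64_TIMER_VECTOR         0xEF  /* timer: highest priority */
#define XEN_EVENT_VECTOR          0xE9  /* all "xen events" share this */
#define IA64_LAST_DEVICE_VECTOR   0xE7  /* device irqs end here... */
#define IA64_FIRST_DEVICE_VECTOR  0x30  /* ...so 0xE9 preempts them all */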

Then how about changing it to a smaller value, say lower than all device 
interrupts? That is unfair again, since all the "xen events" are then put 
at the bottom of the list, even after keyboard interrupts. Some "xen events" 
are critical and have to be given high priority. For example, xen may send 
VIRQ_DOM_EXC to a guest when trying to kill the domain. Nor should we 
lower the priority of the traffic between frontend/backend drivers.

People may further think of placing that hard-coded vector in the middle. 
The problem still holds: we can only move all the "xen events" to another 
priority level together. There are priority differences among external 
device interrupts, and likewise among xen events; the two groups interleave 
with no clean dividing line. But because the current approach binds all 
"xen events" to one vector, such unfairness is difficult to handle at this 
coarse-grained level.

So how do things differ under the event channel mechanism? After introducing 
the callback to handle "xen events", the "xen events" are now the basic 
hardware events that the guest receives; no interrupt is injected into 
xenlinux any more. All the pirqs, virqs, ipis, and pure inter-domain events 
are bound to this "xen event" layer. Since this is a new layer underneath 
interrupts, xenlinux does not need to change much (the major C change is 
contained in evtchn.c), and drivers in xenlinux still request their irq 
resources as before, with the difference that the irq is bound to an event 
at the same time.
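
From a driver writer's point of view the change is small. The sketch below 
is modeled on the x86 xenlinux evtchn.c helpers (bind_evtchn_to_irq exists 
there; the driver-side names are made up), and shows the irq still being 
requested the normal way:

/* Ordinary driver handler, unchanged by the event-channel layer. */
static irqreturn_t my_device_interrupt(int irq, void *dev_id,
                                       struct pt_regs *regs)
{
        return IRQ_HANDLED;
}

int my_device_bind(unsigned int evtchn, void *dev)
{
        int irq;

        /* Allocate a dynamic irq and wire it to the event-channel port. */
        irq = bind_evtchn_to_irq(evtchn);
        if (irq < 0)
                return irq;

        /* The irq resource is still requested as before; the only
         * difference is where the irq number came from. */
        return request_irq(irq, my_device_interrupt, 0, "my-device", dev);
}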

Since the two groups above now sit in one flat layer, described by uniform 
evtchn_pending bits, it becomes possible, and easy, for the event dispatcher 
to allocate priority globally.
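
To illustrate (purely hypothetical, not existing xenlinux code): with every 
source reduced to a bit in evtchn_pending, the dispatcher could consult a 
per-port priority table filled in at bind time and serve pending ports in 
global priority order, interleaving the two groups freely:

#define NR_EVENT_CHANNELS 1024

static u8 port_priority[NR_EVENT_CHANNELS];   /* set when a port is bound */

/* Return the highest-priority pending port, or -1 if none is pending. */
static int next_pending_port(const unsigned long *pending)
{
        int port, best = -1;

        for (port = 0; port < NR_EVENT_CHANNELS; port++) {
                if (!test_bit(port, pending))
                        continue;
                if (best < 0 || port_priority[port] > port_priority[best])
                        best = port;
        }
        return best;
}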

All in all, this long discussion is just one of the factors I weigh in 
choosing the proper mechanism. :-)

Thanks,
Kevin


>-----Original Message-----
>From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
>[mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Tristan
>Gingold
>Sent: 9 March 2006 16:40
>To: Dong, Eddie; Magenheimer, Dan (HP Labs Fort Collins);
>xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>Subject: [Xen-ia64-devel] Event channel vs current scheme speed
>[was vIOSAPIC and IRQs delivery]
>
>I'd like to narrow the discussion on this problem within this thread.
>
>We all agree we'd like to support shared IRQs and driver domains.
>I am saying I can do that without event channels, while Eddie says they are
>required.
>
>One of Eddie's arguments is performance, as discussed previously.  Since I
>don't agree, and things should be obvious here, I'd like to understand why
>we don't agree.
>
>See my comments.
>
>> >> 4: More new hypercalls are introduced, and more calls to the hypervisor.
>> > Only the physdev_op hypercall is added, but it is also used on x86 to set
>> > up the IOAPIC.  You can't avoid it.
>>
>> Initial setup time is OK no matter what the approach; runtime is critical.
>Ok.
>
>I saw a lot of hypercalls for RTE writes.
>Did you see them by reading the code or by running it?
>
>There are hypercalls to mask/unmask interrupts.  Is it a performance
>bottleneck?  I don't think so, since masking/unmasking shouldn't be very
>frequent.  Please tell me if I am wrong.
>
>There are also hypercalls to do EOI.  This can be a performance issue.
>If the interrupt is edge-triggered, EOI is not required and could be
>optimized out, if that is not already done.
>If the interrupt is level-triggered, then EOI is required, and I don't
>understand how event channels can avoid it.  For me, this is the purpose of
>the PHYSDEVOP_IRQ_UNMASK_NOTIFY hypercall.  Xen has to know when all
>domains have finished handling the interrupt.
>
>Finally, there are the LSAPIC TPR, IVR, and EOI accesses.  I think the
>overhead is very small thanks to Dan's work.  And this overhead was
>measured with the timer.
>
>In my current understanding, I don't see the performance gain of
>event channels.  And I repeat, I'd like to understand.
>
>Tristan.
>
>
>_______________________________________________
>Xen-ia64-devel mailing list
>Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>http://lists.xensource.com/xen-ia64-devel

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel