Tristan Gingold wrote:
> I'd like to narrow the discussion on this problem within this thread.
>
> We all agree we'd like to support shared IRQs and drivers domain.
> I am saying I can do that without event channel, while Eddie says it
> is required.
The current vIOSAPIC can't; I would like to see your enhancement if you
have one.
>
> One of Eddie's arguments is performance, as discussed previously. Since
> I don't agree, and things should be obvious here, I'd like to
> understand why we don't agree.
I said the event-channel-based solution was slightly better in performance,
but I also said that was not the critical reason I didn't buy into
vIOSAPIC.
The critical reasons are architectural correctness, compatibility with Xen,
and future maintenance effort.
>
>
>>>> 4: More new hypercalls are introduced, and more calls to the hypervisor.
>>> Only the physdev_op hypercall is added, but it is also used on x86 to
>>> set up the IOAPIC. You can't avoid it.
>>
>> Initialization time is OK no matter what the approach; runtime is
>> critical. OK.
>
>> I saw a lot of hypercalls for RTE writes.
> Did you see them by reading the code or by running ?
>
> There are hypercalls to mask/unmask interrupts. Is it a performance
> bottleneck ? I don't think so, since masking/unmasking shouldn't be
> very frequent. Please tell me if I am wrong.
>
> There are also hypercalls to do EOI. This can be a performance issue.
> If the interrupt is edge triggered, EOI is not required and could be
> optimized out if not yet done.
> If the interrupt is level-triggered, then it is required and I don't
> understand how event-channel can avoid it. For me, this is the
With the event-channel-based approach, the TPR and IVR are read in Xen, not
in the guest. That is the difference. Only the EOI is conditionally written,
on a notifier request from the event channel. So vIOSAPIC needs 3 hypercalls,
while the event-channel-based solution needs at most 1.
> purpose of the hypercall PHYSDEVOP_IRQ_UNMASK_NOTIFY. Xen has to know
> when all domains have finished handling the interrupt.
I have listed this in the description of the x86 flow; please refer to it.
Basically it is the same.
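The delivery path I describe above can be sketched roughly as follows. This
is a minimal model, not real Xen code: the names `shared_info`,
`evtchn_pending`, `pirq_needs_eoi`, and `hypercall_eoi` are illustrative
assumptions. The point it shows is that discovering and acknowledging a
pending interrupt is pure shared-memory work, and a ring crossing happens
only when the hypervisor has flagged the port as a level-triggered pirq
that still needs an EOI.

```c
/* Sketch (illustrative assumptions, not the real Xen interfaces) of the
 * event-channel delivery path: the guest never reads TPR/IVR itself --
 * Xen does that -- and the guest issues an EOI hypercall only when the
 * hypervisor has flagged the port as needing one. */
#include <assert.h>
#include <stdint.h>

#define NR_EVTCHNS 64

struct shared_info {
    uint64_t evtchn_pending;  /* bitmap written by Xen, read by the guest */
    uint64_t pirq_needs_eoi;  /* set by Xen for level-triggered pirqs */
};

static int hypercall_count;   /* counts ring crossings in this model */

static void hypercall_eoi(int port) { (void)port; hypercall_count++; }

/* Guest-side dispatch: shared-memory reads only, at most one hypercall
 * per level-triggered source. Returns how many events were handled. */
static int handle_pending(struct shared_info *s)
{
    int handled = 0;
    for (int port = 0; port < NR_EVTCHNS; port++) {
        uint64_t bit = 1ULL << port;
        if (!(s->evtchn_pending & bit))
            continue;
        s->evtchn_pending &= ~bit;   /* ack in shared memory, no trap */
        handled++;
        if (s->pirq_needs_eoi & bit) /* level-triggered: EOI required */
            hypercall_eoi(port);
    }
    return handled;
}
```

An edge-triggered event (its `pirq_needs_eoi` bit clear) is thus handled
with zero hypercalls in this model.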
>
> Finally, there is the LSAPIC TPR, IVR and EOI. I think the overhead
> is very small thanks to Dan's work. And this overhead was measured
> with the timer.
>
> In my current understanding, I don't see the performance gain of
No matter how small, it is a ring crossing, and it can never be cheaper
than a simple (event channel) bitmap scan.
If a hypercall could be faster than a shared-memory access, virtualization
life would be totally different and we might lose our jobs :-(
> event-channel. And I repeat, I'd like to understand.
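The "simple bitmap scan" I keep referring to can be sketched like this. It
is a simplified model of the two-level pending-bit layout Xen's event
channels use (a selector word saying which word of the pending array to
scan); the field names and sizes here are assumptions for illustration,
not the real `shared_info` definitions. Finding the next pending port
costs two find-first-set operations on shared memory, with no ring
crossing at all.

```c
/* Simplified two-level pending-bit scan (illustrative layout, not the
 * real shared_info): sel bit w set means pending[w] has at least one
 * pending port. Invariant: sel and pending[] are kept consistent. */
#include <assert.h>
#include <stdint.h>

#define WORDS 8   /* 8 * 64 = 512 ports in this model */

struct pending_state {
    uint64_t sel;            /* bit w set => pending[w] is non-zero */
    uint64_t pending[WORDS];
};

/* Return the lowest pending port and clear it, or -1 if none pending.
 * Two find-first-set operations; no hypercall anywhere. */
static int next_pending_port(struct pending_state *p)
{
    if (p->sel == 0)
        return -1;
    int w = __builtin_ctzll(p->sel);
    int b = __builtin_ctzll(p->pending[w]);
    p->pending[w] &= ~(1ULL << b);
    if (p->pending[w] == 0)
        p->sel &= ~(1ULL << w);
    return w * 64 + b;
}
```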
>
>> The problem is that Xen already supports it by default; the solution
>> is already there. What we do is just use it :-) Linux already
>> supports this!
>> Then why does IA64 want to remove this support and leave extra effort
>> for the future? If this were something new, I would say yes, we should
>> start simple.
> I also agree with enabling shared interrupt.
Good, at least we have some common ground now :-)
Thanks for your support!
>> 3: Stability:
>> The event-channel-based solution has undergone long, thorough testing
>> and real deployment, but vIOSAPIC has not. BTW, IRQ handling is very
>> important in an OS; a race condition can cost people weeks or even
>> months of debugging effort.
> I don't agree. The current logic is much more tested than event
> channels, at least on ia64!
> Radical changes are much more worrying.
>
Please be aware that vIOSAPIC needs to change the IOSAPIC code, and the
original IOSAPIC code is not aware of any race conditions among multiple
VMs; that is totally new. On the other hand, the event-channel-based
solution doesn't need to change a single line of runtime-service code.
The dish is already there.
Background for others: choosing the IOSAPIC or pirq as the hardware
interrupt controller is done at initialization time. If pirq is chosen,
the guest IRQ goes through the event-channel-based approach, with no
extra changes.
>> 4: Which one changes Linux less?
>> My answer is still the event-channel-based solution, as all the event
>> channel code is Xen-common and is already in (VBD/VNIF use it).
> You will have to change iosapic.c almost like my change (see
> io_apic-xen.c) and add a new event-channel irq_type.
The initialization-time changes are almost the same (a negligible
difference), but as for runtime-service changes, the event-channel-based
solution changes less.
> Checking running_on_xen costs nothing. If you worry about that, forget
> transparent virtualization.
As previously said, although the event-channel-based solution has slightly
better performance, that is not critical even for me. It just shows you
how much better the transparency concept is supported by the
event-channel-based solution.
>> 6: Stability in the future:
>> My answer is clear: fixing an initialization-time bug costs only
>> one-tenth of what a runtime bug costs. The event-channel-based
>> solution only changes initialization-time code.
> The current interrupt delivery is stable. Why change something that
> is working?
Because we need to support driver domains and IRQ sharing among guests.
Maybe I should ask you why the Xen code supporting interrupt delivery for
driver domains and IRQ sharing was not chosen. The Xen code is working
code.
Thx, Eddie
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel