xen-ia64-devel

RE: [Xen-ia64-devel] vIOSAPIC and IRQs delivery

To: "Tristan Gingold" <Tristan.Gingold@xxxxxxxx>, "Magenheimer, Dan \(HP Labs Fort Collins\)" <dan.magenheimer@xxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-ia64-devel] vIOSAPIC and IRQs delivery
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Thu, 9 Mar 2006 00:38:58 +0800
Delivery-date: Wed, 08 Mar 2006 16:40:22 +0000
Tristan:
        One more thing: our proposal is to work out an ideal design for
IRQ virtualization. We don't want to spend a lot of time arguing every
detail here. I have found several limitations in the previous patch, and
I'd like to suggest that we work together toward that ideal design. Let
us complete the event-channel-based design first; then people can compare
and comment. Will you contribute to that effort too? Then you can fight
between your left hand and your right hand, like letting a Republican
argue for the Democrats, or vice versa :-)
        The current solution (dom0 owns the IOSAPIC, with event channels
built on pseudo-physical IRQs) can serve us for a while until a
well-considered solution comes out.
        BTW, please refer to xen-devel: Keir confirmed that io_apic.c on
x86 is only used at initialization time; it is no longer needed at run
time.
        See my comments inline too.
thx, eddie



Tristan Gingold wrote:
>>      The event channel model will in some cases request a real IOSAPIC
>> operation, based on the type it is bound to. The software stack layering
>> is very clear:  1: guest PIRQ (top), 2: event channel (middle),
>>  3: machine IRQ (bottom).
>>      BTW, the event channel is a pure software design; there is no
>> architecture dependency here.
> I don't wholly agree.  The callback entry is written in assembly, and
> seems to have tricks.

The callback is just one of the mechanisms that event channels can be
carried on. Using pseudo-physical IRQs, as in the current xen/ia64, is
another alternative. This is not tied to the callback.
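To make the layering concrete (1: guest PIRQ on top, 2: event channel in
the middle, 3: machine IRQ at the bottom), here is a minimal sketch, in C,
of how a guest physical IRQ could be bound to an event channel port
through the standard Xen public interface. The struct and flag names
follow xen/include/public/event_channel.h; the hypercall wrapper
name/signature and the helper are assumptions for illustration only, not
code from either patch.

    /* Sketch only: bind guest "physical" IRQ <pirq> to an event channel port.
     * The hypercall wrapper below is an assumed two-argument form.           */
    #include <xen/interface/event_channel.h>

    static int bind_guest_pirq(unsigned int pirq, int will_share,
                               evtchn_port_t *port)
    {
        struct evtchn_bind_pirq bind = {
            .pirq  = pirq,                               /* 1: guest PIRQ (top) */
            .flags = will_share ? BIND_PIRQ__WILL_SHARE : 0,
        };

        /* Xen allocates a port and wires it to the machine IRQ:
         *   2: event channel (middle)  ->  3: machine IRQ (bottom)            */
        if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind) != 0)
            return -1;

        *port = bind.port;    /* interrupts now arrive as events on this port */
        return 0;
    }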

>> Then let us see where the previous patch needs to improve.
>> 1: IRQ sharing is not supported. This feature, especially for big
>> iron like Itanium, is a must.
> I agree.  However we won't reach this problem now as device drivers
> do not exist yet.
We are designing this patch to solve the driver domain problem raised at
the Xen summit. If "we won't reach the problem yet" is the reason not to
do it, why do we need this change at all? Letting dom0 own the IOSAPIC is
pretty simple and robust. Remember our goals are:
1: Support driver domains.
2: Driver domains may share IRQ lines.

> 
>> 2: Sharing machine IOSAPIC resources among multiple guests introduces
>> many dangerous situations. Example:
>>      If device A in DomX and device B in DomY share IRQn, and DomX is
>> handling device A's IRQ (IRQn), take a function in the patch such as
>> mask_irq:
>>              s1: spin_lock_irqsave(&iosapic_lock, flags);
>>              s2: xen_iosapic_write();   /* write the RTE to disable this IRQ line */
>>              s3: spin_unlock_irqrestore(&iosapic_lock, flags);
>>      Now suppose DomX is switched out at s3, and device B fires an IRQ
>> at that time. Because the RTE is disabled, DomY can never respond to
>> the IRQ until DomX gets executed again and re-enables the RTE.
>>  This doesn't make sense to me.
> Neither for me.
> However my patch does not allow this behavior: once an IRQ is allocated
> by a domain, it can't be modified by another one.  Again I agree this
> is far from perfect and using an in_flight mechanism is better.
No, I don't think in_flight can help with this if DomX is masking the
machine RTE. The point is that all the real IOSAPIC resources should be
owned by Xen: no partitioning, no sharing.
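For comparison, here is a minimal sketch (all names are made up; this is
not actual Xen code) of the kind of Xen-owned, in_flight-style accounting
that makes sharing safe: only the hypervisor touches the machine RTE, it
masks the line while deliveries are outstanding, and it unmasks only
after every guest sharing the line has acknowledged.

    /* Sketch with assumed names: per machine-IRQ state kept inside Xen.
     * No guest writes the IOSAPIC RTE directly.                          */
    #define MAX_GUESTS_PER_IRQ 8

    struct shared_irq {
        spinlock_t     lock;
        unsigned int   nr_guests;     /* domains bound to this line       */
        unsigned int   in_flight;     /* guests that still owe an EOI     */
        struct domain *guest[MAX_GUESTS_PER_IRQ];
    };

    /* Called from Xen's real interrupt handler for this machine IRQ. */
    static void shared_irq_deliver(struct shared_irq *s, unsigned int machine_irq)
    {
        unsigned int i;

        spin_lock(&s->lock);
        machine_mask_irq(machine_irq);        /* Xen masks the RTE, not a guest */
        for (i = 0; i < s->nr_guests; i++) {
            s->in_flight++;
            send_guest_pirq_event(s->guest[i], machine_irq); /* assumed helper */
        }
        spin_unlock(&s->lock);
    }

    /* Called when one guest EOIs its virtual copy of the interrupt. */
    static void shared_irq_guest_eoi(struct shared_irq *s, unsigned int machine_irq)
    {
        spin_lock(&s->lock);
        if (--s->in_flight == 0)
            machine_unmask_irq(machine_irq);  /* re-enable only when all are done */
        spin_unlock(&s->lock);
    }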

> 
>> 3: Another major issue is that there is no easy way in the future to
>> add IRQ sharing support based on that patch. That is why I want to let
>> the hypervisor own the IOSAPIC exclusively, with guests purely based on
>> a software mechanism: event channels.
> I don't think IRQ sharing requires event channels.  This can also be
> done using the current IRQ delivery.

I don't know what "IRQ delivery" means here.

> 
>> 4: More new hypercalls are introduced, and more calls to the hypervisor.
> Only the physdev_op hypercall is added, but it is also used on x86 to
> set up the IOAPIC.  You can't avoid it.

Initialization time is fine no matter what approach we take; runtime is
what matters. I saw a lot of hypercalls for RTE writes.
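A rough, hypothetical illustration of the runtime concern (the command,
struct and helper names here are assumptions, not the actual interface of
either tree): with a virtualized IOSAPIC every guest mask/unmask of a
line becomes a hypercall that rewrites the RTE, whereas in the event
channel model the guest's hot path only flips bits in shared memory and
Xen alone writes the RTE.

    /* Hypothetical: each RTE update is a trap into the hypervisor. */
    static void viosapic_mask_irq(unsigned int irq)
    {
        struct physdev_apic op = {
            .apic_physbase = iosapic_base_paddr(irq),   /* assumed helper   */
            .reg           = iosapic_rte_index(irq),    /* assumed helper   */
            .value         = RTE_MASK_BIT,              /* assumed constant */
        };
        HYPERVISOR_physdev_op(PHYSDEVOP_apic_write, &op); /* one hypercall per write */
    }

    /* Event channel model: masking is a shared-memory bit, no hypercall. */
    static void evtchn_mask_port(struct shared_info *s, unsigned int port)
    {
        set_bit(port, (unsigned long *)&s->evtchn_mask[0]);
    }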

> 
> Additional calls to the hypervisor are for reading or writing the IVR,
> EOI and TPR.  I really think this is fast using hyper-privops.
> 
>>> The current ia64 model is well tested too and seems efficient too
>>> (according to Dan's measurements).
>> 
>> Yes, Xen/IA64 can be said to have undergone some level of testing,
>> although domU is still not that stable.
> Maybe because domU does not have pirqs :-)
> 
>> But the vIOSAPIC is totally new for VMs and is not well tested.
> Whatever we do, Xen will control the IOSAPICs.  For sure my patch is
> not well tested, but it is simple enough.

We should not take this risk for a patch with a one-month lifecycle.

> 
>>  On the other hand, the event channel based approach is well tested
>> in Xen, with real deployments by customers.
> Correct, but it won't just drag and drop onto ia64.

No, all of that code is Xen-common on the para-guest side; you don't need
to drag and drop anything. What's more, the patch based on event channels
will be smaller than your previous patch, i.e. less modification to
xenlinux. BTW, thanks to the virtual driver support, all the Xen event
channel related files are already imported into xen/ia64. The evtchn.c
file contains all the guest IRQ virtualization code. You don't need to
add new code; the only modification is to the initialization code.
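To make the "only initialization changes" point concrete, here is a rough
paraphrase of the event demultiplexing loop in the common xenlinux
evtchn.c (array and function names are approximate): pending event
channel ports are mapped back to guest IRQs and handed to the normal IRQ
layer, which is why PIRQ, VIRQ and IPI delivery all live in that one file
and the ia64 port mainly needs to hook it up at init time.

    /* Rough paraphrase of the common evtchn.c upcall (names approximate). */
    void evtchn_do_upcall(struct pt_regs *regs)
    {
        struct shared_info *s = HYPERVISOR_shared_info;
        struct vcpu_info *v = &s->vcpu_info[smp_processor_id()];
        unsigned long l1, l2;
        unsigned int l1i, l2i, port;

        v->evtchn_upcall_pending = 0;
        l1 = xchg(&v->evtchn_pending_sel, 0);     /* which groups of ports fired */
        while (l1) {
            l1i = __ffs(l1);
            l1 &= ~(1UL << l1i);
            l2 = s->evtchn_pending[l1i] & ~s->evtchn_mask[l1i];
            while (l2) {
                l2i = __ffs(l2);
                l2 &= ~(1UL << l2i);
                port = (l1i * BITS_PER_LONG) + l2i;
                do_IRQ(evtchn_to_irq[port], regs); /* back to the guest IRQ layer */
            }
        }
    }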

> 
>> And the event channel code is always there, even now, no matter
>> whether you call it once or 100 times.
> Yes, but event channels are not yet bound to IRQs.
What do you mean here? Event channels are built on the callback; PIRQ,
VIRQ and IPI are ultimately built on event channels. What would the
reverse mean?


Eddie

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
