WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-ia64-devel

RE: [Xen-ia64-devel] IOSAPIC virtualisation

To: "Tristan Gingold" <Tristan.Gingold@xxxxxxxx>, "Alex Williamson" <alex.williamson@xxxxxx>
Subject: RE: [Xen-ia64-devel] IOSAPIC virtualisation
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Thu, 9 Feb 2006 18:05:20 +0800
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 09 Feb 2006 10:17:42 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcYr/9XqMnCKRRQ9QCyMu80Mlpt08gBW8/1Q
Thread-topic: [Xen-ia64-devel] IOSAPIC virtualisation
Tristan Gingold wrote:
> On Friday 03 February 2006 at 17:13, Alex Williamson wrote:
>> On Fri, 2006-02-03 at 09:33 +0100, Tristan Gingold wrote: [...]
>>    I agree that we can't hit this problem right now, but it's easy to
>> fix and would be one less thing we might miss when we do enable
>> driver domains.  It looks the block of code to mask the vector could
>> be copied identically into the section to unmask the vector with the
>> appropriate s/mask_vec/unmask_vec and setting of the rte values.  I
>> guess it keeps catching my eye because the mask and unmask are not
>> symmetric.  Thanks,
> 
> Hi,
> 
> I have slightly modified the patch so that it looks almost symmetric.
> 
> Thanks,
> Tristan.

Tristan:
        Great work! Sorry I haven't found time to go through it all.
        A quick question: why do we need to call vcpu_wake() immediately after 
injecting the IRQ? 
        In the Xen design, this API is mainly used for vcpu pause/unpause and 
manual operations, where it is reasonable to disturb or bypass the scheduler's 
decision. That disturbance is costly: the scheduler, when it triggers at the 
next time tick, will return to its normal decision tree, which probably means 
preempting dom0's quantum. What x86 does is wait for the scheduler to make the 
decision. 
        I know the original code also works this way, but it is not an 
architectural requirement. Rather, it is a shortcut from the previous 
implementation, and I think it is time to revise it now.


+xen_reflect_interrupt_to_domains (ia64_vector vector)
+{
+       struct iosapic_intr_info *info = &iosapic_intr_info[vector];
+       struct iosapic_rte_info *rte;
+       int res = 1;
+
+       list_for_each_entry(rte, &info->rtes, rte_list) {
+               if (rte->vcpu != NULL) {
+                       if (rte->vcpu == VCPU_XEN)
+                               res = 0;
+                       else {
+                               /* printf ("Send %d to vcpu as %d\n",
+                                  vector, rte->vec); */
+                               /* FIXME: vcpus should really be
+                                  interrupted.  This currently works
+                                  because only domain0 receives
+                                  interrupts and domain0 runs on CPU#0,
+                                  which receives all the interrupts... */
+                               vcpu_pend_interrupt(rte->vcpu, rte->vcpu_vec);
+                               vcpu_wake(rte->vcpu);
+                       }

        Two other minor comments:
        1: +#define VCPU_XEN ((struct vcpu *)1)    looks strange to me. 
                Furthermore, I'd like to put a bit in the RTE indicating 
ownership of the IRQ; did you consider anything else?
        2: Similar to #1, checking the IRQ vector (if (vector == 
IA64_TIMER_VECTOR || vector == IA64_IPI_VECTOR)) in the following code is too 
hardcoded. Today we have only 2 IRQs in the hypervisor, but we will actually 
need more, such as the platform management interrupts Alex mentioned 
previously for hotplug, or a thermal sensor IRQ, so we don't want to see a 
long list of checks here. My suggestion is to adopt a mechanism similar to 
x86's, i.e. like __do_IRQ_guest in arch/x86/irq.c. The detailed implementation 
can be architecture dependent; x86 uses "desc->status & IRQ_GUEST", but we may 
not.
                Anyway, keeping the capability for a machine IRQ to be bound 
to multiple guests, as x86 does today, would be better, and it is not so 
difficult. You may also be able to reuse some code there :-)


 xen_do_IRQ(ia64_vector vector)
 {
-       if (vector != IA64_TIMER_VECTOR && vector != IA64_IPI_VECTOR) {
-               extern void vcpu_pend_interrupt(void *, int);
+       struct vcpu *vcpu;
+       ia64_vector v;
+
+       /*  Do not reflect special interrupts.  */
+       if (vector == IA64_TIMER_VECTOR || vector == IA64_IPI_VECTOR)
+               return 0;
+

Thx,eddie

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
