Re: [PATCH v1 16/27] xen/riscv: implement IRQ mapping for device passthrough
On 14.04.2026 13:29, Oleksii Kurochko wrote:
> On 4/2/26 2:22 PM, Jan Beulich wrote:
>> On 10.03.2026 18:08, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/include/asm/setup.h
>>> +++ b/xen/arch/riscv/include/asm/setup.h
>>> @@ -5,6 +5,10 @@
>>>
>>> #include <xen/types.h>
>>>
>>> +struct domain;
>>> +struct dt_device_node;
>>> +struct rangeset;
>>> +
>>> #define max_init_domid (0)
>>>
>>> void setup_mm(void);
>>> @@ -13,6 +17,19 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len);
>>>
>>> void init_csr_masks(void);
>>>
>>> +/* TODO: move somewhere to common header? */
>>
>> Counter question: Why ...
>>
>>> +/*
>>> + * Retrieves the interrupts configuration from a device tree node and maps
>>> + * those interrupts to the target domain.
>>> + *
>>> + * Returns:
>>> + * < 0 error
>>> + * 0 success
>>> + */
>>> +int map_device_irqs_to_domain(struct domain *d, struct dt_device_node *dev,
>>> +                              bool need_mapping,
>>> +                              struct rangeset *irq_ranges);
>>
>> ... is this not an inline function, when ...
>>
>>> --- a/xen/arch/riscv/intc.c
>>> +++ b/xen/arch/riscv/intc.c
>>> @@ -79,3 +79,11 @@ int __init intc_make_domu_dt_node(const struct kernel_info *kinfo)
>>>
>>> return -ENOSYS;
>>> }
>>> +
>>> +int map_device_irqs_to_domain(struct domain *d, struct dt_device_node *dev,
>>> +                              bool need_mapping,
>>> +                              struct rangeset *irq_ranges)
>>> +{
>>> +    return d->arch.vintc->ops->map_device_irqs_to_domain(d, dev, need_mapping,
>>> +                                                          irq_ranges);
>>> +}
>>
>> ... it's merely a wrapper around an indirect function call? And then the
>> function isn't used anywhere anyway.
>
> It is used by the dom0less common code, and it is a wrapper because Arm has a
> different implementation and doesn't have map_device_irqs_to_domain() in its
> virtual interrupt controller operations.
But the question wasn't why this is a wrapper, but why this wrapper isn't an
inline function.
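For illustration only, the wrapper expressed as a static inline in the header
might look roughly like this (a sketch; it assumes struct domain and the vintc
ops are fully visible to every includer of setup.h, which may itself be the
reason for keeping it out of line):

    /* Sketch only: same body as the out-of-line wrapper quoted above. */
    static inline int map_device_irqs_to_domain(struct domain *d,
                                                struct dt_device_node *dev,
                                                bool need_mapping,
                                                struct rangeset *irq_ranges)
    {
        return d->arch.vintc->ops->map_device_irqs_to_domain(d, dev,
                                                             need_mapping,
                                                             irq_ranges);
    }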
>>> +int vaplic_map_device_irqs_to_domain(struct domain *d,
>>> +                                     struct dt_device_node *dev,
>>> +                                     bool need_mapping,
>>> +                                     struct rangeset *irq_ranges)
>>> +{
>>> +    unsigned int i, nirq;
>>> +    int res, irq;
>>> +    struct dt_raw_irq rirq;
>>> +    uint32_t *auth_irq_bmp = d->arch.vintc->private;
>>> +    unsigned int reg_num;
>>> +
>>> +    nirq = dt_number_of_irq(dev);
>>> +
>>> +    /* Give permission and map IRQs */
>>> +    for ( i = 0; i < nirq; i++ )
>>> +    {
>>> +        res = dt_device_get_raw_irq(dev, i, &rirq);
>>> +        if ( res )
>>> +        {
>>> +            printk(XENLOG_ERR "Unable to retrieve irq %u for %s\n",
>>> +                   i, dt_node_full_name(dev));
>>> +            return res;
>>> +        }
>>> +
>>> +        /*
>>> +         * Don't map IRQ that have no physical meaning
>>> +         * ie: IRQ whose controller is not APLIC/IMSIC/PLIC.
>>> +         */
>>> +        if ( rirq.controller != dt_interrupt_controller )
>>> +        {
>>> +            dt_dprintk("irq %u not connected to primary controller."
>>> +                       "Connected to %s\n", i,
>>> +                       dt_node_full_name(rirq.controller));
>>> +            continue;
>>> +        }
>>> +
>>> +        irq = platform_get_irq(dev, i);
>>> +        if ( irq < 0 )
>>> +        {
>>> +            printk("Unable to get irq %u for %s\n", i, dt_node_full_name(dev));
>>> +            return irq;
>>> +        }
>>> +
>>> +        res = irq_permit_access(d, irq);
>>> +        if ( res )
>>> +        {
>>> +            printk(XENLOG_ERR "Unable to permit to %pd access to IRQ %u\n", d,
>>> +                   irq);
>>
>> This time the other way around: %d please with plain int. (Again at least
>> once further down.)
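I.e. presumably something along the lines of (irq being a plain int):

    printk(XENLOG_ERR "Unable to permit to %pd access to IRQ %d\n", d, irq);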
>>
>>> +            return res;
>>> +        }
>>> +
>>> +        reg_num = irq / APLIC_NUM_REGS;
>>> +
>>> +        if ( is_irq_shared_among_domains(d, irq) )
>>> +        {
>>> +            printk("%s: Shared IRQ isn't supported\n", __func__);
>>> +            return -EINVAL;
>>> +        }
>>> +
>>> +        auth_irq_bmp[reg_num] |= BIT(irq % APLIC_NUM_REGS, U);
>>
>> ... all of this leaves me with the impression that IRQ numbering isn't really
>> virtualized. IRQs are merely split into groups, one group per domain (and
>> maybe some unused). How are you going to fit in truly virtual IRQs?
>
> What do you mean by truly virtual IRQs?
Ones where no aspects are represented by any piece of hardware.
> I can't fully agree that the current approach doesn't use virtual IRQs.
> Yes, they are mapped 1:1, but on the other hand Xen is responsible for
> assigning an IRQ number to the guest's device, and for ensuring the guest
> doesn't try to reach an IRQ which doesn't belong to it.
In a non-virtualized environment I expect IRQs are going to be "sparse"
(i.e. with perhaps large blocks of items used elsewhere). If you had
proper translation of IRQ numbers, the same could be true for your
guests.
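Purely as an illustration of such translation (hypothetical names; nothing in
this series provides it), a per-domain table could give guests a dense IRQ
space while the physical numbers stay sparse:

    /* Hypothetical sketch, not part of this patch: per-domain virq -> pirq map. */
    struct vintc_irq_map {
        unsigned int nr_virqs;       /* size of the guest-visible IRQ space */
        unsigned int *pirq_of_virq;  /* pirq_of_virq[virq] = physical IRQ, 0 = unused */
    };

    static int virq_to_pirq(const struct vintc_irq_map *map, unsigned int virq)
    {
        if ( virq >= map->nr_virqs || !map->pirq_of_virq[virq] )
            return -EINVAL;

        return map->pirq_of_virq[virq];
    }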
>>> +        dt_dprintk(" - IRQ: %u\n", irq);
>>> +
>>> +        if ( irq_ranges )
>>> +        {
>>> +            res = rangeset_add_singleton(irq_ranges, irq);
>>> +            if ( res )
>>> +                return res;
>>> +        }
>>
>> What is irq_ranges?
>
> IIUC, based on the Arm code, irq_ranges is an optional output accumulator: the
> caller allocates and passes it in when it needs to track which IRQs were
> mapped (overlay use case), or passes NULL when that tracking isn't needed.
>
> I added it here as map_device_irqs_to_domain() is called from the common
> code, so maybe one day someone will decide to pass irq_ranges to this
> function. At the moment, RISC-V has only one user of
> map_device_irqs_to_domain(), and it passes NULL.
Simply assert then that it's NULL?
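I.e. something like the following at the top of the function, until a RISC-V
caller actually passes a rangeset:

    /* No RISC-V caller passes a rangeset yet; catch anyone who starts to. */
    ASSERT(!irq_ranges);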
>>> @@ -34,6 +142,7 @@ static int __init cf_check vcpu_vaplic_init(struct vcpu *v)
>>>
>>> static const struct vintc_ops vaplic_ops = {
>>>     .vcpu_init = vcpu_vaplic_init,
>>> +    .map_device_irqs_to_domain = vaplic_map_device_irqs_to_domain,
>>> };
>>
>> What about the inverse function, needed for domain cleanup?
>
> I planned to add it when it is really needed. At the moment, I
> don't have such a use case.
I.e. if any domain needs re-starting, the entire system needs rebooting?
Recall that "dom0less" is slightly misleading a name, as it only allows
there to not be a Dom0. One can be there, and hence re-starting a crashed
domain ought to be possible. For that, you need to correctly clean up
after the crashed one.
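For illustration, such cleanup would presumably mirror the mapping loop and
revoke what was granted (a sketch only, with a hypothetical name; the
per-domain auth_irq_bmp bits would need clearing as well):

    /* Hypothetical sketch of the inverse operation for domain cleanup. */
    int unmap_device_irqs_from_domain(struct domain *d, struct dt_device_node *dev)
    {
        unsigned int i, nirq = dt_number_of_irq(dev);

        for ( i = 0; i < nirq; i++ )
        {
            int irq = platform_get_irq(dev, i);
            int res;

            if ( irq < 0 )
                continue;

            res = irq_deny_access(d, irq);
            if ( res )
                return res;
        }

        return 0;
    }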
Jan