RE: [Xen-devel] [PATCH] Per vcpu IO evtchn patch for HVM domain
>> Per vcpu IO evtchn patch for HVM domain.
>> We are starting to send patches to support SMP VMX guests; the SVM
>> side should have a test to see whether this patch breaks anything
>> there.
>
>Can you explain the bind_interdomain logic? Looks as though both the
>device model *and* Xen are doing bind_interdomain now? I'd prefer to
>do it just in the device model, especially since you had to punch a
>hole through to evtchn_bind_vcpu() to be able to do it within Xen!
>
For the bind_interdomain logic, I think it is almost the same as the
current two-step binding; no Xen hypervisor code is changed for this
part.
1) The current code allocates an *unbound* port for the VMX domain in
Python code (image.py), which then calls xc_hvm_build with that port as
a parameter. My patch simply moves the allocation into xc_hvm_build, so
there is no need to pass the port parameter; otherwise we would have to
pass an array of unbound ports, one per vcpu, to xc_hvm_build.
2) The logic in the device model is almost unchanged: it used to bind
the previously allocated unbound port to a dom0 port; my patch turns
this into a loop that does the binding for every vcpu. (A sketch of
both steps follows this list.)
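To make the two steps concrete, a rough sketch. This is not the actual
patch: the helper names, the do_evtchn_op() wrapper standing in for the
real hypercall path, and the error handling are placeholders; only the
EVTCHNOP_* commands and the struct fields are from the public
event_channel.h interface.

/* Hypothetical wrapper for whatever hypercall path the caller uses,
 * e.g. HYPERVISOR_event_channel_op() or the libxc equivalent. */
extern int do_evtchn_op(int cmd, void *arg);

/* Step 1, builder side (conceptually inside xc_hvm_build): allocate
 * one unbound port per vcpu in the HVM domain; dm_dom (normally dom0,
 * where the device model runs) is the only domain allowed to bind. */
static int alloc_io_ports(domid_t hvm_dom, domid_t dm_dom,
                          unsigned int nr_vcpus, evtchn_port_t *ports)
{
    unsigned int i;
    for ( i = 0; i < nr_vcpus; i++ )
    {
        struct evtchn_alloc_unbound arg = {
            .dom        = hvm_dom,
            .remote_dom = dm_dom,
        };
        if ( do_evtchn_op(EVTCHNOP_alloc_unbound, &arg) != 0 )
            return -1;
        ports[i] = arg.port;    /* OUT: the new unbound port */
    }
    return 0;
}

/* Step 2, device model side: instead of a single bind_interdomain,
 * loop over the vcpus, binding each remote port to a new local one. */
static int bind_io_ports(domid_t hvm_dom, unsigned int nr_vcpus,
                         const evtchn_port_t *remote_ports,
                         evtchn_port_t *local_ports)
{
    unsigned int i;
    for ( i = 0; i < nr_vcpus; i++ )
    {
        struct evtchn_bind_interdomain arg = {
            .remote_dom  = hvm_dom,
            .remote_port = remote_ports[i],
        };
        if ( do_evtchn_op(EVTCHNOP_bind_interdomain, &arg) != 0 )
            return -1;
        local_ports[i] = arg.local_port;    /* OUT: our local end */
    }
    return 0;
}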
bind_interdomain binds a port to vcpu0 by default, so to let the device
model notify each individual vcpu of the VMX domain, it seems I have to
call evtchn_bind_vcpu in vmx_do_launch if we keep the current event
channel interface, as sketched below. Any comments?
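For reference, the shape of the call I mean. This is a sketch only:
the field holding the per-vcpu port is hypothetical, and I'm assuming
evtchn_bind_vcpu() takes the same argument as the EVTCHNOP_bind_vcpu
hypercall; this internal call is exactly the hole you objected to.

/* In the per-vcpu launch path (vmx_do_launch), re-route this vcpu's
 * IO event channel from the default vcpu0 to the vcpu itself. */
struct evtchn_bind_vcpu bind = {
    .port = d->arch.io_evtchn[v->vcpu_id],  /* hypothetical field */
    .vcpu = v->vcpu_id,
};
evtchn_bind_vcpu(&bind);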
Did I understand your question correctly?
Thanks
-Xin
> -- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel