WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-devel

RE: [Xen-devel] [PATCH] Per vcpu IO evtchn patch for HVM domain

To: "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Per vcpu IO evtchn patch for HVM domain
From: "Li, Xin B" <xin.b.li@xxxxxxxxx>
Date: Thu, 23 Feb 2006 05:38:39 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 22 Feb 2006 21:39:23 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcY34GdxeSsZ78rnTjWtYDygLHNSWAAE2EEw
Thread-topic: [Xen-devel] [PATCH] Per vcpu IO evtchn patch for HVM domain
>> Per vcpu IO evtchn patch for HVM domain.
>> We are starting to send patches to support SMP VMX guest, 
>for SVM side,
>> should have a test to see if this patch breaks anything there.
>
>Can you explain the bind_interdomain logic? Looks as though both the 
>device model *and* Xen are doing bind_interdomain now? I'd 
>prefer to do 
>it just in the device model, especially since you had to punch a hole 
>through to evtchn_bind_vcpu() to be able to do it within Xen!
>

For the bind_interdomain logic, I think it is almost the same as the
current two-step binding; no hypervisor code changes are needed for it.
1) The current code allocates an *unbound* port for the VMX domain in
Python code (image.py), which soon after passes this port as a
parameter to xc_hvm_build. My patch just moves this allocation into
xc_hvm_build itself. That way there is no need to pass the port
parameter at all; otherwise we would have to pass an array of unbound
ports, one per vcpu, to xc_hvm_build.
2) The logic in the device model is also almost unchanged: it still
binds the previously allocated unbound port to a dom0 port; my patch
just turns this into a loop that performs the binding for each vcpu.
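The two steps above can be sketched as follows. This is only a
schematic model of the flow; the names alloc_unbound_port and
bind_interdomain stand in for the real Xen event-channel operations
and are not the actual hypercall API.

```python
# Schematic model of the per-vcpu two-step binding described above.
# Illustrative only: these helpers are NOT the real Xen interfaces.

class EventChannelModel:
    def __init__(self):
        self.next_port = 1
        self.bindings = {}  # guest port -> (dom0 port, vcpu)

    def alloc_unbound_port(self, vcpu):
        # Step 1: in the patch this happens once per vcpu inside
        # xc_hvm_build, instead of once (for vcpu0) in image.py.
        port = self.next_port
        self.next_port += 1
        self.bindings[port] = (None, vcpu)
        return port

    def bind_interdomain(self, guest_port, dom0_port):
        # Step 2: the device model binds each pre-allocated
        # unbound port to a dom0 port, now in a loop over vcpus.
        _, vcpu = self.bindings[guest_port]
        self.bindings[guest_port] = (dom0_port, vcpu)

num_vcpus = 4
evtchn = EventChannelModel()
# xc_hvm_build allocates one unbound IO port per vcpu.
io_ports = [evtchn.alloc_unbound_port(v) for v in range(num_vcpus)]
# The device model's binding step becomes a per-vcpu loop.
for v, p in enumerate(io_ports):
    evtchn.bind_interdomain(p, dom0_port=100 + v)
```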


Bind_interdomain binds a port to vcpu0 by default. To notify the
different vcpus of the VMX domain from the device model, it seems I
have to call evtchn_bind_vcpu in vmx_do_launch if I only use the
current event channel interface. Any comments?
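To illustrate the re-binding concern: since bind_interdomain attaches
each port to vcpu0, every per-vcpu IO port must then be moved to its
target vcpu, which is what an evtchn_bind_vcpu call from vmx_do_launch
would do. Again, a hypothetical sketch, not the hypervisor code:

```python
# port -> bound vcpu; bind_interdomain leaves everything on vcpu0.
ports = {1: 0, 2: 0, 3: 0, 4: 0}

def evtchn_bind_vcpu(port, vcpu):
    # Stand-in for the real EVTCHNOP_bind_vcpu operation.
    ports[port] = vcpu

# vmx_do_launch would re-bind vcpu N's IO port to vcpu N.
for vcpu, port in enumerate(ports):
    evtchn_bind_vcpu(port, vcpu)
```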

Have I understood your question correctly?

Thanks
-Xin



>  -- Keir
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel