This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] increase evtchn limits

To: "Xen-devel@xxxxxxxxxxxxxxxxxxx" <Xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] increase evtchn limits
From: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Date: Thu, 20 May 2010 20:41:10 -0700
Delivery-date: Thu, 20 May 2010 20:43:46 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Oracle Corporation
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

I'm trying to boot with a lot more than 32 vcpus on this very large box.
I got past the vcpu_info[MAX_VIRT_CPUS] limit by doing the vcpu placement
hypercall in the guest, but now I'm running into the event channel limit
(lots of devices):

       unsigned long evtchn_pending[sizeof(unsigned long) * 8];

which limits my 64-bit dom0 to 4096 channels. The only recourse seems to
be to create a new struct shared_info_v2{} and re-arrange it a bit with a
lot more event channels. Since start_info has a magic field with version
info, I can just check that in the guest and use the new shared_info...
(doing the design on the fly here). I could add a new vcpuop by which the
guest declares it is using the newer version. Or, forgetting a new version
of shared_info{} altogether, I could just put the evtchn bitmaps in my own
mfn and tell the hypervisor to relocate them there (just like vcpu_info
placement does) via a new VCPUOP_ call.

Keir, what do you think?

