xen-devel

RE: [Xen-devel] RE: Rather slow time of Ping in Windows with GPLPV driver

To: "Paul Durrant" <Paul.Durrant@xxxxxxxxxx>, Pasi Kärkkäinen <pasik@xxxxxx>
Subject: RE: [Xen-devel] RE: Rather slow time of Ping in Windows with GPLPV driver
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Mon, 14 Mar 2011 10:43:43 +1100
Cc: MaoXiaoyun <tinnycloud@xxxxxxxxxxx>, xen devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 13 Mar 2011 16:44:44 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <291EDFCB1E9E224A99088639C47620228E936E1B27@xxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D01C55E87@trantor> <291EDFCB1E9E224A99088639C47620228E936E1A88@xxxxxxxxxxxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01C55E8E@trantor> <291EDFCB1E9E224A99088639C47620228E936E1A9A@xxxxxxxxxxxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01C55E91@trantor> <291EDFCB1E9E224A99088639C47620228E936E1AA0@xxxxxxxxxxxxxxxxxxxxxxxxx> <20110310182259.GG5345@xxxxxxxxxxx> <291EDFCB1E9E224A99088639C47620228E936E1B27@xxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcvfUD1JqX6k3Vk7RoK63NP2k3k7OgAgZFjgAH/HdsA=
Thread-topic: [Xen-devel] RE: Rather slow time of Ping in Windows with GPLPV driver
> 
> I did post a patch ages ago. It was deemed a bit too hacky. I think it would
> probably be better to re-examine the way Windows PV drivers are handling
> interrupts. It would be much nicer if we could properly bind event channels
> across all our vCPUs; we may be able to leverage what Stefano did for Linux
> PV-on-HVM.
> 

What would also be nice is to have multiple interrupts attached to the platform 
PCI driver, the ability to bind events to a specific interrupt, and the ability 
to control the affinity of each interrupt.
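
The Xen side of that already has a suitable hypercall; a minimal sketch of 
binding one event channel to a given vCPU (assuming the Linux-style guest 
headers and hypercall wrapper; the port and vcpu values are illustrative, 
not actual GPLPV code):

    /* Sketch only: bind an already-allocated event channel to a vCPU
     * via the existing EVTCHNOP_bind_vcpu hypercall.  Header and
     * wrapper names follow the Linux guest code; error handling is
     * omitted for brevity. */
    #include <xen/interface/event_channel.h>
    #include <asm/xen/hypercall.h>

    static int bind_port_to_vcpu(evtchn_port_t port, unsigned int vcpu)
    {
        struct evtchn_bind_vcpu bind = {
            .port = port,
            .vcpu = vcpu,
        };

        /* Returns 0 on success, negative on failure. */
        return HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind);
    }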

Another idea would be for each xenbus device to hotplug a new PCI device with 
its own interrupt. That only works for OSes that support PCI hotplug, though... 

MSI might be another way of conveying event channel information as part of the 
interrupt, but I don't know enough about how MSI works to say whether that is 
possible. I believe you still need one IRQ per 'message ID', so you're back to 
my first wish-list item.
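
For context on why that matters: today every event funnels through the one 
platform PCI interrupt, and its handler has to demultiplex all pending event 
channels itself. A rough sketch of that loop (loosely modelled on the Linux 
upcall path; the layout is simplified, and 'handlers' is a hypothetical 
per-port callback table, not the real GPLPV structure):

    #define EVTCHN_WORDS 32  /* 32 words x 32 bits = 1024 ports (32-bit ABI) */

    extern volatile unsigned long evtchn_pending[EVTCHN_WORDS];
    extern volatile unsigned long evtchn_mask[EVTCHN_WORDS];
    extern void (*handlers[EVTCHN_WORDS * 32])(unsigned int port);

    static void platform_irq_demux(void)
    {
        unsigned int word;

        for (word = 0; word < EVTCHN_WORDS; word++) {
            unsigned long pending =
                evtchn_pending[word] & ~evtchn_mask[word];

            while (pending != 0) {
                unsigned int bit = __builtin_ctzl(pending);

                pending &= pending - 1;  /* clear lowest set bit locally */
                /* Real code atomically test-and-clears the bit in
                 * shared memory before dispatching. */
                handlers[word * 32 + bit](word * 32 + bit);
            }
        }
    }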

Under Windows, if we set the affinity of the platform PCI IRQ to CPU0, will 
that do the job (bind the IRQ to CPU0), or are there inefficiencies in doing 
that?
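
For what it's worth, the legacy WDM connect call does let you pass a processor 
mask, so pinning to CPU0 should at least be expressible. A hedged sketch (the 
names are placeholders, not the actual GPLPV code; Vector and Irql would come 
from the translated resources at StartDevice):

    #include <wdm.h>

    static PKINTERRUPT PlatformInterrupt;

    /* Placeholder ISR; the real one would demux event channels. */
    static BOOLEAN PlatformIsr(PKINTERRUPT Interrupt, PVOID Context)
    {
        UNREFERENCED_PARAMETER(Interrupt);
        UNREFERENCED_PARAMETER(Context);
        return TRUE;
    }

    static NTSTATUS ConnectPlatformIrq(ULONG Vector, KIRQL Irql)
    {
        return IoConnectInterrupt(&PlatformInterrupt,
                                  PlatformIsr,
                                  NULL,          /* ServiceContext */
                                  NULL,          /* SpinLock: allocated internally */
                                  Vector,
                                  Irql,
                                  Irql,          /* SynchronizeIrql */
                                  LevelSensitive,
                                  TRUE,          /* ShareVector */
                                  (KAFFINITY)1,  /* ProcessorEnableMask: CPU0 only */
                                  FALSE);        /* FloatingSave */
    }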

James


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
