This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] RE: Rather slow time of Ping in Windows with GPLPVdriver

To: James Harper <james.harper@xxxxxxxxxxxxxxxx>, Pasi Kärkkäinen <pasik@xxxxxx>
Subject: RE: [Xen-devel] RE: Rather slow time of Ping in Windows with GPLPVdriver
From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
Date: Mon, 14 Mar 2011 10:22:34 +0000
Accept-language: en-US
Cc: MaoXiaoyun <tinnycloud@xxxxxxxxxxx>, xen devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 14 Mar 2011 03:22:59 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AEC6C66638C05B468B556EA548C1A77D01C55F23@trantor>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D01C55E87@trantor> <291EDFCB1E9E224A99088639C47620228E936E1A88@xxxxxxxxxxxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01C55E8E@trantor> <291EDFCB1E9E224A99088639C47620228E936E1A9A@xxxxxxxxxxxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01C55E91@trantor> <291EDFCB1E9E224A99088639C47620228E936E1AA0@xxxxxxxxxxxxxxxxxxxxxxxxx> <20110310182259.GG5345@xxxxxxxxxxx> <291EDFCB1E9E224A99088639C47620228E936E1B27@xxxxxxxxxxxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01C55F23@trantor>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcvfUD1JqX6k3Vk7RoK63NP2k3k7OgAgZFjgAH/HdsAAGAiccA==
Thread-topic: [Xen-devel] RE: Rather slow time of Ping in Windows with GPLPVdriver
Nope, limiting the affinity mask before your IoConnectInterrupt(Ex) call will 
work just fine, although you do risk Windows not giving you an interrupt if it 
decides, for some reason, that it's out of vectors on CPU0. Pretty small risk 
though, given that it's shareable :-)
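For reference, a minimal sketch of what this would look like with the legacy
IoConnectInterrupt path: the driver intersects the translated affinity from its
PnP resources with a one-bit mask for CPU0 before connecting. Function and
variable names here are illustrative, not taken from the actual GPLPV source.

```c
/* Sketch only (WDM, legacy IoConnectInterrupt): restrict the interrupt
 * to CPU0 via ProcessorEnableMask. Vector/Irql/Affinity come from the
 * translated resources the PnP manager hands the platform PCI driver.
 */
#include <wdm.h>

static KSPIN_LOCK IsrLock;

static BOOLEAN EvtchnIsr(PKINTERRUPT Interrupt, PVOID Context)
{
    UNREFERENCED_PARAMETER(Interrupt);
    UNREFERENCED_PARAMETER(Context);
    /* ... scan event channel pending bits, queue a DPC ... */
    return TRUE; /* claim the (shared) interrupt */
}

NTSTATUS ConnectEvtchnInterruptOnCpu0(PKINTERRUPT *InterruptObject,
                                      ULONG Vector, KIRQL Irql,
                                      KAFFINITY Affinity)
{
    /* Intersect the translated affinity with CPU0 only. This can fail
     * if Windows has no free vector on CPU0, but the risk is small
     * since the line is shareable. */
    KAFFINITY cpu0Mask = Affinity & ((KAFFINITY)1 << 0);

    KeInitializeSpinLock(&IsrLock);
    return IoConnectInterrupt(InterruptObject,
                              EvtchnIsr,
                              NULL,           /* ServiceContext */
                              &IsrLock,
                              Vector,
                              Irql,
                              Irql,           /* SynchronizeIrql */
                              LevelSensitive,
                              TRUE,           /* ShareVector */
                              cpu0Mask,
                              FALSE);         /* FloatingSave */
}
```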


> -----Original Message-----
> From: James Harper [mailto:james.harper@xxxxxxxxxxxxxxxx]
> Sent: 13 March 2011 23:44
> To: Paul Durrant; Pasi Kärkkäinen
> Cc: MaoXiaoyun; xen devel
> Subject: RE: [Xen-devel] RE: Rather slow time of Ping in Windows
> with GPLPVdriver
> >
> > I did post a patch ages ago. It was deemed a bit too hacky. I
> > think it would probably be better to re-examine the way Windows
> > PV drivers are handling interrupts. It would be much nicer if we
> > could properly bind event channels across all our vCPUs; we may
> > be able to leverage what Stefano did for Linux PV-on-HVM.
> >
> What would also be nice is to have multiple interrupts attached to
> the platform PCI driver, the ability to bind events to a specific
> interrupt, and control over the affinity of each interrupt.
> Another idea would be for each xenbus device to hotplug a new PCI
> device with its own interrupt. That only works for OSes that
> support PCI hotplug, though...
> MSI interrupts might be another way of conveying event channel
> information as part of the interrupt, but I don't know enough about
> how MSI works to know if that is possible. I believe you still need
> one IRQ per 'message id', so you're back to my first wish item.
> Under Windows, if we set the affinity of the platform PCI IRQ to
> CPU0, will that do the job (bind the IRQ to CPU0), or are there
> inefficiencies in doing that?
> James

Xen-devel mailing list
