xen-devel

RE: [Xen-devel] RE: Rather slow time of Pin in Windows with GPL PVdriver

To: Pasi Kärkkäinen <pasik@xxxxxx>
Subject: RE: [Xen-devel] RE: Rather slow time of Pin in Windows with GPL PVdriver
From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
Date: Fri, 11 Mar 2011 09:53:45 +0000
Accept-language: en-US
Acceptlanguage: en-US
Cc: MaoXiaoyun <tinnycloud@xxxxxxxxxxx>, James Harper <james.harper@xxxxxxxxxxxxxxxx>, xen devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 11 Mar 2011 01:54:31 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110310182259.GG5345@xxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D01C55E87@trantor> <291EDFCB1E9E224A99088639C47620228E936E1A88@xxxxxxxxxxxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01C55E8E@trantor> <291EDFCB1E9E224A99088639C47620228E936E1A9A@xxxxxxxxxxxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01C55E91@trantor> <291EDFCB1E9E224A99088639C47620228E936E1AA0@xxxxxxxxxxxxxxxxxxxxxxxxx> <20110310182259.GG5345@xxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcvfUD1JqX6k3Vk7RoK63NP2k3k7OgAgZFjg
Thread-topic: [Xen-devel] RE: Rather slow time of Pin in Windows with GPL PVdriver
I did post a patch ages ago. It was deemed a bit too hacky. I think it would 
probably be better to re-examine the way Windows PV drivers are handling 
interrupts. It would be much nicer if we could properly bind event channels 
across all our vCPUs; we may be able to leverage what Stefano did for Linux 
PV-on-HVM.

  Paul
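
For context, the hypercall interface Xen already exposes for re-binding an event channel's delivery vCPU is EVTCHNOP_bind_vcpu. A minimal sketch of how a PV driver could spread its ports across vCPUs follows; the struct and op number mirror xen/include/public/event_channel.h, but the hypercall_event_channel_op wrapper and the port-distribution helper are illustrative, not code from the GPL PV drivers, and on HVM this only pays off once per-vCPU callback delivery (as in Stefano's PV-on-HVM work) is in place.

/* Illustrative only: re-bind each event-channel port to a different
 * vCPU with EVTCHNOP_bind_vcpu.  The struct and op number mirror
 * xen/include/public/event_channel.h; the hypercall wrapper name is
 * assumed, not taken from the GPL PV drivers. */
#include <stdint.h>

typedef uint32_t evtchn_port_t;

#define EVTCHNOP_bind_vcpu 8      /* as in the public headers */

struct evtchn_bind_vcpu {
    evtchn_port_t port;           /* IN: event channel to re-bind   */
    uint32_t      vcpu;           /* IN: vCPU that should receive it */
};

/* Assumed: a wrapper the driver already has for the event_channel_op
 * hypercall. */
extern int hypercall_event_channel_op(int cmd, void *arg);

static int bind_port_to_vcpu(evtchn_port_t port, uint32_t vcpu)
{
    struct evtchn_bind_vcpu op;

    op.port = port;
    op.vcpu = vcpu;
    return hypercall_event_channel_op(EVTCHNOP_bind_vcpu, &op);
}

/* e.g. one port per queue/device, spread round-robin over the vCPUs */
static void distribute_ports(const evtchn_port_t *ports, unsigned int count,
                             unsigned int num_vcpus)
{
    unsigned int i;

    for (i = 0; i < count; i++)
        bind_port_to_vcpu(ports[i], i % num_vcpus);
}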

> -----Original Message-----
> From: Pasi Kärkkäinen [mailto:pasik@xxxxxx]
> Sent: 10 March 2011 18:23
> To: Paul Durrant
> Cc: James Harper; MaoXiaoyun; xen devel
> Subject: Re: [Xen-devel] RE: Rather slow time of Pin in Windows with
> GPL PVdriver
> 
> On Thu, Mar 10, 2011 at 11:05:56AM +0000, Paul Durrant wrote:
> > It's kind of pointless because you're always having to go to vCPU0's
> > shared info for the event info, so you're just going to keep
> > ping-ponging it between caches all the time. The same holds true of
> > data you access in your DPC if it's constantly moving around. Better
> > IMO to keep locality by default and distribute DPCs accessing
> > distinct data explicitly.
> >
> 
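For readers following along, the WDK routines behind the locality and importance points above are KeSetTargetProcessorDpc and KeSetImportanceDpc. Below is a minimal sketch of binding a DPC to one CPU while raising its importance so a remotely targeted DPC is still dispatched promptly; only the Ke* routines are real, everything else is made up for the example.

/* Illustrative sketch (WDK): bind an event-channel DPC to one CPU for
 * cache locality and raise its importance so that, when it is targeted
 * at a CPU other than the one queueing it, it is still dispatched
 * promptly. */
#include <ntddk.h>

typedef struct _EVT_CHANNEL {
    KDPC Dpc;
    /* ... per-channel state the DPC touches ... */
} EVT_CHANNEL;

static VOID EvtChannelDpcRoutine(PKDPC Dpc, PVOID Context,
                                 PVOID Arg1, PVOID Arg2)
{
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);
    /* Process pending events for this channel; the data it touches
     * stays on the bound CPU, which is the locality point above. */
}

static VOID EvtChannelBindDpc(EVT_CHANNEL *Channel, CCHAR Cpu)
{
    KeInitializeDpc(&Channel->Dpc, EvtChannelDpcRoutine, Channel);

    /* Always run this channel's DPC on the same CPU. */
    KeSetTargetProcessorDpc(&Channel->Dpc, Cpu);

    /* A DPC queued to another processor at default (medium) importance
     * may wait for that CPU's next natural DPC drain; HighImportance
     * requests prompt dispatch, which is the caveat about bound DPCs
     * and importance mentioned further down the thread. */
    KeSetImportanceDpc(&Channel->Dpc, HighImportance);
}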
> Should this patch be upstreamed then?
> 
> -- Pasi
> 
> >   Paul
> >
> > > -----Original Message-----
> > > From: James Harper [mailto:james.harper@xxxxxxxxxxxxxxxx]
> > > Sent: 10 March 2011 10:41
> > > To: Paul Durrant; MaoXiaoyun
> > > Cc: xen devel
> > > Subject: RE: [Xen-devel] RE: Rather slow time of Pin in Windows with
> > > GPL PVdriver
> > >
> > > >
> > > > Yeah, you're right. We have a patch in XenServer to just use the
> > > > lowest numbered vCPU but in unstable it still pointlessly round
> > > > robins. Thus, if you bind DPCs and don't set their importance up
> > > > you will end up with them not being immediately scheduled quite a
> > > > lot of the time.
> > > >
> > >
> > > You say "pointlessly round robins"... why is the behaviour
> > > considered pointless? (assuming you don't use bound DPCs)
> > >
> > > I'm looking at my networking code, and if I could schedule DPCs on
> > > processors on a round-robin basis (e.g. because the IRQs are
> > > submitted on a round-robin basis), one CPU could grab the rx ring
> > > lock, pull the data off the ring into local buffers, release the
> > > lock, then process the local buffers (build packets, submit to NDIS,
> > > etc.). While the first CPU is processing packets, another CPU can
> > > then start servicing the ring too.
> > >
> > > If Xen is changed to always send the IRQ to CPU zero then I'd have
> > > to start round-robining DPCs myself if I wanted to do it that
> > > way...
> > >
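A rough sketch of the round-robin scheme James describes above is shown here, using one KDPC per CPU because a single KDPC object cannot be queued on two processors at once; every structure and routine name is invented for illustration and is not GPL PV driver code.

/* Illustrative sketch: one pre-targeted DPC per CPU, with the
 * event/interrupt handler queueing the next DPC on the next CPU in
 * round-robin order. */
#include <ntddk.h>

typedef struct _RX_QUEUE {
    KDPC  Dpc[MAXIMUM_PROCESSORS];  /* one pre-targeted DPC per CPU */
    LONG  NextCpu;                  /* round-robin cursor           */
    ULONG CpuCount;
    /* ... rx ring, ring lock, local buffer lists ... */
} RX_QUEUE;

static VOID RxDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);
    /* Take the ring lock, pull entries into local buffers, drop the
     * lock, then build packets and indicate them to NDIS -- so another
     * CPU's DPC can start servicing the ring in the meantime. */
}

static VOID RxQueueInitDpcs(RX_QUEUE *Rx)
{
    CCHAR cpu;

    Rx->NextCpu  = -1;
    Rx->CpuCount = KeQueryActiveProcessorCount(NULL);

    for (cpu = 0; cpu < (CCHAR)Rx->CpuCount; cpu++) {
        KeInitializeDpc(&Rx->Dpc[cpu], RxDpcRoutine, Rx);
        KeSetTargetProcessorDpc(&Rx->Dpc[cpu], cpu);
    }
}

/* Called from the interrupt/event handler: hand the next rx DPC to the
 * next CPU so a second CPU can start draining the ring while the first
 * is still building packets. */
static VOID RxScheduleNextDpc(RX_QUEUE *Rx)
{
    ULONG cpu = (ULONG)InterlockedIncrement(&Rx->NextCpu) % Rx->CpuCount;

    KeInsertQueueDpc(&Rx->Dpc[cpu], NULL, NULL);
}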
> > > Currently I'm suffering a bit from the small ring sizes not being
> > > able to hold enough buffers to keep packets flowing quickly in all
> > > situations.
> > >
> > > James
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
