On Mon, Jun 04 2007, Rusty Russell wrote:
> On Mon, 2007-06-04 at 15:43 +0200, Jens Axboe wrote:
> > On Mon, Jun 04 2007, Carsten Otte wrote:
> > > Jens Axboe wrote:
> > > >On Fri, Jun 01 2007, Carsten Otte wrote:
> > > >>With regard to compute power needed, almost none. The penalty is
> > > >>latency, not overhead: a small request may sit on the request queue,
> > > >>waiting for more work to arrive, until the queue gets unplugged. This
> > > >>penalty is compensated by the good chance that more requests will be
> > > >>merged during this time period.
> > > >>If we have this mechanism in both host and guest, we pay the penalty
> > > >>twice with no added benefit.
> > > >
> > > >I don't buy that argument. We can easily expose the unplug delay, so
> > > >you can kill it at whatever level you want. Or you could just do it in
> > > >the driver right now, but that is a bit hackish.
> > > That would be preferable if the device driver could choose the unplug
> > > delay, or even better, if it were a (guest) sysfs tunable.
> >
> > Right. We should probably make it sysfs configurable in any case; right
> > now the unplug delay and depth are a (somewhat) policy decision baked
> > into the kernel.
>
> The danger is that it's just another knob no one knows how to use.
> Perhaps simply setting it to 0 for the noop scheduler will cover all
> known cases?
Most people should not fiddle with it; the defaults are there for good
reason. I can provide a blk_queue_unplug_thresholds(q, depth, delay)
helper that you could use for the virtualized drivers, perhaps that
would be a better fit for this use? Untested sketches of the helper and
the matching sysfs knob below.
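
Something like the below is what I have in mind. It just pokes the
existing unplug_thresh/unplug_delay fields of the request queue (the
ones blk_queue_make_request() currently defaults to 4 requests and ~3
msecs), so treat it as an untested sketch, not a final interface:

#include <linux/blkdev.h>
#include <linux/jiffies.h>
#include <linux/module.h>

/*
 * Sketch only: let a driver override the plugging thresholds that
 * blk_queue_make_request() sets up. depth is the number of queued
 * requests that forces an unplug, delay_msecs the timer fallback.
 */
void blk_queue_unplug_thresholds(struct request_queue *q,
				 unsigned int depth,
				 unsigned int delay_msecs)
{
	q->unplug_thresh = depth;
	q->unplug_delay = msecs_to_jiffies(delay_msecs);
	/* the unplug timer needs at least one jiffy to be useful */
	if (!q->unplug_delay)
		q->unplug_delay = 1;
}
EXPORT_SYMBOL(blk_queue_unplug_thresholds);

A paravirt block driver that wants to get rid of the guest side delay
could then do blk_queue_unplug_thresholds(q, 1, 0) after setting up its
queue, which unplugs as soon as the first request goes in.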
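
For the sysfs side, exposing the same field next to nr_requests and
max_sectors_kb would look roughly like this (again untested, modelled
on the queue_sysfs_entry bits in ll_rw_blk.c; the "unplug_delay_ms"
name is just a suggestion):

static ssize_t queue_unplug_delay_show(struct request_queue *q,
				       char *page)
{
	return queue_var_show(jiffies_to_msecs(q->unplug_delay), page);
}

static ssize_t queue_unplug_delay_store(struct request_queue *q,
					const char *page, size_t count)
{
	unsigned long delay_msecs;
	ssize_t ret = queue_var_store(&delay_msecs, page, count);

	q->unplug_delay = msecs_to_jiffies(delay_msecs);
	if (!q->unplug_delay)
		q->unplug_delay = 1;
	return ret;
}

static struct queue_sysfs_entry queue_unplug_delay_entry = {
	.attr = {.name = "unplug_delay_ms", .mode = S_IRUGO | S_IWUSR },
	.show = queue_unplug_delay_show,
	.store = queue_unplug_delay_store,
};

Plus the matching entry in default_attrs[], and an unplug_thresh
counterpart done the same way.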
--
Jens Axboe