Steven Smith writes ("Re: [Xen-devel] disable qemu PCI devices in HVM domains"):
> > +#ifndef CONFIG_STUBDOM
> > + /* Invalidate buffer cache for this device. */
> > + ioctl(s->fd, BLKFLSBUF, 0);
> > +#endif
...
> So this hunk is probably, strictly speaking, redundant for all current
> driver implementations.
Right, good.
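(For reference, the hunk amounts to roughly the stand-alone sketch
below; the device path, wrapper function and error handling are
illustrative, not part of the patch:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* BLKFLSBUF */

    /* Write out any dirty buffers and invalidate the host buffer
     * cache for a block device, so a driver taking over the device
     * does not see stale data cached by the previous path.
     * "/dev/xvda" is a placeholder, not from the patch. */
    int flush_device_cache(const char *path)
    {
        int fd = open(path, O_RDWR);
        if (fd < 0) {
            perror("open");
            return -1;
        }
        if (ioctl(fd, BLKFLSBUF, 0) < 0)
            perror("BLKFLSBUF");
        close(fd);
        return 0;
    }
)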
> Having said that, it's clearly more robust to not rely on the various
> drivers being able to get in before any writes are issued, so it's
> probably a good thing to have anyway.
Well, except that I would prefer not to carry a change in this part of
the qemu code unless it was actually necessary.
> > What about Linux platforms with existing PV drivers which do not
> > engage in the blacklisting/disabling protocol ?
>
> Yeah, things might go a bit funny if you write using emulated drivers
> and then switch to PV ones without rebooting in between. I think
> that's probably a fairly unusual thing to do, but it's not really
> invalid.
Provided that whatever is managing this change (be it the user or some
tool in the guest) knows that this is a multipath situation and takes
the appropriate steps.
> I'm not sure what the best way of fixing this would be. You could
> conceivably have blkback tell qemu to do a flush when the frontend
> connects and before blkback starts doing IO, but that's kind of ugly.
> Alternatively, we could modify blkfront so that it tells qemu to flush
> devices when appropriate, but that won't help existing drivers.
The guest can instruct qemu to flush writes through the host buffer
cache by issuing an IDE FLUSH CACHE command, which translates to
fsync().
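(As an illustration, and assuming the guest kernel issues a cache
flush when fsync() is called on the device node, something as simple
as this in the guest would do it; /dev/hda is just a placeholder for
the emulated disk:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* fsync() on the emulated disk's device node makes the guest IDE
     * driver issue FLUSH CACHE, which qemu's IDE emulation in turn
     * translates to fsync() on the backing file in the host. */
    int main(void)
    {
        int fd = open("/dev/hda", O_RDONLY);
        if (fd < 0) {
            perror("open /dev/hda");
            return 1;
        }
        if (fsync(fd) < 0)
            perror("fsync");
        close(fd);
        return 0;
    }
)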
Ian.