> In fact, the previous version of the pvSCSI driver used two rings, for
> frontend-to-backend and backend-to-frontend communication respectively.
> The backend also queued requests from the frontend and released the
> ring immediately. This may be a very similar concept to Netchannel2.
Cool, that sounds better. Did you still have fixed-length command
structs, or allow variable-length messages? (I'm very keen we use the
latter.)
Also, were the rings multi-page?
> We would like to enhance it as second step after this version is
> merged into Xen tree, if possible.
The problem with this approach is that it would change the ABI. The ABI
isn't guaranteed in the unstable tree, but it would have to be locked
before 3.3 could be released (or the code removed/disabled prior to
release).
It's preferable to get stuff like this fixed up before it goes in the
tree as in our experience developers often get retasked by their
management to other work items as soon as the code goes in, and don't
get around to the fixups. Against that, getting it in the tree exposes
it to more testing earlier, which is helpful. If you're confident that
the former is not going to happen to you, let's talk about which minor
cleanups are important.
Thanks very much for your work on this project!
> Best regards,
> On Wed, 27 Feb 2008 12:23:28 -0000
> "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx> wrote:
> > > I think the current netchannel2 plan also calls for variable-sized
> > > messages with split front->back and back->front rings. It might be
> > > possible to share some code there (although at present there doesn't
> > > exist any code to share).
> > >
> > > I'd also strongly recommend supporting multi-page rings. That would
> > > allow you to have more requests in flight at any one time, which
> > > should lead to better performance.
> > The PV SCSI stuff is great work and I'm very keen to get it into
> > mainline. However, I'd very much like to see it use the same
> > ring structure that's being used for netchannel2. The main features
> > are as follows:
> > * A pair of rings, one for communication in each direction (requests
> > and responses don't go in the same ring as per the original netchannel)
> > * The rings are fixed in size at allocation time, but the area of
> > memory they are allocated in may be bigger than a page, i.e. a list of
> > grant refs is communicated over xenbus.
> > * The data placed on the rings consists of 'self-describing' messages
> > containing a type and a length. Messages simply wrap over the ring
> > boundaries. The producer just needs to wait until there is enough
> > space on the ring before placing a message.
> > * Both the frontend and the backend remove data from the rings and
> > queue it in their own internal data structures eagerly. This is in
> > contrast to the netchannel where free buffers and TX packets were
> > left waiting on the rings until they were required. Use of the eager
> > approach allows control messages to be muxed over the same ring. Both
> > ends will advertise the number of outstanding requests they're
> > prepared to queue internally using a message communicated over the
> > ring, and will reject attempts to queue more. Since the backend needs
> > to copy the entries before verification anyhow this adds minimal
> > additional overhead.
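For concreteness, here is a minimal single-process C sketch of the variable-sized self-describing message scheme described in the bullets above. All names (`msg_hdr`, `put_message`, `get_message`), the ring size, and the failure-instead-of-blocking behaviour are illustrative assumptions, not the actual netchannel2 or pvSCSI code; a real frontend/backend pair would place the ring in granted shared memory and use event channels rather than a plain array.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative only -- not the real netchannel2 layout. */
#define RING_SIZE 64  /* bytes; must be a power of two */

struct msg_hdr {
    uint16_t type;
    uint16_t len;    /* payload length in bytes */
};

struct ring {
    uint8_t  buf[RING_SIZE];
    uint32_t prod;   /* free-running producer index */
    uint32_t cons;   /* free-running consumer index */
};

static uint32_t ring_free(const struct ring *r)
{
    return RING_SIZE - (r->prod - r->cons);
}

/* Copy bytes onto the ring, wrapping over the boundary as needed. */
static void ring_write(struct ring *r, const void *src, uint32_t len)
{
    const uint8_t *p = src;
    for (uint32_t i = 0; i < len; i++)
        r->buf[(r->prod + i) & (RING_SIZE - 1)] = p[i];
    r->prod += len;
}

static void ring_read(struct ring *r, void *dst, uint32_t len)
{
    uint8_t *p = dst;
    for (uint32_t i = 0; i < len; i++)
        p[i] = r->buf[(r->cons + i) & (RING_SIZE - 1)];
    r->cons += len;
}

/* Producer: enqueue header + payload once there is enough space.
 * A real frontend would block/retry on -1 rather than give up. */
static int put_message(struct ring *r, uint16_t type,
                       const void *payload, uint16_t len)
{
    struct msg_hdr h = { type, len };
    if (ring_free(r) < sizeof(h) + len)
        return -1;
    ring_write(r, &h, sizeof(h));
    ring_write(r, payload, len);
    return 0;
}

/* Consumer: eagerly copy the next message off the ring into
 * private storage, freeing the ring space immediately. */
static int get_message(struct ring *r, uint16_t *type,
                       void *payload, uint16_t max)
{
    struct msg_hdr h;
    if (r->prod - r->cons < sizeof(h))
        return -1;   /* ring empty */
    ring_read(r, &h, sizeof(h));
    if (h.len > max)
        return -1;
    ring_read(r, payload, h.len);
    *type = h.type;
    return h.len;
}
```

Because the indices are free-running and masked on access, messages wrap transparently over the ring boundary, and the eager copy-out on the consumer side is what lets requests and control messages share one ring.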
> > Best,
> > Ian
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
> Jun Kamada