xen-devel

RE: [Xen-devel] [Patch 3/7] pvSCSI driver

To: "Jun Kamada" <kama@xxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [Patch 3/7] pvSCSI driver
From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx>
Date: Thu, 28 Feb 2008 09:23:17 -0000
Cc: Steven Smith <Steven.Smith@xxxxxxxxxxxxx>, Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 28 Feb 2008 01:24:13 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20080228115925.8FF2.EB2C8575@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20080227111628.GB26424@xxxxxxxxxxxxxxxxxxxxxxxxxx> <DD74FBB8EE28D441903D56487861CD9D2969787B@xxxxxxxxxxxxxxxxxxxxxx> <20080228115925.8FF2.EB2C8575@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Ach5whnldW35YQUKQyiLrY9BpSREjAAJ+Egg
Thread-topic: [Xen-devel] [Patch 3/7] pvSCSI driver
> In fact, the previous version of the pvSCSI driver used two rings, one
> for frontend-to-backend and one for backend-to-frontend communication.
> The backend also queued requests from the frontend and released the
> ring immediately. This may be a very similar concept to Netchannel2.

Cool, that sounds better. Did you still have fixed-length command
structs, or did you allow variable-length messages? (I'm very keen that
we use the latter.)

Also, were the rings multi-page?
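
Just so we're talking about the same thing, here is a rough sketch of
the kind of variable-length, 'self describing' message handling I have
in mind. It is purely illustrative; the names and layout below are not
taken from netchannel2 or from your patches. Each message starts with a
type and a length, and the producer waits for enough free space and
lets the copy wrap over the end of the ring:

/* Illustrative only: not netchannel2 code, just the shape of a
 * variable-length message ring.  Assumes the ring size is a power of
 * two and that prod/cons are free-running 32-bit counters. */
#include <stdint.h>
#include <string.h>

struct msg_hdr {
    uint16_t type;    /* command, response, control, ... */
    uint16_t len;     /* total message length, header included */
};

struct vring {
    uint8_t  *buf;          /* ring area, possibly spanning several pages */
    uint32_t  size;         /* bytes, power of two */
    uint32_t  prod, cons;   /* free-running producer/consumer counters */
};

static uint32_t vring_free(const struct vring *r)
{
    return r->size - (r->prod - r->cons);
}

/* Place one complete message (header plus payload in one buffer) on the
 * ring, wrapping over the boundary if necessary.  Returns 0 on success,
 * or -1 if the producer has to wait for the consumer to make space. */
static int vring_put(struct vring *r, const void *msg, uint16_t len)
{
    uint32_t off, first;

    if (vring_free(r) < len)
        return -1;

    off   = r->prod & (r->size - 1);
    first = r->size - off;
    if (first > len)
        first = len;

    memcpy(r->buf + off, msg, first);                           /* up to the end */
    memcpy(r->buf, (const uint8_t *)msg + first, len - first);  /* wrapped part  */

    /* A real implementation would put a write barrier here before
     * publishing the new producer index to the other end. */
    r->prod += len;
    return 0;
}

The consumer side is symmetric: peek at the header, wait until the full
message is on the ring, copy it out into an internal queue and advance
the consumer index.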

> We would like to enhance it as second step after this version is
> merged into Xen tree, if possible.

The problem with this approach is that it would change the ABI. The ABI
isn't guaranteed in the unstable tree, but it would have to be locked
before 3.3 could be released (or the code removed/disabled prior to
release).

It's preferable to get stuff like this fixed up before it goes into
the tree: in our experience, developers often get retasked by their
management to other work items as soon as the code goes in, and don't
get around to the fixups.  Against that, getting it into the tree
exposes it to more testing earlier, which is helpful. If you're
confident that the former is not going to happen to you, let's talk
about which minor cleanups are important.

Thanks very much for your work on this project!

Best,
Ian

> 
> 
> Best regards,
> 
> 
> On Wed, 27 Feb 2008 12:23:28 -0000
> "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx> wrote:
> 
> > > I think the current netchannel2 plan also calls for variable-sized
> > > messages with split front->back and back->front rings.  It might be
> > > possible to share some code there (although at present there
> > > doesn't exist any code to share).
> > >
> > > I'd also strongly recommend supporting multi-page rings.  That will
> > > allow you to have more requests in flight at any one time, which
> > > should lead to better performance.
> >
> >
> > The PV SCSI stuff is great work and I'm very keen to get it into
> > mainline. However, I'd very much like to see it use the same flexible
> > ring structure that's being used for netchannel2. The main features
> > are as follows:
> >
> > * A pair of rings, one for communication in each direction (responses
> > don't go in the same ring as requests, as they did in the original
> > netchannel).
> >
> > * The rings are fixed in size at allocation time, but the area of
> > memory they are allocated in may be bigger than a page, i.e. a list
> > of grant refs is communicated over xenbus.
> >
> > * The data placed on the rings consists of 'self describing' messages
> > containing a type and a length. Messages simply wrap over the ring
> > boundaries. The producer simply needs to wait until there is enough
> > free space on the ring before placing a message.
> >
> > * Both the frontend and the backend remove data from the rings and
> > place it in their own internal data structures eagerly. This is in
> > contrast to the netchannel, where free buffers and TX packets were
> > left waiting on the rings until they were required. Use of the eager
> > approach enables control messages to be muxed over the same ring.
> > Both ends will advertise the number of outstanding requests they're
> > prepared to queue internally using a message communicated over the
> > ring, and will reject attempts to queue more. Since the backend needs
> > to copy the entries before verification anyhow, this is minimal
> > additional overhead.
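
For what it's worth, the frontend side of the grant-ref list described
in the second bullet above could look roughly like the sketch below.
The xenbus node names ("ring-page-order", "ring-ref%u") and the page
order are illustrative only, not an agreed ABI, and real code would do
the writes inside a xenbus transaction and clean up on failure:

/* Illustrative frontend-side sketch: grant each page of the ring area
 * to the backend and publish the list of grant refs over xenbus.  The
 * node names and RING_PAGE_ORDER are placeholders, not an ABI. */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <xen/grant_table.h>
#include <xen/xenbus.h>
#include <asm/xen/page.h>

#define RING_PAGE_ORDER 2                    /* e.g. a 4-page ring area */
#define RING_NR_PAGES   (1U << RING_PAGE_ORDER)

static int publish_ring(struct xenbus_device *dev, void *ring_area,
                        grant_ref_t refs[RING_NR_PAGES])
{
    unsigned int i;
    int err;

    err = xenbus_printf(XBT_NIL, dev->nodename, "ring-page-order",
                        "%u", RING_PAGE_ORDER);
    if (err)
        return err;

    for (i = 0; i < RING_NR_PAGES; i++) {
        char node[16];
        int ref;

        /* Grant the i-th page of the ring area to the backend domain. */
        ref = gnttab_grant_foreign_access(dev->otherend_id,
                  virt_to_mfn((char *)ring_area + i * PAGE_SIZE), 0);
        if (ref < 0)
            return ref;
        refs[i] = ref;

        /* Publish it as ring-ref0, ring-ref1, ... */
        snprintf(node, sizeof(node), "ring-ref%u", i);
        err = xenbus_printf(XBT_NIL, dev->nodename, node, "%u", refs[i]);
        if (err)
            return err;
    }

    return 0;
}

The backend would read those nodes back and map the pages with the
usual grant-mapping calls before initialising its view of the ring.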
> >
> >
> > Best,
> > Ian
> >
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
> 
> Jun Kamada
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel