
Re: [Xen-devel] /proc/xen/xenbus supports watch?



On Thu, 2005-09-15 at 11:53 +0100, Keir Fraser wrote:
> On 15 Sep 2005, at 02:39, Rusty Russell wrote:
> 
> > we really do want separate connections
> > for each client: they're logically separate, so overloading them on one
> > transport is going to be a hack.
> 
> How does two connections being 'logically separate' imply that it is 
> improper for them not to also be 'physically separate'? Multiplexing 
> multiple simultaneous connections/transactions onto a single underlying 
> page-level transport would seem fine to me!

Um, multiplexing, like any feature, adds complexity: if we don't need
it, don't do it.  <shrug>

We have a way of establishing new ring buffers to talk to the store; we
just currently assume one per domain.  Loosening that seems simpler and
more robust than introducing a multiplexing layer, unless you two can
see something I can't?
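
For concreteness, here's a sketch of what the per-client setup could
carry: the same shared-page-plus-event-channel pair we use for the
per-domain ring today, just allocated once per client.  Struct and
field names below are illustrative only, not the actual interface.

#include <stdint.h>

/* Hypothetical "give me my own ring" request: the client hands the
 * daemon a granted page and an event channel, exactly as for the
 * existing per-domain ring. */
struct xenstore_connect_req {
    uint32_t grant_ref;    /* grant reference for the new ring page */
    uint32_t evtchn_port;  /* event channel to kick the daemon on */
};

struct xenstore_connect_rsp {
    int32_t  status;       /* 0 on success, -errno on failure */
    uint32_t conn_id;      /* daemon-side handle, used for teardown */
};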

Christian says:
> My main objections against multiple pages are:
> - setup/teardown overhead: we'll have to add messages to setup
>   and teardown a new connection

Which we already have, as above.

> - we have to maintain state about the connection in the daemon

But allowing multiple connections over one transport doesn't change
this.

> - save/restore becomes more complicated

Actually, I think it becomes simpler: we simply force the device closed,
which should be handled by libxenstore just the same as a unix domain
socket closing on daemon restart, AFAICT.  So it has an appeal.
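
To illustrate what I mean (a sketch of the behaviour only, not
libxenstore's actual code): on EOF the client just reopens the device,
the same way it would reopen a socket to a restarted daemon.

#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

static int xs_open(void)
{
    return open("/proc/xen/xenbus", O_RDWR);
}

/* Read a reply; on EOF (connection forced closed, e.g. across
 * save/restore) reopen the device and let the caller reissue its
 * request.  Error handling trimmed for brevity. */
static ssize_t xs_read_reply(int *fd, void *buf, size_t len)
{
    ssize_t n;

    do {
        n = read(*fd, buf, len);
    } while (n < 0 && errno == EINTR);

    if (n == 0) {                  /* store went away: reconnect */
        close(*fd);
        *fd = xs_open();
    }
    return n;
}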

What was the reason for wanting multiple transactions per connection?
Changing the interface is going to be a PITA, so we should figure out if
we're going to need that soon...
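
For the record, what I imagine it would require: every request carries
the transaction it belongs to, something like the header below.  Field
names are a sketch, not a settled wire format.

#include <stdint.h>

struct xs_msg_hdr {
    uint32_t type;    /* XS_READ, XS_WRITE, XS_TRANSACTION_START, ... */
    uint32_t req_id;  /* echoed in the reply, to match request/reply */
    uint32_t tx_id;   /* transaction this request runs in; 0 = none */
    uint32_t len;     /* length of the payload that follows */
};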

Thanks!
Rusty.
-- 
A bad analogy is like a leaky screwdriver -- Richard Braakman



