Re: [Xen-devel] [PATCH] Skip vcpu_hotplug for VCPU 0 in smp_resume

On Wednesday, 01 April 2009 at 12:00, Kieran Mansley wrote:
> On Wed, 2009-04-01 at 11:31 +0100, Keir Fraser wrote:
> > On 01/04/2009 11:26, "Kieran Mansley" <kmansley@xxxxxxxxxxxxxx> wrote:
> > 
> > >> Could it be as simple as this? I can't remember what happens if
> > >> unregister_xenbus_watch is called after the xenbus connection has been
> > >> reset. Should we just free the guest structures without interacting
> > >> with xenstore at the start of the resume method?
> > > 
> > > It may be possible to synchronise the watch handler with the
> > > suspend/resume/cancel cycle without removing the watch, but that starts
> > > to get complicated.
> > 
> > Could we avoid any of this logic executing if there are no net accelerators?
> 
> The watch handler will try to load an accelerator if the configuration
> changes, so even if there were no accelerators before the suspend,
> unless you can prevent the watch from firing, you could end up with one
> trying to load between the suspend and resume.
> 
> If you got rid of the feature to load the requested accelerator
> automatically when the configuration changes, then yes, that might be
> possible, but I think I'd rather leave that in and use an extra lock and
> some state to ignore the watch firing at bad times.  This would mean we
> could leave the watch in place during the suspend/resume/cancel cycle
> (refreshing on resume).  The suspend_cancel callback would still be
> necessary, but it would just be acquiring a lock and modifying some
> state rather than doing a xenbus watch operation.
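
If I'm reading you right, you mean something shaped roughly like the
following. This is a sketch only: the struct, lock and flag names are
all made up, and the handler signature is simplified.

  #include <linux/mutex.h>

  /* Hypothetical per-device accelerator state. */
  struct accel_state {
          struct mutex lock;
          int suspended;  /* nonzero across suspend/resume/cancel */
  };

  static void accel_watch_fired(struct accel_state *st)
  {
          mutex_lock(&st->lock);
          if (st->suspended) {
                  /* Watch fired at a bad time: ignore it. */
                  mutex_unlock(&st->lock);
                  return;
          }
          /* ... load the accelerator named in the config ... */
          mutex_unlock(&st->lock);
  }

  static void accel_suspend(struct accel_state *st)
  {
          mutex_lock(&st->lock);
          st->suspended = 1;  /* no xenbus operation here */
          mutex_unlock(&st->lock);
  }

  /* suspend_cancel and resume would both just clear the flag
   * (resume additionally refreshing from the current config). */
  static void accel_suspend_cancel(struct accel_state *st)
  {
          mutex_lock(&st->lock);
          st->suspended = 0;
          mutex_unlock(&st->lock);
  }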

I'm afraid my memory of the kernel suspend mechanics has gotten a bit
rusty over the last few months, so I may be off-base here. But I
thought that xs_suspend masked watches until xs_resume or
xs_suspend_cancel? If that is the case, isn't that on its own enough
to protect netfront?
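
The ordering I have in mind is roughly the following sketch. It is not
the actual drivers/xen suspend code, and hv_suspend() is a hypothetical
stand-in for the real hypercall path; xs_suspend, xs_resume and
xs_suspend_cancel are the real xenbus functions.

  extern void xs_suspend(void);
  extern void xs_resume(void);
  extern void xs_suspend_cancel(void);

  extern int hv_suspend(void);  /* hypothetical; returns nonzero if
                                 * the suspend was cancelled */

  static void suspend_sketch(void)
  {
          int cancelled;

          xs_suspend();              /* watch delivery masked from here */

          cancelled = hv_suspend();

          if (cancelled)
                  xs_suspend_cancel();  /* ...until here... */
          else
                  xs_resume();          /* ...or here, where watches are
                                         * re-registered and can fire
                                         * again */
  }

If that is right, netfront's watch handler simply cannot run between
xs_suspend and the matching xs_resume or xs_suspend_cancel.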

> It's not clear to me what the source of the long delay is, and whether
> that change would solve it: the extra lock would be contended with the
> watch handler's work queue, and so if the watch is the source of the
> delay it's possible that we'd just contend in a different way and the
> delay would still be there.  Brendan: can you explain the delay for me?

It's the round trips to xenstored needed to manage the watches; those
always have the potential to be very slow.
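
Concretely, the teardown/re-register approach costs something like this
per device. Sketch only: the struct and field names are illustrative,
but register_xenbus_watch and unregister_xenbus_watch are the real
calls.

  #include <xen/xenbus.h>

  /* Hypothetical container for the per-vif accel watch. */
  struct accel_vif_state {
          struct xenbus_watch accel_watch;
  };

  static int reattach_accel_watch(struct accel_vif_state *st)
  {
          /* Synchronous XS_UNWATCH request/reply to xenstored. */
          unregister_xenbus_watch(&st->accel_watch);

          /* ... suspend/resume happens in between ... */

          /* Synchronous XS_WATCH request/reply to xenstored. */
          return register_xenbus_watch(&st->accel_watch);
  }

Each of those calls blocks on a reply from xenstored, and xenstored can
take arbitrarily long to answer when it is busy.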

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel