Which of your three xend patches ought to be ported to 3.2 (and 3.1?)
branches?
-- Keir
On 25/1/08 02:00, "Jim Fehlig" <jfehlig@xxxxxxxxxx> wrote:
> Keir Fraser wrote:
>> On 18/1/08 23:47, "Jim Fehlig" <jfehlig@xxxxxxxxxx> wrote:
>>
>>
>>>> I guess
>>>> that's the way to go, if so, and I'll commit to 3.3 and 3.2 trees.
>>>>
>>>>
>>> Hmm, don't know about replacing one leak with another - although to be
>>> fair it is much smaller :-). What about my other suggestion of having
>>> the source domain of these operations nuke its /vm/<uuid> path on
>>> destruction? I get the feeling there is a race in the localhost
>>> migration path that prevented such an approach, hence c/s 15957. I
>>> could look into this if you like, but I will be out of town for the
>>> next few days and won't be able to investigate until Tuesday.
>>>
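A minimal sketch of that "nuke on destruction" idea, runnable standalone;
the dict-backed xenstore and the shared_with_target flag are illustrative
stand-ins, not xend's actual API:

# Hypothetical sketch, not xend's actual code: the source domain
# removes its own /vm node on destruction, unless a localhost-
# migration target still shares it (the race alluded to above,
# which motivated c/s 15957).  The dict stands in for xenstore.
xenstore = {"/vm/1234-abcd": {"name": "guest1"}}

def destroy_source_domain(vmpath, shared_with_target):
    if shared_with_target:
        return  # target still uses /vm/<uuid>; leave it alone
    xenstore.pop(vmpath, None)  # otherwise reclaim the path

destroy_source_domain("/vm/1234-abcd", shared_with_target=False)
assert "/vm/1234-abcd" not in xenstore  # no leaked node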
>>
>> It would be nice to reference count, or otherwise track, the vm<->domain
>> mapping of /vm, so that we could naturally avoid destroying the /vm path
>> on localhost migration yet still destroy it on most saves and relocates.
>> I don't know how hard this would be.
>>
>> Could we just make all saves and relocates destroy the /vm/<uuid>-<number>
>> path? I can imagine I missed adding some deletions of that path when I
>> introduced the extra <number> level of discrimination, but would it be
>> hard to find the one or two places where the deletion is needed and add
>> it, compared with our other options?
>>
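A sketch of the reference-counting alternative, again standalone and
hypothetical rather than real xend code: each domain using a /vm path
takes a reference, and the node is deleted only when the last reference
drops, so a localhost migration (where source and target briefly share
the path) never deletes it mid-flight.

from collections import defaultdict

vm_refs = defaultdict(int)  # /vm path -> number of domains using it
xenstore_nodes = set()      # fake view of which /vm nodes exist

def attach_vm_path(path):
    vm_refs[path] += 1
    xenstore_nodes.add(path)

def release_vm_path(path):
    vm_refs[path] -= 1
    if vm_refs[path] == 0:  # last user gone: safe to delete the node
        xenstore_nodes.discard(path)
        del vm_refs[path]

attach_vm_path("/vm/1234-abcd")   # source domain
attach_vm_path("/vm/1234-abcd")   # localhost-migration target
release_vm_path("/vm/1234-abcd")  # source destroyed; node survives
assert "/vm/1234-abcd" in xenstore_nodes
release_vm_path("/vm/1234-abcd")  # target eventually destroyed too
assert "/vm/1234-abcd" not in xenstore_nodes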
>
> Ok, I took this path as it seemed rather straightforward after fixing
> other issues.
>
> I have done the following testing with the attached patch and see no
> leaks. Note: the 'crashed power state' and 'rename-restart' patches I
> submitted earlier were applied on the test rig as well.
>
> - Multiple reboots from within the guest and via 'xm reboot'
> - Multiple localhost migrations (followed by reboots)
> - Multiple save/restore
> - Multiple suspend/resume (managed domains)
> - Multiple checkpoints
> - Multiple shutdowns from within the guest and via 'xm shutdown'
> - Crash guest with 'on_crash=preserve', followed by 'xm dump-core' and
> 'xm destroy' of preserved vm
> - Crash guest with 'on_crash=rename-restart', followed by 'xm dump-core'
> and 'xm destroy' of renamed vm
> - Crash guest with 'on_crash=restart'
> - Multiple 'xm network-[attach|detach]'
>
> All of these tests were performed on PV as well as HVM guests, where it
> makes sense.
>
> Cheers,
> Jim
>
> # HG changeset patch
> # User Jim Fehlig <jfehlig@xxxxxxxxxx>
> # Date 1201226101 25200
> # Node ID ee07e51eefb8b75f8084d2e8b5c2bff04b95ae5e
> # Parent 605d470326da347552e17e9b1cc0466219f16dc2
> Fix leaking of /vm/<uuid> path in xenstore on various VM lifecycle events.
>
> Signed-off-by: Jim Fehlig <jfehlig@xxxxxxxxxx>
>
> diff -r 605d470326da -r ee07e51eefb8 tools/python/xen/xend/XendCheckpoint.py
> --- a/tools/python/xen/xend/XendCheckpoint.py Thu Jan 24 16:33:42 2008 -0700
> +++ b/tools/python/xen/xend/XendCheckpoint.py Thu Jan 24 18:55:01 2008 -0700
> @@ -125,10 +125,10 @@ def save(fd, dominfo, network, live, dst
> if checkpoint:
> dominfo.resumeDomain()
> else:
> - dominfo.destroyDomain()
> + dominfo.destroy()
> dominfo.testDeviceComplete()
> try:
> - dominfo.setName(domain_name)
> + dominfo.setName(domain_name, False)
> except VmError:
> # Ignore this. The name conflict (hopefully) arises because we
> # are doing localhost migration; if we are doing a suspend of a
> diff -r 605d470326da -r ee07e51eefb8 tools/python/xen/xend/XendDomainInfo.py
> --- a/tools/python/xen/xend/XendDomainInfo.py Thu Jan 24 16:33:42 2008 -0700
> +++ b/tools/python/xen/xend/XendDomainInfo.py Thu Jan 24 18:55:01 2008 -0700
> @@ -1125,10 +1125,11 @@ class XendDomainInfo:
> def getDomid(self):
> return self.domid
>
> - def setName(self, name):
> + def setName(self, name, to_store = True):
> self._checkName(name)
> self.info['name_label'] = name
> - self.storeVm("name", name)
> + if to_store:
> + self.storeVm("name", name)
>
> def getName(self):
> return self.info['name_label']
> @@ -1399,7 +1400,7 @@ class XendDomainInfo:
> new_dom_info = self._preserveForRestart()
> else:
> self._unwatchVm()
> - self.destroyDomain()
> + self.destroy()
>
> # new_dom's VM will be the same as this domain's VM, except where
> # the rename flag has instructed us to call preserveForRestart.
> @@ -1413,9 +1414,6 @@ class XendDomainInfo:
> new_dom_info)
> new_dom.waitForDevices()
> new_dom.unpause()
> - rst_cnt = self._readVm('xend/restart_count')
> - rst_cnt = int(rst_cnt) + 1
> - self._writeVm('xend/restart_count', str(rst_cnt))
> new_dom._removeVm(RESTART_IN_PROGRESS)
> except:
> if new_dom:
> @@ -1441,13 +1439,19 @@ class XendDomainInfo:
> new_name, new_uuid)
> self._unwatchVm()
> self._releaseDevices()
> + # Remove existing vm node in xenstore
> + self._removeVm()
> new_dom_info = self.info.copy()
> new_dom_info['name_label'] = self.info['name_label']
> new_dom_info['uuid'] = self.info['uuid']
> self.info['name_label'] = new_name
> self.info['uuid'] = new_uuid
> self.vmpath = XS_VMROOT + new_uuid
> + # Write out new vm node to xenstore
> self._storeVmDetails()
> + rst_cnt = self._readVm('xend/restart_count')
> + rst_cnt = int(rst_cnt) + 1
> + self._writeVm('xend/restart_count', str(rst_cnt))
> self._preserve()
> return new_dom_info
>
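For readers following the save() hunk above: destroy() removes the
/vm/<uuid> node along with the domain, but storeVm() re-creates it on
demand, so writing the name back afterwards would silently resurrect the
node that was just deleted; that is the leak the new to_store flag avoids.
The rename-restart hunk follows the same principle, removing the old /vm
node before the new one is written out. A simplified, runnable stand-in
(not the real XendDomainInfo class) showing the flag's effect:

class FakeDomInfo:
    def __init__(self, vmpath, store):
        self.info = {"name_label": None}
        self.vmpath = vmpath
        self.store = store  # fake xenstore: dict of /vm nodes

    def storeVm(self, key, value):
        # Like xend's storeVm(): creates the /vm node on demand.
        self.store.setdefault(self.vmpath, {})[key] = value

    def setName(self, name, to_store=True):
        self.info["name_label"] = name
        if to_store:
            self.storeVm("name", name)  # would re-create the node

store = {}  # the /vm node was already removed by destroy()
dom = FakeDomInfo("/vm/1234-abcd", store)
dom.setName("domain-name", to_store=False)  # update in-memory only
assert "/vm/1234-abcd" not in store  # no leaked /vm node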
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel