xen-devel

[Xen-devel] [RFC][Patch] Improve the response of xend.

To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [RFC][Patch] Improve the response of xend.
From: Akio Takebe <takebe_akio@xxxxxxxxxxxxxx>
Date: Fri, 31 Aug 2007 14:15:11 +0900
Delivery-date: Thu, 30 Aug 2007 22:13:50 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi, all

This is an idea to improve the response of xend.

When a domU panics and xend is dumping its core,
commands such as "xm list" or "xm create"
get no response from xend.

If a domU panics and xend does not respond,
users will probably think the system has hung,
and they may reboot it.
If the domU has a lot of memory allocated, dumping takes a long time.
(e.g. if the domU has 256MB of memory, the dump may take an hour or more.)

I made a patch that makes xend fork at dump time,
but the child process of xend cannot write to xenstore.
Why can't it write to xenstore? Or did I make a mistake somewhere?
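
For reference, here is a minimal standalone sketch (not xend code) of the
fork-and-wait pattern the patch uses; dump_core() is a hypothetical stand-in
for xc.domain_dumpcore():

import os

def dump_core(domid, corefile):
    # hypothetical stand-in for the real, long-running
    # xc.domain_dumpcore(domid, corefile)
    open(corefile, 'w').close()

def dump_in_background(domid, corefile):
    pid = os.fork()
    if pid:
        # parent: remember the child's pid and return at once,
        # so xend can keep answering "xm list" / "xm create"
        return pid
    # child: do the slow dump, then exit without returning into xend
    try:
        dump_core(domid, corefile)
        os._exit(0)
    except Exception:
        os.rename(corefile, corefile + '-incomplete')
        os._exit(1)

# later, once the dump is believed to be finished:
#     pid_exited, status = os.waitpid(pid, 0)

On top of this pattern, the patch below records the child's pid and a dump
status under xend/dump_pid and xend/is_dumping, so that refreshShutdown()
can skip the crashed domain while the dump is still running.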

Do you have a better idea?
Any comments are welcome. :-)

Signed-off-by: Akio Takebe <takebe_akio@xxxxxxxxxxxxxx>

diff -r 6644d8486266 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Fri Aug 24 15:09:14 2007 -0600
+++ b/tools/python/xen/xend/XendDomainInfo.py   Fri Aug 31 13:13:57 2007 +0900
@@ -1107,6 +1107,18 @@ class XendDomainInfo:
     def getRestartCount(self):
         return self._readVm('xend/restart_count')
 
+    def getIs_dumping(self):
+        return self._readVm('xend/is_dumping')
+
+    def setIs_dumping(self, status):
+        return self._writeVm('xend/is_dumping', status)
+
+    def getDumpPid(self):
+        return self._readVm('xend/dump_pid')
+
+    def setDumpPid(self, status):
+        return self._writeVm('xend/dump_pid', status)
+
     def refreshShutdown(self, xeninfo = None):
         """ Checks the domain for whether a shutdown is required.
 
@@ -1164,8 +1176,12 @@ class XendDomainInfo:
                         # we can do in this context.
                         pass
 
-                restart_reason = 'crash'
-                self._stateSet(DOM_STATE_HALTED)
+                dump_status = self.getIs_dumping()
+                if dump_status == 'dumping':
+                    return
+                else:
+                    restart_reason = 'crash'
+                    self._stateSet(DOM_STATE_HALTED)
 
             elif xeninfo['shutdown']:
                 self._stateSet(DOM_STATE_SHUTDOWN)
@@ -1368,13 +1384,43 @@ class XendDomainInfo:
             if os.path.isdir(corefile):
                 raise XendError("Cannot dump core in a directory: %s" %
                                 corefile)
-            
-            xc.domain_dumpcore(self.domid, corefile)
+            dump_status = self.getIs_dumping()
+            if dump_status == 'dumped':
+                 dump_pid = self.getDumpPid()
+                 pid_exited, status = os.waitpid(dump_pid, 0)
+                 self.setDumpPid(str(0))
+                 if status != 0:
+                    log.exception("XendDomainInfo.dumpCore might have failed: dump pid = %s status = %s",
+                                  dump_pid, status)
+
+            elif dump_status == 'dumping': 
+                return
+            else:
+                dump_pid = os.fork()
+                if dump_pid:
+                    self.setDumpPid(str(dump_pid))
+                    self.setIs_dumping('dumping')
+                else: 
+                    try:
+                        xc.domain_dumpcore(self.domid, corefile)
+                        log.warn('child xend: finish dumping')
+                        self.setIs_dumping('dumped')
+                        log.warn('child xend: exit')
+                        sys.exit(0)
+                    except RuntimeError, ex:
+                        corefile_incomp = corefile+'-incomplete'
+                        os.rename(corefile, corefile_incomp)
+                        log.exception("XendDomainInfo.dumpCore failed: id = %s name = %s",
+                                      self.domid, self.info['name_label'])
+                        self.setIs_dumping('dumped')
+                        sys.exit(1)
         except RuntimeError, ex:
             corefile_incomp = corefile+'-incomplete'
             os.rename(corefile, corefile_incomp)
             log.exception("XendDomainInfo.dumpCore failed: id = %s name = %s",
                           self.domid, self.info['name_label'])
+            if self.getDumpPid() != str(0):
+                self.setIs_dumping('dumped')
             raise XendError("Failed to dump core: %s" %  str(ex))
 
     #
@@ -2061,6 +2107,10 @@ class XendDomainInfo:
 
         if not self._readVm('xend/restart_count'):
             to_store['xend/restart_count'] = str(0)
+        if not self._readVm('xend/is_dumping'):
+            to_store['xend/is_dumping'] = 'no'
+        if not self._readVm('xend/dump_pid'):
+            to_store['xend/dump_pid'] = str(0)
 
         log.debug("Storing VM details: %s", scrub_password(to_store))
 


Best Regards,

Akio Takebe


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
