WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-devel

[Xen-devel] sorry for not top posting

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] sorry for not top posting
From: Xuehai Zhang <hai@xxxxxxxxxxxxxxx>
Date: Thu, 16 Dec 2004 11:29:42 -0600 (CST)
Delivery-date: Thu, 16 Dec 2004 17:36:36 +0000
Envelope-to: xen+James.Bulpin@xxxxxxxxxxxx
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx

I sent the following two emails without top posting before. I am resending
them now for help. Sorry for the duplication!
-xuehai

[1] live migration problem

I tried to live migrate a domain from a source machine to a destination,
but I hit the following problem:
    Error: Error: [Failure instance: Traceback: <type "int">, 1

I could see the migrated domain when I ran "xm list" on the destination;
however, when I tried to log in to its console, it hung there.
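
For reference, the commands involved were roughly the following. This is a
sketch reconstructed from the log below (source host "hamachi", destination
128.135.11.104, domain "haivm1"); the exact option spelling I used may have
differed:

    # on the source host, push the running domain to the destination
    xm migrate --live haivm1 128.135.11.104

    # on the destination: the domain shows up in the list...
    xm list

    # ...but attaching to its console hangs
    xm console haivm1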

I attach the information from /var/log/xend-debug.log on the source
machine for your reference.

Thank you in advance for your help.

-xuehai

 sync_session> <type "str"> 1 ["migrate", ["id", "1"], ["state", "begin"], ["live", 1], ["resource", 0], ["src", ["host", "hamachi"], ["domain", "2"]], ["dst", ["host", "128.135.11.104"]]]
 Started to connect self= <xen.xend.XendMigrate.XfrdClientFactory instance at 0xb78aa20c> connector= <twisted.internet.tcp.Connector instance at 0xb78aa22c>
 op_migrate>
 /usr/lib/python2.3/site-packages/twisted/internet/defer.py:398: FutureWarning: hex()/oct() of negative int will return a signed string in Python 2.4 and up
   return "<%s at %s>" % (cname, hex(id(self)))
 <Deferred at 0xb78aa04c>
 buildProtocol> IPv4Address(TCP, "localhost", 8002)
 xfr_err> ["xfr.err", "0"]
 xfr_err> <type "str"> 0
 xfr_vm_suspend> ["xfr.vm.suspend", "2"]
 VirqClient.virqReceived> 4
 xfr_vm_suspend>onSuspended> xend.domain.suspended ["haivm1", "2"]
 xfr_vm_destroy> ["xfr.vm.destroy", "2"]
 vif-bridge down vif=vif2.0 domain=haivm1 mac=aa:00:00:60:74:6a bridge=xen-br0
 xfr_err> ["xfr.err", "1"]
 xfr_err> <type "str"> 1
 Error> 1
 Error> calling errback
 ***cbremove> [Failure instance: Traceback: <type "int">, 1
 ]
 ***_delete_session> 1
 _op_migrate_err> [Failure instance: Traceback: <type "int">, 1
 ] <POST /xend/domain/haivm1 HTTP/1.1>
 Xfrd>loseConnection>
 Xfrd>connectionLost> [Failure instance: Traceback: twisted.internet.error.ConnectionDone, Connection was closed cleanly.
 ]
 XfrdMigrateInfo>connectionLost> [Failure instance: Traceback: twisted.internet.error.ConnectionDone, Connection was closed cleanly.
 ]
 XfrdInfo>connectionLost> [Failure instance: Traceback: twisted.internet.error.ConnectionDone, Connection was closed cleanly.
 ]
 Error> migrate failed
 clientConnectionLost> connector= <twisted.internet.tcp.Connector instance at 0xb78aa22c> reason= [Failure instance: Traceback: twisted.internet.error.ConnectionDone, Connection was closed cleanly.
 ]


[2] xm save and restore problem

Hi,

I tried some experiments to test whether interrupted network traffic
(for example, an unfinished scp or HTTP session) gets replayed between two
VMs (hosted on different machines) when I freeze them first (using "xm
save") and then resume them after a while (using "xm restore").

I ran the tests multiple times and found that the traffic usually gets
replayed after both "frozen" VMs are restored (which is what I was excited
to see). However, in many rounds I also hit VM kernel crashes, especially
when I tried to restore the VMs from the "frozen" disk files. I attach an
example crash at the end of the email, from when I tried to resume both
the client and the server VM after freezing them while there was an
unfinished scp session between them (the client had transferred just 24%
of the remote file). Since some previous threads on the mailing list show
that other people also hit such restore crashes
(http://sourceforge.net/mailarchive/message.php?msg_id=10113264), I am
wondering whether the crashes I see come solely from Xen itself or are
also related to the network traffic I injected.

 Thanks for any help.

 Xuehai

 ***********
 Client VM:
 ***********

 hamachi:/home/hai/vm# xm console haivm1
 ************ REMOTE CONSOLE: CTRL-] TO QUIT ********
 Unable to handle kernel paging request at virtual address 26991f7f
  printing eip:
 c030f000
 *pde = ma 00000000 pa 55555000
  [<c010a008>] __do_suspend+0x19f/0x1e0
  [<c012cc25>] worker_thread+0x22b/0x32f
  [<c010a110>] __shutdown_handler+0x0/0x48
  [<c0119882>] default_wake_function+0x0/0x12
  [<c0119882>] default_wake_function+0x0/0x12
  [<c012c9fa>] worker_thread+0x0/0x32f
  [<c0130d49>] kthread+0xa5/0xab
  [<c0130ca4>] kthread+0x0/0xab
  [<c010f705>] kernel_thread_helper+0x5/0xb

 Oops: 0002 [#1]
 PREEMPT
 Modules linked in:
 CPU:    0
 EIP:    0061:[<c030f000>]    Not tainted VLI
 EFLAGS: 00010297   (2.6.9-xenU)
 EIP is at free_all_bootmem_core+0x1c5/0x274
 eax: c02c5bd4   ebx: 0000c000   ecx: fbffc000   edx: 00000001
 esi: 00000010   edi: c0102000   ebp: 00000000   esp: c10ebf04
 ds: 0069   es: 0069   ss: 0069
 Process events/0 (pid: 3, threadinfo=c10ea000 task=c10d9020)
 Stack: c010ee65 00000000 c010a008 fbffc000 000001df 00000063 93de810d 00000180
        c02c2000 c3f79000 c02c4b80 00000000 c10ea000 00000000 c012cc25 00000000
        c10ebf74 00000000 c10ad278 c10ea000 c10ea000 c10ea000 c010a110 c10ea000
 Call Trace:
  [<c010ee65>] time_resume+0x12/0x51
  [<c010a008>] __do_suspend+0x19f/0x1e0
  [<c012cc25>] worker_thread+0x22b/0x32f
  [<c010a110>] __shutdown_handler+0x0/0x48
  [<c0119882>] default_wake_function+0x0/0x12
  [<c0119882>] default_wake_function+0x0/0x12
  [<c012c9fa>] worker_thread+0x0/0x32f
  [<c0130d49>] kthread+0xa5/0xab
  [<c0130ca4>] kthread+0x0/0xab
  [<c010f705>] kernel_thread_helper+0x5/0xb
 Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> b0 ab c3 6c 66 12 c1 bc 00 00 00 bc f0 30 c0 29 00 00 00 45
 /home/hai/ftp.img                              24%   50MB   0.0KB/s - stalled -



 ***********
 Server VM:
 ***********

 gardenia2:/home/hai/vm# xm console haivm1
 ************ REMOTE CONSOLE: CTRL-] TO QUIT ********
 ------------[ cut here ]------------
 kernel BUG at arch/xen/i386/kernel/time.c:685!
 invalid operand: 0000 [#1]
 PREEMPT
 Modules linked in:
 CPU:    0
 EIP:    0061:[<c010e8e9>]    Not tainted VLI
 EFLAGS: 00010286   (2.6.9-xenU)
 EIP is at time_resume+0x16/0x51
 eax: c02c57d4   ebx: 0000c000   ecx: fbffc000   edx: 00000001
 esi: 00000010   edi: c0102000   ebp: 00000000   esp: c10ec544
 ds: 0069   es: 0069   ss: 0069
 Unable to handle kernel paging request at virtual address ad84006f
  printing eip:
 c01164e2
 *pde = ma 00000000 pa 55555000
  [<c0224fd1>] __xencons_tx_flush+0x21b/0x244
  [<c010dbf8>] page_fault+0x38/0x40
  [<c01164e2>] do_page_fault+0x8d/0x67a
  [<c011d210>] release_console_sem+0x73/0x11d
  [<c011d0d0>] vprintk+0x121/0x1aa
  [<c011cfab>] printk+0x17/0x1b
  [<c013545f>] __print_symbol+0x86/0xd2
  [<c010dbf8>] page_fault+0x38/0x40
  [<c01164e2>] do_page_fault+0x8d/0x67a
  [<c0224b09>] kcons_write+0x71/0xcd
  [<c011cd28>] __call_console_drivers+0x5b/0x5d
  [<c011ce1a>] call_console_drivers+0x69/0x11f
  [<c010dbf8>] page_fault+0x38/0x40
  [<c010a7f1>] show_registers+0x100/0x1b8
  [<c010af14>] do_invalid_op+0x0/0x10e
  [<c010aa4a>] die+0x103/0x1aa
  [<c010e8e9>] time_resume+0x16/0x51
  [<c0118606>] fixup_exception+0x16/0x34
  [<c010b020>] do_invalid_op+0x10c/0x10e
  [<c010e8e9>] time_resume+0x16/0x51
  [<c010da11>] error_code+0x2d/0x38
  [<c010e8e9>] time_resume+0x16/0x51
  [<c024a115>] ip_push_pending_frames+0x2b5/0x411
  [<c02405ff>] netlink_ack+0x116/0x1bb
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c020053b>] cap_task_post_setuid+0x16/0x11f
  [<c020a308>] read_zero+0x1af/0x204
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c020a308>] read_zero+0x1af/0x204
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c024a110>] ip_push_pending_frames+0x2b0/0x411
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c02005c7>] cap_task_post_setuid+0xa2/0x11f
  [<c0310000>] biovec_init_pools+0x87/0x108
  =======================




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel
