WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-ia64-devel

Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF

To: "You, Yongkang" <yongkang.you@xxxxxxxxx>
Subject: Re: [Xen-ia64-devel] Please try PV-on-HVM on IPF
From: Doi.Tsunehisa@xxxxxxxxxxxxxx
Date: Wed, 18 Oct 2006 16:55:59 +0900
Cc: xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 18 Oct 2006 00:56:16 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: Your message of Tue, 17 Oct 2006 22:27:28 +0800. <094BCE01AFBE9646AF220B0B3F367AAB5890EA@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <094BCE01AFBE9646AF220B0B3F367AAB5890EA@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
  Hi Yongkang,

  Thank you for your report.

  Does it output this traceback message when you detach the vnif with
the xm network-detach command?

  I have sometimes seen the `netif_release_rx_bufs: fix me for copying
receiver.' message when detaching a vnif, but I have never seen a
domain-vti crash.

  What is the guest OS version?

Thanks,
- Tsunehisa
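
For context, a vif= line like the one quoted below normally sits in a full
xm configuration file for a VT-i (HVM) domain. A minimal sketch follows;
the name, memory size, image path, firmware path, and device-model path
are illustrative assumptions, not values taken from this thread:

```python
# Hypothetical xm config for a VT-i (HVM) domain -- all values below
# except the vif= line are illustrative assumptions.
kernel = "/usr/lib/xen/boot/guest_firmware.bin"   # assumed firmware path
builder = "hvm"
name = "vti-guest"                                # assumed domain name
memory = 512                                      # assumed memory size (MB)
disk = ["file:/path/to/guest.img,hda,w"]          # assumed image location
# ioemu vif with an extra empty entry, as in the quoted report
vif = ["type=ioemu, bridge=xenbr0", " "]
device_model = "/usr/lib/xen/bin/qemu-dm"         # assumed qemu-dm path
```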

You (yongkang.you) said:
> Hi Tsunehisa,
> 
> I have tried your patch and the modules in VTI domains.
> VBD has no problem: I can mount the VBD hard disk xvda successfully.
> But the VNIF module has a problem. After I insmod the VNIF driver,
> the VTI domain crashed.
> 
> My vnif config: vif= [ 'type=ioemu, bridge=xenbr0', ' ' ]
> BTW, I rebuilt the VTI kernel with 2.6.16.
> 
> Following is the log:
> =====================================================================
> [root@localhost ~]# insmod xen-platform-pci.ko
> PCI: Enabling device 0000:00:03.0 (0010 -> 0013)
> Grant table initialized
> [root@localhost ~]# insmod xenbus.ko
> [root@localhost ~]# insmod xen-vnif.ko
> [root@localhost ~]# vif vif-0: 2 parsing device/vif/0/mac
> netif_release_rx_bufs: fix me for copying receiver.
> kernel BUG at net/core/dev.c:3073!
> xenwatch[3970]: bugcheck! 0 [1]
> Modules linked in: xen_vnif xenbus xen_platform_pci sunrpc binfmt_misc dm_mod 
> thermal processor fan container button
> Pid: 3970, CPU 0, comm:             xenwatch
> psr : 00001010081a6018 ifs : 800000000000038b ip  : [<a0000001005eec40>] 
>    Not tainted
> ip is at unregister_netdevice+0x1a0/0x580
> unat: 0000000000000000 pfs : 000000000000038b rsc : 0000000000000003
> rnat: a000000100a646c1 bsps: 0000000000000007 pr  : 0000000000006941
> ldrs: 0000000000000000 ccv : 0000000000000000 fpsr: 0009804c8a70433f
> csd : 0000000000000000 ssd : 0000000000000000
> b0  : a0000001005eec40 b6  : a0000001000b79c0 b7  : a00000010000bbc0
> f6  : 1003e00000000000000a0 f7  : 1003e20c49ba5e353f7cf
> f8  : 1003e00000000000004e2 f9  : 1003e000000000fa00000
> f10 : 1003e000000003b9aca00 f11 : 1003e431bde82d7b634db
> r1  : a000000100b34120 r2  : 0000000000000002 r3  : 0000000000104000
> r8  : 0000000000000026 r9  : 0000000000000001 r10 : e000000001014644
> r11 : 0000000000000003 r12 : e0000000025b7da0 r13 : e0000000025b0000
> r14 : 0000000000004000 r15 : a00000010086f558 r16 : a00000010086f560
> r17 : e000000001d9fde8 r18 : e000000001d98030 r19 : e000000001014638
> r20 : 0000000000000073 r21 : 0000000000000003 r22 : 0000000000000002
> r23 : e000000001d98040 r24 : e000000001014608 r25 : e000000001014d80
> r26 : e000000001014d60 r27 : 0000000000000073 r28 : 0000000000000073
> r29 : 0000000000000000 r30 : 0000000000000000 r31 : 0000000000000000
> 
> Call Trace:
>  [<a000000100011df0>] show_stack+0x50/0xa0
>                                 sp=e0000000025b7910 bsp=e0000000025b12e0
>  [<a0000001000126c0>] show_regs+0x820/0x840
>                                 sp=e0000000025b7ae0 bsp=e0000000025b1298
>  [<a000000100037030>] die+0x1d0/0x2e0
>                                 sp=e0000000025b7ae0 bsp=e0000000025b1250
>  [<a000000100037180>] die_if_kernel+0x40/0x60
>                                 sp=e0000000025b7b00 bsp=e0000000025b1220
>  [<a0000001000373d0>] ia64_bad_break+0x230/0x480
>                                 sp=e0000000025b7b00 bsp=e0000000025b11f0
>  [<a00000010000c3c0>] ia64_leave_kernel+0x0/0x280
>                                 sp=e0000000025b7bd0 bsp=e0000000025b11f0
>  [<a0000001005eec40>] unregister_netdevice+0x1a0/0x580
>                                 sp=e0000000025b7da0 bsp=e0000000025b1198
>  [<a0000001005ef050>] unregister_netdev+0x30/0x60
>                                 sp=e0000000025b7da0 bsp=e0000000025b1178
>  [<a0000002000c5cd0>] close_netdev+0x90/0xc0 [xen_vnif]
>                                 sp=e0000000025b7da0 bsp=e0000000025b1140
>  [<a0000002000c7870>] backend_changed+0x1030/0x1080 [xen_vnif]
>                                 sp=e0000000025b7da0 bsp=e0000000025b10a8
>  [<a0000002000e5160>] otherend_changed+0x160/0x1a0 [xenbus]
>                                 sp=e0000000025b7dc0 bsp=e0000000025b1068
>  [<a0000002000e3e70>] xenwatch_handle_callback+0x70/0x100 [xenbus]
>                                 sp=e0000000025b7dc0 bsp=e0000000025b1040
>  [<a0000002000e4230>] xenwatch_thread+0x330/0x3a0 [xenbus]
>                                 sp=e0000000025b7dc0 bsp=e0000000025b1018
>  [<a0000001000b6e20>] kthread+0x180/0x200
>                                 sp=e0000000025b7e20 bsp=e0000000025b0fd8
>  [<a0000001000141b0>] kernel_thread_helper+0xd0/0x100
>                                 sp=e0000000025b7e30 bsp=e0000000025b0fb0
>  [<a0000001000094c0>] start_kernel_thread+0x20/0x40
>                                 sp=e0000000025b7e30 bsp=e0000000025b0fb0
>  BUG: xenwatch/3970, lock held at task exit time!
>  [a0000002000f0cf8] {xenwatch_mutex}
> .. held by:          xenwatch: 3970 [e0000000025b0000, 110]
> ... acquired at:               xenwatch_thread+0x1e0/0x3a0 [xenbus]
> 
> Best Regards,
> Yongkang (Kangkang)
> 
> >-----Original Message-----
> >From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
> >[mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of DOI
> >Tsunehisa
> >Sent: 2006-10-16 20:31
> >To: xen-ia64-devel
> >Subject: [Xen-ia64-devel] Please try PV-on-HVM on IPF
> >
> >Hi all,
> >
> >  We've ported the PV-on-HVM drivers to IPF. But I think that only
> >a few people have tried them. Thus, I'll describe how to use them.
> >
> >  And I attach several patches about PV-on-HVM.
> >
> >    + fix-warning.patch
> >      - warning fix for HVM PV driver
> >    + notsafe-comment.patch
> >      - add not-SMP-safe comment about PV-on-HVM
> >      - to take Isaku's suggestion.
> >    + pv-backport.patch (preliminary)
> >      - the current HVM PV driver supports only 2.6.16 or 2.6.16.*
> >        kernels
> >      - this is a preliminary patch for backporting to pre-2.6.16
> >        kernels
> >      - we have only compile-tested it on RHEL4.
> >
> >[Usage of PV-on-HVM]
> >
> >  1) get the xen-ia64-unstable.hg tree (after cs:11805) and build it.
> >
> >  2) create a guest system image.
> >     - the simplest way is to install the guest system in a VT-i domain
> >
> >  3) build linux-2.6.16 kernel for guest system
> >     - get linux-2.6.16 kernel source and build
> >
> >  4) change guest kernel in the image to linux-2.6.16 kernel
> >     - edit config file of boot loader
> >
> >  5) build PV-on-HVM drivers
> >     # cd xen-ia64-unstable.hg/unmodified_drivers/linux-2.6
> >     # sh mkbuildtree
> >     # make -C /usr/src/linux-2.6.16 M=$PWD modules
> >
> >  6) copy the drivers to the guest system image
> >     - mount the guest system image with the lomount command
> >     - copy the drivers into the image
> >       # cp -p */*.ko guest_system...
> >
> >  7) start VT-i domain
> >
> >  8) attach drivers
> >    domvti# insmod xen-platform-pci.ko
> >    domvti# insmod xenbus.ko
> >    domvti# insmod xen-vbd.ko
> >    domvti# insmod xen-vnif.ko
> >
> >  9) attach devices with xm block-attach/network-attach
> >     - this operation is the same as for dom-u
> >
> >Thanks,
> >- Tsunehisa Doi
> 
> 
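
Taken together, steps 5), 6), and 8) of the instructions quoted above
amount to roughly the following shell session. This is a sketch, not a
tested recipe: the kernel source path /usr/src/linux-2.6.16, the image
path /path/to/guest.img, the partition number, and the mount point
/mnt/guest are illustrative assumptions, and lomount options vary with
the image layout.

```shell
# 5) Build the PV-on-HVM drivers against the guest kernel source tree
cd xen-ia64-unstable.hg/unmodified_drivers/linux-2.6
sh mkbuildtree
make -C /usr/src/linux-2.6.16 M=$PWD modules

# 6) Copy the built modules into the guest system image
#    (image path, partition, and mount point are assumptions)
lomount -diskimage /path/to/guest.img -partition 1 /mnt/guest
cp -p */*.ko /mnt/guest/root/
umount /mnt/guest

# 8) Then, inside the running VT-i domain, load the drivers in order:
#    domvti# insmod xen-platform-pci.ko
#    domvti# insmod xenbus.ko
#    domvti# insmod xen-vbd.ko
#    domvti# insmod xen-vnif.ko
```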

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel