WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

Re: [Xen-devel] xennet: skb rides the rocket messages in domU dmesg

To: Mark Hurenkamp <mark.hurenkamp@xxxxxxxxx>
Subject: Re: [Xen-devel] xennet: skb rides the rocket messages in domU dmesg
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Wed, 26 May 2010 15:39:00 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 26 May 2010 15:40:31 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4BFD90D6.2020107@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4BFD90D6.2020107@xxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100430 Fedora/3.0.4-2.fc12 Lightning/1.0b2pre Thunderbird/3.0.4
On 05/26/2010 02:21 PM, Mark Hurenkamp wrote:
> Hi,
>
>
> On my home server I am running Xen 4.0.1-rc1 with a recent xen/next
> kernel,

Are you actually using the "xen/next" branch?  I recommend you use
xen/stable-2.6.32.x, since that's tracking all the other bugfixes going
into Linux 2.6.32.
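If you do want to switch, checking out that branch would look something like the following (a sketch: the repository URL below is the usual home of the xen.git tree on kernel.org, so verify it before relying on it):

```shell
# Fetch the xen.git tree and check out the stable 2.6.32 branch.
# The repository URL is an assumption; adjust if your tree lives elsewhere.
git clone git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
cd xen
git checkout -b stable-2.6.32.x origin/xen/stable-2.6.32.x
```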

> and a pvm domU with the same kernel, and 4 tuners passed through.
> Because the mythtv backend domain would sometimes become unstable, I
> decided to split up my mythtv backend into 3 separate virtual machines:
> one master backend with the database, and 2 slave backends with the
> tuners. One of the slave backends has a cx23885-based DVB tuner card;
> the other slave backend runs 3 ivtv-based tuners.
> To keep consistency with old recording data, and since I would like to
> have all recordings in a single volume, I tried to use an NFS mount of
> the recordings volume from the dom0 on all backends. This resulted in
> a very unstable system, to the point where my most important slave
> backend became unusable.

Unstable how?

> So I tried it the other way: have the slave backends each mount their
> own recordings volume as a block device via Xen, and for backwards
> compatibility mount the volume which holds the old recordings via NFS
> on the master backend.
>
> Now I see many "xennet: skb rides the rocket" messages appear in the
> (pv) slave backend which exports the recordings volume to the master
> backend. I did not see these messages when there was only a single
> mythtv backend. (Both the dom0 and the mythtv domUs are Ubuntu Lucid
> server based.) Overall the system seems to perform OK, and the
> messages are not causing the system to become unusable or more
> unstable, so it is not a major issue.

That appears to mean that you're getting single packets which are larger
than 18 pages (72k).  I'm not quite sure how that's possible, since I
thought the datagram limit was 64k.
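The arithmetic behind those figures can be checked quickly (a sketch: the 18-slot limit is taken from the statement above rather than from the driver source, and a 4 KiB page size is assumed):

```python
# Sanity-check the sizes involved, assuming 4 KiB pages.  The 18-slot
# figure comes from the mail above, not from the netfront source.
PAGE_SIZE = 4096
MAX_TX_SLOTS = 18                      # pages netfront can fit in one skb

max_skb_bytes = MAX_TX_SLOTS * PAGE_SIZE
datagram_limit = 64 * 1024             # 64k IP datagram ceiling

print(max_skb_bytes)                   # 73728, i.e. 72k
print(max_skb_bytes > datagram_limit)  # True: 18 pages already exceed 64k
```

The 20-frag skb in the log message is thus two slots over even that limit.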

Are you using NFS over UDP or TCP?  (I think TCP, from your stack trace.)

Does turning off TSO/GSO with ethtool make a difference?
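Trying that inside the affected domU would look roughly like this (assuming the guest's interface is eth0; requires root and the ethtool package):

```shell
# Turn off TCP segmentation offload and generic segmentation offload
# on the guest's vif.  The interface name eth0 is an assumption.
ethtool -K eth0 tso off gso off

# Confirm the offload state afterwards:
ethtool -k eth0 | grep -i segmentation
```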

    J

>
> Note that both the master backend and the slave backend which exports
> the volume are paravirtualised domains. The slave backend has the
> following Xen config:
>
> kernel = '/boot/vmlinuz-2.6.32m5'
> ramdisk = '/boot/initrd.img-2.6.32m5'
> extra = 'root=/dev/xvda1 ro console=hvc0 noirqdebug iommu=soft
> swiotlb=force'
> maxmem = '1000'
> memory = '500'
> device_model='/usr/lib/xen/bin/qemu-dm'
> serial='pty'
> disk = [
>     'phy:/dev/vm/tilnes-lucid,hda,w',
>     'phy:/dev/mythtv/recordings,hdb,w',
> ]
> boot='c'
> name = 'tilnes'
> vif = [ 'mac=aa:20:00:00:01:72, bridge=loc' ]
> vfb = [ 'vnc=1,vnclisten=0.0.0.0,vncdisplay=5' ]
>
> usb=1
> usbdevice='tablet'
> monitor=1
> pci = [
>     '0000:08:02.0',
>     '0000:09:08.0',
>     '0000:09:09.0',
>     ]
> vcpus=8
>
>
> The message dump I see (this is only one example; my dmesg is full of
> these):
>
> xennet: skb rides the rocket: 20 frags
> Pid: 3237, comm: nfsd Tainted: G      D    2.6.32m5 #9
> Call Trace:
> <IRQ>  [<ffffffffa005e2d4>] xennet_start_xmit+0x75/0x678 [xen_netfront]
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff8136bd3a>] ? rcu_read_unlock+0x0/0x1e
>  [<ffffffff8100efdf>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff81076983>] ? lock_release+0x1e0/0x1ed
>  [<ffffffff8136ee0e>] dev_hard_start_xmit+0x236/0x2e1
>  [<ffffffff8138114e>] sch_direct_xmit+0x68/0x16f
>  [<ffffffff8136f240>] dev_queue_xmit+0x274/0x3de
>  [<ffffffff8136f130>] ? dev_queue_xmit+0x164/0x3de
>  [<ffffffff8139cf30>] ? dst_output+0x0/0xd
>  [<ffffffff8139e16d>] ip_finish_output2+0x1df/0x222
>  [<ffffffff8139e218>] ip_finish_output+0x68/0x6a
>  [<ffffffff8139e503>] ip_output+0x9c/0xa0
>  [<ffffffff8139e5a2>] ip_local_out+0x20/0x24
>  [<ffffffff8139ebfe>] ip_queue_xmit+0x309/0x37a
>  [<ffffffff8100e871>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff8100e871>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff813b012a>] tcp_transmit_skb+0x648/0x686
>  [<ffffffff813b2654>] tcp_write_xmit+0x808/0x8f7
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff813af7e2>] ? tcp_established_options+0x2e/0xa9
>  [<ffffffff813b279e>] __tcp_push_pending_frames+0x2a/0x58
>  [<ffffffff813ac124>] tcp_data_snd_check+0x24/0xea
>  [<ffffffff813ae464>] tcp_rcv_established+0xdd/0x6d4
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff813b535b>] tcp_v4_do_rcv+0x1ba/0x375
>  [<ffffffff8100efdf>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff813b62d1>] ? tcp_v4_rcv+0x2b3/0x6b7
>  [<ffffffff813b6474>] tcp_v4_rcv+0x456/0x6b7
>  [<ffffffff8139ad27>] ? ip_local_deliver_finish+0x0/0x235
>  [<ffffffff8139ae9b>] ip_local_deliver_finish+0x174/0x235
>  [<ffffffff8139ad6b>] ? ip_local_deliver_finish+0x44/0x235
>  [<ffffffff8139afce>] ip_local_deliver+0x72/0x7c
>  [<ffffffff8139a89d>] ip_rcv_finish+0x3cd/0x3fb
>  [<ffffffff8139ab84>] ip_rcv+0x2b9/0x2f9
>  [<ffffffff813ec764>] ? packet_rcv_spkt+0xd6/0xe1
>  [<ffffffff8136e065>] netif_receive_skb+0x445/0x46f
>  [<ffffffff810c0b92>] ? free_hot_page+0x3a/0x3f
>  [<ffffffffa005f41d>] xennet_poll+0xaf4/0xc7b [xen_netfront]
>  [<ffffffff8136e7ac>] net_rx_action+0xab/0x1df
>  [<ffffffff81076983>] ? lock_release+0x1e0/0x1ed
>  [<ffffffff81053842>] __do_softirq+0xe0/0x1a2
>  [<ffffffff8109e984>] ? handle_level_irq+0xd1/0xda
>  [<ffffffff8126a152>] ? __xen_evtchn_do_upcall+0x12e/0x163
>  [<ffffffff81012cac>] call_softirq+0x1c/0x30
>  [<ffffffff8101428d>] do_softirq+0x41/0x81
>  [<ffffffff81053694>] irq_exit+0x36/0x78
>  [<ffffffff8126a646>] xen_evtchn_do_upcall+0x37/0x47
>  [<ffffffff81012cfe>] xen_do_hypervisor_callback+0x1e/0x30
> <EOI>  [<ffffffff8111dc4a>] ? __bio_add_page+0xee/0x212
>  [<ffffffff8111df9b>] ? bio_alloc+0x10/0x1f
>  [<ffffffff811217e1>] ? mpage_alloc+0x25/0x7d
>  [<ffffffff8111dd9f>] ? bio_add_page+0x31/0x33
>  [<ffffffff81121dde>] ? do_mpage_readpage+0x3d3/0x488
>  [<ffffffff810bba3b>] ? add_to_page_cache_locked+0xcc/0x108
>  [<ffffffff81121fbb>] ? mpage_readpages+0xcb/0x10f
>  [<ffffffff81159aee>] ? ext3_get_block+0x0/0xf9
>  [<ffffffff81159aee>] ? ext3_get_block+0x0/0xf9
>  [<ffffffff81157bbc>] ? ext3_readpages+0x18/0x1a
>  [<ffffffff810c387b>] ? __do_page_cache_readahead+0x140/0x1cd
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff8100efdf>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff810c3924>] ? ra_submit+0x1c/0x20
>  [<ffffffff810c3d1c>] ? ondemand_readahead+0x1de/0x1f1
>  [<ffffffff810c3dc3>] ? page_cache_sync_readahead+0x17/0x1c
>  [<ffffffff81118790>] ? __generic_file_splice_read+0xf0/0x41a
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff811c0c07>] ? rcu_read_unlock+0x0/0x1e
>  [<ffffffff8100efdf>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff81076983>] ? lock_release+0x1e0/0x1ed
>  [<ffffffff811c0c23>] ? rcu_read_unlock+0x1c/0x1e
>  [<ffffffff811c16e2>] ? avc_has_perm_noaudit+0x3b5/0x3c7
>  [<ffffffff810ec8eb>] ? check_object+0x170/0x1a9
>  [<ffffffff8100e871>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff8111770d>] ? spd_release_page+0x0/0x14
>  [<ffffffff811c4d62>] ? selinux_file_permission+0x57/0xae
>  [<ffffffff81118afe>] ? generic_file_splice_read+0x44/0x72
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff811171d4>] ? do_splice_to+0x6c/0x79
>  [<ffffffff8100eff2>] ? check_events+0x12/0x20
>  [<ffffffff811178ed>] ? splice_direct_to_actor+0xc2/0x1a1
>  [<ffffffff8100efdf>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffffa01a0b53>] ? nfsd_direct_splice_actor+0x0/0x12 [nfsd]
>  [<ffffffffa01a0a44>] ? nfsd_vfs_read+0x276/0x385 [nfsd]
>  [<ffffffffa01a115a>] ? nfsd_read+0xa1/0xbf [nfsd]
>  [<ffffffffa00de128>] ? svc_xprt_enqueue+0x22b/0x238 [sunrpc]
>  [<ffffffffa01a7cbf>] ? nfsd3_proc_read+0xe2/0x121 [nfsd]
>  [<ffffffffa00d6551>] ? cache_put+0x2d/0x2f [sunrpc]
>  [<ffffffffa019c36f>] ? nfsd_dispatch+0xec/0x1c7 [nfsd]
>  [<ffffffffa00d2e99>] ? svc_process+0x436/0x637 [sunrpc]
>  [<ffffffffa01a4418>] ? exp_readlock+0x10/0x12 [nfsd]
>  [<ffffffffa019c8c0>] ? nfsd+0xf3/0x13e [nfsd]
>  [<ffffffffa019c7cd>] ? nfsd+0x0/0x13e [nfsd]
>  [<ffffffff8106601d>] ? kthread+0x7a/0x82
>  [<ffffffff81012baa>] ? child_rip+0xa/0x20
>  [<ffffffff81011ce6>] ? int_ret_from_sys_call+0x7/0x1b
>  [<ffffffff81012526>] ? retint_restore_args+0x5/0x6
>  [<ffffffff81012ba0>] ? child_rip+0x0/0x20
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
