WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

[Xen-devel] Problem with qemu-dm

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Problem with qemu-dm
From: Roberto Scudeller <beto.rvs@xxxxxxxxx>
Date: Wed, 6 Oct 2010 20:40:08 -0300
Delivery-date: Wed, 06 Oct 2010 16:41:00 -0700

Hi all,

We are using Xen 4.0.1 with kernel 2.6.32.21 (x86_64), running Windows Server 2003 64-bit in the domUs with the GPLPV drivers.
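
For context, an HVM guest of this kind would be defined roughly as in the sketch below; the paths, bridge name, and disk image location are illustrative, not taken from our actual configuration:

# /etc/xen/wk3-64-ent.cfg -- illustrative HVM guest config, not our exact file
kernel       = "/usr/lib/xen/boot/hvmloader"
builder      = "hvm"
name         = "wk3-64-ent.1"
memory       = 4096
vcpus        = 4
disk         = [ "file:/var/lib/xen/images/wk3-64-ent.1.img,hda,w" ]
vif          = [ "type=ioemu, bridge=xenbr0" ]
device_model = "/usr/lib/xen/bin/qemu-dm"
boot         = "c"
vnc          = 1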

When starting several instances in sequence, we notice that the processor load becomes very heavy; most of it goes to the qemu-dm processes. Even the qemu-dm processes of domU instances that were already running become loaded (a quick way to confirm this is sketched after the listing below).
Looking at the Xen load, only the domUs started first accumulate processing time; the ones started later in the sequence barely use any:
[root@localhost ~]# xl list
Name                                        ID   Mem VCPUs    State    Time(s)
Domain-0                                     0  3602     8        r--  19232.7
wk3-64-ent.1                                 1  4099     4        ---   1548.7
wk3-64-ent.2                                 2  4099     4        ---   2181.8
wk3-64-ent.3                                 3  4099     4        ---   2362.0
wk3-64-ent.4                                 4  4099     4        r--   2407.3
wk3-64-ent.5                                 5  4099     4        ---   1662.9
wk3-64-ent.6                                 6  4099     4        ---    960.3
wk3-64-ent.7                                 7  4099     1        r--      6.4
wk3-64-ent.8                                 8  4099     1        r--      5.6
wk3-64-ent.9                                 9  4099     4        ---     10.9
wk3-64-ent.10                               10  4099     1        ---      4.1
wk3-64-ent.11                               11  4099     1        ---      3.4
wk3-64-ent.12                               12  4099     1        ---      3.0
wk3-64-ent.13                               13  4099     1        ---      2.6
wk3-64-ent.14                               14  4099     1        ---      2.5
wk3-64-ent.15                               15  4099     1        ---      2.3
wk3-64-ent.16                               16  4099     1        ---      2.5
wk3-64-ent.17                               17  4099     1        ---      2.2
wk3-64-ent.18                               18  4099     1        ---      2.0
wk3-64-ent.19                               19  4099     1        ---      1.5
wk3-64-ent.20                               20  4099     1        ---      1.1

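As a quick check of where the dom0 CPU time is going, the qemu-dm processes can be sorted by CPU usage with plain ps (nothing Xen-specific about this):

[root@localhost ~]# ps -eo pid,pcpu,etime,args --sort=-pcpu | grep [q]emu-dm
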
Looking at the console of any of these instances, I see only the initial boot screen, stuck waiting for the QEMU disks.

Eventually, we get the exception below. After it, the server locks up and needs to be restarted:

Oct  6 13:50:28 localhost kernel: [ 9667.958935] ------------[ cut here ]------------
Oct  6 13:50:28 localhost kernel: [ 9667.958965] WARNING: at /root/xen4/xen-4.0-testing.hg/linux-2.6-pvops.git/net/sched/sch_generic.c:261 dev_watchdog+0x100/0x165()
Oct  6 13:50:28 localhost kernel: [ 9667.958996] Hardware name: PowerEdge M610
Oct  6 13:50:28 localhost kernel: [ 9667.959011] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out
Oct  6 13:50:28 localhost kernel: [ 9667.959013] Modules linked in: ebtable_nat tun bridge stp ebt_ip ebtable_filter ebtables bonding xt_state xt_multiport megaraid_sas
Oct  6 13:50:28 localhost kernel: [ 9667.959102] Pid: 27902, comm: qemu-dm Not tainted 2.6.32.21 #2
Oct  6 13:50:28 localhost kernel: [ 9667.959120] Call Trace:
Oct  6 13:50:28 localhost kernel: [ 9667.959131]  <IRQ>  [<ffffffff81483960>] ? dev_watchdog+0x100/0x165
Oct  6 13:50:28 localhost kernel: [ 9667.959162]  [<ffffffff81074dbf>] warn_slowpath_common+0x77/0x8f
Oct  6 13:50:28 localhost kernel: [ 9667.959183]  [<ffffffff81074e87>] warn_slowpath_fmt+0x9f/0xa1
Oct  6 13:50:28 localhost kernel: [ 9667.959204]  [<ffffffff810684c3>] ? __enqueue_entity+0x74/0x76
Oct  6 13:50:28 localhost kernel: [ 9667.959225]  [<ffffffff81038731>] ? xen_force_evtchn_callback+0xd/0xf
Oct  6 13:50:28 localhost kernel: [ 9667.959246]  [<ffffffff81038ea2>] ? check_events+0x12/0x20
Oct  6 13:50:28 localhost kernel: [ 9667.959481]  [<ffffffff81064bce>] ? __raw_local_irq_save+0x12/0x18
Oct  6 13:50:28 localhost kernel: [ 9667.959715]  [<ffffffff81482da6>] ? __netif_tx_lock+0x16/0x1f
Oct  6 13:50:28 localhost kernel: [ 9667.959946]  [<ffffffff81482e27>] ? netif_tx_lock+0x41/0x69
Oct  6 13:50:28 localhost kernel: [ 9667.960179]  [<ffffffff8146db0c>] ? netdev_drivername+0x43/0x4a
Oct  6 13:50:28 localhost kernel: [ 9667.960413]  [<ffffffff81483960>] dev_watchdog+0x100/0x165
Oct  6 13:50:28 localhost kernel: [ 9667.960649]  [<ffffffff8106c24b>] ? try_to_wake_up+0x2af/0x2c1
Oct  6 13:50:28 localhost kernel: [ 9667.960885]  [<ffffffff8158f4d3>] ? _spin_unlock_irqrestore+0xe/0x10
Oct  6 13:50:28 localhost kernel: [ 9667.961120]  [<ffffffff81080803>] ? process_timeout+0x0/0xb
Oct  6 13:50:28 localhost kernel: [ 9667.961353]  [<ffffffff81483860>] ? dev_watchdog+0x0/0x165
Oct  6 13:50:28 localhost kernel: [ 9667.961588]  [<ffffffff810804d0>] run_timer_softirq+0x172/0x20e
Oct  6 13:50:28 localhost kernel: [ 9667.961824]  [<ffffffff8107aad1>] __do_softirq+0xcb/0x18b
Oct  6 13:50:28 localhost kernel: [ 9667.962059]  [<ffffffff8103cdec>] call_softirq+0x1c/0x30
Oct  6 13:50:28 localhost kernel: [ 9667.962290]  [<ffffffff8103e5e7>] do_softirq+0x4d/0x8e
Oct  6 13:50:28 localhost kernel: [ 9667.962522]  [<ffffffff8107a964>] irq_exit+0x36/0x75
Oct  6 13:50:28 localhost kernel: [ 9667.962755]  [<ffffffff812bbdd4>] xen_evtchn_do_upcall+0x33/0x43
Oct  6 13:50:28 localhost kernel: [ 9667.962990]  [<ffffffff8103ce3e>] xen_do_hypervisor_callback+0x1e/0x30
Oct  6 13:50:28 localhost kernel: [ 9667.963224]  <EOI>  [<ffffffff8158f521>] ? _spin_lock_irqsave+0xd/0x24
Oct  6 13:50:28 localhost kernel: [ 9667.963467]  [<ffffffff8100922a>] ? hypercall_page+0x22a/0x1000
Oct  6 13:50:28 localhost kernel: [ 9667.963701]  [<ffffffff8100922a>] ? hypercall_page+0x22a/0x1000
Oct  6 13:50:28 localhost kernel: [ 9667.963935]  [<ffffffff81038731>] ? xen_force_evtchn_callback+0xd/0xf
Oct  6 13:50:28 localhost kernel: [ 9667.964171]  [<ffffffff81038ea2>] ? check_events+0x12/0x20
Oct  6 13:50:28 localhost kernel: [ 9667.964402]  [<ffffffff8158f521>] ? _spin_lock_irqsave+0xd/0x24
Oct  6 13:50:28 localhost kernel: [ 9667.964636]  [<ffffffff81038e49>] ? xen_irq_enable_direct_end+0x0/0x7
Oct  6 13:50:28 localhost kernel: [ 9667.964872]  [<ffffffff812c0e36>] ? __spin_unlock_irq+0x1b/0x1d
Oct  6 13:50:28 localhost kernel: [ 9667.965107]  [<ffffffff812c1431>] ? evtchn_write+0xe9/0x103
Oct  6 13:50:28 localhost kernel: [ 9667.965343]  [<ffffffff811107de>] ? vfs_write+0xab/0x105
Oct  6 13:50:28 localhost kernel: [ 9667.965576]  [<ffffffff811108f2>] ? sys_write+0x47/0x6c
Oct  6 13:50:28 localhost kernel: [ 9667.965808]  [<ffffffff8103bc82>] ? system_call_fastpath+0x16/0x1b
Oct  6 13:50:28 localhost kernel: [ 9667.966044] ---[ end trace 5ecb364f57668210 ]---
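
The NETDEV WATCHDOG line indicates that a bnx2 transmit queue on eth1 timed out. For completeness, a generic way to inspect the NIC and attempt a recovery without a full reboot would be something like the commands below; this works around the symptom at best, and since bonding is loaded the interface may need to be handled through the bond instead:

[root@localhost ~]# ethtool -S eth1 | head -n 20    # NIC counters; look for error/drop statistics
[root@localhost ~]# ip link set eth1 down && ip link set eth1 up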

We would appreciate any help with this issue.

Regards

--
Roberto Scudeller
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel