Re: [Xen-devel] [PATCH] Fix deadlock in schedule.c at TRACE mode

To: Atsushi SAKAI <sakaia@xxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] Fix deadlock in schedule.c at TRACE mode
From: NISHIGUCHI Naoki <nisiguti@xxxxxxxxxxxxxx>
Date: Thu, 24 Apr 2008 16:03:04 +0900
Delivery-date: Thu, 24 Apr 2008 00:03:38 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200804240542.m3O5gswi016365@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <48100DF2.5000605@xxxxxxxxxxxxxx> <200804240542.m3O5gswi016365@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.12 (Windows/20080213)
Hi Atsushi,

Atsushi SAKAI wrote:
> I have two questions about this.
> 
> 1)How to reproduce your deadlock ?
>   Would you give me your test environment to reproduce this deadlock?
>   Is it easily reproduced by running xenmon.py or xentrace 
>   with one or two guest domain(s)?
>  or Any additional condition needed?

This deadlock can easily be reproduced by running xenmon.py with no
guest domains at all.
Furthermore, it occurs even more readily with my patch to xenbaked.c
applied.

The subject of that patch is "[PATCH] Fix access to trace buffer after
xentrace changes".

> 2) About the fix itself:
>    I think __trace_var() should be fixed for this issue, not schedule().
>    Can't this issue be fixed by modifying __trace_var()?

Thanks for your advice.
I agree with you.
I fixed this deadlock with a tasklet in trace.c: __trace_var() no longer
sends VIRQ_TBUF directly, but defers the notification to a tasklet, so
the virq is no longer sent while the caller still holds a scheduler lock.
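
For reference, here is a sketch of the cycle as I read the code (the
exact call path below is my inference, not part of the patch):

/*
 * schedule()                          -- holds the per-CPU schedule lock
 *   __trace_var()                     -- via the TRACE_* macros
 *     send_guest_global_virq(dom0, VIRQ_TBUF)
 *       vcpu_kick() -> vcpu_wake()
 *         vcpu_schedule_lock...()     -- tries to retake the lock that
 *                                        schedule() already holds: deadlock
 */

tasklet_schedule() breaks this cycle because, as far as I can tell, it
only marks the tasklet pending and raises a softirq; the virq is then
sent from TASKLET_SOFTIRQ context, after schedule() has dropped its lock.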

Here is the patch.

Thanks,
Naoki Nishiguchi
diff -r 77dec8732cde xen/common/trace.c
--- a/xen/common/trace.c        Wed Apr 23 16:58:44 2008 +0100
+++ b/xen/common/trace.c        Thu Apr 24 15:56:37 2008 +0900
@@ -69,6 +69,13 @@ static cpumask_t tb_cpu_mask = CPU_MASK_
 /* which tracing events are enabled */
 static u32 tb_event_mask = TRC_ALL;
 
+static void trace_notify_guest(unsigned long unused)
+{
+    send_guest_global_virq(dom0, VIRQ_TBUF);
+}
+
+static DECLARE_TASKLET(trace_tasklet, trace_notify_guest, 0);
+
 /**
  * alloc_trace_bufs - performs initialization of the per-cpu trace buffers.
  *
@@ -506,7 +513,7 @@ void __trace_var(u32 event, int cycles, 
     /* Notify trace buffer consumer that we've crossed the high water mark. */
     if ( started_below_highwater &&
          (calc_unconsumed_bytes(buf) >= t_buf_highwater) )
-        send_guest_global_virq(dom0, VIRQ_TBUF);
+        tasklet_schedule(&trace_tasklet);
 }
 
 /*
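
For completeness, this is how the two hunks fit together once applied (a
paraphrase of the patched code with comments added, not further changes;
buf, started_below_highwater and t_buf_highwater come from the existing
__trace_var()):

    /* In __trace_var(), once the record has been written: */
    if ( started_below_highwater &&
         (calc_unconsumed_bytes(buf) >= t_buf_highwater) )
        /* Only mark the notification pending here; this is safe even
         * though the caller (e.g. schedule()) may hold a scheduler lock. */
        tasklet_schedule(&trace_tasklet);

    /* Runs later from softirq context, with no scheduler lock held: */
    static void trace_notify_guest(unsigned long unused)
    {
        send_guest_global_virq(dom0, VIRQ_TBUF);
    }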
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel