Hi, Atsushi
Atsushi SAKAI wrote:
> I have two questions about this.
>
> 1) How do I reproduce your deadlock?
> Could you describe the test environment needed to reproduce it?
> Is it easily reproduced by running xenmon.py or xentrace
> with one or two guest domain(s)?
> Or is any additional condition needed?
This deadlock can easily be reproduced by running xenmon.py without any
guest domains.
Furthermore, it occurs even more readily after applying my patch to xenbaked.c.
The subject of that patch is "[PATCH] Fix access to trace buffer after
xentrace changes".
> 2) About the fix:
> I think __trace_var() should be fixed for this issue, not schedule().
> Can't this issue be fixed by modifying __trace_var()?
Thanks for your advice.
I agree with you.
I fixed this deadlock by using a tasklet in trace.c.
Here is the patch.
Thanks,
Naoki Nishiguchi
diff -r 77dec8732cde xen/common/trace.c
--- a/xen/common/trace.c Wed Apr 23 16:58:44 2008 +0100
+++ b/xen/common/trace.c Thu Apr 24 15:56:37 2008 +0900
@@ -69,6 +69,13 @@ static cpumask_t tb_cpu_mask = CPU_MASK_
 /* which tracing events are enabled */
 static u32 tb_event_mask = TRC_ALL;
+static void trace_notify_guest(unsigned long unused)
+{
+    send_guest_global_virq(dom0, VIRQ_TBUF);
+}
+
+static DECLARE_TASKLET(trace_tasklet, trace_notify_guest, 0);
+
 /**
  * alloc_trace_bufs - performs initialization of the per-cpu trace buffers.
  *
@@ -506,7 +513,7 @@ void __trace_var(u32 event, int cycles,
     /* Notify trace buffer consumer that we've crossed the high water mark. */
     if ( started_below_highwater &&
          (calc_unconsumed_bytes(buf) >= t_buf_highwater) )
-        send_guest_global_virq(dom0, VIRQ_TBUF);
+        tasklet_schedule(&trace_tasklet);
 }
 /*
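
For reference, below is a minimal, self-contained sketch of the idea behind
the change (plain C with pthreads, not Xen code; all names here are
illustrative). The hot path, which may run with the scheduler lock held, only
marks that a notification is needed; the notification itself is sent later
from a deferred context, after the lock has been dropped, which is what the
tasklet achieves in the patch.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;
static bool notify_pending;

/* Stands in for send_guest_global_virq(dom0, VIRQ_TBUF). */
static void notify_consumer(void)
{
    printf("notify trace buffer consumer\n");
}

/* Hot path: may be entered with sched_lock held, so it must not notify
 * the consumer directly; it only records that a notification is needed. */
static void trace_event(void)
{
    notify_pending = true;
}

/* Deferred context: runs later with no locks held, like the tasklet in
 * the patch, and performs the actual notification. */
static void run_deferred_work(void)
{
    if ( notify_pending )
    {
        notify_pending = false;
        notify_consumer();
    }
}

int main(void)
{
    pthread_mutex_lock(&sched_lock);   /* model schedule() holding its lock */
    trace_event();                     /* safe: no notification under the lock */
    pthread_mutex_unlock(&sched_lock);

    run_deferred_work();               /* notification happens outside the lock */
    return 0;
}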