Re: [Xen-devel] [RFC][PATCH] 1/3] [XEN] Use explicit bit sized fields for exported xentrace data.

To: "Tony Breeds" <tony@xxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC][PATCH] 1/3] [XEN] Use explicit bit sized fields for exported xentrace data.
From: "George Dunlap " <dunlapg@xxxxxxxxx>
Date: Thu, 30 Nov 2006 11:58:17 -0500
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <200611300601.kAU614qa025335@xxxxxxxxxxxxxxxxxxxxxxxxx>
References: <1164866397.55184.656346817748.qpush@thor> <200611300601.kAU614qa025335@xxxxxxxxxxxxxxxxxxxxxxxxx>
Hmm... this has the unfortunate side-effect of doubling the size of
the trace data, and effectively halving the trace buffer's ability to
avoid dropped records.  My moderate-length traces are already in the
gigabyte range, and I occasionally lose trace records even with a
buffer size of 256.  It would be really nice if we could avoid that.
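
For a rough sense of the cost (illustrative only; exact sizes depend on
the compiler's padding): on a 32-bit build, where unsigned long is 4
bytes, the five data words grow from 20 to 40 bytes per record, so the
same buffer holds noticeably fewer records before it starts dropping.
A standalone comparison of the two layouts:

/* Illustrative comparison of the old and new t_rec layouts from
 * xen/include/public/trace.h, as quoted in the patch below.  Compile
 * and run on the target architecture to see the per-record cost. */
#include <stdint.h>
#include <stdio.h>

struct t_rec_old {
    uint64_t cycles;          /* cycle counter timestamp */
    uint32_t event;           /* event ID                */
    unsigned long data[5];    /* event data items        */
};

struct t_rec_new {
    uint64_t cycles;          /* cycle counter timestamp */
    uint32_t event;           /* event ID                */
    uint64_t data[5];         /* widened event data items */
};

int main(void)
{
    /* On a typical i386 build this prints 32 vs 52 bytes; on x86_64
     * unsigned long is already 8 bytes, so the two sizes match. */
    printf("old: %zu bytes, new: %zu bytes\n",
           sizeof(struct t_rec_old), sizeof(struct t_rec_new));
    return 0;
}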

I happen to be using the VMENTER/VMEXIT tracing, which could be
consolidated into one record if we went to a 64-bit trace.  Is anyone
else doing high-bandwidth tracing that this would affect in a
significantly negative way?
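
As a sketch of the kind of consolidation I mean (the event ID, helper
name, and field layout below are hypothetical, not existing Xen code):
with uint64_t data fields, a single record could carry the full 64-bit
guest RIP together with the exit reason and qualification, instead of
spreading that information over a VMEXIT/VMENTER pair of records:

/* Hypothetical consolidated VMEXIT/VMENTER record, for illustration
 * only.  TRC_HVM_VMEXIT_COMBINED and trace_vmexit_combined() do not
 * exist in the tree; they just show what one 64-bit record could hold. */
#define TRC_HVM_VMEXIT_COMBINED  0x00081f00   /* hypothetical event ID */

static inline void trace_vmexit_combined(uint64_t guest_rip,
                                         uint64_t exit_reason,
                                         uint64_t exit_qualification)
{
    /* trace() as widened by this patch: a 32-bit event ID plus five
     * 64-bit data items, so a full guest RIP fits in one field. */
    trace(TRC_HVM_VMEXIT_COMBINED, guest_rip, exit_reason,
          exit_qualification, 0, 0);
}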

-George

On 11/30/06, Tony Breeds <tony@xxxxxxxxxxxxxxxxxx> wrote:
Signed-off-by: Tony Breeds <tony@xxxxxxxxxxxxxxxxxx>
---

 xen/common/trace.c         |    6 +++---
 xen/include/public/trace.h |    2 +-
 xen/include/xen/trace.h    |   14 +++++++-------
 3 files changed, 11 insertions(+), 11 deletions(-)

Index: xen-unstable.hg-mainline.xentrace/xen/common/trace.c
===================================================================
--- xen-unstable.hg-mainline.xentrace.orig/xen/common/trace.c
+++ xen-unstable.hg-mainline.xentrace/xen/common/trace.c
@@ -46,7 +46,7 @@ static int nr_recs;
 static int t_buf_highwater;

 /* Number of records lost due to per-CPU trace buffer being full. */
-static DEFINE_PER_CPU(unsigned long, lost_records);
+static DEFINE_PER_CPU(uint64_t, lost_records);

 /* a flag recording whether initialization has been done */
 /* or more properly, if the tbuf subsystem is enabled right now */
@@ -228,8 +228,8 @@ int tb_control(xen_sysctl_tbuf_op_t *tbc
  * failure, otherwise 0.  Failure occurs only if the trace buffers are not yet
  * initialised.
  */
-void trace(u32 event, unsigned long d1, unsigned long d2,
-           unsigned long d3, unsigned long d4, unsigned long d5)
+void trace(uint32_t event, uint64_t d1, uint64_t d2, uint64_t d3, uint64_t d4,
+                           uint64_t d5)
 {
     struct t_buf *buf;
     struct t_rec *rec;
Index: xen-unstable.hg-mainline.xentrace/xen/include/public/trace.h
===================================================================
--- xen-unstable.hg-mainline.xentrace.orig/xen/include/public/trace.h
+++ xen-unstable.hg-mainline.xentrace/xen/include/public/trace.h
@@ -76,7 +76,7 @@
 struct t_rec {
     uint64_t cycles;          /* cycle counter timestamp */
     uint32_t event;           /* event ID                */
-    unsigned long data[5];    /* event data items        */
+    uint64_t data[5];         /* event data items        */
 };

 /*
Index: xen-unstable.hg-mainline.xentrace/xen/include/xen/trace.h
===================================================================
--- xen-unstable.hg-mainline.xentrace.orig/xen/include/xen/trace.h
+++ xen-unstable.hg-mainline.xentrace/xen/include/xen/trace.h
@@ -33,19 +33,19 @@ void init_trace_bufs(void);
 /* used to retrieve the physical address of the trace buffers */
 int tb_control(struct xen_sysctl_tbuf_op *tbc);

-void trace(u32 event, unsigned long d1, unsigned long d2,
-           unsigned long d3, unsigned long d4, unsigned long d5);
+void trace(uint32_t event, uint64_t d1, uint64_t d2, uint64_t d3, uint64_t d4,
+                           uint64_t d5);

 /* Avoids troubling the caller with casting their arguments to a trace macro */
 #define trace_do_casts(e,d1,d2,d3,d4,d5) \
     do {                                 \
         if ( unlikely(tb_init_done) )    \
             trace(e,                     \
-                 (unsigned long)d1,      \
-                 (unsigned long)d2,      \
-                 (unsigned long)d3,      \
-                 (unsigned long)d4,      \
-                 (unsigned long)d5);     \
+                 (uint64_t)d1,           \
+                 (uint64_t)d2,           \
+                 (uint64_t)d3,           \
+                 (uint64_t)d4,           \
+                 (uint64_t)d5);          \
     } while ( 0 )

 /* Convenience macros for calling the trace function. */

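(As a usage sketch only: the caller below is hypothetical and not part
of the patch.  With the casts widened to uint64_t, a 64-bit quantity
such as a guest physical address passes through intact even on a
32-bit hypervisor, where the old unsigned long casts would have
truncated it.)

/* Hypothetical trace point, for illustration; TRC_MY_EVENT and
 * my_trace_point() are not defined anywhere in the tree. */
#define TRC_MY_EVENT 0x0001f001   /* hypothetical event ID */

static void my_trace_point(uint64_t gpa, uint32_t flags)
{
    /* trace_do_casts() now casts every argument to uint64_t, so the
     * upper half of gpa is preserved even on a 32-bit build. */
    trace_do_casts(TRC_MY_EVENT, gpa, flags, 0, 0, 0);
}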


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
