WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH 15 of 16] credit2: Different unbalance tolerance for underloaded and overloaded queues
From: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Date: Thu, 23 Dec 2010 12:38:47 +0000
Cc: george.dunlap@xxxxxxxxxxxxx
Delivery-date: Thu, 23 Dec 2010 04:54:37 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <patchbomb.1293107912@xxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <patchbomb.1293107912@xxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mercurial-patchbomb/1.6.3
Allow the "unbalance tolerance" -- the load difference between two runqueues
that will be tolerated before rebalancing -- to differ depending on how busy
the runqueue is.  If the load is less than 100%, default to a tolerance of
1.0; if it's more than 100%, default to a tolerance of 0.125.

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

diff -r dca9ad897502 -r 975588ecb94e xen/common/sched_credit2.c
--- a/xen/common/sched_credit2.c        Thu Dec 23 12:27:14 2010 +0000
+++ b/xen/common/sched_credit2.c        Thu Dec 23 12:27:27 2010 +0000
@@ -193,6 +193,10 @@
 int opt_load_window_shift=18;
 #define  LOADAVG_WINDOW_SHIFT_MIN 4
 integer_param("credit2_load_window_shift", opt_load_window_shift);
+int opt_underload_balance_tolerance=0;
+integer_param("credit2_balance_under", opt_underload_balance_tolerance);
+int opt_overload_balance_tolerance=-3;
+integer_param("credit2_balance_over", opt_overload_balance_tolerance);
 
 /*
  * Per-runqueue data
@@ -1232,14 +1236,34 @@
 
     /* Minimize holding the big lock */
     spin_unlock(&prv->lock);
-
     if ( max_delta_rqi == -1 )
         goto out;
 
-    /* Don't bother with load differences less than 25%. */
-    if ( load_delta < (1ULL<<(prv->load_window_shift - 2)) )
-        goto out;
+    {
+        s_time_t load_max;
+        int cpus_max;
 
+        
+        load_max = lrqd->b_avgload;
+        if ( orqd->b_avgload > load_max )
+            load_max = orqd->b_avgload;
+
+        cpus_max=cpus_weight(lrqd->active);
+        if ( cpus_weight(orqd->active) > cpus_max )
+            cpus_max = cpus_weight(orqd->active);
+
+        /* If we're under 100% capacity, only shift if load difference
+         * is > 1.  Otherwise, shift if under 12.5% */
+        if ( load_max < (1ULL<<(prv->load_window_shift))*cpus_max )
+        {
+        if ( load_delta < (1ULL<<(prv->load_window_shift+opt_underload_balance_tolerance)) )
+                 goto out;
+        }
+        else
+            if ( load_delta < (1ULL<<(prv->load_window_shift+opt_overload_balance_tolerance)) )
+                goto out;
+    }
+             
     /* Try to grab the other runqueue lock; if it's been taken in the
      * meantime, try the process over again.  This can't deadlock
      * because if it doesn't get any other rqd locks, it will simply
@@ -1982,6 +2006,8 @@
            " Use at your own risk.\n");
 
     printk(" load_window_shift: %d\n", opt_load_window_shift);
+    printk(" underload_balance_tolerance: %d\n", opt_underload_balance_tolerance);
+    printk(" overload_balance_tolerance: %d\n", opt_overload_balance_tolerance);
 
     if ( opt_load_window_shift < LOADAVG_WINDOW_SHIFT_MIN )
     {

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel