xen-ia64-devel

[Xen-ia64-devel] [PATCH] protect ridblock_owner.

To: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-ia64-devel] [PATCH] protect ridblock_owner.
From: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Date: Tue, 28 Oct 2008 14:55:13 +0900
Delivery-date: Mon, 27 Oct 2008 22:55:17 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6i
[IA64] protect ridblock_owner.

Protect ridblock_owner with a spin lock.
deallocate_rid_range() is called by arch_domain_destroy(), which
runs as an RCU callback, while allocate_rid_range() is called from
the domctl hypercall, so access to ridblock_owner is racy.
Protect it with a spin lock.
So far xend probably serializes domain creation, so this has not
caused any issues in practice.

Signed-off-by: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
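
For readers unfamiliar with the pattern, below is a minimal userspace
sketch of what the patch does: the scan for a free run of blocks and
the marking of that run as owned happen under a single lock, so two
concurrent allocators can never both observe the same blocks as free,
and the deallocation path takes the same lock. This is only an
analogue, not Xen code: the names (owner, alloc_blocks, free_blocks,
MAX_BLOCKS) are invented for illustration, and pthread mutexes stand
in for Xen's DEFINE_SPINLOCK()/spin_lock()/spin_unlock().

/* Minimal userspace analogue of the locking pattern in this patch.
 * Names are illustrative only; build with: cc -pthread sketch.c */
#include <pthread.h>
#include <stddef.h>

#define MAX_BLOCKS 64

static pthread_mutex_t owner_lock = PTHREAD_MUTEX_INITIALIZER;
static void *owner[MAX_BLOCKS];	/* NULL means the block is free */

/* Claim n contiguous blocks for d; return the first index, or -1.
 * Scan and mark are done under one lock, so no two callers can
 * both see the same run as free. */
int alloc_blocks(void *d, int n)
{
	int i, j;

	pthread_mutex_lock(&owner_lock);
	/* skip block 0, as the patch does for the meta-physical block */
	for (i = n; i + n <= MAX_BLOCKS; i += n) {
		for (j = i; j < i + n; ++j)
			if (owner[j] != NULL)
				break;
		if (j == i + n) {	/* found a free run: mark it */
			for (j = i; j < i + n; ++j)
				owner[j] = d;
			pthread_mutex_unlock(&owner_lock);
			return i;
		}
	}
	pthread_mutex_unlock(&owner_lock);	/* failure path unlocks too */
	return -1;
}

/* Release a run of blocks; also under the lock, since this may run
 * concurrently with alloc_blocks() (cf. the RCU-callback path). */
void free_blocks(int start, int n)
{
	int i;

	pthread_mutex_lock(&owner_lock);
	for (i = start; i < start + n; ++i)
		owner[i] = NULL;
	pthread_mutex_unlock(&owner_lock);
}

Note that in the patch below the early-return failure path
(i >= MAX_RID_BLOCKS) needs its own spin_unlock(); dropping it
would leave the lock held on return.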

diff --git a/xen/arch/ia64/xen/regionreg.c b/xen/arch/ia64/xen/regionreg.c
--- a/xen/arch/ia64/xen/regionreg.c
+++ b/xen/arch/ia64/xen/regionreg.c
@@ -100,6 +100,7 @@ static unsigned long allocate_metaphysic
 
 static int implemented_rid_bits = 0;
 static int mp_rid_shift;
+static DEFINE_SPINLOCK(ridblock_lock);
 static struct domain *ridblock_owner[MAX_RID_BLOCKS] = { 0 };
 
 void __init init_rid_allocator (void)
@@ -169,6 +170,7 @@ int allocate_rid_range(struct domain *d,
        n_rid_blocks = 1UL << (ridbits - IA64_MIN_IMPL_RID_BITS);
        
        // skip over block 0, reserved for "meta-physical mappings (and Xen)"
+       spin_lock(&ridblock_lock);
        for (i = n_rid_blocks; i < MAX_RID_BLOCKS; i += n_rid_blocks) {
                if (ridblock_owner[i] == NULL) {
                        for (j = i; j < i + n_rid_blocks; ++j) {
@@ -182,16 +184,19 @@ int allocate_rid_range(struct domain *d,
                                break;
                }
        }
-       
-       if (i >= MAX_RID_BLOCKS)
+
+       if (i >= MAX_RID_BLOCKS) {
+               spin_unlock(&ridblock_lock);
                return 0;
-       
+       }
+
        // found an unused block:
        //   (i << min_rid_bits) <= rid < ((i + n) << min_rid_bits)
        // mark this block as owned
        for (j = i; j < i + n_rid_blocks; ++j)
                ridblock_owner[j] = d;
-       
+       spin_unlock(&ridblock_lock);
+
        // setup domain struct
        d->arch.rid_bits = ridbits;
        d->arch.starting_rid = i << IA64_MIN_IMPL_RID_BITS;
@@ -221,11 +226,12 @@ int deallocate_rid_range(struct domain *
        if (d->arch.rid_bits == 0)
                return 1;
 
-       
+       spin_lock(&ridblock_lock);
        for (i = rid_block_start; i < rid_block_end; ++i) {
                ASSERT(ridblock_owner[i] == d);
                ridblock_owner[i] = NULL;
        }
+       spin_unlock(&ridblock_lock);
 
        d->arch.rid_bits = 0;
        d->arch.starting_rid = 0;


-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
