This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] The overhead of VCPU migration in Xen

To: "xen-devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] The overhead of VCPU migration in Xen
From: "michaeli.zhi" <michaeli.zhi@xxxxxxxxx>
Date: Fri, 20 Nov 2009 15:26:28 +0800
Hi, everyone.
I have been studying the code of Xen's credit scheduler, and I am confused about the overhead of VCPU migration.
As I understand it, migrating a VCPU just means returning a "reasonable" VCPU from a peer PCPU's run queue
to the PCPU that is busy with another VCPU, after a few checks beforehand (e.g., comparing VCPU priority and affinity).
In other words, only a pointer is returned.
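To make the question concrete, here is a minimal sketch of the steal-a-runnable-VCPU step described above. This is not Xen's actual code: the function name `runq_steal`, the struct layout, and the linked-list run queue are all illustrative simplifications (Xen's real logic lives in `csched_runq_steal()` in sched_credit.c and is considerably more involved).

```c
#include <stddef.h>

/* Hypothetical, simplified model of a credit-scheduler run queue.
 * Lower priority value = more urgent.  Affinity is a PCPU bitmask. */
struct vcpu {
    int priority;
    unsigned long affinity;
    struct vcpu *next;      /* singly linked run queue */
};

/* Scan a peer PCPU's run queue and return a VCPU worth stealing,
 * or NULL if none qualifies.  The "migration" itself is just
 * unlinking the VCPU and handing its pointer back to the caller. */
struct vcpu *runq_steal(struct vcpu **peer_runq, int idle_pcpu,
                        int cur_priority)
{
    for (struct vcpu **pp = peer_runq; *pp != NULL; pp = &(*pp)->next) {
        struct vcpu *v = *pp;
        /* Steal only if the candidate outranks what we would run
         * otherwise AND its affinity mask allows the idle PCPU. */
        if (v->priority < cur_priority &&
            (v->affinity & (1UL << idle_pcpu))) {
            *pp = v->next;  /* unlink from the peer queue */
            v->next = NULL;
            return v;       /* a pointer is all that moves */
        }
    }
    return NULL;
}
```

As the sketch shows, no VCPU state is copied at this point; the cost the question asks about would have to come from the scan itself or from side effects of running the VCPU on a different PCPU afterwards.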
My question is: where does the overhead come from?
Is executing the load-balancing code itself expensive,
or does migrating a VCPU introduce some kind of cache misses?
Best regards

Xen-devel mailing list