[Xen-devel] unnecessary VCPU migration happens again

To: "Emmanuel Ackaouy" <ack@xxxxxxxxxxxxx>
Subject: [Xen-devel] unnecessary VCPU migration happens again
From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Date: Fri, 1 Dec 2006 18:11:32 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>

Emmanuel,

I found that unnecessary VCPU migration happens again.


My environment is:

IPF (Itanium), two sockets, two cores per socket, 1 thread per core,
so there are 4 cores (4 LPs) in total.

There are 3 domains, all of them UP,
so there are 3 VCPUs in total.

One is domain0; the other two are VTI domains.

I found that there are a lot of VCPU migrations.


This is caused by the code segment below (quoted in full at the end
of this mail) in the function csched_cpu_pick. When I comment out
this code segment, there are no migrations in the above environment.
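For reference, a minimal way to disable the pass for such a test
(equivalent to commenting it out) is an #if 0 guard around the
quoted loop:

#if 0   /* temporarily disable the idler-preference pass */
        while ( !cpus_empty(cpus) )
        {
                /* ... loop body as quoted at the end of this mail ... */
        }
#endif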



I have a little analysis of this code.

This code handles multi-core and multi-thread, which is very good:
if two VCPUs run on LPs (logical processors) that belong to the same
core, performance is bad, so if there are free LPs, we should let
these two VCPUs run on different cores.
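To make the topology concrete, here is a small sketch of what the
scheduler's topology masks would look like on the box described
above. The mask values are assumptions filled in by hand from that
description (sibling map = threads sharing a core, core map = LPs
sharing a socket), not dumped from a real machine:

#include <stdio.h>

int main(void)
{
    /* One bit per LP.  LP0/LP1 sit on socket 0, LP2/LP3 on socket 1;
     * with 1 thread per core, each LP is its own sibling group. */
    unsigned cpu_core_map[4]    = { 0x3, 0x3, 0xc, 0xc };
    unsigned cpu_sibling_map[4] = { 0x1, 0x2, 0x4, 0x8 };

    for (int lp = 0; lp < 4; lp++)
        printf("LP%d: same-socket mask 0x%x, same-core mask 0x%x\n",
               lp, cpu_core_map[lp], cpu_sibling_map[lp]);
    return 0;
}

With 1 thread per core the sibling masks are all singletons, so the
co-hyperthread hazard the quoted comment warns about cannot arise
here; only the socket-spreading logic is in play.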

This code may work well with para-domains, because a para-domain
seldom blocks; it may block when the guest executes the "halt"
instruction. This means that if an idle VCPU is running on an LP,
there is no non-idle VCPU associated with this LP. In this
environment, I think the code below should work well.


But in an HVM environment, an HVM VCPU blocks on IO operations.
That is to say, while an idle VCPU is running on an LP, an HVM VCPU
may be blocked on that LP, and that HVM VCPU will run on this LP
again when it is woken up. In this environment, the code below
causes unnecessary migrations, which I think defeats the goal of
this code segment.

On the IPF side, migration is time-consuming, so this causes some
performance degradation.


I have a proposal, though it may not be a good one.

We can change the meaning of idle-LP:

an idle-LP means an idle VCPU is running on this LP AND there is no
VCPU blocked on this LP (i.e. no VCPU that, when woken up, will run
on this LP).
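A minimal sketch of that stricter test (not Xen code; both per-LP
fields are hypothetical stand-ins that a real patch would have to
maintain at block/wake time):

#include <stdbool.h>
#include <stdio.h>

#define NR_LPS 4

/* Hypothetical per-LP bookkeeping, not existing Xen state: updated
 * whenever a VCPU blocks on, or is woken from, this LP. */
static bool idle_vcpu_running[NR_LPS];
static int  nr_vcpus_blocked_on[NR_LPS];

/* Treat an LP as idle for placement purposes only if the idle VCPU
 * runs there now AND no blocked VCPU will wake up and reclaim it. */
static bool lp_truly_idle(int lp)
{
    return idle_vcpu_running[lp] && nr_vcpus_blocked_on[lp] == 0;
}

int main(void)
{
    /* Example: LP2 runs the idle VCPU, but an HVM VCPU is blocked
     * on it waiting for IO, so it must not be treated as idle. */
    idle_vcpu_running[2] = true;
    nr_vcpus_blocked_on[2] = 1;
    printf("LP2 truly idle? %d\n", lp_truly_idle(2));  /* prints 0 */
    return 0;
}

csched_cpu_pick could then drop LPs that fail this test from its
candidate mask before running the idler-preference loop below.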



--Anthony


        /*
         * In multi-core and multi-threaded CPUs, not all idle execution
         * vehicles are equal!
         *
         * We give preference to the idle execution vehicle with the most
         * idling neighbours in its grouping. This distributes work across
         * distinct cores first and guarantees we don't do something stupid
         * like run two VCPUs on co-hyperthreads while there are idle cores
         * or sockets.
         */
        while ( !cpus_empty(cpus) )
        {
            nxt = first_cpu(cpus);

            if ( csched_idler_compare(cpu, nxt) < 0 )
            {
                /* nxt is the better idler: adopt it as the current pick. */
                cpu = nxt;
                cpu_clear(nxt, cpus);
            }
            else if ( cpu_isset(cpu, cpu_core_map[nxt]) )
            {
                /* Same package as the pick: drop only nxt's hyperthread
                 * siblings; other cores in the package stay candidates. */
                cpus_andnot(cpus, cpus, cpu_sibling_map[nxt]);
            }
            else
            {
                /* Different package, and nxt lost: drop the whole package. */
                cpus_andnot(cpus, cpus, cpu_core_map[nxt]);
            }

            ASSERT( !cpu_isset(nxt, cpus) );
        }

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel