On 03/08/2011 04:51, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx> wrote:
> Hello,
>
> The domain dump for dom34 is
> (XEN) General information for domain 34:
> (XEN) refcnt=3 dying=0 nr_pages=131065 xenheap_pages=8 dirty_cpus={} max_pages=133376
> (XEN) handle=97ef6eef-69c2-024c-1bbb-a150ca668691 vm_assist=00000000
> (XEN) paging assistance: hap refcounts translate external
> (XEN) Rangesets belonging to domain 34:
> (XEN) I/O Ports { }
> (XEN) Interrupts { 32-55 }
> (XEN) I/O Memory { f9f00-f9f03, fa001-fa003, fa19c-fa19f, fa29d-fa29f, fa39c-fa39f, fa49d-fa49f, fa59c-fa59f, fa69d-fa69f, fa79c-fa79f, fa89d-fa89f, fa99c-fa99f, faa9d-faa9f, fab9c-fab9f, fac9d-fac9f, fad9c-fad9f, fae9d-fae9f }
> (XEN) Memory pages belonging to domain 34:
> (XEN) DomPage list too long to display
> (XEN) P2M entry stats:
> (XEN) L1: 1590 entries, 6512640 bytes
> (XEN) L2: 253 entries, 530579456 bytes
> (XEN) PoD entries=0 cachesize=0 superpages=0
> (XEN) XenPage 00000000001146e1: caf=c000000000000001, taf=7400000000000001
> (XEN) XenPage 00000000001146e0: caf=c000000000000001, taf=7400000000000001
> (XEN) XenPage 00000000001146df: caf=c000000000000001, taf=7400000000000001
> (XEN) XenPage 00000000001146de: caf=c000000000000001, taf=7400000000000001
> (XEN) XenPage 00000000000bdc0e: caf=c000000000000001, taf=7400000000000001
> (XEN) XenPage 0000000000114592: caf=c000000000000001, taf=7400000000000001
> (XEN) XenPage 000000000011458f: caf=c000000000000001, taf=7400000000000001
> (XEN) XenPage 000000000011458c: caf=c000000000000001, taf=7400000000000001
> (XEN) VCPU information and callbacks for domain 34:
> (XEN) VCPU0: CPU3 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={} cpu_affinity={3}
> (XEN) paging assistance: hap, 4 levels
> (XEN) No periodic timer
> (XEN) VCPU1: CPU3 [has=F] flags=1 poll=0 upcall_pend = 00, upcall_mask = 00 dirty_cpus={3} cpu_affinity={3}
> (XEN) paging assistance: hap, 4 levels
> (XEN) No periodic timer
>
> Showing that this domain is actually pinned to pcpu 3.
>
> Am I misinterpreting the information, or does this indicate that the
> scheduler (credit) is not obeying the cpu_affinity? The virtual
> functions seem to be passing network traffic correctly, so I assume
> that interrupts are getting where they are supposed to go.
The cpu_affinity masks for dom34 above contain CPU3 only, and that is where
both of its vcpus are running. So the scheduler is working fine -- the
scheduler doesn't know anything about interrupts and their affinities.
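
To make that concrete: conceptually, every placement decision the scheduler
makes is restricted to the affinity mask before a pcpu is picked. Something
like the following -- an illustrative paraphrase, not the actual
csched_cpu_pick() code, and pick_cpu_for() is a made-up name:

    /* Illustrative sketch only -- not the real credit scheduler code. */
    static int pick_cpu_for(struct vcpu *v)
    {
        cpumask_t candidates;

        /* Candidate pcpus = online pcpus restricted by vcpu affinity. */
        cpus_and(candidates, cpu_online_map, v->cpu_affinity);

        /* With cpu_affinity={3}, 'candidates' can only ever contain
         * CPU3, so the vcpu cannot be placed anywhere else. */
        return first_cpu(candidates);
    }

Interrupt destinations are programmed separately (IO-APIC RTEs / MSI
address fields), which is why vcpu affinity and interrupt affinity can
legitimately differ.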
> Another question, which may or may not be related: irq_cfg has a vector
> and a cpu_mask. From this, I assume that the same interrupt must occupy
> the same IDT entry on every pcpu it might be received on. Is there an
> architectural reason why this should be the case, or is it just the way
> Xen is coded?
Just the way it's implemented. It avoids needing a more complex irq_cfg
structure, I suppose.
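
That is, there is a single vector field per IRQ rather than a per-(IRQ,
pcpu) table. Abbreviated, the relevant part of the x86 structure looks
roughly like this (field layout from memory, so treat as a sketch):

    /* Abbreviated sketch of the x86 struct irq_cfg. */
    struct irq_cfg {
        int       vector;   /* one vector for this IRQ...          */
        cpumask_t cpu_mask; /* ...used on every pcpu in this mask  */
    };

With only one vector per irq_cfg, every pcpu in cpu_mask necessarily uses
the same IDT slot. Per-pcpu vectors would need something like a
vector[NR_CPUS] array, plus allocation logic to keep it consistent.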
> (Also, it seems that <asm/irq.h> and <xen/irq.h> both define struct
> irq_cfg, and while one is strictly an extension of the other, there
> appear to be no guards around them, meaning that sizeof(struct irq_cfg)
> depends on which header file you include. I don't know if this is
> relevant or not, but it strikes me that code which gets confused about
> which definition it is using could be computing on junk, if it expects
> the longer irq_cfg but actually gets the shorter one.)
It wouldn't compile if this were the case. The definition in xen/irq.h is in
an ia64-specific code block. Pretty skanky.
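
If both definitions could ever reach the same translation unit you would
simply get a redefinition error, hypothetically:

    /* Hypothetical: both headers visible to the same x86 file... */
    struct irq_cfg { int vector; };                      /* xen/irq.h */
    struct irq_cfg { int vector; cpumask_t cpu_mask; };  /* asm/irq.h */
    /* error: redefinition of 'struct irq_cfg' */

so x86 code only ever sees the asm/irq.h definition.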
-- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel