
To: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-ia64-devel] SMP designs and discuss
From: Tristan Gingold <Tristan.Gingold@xxxxxxxx>
Date: Fri, 14 Oct 2005 15:05:42 +0200
Delivery-date: Fri, 14 Oct 2005 12:00:12 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.5

Hi,

here is a list of points I'd like to share and discuss with you.
Please comment, and do not hesitate to split this mail into new threads.

Tristan.

* smp_processor_id() (getting the current cpu number).
  Currently, this number is stored inside the current domain (the cpu field)
  and is read through the variable 'current' (i.e. r13 = tp).
  Another possibility is to store this number in the per-cpu storage (see the
  sketch below).
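
  A minimal sketch of the two alternatives in C; the struct layouts, the
  field names and the fixed per-cpu mapping address are illustrative
  assumptions, not the actual Xen/ia64 code:

    /* Alternative 1 (current scheme): the cpu number lives in the
     * vcpu/domain structure and is reached through 'current' (r13 = tp). */
    struct vcpu {
        int processor;              /* cpu this vcpu currently runs on */
        /* ... */
    };

    register struct vcpu *current asm ("r13");   /* ia64 thread pointer */

    static inline int smp_processor_id_via_current(void)
    {
        return current->processor;
    }

    /* Alternative 2: keep the cpu number in the per-cpu data area, which
     * is mapped at a fixed virtual address on each physical cpu, so the
     * lookup does not depend on 'current' being valid. */
    #define PERCPU_ADDR 0xf100000000000000UL     /* assumed fixed mapping */

    struct cpu_info {
        int cpu_id;
        /* ... other per-cpu fields ... */
    };

    static inline int smp_processor_id_via_percpu(void)
    {
        return ((struct cpu_info *)PERCPU_ADDR)->cpu_id;
    }

  The second form keeps smp_processor_id() valid even in the middle of a
  context switch, while 'current' is being updated.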
 

* scheduler spin-lock.
  Currently, I use a hack to release the spin-lock acquired in
  __enter_schedule: it is released in schedule_tail.
  The problem shows up during the first activation of a domain: the spin-lock
  is acquired, but context_switch never returns, whereas on x86 the spin-lock
  is released after context_switch returns (see the sketch below).
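
  A minimal sketch of the problem, with simplified names (schedule_lock,
  spin_lock_irq, ...); the real Xen code is organised differently:

    typedef struct { volatile int lock; } spinlock_t;

    extern void spin_lock_irq(spinlock_t *l);
    extern void spin_unlock_irq(spinlock_t *l);
    extern spinlock_t schedule_lock;

    struct vcpu;
    /* may never return on ia64 for a newly created domain */
    extern void context_switch(struct vcpu *prev, struct vcpu *next);

    static void __enter_schedule(struct vcpu *prev, struct vcpu *next)
    {
        spin_lock_irq(&schedule_lock);
        /* ... pick 'next' from the run queue ... */
        context_switch(prev, next);
        /* x86: we come back here, so the unlock below runs.  On ia64 the
         * first activation of a domain never returns, so it does not. */
        spin_unlock_irq(&schedule_lock);
    }

    /* Hence the current hack: a new vcpu lands in schedule_tail(), which
     * drops the lock that __enter_schedule() took. */
    void schedule_tail(struct vcpu *next)
    {
        spin_unlock_irq(&schedule_lock);
        /* ... continue into the new domain ... */
    }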


* Idle regs.
  Currently, idle domains have no regs (the regs field is NULL).
  [I am not sure this is true for idle0.]
  Is that a problem?
  I had to modify the heartbeat so that it does not reference regs (see the
  sketch below).
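
  A minimal sketch of the kind of guard I mean; the field names, regs_ip()
  and the printk format are illustrative assumptions, not the real
  heartbeat code:

    struct cpu_user_regs;                  /* saved register frame */

    struct vcpu {
        struct cpu_user_regs *regs;        /* NULL for idle domains today */
        /* ... */
    };

    extern void printk(const char *fmt, ...);
    extern unsigned long regs_ip(struct cpu_user_regs *regs);  /* assumed helper */

    static void heartbeat(struct vcpu *v)
    {
        if (v->regs == NULL) {
            /* idle vcpu: no saved frame to report */
            printk("heartbeat: idle vcpu\n");
            return;
        }
        printk("heartbeat: ip=%lx\n", regs_ip(v->regs));
    }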


* Why is xentime.c so complicated?
  What is the purpose of itc_at_irq and stime_irq?


* Xenheap size.
  It is too small for more than 2 cpus.
  Maybe its size should depend on MAX_CPUS?  (A rough sketch follows.)
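
  A rough sketch of what "depend on MAX_CPUS" could look like; the base
  size, the per-cpu cost and the names are made-up values for illustration
  only:

    #define XENHEAP_BASE_SIZE    (16UL << 20)   /* assumed fixed part: 16 MB  */
    #define XENHEAP_PER_CPU_SIZE ( 1UL << 20)   /* assumed per-cpu cost: 1 MB */
    #define MAX_CPUS             64

    static unsigned long xenheap_size(unsigned int nr_cpus)
    {
        if (nr_cpus > MAX_CPUS)
            nr_cpus = MAX_CPUS;
        return XENHEAP_BASE_SIZE +
               (unsigned long)nr_cpus * XENHEAP_PER_CPU_SIZE;
    }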


* I'd like to catch memory accesses outside of Xen.
  I think it is very easy for code (just reduce the TR size); also,
  alt_itlb_miss must crash Xen.
  It is probably more difficult for data: I have to identify where Xen tries
  to access memory outside of its data region.  Here is a first list:
  *  mmio (serial/vga/...) (can be mapped)
  *  ACPI tables (can be copied)
  *  calls to PAL/SAL/EFI (can enable alt_dtlb_miss)
  (A conceptual sketch follows.)
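
  A conceptual sketch of the check, written as C for readability even
  though the real alternate miss handlers live in assembly (ivt.S);
  running_in_xen() and addr_in_xen_image() are assumed helpers, not
  existing functions:

    extern void panic(const char *fmt, ...);
    extern int running_in_xen(void);                  /* assumed: fault taken at cpl 0, in Xen */
    extern int addr_in_xen_image(unsigned long va);   /* assumed: va inside Xen's pinned region */

    /* Once Xen's code/data are covered only by its pinned TR entries, an
     * alternate TLB miss taken while running Xen itself means an access
     * outside the Xen image, and should be fatal. */
    void alt_dtlb_miss_check(unsigned long ifa /* faulting address */)
    {
        if (running_in_xen() && !addr_in_xen_image(ifa))
            panic("Xen access outside its region: %lx\n", ifa);
        /* otherwise: usual handling on behalf of the guest */
    }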


* VHPT
  How many VHPTs per system?
   1) Only one
   2) One per LP (current design)
   3) One per VCPU (original Xen-VTI)
  I think (1) is not scalable...  (A sketch of (2) follows.)
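
  A minimal sketch of option (2), one VHPT per logical processor; the
  sizes, the vhpt_entry layout, alloc_vhpt() and set_pta() are
  placeholders for illustration, not the real allocation code:

    #define NR_LPS          16
    #define VHPT_SIZE_LOG2  24                          /* e.g. 16 MB per VHPT */

    struct vhpt_entry {
        unsigned long tag, itir, page_flags, pad;       /* long-format entry */
    };

    /* one hash table per logical processor, indexed by cpu number */
    static struct vhpt_entry *per_lp_vhpt[NR_LPS];

    extern void *alloc_vhpt(unsigned long size);                 /* assumed allocator */
    extern void set_pta(void *base, unsigned long size_log2);    /* assumed: program cr.pta */

    void vhpt_init(unsigned int lp)
    {
        per_lp_vhpt[lp] = alloc_vhpt(1UL << VHPT_SIZE_LOG2);
        set_pta(per_lp_vhpt[lp], VHPT_SIZE_LOG2);
    }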


* Instructions to be virtualized (when using SMP):
  * TLB related (itc, itr, ptc)
  * cache related (fc, PAL_CACHE_FLUSH)
  * ALAT: nothing to be done, as it is invalidated during domain switch.
  * others?
 Currently, these problems are avoided by pinning VCPUs to LPs.
 If you don't want to pin VCPUs, I have a proposal (sketched after this list):
 * Add per-VCPU bitmaps of LPs.  A bit is set when the VCPU runs on that LP.
   There may be one bitmap for the cache and one bitmap for the TLB.
   For cache operations, send an IPI to every LP whose bit is set in the bitmap.
     PAL_CACHE_FLUSH clears the bit of all VCPUs which have run on the LP.
   For ptc, send IPIs or ptc.g depending on the number of bits set.
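
 A rough sketch of the proposed bitmaps; the names (cache_map, tlb_map,
 send_cache_flush_ipi, ...), the threshold and the clearing done here are
 assumptions to illustrate the idea, not existing Xen code:

    #include <string.h>

    #define NR_LPS         64
    #define BITS_PER_LONG  64
    #define LP_MAP_WORDS   ((NR_LPS + BITS_PER_LONG - 1) / BITS_PER_LONG)

    struct vcpu_lp_maps {
        unsigned long cache_map[LP_MAP_WORDS];  /* LPs whose caches this vcpu touched  */
        unsigned long tlb_map[LP_MAP_WORDS];    /* LPs whose TLBs may hold its entries */
    };

    static inline void lp_set(unsigned long *map, unsigned int lp)
    {
        map[lp / BITS_PER_LONG] |= 1UL << (lp % BITS_PER_LONG);
    }

    static inline int lp_test(const unsigned long *map, unsigned int lp)
    {
        return (map[lp / BITS_PER_LONG] >> (lp % BITS_PER_LONG)) & 1;
    }

    /* called when the vcpu is scheduled onto an LP */
    void vcpu_ran_on(struct vcpu_lp_maps *m, unsigned int lp)
    {
        lp_set(m->cache_map, lp);
        lp_set(m->tlb_map, lp);
    }

    extern void send_cache_flush_ipi(unsigned int lp);            /* assumed */
    extern void send_tlb_flush_ipi(unsigned int lp);              /* assumed */
    extern void do_ptc_g(unsigned long va, unsigned long ps);     /* assumed */

    /* cache operation on behalf of a vcpu: IPI every LP whose bit is set.
     * (In the full proposal, the PAL_CACHE_FLUSH performed on each target
     * LP would also clear that LP's bit in every vcpu that ran there;
     * only this vcpu's own map is cleared here.) */
    void vcpu_cache_flush(struct vcpu_lp_maps *m)
    {
        unsigned int lp;
        for (lp = 0; lp < NR_LPS; lp++)
            if (lp_test(m->cache_map, lp))
                send_cache_flush_ipi(lp);
        memset(m->cache_map, 0, sizeof(m->cache_map));
    }

    /* ptc on behalf of a vcpu: few targets -> individual IPIs,
     * many targets -> one global ptc.g */
    void vcpu_ptc(struct vcpu_lp_maps *m, unsigned long va, unsigned long ps)
    {
        unsigned int lp, nr_set = 0;
        for (lp = 0; lp < NR_LPS; lp++)
            nr_set += lp_test(m->tlb_map, lp);

        if (nr_set <= 2) {                  /* arbitrary threshold */
            for (lp = 0; lp < NR_LPS; lp++)
                if (lp_test(m->tlb_map, lp))
                    send_tlb_flush_ipi(lp);
        } else {
            do_ptc_g(va, ps);
        }
    }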



My TODO plan (please discuss):
* boot VTI on SMP (wait until code merge)
* vhptsize option.
* memory required per proc and per domain.
* SMP with more LPs (4, 8, 16 .. 48)
* N-to-P (do not pin, i.e. let VCPUs run on any LP)
* serial console
* SMT
* SEDF

* code clean-up (compile without -w!)
* kernel issues: why does Linux do an illegal memory access during boot?

Other TODO subjects (for later):
* VNIF
* shutdown domain, shutdown Xen

* SMP Guest

* performance, stability tests

* NUMA


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
