WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-changelog] [xen-unstable] MErge with xenppc-unstable-merge.hg

To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] MErge with xenppc-unstable-merge.hg
From: Xen patchbot-unstable <patchbot-unstable@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 28 Jul 2006 16:21:57 +0000
Delivery-date: Fri, 28 Jul 2006 09:32:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User kfraser@xxxxxxxxxxxxxxxxxxxxx
# Node ID e5c84586c333c7be0a70228cca51865c29bab21c
# Parent  1eb42266de1b0a312dc5981381c1968581e6b243
# Parent  158db2446071c0d6aad69c12070a98a25092aa78
MErge with xenppc-unstable-merge.hg
---
 tools/libxc/xc_ia64_stubs.c                          |  756 --------
 xen/include/asm-ia64/linux/asm/asmmacro.h            |  111 -
 Config.mk                                            |    2 
 buildconfigs/linux-defconfig_xen0_ia64               |   28 
 buildconfigs/linux-defconfig_xen_ia64                |   28 
 linux-2.6-xen-sparse/arch/ia64/Kconfig               |    9 
 linux-2.6-xen-sparse/arch/ia64/kernel/gate.S         |  488 +++++
 linux-2.6-xen-sparse/arch/ia64/kernel/gate.lds.S     |  117 +
 linux-2.6-xen-sparse/arch/ia64/kernel/irq_ia64.c     |   77 
 linux-2.6-xen-sparse/arch/ia64/kernel/patch.c        |  268 +++
 linux-2.6-xen-sparse/arch/ia64/kernel/setup.c        |   24 
 linux-2.6-xen-sparse/arch/ia64/xen/hypercall.S       |   56 
 linux-2.6-xen-sparse/arch/ia64/xen/hypervisor.c      |   20 
 linux-2.6-xen-sparse/arch/ia64/xen/util.c            |    3 
 linux-2.6-xen-sparse/arch/ia64/xen/xensetup.S        |   20 
 linux-2.6-xen-sparse/drivers/xen/core/reboot.c       |   27 
 linux-2.6-xen-sparse/drivers/xen/netback/netback.c   |   11 
 linux-2.6-xen-sparse/drivers/xen/netfront/netfront.c |    2 
 linux-2.6-xen-sparse/include/asm-ia64/hypercall.h    |   18 
 linux-2.6-xen-sparse/include/asm-ia64/xen/privop.h   |    2 
 tools/ioemu/patches/domain-reset                     |    8 
 tools/ioemu/patches/domain-timeoffset                |   18 
 tools/ioemu/patches/hypervisor-pit                   |   10 
 tools/ioemu/patches/ioemu-ia64                       |   27 
 tools/ioemu/patches/qemu-bugfixes                    |   14 
 tools/ioemu/patches/qemu-logging                     |   16 
 tools/ioemu/patches/qemu-smp                         |   10 
 tools/ioemu/patches/qemu-target-i386-dm              |   20 
 tools/ioemu/patches/shared-vram                      |   16 
 tools/ioemu/patches/support-xm-console               |   12 
 tools/ioemu/patches/vnc-cleanup                      |   22 
 tools/ioemu/patches/vnc-fixes                        |    8 
 tools/ioemu/patches/vnc-start-vncviewer              |   18 
 tools/ioemu/patches/xen-domain-name                  |   14 
 tools/ioemu/patches/xen-domid                        |   15 
 tools/ioemu/patches/xen-mm                           |   12 
 tools/ioemu/patches/xen-network                      |    6 
 tools/ioemu/target-i386-dm/exec-dm.c                 |    8 
 tools/ioemu/vl.c                                     |    1 
 tools/libxc/Makefile                                 |    6 
 tools/libxc/ia64/Makefile                            |    5 
 tools/libxc/ia64/xc_ia64_hvm_build.c                 |  673 +++++++
 tools/libxc/ia64/xc_ia64_linux_restore.c             |  320 +++
 tools/libxc/ia64/xc_ia64_linux_save.c                |  509 ++++++
 tools/libxc/ia64/xc_ia64_stubs.c                     |  106 +
 tools/libxc/xc_hvm_build.c                           |   32 
 tools/libxc/xc_linux_build.c                         |   64 
 tools/libxc/xc_private.c                             |    2 
 tools/libxc/xenctrl.h                                |    3 
 xen/arch/ia64/Makefile                               |   14 
 xen/arch/ia64/asm-offsets.c                          |   11 
 xen/arch/ia64/linux-xen/Makefile                     |    2 
 xen/arch/ia64/linux-xen/README.origin                |    2 
 xen/arch/ia64/linux-xen/entry.S                      |   15 
 xen/arch/ia64/linux-xen/iosapic.c                    |    8 
 xen/arch/ia64/linux-xen/mca.c                        | 1600 +++++++++++++++++++
 xen/arch/ia64/linux-xen/mca_asm.S                    |  970 +++++++++++
 xen/arch/ia64/linux-xen/minstate.h                   |   46 
 xen/arch/ia64/linux-xen/tlb.c                        |    4 
 xen/arch/ia64/linux-xen/unwind.c                     |   22 
 xen/arch/ia64/tools/README.RunVT                     |   95 -
 xen/arch/ia64/vmx/mmio.c                             |   11 
 xen/arch/ia64/vmx/pal_emul.c                         |  591 +++----
 xen/arch/ia64/vmx/vlsapic.c                          |   49 
 xen/arch/ia64/vmx/vmmu.c                             |    8 
 xen/arch/ia64/vmx/vmx_entry.S                        |   64 
 xen/arch/ia64/vmx/vmx_init.c                         |   80 
 xen/arch/ia64/vmx/vmx_interrupt.c                    |    2 
 xen/arch/ia64/vmx/vmx_ivt.S                          |   51 
 xen/arch/ia64/vmx/vmx_minstate.h                     |   51 
 xen/arch/ia64/vmx/vmx_phy_mode.c                     |   22 
 xen/arch/ia64/vmx/vmx_process.c                      |   35 
 xen/arch/ia64/vmx/vmx_support.c                      |    2 
 xen/arch/ia64/vmx/vmx_utility.c                      |    2 
 xen/arch/ia64/vmx/vmx_vcpu.c                         |   59 
 xen/arch/ia64/vmx/vmx_virt.c                         |   51 
 xen/arch/ia64/xen/Makefile                           |    1 
 xen/arch/ia64/xen/dom0_ops.c                         |  183 +-
 xen/arch/ia64/xen/dom_fw.c                           |  187 +-
 xen/arch/ia64/xen/domain.c                           |  408 +++-
 xen/arch/ia64/xen/faults.c                           |  139 +
 xen/arch/ia64/xen/fw_emul.c                          |   17 
 xen/arch/ia64/xen/hypercall.c                        |   11 
 xen/arch/ia64/xen/irq.c                              |   13 
 xen/arch/ia64/xen/ivt.S                              |  181 +-
 xen/arch/ia64/xen/mm.c                               |  171 +-
 xen/arch/ia64/xen/privop.c                           |  420 ----
 xen/arch/ia64/xen/privop_stat.c                      |  389 ++++
 xen/arch/ia64/xen/regionreg.c                        |   19 
 xen/arch/ia64/xen/vcpu.c                             |  136 +
 xen/arch/ia64/xen/vhpt.c                             |   24 
 xen/arch/ia64/xen/xenasm.S                           |    4 
 xen/arch/ia64/xen/xenmisc.c                          |    4 
 xen/arch/ia64/xen/xensetup.c                         |   12 
 xen/arch/x86/hvm/vmx/vmx.c                           |   18 
 xen/arch/x86/shadow32.c                              |   24 
 xen/arch/x86/shadow_public.c                         |   19 
 xen/common/memory.c                                  |    2 
 xen/include/asm-ia64/bundle.h                        |  231 ++
 xen/include/asm-ia64/config.h                        |   20 
 xen/include/asm-ia64/dom_fw.h                        |    4 
 xen/include/asm-ia64/domain.h                        |  122 -
 xen/include/asm-ia64/iocap.h                         |    8 
 xen/include/asm-ia64/linux-xen/asm/README.origin     |    1 
 xen/include/asm-ia64/linux-xen/asm/asmmacro.h        |  119 +
 xen/include/asm-ia64/linux-xen/asm/mca_asm.h         |    4 
 xen/include/asm-ia64/linux-xen/asm/pgtable.h         |    5 
 xen/include/asm-ia64/linux-xen/asm/system.h          |    2 
 xen/include/asm-ia64/linux/asm/README.origin         |    1 
 xen/include/asm-ia64/mm.h                            |    7 
 xen/include/asm-ia64/privop.h                        |  225 --
 xen/include/asm-ia64/privop_stat.h                   |   66 
 xen/include/asm-ia64/regionreg.h                     |    1 
 xen/include/asm-ia64/shadow.h                        |   18 
 xen/include/asm-ia64/tlbflush.h                      |    6 
 xen/include/asm-ia64/vcpu.h                          |   25 
 xen/include/asm-ia64/vhpt.h                          |    3 
 xen/include/asm-ia64/vmx.h                           |    4 
 xen/include/asm-ia64/vmx_pal.h                       |    5 
 xen/include/asm-ia64/vmx_phy_mode.h                  |    1 
 xen/include/asm-ia64/vmx_vcpu.h                      |    1 
 xen/include/asm-ia64/vmx_vpd.h                       |    2 
 xen/include/asm-ia64/xenpage.h                       |    7 
 xen/include/asm-ia64/xensystem.h                     |    1 
 xen/include/asm-x86/hvm/vmx/vmx.h                    |   52 
 xen/include/asm-x86/mm.h                             |    5 
 xen/include/public/arch-ia64.h                       |  109 -
 xen/include/public/dom0_ops.h                        |    4 
 128 files changed, 8346 insertions(+), 3004 deletions(-)

diff -r 1eb42266de1b -r e5c84586c333 Config.mk
--- a/Config.mk Thu Jul 27 17:44:14 2006 -0500
+++ b/Config.mk Fri Jul 28 10:51:38 2006 +0100
@@ -36,6 +36,8 @@ CFLAGS    ?= -O2 -fomit-frame-pointer
 CFLAGS    ?= -O2 -fomit-frame-pointer
 CFLAGS    += -DNDEBUG
 else
+# Less than -O1 produces bad code and large stack frames
+CFLAGS    ?= -O1 -fno-omit-frame-pointer
 CFLAGS    += -g
 endif
 
diff -r 1eb42266de1b -r e5c84586c333 buildconfigs/linux-defconfig_xen0_ia64
--- a/buildconfigs/linux-defconfig_xen0_ia64    Thu Jul 27 17:44:14 2006 -0500
+++ b/buildconfigs/linux-defconfig_xen0_ia64    Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@
 #
 # Automatically generated make config: don't edit
 # Linux kernel version: 2.6.16.13-xen0
-# Mon May 22 14:46:31 2006
+# Fri Jun 30 12:59:19 2006
 #
 
 #
@@ -721,21 +721,10 @@ CONFIG_SERIAL_NONSTANDARD=y
 #
 # Serial drivers
 #
-CONFIG_SERIAL_8250=y
-CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_ACPI=y
-CONFIG_SERIAL_8250_NR_UARTS=6
-CONFIG_SERIAL_8250_RUNTIME_UARTS=4
-CONFIG_SERIAL_8250_EXTENDED=y
-CONFIG_SERIAL_8250_SHARE_IRQ=y
-# CONFIG_SERIAL_8250_DETECT_IRQ is not set
-# CONFIG_SERIAL_8250_RSA is not set
 
 #
 # Non-8250 serial port support
 #
-CONFIG_SERIAL_CORE=y
-CONFIG_SERIAL_CORE_CONSOLE=y
 # CONFIG_SERIAL_JSM is not set
 CONFIG_UNIX98_PTYS=y
 CONFIG_LEGACY_PTYS=y
@@ -1516,8 +1505,16 @@ CONFIG_CRYPTO_DES=y
 #
 # Hardware crypto devices
 #
+# CONFIG_XEN_UTIL is not set
 CONFIG_HAVE_ARCH_ALLOC_SKB=y
 CONFIG_HAVE_ARCH_DEV_ALLOC_SKB=y
+CONFIG_XEN_BALLOON=y
+CONFIG_XEN_SKBUFF=y
+CONFIG_XEN_NETDEV_BACKEND=y
+CONFIG_XEN_NETDEV_FRONTEND=y
+# CONFIG_XEN_DEVMEM is not set
+# CONFIG_XEN_REBOOT is not set
+# CONFIG_XEN_SMPBOOT is not set
 CONFIG_XEN_INTERFACE_VERSION=0x00030202
 
 #
@@ -1525,20 +1522,21 @@ CONFIG_XEN_INTERFACE_VERSION=0x00030202
 #
 CONFIG_XEN_PRIVILEGED_GUEST=y
 # CONFIG_XEN_UNPRIVILEGED_GUEST is not set
+CONFIG_XEN_PRIVCMD=y
 CONFIG_XEN_BACKEND=y
 # CONFIG_XEN_PCIDEV_BACKEND is not set
 CONFIG_XEN_BLKDEV_BACKEND=y
+CONFIG_XEN_XENBUS_DEV=y
 # CONFIG_XEN_BLKDEV_TAP is not set
-CONFIG_XEN_NETDEV_BACKEND=y
 # CONFIG_XEN_NETDEV_PIPELINED_TRANSMITTER is not set
 CONFIG_XEN_NETDEV_LOOPBACK=y
 # CONFIG_XEN_TPMDEV_BACKEND is not set
 CONFIG_XEN_BLKDEV_FRONTEND=y
-CONFIG_XEN_NETDEV_FRONTEND=y
 # CONFIG_XEN_SCRUB_PAGES is not set
-# CONFIG_XEN_DISABLE_SERIAL is not set
+CONFIG_XEN_DISABLE_SERIAL=y
 CONFIG_XEN_SYSFS=y
 CONFIG_XEN_COMPAT_030002_AND_LATER=y
 # CONFIG_XEN_COMPAT_LATEST_ONLY is not set
 CONFIG_XEN_COMPAT_030002=y
+CONFIG_HAVE_IRQ_IGNORE_UNHANDLED=y
 CONFIG_NO_IDLE_HZ=y
diff -r 1eb42266de1b -r e5c84586c333 buildconfigs/linux-defconfig_xen_ia64
--- a/buildconfigs/linux-defconfig_xen_ia64     Thu Jul 27 17:44:14 2006 -0500
+++ b/buildconfigs/linux-defconfig_xen_ia64     Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@
 #
 # Automatically generated make config: don't edit
 # Linux kernel version: 2.6.16.13-xen
-# Mon May 22 14:15:20 2006
+# Thu Jun 29 16:23:48 2006
 #
 
 #
@@ -727,21 +727,10 @@ CONFIG_SERIAL_NONSTANDARD=y
 #
 # Serial drivers
 #
-CONFIG_SERIAL_8250=y
-CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_ACPI=y
-CONFIG_SERIAL_8250_NR_UARTS=6
-CONFIG_SERIAL_8250_RUNTIME_UARTS=4
-CONFIG_SERIAL_8250_EXTENDED=y
-CONFIG_SERIAL_8250_SHARE_IRQ=y
-# CONFIG_SERIAL_8250_DETECT_IRQ is not set
-# CONFIG_SERIAL_8250_RSA is not set
 
 #
 # Non-8250 serial port support
 #
-CONFIG_SERIAL_CORE=y
-CONFIG_SERIAL_CORE_CONSOLE=y
 # CONFIG_SERIAL_JSM is not set
 CONFIG_UNIX98_PTYS=y
 CONFIG_LEGACY_PTYS=y
@@ -1522,8 +1511,16 @@ CONFIG_CRYPTO_DES=y
 #
 # Hardware crypto devices
 #
+# CONFIG_XEN_UTIL is not set
 CONFIG_HAVE_ARCH_ALLOC_SKB=y
 CONFIG_HAVE_ARCH_DEV_ALLOC_SKB=y
+CONFIG_XEN_BALLOON=y
+CONFIG_XEN_SKBUFF=y
+CONFIG_XEN_NETDEV_BACKEND=y
+CONFIG_XEN_NETDEV_FRONTEND=y
+# CONFIG_XEN_DEVMEM is not set
+# CONFIG_XEN_REBOOT is not set
+# CONFIG_XEN_SMPBOOT is not set
 CONFIG_XEN_INTERFACE_VERSION=0x00030202
 
 #
@@ -1531,20 +1528,21 @@ CONFIG_XEN_INTERFACE_VERSION=0x00030202
 #
 CONFIG_XEN_PRIVILEGED_GUEST=y
 # CONFIG_XEN_UNPRIVILEGED_GUEST is not set
+CONFIG_XEN_PRIVCMD=y
 CONFIG_XEN_BACKEND=y
 # CONFIG_XEN_PCIDEV_BACKEND is not set
 CONFIG_XEN_BLKDEV_BACKEND=y
+CONFIG_XEN_XENBUS_DEV=y
 # CONFIG_XEN_BLKDEV_TAP is not set
-CONFIG_XEN_NETDEV_BACKEND=y
 # CONFIG_XEN_NETDEV_PIPELINED_TRANSMITTER is not set
 CONFIG_XEN_NETDEV_LOOPBACK=y
 # CONFIG_XEN_TPMDEV_BACKEND is not set
 CONFIG_XEN_BLKDEV_FRONTEND=y
-CONFIG_XEN_NETDEV_FRONTEND=y
 # CONFIG_XEN_SCRUB_PAGES is not set
-# CONFIG_XEN_DISABLE_SERIAL is not set
+CONFIG_XEN_DISABLE_SERIAL=y
 CONFIG_XEN_SYSFS=y
 CONFIG_XEN_COMPAT_030002_AND_LATER=y
 # CONFIG_XEN_COMPAT_LATEST_ONLY is not set
 CONFIG_XEN_COMPAT_030002=y
+CONFIG_HAVE_IRQ_IGNORE_UNHANDLED=y
 CONFIG_NO_IDLE_HZ=y
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/Kconfig
--- a/linux-2.6-xen-sparse/arch/ia64/Kconfig    Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/arch/ia64/Kconfig    Fri Jul 28 10:51:38 2006 +0100
@@ -70,6 +70,13 @@ config XEN_IA64_DOM0_NON_VP
        default y
        help
          dom0 P=M model
+
+config XEN_IA64_VDSO_PARAVIRT
+       bool
+       depends on XEN && !ITANIUM
+       default y
+       help
+         vDSO paravirtualization
 
 config SCHED_NO_NO_OMIT_FRAME_POINTER
        bool
@@ -518,7 +525,7 @@ config XEN_DEVMEM
        default n
 
 config XEN_REBOOT
-       default n
+       default y
 
 config XEN_SMPBOOT
        default n
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/kernel/irq_ia64.c
--- a/linux-2.6-xen-sparse/arch/ia64/kernel/irq_ia64.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/arch/ia64/kernel/irq_ia64.c  Fri Jul 28 10:51:38 2006 +0100
@@ -31,6 +31,9 @@
 #include <linux/smp_lock.h>
 #include <linux/threads.h>
 #include <linux/bitops.h>
+#ifdef CONFIG_XEN
+#include <linux/cpu.h>
+#endif
 
 #include <asm/delay.h>
 #include <asm/intrinsics.h>
@@ -235,6 +238,9 @@ static struct irqaction ipi_irqaction = 
 #include <xen/evtchn.h>
 #include <xen/interface/callback.h>
 
+static DEFINE_PER_CPU(int, timer_irq) = -1;
+static DEFINE_PER_CPU(int, ipi_irq) = -1;
+static DEFINE_PER_CPU(int, resched_irq) = -1;
 static char timer_name[NR_CPUS][15];
 static char ipi_name[NR_CPUS][15];
 static char resched_name[NR_CPUS][15];
@@ -252,6 +258,7 @@ static unsigned short saved_irq_cnt = 0;
 static unsigned short saved_irq_cnt = 0;
 static int xen_slab_ready = 0;
 
+#ifdef CONFIG_SMP
 /* Dummy stub. Though we may check RESCHEDULE_VECTOR before __do_IRQ,
  * it ends up to issue several memory accesses upon percpu data and
  * thus adds unnecessary traffic to other paths.
@@ -268,6 +275,7 @@ static struct irqaction resched_irqactio
        .flags =        SA_INTERRUPT,
        .name =         "RESCHED"
 };
+#endif
 
 /*
  * This is xen version percpu irq registration, which needs bind
@@ -294,6 +302,7 @@ xen_register_percpu_irq (unsigned int ir
                        ret = bind_virq_to_irqhandler(VIRQ_ITC, cpu,
                                action->handler, action->flags,
                                timer_name[cpu], action->dev_id);
+                       per_cpu(timer_irq,cpu) = ret;
                        printk(KERN_INFO "register VIRQ_ITC (%s) to xen irq 
(%d)\n", timer_name[cpu], ret);
                        break;
                case IA64_IPI_RESCHEDULE:
@@ -301,6 +310,7 @@ xen_register_percpu_irq (unsigned int ir
                        ret = bind_ipi_to_irqhandler(RESCHEDULE_VECTOR, cpu,
                                action->handler, action->flags,
                                resched_name[cpu], action->dev_id);
+                       per_cpu(resched_irq,cpu) = ret;
                        printk(KERN_INFO "register RESCHEDULE_VECTOR (%s) to 
xen irq (%d)\n", resched_name[cpu], ret);
                        break;
                case IA64_IPI_VECTOR:
@@ -308,6 +318,7 @@ xen_register_percpu_irq (unsigned int ir
                        ret = bind_ipi_to_irqhandler(IPI_VECTOR, cpu,
                                action->handler, action->flags,
                                ipi_name[cpu], action->dev_id);
+                       per_cpu(ipi_irq,cpu) = ret;
                        printk(KERN_INFO "register IPI_VECTOR (%s) to xen irq 
(%d)\n", ipi_name[cpu], ret);
                        break;
                case IA64_SPURIOUS_INT_VECTOR:
@@ -343,7 +354,7 @@ xen_bind_early_percpu_irq (void)
         */
        for (i = 0; i < late_irq_cnt; i++)
                xen_register_percpu_irq(saved_percpu_irqs[i].irq,
-                       saved_percpu_irqs[i].action, 0);
+                                       saved_percpu_irqs[i].action, 0);
 }
 
 /* FIXME: There's no obvious point to check whether slab is ready. So
@@ -352,6 +363,38 @@ extern void (*late_time_init)(void);
 extern void (*late_time_init)(void);
 extern char xen_event_callback;
 extern void xen_init_IRQ(void);
+
+#ifdef CONFIG_HOTPLUG_CPU
+static int __devinit
+unbind_evtchn_callback(struct notifier_block *nfb,
+                       unsigned long action, void *hcpu)
+{
+       unsigned int cpu = (unsigned long)hcpu;
+
+       if (action == CPU_DEAD) {
+               /* Unregister evtchn.  */
+               if (per_cpu(ipi_irq,cpu) >= 0) {
+                       unbind_from_irqhandler (per_cpu(ipi_irq, cpu), NULL);
+                       per_cpu(ipi_irq, cpu) = -1;
+               }
+               if (per_cpu(resched_irq,cpu) >= 0) {
+                       unbind_from_irqhandler (per_cpu(resched_irq, cpu),
+                                               NULL);
+                       per_cpu(resched_irq, cpu) = -1;
+               }
+               if (per_cpu(timer_irq,cpu) >= 0) {
+                       unbind_from_irqhandler (per_cpu(timer_irq, cpu), NULL);
+                       per_cpu(timer_irq, cpu) = -1;
+               }
+       }
+       return NOTIFY_OK;
+}
+
+static struct notifier_block unbind_evtchn_notifier = {
+       .notifier_call = unbind_evtchn_callback,
+       .priority = 0
+};
+#endif
 
 DECLARE_PER_CPU(int, ipi_to_irq[NR_IPIS]);
 void xen_smp_intr_init(void)
@@ -363,21 +406,22 @@ void xen_smp_intr_init(void)
                .type = CALLBACKTYPE_event,
                .address = (unsigned long)&xen_event_callback,
        };
-       static cpumask_t registered_cpumask;
-
-       if (!cpu)
+
+       if (cpu == 0) {
+               /* Initialization was already done for boot cpu.  */
+#ifdef CONFIG_HOTPLUG_CPU
+               /* Register the notifier only once.  */
+               register_cpu_notifier(&unbind_evtchn_notifier);
+#endif
                return;
+       }
 
        /* This should be piggyback when setup vcpu guest context */
        BUG_ON(HYPERVISOR_callback_op(CALLBACKOP_register, &event));
 
-       if (!cpu_isset(cpu, registered_cpumask)) {
-               cpu_set(cpu, registered_cpumask);
-               for (i = 0; i < saved_irq_cnt; i++)
-                       xen_register_percpu_irq(saved_percpu_irqs[i].irq,
-                                               saved_percpu_irqs[i].action,
-                                               0);
-       }
+       for (i = 0; i < saved_irq_cnt; i++)
+               xen_register_percpu_irq(saved_percpu_irqs[i].irq,
+                                       saved_percpu_irqs[i].action, 0);
 #endif /* CONFIG_SMP */
 }
 #endif /* CONFIG_XEN */
@@ -388,12 +432,13 @@ register_percpu_irq (ia64_vector vec, st
        irq_desc_t *desc;
        unsigned int irq;
 
+#ifdef CONFIG_XEN
+       if (is_running_on_xen())
+               return xen_register_percpu_irq(vec, action, 1);
+#endif
+
        for (irq = 0; irq < NR_IRQS; ++irq)
                if (irq_to_vector(irq) == vec) {
-#ifdef CONFIG_XEN
-                       if (is_running_on_xen())
-                               return xen_register_percpu_irq(vec, action, 1);
-#endif
                        desc = irq_descp(irq);
                        desc->status |= IRQ_PER_CPU;
                        desc->handler = &irq_type_ia64_lsapic;
@@ -441,6 +486,7 @@ ia64_send_ipi (int cpu, int vector, int 
         if (is_running_on_xen()) {
                int irq = -1;
 
+#ifdef CONFIG_SMP
                /* TODO: we need to call vcpu_up here */
                if (unlikely(vector == ap_wakeup_vector)) {
                        extern void xen_send_ipi (int cpu, int vec);
@@ -448,6 +494,7 @@ ia64_send_ipi (int cpu, int vector, int 
                        //vcpu_prepare_and_up(cpu);
                        return;
                }
+#endif
 
                switch(vector) {
                case IA64_IPI_VECTOR:
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/kernel/setup.c
--- a/linux-2.6-xen-sparse/arch/ia64/kernel/setup.c     Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/arch/ia64/kernel/setup.c     Fri Jul 28 10:51:38 2006 +0100
@@ -75,6 +75,20 @@ EXPORT_SYMBOL(__per_cpu_offset);
 EXPORT_SYMBOL(__per_cpu_offset);
 #endif
 
+#ifdef CONFIG_XEN
+static int
+xen_panic_event(struct notifier_block *this, unsigned long event, void *ptr)
+{
+       HYPERVISOR_shutdown(SHUTDOWN_crash);
+       /* we're never actually going to get here... */
+       return NOTIFY_DONE;
+}
+
+static struct notifier_block xen_panic_block = {
+       xen_panic_event, NULL, 0 /* try to go last */
+};
+#endif
+
 extern void ia64_setup_printk_clock(void);
 
 DEFINE_PER_CPU(struct cpuinfo_ia64, cpu_info);
@@ -418,8 +432,11 @@ setup_arch (char **cmdline_p)
        unw_init();
 
 #ifdef CONFIG_XEN
-       if (is_running_on_xen())
+       if (is_running_on_xen()) {
                setup_xen_features();
+               /* Register a call for panic conditions. */
+               notifier_chain_register(&panic_notifier_list, &xen_panic_block);
+       }
 #endif
 
        ia64_patch_vtop((u64) __start___vtop_patchlist, (u64) __end___vtop_patchlist);
@@ -523,15 +540,14 @@ setup_arch (char **cmdline_p)
                shared_info_t *s = HYPERVISOR_shared_info;
 
                xen_start_info = __va(s->arch.start_info_pfn << PAGE_SHIFT);
-               xen_start_info->flags = s->arch.flags;
 
                printk("Running on Xen! start_info_pfn=0x%lx nr_pages=%ld "
                       "flags=0x%x\n", s->arch.start_info_pfn,
                       xen_start_info->nr_pages, xen_start_info->flags);
 
                /* xen_start_info isn't setup yet, get the flags manually */
-               if (s->arch.flags & SIF_INITDOMAIN) {
-                       if (!(s->arch.flags & SIF_PRIVILEGED))
+               if (xen_start_info->flags & SIF_INITDOMAIN) {
+                       if (!(xen_start_info->flags & SIF_PRIVILEGED))
                                panic("Xen granted us console access "
                                      "but not privileged status");
                } else {
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/xen/hypercall.S
--- a/linux-2.6-xen-sparse/arch/ia64/xen/hypercall.S    Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/arch/ia64/xen/hypercall.S    Fri Jul 28 10:51:38 2006 +0100
@@ -351,3 +351,59 @@ GLOBAL_ENTRY(xen_send_ipi)
         br.ret.sptk.many rp
         ;;
 END(xen_send_ipi)
+
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
+// Those are vdso specialized.
+// In fsys mode, call, ret can't be used.
+GLOBAL_ENTRY(xen_rsm_be_i)
+       ld8 r22=[r22]
+       ;; 
+       st1 [r22]=r20
+       st4 [r23]=r0
+       XEN_HYPER_RSM_BE
+       st4 [r23]=r20
+       brl.cond.sptk   .vdso_rsm_be_i_ret
+       ;; 
+END(xen_rsm_be_i)
+
+GLOBAL_ENTRY(xen_get_psr)
+       mov r31=r8
+       mov r25=IA64_PSR_IC
+       st4 [r23]=r0
+       XEN_HYPER_GET_PSR
+       ;; 
+       st4 [r23]=r20
+       or r29=r8,r25 // vpsr.ic was cleared for hyperprivop
+       mov r8=r31
+       brl.cond.sptk   .vdso_get_psr_ret
+       ;; 
+END(xen_get_psr)
+
+GLOBAL_ENTRY(xen_ssm_i_0)
+       st4 [r22]=r20
+       ld4 r25=[r24]
+       ;;
+       cmp.ne.unc p11,p0=r0, r25
+       ;; 
+(p11)  st4 [r22]=r0
+(p11)  st4 [r23]=r0
+(p11)  XEN_HYPER_SSM_I
+       
+       brl.cond.sptk   .vdso_ssm_i_0_ret
+       ;; 
+END(xen_ssm_i_0)
+
+GLOBAL_ENTRY(xen_ssm_i_1)
+       st4 [r22]=r20
+       ld4 r25=[r24]
+       ;; 
+       cmp.ne.unc p11,p0=r0, r25
+       ;; 
+(p11)  st4 [r22]=r0
+(p11)  st4 [r23]=r0
+(p11)  XEN_HYPER_SSM_I
+       ;;
+       brl.cond.sptk   .vdso_ssm_i_1_ret
+       ;; 
+END(xen_ssm_i_1)
+#endif
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/xen/hypervisor.c
--- a/linux-2.6-xen-sparse/arch/ia64/xen/hypervisor.c   Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/arch/ia64/xen/hypervisor.c   Fri Jul 28 10:51:38 2006 +0100
@@ -198,7 +198,7 @@ __xen_create_contiguous_region(unsigned 
                .nr_exchanged = 0
        };
 
-       if (order > MAX_CONTIG_ORDER)
+       if (unlikely(order > MAX_CONTIG_ORDER))
                return -ENOMEM;
        
        set_xen_guest_handle(exchange.in.extent_start, in_frames);
@@ -299,7 +299,7 @@ __xen_destroy_contiguous_region(unsigned
        if (!test_bit(start_gpfn, contiguous_bitmap))
                return;
 
-       if (order > MAX_CONTIG_ORDER)
+       if (unlikely(order > MAX_CONTIG_ORDER))
                return;
 
        set_xen_guest_handle(exchange.in.extent_start, &in_frame);
@@ -547,8 +547,10 @@ xen_ia64_privcmd_entry_mmap(struct vm_ar
        unsigned long gpfn;
        unsigned long flags;
 
-       BUG_ON((addr & ~PAGE_MASK) != 0);
-       BUG_ON(mfn == INVALID_MFN);
+       if ((addr & ~PAGE_MASK) != 0 || mfn == INVALID_MFN) {
+               error = -EINVAL;
+               goto out;
+       }
 
        if (entry->gpfn != INVALID_GPFN) {
                error = -EBUSY;
@@ -793,3 +795,13 @@ direct_remap_pfn_range(struct vm_area_st
        return error;
 }
 
+
+/* Called after suspend, to resume time.  */
+void
+time_resume(void)
+{
+       extern void ia64_cpu_local_tick(void);
+
+       /* Just trigger a tick.  */
+       ia64_cpu_local_tick();
+}
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/xen/util.c
--- a/linux-2.6-xen-sparse/arch/ia64/xen/util.c Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/arch/ia64/xen/util.c Fri Jul 28 10:51:38 2006 +0100
@@ -71,6 +71,9 @@ void free_vm_area(struct vm_struct *area
        unsigned int order = get_order(area->size);
        unsigned long i;
 
+       /* xenbus_map_ring_valloc overrides this field!  */
+       area->phys_addr = __pa(area->addr);
+
        // This area is used for foreign page mappping.
        // So underlying machine page may not be assigned.
        for (i = 0; i < (1 << order); i++) {
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/xen/xensetup.S
--- a/linux-2.6-xen-sparse/arch/ia64/xen/xensetup.S     Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/arch/ia64/xen/xensetup.S     Fri Jul 28 10:51:38 2006 +0100
@@ -33,3 +33,23 @@ GLOBAL_ENTRY(early_xen_setup)
        br.ret.sptk.many rp
        ;;
 END(early_xen_setup)
+
+#include <xen/interface/xen.h>
+
+/* Stub for suspend.
+   Just force the stacked registers to be written in memory.  */       
+GLOBAL_ENTRY(HYPERVISOR_suspend)
+       alloc r20=ar.pfs,0,0,0,0
+       mov r14=2
+       mov r15=r12
+       ;;
+       /* We don't want to deal with RSE.  */
+       flushrs
+       mov r2=__HYPERVISOR_sched_op
+       st4 [r12]=r14
+       ;;
+       break 0x1000
+       ;; 
+       mov ar.pfs=r20
+       br.ret.sptk.many b0
+END(HYPERVISOR_suspend)
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/drivers/xen/core/reboot.c
--- a/linux-2.6-xen-sparse/drivers/xen/core/reboot.c    Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/drivers/xen/core/reboot.c    Fri Jul 28 10:51:38 2006 +0100
@@ -39,6 +39,7 @@ extern void ctrl_alt_del(void);
  */
 #define SHUTDOWN_HALT      4
 
+#if defined(__i386__) || defined(__x86_64__)
 void machine_emergency_restart(void)
 {
        /* We really want to get pending console data out before we die. */
@@ -60,10 +61,8 @@ void machine_power_off(void)
 {
        /* We really want to get pending console data out before we die. */
        xencons_force_flush();
-#if defined(__i386__) || defined(__x86_64__)
        if (pm_power_off)
                pm_power_off();
-#endif
        HYPERVISOR_shutdown(SHUTDOWN_poweroff);
 }
 
@@ -71,7 +70,7 @@ EXPORT_SYMBOL(machine_restart);
 EXPORT_SYMBOL(machine_restart);
 EXPORT_SYMBOL(machine_halt);
 EXPORT_SYMBOL(machine_power_off);
-
+#endif
 
 /******************************************************************************
  * Stop/pickle callback handling.
@@ -82,6 +81,7 @@ static void __shutdown_handler(void *unu
 static void __shutdown_handler(void *unused);
 static DECLARE_WORK(shutdown_work, __shutdown_handler, NULL);
 
+#if defined(__i386__) || defined(__x86_64__)
 /* Ensure we run on the idle task page tables so that we will
    switch page tables before running user space. This is needed
    on architectures with separate kernel and user page tables
@@ -98,25 +98,30 @@ static void switch_idle_mm(void)
        current->active_mm = &init_mm;
        mmdrop(mm);
 }
+#endif
 
 static int __do_suspend(void *ignore)
 {
-       int i, j, k, fpp, err;
-
+       int err;
+#if defined(__i386__) || defined(__x86_64__)
+       int i, j, k, fpp;
        extern unsigned long max_pfn;
        extern unsigned long *pfn_to_mfn_frame_list_list;
        extern unsigned long *pfn_to_mfn_frame_list[];
+#endif
 
        extern void time_resume(void);
 
        BUG_ON(smp_processor_id() != 0);
        BUG_ON(in_interrupt());
 
+#if defined(__i386__) || defined(__x86_64__)
        if (xen_feature(XENFEAT_auto_translated_physmap)) {
                printk(KERN_WARNING "Cannot suspend in "
                       "auto_translated_physmap mode.\n");
                return -EOPNOTSUPP;
        }
+#endif
 
        err = smp_suspend();
        if (err)
@@ -129,18 +134,24 @@ static int __do_suspend(void *ignore)
 #ifdef __i386__
        kmem_cache_shrink(pgd_cache);
 #endif
+#if defined(__i386__) || defined(__x86_64__)
        mm_pin_all();
 
        __cli();
+#elif defined(__ia64__)
+       local_irq_disable();
+#endif
        preempt_enable();
 
        gnttab_suspend();
 
+#if defined(__i386__) || defined(__x86_64__)
        HYPERVISOR_shared_info = (shared_info_t *)empty_zero_page;
        clear_fixmap(FIX_SHARED_INFO);
 
        xen_start_info->store_mfn = mfn_to_pfn(xen_start_info->store_mfn);
        xen_start_info->console_mfn = mfn_to_pfn(xen_start_info->console_mfn);
+#endif
 
        /*
         * We'll stop somewhere inside this hypercall. When it returns,
@@ -150,6 +161,7 @@ static int __do_suspend(void *ignore)
 
        shutting_down = SHUTDOWN_INVALID;
 
+#if defined(__i386__) || defined(__x86_64__)
        set_fixmap(FIX_SHARED_INFO, xen_start_info->shared_info);
 
        HYPERVISOR_shared_info = (shared_info_t *)fix_to_virt(FIX_SHARED_INFO);
@@ -171,6 +183,7 @@ static int __do_suspend(void *ignore)
                        virt_to_mfn(&phys_to_machine_mapping[i]);
        }
        HYPERVISOR_shared_info->arch.max_pfn = max_pfn;
+#endif
 
        gnttab_resume();
 
@@ -178,9 +191,13 @@ static int __do_suspend(void *ignore)
 
        time_resume();
 
+#if defined(__i386__) || defined(__x86_64__)
        switch_idle_mm();
 
        __sti();
+#elif defined(__ia64__)
+       local_irq_enable();
+#endif
 
        xencons_resume();
 
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/drivers/xen/netback/netback.c
--- a/linux-2.6-xen-sparse/drivers/xen/netback/netback.c        Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/drivers/xen/netback/netback.c        Fri Jul 28 10:51:38 2006 +0100
@@ -99,24 +99,21 @@ static spinlock_t net_schedule_list_lock
 #define MAX_MFN_ALLOC 64
 static unsigned long mfn_list[MAX_MFN_ALLOC];
 static unsigned int alloc_index = 0;
-static DEFINE_SPINLOCK(mfn_lock);
 
 static unsigned long alloc_mfn(void)
 {
-       unsigned long mfn = 0, flags;
+       unsigned long mfn = 0;
        struct xen_memory_reservation reservation = {
                .nr_extents   = MAX_MFN_ALLOC,
                .extent_order = 0,
                .domid        = DOMID_SELF
        };
        set_xen_guest_handle(reservation.extent_start, mfn_list);
-       spin_lock_irqsave(&mfn_lock, flags);
        if ( unlikely(alloc_index == 0) )
                alloc_index = HYPERVISOR_memory_op(
                        XENMEM_increase_reservation, &reservation);
        if ( alloc_index != 0 )
                mfn = mfn_list[--alloc_index];
-       spin_unlock_irqrestore(&mfn_lock, flags);
        return mfn;
 }
 
@@ -222,9 +219,13 @@ static void net_rx_action(unsigned long 
        unsigned long vdata, old_mfn, new_mfn;
        struct sk_buff_head rxq;
        struct sk_buff *skb;
-       u16 notify_list[NET_RX_RING_SIZE];
        int notify_nr = 0;
        int ret;
+       /*
+        * Putting hundreds of bytes on the stack is considered rude.
+        * Static works because a tasklet can only be on one CPU at any time.
+        */
+       static u16 notify_list[NET_RX_RING_SIZE];
 
        skb_queue_head_init(&rxq);
 
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/drivers/xen/netfront/netfront.c
--- a/linux-2.6-xen-sparse/drivers/xen/netfront/netfront.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/drivers/xen/netfront/netfront.c      Fri Jul 28 10:51:38 2006 +0100
@@ -788,6 +788,8 @@ static int network_start_xmit(struct sk_
 
                gso->u.gso.size = skb_shinfo(skb)->gso_size;
                gso->u.gso.type = XEN_NETIF_GSO_TYPE_TCPV4;
+               gso->u.gso.pad = 0;
+               gso->u.gso.features = 0;
 
                gso->type = XEN_NETIF_EXTRA_TYPE_GSO;
                gso->flags = 0;
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/include/asm-ia64/hypercall.h
--- a/linux-2.6-xen-sparse/include/asm-ia64/hypercall.h Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/include/asm-ia64/hypercall.h Fri Jul 28 10:51:38 2006 +0100
@@ -302,23 +302,7 @@ HYPERVISOR_vcpu_op(
     return _hypercall3(int, vcpu_op, cmd, vcpuid, extra_args);
 }
 
-static inline int
-HYPERVISOR_suspend(
-       unsigned long srec)
-{
-       struct sched_shutdown sched_shutdown = {
-               .reason = SHUTDOWN_suspend
-       };
-
-       int rc = _hypercall3(int, sched_op, SCHEDOP_shutdown,
-                            &sched_shutdown, srec);
-
-       if (rc == -ENOSYS)
-               rc = _hypercall3(int, sched_op_compat, SCHEDOP_shutdown,
-                                SHUTDOWN_suspend, srec);
-
-       return rc;
-}
+extern int HYPERVISOR_suspend(unsigned long srec);
 
 static inline int
 HYPERVISOR_callback_op(
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/include/asm-ia64/xen/privop.h
--- a/linux-2.6-xen-sparse/include/asm-ia64/xen/privop.h        Thu Jul 27 17:44:14 2006 -0500
+++ b/linux-2.6-xen-sparse/include/asm-ia64/xen/privop.h        Fri Jul 28 10:51:38 2006 +0100
@@ -48,6 +48,8 @@
 #define        XEN_HYPER_GET_PMD               break HYPERPRIVOP_GET_PMD
 #define        XEN_HYPER_GET_EFLAG             break HYPERPRIVOP_GET_EFLAG
 #define        XEN_HYPER_SET_EFLAG             break HYPERPRIVOP_SET_EFLAG
+#define        XEN_HYPER_RSM_BE                break HYPERPRIVOP_RSM_BE
+#define        XEN_HYPER_GET_PSR               break HYPERPRIVOP_GET_PSR
 
 #define XSI_IFS                        (XSI_BASE + XSI_IFS_OFS)
 #define XSI_PRECOVER_IFS       (XSI_BASE + XSI_PRECOVER_IFS_OFS)
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/domain-reset
--- a/tools/ioemu/patches/domain-reset  Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/domain-reset  Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/target-i386-dm/helper2.c
 Index: ioemu/target-i386-dm/helper2.c
 ===================================================================
---- ioemu.orig/target-i386-dm/helper2.c        2006-07-12 11:35:00.710827712 +0100
-+++ ioemu/target-i386-dm/helper2.c     2006-07-12 11:35:02.419613627 +0100
+--- ioemu.orig/target-i386-dm/helper2.c        2006-07-27 11:16:57.527492229 +0100
++++ ioemu/target-i386-dm/helper2.c     2006-07-27 11:16:59.381287013 +0100
 @@ -123,6 +123,25 @@
  /* called from main_cpu_reset */
  void cpu_reset(CPUX86State *env)
@@ -41,9 +41,9 @@ Index: ioemu/target-i386-dm/helper2.c
          /* Wait up to 10 msec. */
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-12 11:35:02.273631916 +0100
-+++ ioemu/vl.c 2006-07-12 11:35:02.421613376 +0100
-@@ -4411,7 +4411,7 @@
+--- ioemu.orig/vl.c    2006-07-27 11:16:59.317294097 +0100
++++ ioemu/vl.c 2006-07-27 11:16:59.384286681 +0100
+@@ -4412,7 +4412,7 @@
  } QEMUResetEntry;
  
  static QEMUResetEntry *first_reset_entry;
@@ -54,8 +54,8 @@ Index: ioemu/vl.c
  
 Index: ioemu/vl.h
 ===================================================================
---- ioemu.orig/vl.h    2006-07-12 11:35:01.454734511 +0100
-+++ ioemu/vl.h 2006-07-12 11:35:02.422613251 +0100
+--- ioemu.orig/vl.h    2006-07-27 11:16:58.127425816 +0100
++++ ioemu/vl.h 2006-07-27 11:16:59.384286681 +0100
 @@ -122,6 +122,7 @@
  
  void qemu_register_reset(QEMUResetHandler *func, void *opaque);
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/domain-timeoffset
--- a/tools/ioemu/patches/domain-timeoffset     Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/domain-timeoffset     Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/hw/mc146818rtc.c
 Index: ioemu/hw/mc146818rtc.c
 ===================================================================
---- ioemu.orig/hw/mc146818rtc.c        2006-07-26 15:17:35.110819901 +0100
-+++ ioemu/hw/mc146818rtc.c     2006-07-26 15:17:40.292255496 +0100
+--- ioemu.orig/hw/mc146818rtc.c        2006-07-27 11:17:18.007225084 +0100
++++ ioemu/hw/mc146818rtc.c     2006-07-27 11:17:48.250876949 +0100
 @@ -178,10 +178,27 @@
      }
  }
@@ -46,8 +46,8 @@ Index: ioemu/hw/mc146818rtc.c
  static void rtc_copy_date(RTCState *s)
 Index: ioemu/hw/pc.c
 ===================================================================
---- ioemu.orig/hw/pc.c 2006-07-26 15:17:39.820306906 +0100
-+++ ioemu/hw/pc.c      2006-07-26 15:17:40.293255388 +0100
+--- ioemu.orig/hw/pc.c 2006-07-27 11:17:47.993905398 +0100
++++ ioemu/hw/pc.c      2006-07-27 11:17:48.251876839 +0100
 @@ -151,7 +151,7 @@
  }
  
@@ -117,8 +117,8 @@ Index: ioemu/hw/pc.c
  QEMUMachine pc_machine = {
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-26 15:17:40.169268893 +0100
-+++ ioemu/vl.c 2006-07-26 15:17:40.296255061 +0100
+--- ioemu.orig/vl.c    2006-07-27 11:17:48.126890676 +0100
++++ ioemu/vl.c 2006-07-27 11:17:48.254876507 +0100
 @@ -164,6 +164,8 @@
  
  int xc_handle;
@@ -128,7 +128,7 @@ Index: ioemu/vl.c
  char domain_name[1024] = { 'H','V', 'M', 'X', 'E', 'N', '-'};
  extern int domid;
  
-@@ -4799,6 +4801,7 @@
+@@ -4800,6 +4802,7 @@
  #endif
             "-loadvm file    start right away with a saved state (loadvm in 
monitor)\n"
           "-vnc display    start a VNC server on display\n"
@@ -136,7 +136,7 @@ Index: ioemu/vl.c
             "\n"
             "During emulation, the following keys are useful:\n"
             "ctrl-alt-f      toggle full screen\n"
-@@ -4889,6 +4892,7 @@
+@@ -4890,6 +4893,7 @@
  
      QEMU_OPTION_d,
      QEMU_OPTION_vcpus,
@@ -144,7 +144,7 @@ Index: ioemu/vl.c
  };
  
  typedef struct QEMUOption {
-@@ -4967,6 +4971,7 @@
+@@ -4968,6 +4972,7 @@
      
      { "d", HAS_ARG, QEMU_OPTION_d },
      { "vcpus", 1, QEMU_OPTION_vcpus },
@@ -152,7 +152,7 @@ Index: ioemu/vl.c
      { NULL },
  };
  
-@@ -5669,6 +5674,9 @@
+@@ -5670,6 +5675,9 @@
                  vcpus = atoi(optarg);
                  fprintf(logfile, "qemu: the number of cpus is %d\n", vcpus);
                  break;
@@ -162,7 +162,7 @@ Index: ioemu/vl.c
              }
          }
      }
-@@ -5992,7 +6000,8 @@
+@@ -5993,7 +6001,8 @@
  
      machine->init(ram_size, vga_ram_size, boot_device,
                    ds, fd_filename, snapshot,
@@ -174,8 +174,8 @@ Index: ioemu/vl.c
      qemu_mod_timer(gui_timer, qemu_get_clock(rt_clock));
 Index: ioemu/vl.h
 ===================================================================
---- ioemu.orig/vl.h    2006-07-26 15:17:39.825306361 +0100
-+++ ioemu/vl.h 2006-07-26 15:17:40.297254952 +0100
+--- ioemu.orig/vl.h    2006-07-27 11:17:47.998904845 +0100
++++ ioemu/vl.h 2006-07-27 11:17:48.254876507 +0100
 @@ -556,7 +556,7 @@
                                   int boot_device,
               DisplayState *ds, const char **fd_filename, int snapshot,
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/hypervisor-pit
--- a/tools/ioemu/patches/hypervisor-pit        Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/hypervisor-pit        Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/Makefile.target
 Index: ioemu/Makefile.target
 ===================================================================
---- ioemu.orig/Makefile.target 2006-07-12 11:35:01.899678766 +0100
-+++ ioemu/Makefile.target      2006-07-12 11:35:02.711577049 +0100
+--- ioemu.orig/Makefile.target 2006-07-27 11:16:58.970332506 +0100
++++ ioemu/Makefile.target      2006-07-27 11:16:59.758245283 +0100
 @@ -333,7 +333,7 @@
  ifeq ($(TARGET_BASE_ARCH), i386)
  # Hardware support
@@ -13,8 +13,8 @@ Index: ioemu/Makefile.target
  endif
 Index: ioemu/hw/pc.c
 ===================================================================
---- ioemu.orig/hw/pc.c 2006-07-12 11:35:02.059658723 +0100
-+++ ioemu/hw/pc.c      2006-07-12 11:35:02.712576924 +0100
+--- ioemu.orig/hw/pc.c 2006-07-27 11:16:59.036325200 +0100
++++ ioemu/hw/pc.c      2006-07-27 11:16:59.759245173 +0100
 @@ -38,7 +38,9 @@
  
  static fdctrl_t *floppy_controller;
@@ -38,9 +38,9 @@ Index: ioemu/hw/pc.c
          pic_set_alt_irq_func(isa_pic, ioapic_set_irq, ioapic);
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-12 11:35:02.649584815 +0100
-+++ ioemu/vl.c 2006-07-12 11:35:02.715576548 +0100
-@@ -5033,6 +5033,7 @@
+--- ioemu.orig/vl.c    2006-07-27 11:16:59.614261222 +0100
++++ ioemu/vl.c 2006-07-27 11:16:59.762244841 +0100
+@@ -5034,6 +5034,7 @@
  
  #ifdef HAS_AUDIO
  struct soundhw soundhw[] = {
@@ -48,7 +48,7 @@ Index: ioemu/vl.c
  #ifdef TARGET_I386
      {
          "pcspk",
-@@ -5042,6 +5043,7 @@
+@@ -5043,6 +5044,7 @@
          { .init_isa = pcspk_audio_init }
      },
  #endif
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/ioemu-ia64
--- a/tools/ioemu/patches/ioemu-ia64    Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/ioemu-ia64    Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/hw/iommu.c
 Index: ioemu/hw/iommu.c
 ===================================================================
---- ioemu.orig/hw/iommu.c      2006-07-26 15:17:35.639762285 +0100
-+++ ioemu/hw/iommu.c   2006-07-26 15:17:39.078387722 +0100
+--- ioemu.orig/hw/iommu.c      2006-07-28 09:56:58.571272016 +0100
++++ ioemu/hw/iommu.c   2006-07-28 10:02:10.171049510 +0100
 @@ -82,7 +82,11 @@
  #define IOPTE_VALID         0x00000002 /* IOPTE is valid */
  #define IOPTE_WAZ           0x00000001 /* Write as zeros */
@@ -16,8 +16,8 @@ Index: ioemu/hw/iommu.c
  
 Index: ioemu/cpu-all.h
 ===================================================================
---- ioemu.orig/cpu-all.h       2006-07-26 15:17:38.728425843 +0100
-+++ ioemu/cpu-all.h    2006-07-26 15:17:39.079387613 +0100
+--- ioemu.orig/cpu-all.h       2006-07-28 09:58:38.815935452 +0100
++++ ioemu/cpu-all.h    2006-07-28 10:02:10.171049510 +0100
 @@ -835,6 +835,31 @@
                  :"=m" (*(volatile long *)addr)
                  :"dIr" (nr));
@@ -52,9 +52,9 @@ Index: ioemu/cpu-all.h
  /* memory API */
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-26 15:17:39.011395020 +0100
-+++ ioemu/vl.c 2006-07-26 21:11:35.957492161 +0100
-@@ -5577,6 +5577,7 @@
+--- ioemu.orig/vl.c    2006-07-28 09:58:59.672577418 +0100
++++ ioemu/vl.c 2006-07-28 10:02:10.174049171 +0100
+@@ -5578,6 +5578,7 @@
          exit(-1);
      }
  
@@ -62,7 +62,7 @@ Index: ioemu/vl.c
      if (xc_get_pfn_list(xc_handle, domid, page_array, nr_pages) != nr_pages) {
          fprintf(logfile, "xc_get_pfn_list returned error %d\n", errno);
          exit(-1);
-@@ -5597,6 +5598,34 @@
+@@ -5598,6 +5599,34 @@
      fprintf(logfile, "shared page at pfn:%lx, mfn: %"PRIx64"\n", nr_pages - 1,
              (uint64_t)(page_array[nr_pages - 1]));
  
@@ -99,9 +99,9 @@ Index: ioemu/vl.c
  #ifdef CONFIG_SOFTMMU
 Index: ioemu/target-i386-dm/exec-dm.c
 ===================================================================
---- ioemu.orig/target-i386-dm/exec-dm.c        2006-07-26 15:17:38.283474311 +0100
-+++ ioemu/target-i386-dm/exec-dm.c     2006-07-26 15:17:39.081387395 +0100
-@@ -340,6 +340,23 @@
+--- ioemu.orig/target-i386-dm/exec-dm.c        2006-07-28 09:58:22.882736989 +0100
++++ ioemu/target-i386-dm/exec-dm.c     2006-07-28 10:03:19.972165675 +0100
+@@ -341,6 +341,23 @@
      return io_mem_read[io_index >> IO_MEM_SHIFT];
  }
  
@@ -125,20 +125,20 @@ Index: ioemu/target-i386-dm/exec-dm.c
  /* physical memory access (slow version, mainly for debug) */
  #if defined(CONFIG_USER_ONLY)
  void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf, 
-@@ -455,6 +472,9 @@
+@@ -456,6 +473,9 @@
                  ptr = phys_ram_base + (pd & TARGET_PAGE_MASK) + 
                      (addr & ~TARGET_PAGE_MASK);
                  memcpy(buf, ptr, l);
 +#ifdef __ia64__
 +                sync_icache((unsigned long)ptr, l);
 +#endif 
-             }
-         }
-         len -= l;
+             } else {
+                 /* unreported MMIO space */
+                 memset(buf, 0xff, len);
 Index: ioemu/exec-all.h
 ===================================================================
---- ioemu.orig/exec-all.h      2006-07-26 15:17:38.200483351 +0100
-+++ ioemu/exec-all.h   2006-07-26 21:11:41.262898983 +0100
+--- ioemu.orig/exec-all.h      2006-07-28 09:56:58.572271903 +0100
++++ ioemu/exec-all.h   2006-07-28 10:02:10.175049059 +0100
 @@ -462,12 +462,13 @@
  }
  #endif
@@ -158,8 +158,8 @@ Index: ioemu/exec-all.h
  
 Index: ioemu/target-i386-dm/cpu.h
 ===================================================================
---- ioemu.orig/target-i386-dm/cpu.h    2006-07-26 15:17:38.282474420 +0100
-+++ ioemu/target-i386-dm/cpu.h 2006-07-26 15:17:39.082387287 +0100
+--- ioemu.orig/target-i386-dm/cpu.h    2006-07-28 09:56:58.572271903 +0100
++++ ioemu/target-i386-dm/cpu.h 2006-07-28 10:02:10.175049059 +0100
 @@ -80,7 +80,11 @@
  /* helper2.c */
  int main_loop(void);
@@ -175,7 +175,7 @@ Index: ioemu/ia64_intrinsic.h
 Index: ioemu/ia64_intrinsic.h
 ===================================================================
 --- /dev/null  1970-01-01 00:00:00.000000000 +0000
-+++ ioemu/ia64_intrinsic.h     2006-07-26 15:17:39.083387178 +0100
++++ ioemu/ia64_intrinsic.h     2006-07-28 10:02:10.176048946 +0100
 @@ -0,0 +1,276 @@
 +#ifndef IA64_INTRINSIC_H
 +#define IA64_INTRINSIC_H
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/qemu-bugfixes
--- a/tools/ioemu/patches/qemu-bugfixes Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/qemu-bugfixes Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/console.c
 Index: ioemu/console.c
 ===================================================================
---- ioemu.orig/console.c       2006-07-26 13:39:11.999009495 +0100
-+++ ioemu/console.c    2006-07-26 14:15:19.413719225 +0100
+--- ioemu.orig/console.c       2006-07-27 11:16:53.732912290 +0100
++++ ioemu/console.c    2006-07-27 11:16:57.753467214 +0100
 @@ -449,7 +449,7 @@
              c++;
          }
@@ -50,8 +50,8 @@ Index: ioemu/console.c
      s->y_base = 0;
 Index: ioemu/usb-linux.c
 ===================================================================
---- ioemu.orig/usb-linux.c     2006-07-26 13:39:11.999009495 +0100
-+++ ioemu/usb-linux.c  2006-07-26 13:39:16.622514851 +0100
+--- ioemu.orig/usb-linux.c     2006-07-27 11:16:53.732912290 +0100
++++ ioemu/usb-linux.c  2006-07-27 11:16:57.754467103 +0100
 @@ -26,6 +26,7 @@
  #if defined(__linux__)
  #include <dirent.h>
@@ -60,3 +60,15 @@ Index: ioemu/usb-linux.c
  #include <linux/usbdevice_fs.h>
  #include <linux/version.h>
  
+Index: ioemu/vl.c
+===================================================================
+--- ioemu.orig/vl.c    2006-07-27 11:16:57.681475183 +0100
++++ ioemu/vl.c 2006-07-27 11:17:33.279534373 +0100
+@@ -3201,6 +3201,7 @@
+             if (net_tap_fd_init(vlan, fd))
+                 ret = 0;
+         } else {
++            ifname[0] = '\0';
+             get_param_value(ifname, sizeof(ifname), "ifname", p);
+             if (get_param_value(setup_script, sizeof(setup_script), "script", p) == 0) {
+                 pstrcpy(setup_script, sizeof(setup_script), DEFAULT_NETWORK_SCRIPT);
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/qemu-logging
--- a/tools/ioemu/patches/qemu-logging  Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/qemu-logging  Fri Jul 28 10:51:38 2006 +0100
@@ -1,8 +1,8 @@ Index: ioemu/vl.c
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-14 15:55:59.491503372 +0100
-+++ ioemu/vl.c 2006-07-14 15:55:59.693480386 +0100
-@@ -4697,7 +4697,7 @@
+--- ioemu.orig/vl.c    2006-07-27 11:16:57.756466882 +0100
++++ ioemu/vl.c 2006-07-27 11:16:57.828458912 +0100
+@@ -4698,7 +4698,7 @@
             "-S              freeze CPU at startup (use 'c' to start 
execution)\n"
             "-s              wait gdb connection to port %d\n"
             "-p port         change gdb connection port\n"
@@ -11,7 +11,7 @@ Index: ioemu/vl.c
             "-hdachs c,h,s[,t]  force hard disk 0 physical geometry and the 
optional BIOS\n"
             "                translation (t=none or lba) (usually qemu can 
guess them)\n"
             "-L path         set the directory for the BIOS and VGA BIOS\n"
-@@ -4775,7 +4775,7 @@
+@@ -4776,7 +4776,7 @@
      QEMU_OPTION_S,
      QEMU_OPTION_s,
      QEMU_OPTION_p,
@@ -20,7 +20,7 @@ Index: ioemu/vl.c
      QEMU_OPTION_hdachs,
      QEMU_OPTION_L,
  #ifdef USE_CODE_COPY
-@@ -4844,7 +4844,7 @@
+@@ -4845,7 +4845,7 @@
      { "S", 0, QEMU_OPTION_S },
      { "s", 0, QEMU_OPTION_s },
      { "p", HAS_ARG, QEMU_OPTION_p },
@@ -29,7 +29,7 @@ Index: ioemu/vl.c
      { "hdachs", HAS_ARG, QEMU_OPTION_hdachs },
      { "L", HAS_ARG, QEMU_OPTION_L },
  #ifdef USE_CODE_COPY
-@@ -5095,6 +5095,8 @@
+@@ -5096,6 +5096,8 @@
      char usb_devices[MAX_VM_USB_PORTS][128];
      int usb_devices_index;
  
@@ -38,7 +38,7 @@ Index: ioemu/vl.c
      LIST_INIT (&vm_change_state_head);
  #if !defined(CONFIG_SOFTMMU)
      /* we never want that malloc() uses mmap() */
-@@ -5144,6 +5146,11 @@
+@@ -5145,6 +5147,11 @@
      nb_nics = 0;
      /* default mac address of the first network interface */
      
@@ -50,7 +50,7 @@ Index: ioemu/vl.c
      optind = 1;
      for(;;) {
          if (optind >= argc)
-@@ -5329,7 +5336,7 @@
+@@ -5330,7 +5337,7 @@
                      exit(1);
                  }
                  break;
@@ -59,7 +59,7 @@ Index: ioemu/vl.c
                  {
                      int mask;
                      CPULogItem *item;
-@@ -5700,7 +5707,7 @@
+@@ -5701,7 +5708,7 @@
          stk.ss_flags = 0;
  
          if (sigaltstack(&stk, NULL) < 0) {
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/qemu-smp
--- a/tools/ioemu/patches/qemu-smp      Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/qemu-smp      Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/vl.c
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-12 11:35:01.687705323 +0100
-+++ ioemu/vl.c 2006-07-12 11:35:01.753697055 +0100
+--- ioemu.orig/vl.c    2006-07-27 11:16:58.619371357 +0100
++++ ioemu/vl.c 2006-07-27 11:16:58.823348777 +0100
 @@ -159,6 +159,8 @@
  #define MAX_CPUS 1
  #endif
@@ -11,7 +11,7 @@ Index: ioemu/vl.c
  int xc_handle;
  
  char domain_name[1024] = { 'H','V', 'M', 'X', 'E', 'N', '-'};
-@@ -4635,6 +4637,7 @@
+@@ -4636,6 +4638,7 @@
             "-m megs         set virtual RAM size to megs MB [default=%d]\n"
             "-smp n          set the number of CPUs to 'n' [default=1]\n"
             "-nographic      disable graphical output and redirect serial I/Os 
to console\n"
@@ -19,7 +19,7 @@ Index: ioemu/vl.c
  #ifndef _WIN32
           "-k language     use keyboard layout (for example \"fr\" for 
French)\n"
  #endif
-@@ -4809,6 +4812,7 @@
+@@ -4810,6 +4813,7 @@
      QEMU_OPTION_vnc,
  
      QEMU_OPTION_d,
@@ -27,7 +27,7 @@ Index: ioemu/vl.c
  };
  
  typedef struct QEMUOption {
-@@ -4886,6 +4890,7 @@
+@@ -4887,6 +4891,7 @@
      { "cirrusvga", 0, QEMU_OPTION_cirrusvga },
      
      { "d", HAS_ARG, QEMU_OPTION_d },
@@ -35,7 +35,7 @@ Index: ioemu/vl.c
      { NULL },
  };
  
-@@ -5508,6 +5513,10 @@
+@@ -5509,6 +5514,10 @@
                  domid = atoi(optarg);
                  fprintf(logfile, "domid: %d\n", domid);
                  break;
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/qemu-target-i386-dm
--- a/tools/ioemu/patches/qemu-target-i386-dm   Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/qemu-target-i386-dm   Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/Makefile.target
 Index: ioemu/Makefile.target
 ===================================================================
---- ioemu.orig/Makefile.target 2006-07-26 11:45:57.572129351 +0100
-+++ ioemu/Makefile.target      2006-07-26 11:45:57.589127569 +0100
+--- ioemu.orig/Makefile.target 2006-07-28 09:56:49.468301708 +0100
++++ ioemu/Makefile.target      2006-07-28 09:56:58.486281629 +0100
 @@ -57,6 +57,8 @@
  QEMU_SYSTEM=qemu-fast
  endif
@@ -32,8 +32,8 @@ Index: ioemu/Makefile.target
  endif
 Index: ioemu/configure
 ===================================================================
---- ioemu.orig/configure       2006-07-26 11:45:57.573129246 +0100
-+++ ioemu/configure    2006-07-26 11:45:57.590127464 +0100
+--- ioemu.orig/configure       2006-07-28 09:56:49.469301595 +0100
++++ ioemu/configure    2006-07-28 09:56:49.486299672 +0100
 @@ -359,6 +359,8 @@
      if [ "$user" = "yes" ] ; then
          target_list="i386-user arm-user armeb-user sparc-user ppc-user 
mips-user mipsel-user $target_list"
@@ -45,8 +45,8 @@ Index: ioemu/configure
  fi
 Index: ioemu/monitor.c
 ===================================================================
---- ioemu.orig/monitor.c       2006-07-26 11:45:57.576128931 +0100
-+++ ioemu/monitor.c    2006-07-26 11:45:57.591127359 +0100
+--- ioemu.orig/monitor.c       2006-07-28 09:56:49.472301255 +0100
++++ ioemu/monitor.c    2006-07-28 09:56:58.720255164 +0100
 @@ -1142,6 +1142,10 @@
        "", "show host USB devices", },
      { "profile", "", do_info_profile,
@@ -60,8 +60,8 @@ Index: ioemu/monitor.c
  
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-26 11:45:57.579128617 +0100
-+++ ioemu/vl.c 2006-07-26 11:45:57.593127149 +0100
+--- ioemu.orig/vl.c    2006-07-28 09:56:49.475300916 +0100
++++ ioemu/vl.c 2006-07-28 09:56:58.917232883 +0100
 @@ -87,7 +87,7 @@
  
  #include "exec-all.h"
@@ -98,8 +98,8 @@ Index: ioemu/vl.c
  {
 Index: ioemu/vl.h
 ===================================================================
---- ioemu.orig/vl.h    2006-07-26 11:45:39.289045710 +0100
-+++ ioemu/vl.h 2006-07-26 11:45:57.594127044 +0100
+--- ioemu.orig/vl.h    2006-07-28 09:56:49.281322859 +0100
++++ ioemu/vl.h 2006-07-28 09:56:58.917232883 +0100
 @@ -38,6 +38,8 @@
  #include <fcntl.h>
  #include <sys/stat.h>
@@ -132,7 +132,7 @@ Index: ioemu/target-i386-dm/cpu.h
 Index: ioemu/target-i386-dm/cpu.h
 ===================================================================
 --- /dev/null  1970-01-01 00:00:00.000000000 +0000
-+++ ioemu/target-i386-dm/cpu.h 2006-07-26 11:45:57.594127044 +0100
++++ ioemu/target-i386-dm/cpu.h 2006-07-28 09:56:58.572271903 +0100
 @@ -0,0 +1,86 @@
 +/*
 + * i386 virtual CPU header
@@ -223,8 +223,8 @@ Index: ioemu/target-i386-dm/exec-dm.c
 Index: ioemu/target-i386-dm/exec-dm.c
 ===================================================================
 --- /dev/null  1970-01-01 00:00:00.000000000 +0000
-+++ ioemu/target-i386-dm/exec-dm.c     2006-07-26 11:46:01.059763730 +0100
-@@ -0,0 +1,512 @@
++++ ioemu/target-i386-dm/exec-dm.c     2006-07-28 09:58:22.882736989 +0100
+@@ -0,0 +1,516 @@
 +/*
 + *  virtual page mapping and translated block handling
 + * 
@@ -291,6 +291,7 @@ Index: ioemu/target-i386-dm/exec-dm.c
 +#endif /* !CONFIG_DM */
 +
 +uint64_t phys_ram_size;
++extern uint64_t ram_size;
 +int phys_ram_fd;
 +uint8_t *phys_ram_base;
 +uint8_t *phys_ram_dirty;
@@ -632,7 +633,7 @@ Index: ioemu/target-i386-dm/exec-dm.c
 +            l = len;
 +      
 +        pd = page;
-+        io_index = iomem_index(page);
++        io_index = iomem_index(addr);
 +        if (is_write) {
 +            if (io_index) {
 +                if (l >= 4 && ((addr & 3) == 0)) {
@@ -677,11 +678,14 @@ Index: ioemu/target-i386-dm/exec-dm.c
 +                    stb_raw(buf, val);
 +                    l = 1;
 +                }
-+            } else {
++            } else if (addr < ram_size) {
 +                /* RAM case */
 +                ptr = phys_ram_base + (pd & TARGET_PAGE_MASK) + 
 +                    (addr & ~TARGET_PAGE_MASK);
 +                memcpy(buf, ptr, l);
++            } else {
++                /* unreported MMIO space */
++                memset(buf, 0xff, len);
 +            }
 +        }
 +        len -= l;
@@ -740,7 +744,7 @@ Index: ioemu/target-i386-dm/helper2.c
 Index: ioemu/target-i386-dm/helper2.c
 ===================================================================
 --- /dev/null  1970-01-01 00:00:00.000000000 +0000
-+++ ioemu/target-i386-dm/helper2.c     2006-07-26 11:45:57.596126835 +0100
++++ ioemu/target-i386-dm/helper2.c     2006-07-28 09:56:58.312301309 +0100
 @@ -0,0 +1,464 @@
 +/*
 + *  i386 helpers (without register variable usage)
@@ -1209,7 +1213,7 @@ Index: ioemu/target-i386-dm/i8259-dm.c
 Index: ioemu/target-i386-dm/i8259-dm.c
 ===================================================================
 --- /dev/null  1970-01-01 00:00:00.000000000 +0000
-+++ ioemu/target-i386-dm/i8259-dm.c    2006-07-26 11:45:57.596126835 +0100
++++ ioemu/target-i386-dm/i8259-dm.c    2006-07-28 09:56:49.492298993 +0100
 @@ -0,0 +1,107 @@
 +/* Xen 8259 stub for interrupt controller emulation
 + * 
@@ -1321,7 +1325,7 @@ Index: ioemu/target-i386-dm/qemu-dm.debu
 Index: ioemu/target-i386-dm/qemu-dm.debug
 ===================================================================
 --- /dev/null  1970-01-01 00:00:00.000000000 +0000
-+++ ioemu/target-i386-dm/qemu-dm.debug 2006-07-26 11:45:57.596126835 +0100
++++ ioemu/target-i386-dm/qemu-dm.debug 2006-07-28 09:56:49.493298880 +0100
 @@ -0,0 +1,5 @@
 +#!/bin/sh
 +
@@ -1331,7 +1335,7 @@ Index: ioemu/target-i386-dm/qemu-ifup
 Index: ioemu/target-i386-dm/qemu-ifup
 ===================================================================
 --- /dev/null  1970-01-01 00:00:00.000000000 +0000
-+++ ioemu/target-i386-dm/qemu-ifup     2006-07-26 11:45:57.597126730 +0100
++++ ioemu/target-i386-dm/qemu-ifup     2006-07-28 09:56:49.493298880 +0100
 @@ -0,0 +1,10 @@
 +#!/bin/sh
 +
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/shared-vram
--- a/tools/ioemu/patches/shared-vram   Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/shared-vram   Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/hw/cirrus_vga.c
 Index: ioemu/hw/cirrus_vga.c
 ===================================================================
---- ioemu.orig/hw/cirrus_vga.c 2006-07-26 15:17:35.230806831 +0100
-+++ ioemu/hw/cirrus_vga.c      2006-07-26 15:17:39.819307015 +0100
+--- ioemu.orig/hw/cirrus_vga.c 2006-07-27 11:16:53.059986783 +0100
++++ ioemu/hw/cirrus_vga.c      2006-07-27 11:16:59.923227020 +0100
 @@ -28,6 +28,9 @@
   */
  #include "vl.h"
@@ -176,8 +176,8 @@ Index: ioemu/hw/cirrus_vga.c
  }
 Index: ioemu/hw/pc.c
 ===================================================================
---- ioemu.orig/hw/pc.c 2006-07-26 15:17:39.752314312 +0100
-+++ ioemu/hw/pc.c      2006-07-26 15:17:39.820306906 +0100
+--- ioemu.orig/hw/pc.c 2006-07-27 11:16:59.759245173 +0100
++++ ioemu/hw/pc.c      2006-07-27 11:16:59.924226909 +0100
 @@ -783,14 +783,14 @@
      if (cirrus_vga_enabled) {
          if (pci_enabled) {
@@ -198,8 +198,8 @@ Index: ioemu/hw/pc.c
  
 Index: ioemu/hw/vga.c
 ===================================================================
---- ioemu.orig/hw/vga.c        2006-07-26 15:17:39.352357879 +0100
-+++ ioemu/hw/vga.c     2006-07-26 15:17:39.821306797 +0100
+--- ioemu.orig/hw/vga.c        2006-07-27 11:16:59.103317784 +0100
++++ ioemu/hw/vga.c     2006-07-27 11:16:59.925226798 +0100
 @@ -1799,6 +1799,7 @@
      /* TODO: add vbe support if enabled */
  }
@@ -217,7 +217,7 @@ Index: ioemu/hw/vga.c
      s->vram_offset = vga_ram_offset;
      s->vram_size = vga_ram_size;
      s->ds = ds;
-@@ -1941,6 +1942,31 @@
+@@ -1943,6 +1944,31 @@
      return 0;
  }
  
@@ -251,8 +251,8 @@ Index: ioemu/hw/vga.c
  
 Index: ioemu/hw/vga_int.h
 ===================================================================
---- ioemu.orig/hw/vga_int.h    2006-07-26 15:17:38.201483242 +0100
-+++ ioemu/hw/vga_int.h 2006-07-26 15:17:39.822306688 +0100
+--- ioemu.orig/hw/vga_int.h    2006-07-27 11:16:57.447501084 +0100
++++ ioemu/hw/vga_int.h 2006-07-27 11:16:59.925226798 +0100
 @@ -166,5 +166,6 @@
                               unsigned int color0, unsigned int color1,
                               unsigned int color_xor);
@@ -262,9 +262,9 @@ Index: ioemu/hw/vga_int.h
  extern const uint8_t gr_mask[16];
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-26 15:17:39.755313985 +0100
-+++ ioemu/vl.c 2006-07-26 15:17:39.824306470 +0100
-@@ -5148,6 +5148,78 @@
+--- ioemu.orig/vl.c    2006-07-27 11:16:59.762244841 +0100
++++ ioemu/vl.c 2006-07-27 11:16:59.928226466 +0100
+@@ -5149,6 +5149,78 @@
  
  #define MAX_NET_CLIENTS 32
  
@@ -345,8 +345,8 @@ Index: ioemu/vl.c
  #ifdef CONFIG_GDBSTUB
 Index: ioemu/vl.h
 ===================================================================
---- ioemu.orig/vl.h    2006-07-26 15:17:39.621328580 +0100
-+++ ioemu/vl.h 2006-07-26 15:17:39.825306361 +0100
+--- ioemu.orig/vl.h    2006-07-27 11:16:59.549268417 +0100
++++ ioemu/vl.h 2006-07-27 11:16:59.929226356 +0100
 @@ -136,6 +136,13 @@
  
  void main_loop_wait(int timeout);
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/support-xm-console
--- a/tools/ioemu/patches/support-xm-console    Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/support-xm-console    Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,8 @@ diff -r d08c08f8fbf3 vl.c
-diff -r d08c08f8fbf3 vl.c
---- a/vl.c     Mon Jun 26 15:18:25 2006 +0100
-+++ b/vl.c     Mon Jun 26 15:18:37 2006 +0100
-@@ -1535,26 +1535,65 @@ CharDriverState *qemu_chr_open_stdio(voi
+Index: ioemu/vl.c
+===================================================================
+--- ioemu.orig/vl.c    2006-07-27 11:16:59.384286681 +0100
++++ ioemu/vl.c 2006-07-27 11:16:59.614261222 +0100
+@@ -1535,26 +1535,65 @@
      return chr;
  }
  
@@ -65,19 +66,18 @@ diff -r d08c08f8fbf3 vl.c
 -    tty.c_cc[VMIN] = 1;
 -    tty.c_cc[VTIME] = 0;
 -    tcsetattr (master_fd, TCSAFLUSH, &tty);
--
--    fprintf(stderr, "char device redirected to %s\n", slave_name);
 +    /* Set raw attributes on the pty. */
 +    cfmakeraw(&tty);
 +    tcsetattr(slave_fd, TCSAFLUSH, &tty);
 +    
 +    fprintf(stderr, "char device redirected to %s\n", ptsname(master_fd));
 +    store_console_dev(domid, ptsname(master_fd));
-+
+ 
+-    fprintf(stderr, "char device redirected to %s\n", slave_name);
      return qemu_chr_open_fd(master_fd, master_fd);
  }
  
-@@ -5297,7 +5336,9 @@ int main(int argc, char **argv)
+@@ -5298,7 +5337,9 @@
                  break;
              case QEMU_OPTION_nographic:
                  pstrcpy(monitor_device, sizeof(monitor_device), "stdio");
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/vnc-cleanup
--- a/tools/ioemu/patches/vnc-cleanup   Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/vnc-cleanup   Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,8 @@ diff -r c84300f3abc2 vnc.c
-diff -r c84300f3abc2 vnc.c
---- a/vnc.c    Wed Jul 05 18:11:23 2006 +0100
-+++ b/vnc.c    Thu Jul 06 14:27:28 2006 +0100
-@@ -83,13 +83,16 @@ static void vnc_dpy_update(DisplayState 
+Index: ioemu/vnc.c
+===================================================================
+--- ioemu.orig/vnc.c   2006-07-27 11:16:52.783017443 +0100
++++ ioemu/vnc.c        2006-07-27 11:17:00.722138579 +0100
+@@ -83,13 +83,16 @@
  static void vnc_dpy_update(DisplayState *ds, int x, int y, int w, int h)
  {
      VncState *vs = ds->opaque;
@@ -21,7 +22,7 @@ diff -r c84300f3abc2 vnc.c
  }
  
  static void vnc_framebuffer_update(VncState *vs, int x, int y, int w, int h,
-@@ -262,6 +265,7 @@ static void vnc_update_client(void *opaq
+@@ -262,6 +265,7 @@
  static void vnc_update_client(void *opaque)
  {
      VncState *vs = opaque;
@@ -29,7 +30,7 @@ diff -r c84300f3abc2 vnc.c
  
      if (vs->need_update && vs->csock != -1) {
        int y;
-@@ -282,7 +286,7 @@ static void vnc_update_client(void *opaq
+@@ -282,7 +286,7 @@
        row = vs->ds->data;
        old_row = vs->old_data;
  
@@ -38,7 +39,7 @@ diff -r c84300f3abc2 vnc.c
            if (vs->dirty_row[y] & width_mask) {
                int x;
                char *ptr, *old_ptr;
-@@ -307,10 +311,8 @@ static void vnc_update_client(void *opaq
+@@ -307,10 +311,8 @@
            old_row += vs->ds->linesize;
        }
  
@@ -51,7 +52,7 @@ diff -r c84300f3abc2 vnc.c
  
        /* Count rectangles */
        n_rectangles = 0;
-@@ -348,7 +350,9 @@ static void vnc_update_client(void *opaq
+@@ -348,7 +350,9 @@
        vnc_flush(vs);
  
      }
@@ -62,10 +63,11 @@ diff -r c84300f3abc2 vnc.c
  }
  
  static void vnc_timer_init(VncState *vs)
-diff -r c84300f3abc2 vl.c
---- a/vl.c     Wed Jul 05 18:11:23 2006 +0100
-+++ b/vl.c     Thu Jul 06 14:27:28 2006 +0100
-@@ -4586,10 +4586,10 @@ void main_loop_wait(int timeout)
+Index: ioemu/vl.c
+===================================================================
+--- ioemu.orig/vl.c    2006-07-27 11:17:00.311184072 +0100
++++ ioemu/vl.c 2006-07-27 11:17:00.724138358 +0100
+@@ -4587,10 +4587,10 @@
          /* XXX: better handling of removal */
          for(ioh = first_io_handler; ioh != NULL; ioh = ioh_next) {
              ioh_next = ioh->next;
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/vnc-fixes
--- a/tools/ioemu/patches/vnc-fixes     Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/vnc-fixes     Fri Jul 28 10:51:38 2006 +0100
@@ -1,8 +1,8 @@ Index: ioemu/vl.c
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-26 14:29:04.481598583 +0100
-+++ ioemu/vl.c 2006-07-26 14:31:22.668325993 +0100
-@@ -6003,8 +6003,10 @@
+--- ioemu.orig/vl.c    2006-07-27 11:17:00.724138358 +0100
++++ ioemu/vl.c 2006-07-27 11:17:00.874121755 +0100
+@@ -6004,8 +6004,10 @@
                    kernel_filename, kernel_cmdline, initrd_filename,
                    timeoffset);
  
@@ -17,8 +17,8 @@ Index: ioemu/vl.c
      if (use_gdbstub) {
 Index: ioemu/vnc.c
 ===================================================================
---- ioemu.orig/vnc.c   2006-07-26 14:29:04.479598804 +0100
-+++ ioemu/vnc.c        2006-07-26 14:31:22.669325883 +0100
+--- ioemu.orig/vnc.c   2006-07-27 11:17:00.722138579 +0100
++++ ioemu/vnc.c        2006-07-27 11:17:00.875121644 +0100
 @@ -3,6 +3,7 @@
   * 
   * Copyright (C) 2006 Anthony Liguori <anthony@xxxxxxxxxxxxx>
@@ -493,8 +493,8 @@ Index: ioemu/vnc.c
  }
 Index: ioemu/vl.h
 ===================================================================
---- ioemu.orig/vl.h    2006-07-26 14:31:22.669325883 +0100
-+++ ioemu/vl.h 2006-07-26 14:32:44.505279724 +0100
+--- ioemu.orig/vl.h    2006-07-27 11:17:00.311184072 +0100
++++ ioemu/vl.h 2006-07-27 11:17:00.875121644 +0100
 @@ -301,6 +301,7 @@
  int is_graphic_console(void);
  CharDriverState *text_console_init(DisplayState *ds);
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/vnc-start-vncviewer
--- a/tools/ioemu/patches/vnc-start-vncviewer   Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/vnc-start-vncviewer   Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/vnc.c
 Index: ioemu/vnc.c
 ===================================================================
---- ioemu.orig/vnc.c   2006-07-26 14:33:08.166663983 +0100
-+++ ioemu/vnc.c        2006-07-26 14:33:08.225657462 +0100
+--- ioemu.orig/vnc.c   2006-07-27 11:17:00.875121644 +0100
++++ ioemu/vnc.c        2006-07-27 11:17:01.032104266 +0100
 @@ -1002,3 +1002,25 @@
  
      vnc_dpy_resize(vs->ds, 640, 400);
@@ -30,8 +30,8 @@ Index: ioemu/vnc.c
 +}
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-26 14:33:08.165664094 +0100
-+++ ioemu/vl.c 2006-07-26 14:33:08.227657240 +0100
+--- ioemu.orig/vl.c    2006-07-27 11:17:00.874121755 +0100
++++ ioemu/vl.c 2006-07-27 11:17:01.035103934 +0100
 @@ -121,6 +121,7 @@
  int bios_size;
  static DisplayState display_state;
@@ -40,7 +40,7 @@ Index: ioemu/vl.c
  const char* keyboard_layout = NULL;
  int64_t ticks_per_sec;
  int boot_device = 'c';
-@@ -4801,6 +4802,7 @@
+@@ -4802,6 +4803,7 @@
  #endif
             "-loadvm file    start right away with a saved state (loadvm in 
monitor)\n"
           "-vnc display    start a VNC server on display\n"
@@ -48,7 +48,7 @@ Index: ioemu/vl.c
             "-timeoffset     time offset (in seconds) from local time\n"
             "\n"
             "During emulation, the following keys are useful:\n"
-@@ -4889,6 +4891,7 @@
+@@ -4890,6 +4892,7 @@
      QEMU_OPTION_usbdevice,
      QEMU_OPTION_smp,
      QEMU_OPTION_vnc,
@@ -56,7 +56,7 @@ Index: ioemu/vl.c
  
      QEMU_OPTION_d,
      QEMU_OPTION_vcpus,
-@@ -4964,6 +4967,7 @@
+@@ -4965,6 +4968,7 @@
      { "usbdevice", HAS_ARG, QEMU_OPTION_usbdevice },
      { "smp", HAS_ARG, QEMU_OPTION_smp },
      { "vnc", HAS_ARG, QEMU_OPTION_vnc },
@@ -64,7 +64,7 @@ Index: ioemu/vl.c
      
      /* temporary options */
      { "usb", 0, QEMU_OPTION_usb },
-@@ -5294,6 +5298,7 @@
+@@ -5295,6 +5299,7 @@
  #endif
      snapshot = 0;
      nographic = 0;
@@ -72,7 +72,7 @@ Index: ioemu/vl.c
      kernel_filename = NULL;
      kernel_cmdline = "";
  #ifdef TARGET_PPC
-@@ -5663,6 +5668,9 @@
+@@ -5664,6 +5669,9 @@
                    exit(1);
                }
                break;
@@ -82,7 +82,7 @@ Index: ioemu/vl.c
              case QEMU_OPTION_domainname:
                  strncat(domain_name, optarg, sizeof(domain_name) - 20);
                  break;
-@@ -5910,6 +5918,8 @@
+@@ -5911,6 +5919,8 @@
          dumb_display_init(ds);
      } else if (vnc_display != -1) {
        vnc_display_init(ds, vnc_display);
@@ -93,8 +93,8 @@ Index: ioemu/vl.c
          sdl_display_init(ds, full_screen);
 Index: ioemu/vl.h
 ===================================================================
---- ioemu.orig/vl.h    2006-07-26 14:33:08.167663873 +0100
-+++ ioemu/vl.h 2006-07-26 14:33:08.228657130 +0100
+--- ioemu.orig/vl.h    2006-07-27 11:17:00.875121644 +0100
++++ ioemu/vl.h 2006-07-27 11:17:01.036103823 +0100
 @@ -733,6 +733,7 @@
  
  /* vnc.c */
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/xen-domain-name
--- a/tools/ioemu/patches/xen-domain-name       Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/xen-domain-name       Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/sdl.c
 Index: ioemu/sdl.c
 ===================================================================
---- ioemu.orig/sdl.c   2006-07-12 11:33:54.665109493 +0100
-+++ ioemu/sdl.c        2006-07-12 11:35:01.450735012 +0100
+--- ioemu.orig/sdl.c   2006-07-27 11:16:53.590928008 +0100
++++ ioemu/sdl.c        2006-07-27 11:16:58.124426148 +0100
 @@ -268,14 +268,14 @@
  static void sdl_update_caption(void)
  {
@@ -21,8 +21,8 @@ Index: ioemu/sdl.c
  static void sdl_hide_cursor(void)
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-12 11:35:01.094779608 +0100
-+++ ioemu/vl.c 2006-07-12 11:35:01.453734636 +0100
+--- ioemu.orig/vl.c    2006-07-27 11:16:57.828458912 +0100
++++ ioemu/vl.c 2006-07-27 11:16:58.126425927 +0100
 @@ -159,6 +159,8 @@
  #define MAX_CPUS 1
  #endif
@@ -32,7 +32,7 @@ Index: ioemu/vl.c
  /***********************************************************/
  /* x86 ISA bus support */
  
-@@ -4698,6 +4700,7 @@
+@@ -4699,6 +4701,7 @@
             "-s              wait gdb connection to port %d\n"
             "-p port         change gdb connection port\n"
             "-l item1,...    output log to %s (use -d ? for a list of log 
items)\n"
@@ -40,7 +40,7 @@ Index: ioemu/vl.c
             "-hdachs c,h,s[,t]  force hard disk 0 physical geometry and the 
optional BIOS\n"
             "                translation (t=none or lba) (usually qemu can 
guess them)\n"
             "-L path         set the directory for the BIOS and VGA BIOS\n"
-@@ -4787,6 +4790,7 @@
+@@ -4788,6 +4791,7 @@
      QEMU_OPTION_g,
      QEMU_OPTION_std_vga,
      QEMU_OPTION_monitor,
@@ -48,7 +48,7 @@ Index: ioemu/vl.c
      QEMU_OPTION_serial,
      QEMU_OPTION_parallel,
      QEMU_OPTION_loadvm,
-@@ -4860,6 +4864,7 @@
+@@ -4861,6 +4865,7 @@
      { "localtime", 0, QEMU_OPTION_localtime },
      { "std-vga", 0, QEMU_OPTION_std_vga },
      { "monitor", 1, QEMU_OPTION_monitor },
@@ -56,7 +56,7 @@ Index: ioemu/vl.c
      { "serial", 1, QEMU_OPTION_serial },
      { "parallel", 1, QEMU_OPTION_parallel },
      { "loadvm", HAS_ARG, QEMU_OPTION_loadvm },
-@@ -5483,6 +5488,9 @@
+@@ -5484,6 +5489,9 @@
                    exit(1);
                }
                break;
@@ -68,8 +68,8 @@ Index: ioemu/vl.c
      }
 Index: ioemu/vl.h
 ===================================================================
---- ioemu.orig/vl.h    2006-07-12 11:35:00.955797021 +0100
-+++ ioemu/vl.h 2006-07-12 11:35:01.454734511 +0100
+--- ioemu.orig/vl.h    2006-07-27 11:16:57.682475072 +0100
++++ ioemu/vl.h 2006-07-27 11:16:58.127425816 +0100
 @@ -1094,4 +1094,5 @@
  
  void kqemu_record_dump(void);
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/xen-domid
--- a/tools/ioemu/patches/xen-domid     Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/xen-domid     Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,8 @@ diff -r 03705e837ce8 vl.c
-diff -r 03705e837ce8 vl.c
---- a/vl.c     Tue May 30 14:10:44 2006 +0100
-+++ b/vl.c     Tue May 30 14:11:16 2006 +0100
-@@ -160,6 +160,7 @@ int vnc_display = -1;
+Index: ioemu/vl.c
+===================================================================
+--- ioemu.orig/vl.c    2006-07-27 11:16:58.126425927 +0100
++++ ioemu/vl.c 2006-07-27 11:16:58.296407110 +0100
+@@ -160,6 +160,7 @@
  #endif
  
  char domain_name[1024] = { 'H','V', 'M', 'X', 'E', 'N', '-'};
@@ -9,7 +10,7 @@ diff -r 03705e837ce8 vl.c
  
  /***********************************************************/
  /* x86 ISA bus support */
-@@ -4700,6 +4701,7 @@ void help(void)
+@@ -4701,6 +4702,7 @@
             "-s              wait gdb connection to port %d\n"
             "-p port         change gdb connection port\n"
             "-l item1,...    output log to %s (use -d ? for a list of log 
items)\n"
@@ -17,7 +18,7 @@ diff -r 03705e837ce8 vl.c
             "-domain-name    domain name that we're serving\n"
             "-hdachs c,h,s[,t]  force hard disk 0 physical geometry and the 
optional BIOS\n"
             "                translation (t=none or lba) (usually qemu can 
guess them)\n"
-@@ -4803,6 +4805,8 @@ enum {
+@@ -4804,6 +4806,8 @@
      QEMU_OPTION_usbdevice,
      QEMU_OPTION_smp,
      QEMU_OPTION_vnc,
@@ -26,7 +27,7 @@ diff -r 03705e837ce8 vl.c
  };
  
  typedef struct QEMUOption {
-@@ -4878,6 +4882,8 @@ const QEMUOption qemu_options[] = {
+@@ -4879,6 +4883,8 @@
      /* temporary options */
      { "usb", 0, QEMU_OPTION_usb },
      { "cirrusvga", 0, QEMU_OPTION_cirrusvga },
@@ -35,7 +36,7 @@ diff -r 03705e837ce8 vl.c
      { NULL },
  };
  
-@@ -5491,6 +5497,10 @@ int main(int argc, char **argv)
+@@ -5492,6 +5498,10 @@
              case QEMU_OPTION_domainname:
                  strncat(domain_name, optarg, sizeof(domain_name) - 20);
                  break;
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/xen-mm
--- a/tools/ioemu/patches/xen-mm        Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/xen-mm        Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/hw/pc.c
 Index: ioemu/hw/pc.c
 ===================================================================
---- ioemu.orig/hw/pc.c 2006-07-14 15:55:59.489503600 +0100
-+++ ioemu/hw/pc.c      2006-07-14 15:56:00.354405169 +0100
+--- ioemu.orig/hw/pc.c 2006-07-27 11:16:57.678475515 +0100
++++ ioemu/hw/pc.c      2006-07-27 11:16:58.447390396 +0100
 @@ -639,7 +639,9 @@
      }
  
@@ -25,8 +25,8 @@ Index: ioemu/hw/pc.c
      isa_bios_size = bios_size;
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-14 15:56:00.271414614 +0100
-+++ ioemu/vl.c 2006-07-14 15:56:00.358404714 +0100
+--- ioemu.orig/vl.c    2006-07-27 11:16:58.296407110 +0100
++++ ioemu/vl.c 2006-07-27 11:16:58.450390064 +0100
 @@ -159,6 +159,8 @@
  #define MAX_CPUS 1
  #endif
@@ -36,7 +36,7 @@ Index: ioemu/vl.c
  char domain_name[1024] = { 'H','V', 'M', 'X', 'E', 'N', '-'};
  extern int domid;
  
-@@ -5105,6 +5107,9 @@
+@@ -5106,6 +5108,9 @@
      QEMUMachine *machine;
      char usb_devices[MAX_VM_USB_PORTS][128];
      int usb_devices_index;
@@ -46,7 +46,7 @@ Index: ioemu/vl.c
  
      char qemu_dm_logfilename[64];
  
-@@ -5341,11 +5346,13 @@
+@@ -5342,11 +5347,13 @@
                  ram_size = atol(optarg) * 1024 * 1024;
                  if (ram_size <= 0)
                      help();
@@ -60,7 +60,7 @@ Index: ioemu/vl.c
                  break;
              case QEMU_OPTION_l:
                  {
-@@ -5559,6 +5566,39 @@
+@@ -5560,6 +5567,39 @@
      /* init the memory */
      phys_ram_size = ram_size + vga_ram_size + bios_size;
  
@@ -100,7 +100,7 @@ Index: ioemu/vl.c
  #ifdef CONFIG_SOFTMMU
      phys_ram_base = qemu_vmalloc(phys_ram_size);
      if (!phys_ram_base) {
-@@ -5599,6 +5639,8 @@
+@@ -5600,6 +5640,8 @@
      }
  #endif
  
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/patches/xen-network
--- a/tools/ioemu/patches/xen-network   Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/patches/xen-network   Fri Jul 28 10:51:38 2006 +0100
@@ -1,7 +1,7 @@ Index: ioemu/vl.c
 Index: ioemu/vl.c
 ===================================================================
---- ioemu.orig/vl.c    2006-07-12 11:35:01.753697055 +0100
-+++ ioemu/vl.c 2006-07-12 11:35:02.126650330 +0100
+--- ioemu.orig/vl.c    2006-07-27 11:16:58.823348777 +0100
++++ ioemu/vl.c 2006-07-27 11:16:59.169310479 +0100
 @@ -89,6 +89,7 @@
  #include "exec-all.h"
  
@@ -40,7 +40,7 @@ Index: ioemu/vl.c
          int fd;
          if (get_param_value(buf, sizeof(buf), "fd", p) > 0) {
              fd = strtol(buf, NULL, 0);
-@@ -3212,7 +3215,10 @@
+@@ -3213,7 +3216,10 @@
              if (get_param_value(setup_script, sizeof(setup_script), "script", 
p) == 0) {
                  pstrcpy(setup_script, sizeof(setup_script), 
DEFAULT_NETWORK_SCRIPT);
              }
@@ -52,7 +52,7 @@ Index: ioemu/vl.c
          }
      } else
  #endif
-@@ -4671,7 +4677,7 @@
+@@ -4672,7 +4678,7 @@
             "-net tap[,vlan=n],ifname=name\n"
             "                connect the host TAP network interface to VLAN 
'n'\n"
  #else
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/target-i386-dm/exec-dm.c
--- a/tools/ioemu/target-i386-dm/exec-dm.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/target-i386-dm/exec-dm.c      Fri Jul 28 10:51:38 2006 +0100
@@ -64,6 +64,7 @@ uint8_t *code_gen_ptr;
 #endif /* !CONFIG_DM */
 
 uint64_t phys_ram_size;
+extern uint64_t ram_size;
 int phys_ram_fd;
 uint8_t *phys_ram_base;
 uint8_t *phys_ram_dirty;
@@ -422,7 +423,7 @@ void cpu_physical_memory_rw(target_phys_
             l = len;
        
         pd = page;
-        io_index = iomem_index(page);
+        io_index = iomem_index(addr);
         if (is_write) {
             if (io_index) {
                 if (l >= 4 && ((addr & 3) == 0)) {
@@ -467,7 +468,7 @@ void cpu_physical_memory_rw(target_phys_
                     stb_raw(buf, val);
                     l = 1;
                 }
-            } else {
+            } else if (addr < ram_size) {
                 /* RAM case */
                 ptr = phys_ram_base + (pd & TARGET_PAGE_MASK) + 
                     (addr & ~TARGET_PAGE_MASK);
@@ -475,6 +476,9 @@ void cpu_physical_memory_rw(target_phys_
 #ifdef __ia64__
                 sync_icache((unsigned long)ptr, l);
 #endif 
+            } else {
+                /* unreported MMIO space */
+                memset(buf, 0xff, len);
             }
         }
         len -= l;
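
The exec-dm.c hunks above change cpu_physical_memory_rw() in three ways: the I/O handler
lookup is keyed on the guest-physical address rather than on the page descriptor, RAM
accesses are bounded by ram_size, and reads that hit neither a registered handler nor RAM
come back as all-ones. A minimal, self-contained sketch of that read-side dispatch order
(the helper names and the 0xab "device" value are illustrative stand-ins, not the real
qemu-dm symbols):

/* Dispatch order after this change: MMIO handler first, then RAM bounded
 * by ram_size, otherwise the read returns 0xff bytes. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint8_t  guest_ram[4096];              /* stand-in for phys_ram_base */
static uint64_t ram_size = sizeof(guest_ram);

/* Pretend one device claims the page at 0x2000 (illustrative). */
static int iomem_index(uint64_t addr) { return (addr >> 12) == 2; }

static void phys_read(uint64_t addr, uint8_t *buf, size_t len)
{
    if (iomem_index(addr))                    /* keyed on the address itself */
        memset(buf, 0xab, len);               /* a device model would answer here */
    else if (addr < ram_size)
        memcpy(buf, guest_ram + addr, len);   /* ordinary RAM */
    else
        memset(buf, 0xff, len);               /* unreported MMIO space reads as all-ones */
}

int main(void)
{
    uint8_t b;
    phys_read(0x10, &b, 1);   printf("RAM    -> %02x\n", b);
    phys_read(0x2000, &b, 1); printf("device -> %02x\n", b);
    phys_read(0x9000, &b, 1); printf("hole   -> %02x\n", b);
    return 0;
}
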
diff -r 1eb42266de1b -r e5c84586c333 tools/ioemu/vl.c
--- a/tools/ioemu/vl.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/ioemu/vl.c  Fri Jul 28 10:51:38 2006 +0100
@@ -3284,6 +3284,7 @@ int net_client_init(const char *str)
             if (net_tap_fd_init(vlan, fd))
                 ret = 0;
         } else {
+            ifname[0] = '\0';
             get_param_value(ifname, sizeof(ifname), "ifname", p);
             if (get_param_value(setup_script, sizeof(setup_script), "script", 
p) == 0) {
                 pstrcpy(setup_script, sizeof(setup_script), 
DEFAULT_NETWORK_SCRIPT);
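
The one-line vl.c change just above guards against an uninitialized ifname: get_param_value()
appears to leave its output buffer untouched and return 0 when the key is absent, so without
the explicit terminator the tap setup script could be handed stack garbage as an interface
name. A small sketch of the pattern, with a toy parser standing in for get_param_value():

#include <stdio.h>
#include <string.h>

/* Toy stand-in: returns 0 and does NOT write buf when the key is missing. */
static int get_param(char *buf, size_t size, const char *tag, const char *str)
{
    const char *p = strstr(str, tag);
    if (!p)
        return 0;
    snprintf(buf, size, "%s", p + strlen(tag) + 1);
    return 1;
}

int main(void)
{
    char ifname[16];

    ifname[0] = '\0';    /* without this, ifname would hold stack garbage */
    get_param(ifname, sizeof(ifname), "ifname", "script=/etc/qemu-ifup");
    printf("ifname='%s'\n", ifname);   /* safely prints an empty string */
    return 0;
}
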
diff -r 1eb42266de1b -r e5c84586c333 tools/libxc/Makefile
--- a/tools/libxc/Makefile      Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/libxc/Makefile      Fri Jul 28 10:51:38 2006 +0100
@@ -31,9 +31,12 @@ GUEST_SRCS-y += xg_private.c
 GUEST_SRCS-y += xg_private.c
 GUEST_SRCS-$(CONFIG_POWERPC) += xc_ppc_linux_build.c
 GUEST_SRCS-$(CONFIG_X86) += xc_linux_build.c
-GUEST_SRCS-$(CONFIG_IA64) += xc_ia64_stubs.c xc_linux_build.c
+GUEST_SRCS-$(CONFIG_IA64) += xc_linux_build.c
 GUEST_SRCS-$(CONFIG_MIGRATE) += xc_linux_restore.c xc_linux_save.c
 GUEST_SRCS-$(CONFIG_HVM) += xc_hvm_build.c
+
+# This Makefile only adds files if CONFIG_IA64 is y.
+include ia64/Makefile
 
 CFLAGS   += -Werror
 CFLAGS   += -fno-strict-aliasing
@@ -99,6 +102,7 @@ TAGS:
 .PHONY: clean
 clean:
        rm -rf *.a *.so* *.o *.opic *.rpm $(LIB) *~ $(DEPS) xen
+       rm -rf ia64/*.o ia64/*.opic
 
 .PHONY: rpm
 rpm: build
diff -r 1eb42266de1b -r e5c84586c333 tools/libxc/xc_hvm_build.c
--- a/tools/libxc/xc_hvm_build.c        Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/libxc/xc_hvm_build.c        Fri Jul 28 10:51:38 2006 +0100
@@ -15,12 +15,6 @@
 
 #define HVM_LOADER_ENTR_ADDR  0x00100000
 
-#define L1_PROT (_PAGE_PRESENT|_PAGE_RW|_PAGE_ACCESSED|_PAGE_USER)
-#define L2_PROT (_PAGE_PRESENT|_PAGE_RW|_PAGE_ACCESSED|_PAGE_DIRTY|_PAGE_USER)
-#ifdef __x86_64__
-#define L3_PROT (_PAGE_PRESENT)
-#endif
-
 #define E820MAX     128
 
 #define E820_RAM          1
@@ -41,9 +35,6 @@ struct e820entry {
     uint32_t type;
 } __attribute__((packed));
 
-#define round_pgup(_p)    (((_p)+(PAGE_SIZE-1))&PAGE_MASK)
-#define round_pgdown(_p)  ((_p)&PAGE_MASK)
-
 static int
 parseelfimage(
     char *elfbase, unsigned long elfsize, struct domain_setup_info *dsi);
@@ -52,7 +43,7 @@ loadelfimage(
     char *elfbase, int xch, uint32_t dom, unsigned long *parray,
     struct domain_setup_info *dsi);
 
-static unsigned char build_e820map(void *e820_page, unsigned long long 
mem_size)
+static void build_e820map(void *e820_page, unsigned long long mem_size)
 {
     struct e820entry *e820entry =
         (struct e820entry *)(((unsigned char *)e820_page) + E820_MAP_OFFSET);
@@ -115,7 +106,7 @@ static unsigned char build_e820map(void 
     e820entry[nr_map].type = E820_IO;
     nr_map++;
 
-    return (*(((unsigned char *)e820_page) + E820_MAP_NR_OFFSET) = nr_map);
+    *(((unsigned char *)e820_page) + E820_MAP_NR_OFFSET) = nr_map;
 }
 
 static void set_hvm_info_checksum(struct hvm_info_table *t)
@@ -186,7 +177,6 @@ static int setup_guest(int xc_handle,
 
     shared_info_t *shared_info;
     void *e820_page;
-    unsigned char e820_map_nr;
 
     struct domain_setup_info dsi;
     uint64_t v_end;
@@ -261,7 +251,7 @@ static int setup_guest(int xc_handle,
               page_array[E820_MAP_PAGE >> PAGE_SHIFT])) == 0 )
         goto error_out;
     memset(e820_page, 0, PAGE_SIZE);
-    e820_map_nr = build_e820map(e820_page, v_end);
+    build_e820map(e820_page, v_end);
     munmap(e820_page, PAGE_SIZE);
 
     /* shared_info page starts its life empty. */
@@ -311,23 +301,7 @@ static int setup_guest(int xc_handle,
     /*
      * Initial register values:
      */
-    ctxt->user_regs.ds = 0;
-    ctxt->user_regs.es = 0;
-    ctxt->user_regs.fs = 0;
-    ctxt->user_regs.gs = 0;
-    ctxt->user_regs.ss = 0;
-    ctxt->user_regs.cs = 0;
     ctxt->user_regs.eip = dsi.v_kernentry;
-    ctxt->user_regs.edx = 0;
-    ctxt->user_regs.eax = 0;
-    ctxt->user_regs.esp = 0;
-    ctxt->user_regs.ebx = 0; /* startup_32 expects this to be 0 to signal boot 
cpu */
-    ctxt->user_regs.ecx = 0;
-    ctxt->user_regs.esi = 0;
-    ctxt->user_regs.edi = 0;
-    ctxt->user_regs.ebp = 0;
-
-    ctxt->user_regs.eflags = 0;
 
     return 0;
 
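
In the xc_hvm_build.c hunks above, build_e820map() stops returning the entry count: the count
is already written into the E820 page at E820_MAP_NR_OFFSET, so the separate e820_map_nr local
goes away, and the explicit zeroing of user_regs is dropped (presumably because the context is
already zero-initialized by its caller). A rough, self-contained sketch of the layout the
function writes, using the packed entry format from the file but illustrative offsets:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define E820_RAM       1
#define MAP_NR_OFFSET  0x1e8   /* illustrative, not the real E820_MAP_NR_OFFSET */
#define MAP_OFFSET     0x2d0   /* illustrative, not the real E820_MAP_OFFSET */

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
} __attribute__((packed));

static void build_map(void *page, uint64_t mem_size)
{
    struct e820entry *e = (struct e820entry *)((uint8_t *)page + MAP_OFFSET);
    unsigned char nr = 0;

    e[nr].addr = 0;                 /* low memory reported as ordinary RAM */
    e[nr].size = mem_size;
    e[nr].type = E820_RAM;
    nr++;

    /* The count lives in the page itself; callers no longer need a return value. */
    *((uint8_t *)page + MAP_NR_OFFSET) = nr;
}

int main(void)
{
    uint8_t page[4096];

    memset(page, 0, sizeof(page));
    build_map(page, 128ULL << 20);
    printf("entries = %u\n", page[MAP_NR_OFFSET]);
    return 0;
}
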
diff -r 1eb42266de1b -r e5c84586c333 tools/libxc/xc_linux_build.c
--- a/tools/libxc/xc_linux_build.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/libxc/xc_linux_build.c      Fri Jul 28 10:51:38 2006 +0100
@@ -2,6 +2,7 @@
  * xc_linux_build.c
  */
 
+#include <stddef.h>
 #include "xg_private.h"
 #include "xc_private.h"
 #include <xenctrl.h>
@@ -473,6 +474,11 @@ static int setup_guest(int xc_handle,
     unsigned long v_end;
     unsigned long start_page, pgnr;
     start_info_t *start_info;
+    unsigned long start_info_mpa;
+    struct xen_ia64_boot_param *bp;
+    shared_info_t *shared_info;
+    int i;
+    DECLARE_DOM0_OP;
     int rc;
 
     rc = probeimageformat(image, image_size, &load_funcs);
@@ -489,6 +495,18 @@ static int setup_guest(int xc_handle,
     vinitrd_start    = round_pgup(dsi.v_end);
     vinitrd_end      = vinitrd_start + initrd->len;
     v_end            = round_pgup(vinitrd_end);
+    start_info_mpa = (nr_pages - 3) << PAGE_SHIFT;
+
+    /* Build firmware.  */
+    memset(&op.u.domain_setup, 0, sizeof(op.u.domain_setup));
+    op.u.domain_setup.flags = 0;
+    op.u.domain_setup.domain = (domid_t)dom;
+    op.u.domain_setup.bp = start_info_mpa + sizeof (start_info_t);
+    op.u.domain_setup.maxmem = (nr_pages - 3) << PAGE_SHIFT;
+    
+    op.cmd = DOM0_DOMAIN_SETUP;
+    if ( xc_dom0_op(xc_handle, &op) )
+        goto error_out;
 
     start_page = dsi.v_start >> PAGE_SHIFT;
     pgnr = (v_end - dsi.v_start) >> PAGE_SHIFT;
@@ -539,7 +557,7 @@ static int setup_guest(int xc_handle,
     IPRINTF("start_info: 0x%lx at 0x%lx, "
            "store_mfn: 0x%lx at 0x%lx, "
            "console_mfn: 0x%lx at 0x%lx\n",
-           page_array[0], nr_pages,
+           page_array[0], nr_pages - 3,
            *store_mfn,    nr_pages - 2,
            *console_mfn,  nr_pages - 1);
 
@@ -554,22 +572,34 @@ static int setup_guest(int xc_handle,
     start_info->console_mfn   = nr_pages - 1;
     start_info->console_evtchn = console_evtchn;
     start_info->nr_pages       = nr_pages; // FIXME?: nr_pages - 2 ????
+
+    bp = (struct xen_ia64_boot_param *)(start_info + 1);
+    bp->command_line = start_info_mpa + offsetof(start_info_t, cmd_line);
+    if ( cmdline != NULL )
+    {
+        strncpy((char *)start_info->cmd_line, cmdline, MAX_GUEST_CMDLINE);
+        start_info->cmd_line[MAX_GUEST_CMDLINE - 1] = 0;
+    }
     if ( initrd->len != 0 )
     {
-        ctxt->initrd.start    = vinitrd_start;
-        ctxt->initrd.size     = initrd->len;
-    }
-    else
-    {
-        ctxt->initrd.start    = 0;
-        ctxt->initrd.size     = 0;
-    }
-    if ( cmdline != NULL )
-    {
-        strncpy((char *)ctxt->cmdline, cmdline, IA64_COMMAND_LINE_SIZE);
-        ctxt->cmdline[IA64_COMMAND_LINE_SIZE-1] = '\0';
-    }
+        bp->initrd_start    = vinitrd_start;
+        bp->initrd_size     = initrd->len;
+    }
+    ctxt->user_regs.r28 = start_info_mpa + sizeof (start_info_t);
     munmap(start_info, PAGE_SIZE);
+
+    /* shared_info page starts its life empty. */
+    shared_info = xc_map_foreign_range(
+        xc_handle, dom, PAGE_SIZE, PROT_READ|PROT_WRITE, shared_info_frame);
+    printf("shared_info = %p, err=%s frame=%lx\n",
+           shared_info, strerror (errno), shared_info_frame);
+    //memset(shared_info, 0, sizeof(shared_info_t));
+    /* Mask all upcalls... */
+    for ( i = 0; i < MAX_VIRT_CPUS; i++ )
+        shared_info->vcpu_info[i].evtchn_upcall_mask = 1;
+    shared_info->arch.start_info_pfn = nr_pages - 3;
+
+    munmap(shared_info, PAGE_SIZE);
 
     free(page_array);
     return 0;
@@ -1150,16 +1180,10 @@ static int xc_linux_build_internal(int x
 #ifdef __ia64__
     /* based on new_thread in xen/arch/ia64/domain.c */
     ctxt->flags = 0;
-    ctxt->shared.flags = flags;
-    ctxt->shared.start_info_pfn = nr_pages - 3; /* metaphysical */
     ctxt->user_regs.cr_ipsr = 0; /* all necessary bits filled by hypervisor */
     ctxt->user_regs.cr_iip = vkern_entry;
     ctxt->user_regs.cr_ifs = 1UL << 63;
     ctxt->user_regs.ar_fpsr = xc_ia64_fpsr_default();
-    /* currently done by hypervisor, should move here */
-    /* ctxt->regs.r28 = dom_fw_setup(); */
-    ctxt->privregs = 0;
-    ctxt->sys_pgnr = 3;
     i = 0; /* silence unused variable warning */
 #else /* x86 */
     /*
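
The ia64 setup_guest() changes above move the boot parameters out of the vcpu context: the
start_info page sits three pages from the end of the domain's memory, the xen_ia64_boot_param
block is placed immediately after start_info_t in that same page, r28 is pointed at it, and
shared_info.arch.start_info_pfn records where it lives. A toy sketch of that address
arithmetic, with trimmed stand-in structs and an assumed 16KB ia64 page size:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 14                   /* assumption: 16KB ia64 pages */

struct start_info_mini {                /* trimmed stand-in for start_info_t */
    uint64_t nr_pages;
    char     cmd_line[1024];
};

struct boot_param_mini {                /* trimmed stand-in for xen_ia64_boot_param */
    uint64_t command_line;              /* guest-physical address of the command line */
    uint64_t initrd_start;
    uint64_t initrd_size;
};

int main(void)
{
    uint64_t nr_pages = 4096;
    /* third page from the end holds start_info, as in the patch */
    uint64_t start_info_mpa = (nr_pages - 3) << PAGE_SHIFT;
    uint64_t bp_mpa  = start_info_mpa + sizeof(struct start_info_mini);
    uint64_t cmdline = start_info_mpa + offsetof(struct start_info_mini, cmd_line);

    printf("start_info %#llx, boot params %#llx, cmd_line %#llx\n",
           (unsigned long long)start_info_mpa,
           (unsigned long long)bp_mpa,
           (unsigned long long)cmdline);
    return 0;
}
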
diff -r 1eb42266de1b -r e5c84586c333 tools/libxc/xc_private.c
--- a/tools/libxc/xc_private.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/libxc/xc_private.c  Fri Jul 28 10:51:38 2006 +0100
@@ -262,6 +262,7 @@ long long xc_domain_get_cpu_usage( int x
 }
 
 
+#ifndef __ia64__
 int xc_get_pfn_list(int xc_handle,
                     uint32_t domid,
                     xen_pfn_t *pfn_buf,
@@ -305,6 +306,7 @@ int xc_get_pfn_list(int xc_handle,
 
     return (ret < 0) ? -1 : op.u.getmemlist.num_pfns;
 }
+#endif
 
 long xc_get_tot_pages(int xc_handle, uint32_t domid)
 {
diff -r 1eb42266de1b -r e5c84586c333 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h     Thu Jul 27 17:44:14 2006 -0500
+++ b/tools/libxc/xenctrl.h     Fri Jul 28 10:51:38 2006 +0100
@@ -524,9 +524,6 @@ int xc_clear_domain_page(int xc_handle, 
 int xc_clear_domain_page(int xc_handle, uint32_t domid,
                          unsigned long dst_pfn);
 
-int xc_ia64_copy_to_domain_pages(int xc_handle, uint32_t domid,
-        void* src_page, unsigned long dst_pfn, int nr_pages);
-
 long xc_get_max_pages(int xc_handle, uint32_t domid);
 
 int xc_mmuext_op(int xc_handle, struct mmuext_op *op, unsigned int nr_ops,
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/Makefile
--- a/xen/arch/ia64/Makefile    Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/Makefile    Fri Jul 28 10:51:38 2006 +0100
@@ -50,22 +50,22 @@ asm-xsi-offsets.s: asm-xsi-offsets.c $(H
 $(BASEDIR)/include/asm-ia64/.offsets.h.stamp:
 # Need such symbol link to make linux headers available
        [ -e $(BASEDIR)/include/linux ] \
-        || ln -s $(BASEDIR)/include/xen $(BASEDIR)/include/linux
+        || ln -sf $(BASEDIR)/include/xen $(BASEDIR)/include/linux
        [ -e $(BASEDIR)/include/asm-ia64/xen ] \
-        || ln -s $(BASEDIR)/include/asm-ia64/linux 
$(BASEDIR)/include/asm-ia64/xen
+        || ln -sf $(BASEDIR)/include/asm-ia64/linux 
$(BASEDIR)/include/asm-ia64/xen
 # Link to HVM files in Xen for ia64/vti
        [ -e $(BASEDIR)/include/asm-ia64/hvm ] \
         || mkdir $(BASEDIR)/include/asm-ia64/hvm
        [ -e $(BASEDIR)/include/asm-ia64/hvm/support.h ] \
-        || ln -s ../../../include/asm-x86/hvm/support.h 
$(BASEDIR)/include/asm-ia64/hvm/support.h
+        || ln -sf ../../../include/asm-x86/hvm/support.h 
$(BASEDIR)/include/asm-ia64/hvm/support.h
        [ -e $(BASEDIR)/include/asm-ia64/hvm/io.h ] \
-        || ln -s ../../../include/asm-x86/hvm/io.h 
$(BASEDIR)/include/asm-ia64/hvm/io.h
+        || ln -sf ../../../include/asm-x86/hvm/io.h 
$(BASEDIR)/include/asm-ia64/hvm/io.h
        [ -e $(BASEDIR)/include/asm-ia64/hvm/vpic.h ] \
-        || ln -s ../../../include/asm-x86/hvm/vpic.h 
$(BASEDIR)/include/asm-ia64/hvm/vpic.h
+        || ln -sf ../../../include/asm-x86/hvm/vpic.h 
$(BASEDIR)/include/asm-ia64/hvm/vpic.h
        [ -e $(BASEDIR)/include/asm-ia64/hvm/vioapic.h ] \
-        || ln -s ../../../include/asm-x86/hvm/vioapic.h 
$(BASEDIR)/include/asm-ia64/hvm/vioapic.h
+        || ln -sf ../../../include/asm-x86/hvm/vioapic.h 
$(BASEDIR)/include/asm-ia64/hvm/vioapic.h
        [ -e $(BASEDIR)/arch/ia64/vmx/hvm_vioapic.c ] \
-        || ln -s ../../../arch/x86/hvm/vioapic.c 
$(BASEDIR)/arch/ia64/vmx/hvm_vioapic.c
+        || ln -sf ../../../arch/x86/hvm/vioapic.c 
$(BASEDIR)/arch/ia64/vmx/hvm_vioapic.c
 
 # I'm sure a Makefile wizard would know a better way to do this
 xen.lds.s: xen/xen.lds.S
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/asm-offsets.c
--- a/xen/arch/ia64/asm-offsets.c       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/asm-offsets.c       Fri Jul 28 10:51:38 2006 +0100
@@ -8,6 +8,7 @@
 #include <xen/sched.h>
 #include <asm/processor.h>
 #include <asm/ptrace.h>
+#include <asm/mca.h>
 #include <public/xen.h>
 #include <asm/tlb.h>
 #include <asm/regs.h>
@@ -31,6 +32,9 @@ void foo(void)
        DEFINE(IA64_CPU_SIZE, sizeof (struct cpuinfo_ia64));
        DEFINE(UNW_FRAME_INFO_SIZE, sizeof (struct unw_frame_info));
        DEFINE(SHARED_INFO_SIZE, sizeof (struct shared_info));
+
+       BLANK();
+       DEFINE(IA64_MCA_CPU_INIT_STACK_OFFSET, offsetof (struct ia64_mca_cpu, 
init_stack));
 
        BLANK();
 #ifdef   VTI_DEBUG
@@ -61,6 +65,11 @@ void foo(void)
        DEFINE(IA64_VCPU_DTLB_OFFSET, offsetof (struct vcpu, arch.dtlb));
 
        BLANK();
+
+       DEFINE(IA64_DOMAIN_SHADOW_BITMAP_OFFSET, offsetof (struct domain, 
arch.shadow_bitmap));
+
+       BLANK();
+
        DEFINE(IA64_CPUINFO_ITM_NEXT_OFFSET, offsetof (struct cpuinfo_ia64, 
itm_next));
        DEFINE(IA64_CPUINFO_KSOFTIRQD_OFFSET, offsetof (struct cpuinfo_ia64, 
ksoftirqd));
 
@@ -123,7 +132,6 @@ void foo(void)
        DEFINE(IA64_PT_REGS_R6_OFFSET, offsetof (struct pt_regs, r6));
        DEFINE(IA64_PT_REGS_R7_OFFSET, offsetof (struct pt_regs, r7));
        DEFINE(IA64_PT_REGS_EML_UNAT_OFFSET, offsetof (struct pt_regs, 
eml_unat));
-       DEFINE(IA64_PT_REGS_RFI_PFS_OFFSET, offsetof (struct pt_regs, rfi_pfs));
        DEFINE(IA64_VCPU_IIPA_OFFSET, offsetof (struct vcpu, 
arch.arch_vmx.cr_iipa));
        DEFINE(IA64_VCPU_ISR_OFFSET, offsetof (struct vcpu, 
arch.arch_vmx.cr_isr));
        DEFINE(IA64_VCPU_CAUSE_OFFSET, offsetof (struct vcpu, 
arch.arch_vmx.cause));
@@ -180,6 +188,7 @@ void foo(void)
        BLANK();
 
        DEFINE(IA64_VPD_BASE_OFFSET, offsetof (struct vcpu, arch.privregs));
+       DEFINE(IA64_VPD_VIFS_OFFSET, offsetof (mapped_regs_t, ifs));
        DEFINE(IA64_VLSAPIC_INSVC_BASE_OFFSET, offsetof (struct vcpu, 
arch.insvc[0]));
        DEFINE(IA64_VPD_CR_VPTA_OFFSET, offsetof (cr_t, pta));
        DEFINE(XXX_THASH_SIZE, sizeof (thash_data_t));
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/linux-xen/Makefile
--- a/xen/arch/ia64/linux-xen/Makefile  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/linux-xen/Makefile  Fri Jul 28 10:51:38 2006 +0100
@@ -1,6 +1,8 @@ obj-y += efi.o
 obj-y += efi.o
 obj-y += entry.o
 obj-y += irq_ia64.o
+obj-y += mca.o
+obj-y += mca_asm.o
 obj-y += mm_contig.o
 obj-y += pal.o
 obj-y += process-linux-xen.o
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/linux-xen/README.origin
--- a/xen/arch/ia64/linux-xen/README.origin     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/linux-xen/README.origin     Fri Jul 28 10:51:38 2006 +0100
@@ -11,6 +11,8 @@ head.S                        -> linux/arch/ia64/kernel/head.
 head.S                 -> linux/arch/ia64/kernel/head.S
 hpsim_ssc.h            -> linux/arch/ia64/hp/sim/hpsim_ssc.h
 irq_ia64.c             -> linux/arch/ia64/kernel/irq_ia64.c
+mca.c                  -> linux/arch/ia64/kernel/mca.c
+mca_asm.S              -> linux/arch/ia64/kernel/mca_asm.S
 minstate.h             -> linux/arch/ia64/kernel/minstate.h
 mm_contig.c            -> linux/arch/ia64/mm/contig.c
 pal.S                  -> linux/arch/ia64/kernel/pal.S
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/linux-xen/entry.S
--- a/xen/arch/ia64/linux-xen/entry.S   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/linux-xen/entry.S   Fri Jul 28 10:51:38 2006 +0100
@@ -652,17 +652,8 @@ GLOBAL_ENTRY(ia64_ret_from_clone)
     ld8 r16 = [r16]
     ;;
     cmp.ne p6,p7 = r16, r0
- (p6) br.cond.spnt ia64_leave_hypervisor
- (p7) br.cond.spnt ia64_leave_kernel
-    ;;
-//    adds r16 = IA64_VCPU_FLAGS_OFFSET, r13
-//    ;;
-//    ld8 r16 = [r16]
-//    ;;
-//    cmp.ne p6,p7 = r16, r0
-//     (p6) br.cond.spnt ia64_leave_hypervisor
-//     (p7) br.cond.spnt ia64_leave_kernel
-//    ;;
+ (p6) br.cond.spnt ia64_leave_hypervisor       /* VTi */
+ (p7) br.cond.spnt ia64_leave_kernel           /* !VTi */
 #else
 .ret8:
        adds r2=TI_FLAGS+IA64_TASK_SIZE,r13
@@ -901,7 +892,7 @@ GLOBAL_ENTRY(ia64_leave_kernel)
 #ifdef XEN
        ;;
 (pUStk) ssm psr.i
-(pUStk)    br.call.sptk.many b0=process_soft_irq
+(pUStk)    br.call.sptk.many b0=do_softirq
 (pUStk) rsm psr.i
     ;;
        alloc loc0=ar.pfs,0,1,1,0
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/linux-xen/iosapic.c
--- a/xen/arch/ia64/linux-xen/iosapic.c Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/linux-xen/iosapic.c Fri Jul 28 10:51:38 2006 +0100
@@ -1155,7 +1155,7 @@ int iosapic_guest_read(unsigned long phy
 
 int iosapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
 {
-       unsigned int id, gsi, vec, dest, high32;
+       unsigned int id, gsi, vec, xen_vec, dest, high32;
        char rte_index;
        struct iosapic *ios;
        struct iosapic_intr_info *info;
@@ -1185,13 +1185,17 @@ int iosapic_guest_write(unsigned long ph
 
        /* Sanity check. Vector should be allocated before this update */
        if ((rte_index > ios->num_rte) ||
-           test_bit(vec, ia64_xen_vector) ||
            ((vec > IA64_FIRST_DEVICE_VECTOR) &&
             (vec < IA64_LAST_DEVICE_VECTOR) &&
             (!test_bit(vec - IA64_FIRST_DEVICE_VECTOR, ia64_vector_mask))))
            return -EINVAL;
 
        gsi = ios->gsi_base + rte_index;
+       xen_vec = gsi_to_vector(gsi);
+       if (xen_vec >= 0 && test_bit(xen_vec, ia64_xen_vector)) {
+               printk("WARN: GSI %d in use by Xen.\n", gsi);
+               return -EINVAL;
+       }
        info = &iosapic_intr_info[vec];
        spin_lock_irqsave(&irq_descp(vec)->lock, flags);
        spin_lock(&iosapic_lock);
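
The iosapic.c hunk above tightens iosapic_guest_write(): rather than rejecting any vector
that happens to be marked in ia64_xen_vector, it first translates the guest's GSI to the
vector Xen itself uses for that line and refuses the RTE write only when that specific
interrupt is already owned by Xen. A simplified sketch of the check (the bitmap helpers and
the GSI-to-vector mapping below are stand-ins, not the real Xen routines):

#include <stdio.h>

#define NR_VECTORS    256
#define BITS_PER_LONG (8 * sizeof(unsigned long))

static unsigned long xen_vectors[NR_VECTORS / BITS_PER_LONG];

static int test_vec(unsigned int vec)
{
    return (xen_vectors[vec / BITS_PER_LONG] >> (vec % BITS_PER_LONG)) & 1;
}

static void set_vec(unsigned int vec)
{
    xen_vectors[vec / BITS_PER_LONG] |= 1UL << (vec % BITS_PER_LONG);
}

static int gsi_to_vec(unsigned int gsi) { return gsi + 0x30; }  /* illustrative mapping */

static int guest_rte_write(unsigned int gsi)
{
    int vec = gsi_to_vec(gsi);

    if (vec >= 0 && test_vec(vec)) {
        printf("WARN: GSI %u in use by Xen.\n", gsi);
        return -1;          /* -EINVAL in the real code */
    }
    return 0;               /* safe to apply the guest's RTE update */
}

int main(void)
{
    set_vec(gsi_to_vec(4)); /* pretend Xen owns the interrupt behind GSI 4 */
    printf("gsi 4 -> %d\n", guest_rte_write(4));
    printf("gsi 9 -> %d\n", guest_rte_write(9));
    return 0;
}
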
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/linux-xen/minstate.h
--- a/xen/arch/ia64/linux-xen/minstate.h        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/linux-xen/minstate.h        Fri Jul 28 10:51:38 2006 +0100
@@ -36,7 +36,31 @@
  * For mca_asm.S we want to access the stack physically since the state is 
saved before we
  * go virtual and don't want to destroy the iip or ipsr.
  */
-#define MINSTATE_START_SAVE_MIN_PHYS                                           
                \
+#ifdef XEN
+# define MINSTATE_START_SAVE_MIN_PHYS                                          
                \
+(pKStk)        movl r3=THIS_CPU(ia64_mca_data);;                               
                        \
+(pKStk)        tpa r3 = r3;;                                                   
                        \
+(pKStk)        ld8 r3 = [r3];;                                                 
                        \
+(pKStk)        addl r3=IA64_MCA_CPU_INIT_STACK_OFFSET,r3;;                     
                        \
+(pKStk)        addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r3;                   
                        \
+(pUStk)        mov ar.rsc=0;           /* set enforced lazy mode, pl 0, 
little-endian, loadrs=0 */     \
+(pUStk)        addl r22=IA64_RBS_OFFSET,r1;            /* compute base of 
register backing store */    \
+       ;;                                                                      
                \
+(pUStk)        mov r24=ar.rnat;                                                
                        \
+(pUStk)        addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r1;   /* compute base 
of memory stack */      \
+(pUStk)        mov r23=ar.bspstore;                            /* save 
ar.bspstore */                  \
+(pUStk)        dep r22=-1,r22,60,4;                    /* compute Xen virtual 
addr of RBS */   \
+       ;;                                                                      
                \
+(pUStk)        mov ar.bspstore=r22;                    /* switch to Xen RBS */ 
                \
+       ;;                                                                      
                \
+(pUStk)        mov r18=ar.bsp;                                                 
                        \
+(pUStk)        mov ar.rsc=0x3;  /* set eager mode, pl 0, little-endian, 
loadrs=0 */                    \
+
+# define MINSTATE_END_SAVE_MIN_PHYS                                            
                \
+       dep r12=-1,r12,60,4;        /* make sp a Xen virtual address */         
        \
+       ;;
+#else
+# define MINSTATE_START_SAVE_MIN_PHYS                                          
                \
 (pKStk) mov r3=IA64_KR(PER_CPU_DATA);;                                         
                \
 (pKStk) addl r3=THIS_CPU(ia64_mca_data),r3;;                                   
                \
 (pKStk) ld8 r3 = [r3];;                                                        
                        \
@@ -55,15 +79,17 @@
 (pUStk)        mov r18=ar.bsp;                                                 
                        \
 (pUStk)        mov ar.rsc=0x3;         /* set eager mode, pl 0, little-endian, 
loadrs=0 */             \
 
-#define MINSTATE_END_SAVE_MIN_PHYS                                             
                \
+# define MINSTATE_END_SAVE_MIN_PHYS                                            
                \
        dep r12=-1,r12,61,3;            /* make sp a kernel virtual address */  
                \
        ;;
+#endif /* XEN */
 
 #ifdef MINSTATE_VIRT
 #ifdef XEN
 # define MINSTATE_GET_CURRENT(reg)                                     \
                movl reg=THIS_CPU(cpu_kr)+IA64_KR_CURRENT_OFFSET;;      \
                ld8 reg=[reg]
+# define MINSTATE_GET_CURRENT_VIRT(reg)        MINSTATE_GET_CURRENT(reg)
 #else
 # define MINSTATE_GET_CURRENT(reg)     mov reg=IA64_KR(CURRENT)
 #endif
@@ -72,7 +98,19 @@
 #endif
 
 #ifdef MINSTATE_PHYS
+# ifdef XEN
+# define MINSTATE_GET_CURRENT(reg)                                     \
+       movl reg=THIS_CPU(cpu_kr)+IA64_KR_CURRENT_OFFSET;;              \
+       tpa reg=reg;;                                                   \
+       ld8 reg=[reg];;                                                 \
+       tpa reg=reg;;
+# define MINSTATE_GET_CURRENT_VIRT(reg)                                        
\
+       movl reg=THIS_CPU(cpu_kr)+IA64_KR_CURRENT_OFFSET;;              \
+       tpa reg=reg;;                                                   \
+       ld8 reg=[reg];;
+#else
 # define MINSTATE_GET_CURRENT(reg)     mov reg=IA64_KR(CURRENT);; tpa reg=reg
+#endif /* XEN */
 # define MINSTATE_START_SAVE_MIN       MINSTATE_START_SAVE_MIN_PHYS
 # define MINSTATE_END_SAVE_MIN         MINSTATE_END_SAVE_MIN_PHYS
 #endif
@@ -175,8 +213,8 @@
        ;;                                                                      
                \
 .mem.offset 0,0; st8.spill [r16]=r13,16;                                       
                \
 .mem.offset 8,0; st8.spill [r17]=r21,16;       /* save ar.fpsr */              
                \
-       /* XEN mov r13=IA64_KR(CURRENT);*/      /* establish `current' */       
                        \
-       MINSTATE_GET_CURRENT(r13);              /* XEN establish `current' */   
                        \
+       /* XEN mov r13=IA64_KR(CURRENT);*/      /* establish `current' */       
                \
+       MINSTATE_GET_CURRENT_VIRT(r13);         /* XEN establish `current' */   
                \
        ;;                                                                      
                \
 .mem.offset 0,0; st8.spill [r16]=r15,16;                                       
                \
 .mem.offset 8,0; st8.spill [r17]=r14,16;                                       
                \
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/linux-xen/tlb.c
--- a/xen/arch/ia64/linux-xen/tlb.c     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/linux-xen/tlb.c     Fri Jul 28 10:51:38 2006 +0100
@@ -173,7 +173,11 @@ void __devinit
 void __devinit
 ia64_tlb_init (void)
 {
+#ifndef XEN
        ia64_ptce_info_t ptce_info;
+#else
+       ia64_ptce_info_t ptce_info = { 0 };
+#endif
        unsigned long tr_pgbits;
        long status;
 
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/linux-xen/unwind.c
--- a/xen/arch/ia64/linux-xen/unwind.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/linux-xen/unwind.c  Fri Jul 28 10:51:38 2006 +0100
@@ -2056,6 +2056,28 @@ init_frame_info (struct unw_frame_info *
 }
 
 void
+unw_init_from_interruption (struct unw_frame_info *info, struct task_struct *t,
+                           struct pt_regs *pt, struct switch_stack *sw)
+{
+       unsigned long sof;
+
+       init_frame_info(info, t, sw, pt->r12);
+       info->cfm_loc = &pt->cr_ifs;
+       info->unat_loc = &pt->ar_unat;
+       info->pfs_loc = &pt->ar_pfs;
+       sof = *info->cfm_loc & 0x7f;
+       info->bsp = (unsigned long) ia64_rse_skip_regs((unsigned long *) 
info->regstk.top, -sof);
+       info->ip = pt->cr_iip + ia64_psr(pt)->ri;
+       info->pt = (unsigned long) pt;
+       UNW_DPRINT(3, "unwind.%s:\n"
+                  "  bsp    0x%lx\n"
+                  "  sof    0x%lx\n"
+                  "  ip     0x%lx\n",
+                  __FUNCTION__, info->bsp, sof, info->ip);
+       find_save_locs(info);
+}
+
+void
 unw_init_frame_info (struct unw_frame_info *info, struct task_struct *t, 
struct switch_stack *sw)
 {
        unsigned long sol;
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/tools/README.RunVT
--- a/xen/arch/ia64/tools/README.RunVT  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/tools/README.RunVT  Fri Jul 28 10:51:38 2006 +0100
@@ -1,59 +1,46 @@ INSTRUCTIONS FOR Running IPF/Xen with VT
 INSTRUCTIONS FOR Running IPF/Xen with VT-enabled Tiger4 pltform
 
-Note: the Domain0 must be an unmodified Linux
+1. Install a Linux Disk, VT_Disk, to be used by VT
+2. Setup the target VT_Disk
+       1. Boot VT_Disk
+       2. modify following files of VT_Disk
+               /boot/efi/efi/redhat/elilo.conf -
+                       modify "append=" line to have "root=/dev/hda3"
+                       ** note /dev/hda3 must reflect VT_Disk /root partition
 
-1) Perform operations in README.xenia64 to get a flattened Xen IPF source tree
+               /etc/fstab -
+                       LABEL=/     /     ext3    DEFAULTS  1   1
+                 to
+                       /dev/hda3   /     ext3    DEFAULTS  1   1
+                  and other entries accordingly
+3. Install Xen and boot XenLinux on your standard Linux disk
+        1. modify /boot/efi/efi/redhat/elilo.conf -
+                       "append=" entry to have "root=/dev/sda3"
+       2. modify /etc/fstab -
+                        LABEL=/     /     ext3    DEFAULTS  1   1
+                  to
+                        /dev/sda3   /     ext3    DEFAULTS  1   1
+                  and other entries accordingly
+4. Reboot XenLinux with VT_Disk in /dev/sdb slot
+       1. copy Guest_Firmware.bin into /usr/lib/xen/boot/guest_firmware.bin
+       2. modify /etc/xen/xmexample.vti
+               disk = [ 'phy:/dev/sdb,ioemu:hda,w' ]
+          and make sure
+               kernel=/usr/lib/xen/boot/guest_firmware.bin
+5. Make sure XenLinux has SDL installed by
+       > rpm -q -a | grep SDL
+               SDL-1.2.7-8 SDL-devel-1.2.7-8 
+6. Start vncserver from XenLinux
+       1. ifconfig  to get XenLinux IP address
+       2. vncserver
+7. Start VT Domain
+       1. From a remote system connect to XenLinux through vnc viewer
+       2. On vnc windows
+               > xend start
+               > xm create /etc/xen/xmexample.vti
+          an EFI shell will popup
+               > fs0:
+               fs0:> cd efi\redhat
+               fs0:> elilo linux
 
-2) Build an unmodified Linux 2.6 kernel
-       a) tar xvfz  linux-2.6.11.tar.gz
-        b) cp arch/ia64/configs/tiger_defconfig .config
-       c) Build linux.
-               1) yes "" | make oldconfig
-               2) make
 
-3) Build IPF VT-enabled Xen image
-       edit xen/arch/ia64/Rules.mk for
-               CONFIG_VTI      ?= y    to enable VT-enable build
-4) Setup ELILO.CONF
-       image=xen
-               label=xen
-               initrd=vmlinux2.6.11            // unmodified Linux kernel image
-               read-only
-               append="nomca root=/dev/sda3"
-
-STATUS as 4/28/05 - Features implemented for Domain0
-
-0. Runs unmodified Linux kernel as Domain0
-    Validated with Linux 2.6.11 to run Xwindow and NIC on UP logical processor
-
-1. Take advantage of VT-enabled processor
-   a. Processor intercepts guest privileged instruction and deliver 
Opcode/Cause to Hypervisor
-   b. One VPD (Virtual Processor Descriptor) per Virtual Processor
-   c. Domains are in a different virtual address space from hypervisor. 
Domains have one less VA bit than hypervisor, where hypervisor runs in 
0xF00000... address protected by the processor from Domains.
-
-2. vTLB and guest_VHPT
-   a. vTLB extending machine TLB entries through hypervisor internal data 
structure
-      vTLB caches Domains installed TR's and TC's, and then installs TC's for 
Domains instead.
-      vTLB implements collision chains
-   b. Processor walks hypervisor internal VHPT, not the domain VHPT.  On TLB 
miss, vTLB is consulted first to put hypervisor cached entry into VHPT without 
inject TLB miss to domain.
-
-3. Region ID fix-partitioning
-   a. currently hard partition 24bits of RIDs into 16 partitions by using top 
4bit.
-   b. Hypervisor uses the very last partition RIDs, i.e., 0xFxxxxx RIDs
-   c. Effectively supports Domain0 and 14 other DomainN
-
-4. HyperVisor is mapped with 2 sets of RIDs during runtime, its own RIDs and 
the active Domain RIDs
-   a. Domain RIDs are used by processor to access guest_VHPT during Domain 
runtime
-   b. Hypervisor RIDs are used when Hypervisor is running
-   c. Implies there are some Region registers transition on entering/exiting 
hypervisor
-
-5. Linux styled pt_regs with minor modification for VT and instruction 
emulation
-   a. Part of Domain registers are saved/restored from VPD
-   b. Extended pt_regs to include r4~r7 and Domain's iipa & isr for possible 
instruction emulation, so no need to save a complete set of switch_stack on IVT 
entry
-
-6. Linux styled per virtual processor memory/RSE stacks, which is the same as 
non-VT domain0
-
-7. Handles splitted I/DCache design
-   Newer IPF processors has split I/Dcaches.  The design takes this into 
consideration when Xen recopy Domain0 to target address for execution
-
-
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/mmio.c
--- a/xen/arch/ia64/vmx/mmio.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/mmio.c  Fri Jul 28 10:51:38 2006 +0100
@@ -27,7 +27,7 @@
 #include <asm/gcc_intrin.h>
 #include <linux/interrupt.h>
 #include <asm/vmx_vcpu.h>
-#include <asm/privop.h>
+#include <asm/bundle.h>
 #include <asm/types.h>
 #include <public/hvm/ioreq.h>
 #include <asm/mm.h>
@@ -386,20 +386,16 @@ static void write_ipi (VCPU *vcpu, uint6
         struct pt_regs *targ_regs = vcpu_regs (targ);
         struct vcpu_guest_context c;
 
-        printf ("arch_boot_vcpu: %p %p\n",
-                (void *)d->arch.boot_rdv_ip,
-                (void *)d->arch.boot_rdv_r1);
         memset (&c, 0, sizeof (c));
 
-        c.flags = VGCF_VMX_GUEST;
         if (arch_set_info_guest (targ, &c) != 0) {
             printf ("arch_boot_vcpu: failure\n");
             return;
         }
         /* First or next rendez-vous: set registers.  */
         vcpu_init_regs (targ);
-        targ_regs->cr_iip = d->arch.boot_rdv_ip;
-        targ_regs->r1 = d->arch.boot_rdv_r1;
+        targ_regs->cr_iip = d->arch.sal_data->boot_rdv_ip;
+        targ_regs->r1 = d->arch.sal_data->boot_rdv_r1;
 
         if (test_and_clear_bit(_VCPUF_down,&targ->vcpu_flags)) {
             vcpu_wake(targ);
@@ -425,7 +421,6 @@ static void write_ipi (VCPU *vcpu, uint6
    dir 1: read 0:write
     inst_type 0:integer 1:floating point
  */
-extern IA64_BUNDLE __vmx_get_domain_bundle(u64 iip);
 #define SL_INTEGER  0        // store/load interger
 #define SL_FLOATING    1       // store/load floating
 
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/pal_emul.c
--- a/xen/arch/ia64/vmx/pal_emul.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/pal_emul.c      Fri Jul 28 10:51:38 2006 +0100
@@ -24,18 +24,39 @@
 #include <asm/dom_fw.h>
 #include <asm/tlb.h>
 #include <asm/vmx_mm_def.h>
+#include <xen/hypercall.h>
+#include <public/sched.h>
+
+/*
+ * Handy macros to make sure that the PAL return values start out
+ * as something meaningful.
+ */
+#define INIT_PAL_STATUS_UNIMPLEMENTED(x)               \
+       {                                               \
+               x.status = PAL_STATUS_UNIMPLEMENTED;    \
+               x.v0 = 0;                               \
+               x.v1 = 0;                               \
+               x.v2 = 0;                               \
+       }
+
+#define INIT_PAL_STATUS_SUCCESS(x)                     \
+       {                                               \
+               x.status = PAL_STATUS_SUCCESS;          \
+               x.v0 = 0;                               \
+               x.v1 = 0;                               \
+               x.v2 = 0;                               \
+       }
 
 static void
-get_pal_parameters (VCPU *vcpu, UINT64 *gr29,
-                       UINT64 *gr30, UINT64 *gr31) {
-
-       vcpu_get_gr_nat(vcpu,29,gr29);
-       vcpu_get_gr_nat(vcpu,30,gr30); 
-       vcpu_get_gr_nat(vcpu,31,gr31);
+get_pal_parameters(VCPU *vcpu, UINT64 *gr29, UINT64 *gr30, UINT64 *gr31) {
+
+       vcpu_get_gr_nat(vcpu,29,gr29);
+       vcpu_get_gr_nat(vcpu,30,gr30); 
+       vcpu_get_gr_nat(vcpu,31,gr31);
 }
 
 static void
-set_pal_result (VCPU *vcpu,struct ia64_pal_retval result) {
+set_pal_result(VCPU *vcpu,struct ia64_pal_retval result) {
 
        vcpu_set_gr(vcpu,8, result.status,0);
        vcpu_set_gr(vcpu,9, result.v0,0);
@@ -44,58 +65,60 @@ set_pal_result (VCPU *vcpu,struct ia64_p
 }
 
 static void
-set_sal_result (VCPU *vcpu,struct sal_ret_values result) {
+set_sal_result(VCPU *vcpu,struct sal_ret_values result) {
 
        vcpu_set_gr(vcpu,8, result.r8,0);
        vcpu_set_gr(vcpu,9, result.r9,0);
        vcpu_set_gr(vcpu,10, result.r10,0);
        vcpu_set_gr(vcpu,11, result.r11,0);
 }
-static struct ia64_pal_retval
-pal_cache_flush (VCPU *vcpu) {
+
+static struct ia64_pal_retval
+pal_cache_flush(VCPU *vcpu) {
        UINT64 gr28,gr29, gr30, gr31;
        struct ia64_pal_retval result;
 
-       get_pal_parameters (vcpu, &gr29, &gr30, &gr31);
-       vcpu_get_gr_nat(vcpu,28,&gr28);
+       get_pal_parameters(vcpu, &gr29, &gr30, &gr31);
+       vcpu_get_gr_nat(vcpu, 28, &gr28);
 
        /* Always call Host Pal in int=1 */
-       gr30 = gr30 &(~(0x2UL));
-
-       /* call Host PAL cache flush */
-       result=ia64_pal_call_static(gr28 ,gr29, gr30,gr31,1);  // Clear psr.ic when call PAL_CACHE_FLUSH
+       gr30 = gr30 & ~0x2UL;
+
+       /*
+        * Call Host PAL cache flush
+        * Clear psr.ic when call PAL_CACHE_FLUSH
+        */
+       result = ia64_pal_call_static(gr28 ,gr29, gr30, gr31, 1);
 
        /* If host PAL call is interrupted, then loop to complete it */
-//     while (result.status == 1) {
-//             ia64_pal_call_static(gr28 ,gr29, gr30, 
-//                             result.v1,1LL);
-//     }
-       if(result.status != 0) {
-               panic_domain(vcpu_regs(vcpu),"PAL_CACHE_FLUSH ERROR, status %ld", result.status);
-       }
-
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_vm_tr_read (VCPU *vcpu ) {
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-
-       return result;
-}
-
-
-static struct ia64_pal_retval
-pal_prefetch_visibility (VCPU *vcpu)  {
+//     while (result.status == 1)
+//             ia64_pal_call_static(gr28 ,gr29, gr30, result.v1, 1LL);
+//
+       if (result.status != 0)
+               panic_domain(vcpu_regs(vcpu), "PAL_CACHE_FLUSH ERROR, "
+                            "status %ld", result.status);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_vm_tr_read(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_prefetch_visibility(VCPU *vcpu) {
        /* Due to current MM virtualization algorithm,
         * We do not allow guest to change mapping attribute.
         * Thus we will not support PAL_PREFETCH_VISIBILITY
         */
        struct ia64_pal_retval result;
 
-       result.status= -1; //unimplemented
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
 
        return result;
 }
@@ -104,288 +127,315 @@ pal_platform_addr(VCPU *vcpu) {
 pal_platform_addr(VCPU *vcpu) {
        struct ia64_pal_retval result;
 
-       result.status= 0; //success
-
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_halt (VCPU *vcpu) {
+       INIT_PAL_STATUS_SUCCESS(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_halt(VCPU *vcpu) {
        //bugbug: to be implement. 
        struct ia64_pal_retval result;
 
-       result.status= -1; //unimplemented
-
-       return result;
-}
-
-
-static struct ia64_pal_retval
-pal_halt_light (VCPU *vcpu) {
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_cache_read (VCPU *vcpu) {
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_cache_write (VCPU *vcpu) {
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_bus_get_features(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_cache_summary(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_cache_init(VCPU *vcpu){
-       struct ia64_pal_retval result;
-       result.status=0;
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_cache_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_cache_prot_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_mem_attrib(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_debug_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_fixed_addr(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_freq_base(VCPU *vcpu){
-    struct ia64_pal_retval result;
-    struct ia64_sal_retval isrv;
-
-    PAL_CALL(result,PAL_FREQ_BASE, 0, 0, 0);
-    if(result.v0 == 0){ //PAL_FREQ_BASE may not be implemented in some platforms, call SAL instead.
-        SAL_CALL(isrv, SAL_FREQ_BASE, 
-                SAL_FREQ_BASE_PLATFORM, 0, 0, 0, 0, 0, 0);
-        result.status = isrv.status;
-        result.v0 = isrv.v0;
-        result.v1 = result.v2 =0;
-    }
-    return result;
-}
-
-static struct ia64_pal_retval
-pal_freq_ratios(VCPU *vcpu){
-    struct ia64_pal_retval result;
-
-    PAL_CALL(result,PAL_FREQ_RATIOS, 0, 0, 0);
-    return result;
-}
-
-static struct ia64_pal_retval
-pal_halt_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_logical_to_physica(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_perf_mon_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_proc_get_features(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_ptce_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_register_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_rse_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-static struct ia64_pal_retval
-pal_test_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_vm_summary(VCPU *vcpu){
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_halt_light(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+       
+       if (!is_unmasked_irq(vcpu))
+               do_sched_op_compat(SCHEDOP_block, 0);
+           
+       INIT_PAL_STATUS_SUCCESS(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_cache_read(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_cache_write(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_bus_get_features(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_cache_summary(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_cache_init(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_SUCCESS(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_cache_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_cache_prot_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_mem_attrib(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_debug_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_fixed_addr(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_freq_base(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+       struct ia64_sal_retval isrv;
+
+       PAL_CALL(result,PAL_FREQ_BASE, 0, 0, 0);
+       /*
+        * PAL_FREQ_BASE may not be implemented in some platforms,
+        * call SAL instead.
+        */
+       if (result.v0 == 0) {
+               SAL_CALL(isrv, SAL_FREQ_BASE, 
+                        SAL_FREQ_BASE_PLATFORM, 0, 0, 0, 0, 0, 0);
+               result.status = isrv.status;
+               result.v0 = isrv.v0;
+               result.v1 = result.v2 = 0;
+       }
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_freq_ratios(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       PAL_CALL(result, PAL_FREQ_RATIOS, 0, 0, 0);
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_halt_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_logical_to_physica(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_perf_mon_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_proc_get_features(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_ptce_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_register_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_rse_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_test_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_vm_summary(VCPU *vcpu) {
        pal_vm_info_1_u_t vminfo1;
        pal_vm_info_2_u_t vminfo2;      
        struct ia64_pal_retval result;
        
-       PAL_CALL(result,PAL_VM_SUMMARY,0,0,0);
-       if(!result.status){
+       PAL_CALL(result, PAL_VM_SUMMARY, 0, 0, 0);
+       if (!result.status) {
                vminfo1.pvi1_val = result.v0;
                vminfo1.pal_vm_info_1_s.max_itr_entry = NITRS -1;
                vminfo1.pal_vm_info_1_s.max_dtr_entry = NDTRS -1;
                result.v0 = vminfo1.pvi1_val;
                vminfo2.pal_vm_info_2_s.impl_va_msb = GUEST_IMPL_VA_MSB;
-               vminfo2.pal_vm_info_2_s.rid_size = current->domain->arch.rid_bits;
+               vminfo2.pal_vm_info_2_s.rid_size =
+                                            current->domain->arch.rid_bits;
                result.v1 = vminfo2.pvi2_val;
        } 
        return result;
 }
 
 static struct ia64_pal_retval
-pal_vm_info(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
-
-static struct ia64_pal_retval
-pal_vm_page_size(VCPU *vcpu){
-       struct ia64_pal_retval result;
-
-       result.status= -1; //unimplemented
-       return result;
-}
+pal_vm_info(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
+static struct ia64_pal_retval
+pal_vm_page_size(VCPU *vcpu) {
+       struct ia64_pal_retval result;
+
+       INIT_PAL_STATUS_UNIMPLEMENTED(result);
+
+       return result;
+}
+
 void
-pal_emul( VCPU *vcpu) {
+pal_emul(VCPU *vcpu) {
        UINT64 gr28;
        struct ia64_pal_retval result;
-
 
        vcpu_get_gr_nat(vcpu,28,&gr28);  //bank1
 
        switch (gr28) {
                case PAL_CACHE_FLUSH:
-                       result = pal_cache_flush (vcpu);
+                       result = pal_cache_flush(vcpu);
                        break;
 
                case PAL_PREFETCH_VISIBILITY:
-                       result = pal_prefetch_visibility (vcpu);
+                       result = pal_prefetch_visibility(vcpu);
                        break;
 
                case PAL_VM_TR_READ:
-                       result = pal_vm_tr_read (vcpu);
+                       result = pal_vm_tr_read(vcpu);
                        break;
 
                case PAL_HALT:
-                       result = pal_halt (vcpu);
+                       result = pal_halt(vcpu);
                        break;
 
                case PAL_HALT_LIGHT:
-                       result = pal_halt_light (vcpu);
+                       result = pal_halt_light(vcpu);
                        break;
 
                case PAL_CACHE_READ:
-                       result = pal_cache_read (vcpu);
+                       result = pal_cache_read(vcpu);
                        break;
 
                case PAL_CACHE_WRITE:
-                       result = pal_cache_write (vcpu);
+                       result = pal_cache_write(vcpu);
                        break;
 
                case PAL_PLATFORM_ADDR:
-                       result = pal_platform_addr (vcpu);
+                       result = pal_platform_addr(vcpu);
                        break;
 
                case PAL_FREQ_RATIOS:
-                       result = pal_freq_ratios (vcpu);
+                       result = pal_freq_ratios(vcpu);
                        break;
 
                case PAL_FREQ_BASE:
-                       result = pal_freq_base (vcpu);
+                       result = pal_freq_base(vcpu);
                        break;
 
                case PAL_BUS_GET_FEATURES :
-                       result = pal_bus_get_features (vcpu);
+                       result = pal_bus_get_features(vcpu);
                        break;
 
                case PAL_CACHE_SUMMARY :
-                       result = pal_cache_summary (vcpu);
+                       result = pal_cache_summary(vcpu);
                        break;
 
                case PAL_CACHE_INIT :
@@ -457,17 +507,18 @@ pal_emul( VCPU *vcpu) {
                        break;
 
                default:
-                       panic_domain(vcpu_regs(vcpu),"pal_emul(): guest call unsupported pal" );
-  }
-               set_pal_result (vcpu, result);
+                       panic_domain(vcpu_regs(vcpu),"pal_emul(): guest "
+                                    "call unsupported pal" );
+       }
+       set_pal_result(vcpu, result);
 }
 
 void
 sal_emul(VCPU *v) {
        struct sal_ret_values result;
-       result = sal_emulator(vcpu_get_gr(v,32),vcpu_get_gr(v,33),
-                             vcpu_get_gr(v,34),vcpu_get_gr(v,35),
-                             vcpu_get_gr(v,36),vcpu_get_gr(v,37),
-                             vcpu_get_gr(v,38),vcpu_get_gr(v,39));
+       result = sal_emulator(vcpu_get_gr(v, 32), vcpu_get_gr(v, 33),
+                             vcpu_get_gr(v, 34), vcpu_get_gr(v, 35),
+                             vcpu_get_gr(v, 36), vcpu_get_gr(v, 37),
+                             vcpu_get_gr(v, 38), vcpu_get_gr(v, 39));
        set_sal_result(v, result);      
 }
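
The INIT_PAL_STATUS_* macros added in the pal_emul.c hunks above replace the bare "result.status = -1" assignments, so every PAL handler starts from a fully initialized return value rather than leaving v0..v2 uninitialized. A self-contained sketch of the same pattern; the struct layout, status codes and do/while wrapping below are stand-ins, not the Xen definitions:

    #include <stdio.h>

    /* Stand-ins for Xen's struct ia64_pal_retval and PAL status codes. */
    struct pal_retval { long status, v0, v1, v2; };
    #define PAL_STATUS_SUCCESS        0
    #define PAL_STATUS_UNIMPLEMENTED  (-1L)

    #define INIT_PAL_STATUS_UNIMPLEMENTED(x)  do {      \
            (x).status = PAL_STATUS_UNIMPLEMENTED;      \
            (x).v0 = (x).v1 = (x).v2 = 0;               \
        } while (0)

    /* A not-yet-emulated PAL call now returns well-defined scratch values too. */
    static struct pal_retval pal_stub(void)
    {
        struct pal_retval r;
        INIT_PAL_STATUS_UNIMPLEMENTED(r);
        return r;
    }

    int main(void)
    {
        struct pal_retval r = pal_stub();
        printf("status=%ld v0=%ld v1=%ld v2=%ld\n", r.status, r.v0, r.v1, r.v2);
        return 0;
    }
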
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vlsapic.c
--- a/xen/arch/ia64/vmx/vlsapic.c       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vlsapic.c       Fri Jul 28 10:51:38 2006 +0100
@@ -103,6 +103,7 @@ static void vtm_timer_fn(void *data)
     vitv = VCPU(vcpu, itv);
     if ( !ITV_IRQ_MASK(vitv) ){
         vmx_vcpu_pend_interrupt(vcpu, vitv & 0xff);
+        vcpu_unblock(vcpu);
     }
     vtm=&(vcpu->arch.arch_vmx.vtm);
     cur_itc = now_itc(vtm);
@@ -290,7 +291,7 @@ static void update_vhpi(VCPU *vcpu, int 
         vhpi = 16;
     }
     else {
-        vhpi = vec / 16;
+        vhpi = vec >> 4;
     }
 
     VCPU(vcpu,vhpi) = vhpi;
@@ -437,7 +438,7 @@ static int highest_inservice_irq(VCPU *v
  */
 static int is_higher_irq(int pending, int inservice)
 {
-    return ( (pending >> 4) > (inservice>>4) || 
+    return ( (pending > inservice) || 
                 ((pending != NULL_VECTOR) && (inservice == NULL_VECTOR)) );
 }
 
@@ -461,7 +462,6 @@ _xirq_masked(VCPU *vcpu, int h_pending, 
 _xirq_masked(VCPU *vcpu, int h_pending, int h_inservice)
 {
     tpr_t    vtpr;
-    uint64_t    mmi;
     
     vtpr.val = VCPU(vcpu, tpr);
 
@@ -475,9 +475,9 @@ _xirq_masked(VCPU *vcpu, int h_pending, 
     if ( h_inservice == ExtINT_VECTOR ) {
         return IRQ_MASKED_BY_INSVC;
     }
-    mmi = vtpr.mmi;
+
     if ( h_pending == ExtINT_VECTOR ) {
-        if ( mmi ) {
+        if ( vtpr.mmi ) {
             // mask all external IRQ
             return IRQ_MASKED_BY_VTPR;
         }
@@ -487,7 +487,7 @@ _xirq_masked(VCPU *vcpu, int h_pending, 
     }
 
     if ( is_higher_irq(h_pending, h_inservice) ) {
-        if ( !mmi && is_higher_class(h_pending, vtpr.mic) ) {
+        if ( is_higher_class(h_pending, vtpr.mic + (vtpr.mmi << 4)) ) {
             return IRQ_NO_MASKED;
         }
         else {
@@ -551,8 +551,7 @@ void vmx_vcpu_pend_batch_interrupt(VCPU 
  * it into the guest. Otherwise, we set the VHPI if vac.a_int=1 so that when 
  * the interrupt becomes unmasked, it gets injected.
  * RETURN:
- *  TRUE:   Interrupt is injected.
- *  FALSE:  Not injected but may be in VHPI when vac.a_int=1
+ *    the highest unmasked interrupt.
  *
  * Optimization: We defer setting the VHPI until the EOI time, if a higher 
  *               priority interrupt is in-service. The idea is to reduce the 
@@ -562,23 +561,26 @@ int vmx_check_pending_irq(VCPU *vcpu)
 {
     uint64_t  spsr, mask;
     int     h_pending, h_inservice;
-    int injected=0;
     uint64_t    isr;
     IA64_PSR    vpsr;
     REGS *regs=vcpu_regs(vcpu);
     local_irq_save(spsr);
     h_pending = highest_pending_irq(vcpu);
-    if ( h_pending == NULL_VECTOR ) goto chk_irq_exit;
+    if ( h_pending == NULL_VECTOR ) {
+        h_pending = SPURIOUS_VECTOR;
+        goto chk_irq_exit;
+    }
     h_inservice = highest_inservice_irq(vcpu);
 
-    vpsr.val = vmx_vcpu_get_psr(vcpu);
+    vpsr.val = VCPU(vcpu, vpsr);
     mask = irq_masked(vcpu, h_pending, h_inservice);
     if (  vpsr.i && IRQ_NO_MASKED == mask ) {
         isr = vpsr.val & IA64_PSR_RI;
         if ( !vpsr.ic )
             panic_domain(regs,"Interrupt when IC=0\n");
+        if (VCPU(vcpu, vhpi))
+            update_vhpi(vcpu, NULL_VECTOR);
         vmx_reflect_interruption(0,isr,0, 12, regs ); // EXT IRQ
-        injected = 1;
     }
     else if ( mask == IRQ_MASKED_BY_INSVC ) {
         // cann't inject VHPI
@@ -591,7 +593,7 @@ int vmx_check_pending_irq(VCPU *vcpu)
 
 chk_irq_exit:
     local_irq_restore(spsr);
-    return injected;
+    return h_pending;
 }
 
 /*
@@ -613,6 +615,20 @@ void guest_write_eoi(VCPU *vcpu)
 //    vmx_check_pending_irq(vcpu);
 }
 
+int is_unmasked_irq(VCPU *vcpu)
+{
+    int h_pending, h_inservice;
+
+    h_pending = highest_pending_irq(vcpu);
+    h_inservice = highest_inservice_irq(vcpu);
+    if ( h_pending == NULL_VECTOR || 
+        irq_masked(vcpu, h_pending, h_inservice) != IRQ_NO_MASKED ) {
+        return 0;
+    }
+    else
+        return 1;
+}
+
 uint64_t guest_read_vivr(VCPU *vcpu)
 {
     int vec, h_inservice;
@@ -629,7 +645,8 @@ uint64_t guest_read_vivr(VCPU *vcpu)
  
     VLSAPIC_INSVC(vcpu,vec>>6) |= (1UL <<(vec&63));
     VCPU(vcpu, irr[vec>>6]) &= ~(1UL <<(vec&63));
-    update_vhpi(vcpu, NULL_VECTOR);     // clear VHPI till EOI or IRR write
+    if (VCPU(vcpu, vhpi))
+        update_vhpi(vcpu, NULL_VECTOR); // clear VHPI till EOI or IRR write
     local_irq_restore(spsr);
     return (uint64_t)vec;
 }
@@ -639,7 +656,7 @@ static void generate_exirq(VCPU *vcpu)
     IA64_PSR    vpsr;
     uint64_t    isr;
     REGS *regs=vcpu_regs(vcpu);
-    vpsr.val = vmx_vcpu_get_psr(vcpu);
+    vpsr.val = VCPU(vcpu, vpsr);
     update_vhpi(vcpu, NULL_VECTOR);
     isr = vpsr.val & IA64_PSR_RI;
     if ( !vpsr.ic )
@@ -653,7 +670,7 @@ void vhpi_detection(VCPU *vcpu)
     tpr_t       vtpr;
     IA64_PSR    vpsr;
     
-    vpsr.val = vmx_vcpu_get_psr(vcpu);
+    vpsr.val = VCPU(vcpu, vpsr);
     vtpr.val = VCPU(vcpu, tpr);
 
     threshold = ((!vpsr.i) << 5) | (vtpr.mmi << 4) | vtpr.mic;
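
With the vlsapic.c changes above, pal_halt_light() can block the vCPU via do_sched_op_compat(SCHEDOP_block, 0) whenever is_unmasked_irq() finds nothing deliverable, and the new vcpu_unblock() call in vtm_timer_fn() is the matching wake-up. A compact sketch of that idle pattern; the predicate and the block/wake helpers below are stand-ins for the Xen calls, not their real implementations:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in vCPU; in Xen the predicate is derived from IRR/ISR/TPR state. */
    struct vcpu { bool irq_deliverable; bool blocked; };

    static bool unmasked_irq_pending(struct vcpu *v) { return v->irq_deliverable; }
    static void sched_block(struct vcpu *v)          { v->blocked = true;  }  /* ~SCHEDOP_block  */
    static void vcpu_unblock_stub(struct vcpu *v)    { v->blocked = false; }  /* ~vcpu_unblock() */

    /* PAL_HALT_LIGHT: yield the physical CPU only if nothing can be delivered. */
    static void halt_light(struct vcpu *v)
    {
        if (!unmasked_irq_pending(v))
            sched_block(v);
    }

    int main(void)
    {
        struct vcpu v = { false, false };
        halt_light(&v);                 /* guest goes idle */
        printf("blocked=%d\n", v.blocked);
        v.irq_deliverable = true;       /* virtual timer fires ...                       */
        vcpu_unblock_stub(&v);          /* ... and wakes the vCPU, as vtm_timer_fn() now does */
        printf("blocked=%d\n", v.blocked);
        return 0;
    }
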
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmmu.c
--- a/xen/arch/ia64/vmx/vmmu.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmmu.c  Fri Jul 28 10:51:38 2006 +0100
@@ -268,7 +268,7 @@ int vhpt_enabled(VCPU *vcpu, uint64_t va
     PTA   vpta;
     IA64_PSR  vpsr; 
 
-    vpsr.val = vmx_vcpu_get_psr(vcpu);
+    vpsr.val = VCPU(vcpu, vpsr);
     vcpu_get_rr(vcpu, vadr, &vrr.rrval);
     vmx_vcpu_get_pta(vcpu,&vpta.val);
 
@@ -290,6 +290,7 @@ int vhpt_enabled(VCPU *vcpu, uint64_t va
 
 int unimplemented_gva(VCPU *vcpu,u64 vadr)
 {
+#if 0
     int bit=vcpu->domain->arch.imp_va_msb;
     u64 ladr =(vadr<<3)>>(3+bit);
     if(!ladr||ladr==(1U<<(61-bit))-1){
@@ -297,6 +298,9 @@ int unimplemented_gva(VCPU *vcpu,u64 vad
     }else{
         return 1;
     }
+#else
+    return 0;
+#endif
 }
 
 
@@ -618,7 +622,7 @@ IA64FAULT vmx_vcpu_tpa(VCPU *vcpu, UINT6
     visr.val=0;
     visr.ei=pt_isr.ei;
     visr.ir=pt_isr.ir;
-    vpsr.val = vmx_vcpu_get_psr(vcpu);
+    vpsr.val = VCPU(vcpu, vpsr);
     if(vpsr.ic==0){
         visr.ni=1;
     }
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_entry.S
--- a/xen/arch/ia64/vmx/vmx_entry.S     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_entry.S     Fri Jul 28 10:51:38 2006 +0100
@@ -163,24 +163,39 @@ END(ia64_leave_nested)
 
 
 
-GLOBAL_ENTRY(ia64_leave_hypervisor)
+GLOBAL_ENTRY(ia64_leave_hypervisor_prepare)
     PT_REGS_UNWIND_INFO(0)
     /*
      * work.need_resched etc. mustn't get changed by this CPU before it returns to
     ;;
      * user- or fsys-mode, hence we disable interrupts early on:
      */
+    adds r2 = PT(R4)+16,r12
+    adds r3 = PT(R5)+16,r12
+    adds r8 = PT(EML_UNAT)+16,r12
+    ;;
+    ld8 r8 = [r8]
+    ;;
+    mov ar.unat=r8
+    ;;
+    ld8.fill r4=[r2],16    //load r4
+    ld8.fill r5=[r3],16    //load r5
+    ;;
+    ld8.fill r6=[r2]    //load r6
+    ld8.fill r7=[r3]    //load r7
+    ;;
+END(ia64_leave_hypervisor_prepare)
+//fall through
+GLOBAL_ENTRY(ia64_leave_hypervisor)
+    PT_REGS_UNWIND_INFO(0)
     rsm psr.i
     ;;
     alloc loc0=ar.pfs,0,1,1,0
+    ;;
     adds out0=16,r12
-    adds r7 = PT(EML_UNAT)+16,r12
-    ;;
-    ld8 r7 = [r7]
     br.call.sptk.many b0=leave_hypervisor_tail
     ;;
     mov ar.pfs=loc0
-    mov ar.unat=r7
     adds r20=PT(PR)+16,r12
     ;;
     lfetch [r20],PT(CR_IPSR)-PT(PR)
@@ -245,12 +260,6 @@ GLOBAL_ENTRY(ia64_leave_hypervisor)
     ldf.fill f10=[r2],32
     ldf.fill f11=[r3],24
     ;;
-    ld8.fill r4=[r2],16    //load r4
-    ld8.fill r5=[r3],16    //load r5
-    ;;
-    ld8.fill r6=[r2]    //load r6
-    ld8.fill r7=[r3]    //load r7
-    ;;
     srlz.i          // ensure interruption collection is off
     ;;
     bsw.0
@@ -283,8 +292,8 @@ GLOBAL_ENTRY(ia64_leave_hypervisor)
     ld8 r19=[r16],PT(R3)-PT(AR_FPSR)    //load ar_fpsr
     ld8.fill r2=[r17],PT(AR_CCV)-PT(R2)    //load r2
     ;;
-    ld8.fill r3=[r16]    //load r3
-    ld8 r18=[r17],PT(RFI_PFS)-PT(AR_CCV)           //load ar_ccv
+    ld8.fill r3=[r16]  //load r3
+    ld8 r18=[r17]      //load ar_ccv
     ;;
     mov ar.fpsr=r19
     mov ar.ccv=r18
@@ -348,7 +357,6 @@ vmx_rse_clear_invalid:
     ;;
     mov ar.bspstore=r24
     ;;
-    ld8 r24=[r17]       //load rfi_pfs
     mov ar.unat=r28
     mov ar.rnat=r25
     mov ar.rsc=r26
@@ -356,10 +364,6 @@ vmx_rse_clear_invalid:
     mov cr.ipsr=r31
     mov cr.iip=r30
     mov cr.ifs=r29
-    cmp.ne p6,p0=r24,r0
-(p6)br.sptk vmx_dorfirfi
-    ;;
-vmx_dorfirfi_back:
     mov ar.pfs=r27
     adds r18=IA64_VPD_BASE_OFFSET,r21
     ;;
@@ -370,20 +374,19 @@ vmx_dorfirfi_back:
     adds r19=VPD(VPSR),r18
     ;;
     ld8 r19=[r19]        //vpsr
+    movl r20=__vsa_base
+    ;;
 //vsa_sync_write_start
-    movl r20=__vsa_base
-    ;;
     ld8 r20=[r20]       // read entry point
     mov r25=r18
     ;;
+    movl r24=ia64_vmm_entry  // calculate return address
     add r16=PAL_VPS_SYNC_WRITE,r20
-    movl r24=switch_rr7  // calculate return address
     ;;
     mov b0=r16
     br.cond.sptk b0         // call the service
     ;;
 END(ia64_leave_hypervisor)
-switch_rr7:
 // fall through
 GLOBAL_ENTRY(ia64_vmm_entry)
 /*
@@ -416,23 +419,6 @@ ia64_vmm_entry_out:
     br.cond.sptk b0             // call pal service
 END(ia64_vmm_entry)
 
-//r24 rfi_pfs
-//r17 address of rfi_pfs
-GLOBAL_ENTRY(vmx_dorfirfi)
-    mov r16=ar.ec
-    movl r20 = vmx_dorfirfi_back
-       ;;
-// clean rfi_pfs
-    st8 [r17]=r0
-    mov b0=r20
-// pfs.pec=ar.ec
-    dep r24 = r16, r24, 52, 6
-    ;;
-    mov ar.pfs=r24
-       ;;
-    br.ret.sptk b0
-       ;;
-END(vmx_dorfirfi)
 
 #ifdef XEN_DBL_MAPPING  /* will be removed */
 
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_init.c
--- a/xen/arch/ia64/vmx/vmx_init.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_init.c      Fri Jul 28 10:51:38 2006 +0100
@@ -156,6 +156,7 @@ static vpd_t *alloc_vpd(void)
        int i;
        cpuid3_t cpuid3;
        vpd_t *vpd;
+       mapped_regs_t *mregs;
 
        vpd = alloc_xenheap_pages(get_order(VPD_SIZE));
        if (!vpd) {
@@ -165,23 +166,26 @@ static vpd_t *alloc_vpd(void)
 
        printk("vpd base: 0x%p, vpd size:%ld\n", vpd, sizeof(vpd_t));
        memset(vpd, 0, VPD_SIZE);
+       mregs = &vpd->vpd_low;
+
        /* CPUID init */
        for (i = 0; i < 5; i++)
-               vpd->vcpuid[i] = ia64_get_cpuid(i);
+               mregs->vcpuid[i] = ia64_get_cpuid(i);
 
        /* Limit the CPUID number to 5 */
-       cpuid3.value = vpd->vcpuid[3];
+       cpuid3.value = mregs->vcpuid[3];
        cpuid3.number = 4;      /* 5 - 1 */
-       vpd->vcpuid[3] = cpuid3.value;
-
-    vpd->vac.a_from_int_cr = 1;
-    vpd->vac.a_to_int_cr = 1;
-    vpd->vac.a_from_psr = 1;
-    vpd->vac.a_from_cpuid = 1;
-    vpd->vac.a_cover = 1;
-    vpd->vac.a_bsw = 1;
-
-       vpd->vdc.d_vmsw = 1;
+       mregs->vcpuid[3] = cpuid3.value;
+
+       mregs->vac.a_from_int_cr = 1;
+       mregs->vac.a_to_int_cr = 1;
+       mregs->vac.a_from_psr = 1;
+       mregs->vac.a_from_cpuid = 1;
+       mregs->vac.a_cover = 1;
+       mregs->vac.a_bsw = 1;
+       mregs->vac.a_int = 1;
+       
+       mregs->vdc.d_vmsw = 1;
 
        return vpd;
 }
@@ -201,7 +205,7 @@ vmx_create_vp(struct vcpu *v)
 vmx_create_vp(struct vcpu *v)
 {
        u64 ret;
-       vpd_t *vpd = v->arch.privregs;
+       vpd_t *vpd = (vpd_t *)v->arch.privregs;
        u64 ivt_base;
     extern char vmx_ia64_ivt;
        /* ia64_ivt is function pointer, so need this tranlation */
@@ -271,13 +275,11 @@ vmx_final_setup_guest(struct vcpu *v)
 {
        vpd_t *vpd;
 
-       free_xenheap_pages(v->arch.privregs, get_order(sizeof(mapped_regs_t)));
-
        vpd = alloc_vpd();
        ASSERT(vpd);
 
-       v->arch.privregs = vpd;
-       vpd->virt_env_vaddr = vm_buffer;
+       v->arch.privregs = (mapped_regs_t *)vpd;
+       vpd->vpd_low.virt_env_vaddr = vm_buffer;
 
        /* Per-domain vTLB and vhpt implementation. Now vmx domain will stick
         * to this solution. Maybe it can be deferred until we know created
@@ -298,6 +300,8 @@ vmx_final_setup_guest(struct vcpu *v)
 
        /* One more step to enable interrupt assist */
        set_bit(ARCH_VMX_INTR_ASSIST, &v->arch.arch_vmx.flags);
+       /* Set up guest's indicator for VTi domain */
+       set_bit(ARCH_VMX_DOMAIN, &v->arch.arch_vmx.flags);
 }
 
 void
@@ -317,7 +321,7 @@ typedef struct io_range {
        unsigned long type;
 } io_range_t;
 
-io_range_t io_ranges[] = {
+static const io_range_t io_ranges[] = {
        {VGA_IO_START, VGA_IO_SIZE, GPFN_FRAME_BUFFER},
        {MMIO_START, MMIO_SIZE, GPFN_LOW_MMIO},
        {LEGACY_IO_START, LEGACY_IO_SIZE, GPFN_LEGACY_IO},
@@ -325,24 +329,22 @@ io_range_t io_ranges[] = {
        {PIB_START, PIB_SIZE, GPFN_PIB},
 };
 
+/* Reserve 1 page for shared I/O and 1 page for xenstore.  */
 #define VMX_SYS_PAGES  (2 + (GFW_SIZE >> PAGE_SHIFT))
 #define VMX_CONFIG_PAGES(d) ((d)->max_pages - VMX_SYS_PAGES)
 
-int vmx_build_physmap_table(struct domain *d)
+static void vmx_build_physmap_table(struct domain *d)
 {
        unsigned long i, j, start, tmp, end, mfn;
-       struct vcpu *v = d->vcpu[0];
        struct list_head *list_ent = d->page_list.next;
 
-       ASSERT(!d->arch.physmap_built);
-       ASSERT(!test_bit(ARCH_VMX_CONTIG_MEM, &v->arch.arch_vmx.flags));
        ASSERT(d->max_pages == d->tot_pages);
 
        /* Mark I/O ranges */
        for (i = 0; i < (sizeof(io_ranges) / sizeof(io_range_t)); i++) {
            for (j = io_ranges[i].start;
-                j < io_ranges[i].start + io_ranges[i].size;
-                j += PAGE_SIZE)
+               j < io_ranges[i].start + io_ranges[i].size;
+               j += PAGE_SIZE)
                __assign_domain_page(d, j, io_ranges[i].type, ASSIGN_writable);
        }
 
@@ -362,21 +364,19 @@ int vmx_build_physmap_table(struct domai
        if (unlikely(end > MMIO_START)) {
            start = 4 * MEM_G;
            end = start + (end - 3 * MEM_G);
-           for (i = start; (i < end) &&
-                (list_ent != &d->page_list); i += PAGE_SIZE) {
-               mfn = page_to_mfn(list_entry(
-                   list_ent, struct page_info, list));
+           for (i = start;
+                (i < end) && (list_ent != &d->page_list); i += PAGE_SIZE) {
+               mfn = page_to_mfn(list_entry(list_ent, struct page_info, list));
                assign_domain_page(d, i, mfn << PAGE_SHIFT);
                list_ent = mfn_to_page(mfn)->list.next;
            }
            ASSERT(list_ent != &d->page_list);
-        }
+       }
         
        /* Map guest firmware */
        for (i = GFW_START; (i < GFW_START + GFW_SIZE) &&
                (list_ent != &d->page_list); i += PAGE_SIZE) {
-           mfn = page_to_mfn(list_entry(
-               list_ent, struct page_info, list));
+           mfn = page_to_mfn(list_entry(list_ent, struct page_info, list));
            assign_domain_page(d, i, mfn << PAGE_SHIFT);
            list_ent = mfn_to_page(mfn)->list.next;
        }
@@ -393,24 +393,21 @@ int vmx_build_physmap_table(struct domai
        list_ent = mfn_to_page(mfn)->list.next;
        ASSERT(list_ent == &d->page_list);
 
-       d->arch.max_pfn = end >> PAGE_SHIFT;
-       d->arch.physmap_built = 1;
-       set_bit(ARCH_VMX_CONTIG_MEM, &v->arch.arch_vmx.flags);
-       return 0;
-}
-
-void vmx_setup_platform(struct domain *d, struct vcpu_guest_context *c)
+}
+
+void vmx_setup_platform(struct domain *d)
 {
        ASSERT(d != dom0); /* only for non-privileged vti domain */
 
-       if (!d->arch.physmap_built)
-           vmx_build_physmap_table(d);
+       vmx_build_physmap_table(d);
 
        d->arch.vmx_platform.shared_page_va =
                (unsigned long)__va(__gpa_to_mpa(d, IO_PAGE_START));
        /* TEMP */
        d->arch.vmx_platform.pib_base = 0xfee00000UL;
 
+       d->arch.sal_data = xmalloc(struct xen_sal_data);
+
        /* Only open one port for I/O and interrupt emulation */
        memset(&d->shared_info->evtchn_mask[0], 0xff,
            sizeof(d->shared_info->evtchn_mask));
@@ -430,8 +427,7 @@ void vmx_do_launch(struct vcpu *v)
            domain_crash_synchronous();
        }
 
-       clear_bit(iopacket_port(v),
-               &v->domain->shared_info->evtchn_mask[0]);
+       clear_bit(iopacket_port(v), &v->domain->shared_info->evtchn_mask[0]);
 
        vmx_load_all_rr(v);
 }
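
The comment added above spells out the VMX_SYS_PAGES budget: one shared-I/O page, one xenstore page, and the guest-firmware pages, with VMX_CONFIG_PAGES being whatever remains of max_pages. A small sketch of that sizing arithmetic; the PAGE_SHIFT, GFW_SIZE and guest-size values below are illustrative only, not the Xen constants:

    #include <stdio.h>

    #define PAGE_SHIFT  14                       /* illustrative: 16 KiB pages        */
    #define GFW_SIZE    (16UL << 20)             /* illustrative guest-firmware size  */

    /* One shared-I/O page, one xenstore page, plus the firmware pages. */
    #define VMX_SYS_PAGES       (2 + (GFW_SIZE >> PAGE_SHIFT))
    #define VMX_CONFIG_PAGES(m) ((m) - VMX_SYS_PAGES)

    int main(void)
    {
        unsigned long max_pages = (512UL << 20) >> PAGE_SHIFT;  /* a 512 MiB guest */

        printf("system pages: %lu\n", (unsigned long)VMX_SYS_PAGES);
        printf("config pages: %lu\n", VMX_CONFIG_PAGES(max_pages));
        return 0;
    }
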
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_interrupt.c
--- a/xen/arch/ia64/vmx/vmx_interrupt.c Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_interrupt.c Fri Jul 28 10:51:38 2006 +0100
@@ -117,7 +117,7 @@ set_ifa_itir_iha (VCPU *vcpu, u64 vadr,
 {
     IA64_PSR vpsr;
     u64 value;
-    vpsr.val = vmx_vcpu_get_psr(vcpu);
+    vpsr.val = VCPU(vcpu, vpsr);
     /* Vol2, Table 8-1 */
     if ( vpsr.ic ) {
         if ( set_ifa){
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_ivt.S
--- a/xen/arch/ia64/vmx/vmx_ivt.S       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_ivt.S       Fri Jul 28 10:51:38 2006 +0100
@@ -58,6 +58,7 @@
 #include <asm/thread_info.h>
 #include <asm/unistd.h>
 #include <asm/vhpt.h>
+#include <asm/virt_event.h>
 
 #ifdef VTI_DEBUG
   /*
@@ -200,7 +201,7 @@ vmx_itlb_loop:
     ;;
 vmx_itlb_out:
     mov r19 = 1
-    br.sptk vmx_dispatch_tlb_miss
+    br.sptk vmx_dispatch_itlb_miss
     VMX_FAULT(1);
 END(vmx_itlb_miss)
 
@@ -274,7 +275,7 @@ vmx_dtlb_loop:
     ;;
 vmx_dtlb_out:
     mov r19 = 2
-    br.sptk vmx_dispatch_tlb_miss
+    br.sptk vmx_dispatch_dtlb_miss
     VMX_FAULT(2);
 END(vmx_dtlb_miss)
 
@@ -787,6 +788,22 @@ ENTRY(vmx_virtualization_fault)
     st8 [r16] = r24
     st8 [r17] = r25
     ;;
+    cmp.ne p6,p0=EVENT_RFI, r24
+    (p6) br.sptk vmx_dispatch_virtualization_fault
+    ;;
+    adds r18=IA64_VPD_BASE_OFFSET,r21
+    ;;
+    ld8 r18=[r18]
+    ;;
+    adds r18=IA64_VPD_VIFS_OFFSET,r18
+    ;;
+    ld8 r18=[r18]
+    ;;
+    tbit.z p6,p0=r18,63
+    (p6) br.sptk vmx_dispatch_virtualization_fault
+    ;;
+    // if vifs.v=1, discard the current register frame
+    alloc r18=ar.pfs,0,0,0,0
     br.sptk vmx_dispatch_virtualization_fault
 END(vmx_virtualization_fault)
 
@@ -1024,9 +1041,10 @@ ENTRY(vmx_dispatch_virtualization_fault)
     srlz.i                  // guarantee that interruption collection is on
     ;;
     (p15) ssm psr.i               // restore psr.i
-    movl r14=ia64_leave_hypervisor
+    movl r14=ia64_leave_hypervisor_prepare
     ;;
     VMX_SAVE_REST
+    VMX_SAVE_EXTRA
     mov rp=r14
     ;;
     adds out1=16,sp         //regs
@@ -1053,7 +1071,7 @@ ENTRY(vmx_dispatch_vexirq)
     br.call.sptk.many b6=vmx_vexirq
 END(vmx_dispatch_vexirq)
 
-ENTRY(vmx_dispatch_tlb_miss)
+ENTRY(vmx_dispatch_itlb_miss)
     VMX_SAVE_MIN_WITH_COVER_R19
     alloc r14=ar.pfs,0,0,3,0
     mov out0=cr.ifa
@@ -1072,8 +1090,29 @@ ENTRY(vmx_dispatch_tlb_miss)
     ;;
     adds out2=16,r12
     br.call.sptk.many b6=vmx_hpw_miss
-END(vmx_dispatch_tlb_miss)
-
+END(vmx_dispatch_itlb_miss)
+
+ENTRY(vmx_dispatch_dtlb_miss)
+    VMX_SAVE_MIN_WITH_COVER_R19
+    alloc r14=ar.pfs,0,0,3,0
+    mov out0=cr.ifa
+    mov out1=r15
+    adds r3=8,r2                // set up second base pointer
+    ;;
+    ssm psr.ic
+    ;;
+    srlz.i                  // guarantee that interruption collection is on
+    ;;
+    (p15) ssm psr.i               // restore psr.i
+    movl r14=ia64_leave_hypervisor_prepare
+    ;;
+    VMX_SAVE_REST
+    VMX_SAVE_EXTRA
+    mov rp=r14
+    ;;
+    adds out2=16,r12
+    br.call.sptk.many b6=vmx_hpw_miss
+END(vmx_dispatch_dtlb_miss)
 
 ENTRY(vmx_dispatch_break_fault)
     VMX_SAVE_MIN_WITH_COVER_R19
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_minstate.h
--- a/xen/arch/ia64/vmx/vmx_minstate.h  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_minstate.h  Fri Jul 28 10:51:38 2006 +0100
@@ -57,8 +57,8 @@
     ;;
 
 
-#define PAL_VSA_SYNC_READ_CLEANUP_PSR_PL           \
-    /* begin to call pal vps sync_read and cleanup psr.pl */     \
+#define PAL_VSA_SYNC_READ           \
+    /* begin to call pal vps sync_read */     \
     add r25=IA64_VPD_BASE_OFFSET, r21;       \
     movl r20=__vsa_base;     \
     ;;          \
@@ -68,31 +68,17 @@
     add r20=PAL_VPS_SYNC_READ,r20;  \
     ;;  \
 { .mii;  \
-    add r22=VPD(VPSR),r25;   \
+    nop 0x0;   \
     mov r24=ip;        \
     mov b0=r20;     \
     ;;      \
 };           \
 { .mmb;      \
     add r24 = 0x20, r24;    \
-    mov r16 = cr.ipsr;  /* Temp workaround since psr.ic is off */ \
+    nop 0x0;            \
     br.cond.sptk b0;        /*  call the service */ \
     ;;              \
 };           \
-    ld8 r17=[r22];   \
-    /* deposite ipsr bit cpl into vpd.vpsr, since epc will change */    \
-    extr.u r30=r16, IA64_PSR_CPL0_BIT, 2;   \
-    ;;      \
-    dep r17=r30, r17, IA64_PSR_CPL0_BIT, 2;   \
-    extr.u r30=r16, IA64_PSR_BE_BIT, 5;   \
-    ;;      \
-    dep r17=r30, r17, IA64_PSR_BE_BIT, 5;   \
-    extr.u r30=r16, IA64_PSR_RI_BIT, 2;   \
-    ;;      \
-    dep r17=r30, r17, IA64_PSR_RI_BIT, 2;   \
-    ;;      \
-    st8 [r22]=r17;      \
-    ;;
 
 
 
@@ -219,7 +205,7 @@
     movl r11=FPSR_DEFAULT;   /* L-unit */                           \
     movl r1=__gp;       /* establish kernel global pointer */               \
     ;;                                          \
-    PAL_VSA_SYNC_READ_CLEANUP_PSR_PL           \
+    PAL_VSA_SYNC_READ           \
     VMX_MINSTATE_END_SAVE_MIN
 
 /*
@@ -274,24 +260,27 @@
     stf.spill [r3]=f9,32;           \
     ;;                  \
     stf.spill [r2]=f10,32;         \
-    stf.spill [r3]=f11,24;         \
-    ;;                  \
+    stf.spill [r3]=f11;         \
+    adds r25=PT(B7)-PT(F11),r3;     \
+    ;;                  \
+    st8 [r24]=r18,16;       /* b6 */    \
+    st8 [r25]=r19,16;       /* b7 */    \
+    adds r3=PT(R5)-PT(F11),r3;     \
+    ;;                  \
+    st8 [r24]=r9;           /* ar.csd */    \
+    st8 [r25]=r10;          /* ar.ssd */    \
+    ;;
+
+#define VMX_SAVE_EXTRA               \
 .mem.offset 0,0; st8.spill [r2]=r4,16;     \
 .mem.offset 8,0; st8.spill [r3]=r5,16;     \
     ;;                  \
 .mem.offset 0,0; st8.spill [r2]=r6,16;      \
 .mem.offset 8,0; st8.spill [r3]=r7;      \
-    adds r25=PT(B7)-PT(R7),r3;     \
-    ;;                  \
-    st8 [r24]=r18,16;       /* b6 */    \
-    st8 [r25]=r19,16;       /* b7 */    \
-    ;;                  \
-    st8 [r24]=r9;           /* ar.csd */    \
-    mov r26=ar.unat;            \
-    ;;      \
-    st8 [r25]=r10;          /* ar.ssd */    \
+    ;;                 \
+    mov r26=ar.unat;    \
+    ;;                 \
     st8 [r2]=r26;       /* eml_unat */ \
-    ;;
 
 #define VMX_SAVE_MIN_WITH_COVER   VMX_DO_SAVE_MIN(cover, mov r30=cr.ifs,)
 #define VMX_SAVE_MIN_WITH_COVER_R19 VMX_DO_SAVE_MIN(cover, mov r30=cr.ifs, mov r15=r19)
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_phy_mode.c
--- a/xen/arch/ia64/vmx/vmx_phy_mode.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_phy_mode.c  Fri Jul 28 10:51:38 2006 +0100
@@ -110,10 +110,8 @@ physical_tlb_miss(VCPU *vcpu, u64 vadr)
 physical_tlb_miss(VCPU *vcpu, u64 vadr)
 {
     u64 pte;
-    IA64_PSR vpsr;
-    vpsr.val=vmx_vcpu_get_psr(vcpu);
     pte =  vadr& _PAGE_PPN_MASK;
-    pte = pte|(vpsr.cpl<<7)|PHY_PAGE_WB;
+    pte = pte | PHY_PAGE_WB;
     thash_purge_and_insert(vcpu, pte, (PAGE_SHIFT<<2), vadr);
     return;
 }
@@ -204,23 +202,7 @@ vmx_load_all_rr(VCPU *vcpu)
        ia64_srlz_i();
 }
 
-void
-vmx_load_rr7_and_pta(VCPU *vcpu)
-{
-       unsigned long psr;
-
-       local_irq_save(psr);
-
-       vmx_switch_rr7(vrrtomrr(vcpu,VMX(vcpu, vrr[VRN7])),
-                       (void *)vcpu->domain->shared_info,
-                       (void *)vcpu->arch.privregs,
-                       (void *)vcpu->arch.vhpt.hash, pal_vaddr );
-       ia64_set_pta(vcpu->arch.arch_vmx.mpta);
-
-       ia64_srlz_d();
-       local_irq_restore(psr);
-       ia64_srlz_i();
-}
+
 
 void
 switch_to_physical_rid(VCPU *vcpu)
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_process.c
--- a/xen/arch/ia64/vmx/vmx_process.c   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_process.c   Fri Jul 28 10:51:38 2006 +0100
@@ -35,7 +35,7 @@
 #include <asm/io.h>
 #include <asm/processor.h>
 #include <asm/desc.h>
-//#include <asm/ldt.h>
+#include <asm/vlsapic.h>
 #include <xen/irq.h>
 #include <xen/event.h>
 #include <asm/regionreg.h>
@@ -82,7 +82,7 @@ void vmx_reflect_interruption(UINT64 ifa
      UINT64 vector,REGS *regs)
 {
     VCPU *vcpu = current;
-    UINT64 vpsr = vmx_vcpu_get_psr(vcpu);
+    UINT64 vpsr = VCPU(vcpu, vpsr);
     vector=vec2off[vector];
     if(!(vpsr&IA64_PSR_IC)&&(vector!=IA64_DATA_NESTED_TLB_VECTOR)){
         panic_domain(regs, "Guest nested fault vector=%lx!\n", vector);
@@ -156,7 +156,7 @@ void save_banked_regs_to_vpd(VCPU *v, RE
     IA64_PSR vpsr;
     src=&regs->r16;
     sunat=&regs->eml_unat;
-    vpsr.val = vmx_vcpu_get_psr(v);
+    vpsr.val = VCPU(v, vpsr);
     if(vpsr.bn){
         dst = &VCPU(v, vgr[0]);
         dunat =&VCPU(v, vnat);
@@ -188,14 +188,13 @@ void leave_hypervisor_tail(struct pt_reg
     struct vcpu *v = current;
     // FIXME: Will this work properly if doing an RFI???
     if (!is_idle_domain(d) ) { // always comes from guest
-        extern void vmx_dorfirfi(void);
-        struct pt_regs *user_regs = vcpu_regs(current);
-        if (local_softirq_pending())
-            do_softirq();
+//        struct pt_regs *user_regs = vcpu_regs(current);
+        local_irq_enable();
+        do_softirq();
         local_irq_disable();
 
-        if (user_regs != regs)
-            printk("WARNING: checking pending interrupt in nested interrupt!!!\n");
+//        if (user_regs != regs)
+//            printk("WARNING: checking pending interrupt in nested interrupt!!!\n");
 
         /* VMX Domain N has other interrupt source, saying DM  */
         if (test_bit(ARCH_VMX_INTR_ASSIST, &v->arch.arch_vmx.flags))
@@ -216,12 +215,18 @@ void leave_hypervisor_tail(struct pt_reg
 
         if ( v->arch.irq_new_pending ) {
             v->arch.irq_new_pending = 0;
+            v->arch.irq_new_condition = 0;
             vmx_check_pending_irq(v);
-        }
-//        if (VCPU(v,vac).a_bsw){
-//            save_banked_regs_to_vpd(v,regs);
-//        }
-
+            return;
+        }
+        if (VCPU(v, vac).a_int) {
+            vhpi_detection(v);
+            return;
+        }
+        if (v->arch.irq_new_condition) {
+            v->arch.irq_new_condition = 0;
+            vhpi_detection(v);
+        }
     }
 }
 
@@ -248,7 +253,7 @@ vmx_hpw_miss(u64 vadr , u64 vec, REGS* r
     check_vtlb_sanity(vtlb);
     dump_vtlb(vtlb);
 #endif
-    vpsr.val = vmx_vcpu_get_psr(v);
+    vpsr.val = VCPU(v, vpsr);
     misr.val=VMX(v,cr_isr);
 
     if(is_physical_mode(v)&&(!(vadr<<1>>62))){
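
The leave_hypervisor_tail() hunk above reorders the exit-path interrupt work: a freshly pended interrupt is handled first, then the VPD's a_int assist bit, and only then does a psr.i 0-to-1 transition trigger VHPI re-evaluation. A simplified standalone restatement of that ordering; the flags and helper functions below are stand-ins for the Xen fields and routines of the same names:

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins for the vcpu flags tested on the new exit path. */
    struct vcpu {
        bool irq_new_pending;     /* an interrupt was queued while in Xen        */
        bool irq_new_condition;   /* guest psr.i was turned on during emulation  */
        bool vac_a_int;           /* VPD requests interrupt assist               */
    };

    static void check_pending_irq(struct vcpu *v) { (void)v; printf("check/inject pending irq\n"); }
    static void vhpi_detect(struct vcpu *v)       { (void)v; printf("re-evaluate VHPI\n"); }

    /* Mirrors the order of checks leave_hypervisor_tail() now performs. */
    static void exit_path_checks(struct vcpu *v)
    {
        if (v->irq_new_pending) {
            v->irq_new_pending = false;
            v->irq_new_condition = false;
            check_pending_irq(v);
            return;
        }
        if (v->vac_a_int) {
            vhpi_detect(v);
            return;
        }
        if (v->irq_new_condition) {
            v->irq_new_condition = false;
            vhpi_detect(v);
        }
    }

    int main(void)
    {
        struct vcpu v = { true, true, false };
        exit_path_checks(&v);   /* the pending irq wins; the VHPI check is skipped */
        return 0;
    }
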
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_support.c
--- a/xen/arch/ia64/vmx/vmx_support.c   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_support.c   Fri Jul 28 10:51:38 2006 +0100
@@ -58,7 +58,7 @@ void vmx_wait_io(void)
     if (d->shared_info->evtchn_pending[port / BITS_PER_LONG])
         set_bit(port / BITS_PER_LONG, &v->vcpu_info->evtchn_pending_sel);
 
-    if (&v->vcpu_info->evtchn_pending_sel)
+    if (v->vcpu_info->evtchn_pending_sel)
         set_bit(0, &v->vcpu_info->evtchn_upcall_pending);
 }
 
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_utility.c
--- a/xen/arch/ia64/vmx/vmx_utility.c   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_utility.c   Fri Jul 28 10:51:38 2006 +0100
@@ -381,7 +381,7 @@ set_isr_ei_ni (VCPU *vcpu)
 
     visr.val = 0;
 
-    vpsr.val = vmx_vcpu_get_psr (vcpu);
+    vpsr.val = VCPU(vcpu, vpsr);
 
     if (!vpsr.ic == 1 ) {
         /* Set ISR.ni */
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_vcpu.c
--- a/xen/arch/ia64/vmx/vmx_vcpu.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_vcpu.c      Fri Jul 28 10:51:38 2006 +0100
@@ -67,6 +67,8 @@
 #include <asm/vmx_pal_vsa.h>
 #include <asm/kregs.h>
 //unsigned long last_guest_rsm = 0x0;
+
+#ifdef VTI_DEBUG
 struct guest_psr_bundle{
     unsigned long ip;
     unsigned long psr;
@@ -74,6 +76,7 @@ struct guest_psr_bundle{
 
 struct guest_psr_bundle guest_psr_buf[100];
 unsigned long guest_psr_index = 0;
+#endif
 
 void
 vmx_vcpu_set_psr(VCPU *vcpu, unsigned long value)
@@ -82,7 +85,7 @@ vmx_vcpu_set_psr(VCPU *vcpu, unsigned lo
     UINT64 mask;
     REGS *regs;
     IA64_PSR old_psr, new_psr;
-    old_psr.val=vmx_vcpu_get_psr(vcpu);
+    old_psr.val=VCPU(vcpu, vpsr);
 
     regs=vcpu_regs(vcpu);
     /* We only support guest as:
@@ -108,7 +111,8 @@ vmx_vcpu_set_psr(VCPU *vcpu, unsigned lo
         // vpsr.i 0->1
         vcpu->arch.irq_new_condition = 1;
     }
-    new_psr.val=vmx_vcpu_get_psr(vcpu);
+    new_psr.val=VCPU(vcpu, vpsr);
+#ifdef VTI_DEBUG    
     {
     struct pt_regs *regs = vcpu_regs(vcpu);
     guest_psr_buf[guest_psr_index].ip = regs->cr_iip;
@@ -116,6 +120,7 @@ vmx_vcpu_set_psr(VCPU *vcpu, unsigned lo
     if (++guest_psr_index >= 100)
         guest_psr_index = 0;
     }
+#endif    
 #if 0
     if (old_psr.i != new_psr.i) {
     if (old_psr.i)
@@ -149,24 +154,14 @@ IA64FAULT vmx_vcpu_increment_iip(VCPU *v
 {
     // TODO: trap_bounce?? Eddie
     REGS *regs = vcpu_regs(vcpu);
-    IA64_PSR vpsr;
     IA64_PSR *ipsr = (IA64_PSR *)&regs->cr_ipsr;
 
-    vpsr.val = vmx_vcpu_get_psr(vcpu);
-    if (vpsr.ri == 2) {
-    vpsr.ri = 0;
-    regs->cr_iip += 16;
+    if (ipsr->ri == 2) {
+        ipsr->ri = 0;
+        regs->cr_iip += 16;
     } else {
-    vpsr.ri++;
-    }
-
-    ipsr->ri = vpsr.ri;
-    vpsr.val &=
-            (~ (IA64_PSR_ID |IA64_PSR_DA | IA64_PSR_DD |
-                IA64_PSR_SS | IA64_PSR_ED | IA64_PSR_IA
-            ));
-
-    VCPU(vcpu, vpsr) = vpsr.val;
+        ipsr->ri++;
+    }
 
     ipsr->val &=
             (~ (IA64_PSR_ID |IA64_PSR_DA | IA64_PSR_DD |
@@ -181,7 +176,7 @@ IA64FAULT vmx_vcpu_cover(VCPU *vcpu)
 {
     REGS *regs = vcpu_regs(vcpu);
     IA64_PSR vpsr;
-    vpsr.val = vmx_vcpu_get_psr(vcpu);
+    vpsr.val = VCPU(vcpu, vpsr);
 
     if(!vpsr.ic)
         VCPU(vcpu,ifs) = regs->cr_ifs;
@@ -280,21 +275,12 @@ IA64FAULT vmx_vcpu_rfi(VCPU *vcpu)
     vcpu_bsw1(vcpu);
     vmx_vcpu_set_psr(vcpu,psr);
     ifs=VCPU(vcpu,ifs);
-    if((ifs>>63)&&(ifs<<1)){
-        ifs=(regs->cr_ifs)&0x7f;
-        regs->rfi_pfs = (ifs<<7)|ifs;
-        regs->cr_ifs = VCPU(vcpu,ifs);
-    }
+    if(ifs>>63)
+        regs->cr_ifs = ifs;
     regs->cr_iip = VCPU(vcpu,iip);
     return (IA64_NO_FAULT);
 }
 
-
-UINT64
-vmx_vcpu_get_psr(VCPU *vcpu)
-{
-    return VCPU(vcpu,vpsr);
-}
 
 #if 0
 IA64FAULT
@@ -393,6 +379,20 @@ vmx_vcpu_set_gr(VCPU *vcpu, unsigned reg
 
 #endif
 
+/*
+    VPSR can't keep track of below bits of guest PSR
+    This function gets guest PSR
+ */
+
+UINT64 vmx_vcpu_get_psr(VCPU *vcpu)
+{
+    UINT64 mask;
+    REGS *regs = vcpu_regs(vcpu);
+    mask = IA64_PSR_BE | IA64_PSR_UP | IA64_PSR_AC | IA64_PSR_MFL |
+           IA64_PSR_MFH | IA64_PSR_CPL | IA64_PSR_RI;
+    return (VCPU(vcpu, vpsr) & ~mask) | (regs->cr_ipsr & mask);
+}
+
 IA64FAULT vmx_vcpu_reset_psr_sm(VCPU *vcpu, UINT64 imm24)
 {
     UINT64 vpsr;
@@ -415,6 +415,7 @@ IA64FAULT vmx_vcpu_set_psr_sm(VCPU *vcpu
 
 IA64FAULT vmx_vcpu_set_psr_l(VCPU *vcpu, UINT64 val)
 {
+    val = (val & MASK(0, 32)) | (vmx_vcpu_get_psr(vcpu) & MASK(32, 32));
     vmx_vcpu_set_psr(vcpu, val);
     return IA64_NO_FAULT;
 }
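
The reinstated vmx_vcpu_get_psr() above notes that the virtual PSR in the VPD cannot track bits such as be, up, ac, mfl, mfh, cpl and ri, so the guest-visible PSR is composed from vpsr plus those bits of the live cr.ipsr. A small sketch of that bit-merge; the mask constants below are stand-ins for the IA64_PSR_* definitions:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins for a few IA64_PSR_* masks. */
    #define PSR_BE   (1UL << 1)
    #define PSR_UP   (1UL << 2)
    #define PSR_AC   (1UL << 3)
    #define PSR_I    (1UL << 14)
    #define PSR_CPL  (3UL << 32)
    #define PSR_RI   (3UL << 41)

    /* Bits the virtual PSR does not track; take them from the live cr.ipsr. */
    #define LIVE_BITS (PSR_BE | PSR_UP | PSR_AC | PSR_CPL | PSR_RI)

    static uint64_t guest_psr(uint64_t vpsr, uint64_t ipsr)
    {
        return (vpsr & ~LIVE_BITS) | (ipsr & LIVE_BITS);
    }

    int main(void)
    {
        uint64_t vpsr = PSR_I;                   /* guest believes interrupts are enabled */
        uint64_t ipsr = PSR_AC | (2UL << 41);    /* live alignment-check bit, ri = 2      */

        printf("guest-visible psr = 0x%llx\n",
               (unsigned long long)guest_psr(vpsr, ipsr));
        return 0;
    }
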
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/vmx/vmx_virt.c
--- a/xen/arch/ia64/vmx/vmx_virt.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/vmx/vmx_virt.c      Fri Jul 28 10:51:38 2006 +0100
@@ -20,10 +20,7 @@
  *  Shaofan Li (Susue Li) <susie.li@xxxxxxxxx>
  *  Xuefei Xu (Anthony Xu) (Anthony.xu@xxxxxxxxx)
  */
-
-
-
-#include <asm/privop.h>
+#include <asm/bundle.h>
 #include <asm/vmx_vcpu.h>
 #include <asm/processor.h>
 #include <asm/delay.h> // Debug only
@@ -33,8 +30,6 @@
 #include <asm/vmx.h>
 #include <asm/virt_event.h>
 #include <asm/vmx_phy_mode.h>
-extern UINT64 privop_trace;
-extern void vhpi_detection(VCPU *vcpu);//temporarily place here,need a header file.
 
 void
 ia64_priv_decoder(IA64_SLOT_TYPE slot_type, INST64 inst, UINT64  * cause)
@@ -159,7 +154,6 @@ IA64FAULT vmx_emul_ssm(VCPU *vcpu, INST6
     return vmx_vcpu_set_psr_sm(vcpu,imm24);
 }
 
-unsigned long last_guest_psr = 0x0;
 IA64FAULT vmx_emul_mov_from_psr(VCPU *vcpu, INST64 inst)
 {
     UINT64 tgt = inst.M33.r1;
@@ -172,7 +166,6 @@ IA64FAULT vmx_emul_mov_from_psr(VCPU *vc
     */
     val = vmx_vcpu_get_psr(vcpu);
     val = (val & MASK(0, 32)) | (val & MASK(35, 2));
-    last_guest_psr = val;
     return vcpu_set_gr(vcpu, tgt, val, 0);
 }
 
@@ -186,14 +179,7 @@ IA64FAULT vmx_emul_mov_to_psr(VCPU *vcpu
     if(vcpu_get_gr_nat(vcpu, inst.M35.r2, &val) != IA64_NO_FAULT)
        panic_domain(vcpu_regs(vcpu),"get_psr nat bit fault\n");
 
-       val = (val & MASK(0, 32)) | (VCPU(vcpu, vpsr) & MASK(32, 32));
-#if 0
-       if (last_mov_from_psr && (last_guest_psr != (val & MASK(0,32))))
-               while(1);
-       else
-               last_mov_from_psr = 0;
-#endif
-        return vmx_vcpu_set_psr_l(vcpu,val);
+    return vmx_vcpu_set_psr_l(vcpu, val);
 }
 
 
@@ -261,6 +247,7 @@ IA64FAULT vmx_emul_ptc_l(VCPU *vcpu, INS
 IA64FAULT vmx_emul_ptc_l(VCPU *vcpu, INST64 inst)
 {
     u64 r2,r3;
+#ifdef  VMAL_NO_FAULT_CHECK
     IA64_PSR  vpsr;
 
     vpsr.val=vmx_vcpu_get_psr(vcpu);
@@ -270,6 +257,7 @@ IA64FAULT vmx_emul_ptc_l(VCPU *vcpu, INS
         privilege_op (vcpu);
         return IA64_FAULT;
     }
+#endif // VMAL_NO_FAULT_CHECK
     if(vcpu_get_gr_nat(vcpu,inst.M45.r3,&r3)||vcpu_get_gr_nat(vcpu,inst.M45.r2,&r2)){
 #ifdef  VMAL_NO_FAULT_CHECK
         ISR isr;
@@ -293,10 +281,10 @@ IA64FAULT vmx_emul_ptc_e(VCPU *vcpu, INS
 IA64FAULT vmx_emul_ptc_e(VCPU *vcpu, INST64 inst)
 {
     u64 r3;
+#ifdef  VMAL_NO_FAULT_CHECK
     IA64_PSR  vpsr;
 
     vpsr.val=vmx_vcpu_get_psr(vcpu);
-#ifdef  VMAL_NO_FAULT_CHECK
     ISR isr;
     if ( vpsr.cpl != 0) {
         /* Inject Privileged Operation fault into guest */
@@ -579,6 +567,7 @@ IA64FAULT vmx_emul_itr_d(VCPU *vcpu, INS
 IA64FAULT vmx_emul_itr_d(VCPU *vcpu, INST64 inst)
 {
     UINT64 itir, ifa, pte, slot;
+#ifdef  VMAL_NO_FAULT_CHECK
     IA64_PSR  vpsr;
     vpsr.val=vmx_vcpu_get_psr(vcpu);
     if ( vpsr.ic ) {
@@ -586,7 +575,6 @@ IA64FAULT vmx_emul_itr_d(VCPU *vcpu, INS
         illegal_op(vcpu);
         return IA64_FAULT;
     }
-#ifdef  VMAL_NO_FAULT_CHECK
     ISR isr;
     if ( vpsr.cpl != 0) {
         /* Inject Privileged Operation fault into guest */
@@ -638,7 +626,6 @@ IA64FAULT vmx_emul_itr_i(VCPU *vcpu, INS
     UINT64 itir, ifa, pte, slot;
 #ifdef  VMAL_NO_FAULT_CHECK
     ISR isr;
-#endif
     IA64_PSR  vpsr;
     vpsr.val=vmx_vcpu_get_psr(vcpu);
     if ( vpsr.ic ) {
@@ -646,7 +633,6 @@ IA64FAULT vmx_emul_itr_i(VCPU *vcpu, INS
         illegal_op(vcpu);
         return IA64_FAULT;
     }
-#ifdef  VMAL_NO_FAULT_CHECK
     if ( vpsr.cpl != 0) {
         /* Inject Privileged Operation fault into guest */
         set_privileged_operation_isr (vcpu, 0);
@@ -694,9 +680,10 @@ IA64FAULT vmx_emul_itr_i(VCPU *vcpu, INS
 
 IA64FAULT itc_fault_check(VCPU *vcpu, INST64 inst, u64 *itir, u64 *ifa,u64 *pte)
 {
+    IA64FAULT  ret1;
+
+#ifdef  VMAL_NO_FAULT_CHECK
     IA64_PSR  vpsr;
-    IA64FAULT  ret1;
-
     vpsr.val=vmx_vcpu_get_psr(vcpu);
     if ( vpsr.ic ) {
         set_illegal_op_isr(vcpu);
@@ -704,7 +691,6 @@ IA64FAULT itc_fault_check(VCPU *vcpu, IN
         return IA64_FAULT;
     }
 
-#ifdef  VMAL_NO_FAULT_CHECK
     UINT64 fault;
     ISR isr;
     if ( vpsr.cpl != 0) {
@@ -1346,14 +1332,6 @@ IA64FAULT vmx_emul_mov_from_cr(VCPU *vcp
 }
 
 
-static void post_emulation_action(VCPU *vcpu)
-{
-    if ( vcpu->arch.irq_new_condition ) {
-        vcpu->arch.irq_new_condition = 0;
-        vhpi_detection(vcpu);
-    }
-}
-
 //#define  BYPASS_VMAL_OPCODE
 extern IA64_SLOT_TYPE  slot_types[0x20][3];
 IA64_BUNDLE __vmx_get_domain_bundle(u64 iip)
@@ -1381,15 +1359,6 @@ vmx_emulate(VCPU *vcpu, REGS *regs)
     cause = VMX(vcpu,cause);
     opcode = VMX(vcpu,opcode);
 
-/*
-    if (privop_trace) {
-        static long i = 400;
-        //if (i > 0) printf("privop @%p\n",iip);
-        if (i > 0) printf("priv_handle_op: @%p, itc=%lx, itm=%lx\n",
-            iip,ia64_get_itc(),ia64_get_itm());
-        i--;
-    }
-*/
 #ifdef  VTLB_DEBUG
     check_vtlb_sanity(vmx_vcpu_get_vtlb(vcpu));
     dump_vtlb(vmx_vcpu_get_vtlb(vcpu));
@@ -1565,8 +1534,6 @@ if ( (cause == 0xff && opcode == 0x1e000
     }
 
     recover_if_physical_mode(vcpu);
-    post_emulation_action (vcpu);
-//TODO    set_irq_check(v);
     return;
 
 }
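
Several vmx_virt.c hunks above move the privilege and illegal-operation checks entirely under #ifdef VMAL_NO_FAULT_CHECK, so the virtual PSR is only fetched when those checks are compiled in. A self-contained sketch of the same guard pattern; the function name and its behaviour are illustrative, not the real emulation path:

    #include <stdio.h>

    /* Uncomment to compile the optional privilege checks back in. */
    /* #define VMAL_NO_FAULT_CHECK */

    static int emul_ptc_l(unsigned long guest_cpl)
    {
    #ifdef VMAL_NO_FAULT_CHECK
        /* The virtual PSR is only read when the checks are compiled in. */
        if (guest_cpl != 0) {
            printf("inject privileged-operation fault\n");
            return -1;
        }
    #else
        (void)guest_cpl;            /* checks compiled out: no vpsr read at all */
    #endif
        printf("purge and perform the ptc.l\n");
        return 0;
    }

    int main(void)
    {
        return emul_ptc_l(3) ? 1 : 0;
    }
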
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/Makefile
--- a/xen/arch/ia64/xen/Makefile        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/Makefile        Fri Jul 28 10:51:38 2006 +0100
@@ -24,5 +24,6 @@ obj-y += xensetup.o
 obj-y += xensetup.o
 obj-y += xentime.o
 obj-y += flushd.o
+obj-y += privop_stat.o
 
 obj-$(crash_debug) += gdbstub.o
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/dom0_ops.c
--- a/xen/arch/ia64/xen/dom0_ops.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/dom0_ops.c      Fri Jul 28 10:51:38 2006 +0100
@@ -19,6 +19,11 @@
 #include <xen/guest_access.h>
 #include <public/sched_ctl.h>
 #include <asm/vmx.h>
+#include <asm/dom_fw.h>
+#include <xen/iocap.h>
+
+void build_physmap_table(struct domain *d);
+
 extern unsigned long total_pages;
 long arch_do_dom0_op(dom0_op_t *op, XEN_GUEST_HANDLE(dom0_op_t) u_dom0_op)
 {
@@ -154,52 +159,37 @@ long arch_do_dom0_op(dom0_op_t *op, XEN_
 
     case DOM0_GETMEMLIST:
     {
-        unsigned long i = 0;
+        unsigned long i;
         struct domain *d = find_domain_by_id(op->u.getmemlist.domain);
         unsigned long start_page = op->u.getmemlist.max_pfns >> 32;
         unsigned long nr_pages = op->u.getmemlist.max_pfns & 0xffffffff;
         unsigned long mfn;
-        struct list_head *list_ent;
-
-        ret = -EINVAL;
-        if ( d != NULL )
-        {
-            ret = 0;
-
-            list_ent = d->page_list.next;
-            while ( (i != start_page) && (list_ent != &d->page_list)) {
-                mfn = page_to_mfn(list_entry(
-                    list_ent, struct page_info, list));
-                i++;
-                list_ent = mfn_to_page(mfn)->list.next;
-            }
-
-            if (i == start_page)
-            {
-                while((i < (start_page + nr_pages)) &&
-                      (list_ent != &d->page_list))
-                {
-                    mfn = page_to_mfn(list_entry(
-                        list_ent, struct page_info, list));
-
-                    if ( copy_to_guest_offset(op->u.getmemlist.buffer,
-                                          i - start_page, &mfn, 1) )
-                    {
-                        ret = -EFAULT;
-                        break;
-                    }
-                    i++;
-                    list_ent = mfn_to_page(mfn)->list.next;
-                }
-            } else
-                ret = -ENOMEM;
-
-            op->u.getmemlist.num_pfns = i - start_page;
-            if (copy_to_guest(u_dom0_op, op, 1))
-                ret = -EFAULT;
-            
-            put_domain(d);
-        }
+
+        if ( d == NULL ) {
+            ret = -EINVAL;
+            break;
+        }
+        for (i = 0 ; i < nr_pages ; i++) {
+            pte_t *pte;
+
+            pte = (pte_t *)lookup_noalloc_domain_pte(d,
+                                               (start_page + i) << PAGE_SHIFT);
+            if (pte && pte_present(*pte))
+                mfn = pte_pfn(*pte);
+            else
+                mfn = INVALID_MFN;
+
+            if ( copy_to_guest_offset(op->u.getmemlist.buffer, i, &mfn, 1) ) {
+                    ret = -EFAULT;
+                    break;
+            }
+        }
+
+        op->u.getmemlist.num_pfns = i;
+        if (copy_to_guest(u_dom0_op, op, 1))
+            ret = -EFAULT;
+
+        put_domain(d);
     }
     break;
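
The reworked DOM0_GETMEMLIST above resolves each requested frame individually and reports INVALID_MFN for unpopulated slots, instead of walking the domain's page_list. A user-space sketch of that per-frame pattern, with a toy p2m array standing in for lookup_noalloc_domain_pte() and printf standing in for copy_to_guest_offset() (all names here are illustrative):

#include <stdio.h>

#define INVALID_MFN (~0UL)

/* Toy physical-to-machine table: slot 2 is an unpopulated hole. */
static unsigned long p2m[] = { 100, 101, INVALID_MFN, 103 };
#define P2M_ENTRIES (sizeof(p2m) / sizeof(p2m[0]))

int main(void)
{
    unsigned long start_page = 0, nr_pages = 4, i;

    for (i = 0; i < nr_pages; i++) {
        unsigned long gpfn = start_page + i;
        unsigned long mfn = (gpfn < P2M_ENTRIES) ? p2m[gpfn] : INVALID_MFN;

        /* The hypervisor would copy_to_guest_offset() here. */
        printf("gpfn %lu -> mfn %#lx\n", gpfn, mfn);
    }
    return 0;
}
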
 
@@ -225,6 +215,95 @@ long arch_do_dom0_op(dom0_op_t *op, XEN_
     }
     break;
 
+    case DOM0_DOMAIN_SETUP:
+    {
+        dom0_domain_setup_t *ds = &op->u.domain_setup;
+        struct domain *d = find_domain_by_id(ds->domain);
+
+        if ( d == NULL) {
+            ret = -EINVAL;
+            break;
+        }
+
+        if (ds->flags & XEN_DOMAINSETUP_query) {
+            /* Set flags.  */
+            if (d->arch.is_vti)
+                ds->flags |= XEN_DOMAINSETUP_hvm_guest;
+            /* Set params.  */
+            ds->bp = 0;                /* unknown.  */
+            ds->maxmem = 0; /* unknown.  */
+            ds->xsi_va = d->arch.shared_info_va;
+            ds->hypercall_imm = d->arch.breakimm;
+            /* Copy back.  */
+            if ( copy_to_guest(u_dom0_op, op, 1) )
+                ret = -EFAULT;
+        }
+        else {
+            if (ds->flags & XEN_DOMAINSETUP_hvm_guest) {
+                if (!vmx_enabled) {
+                    printk("No VMX hardware feature for vmx domain.\n");
+                    ret = -EINVAL;
+                    break;
+                }
+                d->arch.is_vti = 1;
+                vmx_setup_platform(d);
+            }
+            else {
+                build_physmap_table(d);
+                dom_fw_setup(d, ds->bp, ds->maxmem);
+                if (ds->xsi_va)
+                    d->arch.shared_info_va = ds->xsi_va;
+                if (ds->hypercall_imm) {
+                    struct vcpu *v;
+                    d->arch.breakimm = ds->hypercall_imm;
+                    for_each_vcpu (d, v)
+                        v->arch.breakimm = d->arch.breakimm;
+                }
+            }
+        }
+
+        put_domain(d);
+    }
+    break;
+
+    case DOM0_SHADOW_CONTROL:
+    {
+        struct domain *d; 
+        ret = -ESRCH;
+        d = find_domain_by_id(op->u.shadow_control.domain);
+        if ( d != NULL )
+        {
+            ret = shadow_mode_control(d, &op->u.shadow_control);
+            put_domain(d);
+            copy_to_guest(u_dom0_op, op, 1);
+        } 
+    }
+    break;
+
+    case DOM0_IOPORT_PERMISSION:
+    {
+        struct domain *d;
+        unsigned int fp = op->u.ioport_permission.first_port;
+        unsigned int np = op->u.ioport_permission.nr_ports;
+        unsigned int lp = fp + np - 1;
+
+        ret = -ESRCH;
+        d = find_domain_by_id(op->u.ioport_permission.domain);
+        if (unlikely(d == NULL))
+            break;
+
+        if (np == 0)
+            ret = 0;
+        else {
+            if (op->u.ioport_permission.allow_access)
+                ret = ioports_permit_access(d, fp, lp);
+            else
+                ret = ioports_deny_access(d, fp, lp);
+        }
+
+        put_domain(d);
+    }
+    break;
     default:
         printf("arch_do_dom0_op: unrecognized dom0 op: %d!!!\n",op->cmd);
         ret = -ENOSYS;
@@ -235,6 +314,24 @@ long arch_do_dom0_op(dom0_op_t *op, XEN_
 }
 
 #ifdef CONFIG_XEN_IA64_DOM0_VP
+static unsigned long
+dom0vp_ioremap(struct domain *d, unsigned long mpaddr, unsigned long size)
+{
+    unsigned long end;
+
+    /* Linux may use a 0 size!  */
+    if (size == 0)
+        size = PAGE_SIZE;
+
+    end = PAGE_ALIGN(mpaddr + size);
+
+    if (!iomem_access_permitted(d, mpaddr >> PAGE_SHIFT,
+                                (end >> PAGE_SHIFT) - 1))
+        return -EPERM;
+
+    return assign_domain_mmio_page(d, mpaddr, size);
+}
+
 unsigned long
 do_dom0vp_op(unsigned long cmd,
              unsigned long arg0, unsigned long arg1, unsigned long arg2,
@@ -245,7 +342,7 @@ do_dom0vp_op(unsigned long cmd,
 
     switch (cmd) {
     case IA64_DOM0VP_ioremap:
-        ret = assign_domain_mmio_page(d, arg0, arg1);
+        ret = dom0vp_ioremap(d, arg0, arg1);
         break;
     case IA64_DOM0VP_phystomach:
         ret = ____lookup_domain_mpa(d, arg0 << PAGE_SHIFT);
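
dom0vp_ioremap above widens a zero-length request to one page, rounds the end of the range up, and checks I/O-memory access on the inclusive frame range before assigning the MMIO page. A sketch of that frame-range computation, assuming 16KB pages; the sample address and size are illustrative:

#include <stdio.h>

#define PAGE_SHIFT   14UL                      /* 16KB pages, as assumed here */
#define PAGE_SIZE    (1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
    unsigned long mpaddr = 0x80000000UL;       /* requested MMIO start */
    unsigned long size   = 0;                  /* Linux may pass a 0 size */
    unsigned long end, first_mfn, last_mfn;

    if (size == 0)
        size = PAGE_SIZE;                      /* treat as one page */

    end = PAGE_ALIGN(mpaddr + size);
    first_mfn = mpaddr >> PAGE_SHIFT;
    last_mfn  = (end >> PAGE_SHIFT) - 1;       /* inclusive last frame */

    /* iomem_access_permitted(d, first_mfn, last_mfn) would be checked here. */
    printf("check frames %#lx..%#lx\n", first_mfn, last_mfn);
    return 0;
}
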
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/dom_fw.c
--- a/xen/arch/ia64/xen/dom_fw.c        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/dom_fw.c        Fri Jul 28 10:51:38 2006 +0100
@@ -23,15 +23,18 @@
 #include <xen/acpi.h>
 
 #include <asm/dom_fw.h>
-
-static struct ia64_boot_param *dom_fw_init(struct domain *, const char *,int,char *,int);
+#include <asm/bundle.h>
+
+static void dom_fw_init (struct domain *d, struct ia64_boot_param *bp, char *fw_mem, int fw_mem_size, unsigned long maxmem);
+
 extern struct domain *dom0;
 extern unsigned long dom0_start;
 
 extern unsigned long running_on_sim;
 
-unsigned long dom_fw_base_mpa = -1;
-unsigned long imva_fw_base = -1;
+/* Note: two domains cannot be created simultaneously!  */
+static unsigned long dom_fw_base_mpa = -1;
+static unsigned long imva_fw_base = -1;
 
 #define FW_VENDOR "X\0e\0n\0/\0i\0a\0\066\0\064\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
 
@@ -82,6 +85,83 @@ dom_pa(unsigned long imva)
         }                                           \
     } while (0)
 
+/**************************************************************************
+Hypercall bundle creation
+**************************************************************************/
+
+static void build_hypercall_bundle(UINT64 *imva, UINT64 brkimm, UINT64 hypnum, UINT64 ret)
+{
+       INST64_A5 slot0;
+       INST64_I19 slot1;
+       INST64_B4 slot2;
+       IA64_BUNDLE bundle;
+
+       // slot0: mov r2 = hypnum (low 20 bits)
+       slot0.inst = 0;
+       slot0.qp = 0; slot0.r1 = 2; slot0.r3 = 0; slot0.major = 0x9;
+       slot0.imm7b = hypnum; slot0.imm9d = hypnum >> 7;
+       slot0.imm5c = hypnum >> 16; slot0.s = 0;
+       // slot1: break brkimm
+       slot1.inst = 0;
+       slot1.qp = 0; slot1.x6 = 0; slot1.x3 = 0; slot1.major = 0x0;
+       slot1.imm20 = brkimm; slot1.i = brkimm >> 20;
+       // if ret slot2: br.ret.sptk.many rp
+       // else slot2: br.cond.sptk.many rp
+       slot2.inst = 0; slot2.qp = 0; slot2.p = 1; slot2.b2 = 0;
+       slot2.wh = 0; slot2.d = 0; slot2.major = 0x0;
+       if (ret) {
+               slot2.btype = 4; slot2.x6 = 0x21;
+       }
+       else {
+               slot2.btype = 0; slot2.x6 = 0x20;
+       }
+       
+       bundle.i64[0] = 0; bundle.i64[1] = 0;
+       bundle.template = 0x11;
+       bundle.slot0 = slot0.inst; bundle.slot2 = slot2.inst;
+       bundle.slot1a = slot1.inst; bundle.slot1b = slot1.inst >> 18;
+       
+       imva[0] = bundle.i64[0]; imva[1] = bundle.i64[1];
+       ia64_fc(imva);
+       ia64_fc(imva + 1);
+}
+
+static void build_pal_hypercall_bundles(UINT64 *imva, UINT64 brkimm, UINT64 hypnum)
+{
+       extern unsigned long pal_call_stub[];
+       IA64_BUNDLE bundle;
+       INST64_A5 slot_a5;
+       INST64_M37 slot_m37;
+
+       /* The source of the hypercall stub is the pal_call_stub function
+          defined in xenasm.S.  */
+
+       /* Copy the first bundle and patch the hypercall number.  */
+       bundle.i64[0] = pal_call_stub[0];
+       bundle.i64[1] = pal_call_stub[1];
+       slot_a5.inst = bundle.slot0;
+       slot_a5.imm7b = hypnum;
+       slot_a5.imm9d = hypnum >> 7;
+       slot_a5.imm5c = hypnum >> 16;
+       bundle.slot0 = slot_a5.inst;
+       imva[0] = bundle.i64[0];
+       imva[1] = bundle.i64[1];
+       ia64_fc(imva);
+       ia64_fc(imva + 1);
+       
+       /* Copy the second bundle and patch the hypercall vector.  */
+       bundle.i64[0] = pal_call_stub[2];
+       bundle.i64[1] = pal_call_stub[3];
+       slot_m37.inst = bundle.slot0;
+       slot_m37.imm20a = brkimm;
+       slot_m37.i = brkimm >> 20;
+       bundle.slot0 = slot_m37.inst;
+       imva[2] = bundle.i64[0];
+       imva[3] = bundle.i64[1];
+       ia64_fc(imva + 2);
+       ia64_fc(imva + 3);
+}
+
 // builds a hypercall bundle at domain physical address
 static void dom_fpswa_hypercall_patch(struct domain *d)
 {
@@ -138,21 +218,22 @@ static void dom_fw_pal_hypercall_patch(s
 }
 
 
-// FIXME: This is really a hack: Forcing the boot parameter block
-// at domain mpaddr 0 page, then grabbing only the low bits of the
-// Xen imva, which is the offset into the page
-unsigned long dom_fw_setup(struct domain *d, const char *args, int arglen)
+void dom_fw_setup(struct domain *d, unsigned long bp_mpa, unsigned long maxmem)
 {
        struct ia64_boot_param *bp;
 
        dom_fw_base_mpa = 0;
 #ifndef CONFIG_XEN_IA64_DOM0_VP
-       if (d == dom0) dom_fw_base_mpa += dom0_start;
+       if (d == dom0) {
+               dom_fw_base_mpa += dom0_start;
+               bp_mpa += dom0_start;
+       }
 #endif
        ASSIGN_NEW_DOMAIN_PAGE_IF_DOM0(d, dom_fw_base_mpa);
        imva_fw_base = (unsigned long) domain_mpa_to_imva(d, dom_fw_base_mpa);
-       bp = dom_fw_init(d, args, arglen, (char *) imva_fw_base, PAGE_SIZE);
-       return dom_pa((unsigned long) bp);
+       ASSIGN_NEW_DOMAIN_PAGE_IF_DOM0(d, bp_mpa);
+       bp = domain_mpa_to_imva(d, bp_mpa);
+       dom_fw_init(d, bp, (char *) imva_fw_base, PAGE_SIZE, maxmem);
 }
 
 
@@ -525,8 +606,8 @@ efi_mdt_cmp(const void *a, const void *b
        return 0;
 }
 
-static struct ia64_boot_param *
-dom_fw_init (struct domain *d, const char *args, int arglen, char *fw_mem, int fw_mem_size)
+static void
+dom_fw_init (struct domain *d, struct ia64_boot_param *bp, char *fw_mem, int fw_mem_size, unsigned long maxmem)
 {
        efi_system_table_t *efi_systab;
        efi_runtime_services_t *efi_runtime;
@@ -536,12 +617,11 @@ dom_fw_init (struct domain *d, const cha
        struct ia64_sal_desc_ap_wakeup *sal_wakeup;
        fpswa_interface_t *fpswa_inf;
        efi_memory_desc_t *efi_memmap, *md;
-       struct ia64_boot_param *bp;
+       struct xen_sal_data *sal_data;
        unsigned long *pfn;
        unsigned char checksum = 0;
-       char *cp, *cmd_line, *fw_vendor;
+       char *cp, *fw_vendor;
        int num_mds, j, i = 0;
-       unsigned long maxmem = (d->max_pages - d->arch.sys_pgnr) * PAGE_SIZE;
 #ifdef CONFIG_XEN_IA64_DOM0_VP
        const unsigned long start_mpaddr = 0;
 #else
@@ -566,33 +646,23 @@ dom_fw_init (struct domain *d, const cha
        sal_wakeup  = (void *) cp; cp += sizeof(*sal_wakeup);
        fpswa_inf   = (void *) cp; cp += sizeof(*fpswa_inf);
        efi_memmap  = (void *) cp; cp += NUM_MEM_DESCS*sizeof(*efi_memmap);
-       bp          = (void *) cp; cp += sizeof(*bp);
        pfn         = (void *) cp; cp += NFUNCPTRS * 2 * sizeof(pfn);
-       cmd_line    = (void *) cp;
+       sal_data    = (void *) cp; cp += sizeof(*sal_data);
 
        /* Initialise for EFI_SET_VIRTUAL_ADDRESS_MAP emulation */
        d->arch.efi_runtime = efi_runtime;
        d->arch.fpswa_inf   = fpswa_inf;
-
-       if (args) {
-               if (arglen >= 1024)
-                       arglen = 1023;
-               memcpy(cmd_line, args, arglen);
-       } else {
-               arglen = 0;
-       }
-       cmd_line[arglen] = '\0';
+       d->arch.sal_data    = sal_data;
 
        memset(efi_systab, 0, sizeof(efi_systab));
        efi_systab->hdr.signature = EFI_SYSTEM_TABLE_SIGNATURE;
        efi_systab->hdr.revision  = EFI_SYSTEM_TABLE_REVISION;
        efi_systab->hdr.headersize = sizeof(efi_systab->hdr);
-       cp = fw_vendor = &cmd_line[arglen] + (2-(arglen&1)); // round to 16-bit boundary
+       fw_vendor = cp;
        cp += sizeof(FW_VENDOR) + (8-((unsigned long)cp & 7)); // round to 64-bit boundary
 
        memcpy(fw_vendor,FW_VENDOR,sizeof(FW_VENDOR));
        efi_systab->fw_vendor = dom_pa((unsigned long) fw_vendor);
-       
        efi_systab->fw_revision = 1;
        efi_systab->runtime = (void *) dom_pa((unsigned long) efi_runtime);
        efi_systab->nr_tables = NUM_EFI_SYS_TABLES;
@@ -694,20 +764,20 @@ dom_fw_init (struct domain *d, const cha
        dom_fw_hypercall_patch (d, sal_ed->sal_proc, FW_HYPERCALL_SAL_CALL, 1);
        sal_ed->gp = 0;  // will be ignored
 
+       /* Fill an AP wakeup descriptor.  */
+       sal_wakeup->type = SAL_DESC_AP_WAKEUP;
+       sal_wakeup->mechanism = IA64_SAL_AP_EXTERNAL_INT;
+       sal_wakeup->vector = XEN_SAL_BOOT_RENDEZ_VEC;
+
+       /* Compute checksum.  */
+       for (cp = (char *) sal_systab; cp < (char *) efi_memmap; ++cp)
+               checksum += *cp;
+       sal_systab->checksum = -checksum;
+
        /* SAL return point.  */
        d->arch.sal_return_addr = FW_HYPERCALL_SAL_RETURN_PADDR + start_mpaddr;
        dom_fw_hypercall_patch (d, d->arch.sal_return_addr,
                                FW_HYPERCALL_SAL_RETURN, 0);
-
-       /* Fill an AP wakeup descriptor.  */
-       sal_wakeup->type = SAL_DESC_AP_WAKEUP;
-       sal_wakeup->mechanism = IA64_SAL_AP_EXTERNAL_INT;
-       sal_wakeup->vector = XEN_SAL_BOOT_RENDEZ_VEC;
-
-       for (cp = (char *) sal_systab; cp < (char *) efi_memmap; ++cp)
-               checksum += *cp;
-
-       sal_systab->checksum = -checksum;
 
        /* Fill in the FPSWA interface: */
        fpswa_inf->revision = fpswa_interface->revision;
@@ -784,15 +854,18 @@ dom_fw_init (struct domain *d, const cha
                else MAKE_MD(EFI_RESERVED_TYPE,0,0,0,0);
        } else {
 #ifndef CONFIG_XEN_IA64_DOM0_VP
-               MAKE_MD(EFI_LOADER_DATA,EFI_MEMORY_WB,0*MB,1*MB, 1);
-               MAKE_MD(EFI_CONVENTIONAL_MEMORY,EFI_MEMORY_WB,HYPERCALL_END,maxmem, 1);
-#endif
-               /* hypercall patches live here, masquerade as reserved PAL memory */
-               MAKE_MD(EFI_PAL_CODE,EFI_MEMORY_WB|EFI_MEMORY_RUNTIME,HYPERCALL_START,HYPERCALL_END, 1);
-               /* Create a dummy entry for IO ports, so that IO accesses are
-                  trapped by Xen.  */
-               MAKE_MD(EFI_MEMORY_MAPPED_IO_PORT_SPACE,EFI_MEMORY_UC,
-                       0x00000ffffc000000, 0x00000fffffffffff, 1);
+               /* Dom0 maps legacy mmio in first MB.  */
+               MAKE_MD(EFI_LOADER_DATA, EFI_MEMORY_WB, 0*MB, 1*MB, 1);
+               MAKE_MD(EFI_CONVENTIONAL_MEMORY, EFI_MEMORY_WB,
+                       HYPERCALL_END, maxmem, 1);
+#endif
+               /* hypercall patches live here, masquerade as reserved
+                  PAL memory */
+               MAKE_MD(EFI_PAL_CODE, EFI_MEMORY_WB | EFI_MEMORY_RUNTIME,
+                       HYPERCALL_START, HYPERCALL_END, 1);
+               /* Create an entry for IO ports.  */
+               MAKE_MD(EFI_MEMORY_MAPPED_IO_PORT_SPACE, EFI_MEMORY_UC,
+                       IO_PORTS_PADDR, IO_PORTS_PADDR + IO_PORTS_SIZE, 1);
                MAKE_MD(EFI_RESERVED_TYPE,0,0,0,0);
        }
 
@@ -848,7 +921,7 @@ dom_fw_init (struct domain *d, const cha
        bp->efi_memmap_size = i * sizeof(efi_memory_desc_t);
        bp->efi_memdesc_size = sizeof(efi_memory_desc_t);
        bp->efi_memdesc_version = EFI_MEMDESC_VERSION;
-       bp->command_line = dom_pa((unsigned long) cmd_line);
+       bp->command_line = 0;
        bp->console_info.num_cols = 80;
        bp->console_info.num_rows = 25;
        bp->console_info.orig_x = 0;
@@ -858,12 +931,6 @@ dom_fw_init (struct domain *d, const cha
                int j;
                u64 addr;
 
-               // XXX CONFIG_XEN_IA64_DOM0_VP
-               // initrd_start address is hard coded in construct_dom0()
-               bp->initrd_start = (dom0_start+dom0_size) -
-                 (PAGE_ALIGN(ia64_boot_param->initrd_size) + 4*1024*1024);
-               bp->initrd_size = ia64_boot_param->initrd_size;
-
                // dom0 doesn't need build_physmap_table()
                // see arch_set_info_guest()
                // instead we allocate pages manually.
@@ -899,17 +966,9 @@ dom_fw_init (struct domain *d, const cha
                        if (efi_mmio(addr, PAGE_SIZE))
                                assign_domain_mmio_page(d, addr, PAGE_SIZE);
                }
-               d->arch.physmap_built = 1;
-       }
-       else {
-               bp->initrd_start = d->arch.initrd_start;
-               bp->initrd_size  = d->arch.initrd_len;
        }
        for (i = 0 ; i < bp->efi_memmap_size/sizeof(efi_memory_desc_t) ; i++) {
                md = efi_memmap + i;
                print_md(md);
        }
-       printf(" initrd start 0x%lx", bp->initrd_start);
-       printf(" initrd size 0x%lx\n", bp->initrd_size);
-       return bp;
-}
+}
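
build_hypercall_bundle above scatters the hypercall number across the imm7b, imm9d and imm5c fields of an IA-64 A5-format slot (bits 0-6, 7-15 and 16-20, with s carrying the sign), exactly as the shifts in the patch suggest. A small host-side sketch of that split and its reassembly; the field widths follow the INST64_A5 layout used here and the sample value is illustrative:

#include <stdio.h>
#include <stdint.h>

/* Immediate fields of an A5-format instruction, as used above. */
struct a5_imm {
    uint32_t imm7b : 7;    /* bits 0..6  of the immediate */
    uint32_t imm9d : 9;    /* bits 7..15 */
    uint32_t imm5c : 5;    /* bits 16..20 */
    uint32_t s     : 1;    /* sign */
};

int main(void)
{
    uint32_t hypnum = 0x1000;          /* sample hypercall number */
    struct a5_imm f;
    uint32_t back;

    f.imm7b = hypnum;                  /* truncated to 7 bits */
    f.imm9d = hypnum >> 7;             /* next 9 bits */
    f.imm5c = hypnum >> 16;            /* next 5 bits */
    f.s     = 0;

    back = f.imm7b | (f.imm9d << 7) | (f.imm5c << 16);
    printf("hypnum=%#x reassembled=%#x\n", hypnum, back);
    return back == hypnum ? 0 : 1;
}
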
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/domain.c
--- a/xen/arch/ia64/xen/domain.c        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/domain.c        Fri Jul 28 10:51:38 2006 +0100
@@ -25,26 +25,15 @@
 #include <xen/mm.h>
 #include <xen/iocap.h>
 #include <asm/asm-xsi-offsets.h>
-#include <asm/ptrace.h>
 #include <asm/system.h>
 #include <asm/io.h>
 #include <asm/processor.h>
-#include <asm/desc.h>
-#include <asm/hw_irq.h>
-#include <asm/setup.h>
-//#include <asm/mpspec.h>
-#include <xen/irq.h>
 #include <xen/event.h>
-//#include <xen/shadow.h>
 #include <xen/console.h>
 #include <xen/compile.h>
-
 #include <xen/elf.h>
-//#include <asm/page.h>
 #include <asm/pgalloc.h>
-
 #include <asm/offsets.h>  /* for IA64_THREAD_INFO_SIZE */
-
 #include <asm/vcpu.h>   /* for function declarations */
 #include <public/arch-ia64.h>
 #include <xen/domain.h>
@@ -52,13 +41,13 @@
 #include <asm/vmx_vcpu.h>
 #include <asm/vmx_vpd.h>
 #include <asm/vmx_phy_mode.h>
-#include <asm/pal.h>
 #include <asm/vhpt.h>
-#include <public/hvm/ioreq.h>
 #include <public/arch-ia64.h>
 #include <asm/tlbflush.h>
 #include <asm/regionreg.h>
 #include <asm/dom_fw.h>
+#include <asm/shadow.h>
+#include <asm/privop_stat.h>
 
 #ifndef CONFIG_XEN_IA64_DOM0_VP
 #define CONFIG_DOMAIN0_CONTIGUOUS
@@ -79,11 +68,8 @@ extern void serial_input_init(void);
 extern void serial_input_init(void);
 static void init_switch_stack(struct vcpu *v);
 extern void vmx_do_launch(struct vcpu *);
-void build_physmap_table(struct domain *d);
 
 /* this belongs in include/asm, but there doesn't seem to be a suitable place */
-unsigned long context_switch_count = 0;
-
 extern struct vcpu *ia64_switch_to (struct vcpu *next_task);
 
 /* Address of vpsr.i (in fact evtchn_upcall_mask) of current vcpu.
@@ -92,6 +78,36 @@ DEFINE_PER_CPU(int *, current_psr_ic_add
 DEFINE_PER_CPU(int *, current_psr_ic_addr);
 
 #include <xen/sched-if.h>
+
+static void flush_vtlb_for_context_switch(struct vcpu* vcpu)
+{
+       int cpu = smp_processor_id();
+       int last_vcpu_id = vcpu->domain->arch.last_vcpu[cpu].vcpu_id;
+       int last_processor = vcpu->arch.last_processor;
+
+       if (is_idle_domain(vcpu->domain))
+               return;
+       
+       vcpu->domain->arch.last_vcpu[cpu].vcpu_id = vcpu->vcpu_id;
+       vcpu->arch.last_processor = cpu;
+
+       if ((last_vcpu_id != vcpu->vcpu_id &&
+            last_vcpu_id != INVALID_VCPU_ID) ||
+           (last_vcpu_id == vcpu->vcpu_id &&
+            last_processor != cpu &&
+            last_processor != INVALID_PROCESSOR)) {
+
+               // if the vTLB implementation is changed,
+               // the following must be updated as well.
+               if (VMX_DOMAIN(vcpu)) {
+                       // currently the vTLB for a VT-i domain is per vcpu,
+                       // so no flushing is needed.
+               } else {
+                       vhpt_flush();
+               }
+               local_flush_tlb_all();
+       }
+}
 
 void schedule_tail(struct vcpu *prev)
 {
@@ -111,6 +127,7 @@ void schedule_tail(struct vcpu *prev)
                __ia64_per_cpu_var(current_psr_ic_addr) = (int *)
                  (current->domain->arch.shared_info_va + XSI_PSR_IC_OFS);
        }
+       flush_vtlb_for_context_switch(current);
 }
 
 void context_switch(struct vcpu *prev, struct vcpu *next)
@@ -176,6 +193,7 @@ if (!i--) { i = 1000000; printk("+"); }
                __ia64_per_cpu_var(current_psr_ic_addr) = NULL;
         }
     }
+    flush_vtlb_for_context_switch(current);
     local_irq_restore(spsr);
     context_saved(prev);
 }
@@ -187,16 +205,14 @@ void continue_running(struct vcpu *same)
 
 static void default_idle(void)
 {
-       int cpu = smp_processor_id();
        local_irq_disable();
-       if ( !softirq_pending(cpu))
+       if ( !softirq_pending(smp_processor_id()) )
                safe_halt();
        local_irq_enable();
 }
 
 static void continue_cpu_idle_loop(void)
 {
-       int cpu = smp_processor_id();
        for ( ; ; )
        {
 #ifdef IA64
@@ -204,12 +220,10 @@ static void continue_cpu_idle_loop(void)
 #else
            irq_stat[cpu].idle_timestamp = jiffies;
 #endif
-           while ( !softirq_pending(cpu) )
+           while ( !softirq_pending(smp_processor_id()) )
                default_idle();
-           add_preempt_count(SOFTIRQ_OFFSET);
            raise_softirq(SCHEDULE_SOFTIRQ);
            do_softirq();
-           sub_preempt_count(SOFTIRQ_OFFSET);
        }
 }
 
@@ -246,14 +260,15 @@ struct vcpu *alloc_vcpu_struct(struct do
        }
 
        if (!is_idle_domain(d)) {
-           v->arch.privregs = 
-               alloc_xenheap_pages(get_order(sizeof(mapped_regs_t)));
-           BUG_ON(v->arch.privregs == NULL);
-           memset(v->arch.privregs, 0, PAGE_SIZE);
-
-           if (!vcpu_id)
-               memset(&d->shared_info->evtchn_mask[0], 0xff,
-                   sizeof(d->shared_info->evtchn_mask));
+           if (!d->arch.is_vti) {
+               /* Create privregs page only if not VTi.  */
+               v->arch.privregs = 
+                   alloc_xenheap_pages(get_order(sizeof(mapped_regs_t)));
+               BUG_ON(v->arch.privregs == NULL);
+               memset(v->arch.privregs, 0, PAGE_SIZE);
+               share_xen_page_with_guest(virt_to_page(v->arch.privregs),
+                                         d, XENSHARE_writable);
+           }
 
            v->arch.metaphysical_rr0 = d->arch.metaphysical_rr0;
            v->arch.metaphysical_rr4 = d->arch.metaphysical_rr4;
@@ -274,6 +289,7 @@ struct vcpu *alloc_vcpu_struct(struct do
            v->arch.starting_rid = d->arch.starting_rid;
            v->arch.ending_rid = d->arch.ending_rid;
            v->arch.breakimm = d->arch.breakimm;
+           v->arch.last_processor = INVALID_PROCESSOR;
        }
 
        return v;
@@ -285,7 +301,8 @@ void free_vcpu_struct(struct vcpu *v)
                vmx_relinquish_vcpu_resources(v);
        else {
                if (v->arch.privregs != NULL)
-                       free_xenheap_pages(v->arch.privregs, get_order(sizeof(mapped_regs_t)));
+                       free_xenheap_pages(v->arch.privregs,
+                                     get_order_from_shift(XMAPPEDREGS_SHIFT));
        }
 
        free_xenheap_pages(v, KERNEL_STACK_SIZE_ORDER);
@@ -310,16 +327,25 @@ static void init_switch_stack(struct vcp
 
 int arch_domain_create(struct domain *d)
 {
+       int i;
+       
        // the following will eventually need to be negotiated dynamically
        d->arch.shared_info_va = DEFAULT_SHAREDINFO_ADDR;
        d->arch.breakimm = 0x1000;
+       for (i = 0; i < NR_CPUS; i++) {
+               d->arch.last_vcpu[i].vcpu_id = INVALID_VCPU_ID;
+       }
 
        if (is_idle_domain(d))
            return 0;
 
-       if ((d->shared_info = (void *)alloc_xenheap_page()) == NULL)
+       d->shared_info = alloc_xenheap_pages(get_order_from_shift(XSI_SHIFT));
+       if (d->shared_info == NULL)
            goto fail_nomem;
-       memset(d->shared_info, 0, PAGE_SIZE);
+       memset(d->shared_info, 0, XSI_SIZE);
+       for (i = 0; i < XSI_SIZE; i += PAGE_SIZE)
+           share_xen_page_with_guest(virt_to_page((char *)d->shared_info + i),
+                                     d, XENSHARE_writable);
 
        d->max_pages = (128UL*1024*1024)/PAGE_SIZE; // 128MB default // FIXME
        /* We may also need emulation rid for region4, though it's unlikely
@@ -328,13 +354,14 @@ int arch_domain_create(struct domain *d)
         */
        if (!allocate_rid_range(d,0))
                goto fail_nomem;
-       d->arch.sys_pgnr = 0;
 
        memset(&d->arch.mm, 0, sizeof(d->arch.mm));
 
-       d->arch.physmap_built = 0;
        if ((d->arch.mm.pgd = pgd_alloc(&d->arch.mm)) == NULL)
            goto fail_nomem;
+
+       d->arch.ioport_caps = rangeset_new(d, "I/O Ports",
+                                          RANGESETF_prettyprint_hex);
 
        printf ("arch_domain_create: domain=%p\n", d);
        return 0;
@@ -343,7 +370,7 @@ fail_nomem:
        if (d->arch.mm.pgd != NULL)
            pgd_free(d->arch.mm.pgd);
        if (d->shared_info != NULL)
-           free_xenheap_page(d->shared_info);
+           free_xenheap_pages(d->shared_info, get_order_from_shift(XSI_SHIFT));
        return -ENOMEM;
 }
 
@@ -351,80 +378,85 @@ void arch_domain_destroy(struct domain *
 {
        BUG_ON(d->arch.mm.pgd != NULL);
        if (d->shared_info != NULL)
-               free_xenheap_page(d->shared_info);
-
-       domain_flush_destroy (d);
+           free_xenheap_pages(d->shared_info, get_order_from_shift(XSI_SHIFT));
+       if (d->arch.shadow_bitmap != NULL)
+               xfree(d->arch.shadow_bitmap);
+
+       /* Clear vTLB for the next domain.  */
+       domain_flush_tlb_vhpt(d);
 
        deallocate_rid_range(d);
 }
 
 void arch_getdomaininfo_ctxt(struct vcpu *v, struct vcpu_guest_context *c)
 {
+       int i;
+       struct vcpu_extra_regs *er = &c->extra_regs;
+
        c->user_regs = *vcpu_regs (v);
-       c->shared = v->domain->shared_info->arch;
+       c->privregs_pfn = virt_to_maddr(v->arch.privregs) >> PAGE_SHIFT;
+
+       /* Fill extra regs.  */
+       for (i = 0; i < 8; i++) {
+               er->itrs[i].pte = v->arch.itrs[i].pte.val;
+               er->itrs[i].itir = v->arch.itrs[i].itir;
+               er->itrs[i].vadr = v->arch.itrs[i].vadr;
+               er->itrs[i].rid = v->arch.itrs[i].rid;
+       }
+       for (i = 0; i < 8; i++) {
+               er->dtrs[i].pte = v->arch.dtrs[i].pte.val;
+               er->dtrs[i].itir = v->arch.dtrs[i].itir;
+               er->dtrs[i].vadr = v->arch.dtrs[i].vadr;
+               er->dtrs[i].rid = v->arch.dtrs[i].rid;
+       }
+       er->event_callback_ip = v->arch.event_callback_ip;
+       er->dcr = v->arch.dcr;
+       er->iva = v->arch.iva;
 }
 
 int arch_set_info_guest(struct vcpu *v, struct vcpu_guest_context *c)
 {
        struct pt_regs *regs = vcpu_regs (v);
        struct domain *d = v->domain;
-       unsigned long cmdline_addr;
-
-       if ( test_bit(_VCPUF_initialised, &v->vcpu_flags) )
-            return 0;
-       if (c->flags & VGCF_VMX_GUEST) {
-           if (!vmx_enabled) {
-               printk("No VMX hardware feature for vmx domain.\n");
-               return -EINVAL;
-           }
-
-           if (v == d->vcpu[0])
-               vmx_setup_platform(d, c);
-
-           vmx_final_setup_guest(v);
-       } else if (!d->arch.physmap_built)
-           build_physmap_table(d);
-
+       
        *regs = c->user_regs;
-       cmdline_addr = 0;
-       if (v == d->vcpu[0]) {
-           /* Only for first vcpu.  */
-           d->arch.sys_pgnr = c->sys_pgnr;
-           d->arch.initrd_start = c->initrd.start;
-           d->arch.initrd_len   = c->initrd.size;
-           d->arch.cmdline      = c->cmdline;
-           d->shared_info->arch = c->shared;
-
-           if (!VMX_DOMAIN(v)) {
-                   const char *cmdline = d->arch.cmdline;
-                   int len;
-
-                   if (*cmdline == 0) {
-#define DEFAULT_CMDLINE "nomca nosmp xencons=tty0 console=tty0 root=/dev/hda1"
-                           cmdline = DEFAULT_CMDLINE;
-                           len = sizeof (DEFAULT_CMDLINE);
-                           printf("domU command line defaulted to"
-                                  DEFAULT_CMDLINE "\n");
-                   }
-                   else
-                           len = IA64_COMMAND_LINE_SIZE;
-                   cmdline_addr = dom_fw_setup (d, cmdline, len);
-           }
-
-           /* Cache synchronization seems to be done by the linux kernel
-              during mmap/unmap operation.  However be conservative.  */
-           domain_cache_flush (d, 1);
-       }
-       vcpu_init_regs (v);
-       regs->r28 = cmdline_addr;
-
-       if ( c->privregs && copy_from_user(v->arch.privregs,
-                          c->privregs, sizeof(mapped_regs_t))) {
-           printk("Bad ctxt address in arch_set_info_guest: %p\n",
-                  c->privregs);
-           return -EFAULT;
-       }
-
+       
+       if (!d->arch.is_vti) {
+               /* domain runs at PL2/3 */
+               regs->cr_ipsr |= 2UL << IA64_PSR_CPL0_BIT;
+               regs->ar_rsc |= (2 << 2); /* force PL2/3 */
+       }
+
+       if (c->flags & VGCF_EXTRA_REGS) {
+               int i;
+               struct vcpu_extra_regs *er = &c->extra_regs;
+
+               for (i = 0; i < 8; i++) {
+                       vcpu_set_itr(v, i, er->itrs[i].pte,
+                                    er->itrs[i].itir,
+                                    er->itrs[i].vadr,
+                                    er->itrs[i].rid);
+               }
+               for (i = 0; i < 8; i++) {
+                       vcpu_set_dtr(v, i,
+                                    er->dtrs[i].pte,
+                                    er->dtrs[i].itir,
+                                    er->dtrs[i].vadr,
+                                    er->dtrs[i].rid);
+               }
+               v->arch.event_callback_ip = er->event_callback_ip;
+               v->arch.dcr = er->dcr;
+               v->arch.iva = er->iva;
+       }
+       
+       if ( test_bit(_VCPUF_initialised, &v->vcpu_flags) )
+               return 0;
+       if (d->arch.is_vti)
+               vmx_final_setup_guest(v);
+       
+       /* This overrides some registers.  */
+       vcpu_init_regs(v);
+  
        /* Don't redo final setup */
        set_bit(_VCPUF_initialised, &v->vcpu_flags);
        return 0;
@@ -502,6 +534,9 @@ void domain_relinquish_resources(struct 
 
     relinquish_memory(d, &d->xenpage_list);
     relinquish_memory(d, &d->page_list);
+
+    if (d->arch.is_vti && d->arch.sal_data)
+           xfree(d->arch.sal_data);
 }
 
 void build_physmap_table(struct domain *d)
@@ -509,7 +544,6 @@ void build_physmap_table(struct domain *
        struct list_head *list_ent = d->page_list.next;
        unsigned long mfn, i = 0;
 
-       ASSERT(!d->arch.physmap_built);
        while(list_ent != &d->page_list) {
            mfn = page_to_mfn(list_entry(
                list_ent, struct page_info, list));
@@ -518,7 +552,6 @@ void build_physmap_table(struct domain *
            i++;
            list_ent = mfn_to_page(mfn)->list.next;
        }
-       d->arch.physmap_built = 1;
 }
 
 unsigned long
@@ -555,6 +588,148 @@ domain_set_shared_info_va (unsigned long
        return 0;
 }
 
+/* Transfer and clear the shadow bitmap in 1kB chunks for L1 cache. */
+#define SHADOW_COPY_CHUNK (1024 / sizeof (unsigned long))
+
+int shadow_mode_control(struct domain *d, dom0_shadow_control_t *sc)
+{
+       unsigned int op = sc->op;
+       int          rc = 0;
+       int i;
+       //struct vcpu *v;
+
+       if (unlikely(d == current->domain)) {
+               DPRINTK("Don't try to do a shadow op on yourself!\n");
+               return -EINVAL;
+       }   
+
+       domain_pause(d);
+
+       switch (op)
+       {
+       case DOM0_SHADOW_CONTROL_OP_OFF:
+               if (shadow_mode_enabled (d)) {
+                       u64 *bm = d->arch.shadow_bitmap;
+
+                       /* Flush vhpt and tlb to restore dirty bit usage.  */
+                       domain_flush_tlb_vhpt(d);
+
+                       /* Free bitmap.  */
+                       d->arch.shadow_bitmap_size = 0;
+                       d->arch.shadow_bitmap = NULL;
+                       xfree(bm);
+               }
+               break;
+
+       case DOM0_SHADOW_CONTROL_OP_ENABLE_TEST:
+       case DOM0_SHADOW_CONTROL_OP_ENABLE_TRANSLATE:
+               rc = -EINVAL;
+               break;
+
+       case DOM0_SHADOW_CONTROL_OP_ENABLE_LOGDIRTY:
+               if (shadow_mode_enabled(d)) {
+                       rc = -EINVAL;
+                       break;
+               }
+
+               atomic64_set(&d->arch.shadow_fault_count, 0);
+               atomic64_set(&d->arch.shadow_dirty_count, 0);
+
+               d->arch.shadow_bitmap_size = (d->max_pages + BITS_PER_LONG-1) &
+                                            ~(BITS_PER_LONG-1);
+               d->arch.shadow_bitmap = xmalloc_array(unsigned long,
+                                  d->arch.shadow_bitmap_size / BITS_PER_LONG);
+               if (d->arch.shadow_bitmap == NULL) {
+                       d->arch.shadow_bitmap_size = 0;
+                       rc = -ENOMEM;
+               }
+               else {
+                       memset(d->arch.shadow_bitmap, 0, 
+                              d->arch.shadow_bitmap_size / 8);
+                       
+                       /* Flush vhpt and tlb to enable dirty bit
+                          virtualization.  */
+                       domain_flush_tlb_vhpt(d);
+               }
+               break;
+
+       case DOM0_SHADOW_CONTROL_OP_FLUSH:
+               atomic64_set(&d->arch.shadow_fault_count, 0);
+               atomic64_set(&d->arch.shadow_dirty_count, 0);
+               break;
+   
+       case DOM0_SHADOW_CONTROL_OP_CLEAN:
+         {
+               int nbr_longs;
+
+               sc->stats.fault_count = atomic64_read(&d->arch.shadow_fault_count);
+               sc->stats.dirty_count = atomic64_read(&d->arch.shadow_dirty_count);
+
+               atomic64_set(&d->arch.shadow_fault_count, 0);
+               atomic64_set(&d->arch.shadow_dirty_count, 0);
+ 
+               if (guest_handle_is_null(sc->dirty_bitmap) ||
+                   (d->arch.shadow_bitmap == NULL)) {
+                       rc = -EINVAL;
+                       break;
+               }
+
+               if (sc->pages > d->arch.shadow_bitmap_size)
+                       sc->pages = d->arch.shadow_bitmap_size; 
+
+               nbr_longs = (sc->pages + BITS_PER_LONG - 1) / BITS_PER_LONG;
+
+               for (i = 0; i < nbr_longs; i += SHADOW_COPY_CHUNK) {
+                       int size = (nbr_longs - i) > SHADOW_COPY_CHUNK ?
+                                  SHADOW_COPY_CHUNK : nbr_longs - i;
+     
+                       if (copy_to_guest_offset(sc->dirty_bitmap, i,
+                                                d->arch.shadow_bitmap + i,
+                                                size)) {
+                               rc = -EFAULT;
+                               break;
+                       }
+
+                       memset(d->arch.shadow_bitmap + i,
+                              0, size * sizeof(unsigned long));
+               }
+               
+               break;
+         }
+
+       case DOM0_SHADOW_CONTROL_OP_PEEK:
+       {
+               unsigned long size;
+
+               sc->stats.fault_count = atomic64_read(&d->arch.shadow_fault_count);
+               sc->stats.dirty_count = atomic64_read(&d->arch.shadow_dirty_count);
+
+               if (guest_handle_is_null(sc->dirty_bitmap) ||
+                   (d->arch.shadow_bitmap == NULL)) {
+                       rc = -EINVAL;
+                       break;
+               }
+ 
+               if (sc->pages > d->arch.shadow_bitmap_size)
+                       sc->pages = d->arch.shadow_bitmap_size; 
+
+               size = (sc->pages + BITS_PER_LONG - 1) / BITS_PER_LONG;
+               if (copy_to_guest(sc->dirty_bitmap, 
+                                 d->arch.shadow_bitmap, size)) {
+                       rc = -EFAULT;
+                       break;
+               }
+               break;
+       }
+       default:
+               rc = -EINVAL;
+               break;
+       }
+       
+       domain_unpause(d);
+       
+       return rc;
+}
 
 // remove following line if not privifying in memory
 //#define HAVE_PRIVIFY_MEMORY
@@ -713,6 +888,8 @@ static void physdev_init_dom0(struct dom
                BUG();
        if (irqs_permit_access(d, 0, NR_IRQS-1))
                BUG();
+       if (ioports_permit_access(d, 0, 0xffff))
+               BUG();
 }
 
 int construct_dom0(struct domain *d, 
@@ -733,8 +910,9 @@ int construct_dom0(struct domain *d,
        unsigned long pkern_end;
        unsigned long pinitrd_start = 0;
        unsigned long pstart_info;
-       unsigned long cmdline_addr;
        struct page_info *start_info_page;
+       unsigned long bp_mpa;
+       struct ia64_boot_param *bp;
 
 #ifdef VALIDATE_VT
        unsigned int vmx_dom0 = 0;
@@ -884,8 +1062,6 @@ int construct_dom0(struct domain *d,
        //if ( initrd_len != 0 )
        //    memcpy((void *)vinitrd_start, initrd_start, initrd_len);
 
-       d->shared_info->arch.flags = SIF_INITDOMAIN|SIF_PRIVILEGED;
-
        /* Set up start info area. */
        d->shared_info->arch.start_info_pfn = pstart_info >> PAGE_SHIFT;
        start_info_page = assign_new_domain_page(d, pstart_info);
@@ -895,8 +1071,7 @@ int construct_dom0(struct domain *d,
        memset(si, 0, PAGE_SIZE);
        sprintf(si->magic, "xen-%i.%i-ia64", XEN_VERSION, XEN_SUBVERSION);
        si->nr_pages     = max_pages;
-
-       console_endboot();
+       si->flags = SIF_INITDOMAIN|SIF_PRIVILEGED;
 
        printk("Dom0: 0x%lx\n", (u64)dom0);
 
@@ -910,15 +1085,38 @@ int construct_dom0(struct domain *d,
 
        set_bit(_VCPUF_initialised, &v->vcpu_flags);
 
-       cmdline_addr = dom_fw_setup(d, dom0_command_line, COMMAND_LINE_SIZE);
+       /* Build firmware.
+          Note: the Linux kernel reserves the memory used by start_info, so there is
+          no need to remove it from MDT.  */
+       bp_mpa = pstart_info + sizeof(struct start_info);
+       dom_fw_setup(d, bp_mpa, max_pages * PAGE_SIZE);
+
+       /* Fill boot param.  */
+       strncpy((char *)si->cmd_line, dom0_command_line, sizeof(si->cmd_line));
+       si->cmd_line[sizeof(si->cmd_line)-1] = 0;
+
+       bp = (struct ia64_boot_param *)(si + 1);
+       bp->command_line = pstart_info + offsetof (start_info_t, cmd_line);
+
+       /* We assume the console has reached the last line!  */
+       bp->console_info.num_cols = ia64_boot_param->console_info.num_cols;
+       bp->console_info.num_rows = ia64_boot_param->console_info.num_rows;
+       bp->console_info.orig_x = 0;
+       bp->console_info.orig_y = bp->console_info.num_rows == 0 ?
+                                 0 : bp->console_info.num_rows - 1;
+
+       bp->initrd_start = (dom0_start+dom0_size) -
+         (PAGE_ALIGN(ia64_boot_param->initrd_size) + 4*1024*1024);
+       bp->initrd_size = ia64_boot_param->initrd_size;
 
        vcpu_init_regs (v);
+
+       vcpu_regs(v)->r28 = bp_mpa;
 
 #ifdef CONFIG_DOMAIN0_CONTIGUOUS
        pkern_entry += dom0_start;
 #endif
        vcpu_regs (v)->cr_iip = pkern_entry;
-       vcpu_regs (v)->r28 = cmdline_addr;
 
        physdev_init_dom0(d);
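
The DOM0_SHADOW_CONTROL_OP_CLEAN handler above hands the log-dirty bitmap back to the caller in SHADOW_COPY_CHUNK-sized pieces and zeroes each piece only after it has been copied, so bits set concurrently survive into the next round. A user-space sketch of that copy-and-clear loop, with memcpy standing in for copy_to_guest_offset():

#include <stdio.h>
#include <string.h>

/* Transfer and clear the bitmap in 1kB chunks, as in the patch. */
#define SHADOW_COPY_CHUNK (1024 / sizeof(unsigned long))

int main(void)
{
    unsigned long shadow_bitmap[4 * SHADOW_COPY_CHUNK];   /* hypervisor side */
    unsigned long guest_copy[4 * SHADOW_COPY_CHUNK];      /* caller's buffer */
    int nbr_longs = 4 * SHADOW_COPY_CHUNK;
    int i;

    memset(shadow_bitmap, 0xff, sizeof(shadow_bitmap));   /* all pages dirty */

    for (i = 0; i < nbr_longs; i += SHADOW_COPY_CHUNK) {
        int size = (nbr_longs - i) > (int)SHADOW_COPY_CHUNK ?
                   (int)SHADOW_COPY_CHUNK : nbr_longs - i;

        /* copy_to_guest_offset() in the hypervisor; plain memcpy here. */
        memcpy(guest_copy + i, shadow_bitmap + i, size * sizeof(unsigned long));
        memset(shadow_bitmap + i, 0, size * sizeof(unsigned long));
    }

    printf("copied %d longs, source cleared: %s\n", nbr_longs,
           shadow_bitmap[0] == 0 ? "yes" : "no");
    return 0;
}
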
 
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/faults.c
--- a/xen/arch/ia64/xen/faults.c        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/faults.c        Fri Jul 28 10:51:38 2006 +0100
@@ -1,4 +1,3 @@
-
 /*
  * Miscellaneous process/domain related routines
  * 
@@ -26,7 +25,10 @@
 #include <asm/vhpt.h>
 #include <asm/debugger.h>
 #include <asm/fpswa.h>
+#include <asm/bundle.h>
+#include <asm/privop_stat.h>
 #include <asm/asm-xsi-offsets.h>
+#include <asm/shadow.h>
 
 extern void die_if_kernel(char *str, struct pt_regs *regs, long err);
 /* FIXME: where these declarations shold be there ? */
@@ -49,41 +51,7 @@ extern IA64FAULT ia64_hypercall(struct p
 
 extern void do_ssc(unsigned long ssc, struct pt_regs *regs);
 
-unsigned long slow_reflect_count[0x80] = { 0 };
-unsigned long fast_reflect_count[0x80] = { 0 };
-
 #define inc_slow_reflect_count(vec) slow_reflect_count[vec>>8]++;
-
-void zero_reflect_counts(void)
-{
-       int i;
-       for (i=0; i<0x80; i++) slow_reflect_count[i] = 0;
-       for (i=0; i<0x80; i++) fast_reflect_count[i] = 0;
-}
-
-int dump_reflect_counts(char *buf)
-{
-       int i,j,cnt;
-       char *s = buf;
-
-       s += sprintf(s,"Slow reflections by vector:\n");
-       for (i = 0, j = 0; i < 0x80; i++) {
-               if ( (cnt = slow_reflect_count[i]) != 0 ) {
-                       s += sprintf(s,"0x%02x00:%10d, ",i,cnt);
-                       if ((j++ & 3) == 3) s += sprintf(s,"\n");
-               }
-       }
-       if (j & 3) s += sprintf(s,"\n");
-       s += sprintf(s,"Fast reflections by vector:\n");
-       for (i = 0, j = 0; i < 0x80; i++) {
-               if ( (cnt = fast_reflect_count[i]) != 0 ) {
-                       s += sprintf(s,"0x%02x00:%10d, ",i,cnt);
-                       if ((j++ & 3) == 3) s += sprintf(s,"\n");
-               }
-       }
-       if (j & 3) s += sprintf(s,"\n");
-       return s - buf;
-}
 
 // should never panic domain... if it does, stack may have been overrun
 void check_bad_nested_interruption(unsigned long isr, struct pt_regs *regs, unsigned long vector)
@@ -194,7 +162,6 @@ void deliver_pending_interrupt(struct pt
                        ++pending_false_positive;
        }
 }
-unsigned long lazy_cover_count = 0;
 
 static int
 handle_lazy_cover(struct vcpu *v, struct pt_regs *regs)
@@ -241,8 +208,7 @@ void ia64_do_page_fault (unsigned long a
                    p2m_entry_retry(&entry)) {
                        /* dtlb has been purged in-between.  This dtlb was
                           matching.  Undo the work.  */
-                       vcpu_flush_tlb_vhpt_range(address & ((1 << logps) - 1),
-                                                 logps);
+                       vcpu_flush_tlb_vhpt_range(address, logps);
 
                        // the stale entry which we inserted above
                        // may remains in tlb cache.
@@ -348,7 +314,6 @@ handle_fpu_swa (int fp_fault, struct pt_
 {
        struct vcpu *v = current;
        IA64_BUNDLE bundle;
-       IA64_BUNDLE __get_domain_bundle(UINT64);
        unsigned long fault_ip;
        fpswa_ret_t ret;
 
@@ -359,7 +324,12 @@ handle_fpu_swa (int fp_fault, struct pt_
         */
        if (!fp_fault && (ia64_psr(regs)->ri == 0))
                fault_ip -= 16;
-       bundle = __get_domain_bundle(fault_ip);
+
+       if (VMX_DOMAIN(current))
+               bundle = __vmx_get_domain_bundle(fault_ip);
+       else 
+               bundle = __get_domain_bundle(fault_ip);
+
        if (!bundle.i64[0] && !bundle.i64[1]) {
                printk("%s: floating-point bundle at 0x%lx not mapped\n",
                       __FUNCTION__, fault_ip);
@@ -678,3 +648,92 @@ ia64_handle_reflection (unsigned long if
        reflect_interruption(isr,regs,vector);
 }
 
+void
+ia64_shadow_fault(unsigned long ifa, unsigned long itir,
+                  unsigned long isr, struct pt_regs *regs)
+{
+       struct vcpu *v = current;
+       struct domain *d = current->domain;
+       unsigned long gpfn;
+       unsigned long pte = 0;
+       struct vhpt_lf_entry *vlfe;
+
+       /* There are 2 jobs to do:
+          -  marking the page as dirty (the metaphysical address must be
+             extracted to do that).
+          -  reflecting or not the fault (the virtual Dirty bit must be
+             extracted to decide).
+          Unfortunately this information is not immediately available!
+       */
+
+       /* Extract the metaphysical address.
+          Try to get it from VHPT and M2P as we need the flags.  */
+       vlfe = (struct vhpt_lf_entry *)ia64_thash(ifa);
+       pte = vlfe->page_flags;
+       if (vlfe->ti_tag == ia64_ttag(ifa)) {
+               /* The VHPT entry is valid.  */
+               gpfn = get_gpfn_from_mfn((pte & _PAGE_PPN_MASK) >> PAGE_SHIFT);
+               BUG_ON(gpfn == INVALID_M2P_ENTRY);
+       }
+       else {
+               unsigned long itir, iha;
+               IA64FAULT fault;
+
+               /* The VHPT entry is not valid.  */
+               vlfe = NULL;
+
+               /* FIXME: gives a chance to tpa, as the TC was valid.  */
+
+               fault = vcpu_translate(v, ifa, 1, &pte, &itir, &iha);
+
+               /* Try again!  */
+               if (fault != IA64_NO_FAULT) {
+                       /* This will trigger a dtlb miss.  */
+                       ia64_ptcl(ifa, PAGE_SHIFT << 2);
+                       return;
+               }
+               gpfn = ((pte & _PAGE_PPN_MASK) >> PAGE_SHIFT);
+               if (pte & _PAGE_D)
+                       pte |= _PAGE_VIRT_D;
+       }
+
+       /* Set the dirty bit in the bitmap.  */
+       shadow_mark_page_dirty (d, gpfn);
+
+       /* Update the local TC/VHPT and decide whether or not the fault should
+          be reflected.
+          SMP note: we almost ignore the other processors.  The shadow_bitmap
+          has been atomically updated.  If the dirty fault happens on another
+          processor, it will do its job.
+       */
+
+       if (pte != 0) {
+               /* We will know how to handle the fault.  */
+
+               if (pte & _PAGE_VIRT_D) {
+                       /* Rewrite VHPT entry.
+                          There is no race here because only the
+                          cpu VHPT owner can write page_flags.  */
+                       if (vlfe)
+                               vlfe->page_flags = pte | _PAGE_D;
+                       
+                       /* Purge the TC locally.
+                          It will be reloaded from the VHPT iff the
+                          VHPT entry is still valid.  */
+                       ia64_ptcl(ifa, PAGE_SHIFT << 2);
+
+                       atomic64_inc(&d->arch.shadow_fault_count);
+               }
+               else {
+                       /* Reflect.
+                          In this case there is no need to purge.  */
+                       ia64_handle_reflection(ifa, regs, isr, 0, 8);
+               }
+       }
+       else {
+               /* We don't know whether or not the fault must be
+                  reflected.  The VHPT entry is not valid.  */
+               /* FIXME: in metaphysical mode, we could do an ITC now.  */
+               ia64_ptcl(ifa, PAGE_SHIFT << 2);
+       }
+}
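
ia64_shadow_fault above records the faulting gpfn through shadow_mark_page_dirty, which is not part of this hunk. The sketch below only assumes the usual shape of such a helper (bounds check against the bitmap, then set the page's bit); the real Xen helper would use an atomic bit-set, so treat this as an illustration rather than the implementation:

#include <stdio.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long shadow_bitmap[8];
static unsigned long shadow_bitmap_size = 8 * BITS_PER_LONG;  /* in pages */

static void mark_page_dirty(unsigned long gpfn)
{
    if (gpfn >= shadow_bitmap_size)
        return;                                 /* outside the tracked range */
    shadow_bitmap[gpfn / BITS_PER_LONG] |= 1UL << (gpfn % BITS_PER_LONG);
}

int main(void)
{
    mark_page_dirty(65);                        /* word 1, bit 1 */
    printf("word 1 = %#lx\n", shadow_bitmap[1]);
    return 0;
}
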
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/fw_emul.c
--- a/xen/arch/ia64/xen/fw_emul.c       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/fw_emul.c       Fri Jul 28 10:51:38 2006 +0100
@@ -16,6 +16,7 @@
  *
  */
 #include <xen/config.h>
+#include <xen/console.h>
 #include <asm/system.h>
 #include <asm/pgalloc.h>
 
@@ -95,8 +96,8 @@ sal_emulator (long index, unsigned long 
                        }
                        else {
                                struct domain *d = current->domain;
-                               d->arch.boot_rdv_ip = in2;
-                               d->arch.boot_rdv_r1 = in3;
+                               d->arch.sal_data->boot_rdv_ip = in2;
+                               d->arch.sal_data->boot_rdv_r1 = in3;
                        }
                }
                else
@@ -343,6 +344,7 @@ xen_pal_emulator(unsigned long index, u6
            case PAL_HALT:
                    if (current->domain == dom0) {
                            printf ("Domain0 halts the machine\n");
+                           console_start_sync();
                            (*efi.reset_system)(EFI_RESET_SHUTDOWN,0,0,NULL);
                    }
                    else
@@ -368,7 +370,7 @@ efi_translate_domain_addr(unsigned long 
        *fault = IA64_NO_FAULT;
 
 again:
-       if (v->domain->arch.efi_virt_mode) {
+       if (v->domain->arch.sal_data->efi_virt_mode) {
                *fault = vcpu_tpa(v, domain_addr, &mpaddr);
                if (*fault != IA64_NO_FAULT) return 0;
        }
@@ -432,7 +434,9 @@ efi_emulate_set_virtual_address_map(
        fpswa_interface_t *fpswa_inf = d->arch.fpswa_inf;
 
        if (descriptor_version != EFI_MEMDESC_VERSION) {
-               printf ("efi_emulate_set_virtual_address_map: memory descriptor version unmatched\n");
+               printf ("efi_emulate_set_virtual_address_map: memory "
+                       "descriptor version unmatched (%d vs %d)\n",
+                       (int)descriptor_version, EFI_MEMDESC_VERSION);
                return EFI_INVALID_PARAMETER;
        }
 
@@ -441,7 +445,8 @@ efi_emulate_set_virtual_address_map(
                return EFI_INVALID_PARAMETER;
        }
 
-       if (d->arch.efi_virt_mode) return EFI_UNSUPPORTED;
+       if (d->arch.sal_data->efi_virt_mode)
+               return EFI_UNSUPPORTED;
 
        efi_map_start = virtual_map;
        efi_map_end   = efi_map_start + memory_map_size;
@@ -483,7 +488,7 @@ efi_emulate_set_virtual_address_map(
        }
 
        /* The virtual address map has been applied. */
-       d->arch.efi_virt_mode = 1;
+       d->arch.sal_data->efi_virt_mode = 1;
 
        return EFI_SUCCESS;
 }
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/hypercall.c
--- a/xen/arch/ia64/xen/hypercall.c     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/hypercall.c     Fri Jul 28 10:51:38 2006 +0100
@@ -28,16 +28,11 @@
 #include <xen/domain.h>
 #include <public/callback.h>
 #include <xen/event.h>
+#include <asm/privop_stat.h>
 
 static long do_physdev_op_compat(XEN_GUEST_HANDLE(physdev_op_t) uop);
 static long do_physdev_op(int cmd, XEN_GUEST_HANDLE(void) arg);
 static long do_callback_op(int cmd, XEN_GUEST_HANDLE(void) arg);
-/* FIXME: where these declarations should be there ? */
-extern int dump_privop_counts_to_user(char *, int);
-extern int zero_privop_counts_to_user(char *, int);
-
-unsigned long idle_when_pending = 0;
-unsigned long pal_halt_light_count = 0;
 
 hypercall_t ia64_hypercall_table[] =
        {
@@ -159,8 +154,8 @@ fw_hypercall_ipi (struct pt_regs *regs)
                        
                /* First or next rendez-vous: set registers.  */
                vcpu_init_regs (targ);
-               vcpu_regs (targ)->cr_iip = d->arch.boot_rdv_ip;
-               vcpu_regs (targ)->r1 = d->arch.boot_rdv_r1;
+               vcpu_regs (targ)->cr_iip = d->arch.sal_data->boot_rdv_ip;
+               vcpu_regs (targ)->r1 = d->arch.sal_data->boot_rdv_r1;
                vcpu_regs (targ)->b0 = d->arch.sal_return_addr;
 
                if (test_and_clear_bit(_VCPUF_down,
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/irq.c
--- a/xen/arch/ia64/xen/irq.c   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/irq.c   Fri Jul 28 10:51:38 2006 +0100
@@ -499,19 +499,6 @@ void irq_exit(void)
        sub_preempt_count(IRQ_EXIT_OFFSET);
 }
 
-/*
- * ONLY gets called from ia64_leave_kernel
- * ONLY call with interrupts enabled
- */
-void process_soft_irq(void)
-{
-       if (!in_interrupt() && local_softirq_pending()) {
-               add_preempt_count(SOFTIRQ_OFFSET);
-               do_softirq();
-               sub_preempt_count(SOFTIRQ_OFFSET);
-       }
-}
-
 // this is a temporary hack until real console input is implemented
 void guest_forward_keyboard_input(int irq, void *nada, struct pt_regs *regs)
 {
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/ivt.S
--- a/xen/arch/ia64/xen/ivt.S   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/ivt.S   Fri Jul 28 10:51:38 2006 +0100
@@ -1,21 +1,6 @@
-
-#ifdef XEN
-//#define CONFIG_DISABLE_VHPT  // FIXME: change when VHPT is enabled??
-// these are all hacked out for now as the entire IVT
-// will eventually be replaced... just want to use it
-// for startup code to handle TLB misses
-//#define ia64_leave_kernel 0
-//#define ia64_ret_from_syscall 0
-//#define ia64_handle_irq 0
-//#define ia64_fault 0
-#define ia64_illegal_op_fault 0
-#define ia64_prepare_handle_unaligned 0
-#define ia64_bad_break 0
-#define ia64_trace_syscall 0
-#define sys_call_table 0
-#define sys_ni_syscall 0
+#ifdef XEN
+#include <asm/debugger.h>
 #include <asm/vhpt.h>
-#include <asm/debugger.h>
 #endif
 /*
  * arch/ia64/kernel/ivt.S
@@ -96,25 +81,18 @@
 #include "minstate.h"
 
 #define FAULT(n)                                                              \
+       mov r19=n;                      /* prepare to save predicates */       \
        mov r31=pr;                                                            \
-       mov r19=n;;                     /* prepare to save predicates */       \
        br.sptk.many dispatch_to_fault_handler
 
 #define FAULT_OR_REFLECT(n)                                                   \
-       mov r31=pr;                                                            \
-       mov r20=cr.ipsr;;                                                      \
+       mov r20=cr.ipsr;                                                       \
        mov r19=n;      /* prepare to save predicates */                       \
+       mov r31=pr;;                                                           \
        extr.u r20=r20,IA64_PSR_CPL0_BIT,2;;                                   \
        cmp.ne p6,p0=r0,r20;    /* cpl != 0?*/                                 \
 (p6)   br.dptk.many dispatch_reflection;                                      \
        br.sptk.few dispatch_to_fault_handler
-
-#ifdef XEN
-#define REFLECT(n)                                                            \
-       mov r31=pr;                                                            \
-       mov r19=n;;                     /* prepare to save predicates */       \
-       br.sptk.many dispatch_reflection
-#endif
 
        .section .text.ivt,"ax"
 
@@ -258,8 +236,8 @@ ENTRY(itlb_miss)
 ENTRY(itlb_miss)
        DBG_FAULT(1)
 #ifdef XEN
+       mov r16 = cr.ifa
        mov r31 = pr
-       mov r16 = cr.ifa
        ;;
        extr.u r17=r16,59,5
        ;;
@@ -322,8 +300,8 @@ ENTRY(dtlb_miss)
 ENTRY(dtlb_miss)
        DBG_FAULT(2)
 #ifdef XEN
+       mov r16=cr.ifa                          // get virtual address
        mov r31=pr
-       mov r16=cr.ifa                          // get virtual address
        ;;
        extr.u r17=r16,59,5
        ;;
@@ -444,12 +422,12 @@ ENTRY(alt_itlb_miss)
 ENTRY(alt_itlb_miss)
        DBG_FAULT(3)
 #ifdef XEN
+       mov r16=cr.ifa          // get address that caused the TLB miss
        mov r31=pr
-       mov r16=cr.ifa          // get address that caused the TLB miss
        ;;
 late_alt_itlb_miss:
+       mov r21=cr.ipsr
        movl r17=PAGE_KERNEL
-       mov r21=cr.ipsr
        movl r19=(((1 << IA64_MAX_PHYS_BITS) - 1) & ~0xfff)
        ;;
 #else
@@ -499,14 +477,14 @@ ENTRY(alt_dtlb_miss)
 ENTRY(alt_dtlb_miss)
        DBG_FAULT(4)
 #ifdef XEN
+       mov r16=cr.ifa          // get address that caused the TLB miss
        mov r31=pr
-       mov r16=cr.ifa          // get address that caused the TLB miss
        ;;
 late_alt_dtlb_miss:
+       mov r20=cr.isr
        movl r17=PAGE_KERNEL
-       mov r20=cr.isr
+       mov r21=cr.ipsr
        movl r19=(((1 << IA64_MAX_PHYS_BITS) - 1) & ~0xfff)
-       mov r21=cr.ipsr
        ;;
 #endif
 #ifdef CONFIG_DISABLE_VHPT
@@ -592,7 +570,7 @@ GLOBAL_ENTRY(frametable_miss)
        shladd r24=r19,3,r24    // r24=&pte[pte_offset(addr)]
        ;;
 (p7)   ld8 r24=[r24]           // r24=pte[pte_offset(addr)]
-       mov r25=0x700|(_PAGE_SIZE_16K<<2) // key=7
+       mov r25=0x700|(PAGE_SHIFT<<2) // key=7
 (p6)   br.spnt.few frametable_fault
        ;;
        mov cr.itir=r25
@@ -622,9 +600,11 @@ ENTRY(frametable_fault)
        rfi
 END(frametable_fault)
 GLOBAL_ENTRY(ia64_frametable_probe)
+       {
        probe.r r8=r32,0        // destination register must be r8
        nop.f 0x0
        br.ret.sptk.many b0     // this instruction must be in bundle 2
+       }
 END(ia64_frametable_probe)
 #endif /* CONFIG_VIRTUAL_FRAME_TABLE */
 
@@ -706,8 +686,9 @@ ENTRY(ikey_miss)
        DBG_FAULT(6)
 #ifdef XEN
        FAULT_OR_REFLECT(6)
-#endif
+#else
        FAULT(6)
+#endif
 END(ikey_miss)
 
        //-----------------------------------------------------------------------------------
@@ -755,8 +736,9 @@ ENTRY(dkey_miss)
        DBG_FAULT(7)
 #ifdef XEN
        FAULT_OR_REFLECT(7)
-#endif
+#else
        FAULT(7)
+#endif
 END(dkey_miss)
 
        .org ia64_ivt+0x2000
@@ -765,8 +747,49 @@ ENTRY(dirty_bit)
 ENTRY(dirty_bit)
        DBG_FAULT(8)
 #ifdef XEN
-       FAULT_OR_REFLECT(8)
-#endif
+       mov r20=cr.ipsr
+       mov r31=pr;;
+       extr.u r20=r20,IA64_PSR_CPL0_BIT,2;;
+       mov r19=8       /* prepare to save predicates */
+       cmp.eq p6,p0=r0,r20     /* cpl == 0?*/
+(p6)   br.sptk.few dispatch_to_fault_handler
+       /* If shadow mode is not enabled, reflect the fault.  */
+       movl r22=THIS_CPU(cpu_kr)+IA64_KR_CURRENT_OFFSET
+       ;;
+       ld8 r22=[r22]
+       ;;
+       add r22=IA64_VCPU_DOMAIN_OFFSET,r22
+       ;;
+       /* Read domain.  */
+       ld8 r22=[r22]
+       ;;
+       add r22=IA64_DOMAIN_SHADOW_BITMAP_OFFSET,r22
+       ;;
+       ld8 r22=[r22]
+       ;;
+       cmp.eq p6,p0=r0,r22     /* !shadow_bitmap ?*/
+(p6)   br.dptk.many dispatch_reflection
+
+       SAVE_MIN_WITH_COVER
+       alloc r14=ar.pfs,0,0,4,0
+       mov out0=cr.ifa
+       mov out1=cr.itir
+       mov out2=cr.isr
+       adds out3=16,sp
+
+       ssm psr.ic | PSR_DEFAULT_BITS
+       ;;
+       srlz.i                                  // guarantee that interruption collection is on
+       ;;
+(p15)  ssm psr.i                               // restore psr.i
+       adds r3=8,r2                            // set up second base pointer
+       ;;
+       SAVE_REST
+       movl r14=ia64_leave_kernel
+       ;;
+       mov rp=r14
+       br.call.sptk.many b6=ia64_shadow_fault
+#else
        /*
         * What we do here is to simply turn on the dirty bit in the PTE.  We need to
         * update both the page-table and the TLB entry.  To efficiently access the PTE,
@@ -822,6 +845,7 @@ 1:  ld8 r18=[r17]
 #endif
        mov pr=r31,-1                           // restore pr
        rfi
+#endif
 END(dirty_bit)
 
        .org ia64_ivt+0x2400
@@ -830,13 +854,13 @@ ENTRY(iaccess_bit)
 ENTRY(iaccess_bit)
        DBG_FAULT(9)
 #ifdef XEN
-       mov r31=pr;
        mov r16=cr.isr
        mov r17=cr.ifa
+       mov r31=pr
        mov r19=9
-       movl r20=0x2400
+       mov r20=0x2400
        br.sptk.many fast_access_reflect;;
-#endif
+#else
        // Like Entry 8, except for instruction access
        mov r16=cr.ifa                          // get the address that caused the fault
        movl r30=1f                             // load continuation point in case of nested fault
@@ -895,6 +919,7 @@ 1:  ld8 r18=[r17]
 #endif /* !CONFIG_SMP */
        mov pr=r31,-1
        rfi
+#endif
 END(iaccess_bit)
 
        .org ia64_ivt+0x2800
@@ -903,13 +928,13 @@ ENTRY(daccess_bit)
 ENTRY(daccess_bit)
        DBG_FAULT(10)
 #ifdef XEN
-       mov r31=pr;
        mov r16=cr.isr
        mov r17=cr.ifa
+       mov r31=pr
        mov r19=10
-       movl r20=0x2800
+       mov r20=0x2800
        br.sptk.many fast_access_reflect;;
-#endif
+#else
        // Like Entry 8, except for data access
        mov r16=cr.ifa                          // get the address that caused the fault
        movl r30=1f                             // load continuation point in case of nested fault
@@ -955,6 +980,7 @@ 1:  ld8 r18=[r17]
        mov b0=r29                              // restore b0
        mov pr=r31,-1
        rfi
+#endif
 END(daccess_bit)
 
        .org ia64_ivt+0x2c00
@@ -1017,7 +1043,7 @@ ENTRY(break_fault)
        ;;
        br.sptk.many fast_break_reflect
        ;;
-#endif
+#else /* !XEN */
        movl r16=THIS_CPU(cpu_kr)+IA64_KR_CURRENT_OFFSET;;
        ld8 r16=[r16]
        mov r17=cr.iim
@@ -1097,6 +1123,7 @@ ENTRY(break_fault)
 (p8)   br.call.sptk.many b6=b6                 // ignore this return addr
        br.cond.sptk ia64_trace_syscall
        // NOT REACHED
+#endif
 END(break_fault)
 
        .org ia64_ivt+0x3000
@@ -1191,6 +1218,7 @@ END(dispatch_break_fault)
        DBG_FAULT(14)
        FAULT(14)
 
+#ifndef XEN
        /*
         * There is no particular reason for this code to be here, other than that
         * there happens to be space here that would go unused otherwise.  If this
@@ -1330,13 +1358,15 @@ GLOBAL_ENTRY(ia64_syscall_setup)
 (p10)  mov r8=-EINVAL
        br.ret.sptk.many b7
 END(ia64_syscall_setup)
-
+#endif /* XEN */
+       
        .org ia64_ivt+0x3c00
 /////////////////////////////////////////////////////////////////////////////////////////
 // 0x3c00 Entry 15 (size 64 bundles) Reserved
        DBG_FAULT(15)
        FAULT(15)
 
+#ifndef XEN
        /*
         * Squatting in this space ...
         *
@@ -1375,6 +1405,7 @@ ENTRY(dispatch_illegal_op_fault)
 (p6)   br.call.dpnt.many b6=b6         // call returns to ia64_leave_kernel
        br.sptk.many ia64_leave_kernel
 END(dispatch_illegal_op_fault)
+#endif
 
        .org ia64_ivt+0x4000
 /////////////////////////////////////////////////////////////////////////////////////////
@@ -1420,6 +1451,7 @@ END(dispatch_privop_fault)
        DBG_FAULT(17)
        FAULT(17)
 
+#ifndef XEN
 ENTRY(non_syscall)
        SAVE_MIN_WITH_COVER
 
@@ -1445,6 +1477,7 @@ ENTRY(non_syscall)
        ;;
        br.call.sptk.many b6=ia64_bad_break     // avoid WAW on CFM and ignore return addr
 END(non_syscall)
+#endif
 
        .org ia64_ivt+0x4800
 /////////////////////////////////////////////////////////////////////////////////////////
@@ -1452,13 +1485,13 @@ END(non_syscall)
        DBG_FAULT(18)
        FAULT(18)
 
+#ifndef XEN
        /*
         * There is no particular reason for this code to be here, other than that
         * there happens to be space here that would go unused otherwise.  If this
         * fault ever gets "unreserved", simply moved the following code to a more
         * suitable spot...
         */
-
 ENTRY(dispatch_unaligned_handler)
        SAVE_MIN_WITH_COVER
        ;;
@@ -1480,6 +1513,7 @@ ENTRY(dispatch_unaligned_handler)
 //     br.sptk.many ia64_prepare_handle_unaligned
     br.call.sptk.many b6=ia64_handle_unaligned
 END(dispatch_unaligned_handler)
+#endif
 
        .org ia64_ivt+0x4c00
 /////////////////////////////////////////////////////////////////////////////////////////
@@ -1534,7 +1568,7 @@ ENTRY(page_not_present)
        DBG_FAULT(20)
 #ifdef XEN
        FAULT_OR_REFLECT(20)
-#endif
+#else
        mov r16=cr.ifa
        rsm psr.dt
        /*
@@ -1548,6 +1582,7 @@ ENTRY(page_not_present)
        mov r31=pr
        srlz.d
        br.sptk.many page_fault
+#endif
 END(page_not_present)
 
        .org ia64_ivt+0x5100
@@ -1557,13 +1592,14 @@ ENTRY(key_permission)
        DBG_FAULT(21)
 #ifdef XEN
        FAULT_OR_REFLECT(21)
-#endif
+#else
        mov r16=cr.ifa
        rsm psr.dt
        mov r31=pr
        ;;
        srlz.d
        br.sptk.many page_fault
+#endif
 END(key_permission)
 
        .org ia64_ivt+0x5200
@@ -1573,13 +1609,14 @@ ENTRY(iaccess_rights)
        DBG_FAULT(22)
 #ifdef XEN
        FAULT_OR_REFLECT(22)
-#endif
+#else
        mov r16=cr.ifa
        rsm psr.dt
        mov r31=pr
        ;;
        srlz.d
        br.sptk.many page_fault
+#endif
 END(iaccess_rights)
 
        .org ia64_ivt+0x5300
@@ -1594,13 +1631,14 @@ ENTRY(daccess_rights)
        mov r19=23
        movl r20=0x5300
        br.sptk.many fast_access_reflect;;
-#endif
+#else
        mov r16=cr.ifa
        rsm psr.dt
        mov r31=pr
        ;;
        srlz.d
        br.sptk.many page_fault
+#endif
 END(daccess_rights)
 
        .org ia64_ivt+0x5400
@@ -1667,8 +1705,9 @@ ENTRY(nat_consumption)
        DBG_FAULT(26)
 #ifdef XEN
        FAULT_OR_REFLECT(26)
-#endif
+#else
        FAULT(26)
+#endif
 END(nat_consumption)
 
        .org ia64_ivt+0x5700
@@ -1679,7 +1718,7 @@ ENTRY(speculation_vector)
 #ifdef XEN
        // this probably need not reflect...
        FAULT_OR_REFLECT(27)
-#endif
+#else
        /*
         * A [f]chk.[as] instruction needs to take the branch to the recovery code but
         * this part of the architecture is not implemented in hardware on some CPUs, such
@@ -1710,6 +1749,7 @@ ENTRY(speculation_vector)
        ;;
 
        rfi                             // and go back
+#endif
 END(speculation_vector)
 
        .org ia64_ivt+0x5800
@@ -1725,8 +1765,9 @@ ENTRY(debug_vector)
        DBG_FAULT(29)
 #ifdef XEN
        FAULT_OR_REFLECT(29)
-#endif
+#else
        FAULT(29)
+#endif
 END(debug_vector)
 
        .org ia64_ivt+0x5a00
@@ -1736,11 +1777,12 @@ ENTRY(unaligned_access)
        DBG_FAULT(30)
 #ifdef XEN
        FAULT_OR_REFLECT(30)
-#endif
+#else
        mov r16=cr.ipsr
        mov r31=pr              // prepare to save predicates
        ;;
        br.sptk.many dispatch_unaligned_handler
+#endif
 END(unaligned_access)
 
        .org ia64_ivt+0x5b00
@@ -1750,8 +1792,9 @@ ENTRY(unsupported_data_reference)
        DBG_FAULT(31)
 #ifdef XEN
        FAULT_OR_REFLECT(31)
-#endif
+#else
        FAULT(31)
+#endif
 END(unsupported_data_reference)
 
        .org ia64_ivt+0x5c00
@@ -1761,8 +1804,9 @@ ENTRY(floating_point_fault)
        DBG_FAULT(32)
 #ifdef XEN
        FAULT_OR_REFLECT(32)
-#endif
+#else
        FAULT(32)
+#endif
 END(floating_point_fault)
 
        .org ia64_ivt+0x5d00
@@ -1772,8 +1816,9 @@ ENTRY(floating_point_trap)
        DBG_FAULT(33)
 #ifdef XEN
        FAULT_OR_REFLECT(33)
-#endif
+#else
        FAULT(33)
+#endif
 END(floating_point_trap)
 
        .org ia64_ivt+0x5e00
@@ -1783,8 +1828,9 @@ ENTRY(lower_privilege_trap)
        DBG_FAULT(34)
 #ifdef XEN
        FAULT_OR_REFLECT(34)
-#endif
+#else
        FAULT(34)
+#endif
 END(lower_privilege_trap)
 
        .org ia64_ivt+0x5f00
@@ -1794,8 +1840,9 @@ ENTRY(taken_branch_trap)
        DBG_FAULT(35)
 #ifdef XEN
        FAULT_OR_REFLECT(35)
-#endif
+#else
        FAULT(35)
+#endif
 END(taken_branch_trap)
 
        .org ia64_ivt+0x6000
@@ -1805,8 +1852,9 @@ ENTRY(single_step_trap)
        DBG_FAULT(36)
 #ifdef XEN
        FAULT_OR_REFLECT(36)
-#endif
+#else
        FAULT(36)
+#endif
 END(single_step_trap)
 
        .org ia64_ivt+0x6100
@@ -1864,8 +1912,9 @@ ENTRY(ia32_exception)
        DBG_FAULT(45)
 #ifdef XEN
        FAULT_OR_REFLECT(45)
-#endif
+#else
        FAULT(45)
+#endif
 END(ia32_exception)
 
        .org ia64_ivt+0x6a00
@@ -1875,7 +1924,7 @@ ENTRY(ia32_intercept)
        DBG_FAULT(46)
 #ifdef XEN
        FAULT_OR_REFLECT(46)
-#endif
+#else
 #ifdef CONFIG_IA32_SUPPORT
        mov r31=pr
        mov r16=cr.isr
@@ -1899,6 +1948,7 @@ 1:
 1:
 #endif // CONFIG_IA32_SUPPORT
        FAULT(46)
+#endif
 END(ia32_intercept)
 
        .org ia64_ivt+0x6b00
@@ -1908,12 +1958,13 @@ ENTRY(ia32_interrupt)
        DBG_FAULT(47)
 #ifdef XEN
        FAULT_OR_REFLECT(47)
-#endif
+#else
 #ifdef CONFIG_IA32_SUPPORT
        mov r31=pr
        br.sptk.many dispatch_to_ia32_handler
 #else
        FAULT(47)
+#endif
 #endif
 END(ia32_interrupt)
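For readers following the dirty-bit (0x2000) hunk above: the new XEN path hands shadow-mode dirty faults to a C handler via b6, passing cr.ifa, cr.itir, cr.isr and a pointer to the saved register frame (adds out3=16,sp). The prototype below is only a sketch inferred from that argument setup; the authoritative declaration is introduced elsewhere in this changeset and is not shown in this mail.

/* Sketch only -- inferred from the out0..out3 setup in the dirty_bit
 * vector above; the real declaration lives in the ia64 headers touched
 * by this changeset. */
struct pt_regs;

void ia64_shadow_fault(unsigned long ifa, unsigned long itir,
                       unsigned long isr, struct pt_regs *regs);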
 
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/mm.c
--- a/xen/arch/ia64/xen/mm.c    Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/mm.c    Fri Jul 28 10:51:38 2006 +0100
@@ -170,6 +170,7 @@
 #include <asm/pgalloc.h>
 #include <asm/vhpt.h>
 #include <asm/vcpu.h>
+#include <asm/shadow.h>
 #include <linux/efi.h>
 
 #ifndef CONFIG_XEN_IA64_DOM0_VP
@@ -178,6 +179,8 @@ static void domain_page_flush(struct dom
 static void domain_page_flush(struct domain* d, unsigned long mpaddr,
                               unsigned long old_mfn, unsigned long new_mfn);
 #endif
+
+extern unsigned long ia64_iobase;
 
 static struct domain *dom_xen, *dom_io;
 
@@ -417,13 +420,13 @@ u64 translate_domain_pte(u64 pteval, u64
        u64 mask, mpaddr, pteval2;
        u64 arflags;
        u64 arflags2;
+       u64 maflags2;
 
        pteval &= ((1UL << 53) - 1);// ignore [63:53] bits
 
        // FIXME address had better be pre-validated on insert
        mask = ~itir_mask(itir.itir);
-       mpaddr = (((pteval & ~_PAGE_ED) & _PAGE_PPN_MASK) & ~mask) |
-                (address & mask);
+       mpaddr = ((pteval & _PAGE_PPN_MASK) & ~mask) | (address & mask);
 #ifdef CONFIG_XEN_IA64_DOM0_VP
        if (itir.ps > PAGE_SHIFT) {
                itir.ps = PAGE_SHIFT;
@@ -453,6 +456,8 @@ u64 translate_domain_pte(u64 pteval, u64
        }
 #endif
        pteval2 = lookup_domain_mpa(d, mpaddr, entry);
+
+       /* Check access rights.  */
        arflags  = pteval  & _PAGE_AR_MASK;
        arflags2 = pteval2 & _PAGE_AR_MASK;
        if (arflags != _PAGE_AR_R && arflags2 == _PAGE_AR_R) {
@@ -465,21 +470,53 @@ u64 translate_domain_pte(u64 pteval, u64
                        pteval2, arflags2, mpaddr);
 #endif
                pteval = (pteval & ~_PAGE_AR_MASK) | _PAGE_AR_R;
-    }
-
-       pteval2 &= _PAGE_PPN_MASK; // ignore non-addr bits
-       pteval2 |= (pteval & _PAGE_ED);
-       pteval2 |= _PAGE_PL_2; // force PL0->2 (PL3 is unaffected)
-       pteval2 = (pteval & ~_PAGE_PPN_MASK) | pteval2;
-       /*
-        * Don't let non-dom0 domains map uncached addresses.  This can
-        * happen when domU tries to touch i/o port space.  Also prevents
-        * possible address aliasing issues.
-        */
-       if (d != dom0)
-               pteval2 &= ~_PAGE_MA_MASK;
-
-       return pteval2;
+       }
+
+       /* Check memory attribute. The switch is on the *requested* memory
+          attribute.  */
+       maflags2 = pteval2 & _PAGE_MA_MASK;
+       switch (pteval & _PAGE_MA_MASK) {
+       case _PAGE_MA_NAT:
+               /* NaT pages are always accepted!  */                
+               break;
+       case _PAGE_MA_UC:
+       case _PAGE_MA_UCE:
+       case _PAGE_MA_WC:
+               if (maflags2 == _PAGE_MA_WB) {
+                       /* Don't let domains WB-map uncached addresses.
+                          This can happen when domU tries to touch i/o
+                          port space.  Also prevents possible address
+                          aliasing issues.  */
+                       printf("Warning: UC to WB for mpaddr=%lx\n", mpaddr);
+                       pteval = (pteval & ~_PAGE_MA_MASK) | _PAGE_MA_WB;
+               }
+               break;
+       case _PAGE_MA_WB:
+               if (maflags2 != _PAGE_MA_WB) {
+                       /* Forbid non-coherent access to coherent memory. */
+                       panic_domain(NULL, "try to use WB mem attr on "
+                                    "UC page, mpaddr=%lx\n", mpaddr);
+               }
+               break;
+       default:
+               panic_domain(NULL, "try to use unknown mem attribute\n");
+       }
+
+       /* If shadow mode is enabled, virtualize dirty bit.  */
+       if (shadow_mode_enabled(d) && (pteval & _PAGE_D)) {
+               u64 mp_page = mpaddr >> PAGE_SHIFT;
+               pteval |= _PAGE_VIRT_D;
+
+               /* If the page is not already dirty, don't set the dirty bit! */
+               if (mp_page < d->arch.shadow_bitmap_size * 8
+                   && !test_bit(mp_page, d->arch.shadow_bitmap))
+                       pteval &= ~_PAGE_D;
+       }
+    
+       /* Ignore non-addr bits of pteval2 and force PL0->2
+          (PL3 is unaffected) */
+       return (pteval & ~_PAGE_PPN_MASK) |
+              (pteval2 & _PAGE_PPN_MASK) | _PAGE_PL_2;
 }
 
 // given a current domain metaphysical address, return the physical address
@@ -583,7 +620,7 @@ lookup_alloc_domain_pte(struct domain* d
 }
 
 //XXX xxx_none() should be used instead of !xxx_present()?
-static volatile pte_t*
+volatile pte_t*
 lookup_noalloc_domain_pte(struct domain* d, unsigned long mpaddr)
 {
     struct mm_struct *mm = &d->arch.mm;
@@ -807,8 +844,19 @@ assign_new_domain0_page(struct domain *d
 #endif
 }
 
+static unsigned long
+flags_to_prot (unsigned long flags)
+{
+    unsigned long res = _PAGE_PL_2 | __DIRTY_BITS;
+
+    res |= flags & ASSIGN_readonly ? _PAGE_AR_R: _PAGE_AR_RWX;
+    res |= flags & ASSIGN_nocache ? _PAGE_MA_UC: _PAGE_MA_WB;
+    
+    return res;
+}
+
 /* map a physical address to the specified metaphysical addr */
-// flags: currently only ASSIGN_readonly
+// flags: currently only ASSIGN_readonly, ASSIGN_nocache
 // This is called by assign_domain_mmio_page().
 // So accessing to pte is racy.
 void
@@ -820,13 +868,12 @@ __assign_domain_page(struct domain *d,
     pte_t old_pte;
     pte_t new_pte;
     pte_t ret_pte;
-    unsigned long arflags = (flags & ASSIGN_readonly)? _PAGE_AR_R: _PAGE_AR_RWX;
+    unsigned long prot = flags_to_prot(flags);
 
     pte = lookup_alloc_domain_pte(d, mpaddr);
 
     old_pte = __pte(0);
-    new_pte = pfn_pte(physaddr >> PAGE_SHIFT,
-                      __pgprot(__DIRTY_BITS | _PAGE_PL_2 | arflags));
+    new_pte = pfn_pte(physaddr >> PAGE_SHIFT, __pgprot(prot));
     ret_pte = ptep_cmpxchg_rel(&d->arch.mm, mpaddr, pte, old_pte, new_pte);
     if (pte_val(ret_pte) == pte_val(old_pte))
         smp_mb();
@@ -849,6 +896,60 @@ assign_domain_page(struct domain *d,
     __assign_domain_page(d, mpaddr, physaddr, ASSIGN_writable);
 }
 
+int
+ioports_permit_access(struct domain *d, unsigned long fp, unsigned long lp)
+{
+    int ret;
+    unsigned long off;
+    unsigned long fp_offset;
+    unsigned long lp_offset;
+
+    ret = rangeset_add_range(d->arch.ioport_caps, fp, lp);
+    if (ret != 0)
+        return ret;
+
+    fp_offset = IO_SPACE_SPARSE_ENCODING(fp) & ~PAGE_MASK;
+    lp_offset = PAGE_ALIGN(IO_SPACE_SPARSE_ENCODING(lp));
+
+    for (off = fp_offset; off <= lp_offset; off += PAGE_SIZE)
+        __assign_domain_page(d, IO_PORTS_PADDR + off,
+                             ia64_iobase + off, ASSIGN_nocache);
+
+    return 0;
+}
+
+int
+ioports_deny_access(struct domain *d, unsigned long fp, unsigned long lp)
+{
+    int ret;
+    struct mm_struct *mm = &d->arch.mm;
+    unsigned long off;
+    unsigned long fp_offset;
+    unsigned long lp_offset;
+
+    ret = rangeset_remove_range(d->arch.ioport_caps, fp, lp);
+    if (ret != 0)
+        return ret;
+
+    fp_offset = IO_SPACE_SPARSE_ENCODING(fp) & ~PAGE_MASK;
+    lp_offset = PAGE_ALIGN(IO_SPACE_SPARSE_ENCODING(lp));
+
+    for (off = fp_offset; off <= lp_offset; off += PAGE_SIZE) {
+        unsigned long mpaddr = IO_PORTS_PADDR + off;
+        volatile pte_t *pte;
+        pte_t old_pte;
+
+        pte = lookup_noalloc_domain_pte_none(d, mpaddr);
+        BUG_ON(pte == NULL);
+        BUG_ON(pte_none(*pte));
+
+        // clear pte
+        old_pte = ptep_get_and_clear(mm, mpaddr, pte);
+    }
+    domain_flush_vtlb_all();
+    return 0;
+}
+
 #ifdef CONFIG_XEN_IA64_DOM0_VP
 static void
 assign_domain_same_page(struct domain *d,
@@ -925,7 +1026,7 @@ assign_domain_mmio_page(struct domain *d
                 __func__, __LINE__, d, mpaddr, size);
         return -EINVAL;
     }
-    assign_domain_same_page(d, mpaddr, size, ASSIGN_writable);
+    assign_domain_same_page(d, mpaddr, size, ASSIGN_writable | ASSIGN_nocache);
     return mpaddr;
 }
 
@@ -951,11 +1052,12 @@ assign_domain_page_replace(struct domain
     volatile pte_t* pte;
     pte_t old_pte;
     pte_t npte;
-    unsigned long arflags = (flags & ASSIGN_readonly)? _PAGE_AR_R: _PAGE_AR_RWX;
+    unsigned long prot = flags_to_prot(flags);
+
     pte = lookup_alloc_domain_pte(d, mpaddr);
 
     // update pte
-    npte = pfn_pte(mfn, __pgprot(__DIRTY_BITS | _PAGE_PL_2 | arflags));
+    npte = pfn_pte(mfn, __pgprot(prot));
     old_pte = ptep_xchg(mm, mpaddr, pte, npte);
     if (pte_mem(old_pte)) {
         unsigned long old_mfn = pte_pfn(old_pte);
@@ -997,7 +1099,7 @@ assign_domain_page_cmpxchg_rel(struct do
     unsigned long old_arflags;
     pte_t old_pte;
     unsigned long new_mfn;
-    unsigned long new_arflags;
+    unsigned long new_prot;
     pte_t new_pte;
     pte_t ret_pte;
 
@@ -1013,10 +1115,9 @@ assign_domain_page_cmpxchg_rel(struct do
         return -EINVAL;
     }
 
-    new_arflags = (flags & ASSIGN_readonly)? _PAGE_AR_R: _PAGE_AR_RWX;
+    new_prot = flags_to_prot(flags);
     new_mfn = page_to_mfn(new_page);
-    new_pte = pfn_pte(new_mfn,
-                      __pgprot(__DIRTY_BITS | _PAGE_PL_2 | new_arflags));
+    new_pte = pfn_pte(new_mfn, __pgprot(new_prot));
 
     // update pte
     ret_pte = ptep_cmpxchg_rel(mm, mpaddr, pte, old_pte, new_pte);
@@ -1136,6 +1237,10 @@ dom0vp_add_physmap(struct domain* d, uns
     int error = 0;
     struct domain* rd;
 
+    /* Not allowed by a domain.  */
+    if (flags & ASSIGN_nocache)
+        return -EINVAL;
+
     rd = find_domain_by_id(domid);
     if (unlikely(rd == NULL)) {
         switch (domid) {
@@ -1155,11 +1260,10 @@ dom0vp_add_physmap(struct domain* d, uns
         get_knownalive_domain(rd);
     }
 
-    if (unlikely(rd == d)) {
+    if (unlikely(rd == d || !mfn_valid(mfn))) {
         error = -EINVAL;
         goto out1;
     }
-    BUG_ON(!mfn_valid(mfn));
     if (unlikely(get_page(mfn_to_page(mfn), rd) == 0)) {
         error = -EINVAL;
         goto out1;
@@ -1416,10 +1520,13 @@ guest_physmap_remove_page(struct domain 
 
 //XXX sledgehammer.
 //    flush finer range.
-void
+static void
 domain_page_flush(struct domain* d, unsigned long mpaddr,
                   unsigned long old_mfn, unsigned long new_mfn)
 {
+    if (shadow_mode_enabled(d))
+        shadow_mark_page_dirty(d, mpaddr >> PAGE_SHIFT);
+
     domain_flush_vtlb_all();
 }
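The mm.c hunks above introduce flags_to_prot() so that the ASSIGN_readonly/ASSIGN_nocache flags are translated into PTE protection bits in one place instead of being open-coded at each caller. The standalone sketch below shows only the shape of that mapping; the _PAGE_* values are placeholders chosen for illustration, not the real ia64 definitions.

/* Standalone sketch with made-up bit values; the real constants come
 * from the ia64 page-table headers.  Shows the flags_to_prot() mapping
 * shape only. */
#include <stdio.h>

#define ASSIGN_readonly  0x1UL
#define ASSIGN_nocache   0x2UL   /* new flag in this changeset */

#define _PAGE_AR_R       0x100UL /* placeholder values */
#define _PAGE_AR_RWX     0x300UL
#define _PAGE_MA_UC      0x010UL
#define _PAGE_MA_WB      0x000UL
#define _PAGE_PL_2       0x004UL
#define __DIRTY_BITS     0x040UL

static unsigned long flags_to_prot(unsigned long flags)
{
    unsigned long res = _PAGE_PL_2 | __DIRTY_BITS;

    res |= (flags & ASSIGN_readonly) ? _PAGE_AR_R : _PAGE_AR_RWX;
    res |= (flags & ASSIGN_nocache)  ? _PAGE_MA_UC : _PAGE_MA_WB;
    return res;
}

int main(void)
{
    /* I/O port pages are mapped read-write but uncacheable. */
    printf("%#lx\n", flags_to_prot(ASSIGN_nocache));
    return 0;
}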
 
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/privop.c
--- a/xen/arch/ia64/xen/privop.c        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/privop.c        Fri Jul 28 10:51:38 2006 +0100
@@ -12,12 +12,11 @@
 #include <asm/delay.h> // Debug only
 #include <asm/dom_fw.h>
 #include <asm/vhpt.h>
-
-/* FIXME: where these declarations should be there ? */
-extern int dump_reflect_counts(char *);
-extern void zero_reflect_counts(void);
+#include <asm/bundle.h>
+#include <asm/privop_stat.h>
 
 long priv_verbose=0;
+unsigned long privop_trace = 0;
 
 /* Set to 1 to handle privified instructions from the privify tool. */
 #ifndef CONFIG_PRIVIFY
@@ -25,84 +24,6 @@ static const int privify_en = 0;
 #else
 static const int privify_en = 1;
 #endif
-
-/**************************************************************************
-Hypercall bundle creation
-**************************************************************************/
-
-void build_hypercall_bundle(UINT64 *imva, UINT64 brkimm, UINT64 hypnum, UINT64 ret)
-{
-       INST64_A5 slot0;
-       INST64_I19 slot1;
-       INST64_B4 slot2;
-       IA64_BUNDLE bundle;
-
-       // slot1: mov r2 = hypnum (low 20 bits)
-       slot0.inst = 0;
-       slot0.qp = 0; slot0.r1 = 2; slot0.r3 = 0; slot0.major = 0x9;
-       slot0.imm7b = hypnum; slot0.imm9d = hypnum >> 7;
-       slot0.imm5c = hypnum >> 16; slot0.s = 0;
-       // slot1: break brkimm
-       slot1.inst = 0;
-       slot1.qp = 0; slot1.x6 = 0; slot1.x3 = 0; slot1.major = 0x0;
-       slot1.imm20 = brkimm; slot1.i = brkimm >> 20;
-       // if ret slot2: br.ret.sptk.many rp
-       // else slot2: br.cond.sptk.many rp
-       slot2.inst = 0; slot2.qp = 0; slot2.p = 1; slot2.b2 = 0;
-       slot2.wh = 0; slot2.d = 0; slot2.major = 0x0;
-       if (ret) {
-               slot2.btype = 4; slot2.x6 = 0x21;
-       }
-       else {
-               slot2.btype = 0; slot2.x6 = 0x20;
-       }
-       
-       bundle.i64[0] = 0; bundle.i64[1] = 0;
-       bundle.template = 0x11;
-       bundle.slot0 = slot0.inst; bundle.slot2 = slot2.inst;
-       bundle.slot1a = slot1.inst; bundle.slot1b = slot1.inst >> 18;
-       
-       imva[0] = bundle.i64[0]; imva[1] = bundle.i64[1];
-       ia64_fc (imva);
-       ia64_fc (imva + 1);
-}
-
-void build_pal_hypercall_bundles(UINT64 *imva, UINT64 brkimm, UINT64 hypnum)
-{
-       extern unsigned long pal_call_stub[];
-       IA64_BUNDLE bundle;
-       INST64_A5 slot_a5;
-       INST64_M37 slot_m37;
-
-       /* The source of the hypercall stub is the pal_call_stub function
-          defined in xenasm.S.  */
-
-       /* Copy the first bundle and patch the hypercall number.  */
-       bundle.i64[0] = pal_call_stub[0];
-       bundle.i64[1] = pal_call_stub[1];
-       slot_a5.inst = bundle.slot0;
-       slot_a5.imm7b = hypnum;
-       slot_a5.imm9d = hypnum >> 7;
-       slot_a5.imm5c = hypnum >> 16;
-       bundle.slot0 = slot_a5.inst;
-       imva[0] = bundle.i64[0];
-       imva[1] = bundle.i64[1];
-       ia64_fc (imva);
-       ia64_fc (imva + 1);
-       
-       /* Copy the second bundle and patch the hypercall vector.  */
-       bundle.i64[0] = pal_call_stub[2];
-       bundle.i64[1] = pal_call_stub[3];
-       slot_m37.inst = bundle.slot0;
-       slot_m37.imm20a = brkimm;
-       slot_m37.i = brkimm >> 20;
-       bundle.slot0 = slot_m37.inst;
-       imva[2] = bundle.i64[0];
-       imva[3] = bundle.i64[1];
-       ia64_fc (imva + 2);
-       ia64_fc (imva + 3);
-}
-
 
 /**************************************************************************
 Privileged operation emulation routines
@@ -351,12 +272,10 @@ static IA64FAULT priv_mov_to_pmd(VCPU *v
        return (vcpu_set_pmd(vcpu,r3,r2));
 }
 
-unsigned long to_cr_cnt[128] = { 0 };
-
 static IA64FAULT priv_mov_to_cr(VCPU *vcpu, INST64 inst)
 {
        UINT64 val = vcpu_get_gr(vcpu, inst.M32.r2);
-       to_cr_cnt[inst.M32.cr3]++;
+       privcnt.to_cr_cnt[inst.M32.cr3]++;
        switch (inst.M32.cr3) {
            case 0: return vcpu_set_dcr(vcpu,val);
            case 1: return vcpu_set_itm(vcpu,val);
@@ -488,8 +407,6 @@ static IA64FAULT priv_mov_from_pmc(VCPU 
        return fault;
 }
 
-unsigned long from_cr_cnt[128] = { 0 };
-
 #define cr_get(cr) \
        ((fault = vcpu_get_##cr(vcpu,&val)) == IA64_NO_FAULT) ? \
                vcpu_set_gr(vcpu, tgt, val, 0) : fault;
@@ -500,7 +417,7 @@ static IA64FAULT priv_mov_from_cr(VCPU *
        UINT64 val;
        IA64FAULT fault;
 
-       from_cr_cnt[inst.M33.cr3]++;
+       privcnt.from_cr_cnt[inst.M33.cr3]++;
        switch (inst.M33.cr3) {
            case 0: return cr_get(dcr);
            case 1: return cr_get(itm);
@@ -586,28 +503,10 @@ static const PPEFCN Mpriv_funcs[64] = {
   0, 0, 0, 0, 0, 0, 0, 0
 };
 
-struct {
-       unsigned long mov_to_ar_imm;
-       unsigned long mov_to_ar_reg;
-       unsigned long mov_from_ar;
-       unsigned long ssm;
-       unsigned long rsm;
-       unsigned long rfi;
-       unsigned long bsw0;
-       unsigned long bsw1;
-       unsigned long cover;
-       unsigned long fc;
-       unsigned long cpuid;
-       unsigned long Mpriv_cnt[64];
-} privcnt = { 0 };
-
-unsigned long privop_trace = 0;
-
 static IA64FAULT
 priv_handle_op(VCPU *vcpu, REGS *regs, int privlvl)
 {
        IA64_BUNDLE bundle;
-       IA64_BUNDLE __get_domain_bundle(UINT64);
        int slot;
        IA64_SLOT_TYPE slot_type;
        INST64 inst;
@@ -787,18 +686,10 @@ priv_emulate(VCPU *vcpu, REGS *regs, UIN
                (void)vcpu_increment_iip(vcpu);
        }
        if (fault == IA64_ILLOP_FAULT)
-               printf("priv_emulate: priv_handle_op fails, isr=0x%lx\n",isr);
+               printf("priv_emulate: priv_handle_op fails, "
+                      "isr=0x%lx iip=%lx\n",isr, regs->cr_iip);
        return fault;
 }
-
-static const char * const hyperpriv_str[HYPERPRIVOP_MAX+1] = {
-       0, "rfi", "rsm.dt", "ssm.dt", "cover", "itc.d", "itc.i", "ssm.i",
-       "=ivr", "=tpr", "tpr=", "eoi", "itm=", "thash", "ptc.ga", "itr.d",
-       "=rr", "rr=", "kr=", "fc", "=cpuid", "=pmd", "=ar.eflg", "ar.eflg="
-};
-
-unsigned long slow_hyperpriv_cnt[HYPERPRIVOP_MAX+1] = { 0 };
-unsigned long fast_hyperpriv_cnt[HYPERPRIVOP_MAX+1] = { 0 };
 
 /* hyperprivops are generally executed in assembly (with physical psr.ic off)
  * so this code is primarily used for debugging them */
@@ -809,10 +700,9 @@ ia64_hyperprivop(unsigned long iim, REGS
        UINT64 val;
        UINT64 itir, ifa;
 
-// FIXME: Handle faults appropriately for these
        if (!iim || iim > HYPERPRIVOP_MAX) {
-               printf("bad hyperprivop; ignored\n");
-               printf("iim=%lx, iip=0x%lx\n", iim, regs->cr_iip);
+               panic_domain(regs, "bad hyperprivop: iim=%lx, iip=0x%lx\n",
+                            iim, regs->cr_iip);
                return 1;
        }
        slow_hyperpriv_cnt[iim]++;
@@ -899,293 +789,15 @@ ia64_hyperprivop(unsigned long iim, REGS
            case HYPERPRIVOP_SET_EFLAG:
                (void)vcpu_set_ar(v,24,regs->r8);
                return 1;
+           case HYPERPRIVOP_RSM_BE:
+               (void)vcpu_reset_psr_sm(v, IA64_PSR_BE);
+               return 1;
+           case HYPERPRIVOP_GET_PSR:
+               (void)vcpu_get_psr(v, &val);
+               regs->r8 = val;
+               return 1;
        }
        return 0;
 }
 
 
-/**************************************************************************
-Privileged operation instrumentation routines
-**************************************************************************/
-
-static const char * const Mpriv_str[64] = {
-  "mov_to_rr", "mov_to_dbr", "mov_to_ibr", "mov_to_pkr",
-  "mov_to_pmc", "mov_to_pmd", "<0x06>", "<0x07>",
-  "<0x08>", "ptc_l", "ptc_g", "ptc_ga",
-  "ptr_d", "ptr_i", "itr_d", "itr_i",
-  "mov_from_rr", "mov_from_dbr", "mov_from_ibr", "mov_from_pkr",
-  "mov_from_pmc", "<0x15>", "<0x16>", "<0x17>",
-  "<0x18>", "<0x19>", "privified-thash", "privified-ttag",
-  "<0x1c>", "<0x1d>", "tpa", "tak",
-  "<0x20>", "<0x21>", "<0x22>", "<0x23>",
-  "mov_from_cr", "mov_from_psr", "<0x26>", "<0x27>",
-  "<0x28>", "<0x29>", "<0x2a>", "<0x2b>",
-  "mov_to_cr", "mov_to_psr", "itc_d", "itc_i",
-  "<0x30>", "<0x31>", "<0x32>", "<0x33>",
-  "ptc_e", "<0x35>", "<0x36>", "<0x37>",
-  "<0x38>", "<0x39>", "<0x3a>", "<0x3b>",
-  "<0x3c>", "<0x3d>", "<0x3e>", "<0x3f>"
-};
-
-#define RS "Rsvd"
-static const char * const cr_str[128] = {
-  "dcr","itm","iva",RS,RS,RS,RS,RS,
-  "pta",RS,RS,RS,RS,RS,RS,RS,
-  "ipsr","isr",RS,"iip","ifa","itir","iipa","ifs",
-  "iim","iha",RS,RS,RS,RS,RS,RS,
-  RS,RS,RS,RS,RS,RS,RS,RS, RS,RS,RS,RS,RS,RS,RS,RS,
-  RS,RS,RS,RS,RS,RS,RS,RS, RS,RS,RS,RS,RS,RS,RS,RS,
-  "lid","ivr","tpr","eoi","irr0","irr1","irr2","irr3",
-  "itv","pmv","cmcv",RS,RS,RS,RS,RS,
-  "lrr0","lrr1",RS,RS,RS,RS,RS,RS,
-  RS,RS,RS,RS,RS,RS,RS,RS, RS,RS,RS,RS,RS,RS,RS,RS,
-  RS,RS,RS,RS,RS,RS,RS,RS, RS,RS,RS,RS,RS,RS,RS,RS,
-  RS,RS,RS,RS,RS,RS,RS,RS
-};
-
-// FIXME: should use snprintf to ensure no buffer overflow
-static int dump_privop_counts(char *buf)
-{
-       int i, j;
-       UINT64 sum = 0;
-       char *s = buf;
-
-       // this is ugly and should probably produce sorted output
-       // but it will have to do for now
-       sum += privcnt.mov_to_ar_imm; sum += privcnt.mov_to_ar_reg;
-       sum += privcnt.ssm; sum += privcnt.rsm;
-       sum += privcnt.rfi; sum += privcnt.bsw0;
-       sum += privcnt.bsw1; sum += privcnt.cover;
-       for (i=0; i < 64; i++) sum += privcnt.Mpriv_cnt[i];
-       s += sprintf(s,"Privop statistics: (Total privops: %ld)\n",sum);
-       if (privcnt.mov_to_ar_imm)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.mov_to_ar_imm,
-                       "mov_to_ar_imm", (privcnt.mov_to_ar_imm*100L)/sum);
-       if (privcnt.mov_to_ar_reg)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.mov_to_ar_reg,
-                       "mov_to_ar_reg", (privcnt.mov_to_ar_reg*100L)/sum);
-       if (privcnt.mov_from_ar)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.mov_from_ar,
-                       "privified-mov_from_ar", (privcnt.mov_from_ar*100L)/sum);
-       if (privcnt.ssm)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.ssm,
-                       "ssm", (privcnt.ssm*100L)/sum);
-       if (privcnt.rsm)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.rsm,
-                       "rsm", (privcnt.rsm*100L)/sum);
-       if (privcnt.rfi)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.rfi,
-                       "rfi", (privcnt.rfi*100L)/sum);
-       if (privcnt.bsw0)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.bsw0,
-                       "bsw0", (privcnt.bsw0*100L)/sum);
-       if (privcnt.bsw1)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.bsw1,
-                       "bsw1", (privcnt.bsw1*100L)/sum);
-       if (privcnt.cover)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.cover,
-                       "cover", (privcnt.cover*100L)/sum);
-       if (privcnt.fc)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.fc,
-                       "privified-fc", (privcnt.fc*100L)/sum);
-       if (privcnt.cpuid)
-               s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.cpuid,
-                       "privified-getcpuid", (privcnt.cpuid*100L)/sum);
-       for (i=0; i < 64; i++) if (privcnt.Mpriv_cnt[i]) {
-               if (!Mpriv_str[i]) s += sprintf(s,"PRIVSTRING NULL!!\n");
-               else s += sprintf(s,"%10ld  %s [%ld%%]\n", privcnt.Mpriv_cnt[i],
-                       Mpriv_str[i], (privcnt.Mpriv_cnt[i]*100L)/sum);
-               if (i == 0x24) { // mov from CR
-                       s += sprintf(s,"            [");
-                       for (j=0; j < 128; j++) if (from_cr_cnt[j]) {
-                               if (!cr_str[j])
-                                       s += sprintf(s,"PRIVSTRING NULL!!\n");
-                       s += sprintf(s,"%s(%ld),",cr_str[j],from_cr_cnt[j]);
-                       }
-                       s += sprintf(s,"]\n");
-               }
-               else if (i == 0x2c) { // mov to CR
-                       s += sprintf(s,"            [");
-                       for (j=0; j < 128; j++) if (to_cr_cnt[j]) {
-                               if (!cr_str[j])
-                                       s += sprintf(s,"PRIVSTRING NULL!!\n");
-                       s += sprintf(s,"%s(%ld),",cr_str[j],to_cr_cnt[j]);
-                       }
-                       s += sprintf(s,"]\n");
-               }
-       }
-       return s - buf;
-}
-
-static int zero_privop_counts(char *buf)
-{
-       int i, j;
-       char *s = buf;
-
-       // this is ugly and should probably produce sorted output
-       // but it will have to do for now
-       privcnt.mov_to_ar_imm = 0; privcnt.mov_to_ar_reg = 0;
-       privcnt.mov_from_ar = 0;
-       privcnt.ssm = 0; privcnt.rsm = 0;
-       privcnt.rfi = 0; privcnt.bsw0 = 0;
-       privcnt.bsw1 = 0; privcnt.cover = 0;
-       privcnt.fc = 0; privcnt.cpuid = 0;
-       for (i=0; i < 64; i++) privcnt.Mpriv_cnt[i] = 0;
-       for (j=0; j < 128; j++) from_cr_cnt[j] = 0;
-       for (j=0; j < 128; j++) to_cr_cnt[j] = 0;
-       s += sprintf(s,"All privop statistics zeroed\n");
-       return s - buf;
-}
-
-#ifdef PRIVOP_ADDR_COUNT
-
-extern struct privop_addr_count privop_addr_counter[];
-
-void privop_count_addr(unsigned long iip, int inst)
-{
-       struct privop_addr_count *v = &privop_addr_counter[inst];
-       int i;
-
-       for (i = 0; i < PRIVOP_COUNT_NADDRS; i++) {
-               if (!v->addr[i]) { v->addr[i] = iip; v->count[i]++; return; }
-               else if (v->addr[i] == iip)  { v->count[i]++; return; }
-       }
-       v->overflow++;;
-}
-
-static int dump_privop_addrs(char *buf)
-{
-       int i,j;
-       char *s = buf;
-       s += sprintf(s,"Privop addresses:\n");
-       for (i = 0; i < PRIVOP_COUNT_NINSTS; i++) {
-               struct privop_addr_count *v = &privop_addr_counter[i];
-               s += sprintf(s,"%s:\n",v->instname);
-               for (j = 0; j < PRIVOP_COUNT_NADDRS; j++) {
-                       if (!v->addr[j]) break;
-                       s += sprintf(s," at 0x%lx #%ld\n",v->addr[j],v->count[j]);
-               }
-               if (v->overflow) 
-                       s += sprintf(s," other #%ld\n",v->overflow);
-       }
-       return s - buf;
-}
-
-static void zero_privop_addrs(void)
-{
-       int i,j;
-       for (i = 0; i < PRIVOP_COUNT_NINSTS; i++) {
-               struct privop_addr_count *v = &privop_addr_counter[i];
-               for (j = 0; j < PRIVOP_COUNT_NADDRS; j++)
-                       v->addr[j] = v->count[j] = 0;
-               v->overflow = 0;
-       }
-}
-#endif
-
-extern unsigned long dtlb_translate_count;
-extern unsigned long tr_translate_count;
-extern unsigned long phys_translate_count;
-extern unsigned long vhpt_translate_count;
-extern unsigned long fast_vhpt_translate_count;
-extern unsigned long recover_to_page_fault_count;
-extern unsigned long recover_to_break_fault_count;
-extern unsigned long lazy_cover_count;
-extern unsigned long idle_when_pending;
-extern unsigned long pal_halt_light_count;
-extern unsigned long context_switch_count;
-
-static int dump_misc_stats(char *buf)
-{
-       char *s = buf;
-       s += sprintf(s,"Virtual TR translations: %ld\n",tr_translate_count);
-       s += sprintf(s,"Virtual VHPT slow translations: %ld\n",vhpt_translate_count);
-       s += sprintf(s,"Virtual VHPT fast translations: %ld\n",fast_vhpt_translate_count);
-       s += sprintf(s,"Virtual DTLB translations: %ld\n",dtlb_translate_count);
-       s += sprintf(s,"Physical translations: %ld\n",phys_translate_count);
-       s += sprintf(s,"Recoveries to page fault: %ld\n",recover_to_page_fault_count);
-       s += sprintf(s,"Recoveries to break fault: %ld\n",recover_to_break_fault_count);
-       s += sprintf(s,"Idle when pending: %ld\n",idle_when_pending);
-       s += sprintf(s,"PAL_HALT_LIGHT (no pending): %ld\n",pal_halt_light_count);
-       s += sprintf(s,"context switches: %ld\n",context_switch_count);
-       s += sprintf(s,"Lazy covers: %ld\n",lazy_cover_count);
-       return s - buf;
-}
-
-static void zero_misc_stats(void)
-{
-       dtlb_translate_count = 0;
-       tr_translate_count = 0;
-       phys_translate_count = 0;
-       vhpt_translate_count = 0;
-       fast_vhpt_translate_count = 0;
-       recover_to_page_fault_count = 0;
-       recover_to_break_fault_count = 0;
-       lazy_cover_count = 0;
-       pal_halt_light_count = 0;
-       idle_when_pending = 0;
-       context_switch_count = 0;
-}
-
-static int dump_hyperprivop_counts(char *buf)
-{
-       int i;
-       char *s = buf;
-       unsigned long total = 0;
-       for (i = 1; i <= HYPERPRIVOP_MAX; i++) total += slow_hyperpriv_cnt[i];
-       s += sprintf(s,"Slow hyperprivops (total %ld):\n",total);
-       for (i = 1; i <= HYPERPRIVOP_MAX; i++)
-               if (slow_hyperpriv_cnt[i])
-                       s += sprintf(s,"%10ld %s\n",
-                               slow_hyperpriv_cnt[i], hyperpriv_str[i]);
-       total = 0;
-       for (i = 1; i <= HYPERPRIVOP_MAX; i++) total += fast_hyperpriv_cnt[i];
-       s += sprintf(s,"Fast hyperprivops (total %ld):\n",total);
-       for (i = 1; i <= HYPERPRIVOP_MAX; i++)
-               if (fast_hyperpriv_cnt[i])
-                       s += sprintf(s,"%10ld %s\n",
-                               fast_hyperpriv_cnt[i], hyperpriv_str[i]);
-       return s - buf;
-}
-
-static void zero_hyperprivop_counts(void)
-{
-       int i;
-       for (i = 0; i <= HYPERPRIVOP_MAX; i++) slow_hyperpriv_cnt[i] = 0;
-       for (i = 0; i <= HYPERPRIVOP_MAX; i++) fast_hyperpriv_cnt[i] = 0;
-}
-
-#define TMPBUFLEN 8*1024
-int dump_privop_counts_to_user(char __user *ubuf, int len)
-{
-       char buf[TMPBUFLEN];
-       int n = dump_privop_counts(buf);
-
-       n += dump_hyperprivop_counts(buf + n);
-       n += dump_reflect_counts(buf + n);
-#ifdef PRIVOP_ADDR_COUNT
-       n += dump_privop_addrs(buf + n);
-#endif
-       n += dump_vhpt_stats(buf + n);
-       n += dump_misc_stats(buf + n);
-       if (len < TMPBUFLEN) return -1;
-       if (__copy_to_user(ubuf,buf,n)) return -1;
-       return n;
-}
-
-int zero_privop_counts_to_user(char __user *ubuf, int len)
-{
-       char buf[TMPBUFLEN];
-       int n = zero_privop_counts(buf);
-
-       zero_hyperprivop_counts();
-#ifdef PRIVOP_ADDR_COUNT
-       zero_privop_addrs();
-#endif
-       zero_vhpt_stats();
-       zero_misc_stats();
-       zero_reflect_counts();
-       if (len < TMPBUFLEN) return -1;
-       if (__copy_to_user(ubuf,buf,n)) return -1;
-       return n;
-}
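The large block of statistics code removed above (the privcnt struct, the to_cr_cnt/from_cr_cnt arrays, the hyperprivop counters and the dump/zero helpers) is consolidated behind the new asm/privop_stat.h include rather than dropped outright; the earlier hunks that bump privcnt.to_cr_cnt[...] and privcnt.from_cr_cnt[...] show the per-CR arrays becoming members of that structure. The layout below is an assumed sketch of such a consolidated counter structure, not the contents of the new header.

/* Assumed layout -- the real definition is in asm/privop_stat.h, which
 * is not part of this mail.  Illustrates the consolidation only. */
struct privop_counters {
    unsigned long mov_to_ar_imm, mov_to_ar_reg, mov_from_ar;
    unsigned long ssm, rsm, rfi, bsw0, bsw1, cover, fc, cpuid;
    unsigned long Mpriv_cnt[64];
    unsigned long to_cr_cnt[128];    /* was a file-scope array */
    unsigned long from_cr_cnt[128];  /* was a file-scope array */
};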
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/regionreg.c
--- a/xen/arch/ia64/xen/regionreg.c     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/regionreg.c     Fri Jul 28 10:51:38 2006 +0100
@@ -342,22 +342,3 @@ void load_region_regs(struct vcpu *v)
                panic_domain(0,"load_region_regs: can't set! bad=%lx\n",bad);
        }
 }
-
-void load_region_reg7_and_pta(struct vcpu *v)
-{
-       unsigned long rr7, pta;
-
-       if (!is_idle_domain(v->domain)) {  
-               ia64_set_pta(VHPT_ADDR | (1 << 8) | (VHPT_SIZE_LOG2 << 2) |
-                            VHPT_ENABLED);
-
-               // TODO: These probably should be validated
-               rr7 =  VCPU(v,rrs[7]);
-               if (!set_one_rr(0xe000000000000000L, rr7))
-                       panic_domain(0, "%s: can't set!\n", __func__);
-       }
-       else {
-               pta = ia64_get_pta();
-               ia64_set_pta(pta & ~VHPT_ENABLED);
-       }
-}
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/vcpu.c
--- a/xen/arch/ia64/xen/vcpu.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/vcpu.c  Fri Jul 28 10:51:38 2006 +0100
@@ -20,6 +20,8 @@
 #include <asm/privop.h>
 #include <xen/event.h>
 #include <asm/vmx_phy_mode.h>
+#include <asm/bundle.h>
+#include <asm/privop_stat.h>
 
 /* FIXME: where these declarations should be there ? */
 extern void getreg(unsigned long regnum, unsigned long *val, int *nat, struct pt_regs *regs);
@@ -27,9 +29,6 @@ extern void getfpreg (unsigned long regn
 extern void getfpreg (unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs);
 
 extern void setfpreg (unsigned long regnum, struct ia64_fpreg *fpval, struct pt_regs *regs);
-
-extern void panic_domain(struct pt_regs *, const char *, ...);
-extern IA64_BUNDLE __get_domain_bundle(UINT64);
 
 typedef        union {
        struct ia64_psr ia64_psr;
@@ -46,24 +45,6 @@ typedef      union {
 #define        IA64_PTA_BASE_BIT       15
 #define        IA64_PTA_LFMT           (1UL << IA64_PTA_VF_BIT)
 #define        IA64_PTA_SZ(x)  (x##UL << IA64_PTA_SZ_BIT)
-
-#define STATIC
-
-#ifdef PRIVOP_ADDR_COUNT
-struct privop_addr_count privop_addr_counter[PRIVOP_COUNT_NINSTS+1] = {
-       { "=ifa",  { 0 }, { 0 }, 0 },
-       { "thash", { 0 }, { 0 }, 0 },
-       { 0,       { 0 }, { 0 }, 0 }
-};
-extern void privop_count_addr(unsigned long addr, int inst);
-#define        PRIVOP_COUNT_ADDR(regs,inst) privop_count_addr(regs->cr_iip,inst)
-#else
-#define        PRIVOP_COUNT_ADDR(x,y) do {} while (0)
-#endif
-
-unsigned long dtlb_translate_count = 0;
-unsigned long tr_translate_count = 0;
-unsigned long phys_translate_count = 0;
 
 unsigned long vcpu_verbose = 0;
 
@@ -282,7 +263,6 @@ IA64FAULT vcpu_reset_psr_sm(VCPU *vcpu, 
        return IA64_NO_FAULT;
 }
 
-#define SPURIOUS_VECTOR 0xf
 
 IA64FAULT vcpu_set_psr_dt(VCPU *vcpu)
 {
@@ -446,8 +426,6 @@ UINT64 vcpu_get_ipsr_int_state(VCPU *vcp
 
 IA64FAULT vcpu_get_dcr(VCPU *vcpu, UINT64 *pval)
 {
-//extern unsigned long privop_trace;
-//privop_trace=0;
 //verbose("vcpu_get_dcr: called @%p\n",PSCB(vcpu,iip));
        // Reads of cr.dcr on Xen always have the sign bit set, so
        // a domain can differentiate whether it is running on SP or not
@@ -495,10 +473,8 @@ IA64FAULT vcpu_get_iip(VCPU *vcpu, UINT6
 
 IA64FAULT vcpu_get_ifa(VCPU *vcpu, UINT64 *pval)
 {
-       UINT64 val = PSCB(vcpu,ifa);
-       REGS *regs = vcpu_regs(vcpu);
-       PRIVOP_COUNT_ADDR(regs,_GET_IFA);
-       *pval = val;
+       PRIVOP_COUNT_ADDR(vcpu_regs(vcpu),_GET_IFA);
+       *pval = PSCB(vcpu,ifa);
        return (IA64_NO_FAULT);
 }
 
@@ -564,18 +540,13 @@ IA64FAULT vcpu_get_iim(VCPU *vcpu, UINT6
 
 IA64FAULT vcpu_get_iha(VCPU *vcpu, UINT64 *pval)
 {
-       //return vcpu_thash(vcpu,PSCB(vcpu,ifa),pval);
-       UINT64 val = PSCB(vcpu,iha);
-       REGS *regs = vcpu_regs(vcpu);
-       PRIVOP_COUNT_ADDR(regs,_THASH);
-       *pval = val;
+       PRIVOP_COUNT_ADDR(vcpu_regs(vcpu),_THASH);
+       *pval = PSCB(vcpu,iha);
        return (IA64_NO_FAULT);
 }
 
 IA64FAULT vcpu_set_dcr(VCPU *vcpu, UINT64 val)
 {
-//extern unsigned long privop_trace;
-//privop_trace=1;
        // Reads of cr.dcr on SP always have the sign bit set, so
        // a domain can differentiate whether it is running on SP or not
        // Thus, writes of DCR should ignore the sign bit
@@ -1332,11 +1303,6 @@ IA64FAULT vcpu_ttag(VCPU *vcpu, UINT64 v
        return (IA64_ILLOP_FAULT);
 }
 
-unsigned long vhpt_translate_count = 0;
-unsigned long fast_vhpt_translate_count = 0;
-unsigned long recover_to_page_fault_count = 0;
-unsigned long recover_to_break_fault_count = 0;
-
 int warn_region0_address = 0; // FIXME later: tie to a boot parameter?
 
 /* Return TRUE iff [b1,e1] and [b2,e2] partially or fully overlaps.  */
@@ -1386,7 +1352,7 @@ static TR_ENTRY*
 static TR_ENTRY*
 vcpu_tr_lookup(VCPU* vcpu, unsigned long va, UINT64 rid, BOOLEAN is_data)
 {
-       unsigned int* regions;
+       unsigned char* regions;
        TR_ENTRY *trp;
        int tr_max;
        int i;
@@ -1911,13 +1877,15 @@ IA64FAULT vcpu_set_pkr(VCPU *vcpu, UINT6
  VCPU translation register access routines
 **************************************************************************/
 
-static void vcpu_set_tr_entry(TR_ENTRY *trp, UINT64 pte, UINT64 itir, UINT64 ifa)
+static void
+vcpu_set_tr_entry_rid(TR_ENTRY *trp, UINT64 pte,
+                      UINT64 itir, UINT64 ifa, UINT64 rid)
 {
        UINT64 ps;
        union pte_flags new_pte;
 
        trp->itir = itir;
-       trp->rid = VCPU(current,rrs[ifa>>61]) & RR_RID_MASK;
+       trp->rid = rid;
        ps = trp->ps;
        new_pte.val = pte;
        if (new_pte.pl < 2) new_pte.pl = 2;
@@ -1931,29 +1899,100 @@ static void vcpu_set_tr_entry(TR_ENTRY *
        trp->pte.val = new_pte.val;
 }
 
+static inline void
+vcpu_set_tr_entry(TR_ENTRY *trp, UINT64 pte, UINT64 itir, UINT64 ifa)
+{
+       vcpu_set_tr_entry_rid(trp, pte, itir, ifa,
+                             VCPU(current, rrs[ifa>>61]) & RR_RID_MASK);
+}
+
 IA64FAULT vcpu_itr_d(VCPU *vcpu, UINT64 slot, UINT64 pte,
-               UINT64 itir, UINT64 ifa)
+                     UINT64 itir, UINT64 ifa)
 {
        TR_ENTRY *trp;
 
        if (slot >= NDTRS) return IA64_RSVDREG_FAULT;
+
+       vcpu_purge_tr_entry(&PSCBX(vcpu, dtlb));
+
        trp = &PSCBX(vcpu,dtrs[slot]);
 //printf("***** itr.d: setting slot %d: ifa=%p\n",slot,ifa);
        vcpu_set_tr_entry(trp,pte,itir,ifa);
        vcpu_quick_region_set(PSCBX(vcpu,dtr_regions),ifa);
+
+       /*
+        * FIXME According to spec, vhpt should be purged, but this
+        * incurs considerable performance loss, since it is safe for
+        * linux not to purge vhpt, vhpt purge is disabled until a
+        * feasible way is found.
+        *
+        * vcpu_flush_tlb_vhpt_range(ifa & itir_mask(itir), itir_ps(itir));
+        */
+
        return IA64_NO_FAULT;
 }
 
 IA64FAULT vcpu_itr_i(VCPU *vcpu, UINT64 slot, UINT64 pte,
-               UINT64 itir, UINT64 ifa)
+                     UINT64 itir, UINT64 ifa)
 {
        TR_ENTRY *trp;
 
        if (slot >= NITRS) return IA64_RSVDREG_FAULT;
+
+       vcpu_purge_tr_entry(&PSCBX(vcpu, itlb));
+
        trp = &PSCBX(vcpu,itrs[slot]);
 //printf("***** itr.i: setting slot %d: ifa=%p\n",slot,ifa);
        vcpu_set_tr_entry(trp,pte,itir,ifa);
        vcpu_quick_region_set(PSCBX(vcpu,itr_regions),ifa);
+
+       /*
+        * FIXME According to spec, vhpt should be purged, but this
+        * incurs considerable performance loss, since it is safe for
+        * linux not to purge vhpt, vhpt purge is disabled until a
+        * feasible way is found.
+        *
+        * vcpu_flush_tlb_vhpt_range(ifa & itir_mask(itir), itir_ps(itir));
+        */
+
+       return IA64_NO_FAULT;
+}
+
+IA64FAULT vcpu_set_itr(VCPU *vcpu, u64 slot, u64 pte,
+                       u64 itir, u64 ifa, u64 rid)
+{
+       TR_ENTRY *trp;
+
+       if (slot >= NITRS)
+               return IA64_RSVDREG_FAULT;
+       trp = &PSCBX(vcpu, itrs[slot]);
+       vcpu_set_tr_entry_rid(trp, pte, itir, ifa, rid);
+
+       /* Recompute the itr_region.  */
+       vcpu->arch.itr_regions = 0;
+       for (trp = vcpu->arch.itrs; trp < &vcpu->arch.itrs[NITRS]; trp++)
+               if (trp->pte.p)
+                       vcpu_quick_region_set(vcpu->arch.itr_regions,
+                                             trp->vadr);
+       return IA64_NO_FAULT;
+}
+
+IA64FAULT vcpu_set_dtr(VCPU *vcpu, u64 slot, u64 pte,
+                       u64 itir, u64 ifa, u64 rid)
+{
+       TR_ENTRY *trp;
+
+       if (slot >= NDTRS)
+               return IA64_RSVDREG_FAULT;
+       trp = &PSCBX(vcpu, dtrs[slot]);
+       vcpu_set_tr_entry_rid(trp, pte, itir, ifa, rid);
+
+       /* Recompute the dtr_region.  */
+       vcpu->arch.dtr_regions = 0;
+       for (trp = vcpu->arch.dtrs; trp < &vcpu->arch.dtrs[NDTRS]; trp++)
+               if (trp->pte.p)
+                       vcpu_quick_region_set(vcpu->arch.dtr_regions,
+                                             trp->vadr);
        return IA64_NO_FAULT;
 }
 
@@ -2021,7 +2060,7 @@ again:
        vcpu_itc_no_srlz(vcpu,2,ifa,pteval,pte,logps);
        if (swap_rr0) set_metaphysical_rr0();
        if (p2m_entry_retry(&entry)) {
-               vcpu_flush_tlb_vhpt_range(ifa & ((1 << logps) - 1), logps);
+               vcpu_flush_tlb_vhpt_range(ifa, logps);
                goto again;
        }
        return IA64_NO_FAULT;
@@ -2044,7 +2083,7 @@ again:
        vcpu_itc_no_srlz(vcpu, 1,ifa,pteval,pte,logps);
        if (swap_rr0) set_metaphysical_rr0();
        if (p2m_entry_retry(&entry)) {
-               vcpu_flush_tlb_vhpt_range(ifa & ((1 << logps) - 1), logps);
+               vcpu_flush_tlb_vhpt_range(ifa, logps);
                goto again;
        }
        return IA64_NO_FAULT;
@@ -2096,7 +2135,7 @@ IA64FAULT vcpu_ptc_e(VCPU *vcpu, UINT64 
        // architected loop to purge the entire TLB, should use
        //  base = stride1 = stride2 = 0, count0 = count 1 = 1
 
-       vcpu_flush_vtlb_all ();
+       vcpu_flush_vtlb_all(current);
 
        return IA64_NO_FAULT;
 }
@@ -2178,7 +2217,6 @@ IA64FAULT vcpu_ptr_i(VCPU *vcpu,UINT64 v
                        vcpu_quick_region_set(vcpu->arch.itr_regions,
                                              trp->vadr);
 
-
        vcpu_flush_tlb_vhpt_range (vadr, log_range);
 
        return IA64_NO_FAULT;
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/vhpt.c
--- a/xen/arch/ia64/xen/vhpt.c  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/vhpt.c  Fri Jul 28 10:51:38 2006 +0100
@@ -23,7 +23,7 @@ DEFINE_PER_CPU (unsigned long, vhpt_padd
 DEFINE_PER_CPU (unsigned long, vhpt_paddr);
 DEFINE_PER_CPU (unsigned long, vhpt_pend);
 
-static void vhpt_flush(void)
+void vhpt_flush(void)
 {
        struct vhpt_lf_entry *v = __va(__ia64_per_cpu_var(vhpt_paddr));
        int i;
@@ -129,10 +129,8 @@ void vhpt_init(void)
 }
 
 
-void vcpu_flush_vtlb_all (void)
-{
-       struct vcpu *v = current;
-
+void vcpu_flush_vtlb_all(struct vcpu *v)
+{
        /* First VCPU tlb.  */
        vcpu_purge_tr_entry(&PSCBX(v,dtlb));
        vcpu_purge_tr_entry(&PSCBX(v,itlb));
@@ -148,6 +146,11 @@ void vcpu_flush_vtlb_all (void)
           check this.  */
 }
 
+static void __vcpu_flush_vtlb_all(void *vcpu)
+{
+       vcpu_flush_vtlb_all((struct vcpu*)vcpu);
+}
+
 void domain_flush_vtlb_all (void)
 {
        int cpu = smp_processor_id ();
@@ -158,12 +161,11 @@ void domain_flush_vtlb_all (void)
                        continue;
 
                if (v->processor == cpu)
-                       vcpu_flush_vtlb_all ();
+                       vcpu_flush_vtlb_all(v);
                else
-                       smp_call_function_single
-                               (v->processor,
-                                (void(*)(void *))vcpu_flush_vtlb_all,
-                                NULL,1,1);
+                       smp_call_function_single(v->processor,
+                                                __vcpu_flush_vtlb_all,
+                                                v, 1, 1);
        }
 }
 
@@ -234,7 +236,7 @@ static void flush_tlb_vhpt_all (struct d
        local_flush_tlb_all ();
 }
 
-void domain_flush_destroy (struct domain *d)
+void domain_flush_tlb_vhpt(struct domain *d)
 {
        /* Very heavy...  */
        on_each_cpu ((void (*)(void *))flush_tlb_vhpt_all, d, 1, 1);
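One small but typical cleanup in the vhpt.c hunks above: vcpu_flush_vtlb_all() now takes the vcpu explicitly, so the cross-CPU call goes through a dedicated void* wrapper (__vcpu_flush_vtlb_all) instead of casting the function pointer passed to smp_call_function_single(). Reduced to a generic sketch, with placeholder names that are not Xen APIs:

/* Generic illustration of the trampoline pattern used above; the names
 * are placeholders, not Xen APIs. */
struct vcpu { int id; };                    /* stand-in for the real struct */

static void flush_one_vcpu(struct vcpu *v)  /* the typed worker */
{
    (void)v;                                /* real code would flush v's TLB */
}

/* An smp_call_function_single()-style API only accepts void (*)(void *);
 * the cast back to the real type happens inside a wrapper, because
 * calling through a mismatched function-pointer cast is undefined
 * behaviour in C. */
static void flush_one_vcpu_trampoline(void *arg)
{
    flush_one_vcpu((struct vcpu *)arg);
}

int main(void)
{
    struct vcpu v = { 0 };
    flush_one_vcpu_trampoline(&v);          /* what the IPI handler would do */
    return 0;
}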
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/xenasm.S
--- a/xen/arch/ia64/xen/xenasm.S        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/xenasm.S        Fri Jul 28 10:51:38 2006 +0100
@@ -131,7 +131,7 @@ 1:
 #endif
 
        //  Shared info
-       mov r24=PAGE_SHIFT<<2
+       mov r24=XSI_SHIFT<<2
        movl r25=__pgprot(__DIRTY_BITS | _PAGE_PL_2 | _PAGE_AR_RW)
        ;;
        ptr.d   in3,r24
@@ -144,7 +144,7 @@ 1:
        
        // Map mapped_regs
        mov r22=XMAPPEDREGS_OFS
-       mov r24=PAGE_SHIFT<<2
+       mov r24=XMAPPEDREGS_SHIFT<<2
        ;; 
        add r22=r22,in3
        ;;
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/xenmisc.c
--- a/xen/arch/ia64/xen/xenmisc.c       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/xenmisc.c       Fri Jul 28 10:51:38 2006 +0100
@@ -28,8 +28,6 @@ unsigned long loops_per_jiffy = (1<<12);
 /* FIXME: where these declarations should be there ? */
 extern void show_registers(struct pt_regs *regs);
 
-void ia64_mca_init(void) { printf("ia64_mca_init() skipped (Machine check abort handling)\n"); }
-void ia64_mca_cpu_init(void *x) { }
 void hpsim_setup(char **x)
 {
 #ifdef CONFIG_SMP
@@ -174,7 +172,7 @@ void panic_domain(struct pt_regs *regs, 
 void panic_domain(struct pt_regs *regs, const char *fmt, ...)
 {
        va_list args;
-       char buf[128];
+       char buf[256];
        struct vcpu *v = current;
 
        printf("$$$$$ PANIC in domain %d (k6=0x%lx): ",
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/ia64/xen/xensetup.c
--- a/xen/arch/ia64/xen/xensetup.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/ia64/xen/xensetup.c      Fri Jul 28 10:51:38 2006 +0100
@@ -15,6 +15,7 @@
 #include <xen/gdbstub.h>
 #include <xen/compile.h>
 #include <xen/console.h>
+#include <xen/domain.h>
 #include <xen/serial.h>
 #include <xen/trace.h>
 #include <asm/meminit.h>
@@ -25,10 +26,9 @@
 #include <linux/efi.h>
 #include <asm/iosapic.h>
 
-/* Be sure the struct shared_info fits on a page because it is mapped in
-   domain. */
-#if SHARED_INFO_SIZE > PAGE_SIZE
- #error "struct shared_info does not not fit in PAGE_SIZE"
+/* Be sure the struct shared_info size is <= XSI_SIZE.  */
+#if SHARED_INFO_SIZE > XSI_SIZE
+#error "struct shared_info bigger than XSI_SIZE"
 #endif
 
 unsigned long xenheap_phys_end, total_pages;
@@ -65,8 +65,8 @@ integer_param("maxcpus", max_cpus);
 /* xencons: if true enable xenconsole input (and irq).
    Note: you have to disable 8250 serials in domains (to avoid use of the
    same resource).  */
-static int opt_xencons = 0;
-boolean_param("xencons", opt_xencons);
+static int opt_xencons = 1;
+integer_param("xencons", opt_xencons);
 
 /* Toggle to allow non-legacy xencons UARTs to run in polling mode */
 static int opt_xencons_poll = 0;
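
The reworked check above replaces the old "shared_info must fit in a page" guard with "shared_info must fit in XSI_SIZE", matching the xenasm.S change from PAGE_SHIFT to XSI_SHIFT for the shared-info mapping. A hedged sketch of the same compile-time guard, with made-up sizes (in Xen, SHARED_INFO_SIZE is emitted as a plain numeric constant so that it is usable in #if):

    /* Sizes below are illustrative, not the real Xen values. */
    #define XSI_SHIFT        14
    #define XSI_SIZE         (1 << XSI_SHIFT)   /* 16 KB reserved mapping */
    #define SHARED_INFO_SIZE 4096

    #if SHARED_INFO_SIZE > XSI_SIZE
    #error "struct shared_info bigger than XSI_SIZE"
    #endif

    /* With a C11 compiler the same check can be tied to the type itself: */
    struct shared_info { unsigned long dummy[512]; };  /* placeholder layout */
    _Static_assert(sizeof(struct shared_info) <= XSI_SIZE,
                   "struct shared_info bigger than XSI_SIZE");
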
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/x86/hvm/vmx/vmx.c
--- a/xen/arch/x86/hvm/vmx/vmx.c        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/x86/hvm/vmx/vmx.c        Fri Jul 28 10:51:38 2006 +0100
@@ -286,7 +286,7 @@ static inline int long_mode_do_msr_write
         if ( msr_content & ~(EFER_LME | EFER_LMA | EFER_NX | EFER_SCE) )
         {
             printk("trying to set reserved bit in EFER\n");
-            vmx_inject_exception(v, TRAP_gp_fault, 0);
+            vmx_inject_hw_exception(v, TRAP_gp_fault, 0);
             return 0;
         }
 
@@ -300,7 +300,7 @@ static inline int long_mode_do_msr_write
             {
                 printk("trying to set LME bit when "
                        "in paging mode or PAE bit is not set\n");
-                vmx_inject_exception(v, TRAP_gp_fault, 0);
+                vmx_inject_hw_exception(v, TRAP_gp_fault, 0);
                 return 0;
             }
 
@@ -318,7 +318,7 @@ static inline int long_mode_do_msr_write
         if ( !IS_CANO_ADDRESS(msr_content) )
         {
             HVM_DBG_LOG(DBG_LEVEL_1, "Not cano address of msr write\n");
-            vmx_inject_exception(v, TRAP_gp_fault, 0);
+            vmx_inject_hw_exception(v, TRAP_gp_fault, 0);
             return 0;
         }
 
@@ -1438,7 +1438,7 @@ static int vmx_set_cr0(unsigned long val
                        &v->arch.hvm_vmx.cpu_state) )
         {
             HVM_DBG_LOG(DBG_LEVEL_1, "Enable paging before PAE enabled\n");
-            vmx_inject_exception(v, TRAP_gp_fault, 0);
+            vmx_inject_hw_exception(v, TRAP_gp_fault, 0);
         }
 
         if ( test_bit(VMX_CPU_STATE_LME_ENABLED,
@@ -1520,7 +1520,7 @@ static int vmx_set_cr0(unsigned long val
     {
         if ( value & X86_CR0_PG ) {
             /* inject GP here */
-            vmx_inject_exception(v, TRAP_gp_fault, 0);
+            vmx_inject_hw_exception(v, TRAP_gp_fault, 0);
             return 0;
         } else {
             /*
@@ -1764,7 +1764,7 @@ static int mov_to_cr(int gp, int cr, str
         else
         {
             if ( test_bit(VMX_CPU_STATE_LMA_ENABLED, &v->arch.hvm_vmx.cpu_state) )
-                vmx_inject_exception(v, TRAP_gp_fault, 0);
+                vmx_inject_hw_exception(v, TRAP_gp_fault, 0);
 
             clear_bit(VMX_CPU_STATE_PAE_ENABLED, &v->arch.hvm_vmx.cpu_state);
         }
@@ -2192,7 +2192,7 @@ asmlinkage void vmx_vmexit_handler(struc
             if ( test_bit(_DOMF_debugging, &v->domain->domain_flags) )
                 domain_pause_for_debugger();
             else 
-                vmx_inject_exception(v, TRAP_int3, VMX_DELIVER_NO_ERROR_CODE);
+                vmx_reflect_exception(v);
             break;
         }
 #endif
@@ -2219,7 +2219,7 @@ asmlinkage void vmx_vmexit_handler(struc
                 /*
                  * Inject #PG using Interruption-Information Fields
                  */
-                vmx_inject_exception(v, TRAP_page_fault, regs.error_code);
+                vmx_inject_hw_exception(v, TRAP_page_fault, regs.error_code);
                 v->arch.hvm_vmx.cpu_cr2 = va;
                 TRACE_3D(TRC_VMX_INT, v->domain->domain_id, TRAP_page_fault, va);
             }
@@ -2335,7 +2335,7 @@ asmlinkage void vmx_vmexit_handler(struc
     case EXIT_REASON_VMON:
         /* Report invalid opcode exception when a VMX guest tries to execute 
             any of the VMX instructions */
-        vmx_inject_exception(v, TRAP_invalid_op, VMX_DELIVER_NO_ERROR_CODE);
+        vmx_inject_hw_exception(v, TRAP_invalid_op, VMX_DELIVER_NO_ERROR_CODE);
         break;
 
     default:
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/x86/shadow32.c
--- a/xen/arch/x86/shadow32.c   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/x86/shadow32.c   Fri Jul 28 10:51:38 2006 +0100
@@ -835,12 +835,12 @@ void free_monitor_pagetable(struct vcpu 
 }
 
 static int
-map_p2m_entry(l1_pgentry_t *l1tab, unsigned long va,
-              unsigned long gpa, unsigned long mfn)
+map_p2m_entry(l1_pgentry_t *l1tab, unsigned long gpfn, unsigned long mfn)
 {
     unsigned long *l0tab = NULL;
     l1_pgentry_t l1e = { 0 };
     struct page_info *page;
+    unsigned long va = RO_MPT_VIRT_START + (gpfn * sizeof(mfn));
 
     l1e = l1tab[l1_table_offset(va)];
     if ( !(l1e_get_flags(l1e) & _PAGE_PRESENT) )
@@ -858,7 +858,7 @@ map_p2m_entry(l1_pgentry_t *l1tab, unsig
     else
         l0tab = map_domain_page(l1e_get_pfn(l1e));
 
-    l0tab[gpa & ((PAGE_SIZE / sizeof(mfn)) - 1)] = mfn;
+    l0tab[gpfn & ((PAGE_SIZE / sizeof(mfn)) - 1)] = mfn;
 
     unmap_domain_page(l0tab);
 
@@ -877,15 +877,9 @@ set_p2m_entry(struct domain *d, unsigned
     unsigned long va = pfn << PAGE_SHIFT;
 
     if ( shadow_mode_external(d) )
-    {
         tabpfn = pagetable_get_pfn(d->vcpu[0]->arch.monitor_table);
-        va = RO_MPT_VIRT_START + (pfn * sizeof (unsigned long));
-    }
     else
-    {
         tabpfn = pagetable_get_pfn(d->arch.phys_table);
-        va = pfn << PAGE_SHIFT;
-    }
 
     ASSERT(tabpfn != 0);
     ASSERT(shadow_lock_is_acquired(d));
@@ -902,12 +896,12 @@ set_p2m_entry(struct domain *d, unsigned
         l1_pgentry_t *l1tab = NULL;
         l2_pgentry_t l2e;
 
-        l2e = l2[l2_table_offset(va)];
+        l2e = l2[l2_table_offset(RO_MPT_VIRT_START)];
 
         ASSERT( l2e_get_flags(l2e) & _PAGE_PRESENT );
 
         l1tab = map_domain_page(l2e_get_pfn(l2e));
-        if ( !(error = map_p2m_entry(l1tab, va, pfn, mfn)) )
+        if ( !(error = map_p2m_entry(l1tab, pfn, mfn)) )
             domain_crash(d);
 
         unmap_domain_page(l1tab);
@@ -952,7 +946,6 @@ alloc_p2m_table(struct domain *d)
 alloc_p2m_table(struct domain *d)
 {
     struct list_head *list_ent;
-    unsigned long va = RO_MPT_VIRT_START;   /* phys_to_machine_mapping */
 
     l2_pgentry_t *l2tab = NULL;
     l1_pgentry_t *l1tab = NULL;
@@ -965,14 +958,14 @@ alloc_p2m_table(struct domain *d)
     {
         l2tab = map_domain_page(
             pagetable_get_pfn(d->vcpu[0]->arch.monitor_table));
-        l2e = l2tab[l2_table_offset(va)];
+        l2e = l2tab[l2_table_offset(RO_MPT_VIRT_START)];
         if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
         {
             page = alloc_domheap_page(NULL);
 
             l1tab = map_domain_page(page_to_mfn(page));
             memset(l1tab, 0, PAGE_SIZE);
-            l2e = l2tab[l2_table_offset(va)] =
+            l2e = l2tab[l2_table_offset(RO_MPT_VIRT_START)] =
                 l2e_from_page(page, __PAGE_HYPERVISOR);
         }
         else
@@ -1002,14 +995,13 @@ alloc_p2m_table(struct domain *d)
         page = list_entry(list_ent, struct page_info, list);
         mfn = page_to_mfn(page);
 
-        if ( !(error = map_p2m_entry(l1tab, va, gpfn, mfn)) )
+        if ( !(error = map_p2m_entry(l1tab, gpfn, mfn)) )
         {
             domain_crash(d);
             break;
         }
 
         list_ent = frame_table[mfn].list.next;
-        va += sizeof(mfn);
     }
 
     unmap_domain_page(l1tab);
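
In the shadow32.c changes above, map_p2m_entry() now derives the mapping address itself: each phys-to-machine slot is one mfn-sized word, so the entry for a given gpfn lives at RO_MPT_VIRT_START + gpfn * sizeof(mfn), and within its leaf (l0) page at index gpfn modulo (PAGE_SIZE / sizeof(mfn)). A small worked example with illustrative constants:

    #include <stdio.h>

    #define PAGE_SHIFT        12
    #define PAGE_SIZE         (1UL << PAGE_SHIFT)
    #define RO_MPT_VIRT_START 0xffc00000UL      /* illustrative base address */

    int main(void)
    {
        unsigned long mfn_size = sizeof(unsigned long);
        unsigned long gpfn = 0x12345;

        /* Virtual address of this gpfn's slot in the p2m table. */
        unsigned long va  = RO_MPT_VIRT_START + gpfn * mfn_size;
        /* Index of the slot within the leaf page that holds it. */
        unsigned long idx = gpfn & ((PAGE_SIZE / mfn_size) - 1);

        printf("gpfn 0x%lx -> va 0x%lx, leaf index %lu\n", gpfn, va, idx);
        return 0;
    }

Dropping the separate va/gpa parameters removes the risk of the caller's address and the gpfn-derived index drifting apart, which is also why alloc_p2m_table() no longer advances a local va.
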
diff -r 1eb42266de1b -r e5c84586c333 xen/arch/x86/shadow_public.c
--- a/xen/arch/x86/shadow_public.c      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/arch/x86/shadow_public.c      Fri Jul 28 10:51:38 2006 +0100
@@ -438,6 +438,8 @@ static void alloc_monitor_pagetable(stru
             (l3e_get_flags(mpl3e[i]) & _PAGE_PRESENT) ?
             l2e_from_pfn(l3e_get_pfn(mpl3e[i]), __PAGE_HYPERVISOR) :
             l2e_empty();
+    for ( i = 0; i < (MACHPHYS_MBYTES >> (L2_PAGETABLE_SHIFT - 20)); i++ )
+        mpl2e[l2_table_offset(RO_MPT_VIRT_START) + i] = l2e_empty();
 
     if ( v->vcpu_id == 0 )
     {
@@ -1471,8 +1473,7 @@ int _shadow_mode_refcounts(struct domain
 }
 
 static int
-map_p2m_entry(pgentry_64_t *top_tab, unsigned long va,
-              unsigned long gpfn, unsigned long mfn)
+map_p2m_entry(pgentry_64_t *top_tab, unsigned long gpfn, unsigned long mfn)
 {
 #if CONFIG_PAGING_LEVELS >= 4
     pgentry_64_t l4e = { 0 };
@@ -1487,6 +1488,7 @@ map_p2m_entry(pgentry_64_t *top_tab, uns
     l2_pgentry_t l2e = { 0 };
     l1_pgentry_t l1e = { 0 };
     struct page_info *page;
+    unsigned long va = RO_MPT_VIRT_START + (gpfn * sizeof(mfn));
 
 #if CONFIG_PAGING_LEVELS >= 4
     l4e = top_tab[l4_table_offset(va)];
@@ -1568,7 +1570,7 @@ map_p2m_entry(pgentry_64_t *top_tab, uns
 
     unmap_domain_page(l1tab);
 
-    l0tab[gpfn & ((PAGE_SIZE / sizeof (mfn)) - 1) ] = mfn;
+    l0tab[gpfn & ((PAGE_SIZE / sizeof(mfn)) - 1)] = mfn;
 
     unmap_domain_page(l0tab);
 
@@ -1584,7 +1586,6 @@ set_p2m_entry(struct domain *d, unsigned
               struct domain_mmap_cache *l1cache)
 {
     unsigned long tabmfn = pagetable_get_pfn(d->vcpu[0]->arch.monitor_table);
-    unsigned long va = RO_MPT_VIRT_START + (gpfn * sizeof(unsigned long));
     pgentry_64_t *top_tab;
     int error;
 
@@ -1593,7 +1594,7 @@ set_p2m_entry(struct domain *d, unsigned
 
     top_tab = map_domain_page_with_cache(tabmfn, l2cache);
 
-    if ( !(error = map_p2m_entry(top_tab, va, gpfn, mfn)) )
+    if ( !(error = map_p2m_entry(top_tab, gpfn, mfn)) )
         domain_crash(d);
 
     unmap_domain_page_with_cache(top_tab, l2cache);
@@ -1605,10 +1606,9 @@ alloc_p2m_table(struct domain *d)
 alloc_p2m_table(struct domain *d)
 {
     struct list_head *list_ent;
-    unsigned long va = RO_MPT_VIRT_START; /*  phys_to_machine_mapping */
     pgentry_64_t *top_tab = NULL;
-    unsigned long mfn;
-    int gpfn, error = 0;
+    unsigned long gpfn, mfn;
+    int error = 0;
 
     ASSERT( pagetable_get_pfn(d->vcpu[0]->arch.monitor_table) );
 
@@ -1624,14 +1624,13 @@ alloc_p2m_table(struct domain *d)
         page = list_entry(list_ent, struct page_info, list);
         mfn = page_to_mfn(page);
 
-        if ( !(error = map_p2m_entry(top_tab, va, gpfn, mfn)) )
+        if ( !(error = map_p2m_entry(top_tab, gpfn, mfn)) )
         {
             domain_crash(d);
             break;
         }
 
         list_ent = frame_table[mfn].list.next;
-        va += sizeof(mfn);
     }
 
     unmap_domain_page(top_tab);
diff -r 1eb42266de1b -r e5c84586c333 xen/common/memory.c
--- a/xen/common/memory.c       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/common/memory.c       Fri Jul 28 10:51:38 2006 +0100
@@ -170,7 +170,7 @@ guest_remove_page(
     if ( test_and_clear_bit(_PGC_allocated, &page->count_info) )
         put_page(page);
 
-    if ( unlikely((page->count_info & PGC_count_mask) != 1) )
+    if ( unlikely(!page_is_removable(page)) )
     {
         /* We'll make this a guest-visible error in future, so take heed! */
         DPRINTK("Dom%d freeing in-use page %lx (pseudophys %lx):"
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/config.h
--- a/xen/include/asm-ia64/config.h     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/config.h     Fri Jul 28 10:51:38 2006 +0100
@@ -139,17 +139,19 @@ extern int smp_num_siblings;
 #define platform_outw  __ia64_outw
 #define platform_outl  __ia64_outl
 
-// FIXME: This just overrides a use in a typedef (not allowed in ia64,
-//  or maybe just in older gcc's?) used in timer.c but should be OK
-//  (and indeed is probably required!) elsewhere
-#undef __cacheline_aligned
-#undef ____cacheline_aligned
-#undef ____cacheline_aligned_in_smp
-#define __cacheline_aligned
+#include <xen/cache.h>
+#ifndef CONFIG_SMP
 #define __cacheline_aligned_in_smp
-#define ____cacheline_aligned
+#else
+#define __cacheline_aligned_in_smp __cacheline_aligned
+#endif
+
+#define ____cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
+#ifndef CONFIG_SMP
 #define ____cacheline_aligned_in_smp
-#define ____cacheline_maxaligned_in_smp
+#else
+#define ____cacheline_aligned_in_smp ____cacheline_aligned
+#endif
 
 #ifndef __ASSEMBLY__
 #include "asm/types.h" // for u64
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/dom_fw.h
--- a/xen/include/asm-ia64/dom_fw.h     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/dom_fw.h     Fri Jul 28 10:51:38 2006 +0100
@@ -166,8 +166,6 @@ extern struct ia64_pal_retval xen_pal_em
 extern struct ia64_pal_retval xen_pal_emulator(UINT64, u64, u64, u64);
 extern struct sal_ret_values sal_emulator (long index, unsigned long in1, unsigned long in2, unsigned long in3, unsigned long in4, unsigned long in5, unsigned long in6, unsigned long in7);
 extern struct ia64_pal_retval pal_emulator_static (unsigned long);
-extern unsigned long dom_fw_setup (struct domain *, const char *, int);
 extern efi_status_t efi_emulator (struct pt_regs *regs, unsigned long *fault);
 
-extern void build_pal_hypercall_bundles(unsigned long *imva, unsigned long brkimm, unsigned long hypnum);
-extern void build_hypercall_bundle(UINT64 *imva, UINT64 brkimm, UINT64 hypnum, UINT64 ret);
+extern void dom_fw_setup (struct domain *, unsigned long bp_mpa, unsigned long maxmem);
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/domain.h
--- a/xen/include/asm-ia64/domain.h     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/domain.h     Fri Jul 28 10:51:38 2006 +0100
@@ -11,6 +11,7 @@
 #include <xen/list.h>
 #include <xen/cpumask.h>
 #include <asm/fpswa.h>
+#include <xen/rangeset.h>
 
 struct p2m_entry {
     volatile pte_t*     pte;
@@ -49,6 +50,9 @@ extern unsigned long domain_set_shared_i
    if false, flush and invalidate caches.  */
 extern void domain_cache_flush (struct domain *d, int sync_only);
 
+/* Control the shadow mode.  */
+extern int shadow_mode_control(struct domain *d, dom0_shadow_control_t *sc);
+
 /* Cleanly crash the current domain with a message.  */
 extern void panic_domain(struct pt_regs *, const char *, ...)
      __attribute__ ((noreturn, format (printf, 2, 3)));
@@ -58,10 +62,34 @@ struct mm_struct {
     // atomic_t mm_users;                      /* How many users with user space? */
 };
 
+struct last_vcpu {
+#define INVALID_VCPU_ID INT_MAX
+    int vcpu_id;
+} ____cacheline_aligned_in_smp;
+
+/* These are data in domain memory for SAL emulator.  */
+struct xen_sal_data {
+    /* OS boot rendez vous.  */
+    unsigned long boot_rdv_ip;
+    unsigned long boot_rdv_r1;
+
+    /* There are these for EFI_SET_VIRTUAL_ADDRESS_MAP emulation. */
+    int efi_virt_mode;         /* phys : 0 , virt : 1 */
+};
+
 struct arch_domain {
     struct mm_struct mm;
-    unsigned long metaphysical_rr0;
-    unsigned long metaphysical_rr4;
+
+    /* Flags.  */
+    union {
+        unsigned long flags;
+        struct {
+            unsigned int is_vti : 1;
+        };
+    };
+
+    /* Allowed accesses to io ports.  */
+    struct rangeset *ioport_caps;
 
     /* There are two ranges of RID for a domain:
        one big range, used to virtualize domain RID,
@@ -69,61 +97,72 @@ struct arch_domain {
     /* Big range.  */
     int starting_rid;          /* first RID assigned to domain */
     int ending_rid;            /* one beyond highest RID assigned to domain */
-    int rid_bits;              /* number of virtual rid bits (default: 18) */
     /* Metaphysical range.  */
     int starting_mp_rid;
     int ending_mp_rid;
-
+    /* RID for metaphysical mode.  */
+    unsigned long metaphysical_rr0;
+    unsigned long metaphysical_rr4;
+    
+    int rid_bits;              /* number of virtual rid bits (default: 18) */
     int breakimm;     /* The imm value for hypercalls.  */
 
-    int physmap_built;         /* Whether is physmap built or not */
-    int imp_va_msb;
-    /* System pages out of guest memory, like for xenstore/console */
-    unsigned long sys_pgnr;
-    unsigned long max_pfn; /* Max pfn including I/O holes */
     struct virtual_platform_def     vmx_platform;
 #define        hvm_domain vmx_platform /* platform defs are not vmx specific */
 
-    /* OS boot rendez vous.  */
-    unsigned long boot_rdv_ip;
-    unsigned long boot_rdv_r1;
-
+    u64 xen_vastart;
+    u64 xen_vaend;
+    u64 shared_info_va;
+ 
+    /* Address of SAL emulator data  */
+    struct xen_sal_data *sal_data;
     /* SAL return point.  */
     unsigned long sal_return_addr;
 
-    u64 shared_info_va;
-    unsigned long initrd_start;
-    unsigned long initrd_len;
-    char *cmdline;
-    /* There are these for EFI_SET_VIRTUAL_ADDRESS_MAP emulation. */
-    int efi_virt_mode;         /* phys : 0 , virt : 1 */
-    /* Metaphysical address to efi_runtime_services_t in domain firmware memory is set. */
+    /* Address of efi_runtime_services_t (placed in domain memory)  */
     void *efi_runtime;
-    /* Metaphysical address to fpswa_interface_t in domain firmware memory is set. */
+    /* Address of fpswa_interface_t (placed in domain memory)  */
     void *fpswa_inf;
+
+    /* Bitmap of shadow dirty bits.
+       Set iff shadow mode is enabled.  */
+    u64 *shadow_bitmap;
+    /* Length (in bits!) of shadow bitmap.  */
+    unsigned long shadow_bitmap_size;
+    /* Number of bits set in bitmap.  */
+    atomic64_t shadow_dirty_count;
+    /* Number of faults.  */
+    atomic64_t shadow_fault_count;
+
+    struct last_vcpu last_vcpu[NR_CPUS];
 };
 #define INT_ENABLE_OFFSET(v)             \
     (sizeof(vcpu_info_t) * (v)->vcpu_id + \
     offsetof(vcpu_info_t, evtchn_upcall_mask))
 
 struct arch_vcpu {
-       TR_ENTRY itrs[NITRS];
-       TR_ENTRY dtrs[NDTRS];
-       TR_ENTRY itlb;
-       TR_ENTRY dtlb;
-       unsigned int itr_regions;
-       unsigned int dtr_regions;
-       unsigned long irr[4];
-       unsigned long insvc[4];
-       unsigned long tc_regions;
-       unsigned long iva;
-       unsigned long dcr;
-       unsigned long itc;
-       unsigned long domain_itm;
-       unsigned long domain_itm_last;
-       unsigned long xen_itm;
-
-    mapped_regs_t *privregs; /* save the state of vcpu */
+    /* Save the state of vcpu.
+       This is the first entry to speed up accesses.  */
+    mapped_regs_t *privregs;
+
+    /* TR and TC.  */
+    TR_ENTRY itrs[NITRS];
+    TR_ENTRY dtrs[NDTRS];
+    TR_ENTRY itlb;
+    TR_ENTRY dtlb;
+
+    /* Bit is set if there is a tr/tc for the region.  */
+    unsigned char itr_regions;
+    unsigned char dtr_regions;
+    unsigned char tc_regions;
+
+    unsigned long irr[4];          /* Interrupt request register.  */
+    unsigned long insvc[4];            /* Interrupt in service.  */
+    unsigned long iva;
+    unsigned long dcr;
+    unsigned long domain_itm;
+    unsigned long domain_itm_last;
+
     unsigned long event_callback_ip;           // event callback handler
     unsigned long failsafe_callback_ip;        // Do we need it?
 
@@ -149,10 +188,17 @@ struct arch_vcpu {
     int mode_flags;
     fpswa_ret_t fpswa_ret;     /* save return values of FPSWA emulation */
     struct arch_vmx_struct arch_vmx; /* Virtual Machine Extensions */
+
+#define INVALID_PROCESSOR       INT_MAX
+    int last_processor;
 };
 
 #include <asm/uaccess.h> /* for KERNEL_DS */
 #include <asm/pgtable.h>
+
+/* Guest physical address of IO ports space.  */
+#define IO_PORTS_PADDR          0x00000ffffc000000UL
+#define IO_PORTS_SIZE           0x0000000004000000UL
 
 #endif /* __ASM_DOMAIN_H__ */
 
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/iocap.h
--- a/xen/include/asm-ia64/iocap.h      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/iocap.h      Fri Jul 28 10:51:38 2006 +0100
@@ -7,4 +7,12 @@
 #ifndef __IA64_IOCAP_H__
 #define __IA64_IOCAP_H__
 
+extern int ioports_permit_access(struct domain *d,
+                                 unsigned long s, unsigned long e);
+extern int ioports_deny_access(struct domain *d,
+                               unsigned long s, unsigned long e);
+
+#define ioports_access_permitted(d, s, e)               \
+    rangeset_contains_range((d)->arch.ioport_caps, s, e)
+
 #endif /* __IA64_IOCAP_H__ */
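
The new ia64 iocap.h above delegates I/O-port capabilities to the generic rangeset code: ioports_permit_access()/ioports_deny_access() maintain d->arch.ioport_caps (added to struct arch_domain in this changeset), and ioports_access_permitted() is a simple containment query. A rough usage sketch with a toy single-range stand-in for the rangeset, purely for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy stand-in: a real rangeset holds many disjoint ranges. */
    struct rangeset { unsigned long s, e; bool valid; };
    struct domain   { struct rangeset ioport_caps; };

    static int rangeset_add_range(struct rangeset *r,
                                  unsigned long s, unsigned long e)
    {
        r->s = s; r->e = e; r->valid = true;
        return 0;
    }

    static bool rangeset_contains_range(const struct rangeset *r,
                                        unsigned long s, unsigned long e)
    {
        return r->valid && s >= r->s && e <= r->e;
    }

    int main(void)
    {
        struct domain d = { { 0, 0, false } };

        rangeset_add_range(&d.ioport_caps, 0x3f8, 0x3ff);  /* grant COM1 range */
        printf("0x3f8-0x3fb permitted? %d\n",
               rangeset_contains_range(&d.ioport_caps, 0x3f8, 0x3fb));
        printf("0x60 permitted? %d\n",
               rangeset_contains_range(&d.ioport_caps, 0x60, 0x60));
        return 0;
    }
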
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/linux-xen/asm/README.origin
--- a/xen/include/asm-ia64/linux-xen/asm/README.origin  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/linux-xen/asm/README.origin  Fri Jul 28 10:51:38 2006 +0100
@@ -5,6 +5,7 @@
 # (e.g. with #ifdef XEN or XEN in a comment) so that they can be
 # easily updated to future versions of the corresponding Linux files.
 
+asmmacro.h             -> linux/include/asm-ia64/asmmacro.h
 cache.h                        -> linux/include/asm-ia64/cache.h
 gcc_intrin.h           -> linux/include/asm-ia64/gcc_intrin.h
 ia64regs.h             -> linux/include/asm-ia64/ia64regs.h
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/linux-xen/asm/mca_asm.h
--- a/xen/include/asm-ia64/linux-xen/asm/mca_asm.h      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/linux-xen/asm/mca_asm.h      Fri Jul 28 10:51:38 2006 +0100
@@ -58,7 +58,9 @@
 #endif
 
 #ifdef XEN
-//FIXME LATER
+#define GET_THIS_PADDR(reg, var)               \
+       movl    reg = THIS_CPU(var)             \
+       tpa     reg = reg
 #else
 #define GET_THIS_PADDR(reg, var)               \
        mov     reg = IA64_KR(PER_CPU_DATA);;   \
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/linux-xen/asm/pgtable.h
--- a/xen/include/asm-ia64/linux-xen/asm/pgtable.h      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/linux-xen/asm/pgtable.h      Fri Jul 28 10:51:38 2006 +0100
@@ -62,7 +62,12 @@
 #define _PAGE_D                        (1 << _PAGE_D_BIT)      /* page dirty bit */
 #define _PAGE_PPN_MASK         (((__IA64_UL(1) << IA64_MAX_PHYS_BITS) - 1) & ~0xfffUL)
 #define _PAGE_ED               (__IA64_UL(1) << 52)    /* exception deferral */
+#ifdef XEN
+#define _PAGE_VIRT_D           (__IA64_UL(1) << 53)    /* Virtual dirty bit */
+#define _PAGE_PROTNONE         0
+#else
 #define _PAGE_PROTNONE         (__IA64_UL(1) << 63)
+#endif
 
 /* Valid only for a PTE with the present bit cleared: */
 #define _PAGE_FILE             (1 << 1)                /* see swap & file pte remarks below */
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/linux-xen/asm/system.h
--- a/xen/include/asm-ia64/linux-xen/asm/system.h       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/linux-xen/asm/system.h       Fri Jul 28 10:51:38 2006 +0100
@@ -19,8 +19,8 @@
 #include <asm/pal.h>
 #include <asm/percpu.h>
 
+#ifndef XEN
 #define GATE_ADDR              __IA64_UL_CONST(0xa000000000000000)
-#ifndef XEN
 /*
  * 0xa000000000000000+2*PERCPU_PAGE_SIZE
  * - 0xa000000000000000+3*PERCPU_PAGE_SIZE remain unmapped (guard page)
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/linux/asm/README.origin
--- a/xen/include/asm-ia64/linux/asm/README.origin      Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/linux/asm/README.origin      Fri Jul 28 10:51:38 2006 +0100
@@ -5,7 +5,6 @@
 # the instructions in the README there.
 
 acpi.h                 -> linux/include/asm-ia64/acpi.h
-asmmacro.h             -> linux/include/asm-ia64/asmmacro.h
 atomic.h               -> linux/include/asm-ia64/atomic.h
 bitops.h               -> linux/include/asm-ia64/bitops.h
 break.h                        -> linux/include/asm-ia64/break.h
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/mm.h
--- a/xen/include/asm-ia64/mm.h Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/mm.h Fri Jul 28 10:51:38 2006 +0100
@@ -211,6 +211,11 @@ static inline int get_page_and_type(stru
     }
 
     return rc;
+}
+
+static inline int page_is_removable(struct page_info *page)
+{
+    return ((page->count_info & PGC_count_mask) == 2);
 }
 
 #define        set_machinetophys(_mfn, _pfn) do { } while(0);
@@ -429,7 +434,7 @@ struct p2m_entry;
 struct p2m_entry;
 extern unsigned long lookup_domain_mpa(struct domain *d, unsigned long mpaddr, struct p2m_entry* entry);
 extern void *domain_mpa_to_imva(struct domain *d, unsigned long mpaddr);
-
+extern volatile pte_t *lookup_noalloc_domain_pte(struct domain* d, unsigned long mpaddr);
 #ifdef CONFIG_XEN_IA64_DOM0_VP
 extern unsigned long assign_domain_mmio_page(struct domain *d, unsigned long mpaddr, unsigned long size);
 extern unsigned long assign_domain_mach_page(struct domain *d, unsigned long mpaddr, unsigned long size, unsigned long flags);
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/privop.h
--- a/xen/include/asm-ia64/privop.h     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/privop.h     Fri Jul 28 10:51:38 2006 +0100
@@ -2,234 +2,9 @@
 #define _XEN_IA64_PRIVOP_H
 
 #include <asm/ia64_int.h>
-#include <asm/vmx_vcpu.h>
 #include <asm/vcpu.h>
 
-typedef unsigned long IA64_INST;
-
 extern IA64FAULT priv_emulate(VCPU *vcpu, REGS *regs, UINT64 isr);
-
-typedef union U_IA64_BUNDLE {
-    unsigned long i64[2];
-    struct { unsigned long template:5,slot0:41,slot1a:18,slot1b:23,slot2:41; };
-    // NOTE: following doesn't work because bitfields can't cross natural
-    // size boundaries
-    //struct { unsigned long template:5, slot0:41, slot1:41, slot2:41; };
-} IA64_BUNDLE;
-
-typedef enum E_IA64_SLOT_TYPE { I, M, F, B, L, ILLEGAL } IA64_SLOT_TYPE;
-
-typedef union U_INST64_A5 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, imm7b:7, r3:2, imm5c:5, imm9d:9, s:1, major:4; };
-} INST64_A5;
-
-typedef union U_INST64_B4 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, btype:3, un3:3, p:1, b2:3, un11:11, x6:6, wh:2, d:1, un1:1, major:4; };
-} INST64_B4;
-
-typedef union U_INST64_B8 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, un21:21, x6:6, un4:4, major:4; };
-} INST64_B8;
-
-typedef union U_INST64_B9 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, imm20:20, :1, x6:6, :3, i:1, major:4; };
-} INST64_B9;
-
-typedef union U_INST64_I19 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, imm20:20, :1, x6:6, x3:3, i:1, major:4; };
-} INST64_I19;
-
-typedef union U_INST64_I26 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, r2:7, ar3:7, x6:6, x3:3, :1, major:4;};
-} INST64_I26;
-
-typedef union U_INST64_I27 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, imm:7, ar3:7, x6:6, x3:3, s:1, major:4;};
-} INST64_I27;
-
-typedef union U_INST64_I28 { // not privileged (mov from AR)
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, :7, ar3:7, x6:6, x3:3, :1, major:4;};
-} INST64_I28;
-
-typedef union U_INST64_M28 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :14, r3:7, x6:6, x3:3, :1, major:4;};
-} INST64_M28;
-
-typedef union U_INST64_M29 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, r2:7, ar3:7, x6:6, x3:3, :1, major:4;};
-} INST64_M29;
-
-typedef union U_INST64_M30 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, imm:7, ar3:7,x4:4,x2:2,x3:3,s:1,major:4;};
-} INST64_M30;
-
-typedef union U_INST64_M31 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, :7, ar3:7, x6:6, x3:3, :1, major:4;};
-} INST64_M31;
-
-typedef union U_INST64_M32 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, r2:7, cr3:7, x6:6, x3:3, :1, major:4;};
-} INST64_M32;
-
-typedef union U_INST64_M33 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, :7, cr3:7, x6:6, x3:3, :1, major:4; };
-} INST64_M33;
-
-typedef union U_INST64_M35 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, r2:7, :7, x6:6, x3:3, :1, major:4; };
-   
-} INST64_M35;
-
-typedef union U_INST64_M36 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, :14, x6:6, x3:3, :1, major:4; }; 
-} INST64_M36;
-
-typedef union U_INST64_M37 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, imm20a:20,:1, x4:4,x2:2,x3:3, i:1, major:4; };
-} INST64_M37;
-
-typedef union U_INST64_M41 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, r2:7, :7, x6:6, x3:3, :1, major:4; }; 
-} INST64_M41;
-
-typedef union U_INST64_M42 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, r2:7, r3:7, x6:6, x3:3, :1, major:4; };
-} INST64_M42;
-
-typedef union U_INST64_M43 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, :7, r3:7, x6:6, x3:3, :1, major:4; };
-} INST64_M43;
-
-typedef union U_INST64_M44 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, imm:21, x4:4, i2:2, x3:3, i:1, major:4; };
-} INST64_M44;
-
-typedef union U_INST64_M45 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, r2:7, r3:7, x6:6, x3:3, :1, major:4; };
-} INST64_M45;
-
-typedef union U_INST64_M46 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, un7:7, r3:7, x6:6, x3:3, un1:1, major:4; };
-} INST64_M46;
-
-typedef union U_INST64_M47 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, un14:14, r3:7, x6:6, x3:3, un1:1, major:4; };
-} INST64_M47;
-
-typedef union U_INST64_M1{
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, un7:7, r3:7, x:1, hint:2, x6:6, m:1, major:4; };
-} INST64_M1;
-
-typedef union U_INST64_M2{
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, r2:7, r3:7, x:1, hint:2, x6:6, m:1, major:4; };
-} INST64_M2;
-
-typedef union U_INST64_M3{
-    IA64_INST inst;
-    struct { unsigned long qp:6, r1:7, imm7:7, r3:7, i:1, hint:2, x6:6, s:1, major:4; };
-} INST64_M3;
-
-typedef union U_INST64_M4 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, un7:7, r2:7, r3:7, x:1, hint:2, x6:6, m:1, major:4; };
-} INST64_M4;
-
-typedef union U_INST64_M5 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, imm7:7, r2:7, r3:7, i:1, hint:2, x6:6, s:1, major:4; };
-} INST64_M5;
-
-typedef union U_INST64_M6 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, f1:7, un7:7, r3:7, x:1, hint:2, x6:6, m:1, major:4; };
-} INST64_M6;
-
-typedef union U_INST64_M9 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, f2:7, r3:7, x:1, hint:2, x6:6, m:1, major:4; };
-} INST64_M9;
-
-typedef union U_INST64_M10 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, imm7:7, f2:7, r3:7, i:1, hint:2, x6:6, s:1, major:4; };
-} INST64_M10;
-
-typedef union U_INST64_M12 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, f1:7, f2:7, r3:7, x:1, hint:2, x6:6, m:1, major:4; };
-} INST64_M12;
-                        
-typedef union U_INST64_M15 {
-    IA64_INST inst;
-    struct { unsigned long qp:6, :7, imm7:7, r3:7, i:1, hint:2, x6:6, s:1, major:4; };
-} INST64_M15;
-
-typedef union U_INST64 {
-    IA64_INST inst;
-    struct { unsigned long :37, major:4; } generic;
-    INST64_A5 A5;      // used in build_hypercall_bundle only
-    INST64_B4 B4;      // used in build_hypercall_bundle only
-    INST64_B8 B8;      // rfi, bsw.[01]
-    INST64_B9 B9;      // break.b
-    INST64_I19 I19;    // used in build_hypercall_bundle only
-    INST64_I26 I26;    // mov register to ar (I unit)
-    INST64_I27 I27;    // mov immediate to ar (I unit)
-    INST64_I28 I28;    // mov from ar (I unit)
-    INST64_M1  M1;     // ld integer
-    INST64_M2  M2;
-    INST64_M3  M3;
-    INST64_M4  M4;     // st integer
-    INST64_M5  M5;
-    INST64_M6  M6;     // ldfd floating pointer
-    INST64_M9  M9;     // stfd floating pointer
-    INST64_M10 M10;    // stfd floating pointer
-    INST64_M12 M12;    // ldfd pair floating pointer
-    INST64_M15 M15;    // lfetch + imm update
-    INST64_M28 M28;    // purge translation cache entry
-    INST64_M29 M29;    // mov register to ar (M unit)
-    INST64_M30 M30;    // mov immediate to ar (M unit)
-    INST64_M31 M31;    // mov from ar (M unit)
-    INST64_M32 M32;    // mov reg to cr
-    INST64_M33 M33;    // mov from cr
-    INST64_M35 M35;    // mov to psr
-    INST64_M36 M36;    // mov from psr
-    INST64_M37 M37;    // break.m
-    INST64_M41 M41;    // translation cache insert
-    INST64_M42 M42;    // mov to indirect reg/translation reg insert
-    INST64_M43 M43;    // mov from indirect reg
-    INST64_M44 M44;    // set/reset system mask
-    INST64_M45 M45;    // translation purge
-    INST64_M46 M46;    // translation access (tpa,tak)
-    INST64_M47 M47;    // purge translation entry
-} INST64;
-
-#define MASK_41 ((UINT64)0x1ffffffffff)
 
 extern void privify_memory(void *start, UINT64 len);
 
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/regionreg.h
--- a/xen/include/asm-ia64/regionreg.h  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/regionreg.h  Fri Jul 28 10:51:38 2006 +0100
@@ -79,6 +79,5 @@ extern int set_metaphysical_rr0(void);
 extern int set_metaphysical_rr0(void);
 
 extern void load_region_regs(struct vcpu *v);
-extern void load_region_reg7_and_pta(struct vcpu *v);
 
 #endif         /* !_REGIONREG_H_ */
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/shadow.h
--- a/xen/include/asm-ia64/shadow.h     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/shadow.h     Fri Jul 28 10:51:38 2006 +0100
@@ -45,6 +45,24 @@ void guest_physmap_remove_page(struct do
 void guest_physmap_remove_page(struct domain *d, unsigned long gpfn, unsigned long mfn);
 #endif
 
+static inline int
+shadow_mode_enabled(struct domain *d)
+{
+    return d->arch.shadow_bitmap != NULL;
+}
+
+static inline int
+shadow_mark_page_dirty(struct domain *d, unsigned long gpfn)
+{
+    if (gpfn < d->arch.shadow_bitmap_size * 8
+        && !test_and_set_bit(gpfn, d->arch.shadow_bitmap)) {
+        /* The page was not dirty.  */
+        atomic64_inc(&d->arch.shadow_dirty_count);
+        return 1;
+    } else
+        return 0;
+}
+
 #endif // _XEN_SHADOW_H
 
 /*
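
shadow_mark_page_dirty() above is conventional dirty-page logging: one bit per gpfn in shadow_bitmap, test_and_set_bit() so a page is only counted the first time it is dirtied in a logging round, and an atomic counter the save/restore tools can read. A simplified, single-threaded sketch of the same bookkeeping (plain C in place of the Xen bitmap and atomic helpers):

    #include <limits.h>
    #include <stdio.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
    #define BITMAP_WORDS  4                  /* covers 4*BITS_PER_LONG gpfns */

    static unsigned long shadow_bitmap[BITMAP_WORDS];
    static unsigned long shadow_dirty_count;

    /* Returns 1 if the page was clean and has just been marked dirty. */
    static int mark_page_dirty(unsigned long gpfn)
    {
        unsigned long mask = 1UL << (gpfn % BITS_PER_LONG);
        unsigned long *word = &shadow_bitmap[gpfn / BITS_PER_LONG];

        if (gpfn >= BITMAP_WORDS * BITS_PER_LONG || (*word & mask))
            return 0;                        /* out of range, or already dirty */
        *word |= mask;                       /* Xen uses test_and_set_bit() */
        shadow_dirty_count++;                /* Xen uses atomic64_inc() */
        return 1;
    }

    int main(void)
    {
        mark_page_dirty(5);
        mark_page_dirty(5);                  /* second write is not recounted */
        printf("dirty pages: %lu\n", shadow_dirty_count);
        return 0;
    }
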
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/tlbflush.h
--- a/xen/include/asm-ia64/tlbflush.h   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/tlbflush.h   Fri Jul 28 10:51:38 2006 +0100
@@ -11,7 +11,7 @@
 */
 
 /* Local all flush of vTLB.  */
-void vcpu_flush_vtlb_all (void);
+void vcpu_flush_vtlb_all(struct vcpu *v);
 
 /* Local range flush of machine TLB only (not full VCPU virtual TLB!!!)  */
 void vcpu_flush_tlb_vhpt_range (u64 vadr, u64 log_range);
@@ -22,8 +22,8 @@ void domain_flush_vtlb_all (void);
 /* Global range-flush of vTLB.  */
 void domain_flush_vtlb_range (struct domain *d, u64 vadr, u64 addr_range);
 
-/* Final vTLB flush on every dirty cpus.  */
-void domain_flush_destroy (struct domain *d);
+/* Flush vhpt and mTLB on every dirty cpus.  */
+void domain_flush_tlb_vhpt(struct domain *d);
 
 /* Flush v-tlb on cpus set in mask for current domain.  */
 void flush_tlb_mask(cpumask_t mask);
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/vcpu.h
--- a/xen/include/asm-ia64/vcpu.h       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/vcpu.h       Fri Jul 28 10:51:38 2006 +0100
@@ -4,7 +4,8 @@
 // TODO: Many (or perhaps most) of these should eventually be
 // static inline functions
 
-//#include "thread.h"
+#include <asm/fpu.h>
+#include <asm/tlb.h>
 #include <asm/ia64_int.h>
 #include <public/arch-ia64.h>
 typedef        unsigned long UINT64;
@@ -12,29 +13,14 @@ typedef     int BOOLEAN;
 typedef        int BOOLEAN;
 struct vcpu;
 typedef        struct vcpu VCPU;
-
 typedef cpu_user_regs_t REGS;
-
 
 /* Note: PSCB stands for Privileged State Communication Block.  */
 #define VCPU(_v,_x)    (_v->arch.privregs->_x)
 #define PSCB(_v,_x) VCPU(_v,_x)
 #define PSCBX(_v,_x) (_v->arch._x)
 
-#define PRIVOP_ADDR_COUNT
-#ifdef PRIVOP_ADDR_COUNT
-#define _GET_IFA 0
-#define _THASH 1
-#define PRIVOP_COUNT_NINSTS 2
-#define PRIVOP_COUNT_NADDRS 30
-
-struct privop_addr_count {
-       char *instname;
-       unsigned long addr[PRIVOP_COUNT_NADDRS];
-       unsigned long count[PRIVOP_COUNT_NADDRS];
-       unsigned long overflow;
-};
-#endif
+#define SPURIOUS_VECTOR 0xf
 
 /* general registers */
 extern UINT64 vcpu_get_gr(VCPU *vcpu, unsigned long reg);
@@ -176,6 +162,11 @@ extern UINT64 vcpu_get_tmp(VCPU *, UINT6
 extern UINT64 vcpu_get_tmp(VCPU *, UINT64);
 extern void vcpu_set_tmp(VCPU *, UINT64, UINT64);
 
+extern IA64FAULT vcpu_set_dtr(VCPU *vcpu, u64 slot,
+                              u64 pte, u64 itir, u64 ifa, u64 rid);
+extern IA64FAULT vcpu_set_itr(VCPU *vcpu, u64 slot,
+                              u64 pte, u64 itir, u64 ifa, u64 rid);
+
 /* Initialize vcpu regs.  */
 extern void vcpu_init_regs (struct vcpu *v);
 
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/vhpt.h
--- a/xen/include/asm-ia64/vhpt.h       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/vhpt.h       Fri Jul 28 10:51:38 2006 +0100
@@ -21,6 +21,8 @@
 #define        VLE_CCHAIN_OFFSET               24
 
 #ifndef __ASSEMBLY__
+#include <xen/percpu.h>
+
 //
 // VHPT Long Format Entry (as recognized by hw)
 //
@@ -40,6 +42,7 @@ extern void vhpt_multiple_insert(unsigne
                                 unsigned long logps);
 extern void vhpt_insert (unsigned long vadr, unsigned long pte,
                         unsigned long logps);
+void vhpt_flush(void);
 
 /* Currently the VHPT is allocated per CPU.  */
 DECLARE_PER_CPU (unsigned long, vhpt_paddr);
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/vmx.h
--- a/xen/include/asm-ia64/vmx.h        Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/vmx.h        Fri Jul 28 10:51:38 2006 +0100
@@ -34,16 +34,14 @@ extern void vmx_final_setup_guest(struct
 extern void vmx_final_setup_guest(struct vcpu *v);
 extern void vmx_save_state(struct vcpu *v);
 extern void vmx_load_state(struct vcpu *v);
-extern void vmx_setup_platform(struct domain *d, struct vcpu_guest_context *c);
+extern void vmx_setup_platform(struct domain *d);
 extern void vmx_wait_io(void);
 extern void vmx_io_assist(struct vcpu *v);
-extern void panic_domain(struct pt_regs *regs, const char *fmt, ...);
 extern int ia64_hypercall (struct pt_regs *regs);
 extern void vmx_save_state(struct vcpu *v);
 extern void vmx_load_state(struct vcpu *v);
 extern void show_registers(struct pt_regs *regs);
 #define show_execution_state show_registers
-extern int vmx_build_physmap_table(struct domain *d);
 extern unsigned long __gpfn_to_mfn_foreign(struct domain *d, unsigned long gpfn);
 extern void sync_split_caches(void);
 extern void vmx_virq_line_assist(struct vcpu *v);
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/vmx_pal.h
--- a/xen/include/asm-ia64/vmx_pal.h    Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/vmx_pal.h    Fri Jul 28 10:51:38 2006 +0100
@@ -74,10 +74,11 @@ ia64_pal_vp_exit_env(u64 iva)
 #define        VP_FR_PMC       1UL<<1
 #define        VP_OPCODE       1UL<<8
 #define        VP_CAUSE        1UL<<9
+#define        VP_FW_ACC       1UL<<63
 /* init vp env with initializing vm_buffer */
-#define        VP_INIT_ENV_INITALIZE   VP_INITIALIZE|VP_FR_PMC|VP_OPCODE|VP_CAUSE
+#define        VP_INIT_ENV_INITALIZE   VP_INITIALIZE|VP_FR_PMC|VP_OPCODE|VP_CAUSE|VP_FW_ACC
 /* init vp env without initializing vm_buffer */
-#define        VP_INIT_ENV  VP_FR_PMC|VP_OPCODE|VP_CAUSE
+#define        VP_INIT_ENV  VP_FR_PMC|VP_OPCODE|VP_CAUSE|VP_FW_ACC
 
 static inline s64
 ia64_pal_vp_init_env (u64 config_options, u64 pbase_addr, \
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/vmx_phy_mode.h
--- a/xen/include/asm-ia64/vmx_phy_mode.h       Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/vmx_phy_mode.h       Fri Jul 28 10:51:38 2006 +0100
@@ -96,7 +96,6 @@ extern void recover_if_physical_mode(VCP
 extern void recover_if_physical_mode(VCPU *vcpu);
 extern void vmx_init_all_rr(VCPU *vcpu);
 extern void vmx_load_all_rr(VCPU *vcpu);
-extern void vmx_load_rr7_and_pta(VCPU *vcpu);
 extern void physical_tlb_miss(VCPU *vcpu, u64 vadr);
 /*
  * No sanity check here, since all psr changes have been
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/vmx_vcpu.h
--- a/xen/include/asm-ia64/vmx_vcpu.h   Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/vmx_vcpu.h   Fri Jul 28 10:51:38 2006 +0100
@@ -103,6 +103,7 @@ extern void vlsapic_reset(VCPU *vcpu);
 extern void vlsapic_reset(VCPU *vcpu);
 extern int vmx_check_pending_irq(VCPU *vcpu);
 extern void guest_write_eoi(VCPU *vcpu);
+extern int is_unmasked_irq(VCPU *vcpu);
 extern uint64_t guest_read_vivr(VCPU *vcpu);
 extern void vmx_inject_vhpi(VCPU *vcpu, u8 vec);
 extern int vmx_vcpu_pend_interrupt(VCPU *vcpu, uint8_t vector);
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/vmx_vpd.h
--- a/xen/include/asm-ia64/vmx_vpd.h    Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/vmx_vpd.h    Fri Jul 28 10:51:38 2006 +0100
@@ -106,7 +106,7 @@ struct arch_vmx_struct {
 
 #define ARCH_VMX_IO_WAIT        3       /* Waiting for I/O completion */
 #define ARCH_VMX_INTR_ASSIST    4       /* Need DM's assist to issue intr */
-#define ARCH_VMX_CONTIG_MEM    5       /* Need contiguous machine pages */
+#define ARCH_VMX_DOMAIN         5       /* Need it to indicate VTi domain */
 
 
 #define VMX_DEBUG 1
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/xenpage.h
--- a/xen/include/asm-ia64/xenpage.h    Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/xenpage.h    Fri Jul 28 10:51:38 2006 +0100
@@ -60,6 +60,13 @@ static inline int get_order_from_pages(u
     return order;
 }
 
+static inline int get_order_from_shift(unsigned long shift)
+{
+    if (shift <= PAGE_SHIFT)
+       return 0;
+    else
+       return shift - PAGE_SHIFT;
+}
 #endif
 
 #undef __pa
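
get_order_from_shift() above converts the log2 size of an area into an allocation order relative to the page size, clamping anything at or below one page to order 0. Assuming 16 KB pages (PAGE_SHIFT of 14, as commonly configured on ia64), a quick worked example:

    #include <stdio.h>

    #define PAGE_SHIFT 14                       /* assumed: 16 KB pages */

    static int get_order_from_shift(unsigned long shift)
    {
        return shift <= PAGE_SHIFT ? 0 : (int)(shift - PAGE_SHIFT);
    }

    int main(void)
    {
        printf("order(12) = %d\n", get_order_from_shift(12)); /* sub-page -> 0 */
        printf("order(14) = %d\n", get_order_from_shift(14)); /* one page -> 0 */
        printf("order(16) = %d\n", get_order_from_shift(16)); /* 4 pages  -> 2 */
        return 0;
    }

This is how shift constants such as XSI_SHIFT and XMAPPEDREGS_SHIFT, introduced elsewhere in this changeset, translate into page allocations.
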
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-ia64/xensystem.h
--- a/xen/include/asm-ia64/xensystem.h  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-ia64/xensystem.h  Fri Jul 28 10:51:38 2006 +0100
@@ -19,6 +19,7 @@
 
 #define HYPERVISOR_VIRT_START   0xe800000000000000
 #define KERNEL_START            0xf000000004000000
+#define GATE_ADDR              KERNEL_START
 #define DEFAULT_SHAREDINFO_ADDR         0xf100000000000000
 #define PERCPU_ADDR             (DEFAULT_SHAREDINFO_ADDR - PERCPU_PAGE_SIZE)
 #define VHPT_ADDR               0xf200000000000000
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-x86/hvm/vmx/vmx.h
--- a/xen/include/asm-x86/hvm/vmx/vmx.h Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h Fri Jul 28 10:51:38 2006 +0100
@@ -143,11 +143,12 @@ extern unsigned int cpu_rev;
  */
 #define INTR_INFO_VECTOR_MASK           0xff            /* 7:0 */
 #define INTR_INFO_INTR_TYPE_MASK        0x700           /* 10:8 */
-#define INTR_INFO_DELIEVER_CODE_MASK    0x800           /* 11 */
+#define INTR_INFO_DELIVER_CODE_MASK     0x800           /* 11 */
 #define INTR_INFO_VALID_MASK            0x80000000      /* 31 */
 
 #define INTR_TYPE_EXT_INTR              (0 << 8) /* external interrupt */
-#define INTR_TYPE_EXCEPTION             (3 << 8) /* processor exception */
+#define INTR_TYPE_HW_EXCEPTION             (3 << 8) /* hardware exception */
+#define INTR_TYPE_SW_EXCEPTION             (6 << 8) /* software exception */
 
 /*
  * Exit Qualifications for MOV for Control Register Access
@@ -421,7 +422,7 @@ static inline int vmx_pgbit_test(struct 
 }
 
 static inline int __vmx_inject_exception(struct vcpu *v, int trap, int type, 
-                                         int error_code)
+                                         int error_code, int ilen)
 {
     unsigned long intr_fields;
 
@@ -429,22 +430,33 @@ static inline int __vmx_inject_exception
     intr_fields = (INTR_INFO_VALID_MASK | type | trap);
     if (error_code != VMX_DELIVER_NO_ERROR_CODE) {
         __vmwrite(VM_ENTRY_EXCEPTION_ERROR_CODE, error_code);
-        intr_fields |= INTR_INFO_DELIEVER_CODE_MASK;
+        intr_fields |= INTR_INFO_DELIVER_CODE_MASK;
      }
-    
+
+    if(ilen)
+      __vmwrite(VM_ENTRY_INSTRUCTION_LEN, ilen);
+
     __vmwrite(VM_ENTRY_INTR_INFO_FIELD, intr_fields);
     return 0;
 }
 
-static inline int vmx_inject_exception(struct vcpu *v, int trap, int error_code)
+static inline int vmx_inject_hw_exception(struct vcpu *v, int trap, int error_code)
 {
     v->arch.hvm_vmx.vector_injected = 1;
-    return __vmx_inject_exception(v, trap, INTR_TYPE_EXCEPTION, error_code);
+    return __vmx_inject_exception(v, trap, INTR_TYPE_HW_EXCEPTION,
+                                 error_code, 0);
+}
+
+static inline int vmx_inject_sw_exception(struct vcpu *v, int trap, int instruction_len) {
+     v->arch.hvm_vmx.vector_injected=1;
+     return __vmx_inject_exception(v, trap, INTR_TYPE_SW_EXCEPTION,
+                                  VMX_DELIVER_NO_ERROR_CODE,
+                                  instruction_len);
 }
 
 static inline int vmx_inject_extint(struct vcpu *v, int trap, int error_code)
 {
-    __vmx_inject_exception(v, trap, INTR_TYPE_EXT_INTR, error_code);
+    __vmx_inject_exception(v, trap, INTR_TYPE_EXT_INTR, error_code, 0);
     __vmwrite(GUEST_INTERRUPTIBILITY_INFO, 0);
 
     return 0;
@@ -452,14 +464,14 @@ static inline int vmx_inject_extint(stru
 
 static inline int vmx_reflect_exception(struct vcpu *v)
 {
-    int error_code, vector;
-
-    __vmread(VM_EXIT_INTR_INFO, &vector);
-    if (vector & INTR_INFO_DELIEVER_CODE_MASK)
+    int error_code, intr_info, vector;
+
+    __vmread(VM_EXIT_INTR_INFO, &intr_info);
+    vector = intr_info & 0xff;
+    if (intr_info & INTR_INFO_DELIVER_CODE_MASK)
         __vmread(VM_EXIT_INTR_ERROR_CODE, &error_code);
     else
         error_code = VMX_DELIVER_NO_ERROR_CODE;
-    vector &= 0xff;
 
 #ifndef NDEBUG
     {
@@ -472,7 +484,19 @@ static inline int vmx_reflect_exception(
     }
 #endif /* NDEBUG */
 
-    vmx_inject_exception(v, vector, error_code);
+    /* According to Intel Virtualization Technology Specification for
+       the IA-32 Intel Architecture (C97063-002 April 2005), section
+       2.8.3, SW_EXCEPTION should be used for #BP and #OV, and
+       HW_EXCEPTION used for everything else.  The main difference
+       appears to be that for SW_EXCEPTION, the EIP/RIP is incremented
+       by VM_ENTRY_INSTRUCTION_LEN bytes, whereas for HW_EXCEPTION,
+       it is not.  */
+    if((intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_SW_EXCEPTION) {
+      int ilen;
+      __vmread(VM_EXIT_INSTRUCTION_LEN, &ilen);
+      vmx_inject_sw_exception(v, vector, ilen);
+    } else
+      vmx_inject_hw_exception(v, vector, error_code);
     return 0;
 }
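
The comment above explains the split behind the vmx_inject_exception() -> vmx_inject_hw_exception()/vmx_inject_sw_exception() rename: software exceptions (#BP, #OF) must be re-injected together with VM_ENTRY_INSTRUCTION_LEN so the guest RIP advances past the trapping instruction, while hardware exceptions are delivered at the faulting RIP. A condensed sketch of the decision vmx_reflect_exception() now makes, with simplified stand-ins for the VMCS accessors:

    #include <stdio.h>

    #define INTR_TYPE_HW_EXCEPTION (3 << 8)
    #define INTR_TYPE_SW_EXCEPTION (6 << 8)
    #define INTR_TYPE_MASK         0x700

    /* Stand-in for the real injection path (__vmwrite of the intr fields). */
    static void inject(int vector, int type, int ilen)
    {
        printf("inject vector %d, type %#x, instruction len %d\n",
               vector, type, ilen);
    }

    static void reflect(unsigned int intr_info, int exit_ilen)
    {
        int vector = intr_info & 0xff;

        if ((intr_info & INTR_TYPE_MASK) == INTR_TYPE_SW_EXCEPTION)
            inject(vector, INTR_TYPE_SW_EXCEPTION, exit_ilen); /* RIP advances  */
        else
            inject(vector, INTR_TYPE_HW_EXCEPTION, 0);         /* same RIP      */
    }

    int main(void)
    {
        reflect(3  | INTR_TYPE_SW_EXCEPTION, 1);   /* #BP from int3 */
        reflect(14 | INTR_TYPE_HW_EXCEPTION, 0);   /* #PF           */
        return 0;
    }
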
 
diff -r 1eb42266de1b -r e5c84586c333 xen/include/asm-x86/mm.h
--- a/xen/include/asm-x86/mm.h  Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/asm-x86/mm.h  Fri Jul 28 10:51:38 2006 +0100
@@ -241,6 +241,11 @@ static inline int get_page_and_type(stru
     return rc;
 }
 
+static inline int page_is_removable(struct page_info *page)
+{
+    return ((page->count_info & PGC_count_mask) == 1);
+}
+
 #define ASSERT_PAGE_IS_TYPE(_p, _t)                            \
     ASSERT(((_p)->u.inuse.type_info & PGT_type_mask) == (_t)); \
     ASSERT(((_p)->u.inuse.type_info & PGT_count_mask) != 0)
diff -r 1eb42266de1b -r e5c84586c333 xen/include/public/arch-ia64.h
--- a/xen/include/public/arch-ia64.h    Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/public/arch-ia64.h    Fri Jul 28 10:51:38 2006 +0100
@@ -42,19 +42,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_pfn_t);
 
 typedef unsigned long xen_ulong_t;
 
-#define MAX_NR_SECTION  32  /* at most 32 memory holes */
-struct mm_section {
-    unsigned long start;  /* start of memory hole */
-    unsigned long end;    /* end of memory hole */
-};
-typedef struct mm_section mm_section_t;
-
-struct pmt_entry {
-    unsigned long mfn : 56;
-    unsigned long type: 8;
-};
-typedef struct pmt_entry pmt_entry_t;
-
 #define GPFN_MEM          (0UL << 56) /* Guest pfn is normal mem */
 #define GPFN_FRAME_BUFFER (1UL << 56) /* VGA framebuffer */
 #define GPFN_LOW_MMIO     (2UL << 56) /* Low MMIO range */
@@ -95,16 +82,6 @@ typedef struct pmt_entry pmt_entry_t;
 
 #define GFW_START        (4*MEM_G -16*MEM_M)
 #define GFW_SIZE         (16*MEM_M)
-
-/*
- * NB. This may become a 64-bit count with no shift. If this happens then the 
- * structure size will still be 8 bytes, so no other alignments will change.
- */
-struct tsc_timestamp {
-    unsigned int  tsc_bits;      /* 0: 32 bits read from the CPU's TSC. */
-    unsigned int  tsc_bitshift;  /* 4: 'tsc_bits' uses N:N+31 of TSC.   */
-}; /* 8 bytes */
-typedef struct tsc_timestamp tsc_timestamp_t;
 
 struct pt_fpreg {
     union {
@@ -185,7 +162,7 @@ struct cpu_user_regs {
     unsigned long r6;  /* preserved */
     unsigned long r7;  /* preserved */
     unsigned long eml_unat;    /* used for emulating instruction */
-    unsigned long rfi_pfs;     /* used for elulating rfi */
+    unsigned long pad0;     /* alignment pad */
 
 };
 typedef struct cpu_user_regs cpu_user_regs_t;
@@ -299,20 +276,23 @@ struct mapped_regs {
             unsigned long tmp[8]; // temp registers (e.g. for hyperprivops)
         };
     };
+};
+typedef struct mapped_regs mapped_regs_t;
+
+struct vpd {
+    struct mapped_regs vpd_low;
     unsigned long  reserved6[3456];
     unsigned long  vmm_avail[128];
     unsigned long  reserved7[4096];
 };
-typedef struct mapped_regs mapped_regs_t;
+typedef struct vpd vpd_t;
 
 struct arch_vcpu_info {
 };
 typedef struct arch_vcpu_info arch_vcpu_info_t;
 
-typedef mapped_regs_t vpd_t;
-
 struct arch_shared_info {
-    unsigned int flags;
+    /* PFN of the start_info page.  */
     unsigned long start_info_pfn;
 
     /* Interrupt vector for event channel.  */
@@ -320,30 +300,30 @@ struct arch_shared_info {
 };
 typedef struct arch_shared_info arch_shared_info_t;
 
-struct arch_initrd_info {
-    unsigned long start;
-    unsigned long size;
-};
-typedef struct arch_initrd_info arch_initrd_info_t;
-
 typedef unsigned long xen_callback_t;
 
-#define IA64_COMMAND_LINE_SIZE 512
+struct ia64_tr_entry {
+    unsigned long pte;
+    unsigned long itir;
+    unsigned long vadr;
+    unsigned long rid;
+};
+
+struct vcpu_extra_regs {
+    struct ia64_tr_entry itrs[8];
+    struct ia64_tr_entry dtrs[8];
+    unsigned long iva;
+    unsigned long dcr;
+    unsigned long event_callback_ip;
+};
+
 struct vcpu_guest_context {
-#define VGCF_FPU_VALID (1<<0)
-#define VGCF_VMX_GUEST (1<<1)
-#define VGCF_IN_KERNEL (1<<2)
+#define VGCF_EXTRA_REGS (1<<1) /* Get/Set extra regs.  */
     unsigned long flags;       /* VGCF_* flags */
-    unsigned long pt_base;     /* PMT table base */
-    unsigned long share_io_pg; /* Shared page for I/O emulation */
-    unsigned long sys_pgnr;    /* System pages out of domain memory */
-    unsigned long vm_assist;   /* VMASST_TYPE_* bitmap, now none on IPF */
 
     struct cpu_user_regs user_regs;
-    struct mapped_regs *privregs;
-    struct arch_shared_info shared;
-    struct arch_initrd_info initrd;
-    char cmdline[IA64_COMMAND_LINE_SIZE];
+    struct vcpu_extra_regs extra_regs;
+    unsigned long privregs_pfn;
 };
 typedef struct vcpu_guest_context vcpu_guest_context_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
@@ -379,18 +359,43 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_conte
 #define _ASSIGN_readonly                0
 #define ASSIGN_readonly                 (1UL << _ASSIGN_readonly)
 #define ASSIGN_writable                 (0UL << _ASSIGN_readonly) // dummy flag
+/* Internal only: memory attribute must be WC/UC/UCE.  */
+#define _ASSIGN_nocache                 1
+#define ASSIGN_nocache                  (1UL << _ASSIGN_nocache)
+
+/* This structure has the same layout as struct ia64_boot_param, defined in
+   <asm/system.h>.  It is redefined here to ease use.  */
+struct xen_ia64_boot_param {
+       unsigned long command_line;     /* physical address of cmd line args */
+       unsigned long efi_systab;       /* physical address of EFI system table */
+       unsigned long efi_memmap;       /* physical address of EFI memory map */
+       unsigned long efi_memmap_size;  /* size of EFI memory map */
+       unsigned long efi_memdesc_size; /* size of an EFI memory map descriptor */
+       unsigned int  efi_memdesc_version;      /* memory descriptor version */
+       struct {
+               unsigned short num_cols;        /* number of columns on console.  */
+               unsigned short num_rows;        /* number of rows on console.  */
+               unsigned short orig_x;  /* cursor's x position */
+               unsigned short orig_y;  /* cursor's y position */
+       } console_info;
+       unsigned long fpswa;            /* physical address of the fpswa interface */
+       unsigned long initrd_start;
+       unsigned long initrd_size;
+       unsigned long domain_start;     /* va where the boot time domain begins */
+       unsigned long domain_size;      /* how big is the boot domain */
+};
 
 #endif /* !__ASSEMBLY__ */
 
 /* Address of shared_info in domain virtual space.
    This is the default address, for compatibility only.  */
-#define XSI_BASE                               0xf100000000000000
+#define XSI_BASE                       0xf100000000000000
 
 /* Size of the shared_info area (this is not related to page size).  */
-#define XSI_LOG_SIZE                   14
-#define XSI_SIZE                               (1 << XSI_LOG_SIZE)
+#define XSI_SHIFT                      14
+#define XSI_SIZE                       (1 << XSI_SHIFT)
 /* Log size of mapped_regs area (64 KB - only 4KB is used).  */
-#define XMAPPEDREGS_LOG_SIZE   16
+#define XMAPPEDREGS_SHIFT              12
 /* Offset of XASI (Xen arch shared info) wrt XSI_BASE.  */
 #define XMAPPEDREGS_OFS                        XSI_SIZE
 
@@ -418,7 +423,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_guest_conte
 #define HYPERPRIVOP_GET_PMD            0x15
 #define HYPERPRIVOP_GET_EFLAG          0x16
 #define HYPERPRIVOP_SET_EFLAG          0x17
-#define HYPERPRIVOP_MAX                        0x17
+#define HYPERPRIVOP_RSM_BE             0x18
+#define HYPERPRIVOP_GET_PSR            0x19
+#define HYPERPRIVOP_MAX                        0x19
 
 #endif /* __HYPERVISOR_IF_IA64_H__ */
 
diff -r 1eb42266de1b -r e5c84586c333 xen/include/public/dom0_ops.h
--- a/xen/include/public/dom0_ops.h     Thu Jul 27 17:44:14 2006 -0500
+++ b/xen/include/public/dom0_ops.h     Fri Jul 28 10:51:38 2006 +0100
@@ -518,12 +518,16 @@ DEFINE_XEN_GUEST_HANDLE(dom0_hypercall_i
 #define DOM0_DOMAIN_SETUP     49
 #define _XEN_DOMAINSETUP_hvm_guest 0
 #define XEN_DOMAINSETUP_hvm_guest  (1UL<<_XEN_DOMAINSETUP_hvm_guest)
+#define _XEN_DOMAINSETUP_query 1       /* Get parameters (for save)  */
+#define XEN_DOMAINSETUP_query  (1UL<<_XEN_DOMAINSETUP_query)
 typedef struct dom0_domain_setup {
     domid_t  domain;          /* domain to be affected */
     unsigned long flags;      /* XEN_DOMAINSETUP_* */
 #ifdef __ia64__
     unsigned long bp;         /* mpaddr of boot param area */
     unsigned long maxmem;        /* Highest memory address for MDT.  */
+    unsigned long xsi_va;     /* Xen shared_info area virtual address.  */
+    unsigned int hypercall_imm;        /* Break imm for Xen hypercalls.  */
 #endif
 } dom0_domain_setup_t;
 DEFINE_XEN_GUEST_HANDLE(dom0_domain_setup_t);
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/kernel/gate.S
--- /dev/null   Thu Jan 01 00:00:00 1970 +0000
+++ b/linux-2.6-xen-sparse/arch/ia64/kernel/gate.S      Fri Jul 28 10:51:38 2006 +0100
@@ -0,0 +1,488 @@
+/*
+ * This file contains the code that gets mapped at the upper end of each task's text
+ * region.  For now, it contains the signal trampoline code only.
+ *
+ * Copyright (C) 1999-2003 Hewlett-Packard Co
+ *     David Mosberger-Tang <davidm@xxxxxxxxxx>
+ */
+
+#include <linux/config.h>
+
+#include <asm/asmmacro.h>
+#include <asm/errno.h>
+#include <asm/asm-offsets.h>
+#include <asm/sigcontext.h>
+#include <asm/system.h>
+#include <asm/unistd.h>
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
+# include <asm/privop.h>
+#endif
+
+/*
+ * We can't easily refer to symbols inside the kernel.  To avoid full runtime relocation,
+ * complications with the linker (which likes to create PLT stubs for branches
+ * to targets outside the shared object) and to avoid multi-phase kernel builds, we
+ * simply create minimalistic "patch lists" in special ELF sections.
+ */
+       .section ".data.patch.fsyscall_table", "a"
+       .previous
+#define LOAD_FSYSCALL_TABLE(reg)                       \
+[1:]   movl reg=0;                                     \
+       .xdata4 ".data.patch.fsyscall_table", 1b-.
+
+       .section ".data.patch.brl_fsys_bubble_down", "a"
+       .previous
+#define BRL_COND_FSYS_BUBBLE_DOWN(pr)                  \
+[1:](pr)brl.cond.sptk 0;                               \
+       .xdata4 ".data.patch.brl_fsys_bubble_down", 1b-.
+
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
+       // The page in which hyperprivop lives must be pinned by ITR.
+       // However the vDSO area isn't pinned.  So issuing a hyperprivop
+       // from the vDSO page causes the trouble that Kevin pointed out.
+       // After clearing vpsr.ic, the vcpu is pre-empted and the itlb
+       // is flushed.  Then the vcpu gets the cpu again and a tlb miss fault occurs.
+       // However it results in a nested dtlb fault because vpsr.ic is off.
+       // To avoid such a situation, we jump into the kernel text area
+       // which is pinned, and then issue hyperprivop and return back
+       // to vDSO page.
+       // This is Dan Magenheimer's idea.
+
+       // Currently is_running_on_xen() is defined as running_on_xen.
+       // If is_running_on_xen() ever becomes a real function, this code
+       // must be updated accordingly.
+       .section ".data.patch.running_on_xen", "a"
+       .previous
+#define LOAD_RUNNING_ON_XEN(reg)                       \
+[1:]   movl reg=0;                                     \
+       .xdata4 ".data.patch.running_on_xen", 1b-.
+
+       .section ".data.patch.brl_xen_rsm_be_i", "a"
+       .previous
+#define BRL_COND_XEN_RSM_BE_I(pr)                      \
+[1:](pr)brl.cond.sptk 0;                               \
+       .xdata4 ".data.patch.brl_xen_rsm_be_i", 1b-.
+
+       .section ".data.patch.brl_xen_get_psr", "a"
+       .previous
+#define BRL_COND_XEN_GET_PSR(pr)                       \
+[1:](pr)brl.cond.sptk 0;                               \
+       .xdata4 ".data.patch.brl_xen_get_psr", 1b-.
+
+       .section ".data.patch.brl_xen_ssm_i_0", "a"
+       .previous
+#define BRL_COND_XEN_SSM_I_0(pr)                       \
+[1:](pr)brl.cond.sptk 0;                               \
+       .xdata4 ".data.patch.brl_xen_ssm_i_0", 1b-.
+
+       .section ".data.patch.brl_xen_ssm_i_1", "a"
+       .previous
+#define BRL_COND_XEN_SSM_I_1(pr)                       \
+[1:](pr)brl.cond.sptk 0;                               \
+       .xdata4 ".data.patch.brl_xen_ssm_i_1", 1b-.
+#endif
+
+GLOBAL_ENTRY(__kernel_syscall_via_break)
+       .prologue
+       .altrp b6
+       .body
+       /*
+        * Note: for (fast) syscall restart to work, the break instruction must be
+        *       the first one in the bundle addressed by syscall_via_break.
+        */
+{ .mib
+       break 0x100000
+       nop.i 0
+       br.ret.sptk.many b6
+}
+END(__kernel_syscall_via_break)
+
+/*
+ * On entry:
+ *     r11 = saved ar.pfs
+ *     r15 = system call #
+ *     b0  = saved return address
+ *     b6  = return address
+ * On exit:
+ *     r11 = saved ar.pfs
+ *     r15 = system call #
+ *     b0  = saved return address
+ *     all other "scratch" registers:  undefined
+ *     all "preserved" registers:      same as on entry
+ */
+
+GLOBAL_ENTRY(__kernel_syscall_via_epc)
+       .prologue
+       .altrp b6
+       .body
+{
+       /*
+        * Note: the kernel cannot assume that the first two instructions in this
+        * bundle get executed.  The remaining code must be safe even if
+        * they do not get executed.
+        */
+       adds r17=-1024,r15                      // A
+       mov r10=0                               // A    default to successful syscall execution
+       epc                                     // B    causes split-issue
+}
+       ;;
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
+       // r20 = 1
+       // r22 = &vcpu->evtchn_mask
+       // r23 = &vpsr.ic
+       // r24 = vcpu->pending_interruption
+       // r25 = tmp
+       // r28 = &running_on_xen
+       // r30 = running_on_xen
+       // r31 = tmp
+       // p11 = tmp
+       // p12 = running_on_xen
+       // p13 = !running_on_xen
+       // p14 = tmp
+       // p15 = tmp
+#define isXen  p12
+#define isRaw  p13
+       LOAD_RUNNING_ON_XEN(r28)
+       movl r22=XSI_PSR_I_ADDR
+       movl r23=XSI_PSR_IC
+       movl r24=XSI_PSR_I_ADDR+(XSI_PEND_OFS-XSI_PSR_I_ADDR_OFS)
+       mov r20=1
+       ;;
+       ld4 r30=[r28]
+       ;;
+       cmp.ne isXen,isRaw=r0,r30
+       ;;
+(isRaw)        rsm psr.be | psr.i
+       BRL_COND_XEN_RSM_BE_I(isXen)
+       .global .vdso_rsm_be_i_ret
+.vdso_rsm_be_i_ret:
+#else
+       rsm psr.be | psr.i                      // M2 (5 cyc to srlz.d)
+#endif
+       LOAD_FSYSCALL_TABLE(r14)                // X
+       ;;
+       mov r16=IA64_KR(CURRENT)                // M2 (12 cyc)
+       shladd r18=r17,3,r14                    // A
+       mov r19=NR_syscalls-1                   // A
+       ;;
+       lfetch [r18]                            // M0|1
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
+(isRaw)        mov r29=psr
+       BRL_COND_XEN_GET_PSR(isXen)
+       .global .vdso_get_psr_ret
+.vdso_get_psr_ret:
+#else
+       mov r29=psr                             // M2 (12 cyc)
+#endif
+       // If r17 is a NaT, p6 will be zero
+       cmp.geu p6,p7=r19,r17                   // A    (sysnr > 0 && sysnr < 1024+NR_syscalls)?
+       ;;
+       mov r21=ar.fpsr                         // M2 (12 cyc)
+       tnat.nz p10,p9=r15                      // I0
+       mov.i r26=ar.pfs                        // I0 (would stall anyhow due to srlz.d...)
+       ;;
+       srlz.d                                  // M0 (forces split-issue) ensure PSR.BE==0
+(p6)   ld8 r18=[r18]                           // M0|1
+       nop.i 0
+       ;;
+       nop.m 0
+(p6)   tbit.z.unc p8,p0=r18,0                  // I0 (dual-issues with "mov b7=r18"!)
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
+       ;;
+       // p14 = running_on_xen && p8
+       // p15 = !running_on_xen && p8
+(p8)   cmp.ne.unc p14,p15=r0,r30
+       ;;
+(p15)  ssm psr.i
+       BRL_COND_XEN_SSM_I_0(p14)
+       .global .vdso_ssm_i_0_ret
+.vdso_ssm_i_0_ret:
+#else
+       nop.i 0
+       ;;
+(p8)   ssm psr.i
+#endif
+(p6)   mov b7=r18                              // I0
+(p8)   br.dptk.many b7                         // B
+
+       mov r27=ar.rsc                          // M2 (12 cyc)
+/*
+ * brl.cond doesn't work as intended because the linker would convert this branch
+ * into a branch to a PLT.  Perhaps there will be a way to avoid this with some
+ * future version of the linker.  In the meantime, we just use an indirect branch
+ * instead.
+ */
+#ifdef CONFIG_ITANIUM
+(p6)   add r14=-8,r14                          // r14 <- addr of fsys_bubble_down entry
+       ;;
+(p6)   ld8 r14=[r14]                           // r14 <- fsys_bubble_down
+       ;;
+(p6)   mov b7=r14
+(p6)   br.sptk.many b7
+#else
+       BRL_COND_FSYS_BUBBLE_DOWN(p6)
+#endif
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
+(isRaw)        ssm psr.i
+       BRL_COND_XEN_SSM_I_1(isXen)
+       .global .vdso_ssm_i_1_ret
+.vdso_ssm_i_1_ret:
+#else
+       ssm psr.i
+#endif
+       mov r10=-1
+(p10)  mov r8=EINVAL
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
+       dv_serialize_data // shut up gas warning.
+                         // we know xen_hyper_ssm_i_0 or xen_hyper_ssm_i_1
+                         // doesn't change p9 and p10
+#endif
+(p9)   mov r8=ENOSYS
+       FSYS_RETURN
+END(__kernel_syscall_via_epc)
+
+#      define ARG0_OFF         (16 + IA64_SIGFRAME_ARG0_OFFSET)
+#      define ARG1_OFF         (16 + IA64_SIGFRAME_ARG1_OFFSET)
+#      define ARG2_OFF         (16 + IA64_SIGFRAME_ARG2_OFFSET)
+#      define SIGHANDLER_OFF   (16 + IA64_SIGFRAME_HANDLER_OFFSET)
+#      define SIGCONTEXT_OFF   (16 + IA64_SIGFRAME_SIGCONTEXT_OFFSET)
+
+#      define FLAGS_OFF        IA64_SIGCONTEXT_FLAGS_OFFSET
+#      define CFM_OFF          IA64_SIGCONTEXT_CFM_OFFSET
+#      define FR6_OFF          IA64_SIGCONTEXT_FR6_OFFSET
+#      define BSP_OFF          IA64_SIGCONTEXT_AR_BSP_OFFSET
+#      define RNAT_OFF         IA64_SIGCONTEXT_AR_RNAT_OFFSET
+#      define UNAT_OFF         IA64_SIGCONTEXT_AR_UNAT_OFFSET
+#      define FPSR_OFF         IA64_SIGCONTEXT_AR_FPSR_OFFSET
+#      define PR_OFF           IA64_SIGCONTEXT_PR_OFFSET
+#      define RP_OFF           IA64_SIGCONTEXT_IP_OFFSET
+#      define SP_OFF           IA64_SIGCONTEXT_R12_OFFSET
+#      define RBS_BASE_OFF     IA64_SIGCONTEXT_RBS_BASE_OFFSET
+#      define LOADRS_OFF       IA64_SIGCONTEXT_LOADRS_OFFSET
+#      define base0            r2
+#      define base1            r3
+       /*
+        * When we get here, the memory stack looks like this:
+        *
+        *   +===============================+
+        *   |                               |
+        *   //     struct sigframe          //
+        *   |                               |
+        *   +-------------------------------+ <-- sp+16
+        *   |      16 bytes of scratch      |
+        *   |            space              |
+        *   +-------------------------------+ <-- sp
+        *
+        * The register stack looks _exactly_ the way it looked at the time the signal
+        * occurred.  In other words, we're treading on a potential mine-field: each
+        * incoming general register may be a NaT value (including sp, in which case the
+        * process ends up dying with a SIGSEGV).
+        *
+        * The first thing we need to do is a cover to get the registers onto the backing
+        * store.  Once that is done, we invoke the signal handler which may modify some
+        * of the machine state.  After returning from the signal handler, we return
+        * control to the previous context by executing a sigreturn system call.  A signal
+        * handler may call the rt_sigreturn() function to directly return to a given
+        * sigcontext.  However, the user-level sigreturn() needs to do much more than
+        * calling the rt_sigreturn() system call as it needs to unwind the stack to
+        * restore preserved registers that may have been saved on the signal handler's
+        * call stack.
+        */
+
+#define SIGTRAMP_SAVES                                                                          \
+       .unwabi 3, 's';         /* mark this as a sigtramp handler (saves scratch regs) */       \
+       .unwabi @svr4, 's'; /* backwards compatibility with old unwinders (remove in v2.7) */    \
+       .savesp ar.unat, UNAT_OFF+SIGCONTEXT_OFF;                                                \
+       .savesp ar.fpsr, FPSR_OFF+SIGCONTEXT_OFF;                                                \
+       .savesp pr, PR_OFF+SIGCONTEXT_OFF;                                                       \
+       .savesp rp, RP_OFF+SIGCONTEXT_OFF;                                                       \
+       .savesp ar.pfs, CFM_OFF+SIGCONTEXT_OFF;                                                  \
+       .vframesp SP_OFF+SIGCONTEXT_OFF
+
+GLOBAL_ENTRY(__kernel_sigtramp)
+       // describe the state that is active when we get here:
+       .prologue
+       SIGTRAMP_SAVES
+       .body
+
+       .label_state 1
+
+       adds base0=SIGHANDLER_OFF,sp
+       adds base1=RBS_BASE_OFF+SIGCONTEXT_OFF,sp
+       br.call.sptk.many rp=1f
+1:
+       ld8 r17=[base0],(ARG0_OFF-SIGHANDLER_OFF)       // get pointer to signal handler's plabel
+       ld8 r15=[base1]                                 // get address of new RBS base (or NULL)
+       cover                           // push args in interrupted frame onto backing store
+       ;;
+       cmp.ne p1,p0=r15,r0             // do we need to switch rbs? (note: pr is saved by kernel)
+       mov.m r9=ar.bsp                 // fetch ar.bsp
+       .spillsp.p p1, ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+(p1)   br.cond.spnt setup_rbs          // yup -> (clobbers p8, r14-r16, and r18-r20)
+back_from_setup_rbs:
+       alloc r8=ar.pfs,0,0,3,0
+       ld8 out0=[base0],16             // load arg0 (signum)
+       adds base1=(ARG1_OFF-(RBS_BASE_OFF+SIGCONTEXT_OFF)),base1
+       ;;
+       ld8 out1=[base1]                // load arg1 (siginfop)
+       ld8 r10=[r17],8                 // get signal handler entry point
+       ;;
+       ld8 out2=[base0]                // load arg2 (sigcontextp)
+       ld8 gp=[r17]                    // get signal handler's global pointer
+       adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
+       ;;
+       .spillsp ar.bsp, BSP_OFF+SIGCONTEXT_OFF
+       st8 [base0]=r9                  // save sc_ar_bsp
+       adds base0=(FR6_OFF+SIGCONTEXT_OFF),sp
+       adds base1=(FR6_OFF+16+SIGCONTEXT_OFF),sp
+       ;;
+       stf.spill [base0]=f6,32
+       stf.spill [base1]=f7,32
+       ;;
+       stf.spill [base0]=f8,32
+       stf.spill [base1]=f9,32
+       mov b6=r10
+       ;;
+       stf.spill [base0]=f10,32
+       stf.spill [base1]=f11,32
+       ;;
+       stf.spill [base0]=f12,32
+       stf.spill [base1]=f13,32
+       ;;
+       stf.spill [base0]=f14,32
+       stf.spill [base1]=f15,32
+       br.call.sptk.many rp=b6                 // call the signal handler
+.ret0: adds base0=(BSP_OFF+SIGCONTEXT_OFF),sp
+       ;;
+       ld8 r15=[base0]                         // fetch sc_ar_bsp
+       mov r14=ar.bsp
+       ;;
+       cmp.ne p1,p0=r14,r15                    // do we need to restore the rbs?
+(p1)   br.cond.spnt restore_rbs                // yup -> (clobbers r14-r18, f6 & f7)
+       ;;
+back_from_restore_rbs:
+       adds base0=(FR6_OFF+SIGCONTEXT_OFF),sp
+       adds base1=(FR6_OFF+16+SIGCONTEXT_OFF),sp
+       ;;
+       ldf.fill f6=[base0],32
+       ldf.fill f7=[base1],32
+       ;;
+       ldf.fill f8=[base0],32
+       ldf.fill f9=[base1],32
+       ;;
+       ldf.fill f10=[base0],32
+       ldf.fill f11=[base1],32
+       ;;
+       ldf.fill f12=[base0],32
+       ldf.fill f13=[base1],32
+       ;;
+       ldf.fill f14=[base0],32
+       ldf.fill f15=[base1],32
+       mov r15=__NR_rt_sigreturn
+       .restore sp                             // pop .prologue
+       break __BREAK_SYSCALL
+
+       .prologue
+       SIGTRAMP_SAVES
+setup_rbs:
+       mov ar.rsc=0                            // put RSE into enforced lazy mode
+       ;;
+       .save ar.rnat, r19
+       mov r19=ar.rnat                         // save RNaT before switching backing store area
+       adds r14=(RNAT_OFF+SIGCONTEXT_OFF),sp
+
+       mov r18=ar.bspstore
+       mov ar.bspstore=r15                     // switch over to new register backing store area
+       ;;
+
+       .spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+       st8 [r14]=r19                           // save sc_ar_rnat
+       .body
+       mov.m r16=ar.bsp                        // sc_loadrs <- (new bsp - new bspstore) << 16
+       adds r14=(LOADRS_OFF+SIGCONTEXT_OFF),sp
+       ;;
+       invala
+       sub r15=r16,r15
+       extr.u r20=r18,3,6
+       ;;
+       mov ar.rsc=0xf                          // set RSE into eager mode, pl 3
+       cmp.eq p8,p0=63,r20
+       shl r15=r15,16
+       ;;
+       st8 [r14]=r15                           // save sc_loadrs
+(p8)   st8 [r18]=r19           // if bspstore points at RNaT slot, store RNaT there now
+       .restore sp                             // pop .prologue
+       br.cond.sptk back_from_setup_rbs
+
+       .prologue
+       SIGTRAMP_SAVES
+       .spillsp ar.rnat, RNAT_OFF+SIGCONTEXT_OFF
+       .body
+restore_rbs:
+       // On input:
+       //      r14 = bsp1 (bsp at the time of return from signal handler)
+       //      r15 = bsp0 (bsp at the time the signal occurred)
+       //
+       // Here, we need to calculate bspstore0, the value that ar.bspstore needs
+       // to be set to, based on bsp0 and the size of the dirty partition on
+       // the alternate stack (sc_loadrs >> 16).  This can be done with the
+       // following algorithm:
+       //
+       //  bspstore0 = rse_skip_regs(bsp0, -rse_num_regs(bsp1 - (loadrs >> 19), bsp1));
+       //
+       // This is what the code below does.
+       //
+       alloc r2=ar.pfs,0,0,0,0                 // alloc null frame
+       adds r16=(LOADRS_OFF+SIGCONTEXT_OFF),sp
+       adds r18=(RNAT_OFF+SIGCONTEXT_OFF),sp
+       ;;
+       ld8 r17=[r16]
+       ld8 r16=[r18]                   // get new rnat
+       extr.u r18=r15,3,6      // r18 <- rse_slot_num(bsp0)
+       ;;
+       mov ar.rsc=r17                  // put RSE into enforced lazy mode
+       shr.u r17=r17,16
+       ;;
+       sub r14=r14,r17         // r14 (bspstore1) <- bsp1 - (sc_loadrs >> 16)
+       shr.u r17=r17,3         // r17 <- (sc_loadrs >> 19)
+       ;;
+       loadrs                  // restore dirty partition
+       extr.u r14=r14,3,6      // r14 <- rse_slot_num(bspstore1)
+       ;;
+       add r14=r14,r17         // r14 <- rse_slot_num(bspstore1) + (sc_loadrs >> 19)
+       ;;
+       shr.u r14=r14,6         // r14 <- (rse_slot_num(bspstore1) + (sc_loadrs >> 19))/0x40
+       ;;
+       sub r14=r14,r17         // r14 <- -rse_num_regs(bspstore1, bsp1)
+       movl r17=0x8208208208208209
+       ;;
+       add r18=r18,r14         // r18 (delta) <- rse_slot_num(bsp0) - rse_num_regs(bspstore1,bsp1)
+       setf.sig f7=r17
+       cmp.lt p7,p0=r14,r0     // p7 <- (r14 < 0)?
+       ;;
+(p7)   adds r18=-62,r18        // delta -= 62
+       ;;
+       setf.sig f6=r18
+       ;;
+       xmpy.h f6=f6,f7
+       ;;
+       getf.sig r17=f6
+       ;;
+       add r17=r17,r18
+       shr r18=r18,63
+       ;;
+       shr r17=r17,5
+       ;;
+       sub r17=r17,r18         // r17 = delta/63
+       ;;
+       add r17=r14,r17         // r17 <- delta/63 - rse_num_regs(bspstore1, bsp1)
+       ;;
+       shladd r15=r17,3,r15    // r15 <- bsp0 + 8*(delta/63 - rse_num_regs(bspstore1, bsp1))
+       ;;
+       mov ar.bspstore=r15                     // switch back to old register backing store area
+       ;;
+       mov ar.rnat=r16                         // restore RNaT
+       mov ar.rsc=0xf                          // (will be restored later on from sc_ar_rsc)
+       // invala not necessary as that will happen when returning to user-mode
+       br.cond.sptk back_from_restore_rbs
+END(__kernel_sigtramp)
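
For reference, the delta/63 step in restore_rbs above is a signed magic-number division:
63 because every 64th RSE slot is a NaT collection and holds no register, and the constant
0x8208208208208209 with a shift of 5 is the corresponding reciprocal.  The standalone C
sketch below mirrors the xmpy.h / add / shr / sub sequence and checks it against plain
division; it is an illustration of the arithmetic, not code from this patch (it assumes
GCC's __int128 and arithmetic right shift of negative values).

    /* Standalone check: reproduce the delta/63 computation from restore_rbs.
     * xmpy.h corresponds to the high 64 bits of the signed 64x64 product. */
    #include <stdint.h>
    #include <stdio.h>

    static int64_t div63_magic(int64_t delta)
    {
        const int64_t magic = (int64_t)0x8208208208208209ULL;    /* same constant as gate.S */
        int64_t hi = (int64_t)(((__int128)delta * magic) >> 64); /* xmpy.h f6=f6,f7   */
        int64_t t  = hi + delta;                                 /* add r17=r17,r18   */
        return (t >> 5) - (delta >> 63);                         /* shr 5, minus sign */
    }

    int main(void)
    {
        for (int64_t d = -1000; d <= 1000; d++)
            if (div63_magic(d) != d / 63)
                printf("mismatch at %lld\n", (long long)d);
        return 0;
    }
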
diff -r 1eb42266de1b -r e5c84586c333 linux-2.6-xen-sparse/arch/ia64/kernel/gate.lds.S
--- /dev/null   Thu Jan 01 00:00:00 1970 +0000
+++ b/linux-2.6-xen-sparse/arch/ia64/kernel/gate.lds.S  Fri Jul 28 10:51:38 2006 +0100
@@ -0,0 +1,117 @@
+/*
+ * Linker script for gate DSO.  The gate pages are an ELF shared object prelinked to its
+ * virtual address, with only one read-only segment and one execute-only segment (both fit
+ * in one page).  This script controls its layout.
+ */
+
+#include <linux/config.h>
+
+#include <asm/system.h>
+
+SECTIONS
+{
+  . = GATE_ADDR + SIZEOF_HEADERS;
+
+  .hash                                : { *(.hash) }                          :readable
+  .dynsym                      : { *(.dynsym) }
+  .dynstr                      : { *(.dynstr) }
+  .gnu.version                 : { *(.gnu.version) }
+  .gnu.version_d               : { *(.gnu.version_d) }
+  .gnu.version_r               : { *(.gnu.version_r) }
+  .dynamic                     : { *(.dynamic) }                       :readable :dynamic
+
+  /*
+   * This linker script is used both with -r and with -shared.  For the layouts to match,
+   * we need to skip more than enough space for the dynamic symbol table et al.  If this
+   * amount is insufficient, ld -shared will barf.  Just increase it here.
+   */
+  . = GATE_ADDR + 0x500;
+
+  .data.patch                  : {
+                                   __start_gate_mckinley_e9_patchlist = .;
+                                   *(.data.patch.mckinley_e9)
+                                   __end_gate_mckinley_e9_patchlist = .;
+
+                                   __start_gate_vtop_patchlist = .;
+                                   *(.data.patch.vtop)
+                                   __end_gate_vtop_patchlist = .;
+
+                                   __start_gate_fsyscall_patchlist = .;
+                                   *(.data.patch.fsyscall_table)
+                                   __end_gate_fsyscall_patchlist = .;
+
+                                   __start_gate_brl_fsys_bubble_down_patchlist = .;
+                                   *(.data.patch.brl_fsys_bubble_down)
+                                   __end_gate_brl_fsys_bubble_down_patchlist = .;
+
+#ifdef CONFIG_XEN_IA64_VDSO_PARAVIRT
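
The .xdata4 entries emitted by the LOAD_*/BRL_COND_* macros in gate.S each record a
self-relative 32-bit offset back to the tagged instruction bundle, and the linker script
brackets every patch section with __start_.../__end_... symbols so the kernel can walk
the list at boot and rewrite the instructions in place.  A hedged C sketch of how such a
list is typically consumed is below; the symbol names for the Xen lists are assumed
(the listing is cut off above), and apply_bundle_patch() is only a placeholder for the
architecture-specific rewriting done in arch/ia64/kernel/patch.c.

    /* Sketch: walk a patch list built with `.xdata4 "<section>", 1b-.`.
     * Each entry holds the offset from the entry itself to the bundle to patch. */
    #include <stdint.h>

    extern int32_t __start_gate_running_on_xen_patchlist[];   /* assumed symbol names */
    extern int32_t __end_gate_running_on_xen_patchlist[];

    void apply_bundle_patch(uint64_t bundle_addr);             /* placeholder */

    void patch_running_on_xen(void)
    {
        for (int32_t *entry = __start_gate_running_on_xen_patchlist;
             entry < __end_gate_running_on_xen_patchlist; entry++) {
            uint64_t bundle = (uint64_t)entry + *entry;        /* self-relative offset */
            apply_bundle_patch(bundle);
        }
    }
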

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
