The following 5 patches are a re-submission of the VT-d patch.
This set of patches has been tested against cs# 15080 and is
now more mature and has been exercised in more environments than
the original patch.  Specifically, we have successfully tested
the patch in the following environments:
    - 32/64-bit Linux HVM guest
    - 32-bit Windows XP/Vista (64-bit should work but was not tested)
    - 32PAE/64-bit hypervisor
    - APIC and PIC interrupt mechanisms
    - PCIe E1000 and PCI E100 NICs
Allen
----------------------
1) patch description:
vtd1.patch:
    - vt-d specific code 
    - low risk changes in common code
vtd2.patch:
    - io port handling
vtd3.patch:
    - interrupt handling
vtd4.patch:
    - mmio handling
vtd5.patch:
    - turn on VT-d processing in ACPI table
2) how to run
- Use the same syntax as the PV driver domain method to "hide" and assign the
  PCI device (an example configuration follows this list):
    - use pciback.hide=(02:00.0) on the dom0 kernel command line to "hide" the
      device from dom0
    - use pci = [ '02:00.00' ] in /etc/xen/hvm.conf to assign the device to
      the HVM domain
    - set acpi and apic to 0 in hvm.conf, as the current patch only works
      with PIC
    - grub.conf: add "ioapic_ack=old" to the /boot/xen.gz line
      (io_apic.c contains code for avoiding the global interrupt problem)
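For reference, the pieces above combine into configuration fragments roughly
like the following; the dom0 kernel image name is illustrative, and the
example assumes pciback is built into the dom0 kernel:

    # grub.conf (dom0 boot entry)
    kernel /boot/xen.gz ioapic_ack=old
    module /boot/vmlinuz-2.6-xen root=... pciback.hide=(02:00.0)

    # /etc/xen/hvm.conf (HVM guest)
    acpi = 0
    apic = 0
    pci  = [ '02:00.00' ]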
3) description of HVM PCI device assignment design:
- pci config virtualization
  - The control panel and qemu have been changed to pass assigned PCI devices
    to qemu.
  - A new file, ioemu/hw/dpci.c, reads an assigned device's PCI config space,
    constructs a new virtual device, and attaches it to the guest PCI bus.
  - The PCI read/write functions are similar to those of other virtual
    devices, except that the write function intercepts writes to the COMMAND
    register and performs the actual hardware writes (a rough sketch follows).
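The following is a minimal sketch of that COMMAND register intercept;
pt_pci_write_config(), pt_emulated_write() and host_pci_write_config() are
hypothetical names, not the actual routines in dpci.c:

    #include <stdint.h>

    #define PCI_COMMAND 0x04    /* offset of COMMAND in PCI config space */

    /* Hypothetical helpers standing in for the patch's real routines. */
    extern void pt_emulated_write(void *dev, uint32_t addr, uint32_t val, int len);
    extern void host_pci_write_config(void *dev, uint32_t addr, uint32_t val, int len);

    /* Config write handler for a passed-through device: emulate the write
     * as usual, but forward COMMAND register updates to the real hardware
     * so io/memory decode and bus mastering take effect on the device. */
    static void pt_pci_write_config(void *dev, uint32_t addr, uint32_t val, int len)
    {
        pt_emulated_write(dev, addr, val, len);

        if (addr == PCI_COMMAND)
            host_pci_write_config(dev, addr, val, len);
    }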
- interrupt virtualization
  - Currently only works for ACPI/APIC mode.
  - dpci.c makes a hypercall to tell Xen which device/INTx the assigned
    device occupies on the virtual PCI bus.
  - In do_IRQ_guest(), when Xen determines that an interrupt belongs to a
    device owned by an HVM domain, it injects the guest IRQ into that domain
    (see the sketch after this list).
  - Reverted to ioapic_ack=old to allow IRQ sharing amongst guests.
  - Implemented a new method for mask/unmask in io_apic.c to avoid the
    spurious interrupt issue.
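A minimal sketch of that dispatch step, assuming a hypothetical
irq_assigned_to_hvm_domain() lookup; hvm_do_IRQ_dpci() is the hook listed in
the interface section below:

    /* Called from do_IRQ_guest(): if the machine IRQ belongs to a device
     * assigned to an HVM domain, let the dpci layer inject the guest IRQ;
     * otherwise the existing PV event-channel path handles it. */
    static int forward_irq_to_hvm_guest(struct domain *d, unsigned int machine_irq)
    {
        if (!irq_assigned_to_hvm_domain(d, machine_irq))  /* hypothetical test */
            return 0;                                     /* not ours */

        hvm_do_IRQ_dpci(d, machine_irq);  /* inject the guest IRQ */
        return 1;                         /* handled */
    }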
- mmio
  - When the guest BIOS (i.e. hvmloader) or OS changes a PCI BAR, the PCI
    config write function in qemu makes a hypercall to instruct Xen to
    construct the p2m mapping (sketched below).
  - The shadow page table fault handler has been modified to allow memory
    above max_pages to be mapped.
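For illustration, the qemu-side reaction to a BAR change might look like the
sketch below, built on the xc_domain_memory_mapping() wrapper listed in the
hypercall section; old_gfn, new_gfn, base_mfn and nr_mfns stand for values
derived from the old and new BAR contents and are not taken from the patch:

    /* Re-point the guest physical range backing a memory BAR: drop the
     * p2m mapping at the old guest address and create one at the new. */
    static void pt_update_bar_mapping(int xc_handle, uint32_t domid,
                                      unsigned long old_gfn,
                                      unsigned long new_gfn,
                                      unsigned long base_mfn,
                                      unsigned long nr_mfns)
    {
        if (old_gfn)
            xc_domain_memory_mapping(xc_handle, domid, old_gfn,
                                     base_mfn, nr_mfns, 0 /* remove */);

        xc_domain_memory_mapping(xc_handle, domid, new_gfn,
                                 base_mfn, nr_mfns, 1 /* add */);
    }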
- ioport
  - Xen intercepts guest io port accesses,
  - translates the guest io port to the machine io port, and
  - performs the machine port access on behalf of the guest (sketched below).
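A rough sketch of that io port path, assuming a hypothetical gport_to_mport()
translation helper and showing only the 1-byte case; the real
dpci_ioport_intercept() listed below operates on the full ioreq:

    /* Perform a guest io port access on the machine port it maps to. */
    static int dpci_ioport_access_sketch(struct domain *d, ioreq_t *p)
    {
        unsigned int mport = gport_to_mport(d, p->addr);  /* hypothetical */

        if (p->dir == IOREQ_READ)
            p->data = inb(mport);   /* read the machine port for the guest */
        else
            outb(p->data, mport);   /* write the guest's value to the port */

        return 1;  /* access handled */
    }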
4) new hypercalls
int xc_assign_device(int xc_handle,
                     uint32_t domain_id,
                     uint32_t machine_bdf);
int xc_domain_ioport_mapping(int xc_handle,
                             uint32_t domid,
                             uint32_t first_gport,
                             uint32_t first_mport,
                             uint32_t nr_ports,
                             uint32_t add_mapping);
int xc_irq_mapping(int xc_handle,
                   uint32_t domain_id,
                   uint32_t method,
                   uint32_t machine_irq,
                   uint32_t device,
                   uint32_t intx,
                   uint32_t add_mapping);
int xc_domain_memory_mapping(int xc_handle,
                             uint32_t domid,
                             unsigned long first_gfn,
                             unsigned long first_mfn,
                             unsigned long nr_mfns,
                             uint32_t add_mapping);
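As a usage illustration only, a control-panel/qemu caller might drive these
wrappers as below; the (bus << 8) | devfn packing of machine_bdf, the method
value, and the port/IRQ numbers are assumptions for the example, not part of
the interface definition:

    /* Assign device 02:00.0 to the guest, then map 16 guest io ports and
     * bind its interrupt (illustrative values throughout). */
    int assign_example(int xc_handle, uint32_t domid)
    {
        uint32_t machine_bdf = (0x02 << 8) | (0x00 << 3) | 0x0;  /* assumed packing */
        int rc;

        rc = xc_assign_device(xc_handle, domid, machine_bdf);
        if (rc)
            return rc;

        /* guest ports 0xc000-0xc00f -> machine ports 0xd000-0xd00f */
        rc = xc_domain_ioport_mapping(xc_handle, domid, 0xc000, 0xd000, 16, 1);
        if (rc)
            return rc;

        /* machine IRQ 16 -> virtual device 3, INTA (method value assumed) */
        return xc_irq_mapping(xc_handle, domid, 0, 16, 3, 0, 1);
    }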
5) interface to common code:
int iommu_setup(void);
int iommu_domain_init(struct domain *d);
int assign_device(struct domain *d, u8 bus, u8 devfn);
int release_devices(struct vcpu *v);
int hvm_do_IRQ_dpci(struct domain *d, unsigned int irq);
int dpci_ioport_intercept(ioreq_t *p, int type);
int iommu_map_page(struct domain *d,
        unsigned long gfn, unsigned long mfn);
int iommu_unmap_page(
    struct domain *d, unsigned long gfn);
void iommu_flush(struct domain *d, unsigned long gfn, u64 *p2m_entry);
void iommu_set_pgd(struct domain *d);
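To illustrate how common code is expected to drive the map/unmap hooks, a p2m
update path could mirror its changes into the VT-d tables roughly as follows;
p2m_update_entry_sketch() and its arguments are illustrative only:

    /* Keep the IOMMU page tables in sync with a p2m change for one gfn. */
    static int p2m_update_entry_sketch(struct domain *d, unsigned long gfn,
                                       unsigned long mfn, int present)
    {
        if (present)
            return iommu_map_page(d, gfn, mfn);  /* mirror gfn -> mfn */
        else
            return iommu_unmap_page(d, gfn);     /* drop the mapping */
    }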