WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 

xen-devel

RE: [Xen-devel] [PATCH 0/5] VT-d support for PV guests

>From: Ian Pratt
>Sent: 20 May 2008 23:34
>> > It would be good if you could provide a bit more detail on when the
>> > patch populates IOMMU entries, and how it keeps them in sync. For
>> > example, does the IOMMU map all the guest's memory, or just that
>> > which will soon be the subject of a DMA? How synchronous is the
>> patch in removing mappings, e.g. due to page type changes
>> (pagetable pages, balloon driver) or due to unmapping grants?
>> 
>> All writable memory is initially mapped in the IOMMU.  Page type
>> changes are also reflected there.  In general all maps and unmaps to a
>> domain are synced with the IOMMU.  According to the feedback I got I
>> apparently missed some places, though.  Will look into this and fix
>> it.
>
>Is "demotion" of access handled synchronously, or do you have some
>tricks to mitigate the synchronization?

All changes need to be handled synchronously, because a DMA request is
not restartable: a VT-d fault is only an asynchronous event
notification. The hardware bits are designed such that all required
permission controls must be in place before the device actually issues
an access request.

>
>> It's clear that performance will pretty much suck if you do frequent
>> updates in grant tables, but the whole idea of having passthrough
>> access for NICs is to avoid this netfront/netback data plane scheme
>> altogether.  This leaves you with grant table updates for block device
>> access.  I don't know what the expected update frequency is for that
>> one.
>
>I don't entirely buy this -- I think we need to make grant map/unmaps
>fast too. We've discussed schemes to make this more efficient by doing
>the IOMMU operations at grant map time (where they can be easily
>batched) rather than at dma_map time. We've talked about using a

Agree.

>kmap-style area of physical address space to cycle the mappings through
>to avoid having to do so many synchronous invalidates (at the expense of
>allowing a driver domain to be able to DMA to a page for a little longer
>than it strictly ought to).

Could you elaborate a bit on how a kmap-style area helps here? The key
point is whether the frequency of p2m mapping updates can be reduced...

Thanks,
Kevin 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel