Re: [Xen-devel] [RFC Patch] Support for making an E820 PCI hole in toolstack (xl + xm)

To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] [RFC Patch] Support for making an E820 PCI hole in toolstack (xl + xm)
From: Keir Fraser <keir@xxxxxxx>
Date: Tue, 16 Nov 2010 09:52:41 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>, "bruce.edge@xxxxxxxxx" <bruce.edge@xxxxxxxxx>, Gianni Tedesco <gianni.tedesco@xxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
On 16/11/2010 09:26, "Ian Campbell" <Ian.Campbell@xxxxxxxxxx> wrote:

>>> We do make the PTE that refer to physical devices to be the DOM_IO
>>> domain..
>> 
>> I think Xen will sort that out for itself when presented with a
>> hardware/device mfn.
> 
> My main concern would be that the save/restore code will canonicalise
> all the MFNs in the page tables back into PFNs and then convert back to
> MFNs on the other side, which is likely to go pretty wrong on one end or
> the other unless the save/restore code is aware of which MFNs are device
> MFNs and which are actual memory. I'm not sure there is any way it can
> tell.
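
To make the concern concrete, here is a minimal sketch of that save-side
canonicalisation; mfn_to_pfn_table and is_ram_mfn are hypothetical names
standing in for the real xc_domain_save machinery, which has no such check
today:

#include <stdint.h>
#include <stdbool.h>

#define PTE_PFN_SHIFT  12
#define PTE_FLAGS_MASK 0xfffULL

extern uint64_t mfn_to_pfn_table[];    /* M2P table, maintained by Xen */
extern bool is_ram_mfn(uint64_t mfn);  /* hypothetical: no such helper exists */

/* Rewrite one PTE so that it carries a PFN instead of an MFN. */
static uint64_t canonicalise_pte(uint64_t pte)
{
    uint64_t mfn = pte >> PTE_PFN_SHIFT;

    if (!is_ram_mfn(mfn)) {
        /* A device MFN (e.g. a BAR mapping) has no PFN in the M2P table,
         * so translating it blindly produces garbage on restore: the
         * failure mode described above. */
        return pte; /* or: abort the save */
    }
    return (mfn_to_pfn_table[mfn] << PTE_PFN_SHIFT) | (pte & PTE_FLAGS_MASK);
}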

The right answer is probably to refuse save/restore/migrate when devices are
passed through. It's somewhere between very hard and very nuts to attempt
that in general. For example, even with SR-IOV, we've only been talking
about it so far for NICs, and then in terms of having a Solarflare-like
acceleration abstraction allowing us to step off of SR-IOV for at least the
duration of the critical bit of the save/restore.
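
A sketch of the kind of toolstack guard that implies (the struct and
function names are made up for illustration, not actual xl/libxl code):

#include <stdio.h>

struct domain_config {
    int num_pci_devices;  /* count of passed-through PCI devices */
};

/* Refuse save/restore/migrate for a domain with assigned devices. */
static int check_migratable(const struct domain_config *cfg)
{
    if (cfg->num_pci_devices > 0) {
        fprintf(stderr,
                "refusing s/r/m: %d PCI device(s) passed through\n",
                cfg->num_pci_devices);
        return -1;
    }
    return 0;
}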

A sensible first goal would simply be to support PCI passthrough both
before and after a s/r/m across reasonably heterogeneous hardware, but not
attempt to maintain such a device passthrough *during* the s/r/m.
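
In other words, something like the following flow, where all three helpers
are hypothetical stand-ins for toolstack operations (and the device
reattached on the target may be a different but equivalent one):

extern int pci_detach(int domid, const char *bdf);
extern int migrate(int domid, const char *target_host);
extern int pci_attach(int domid, const char *bdf);

static int migrate_with_passthrough(int domid, const char *bdf,
                                    const char *target)
{
    int rc;

    if ((rc = pci_detach(domid, bdf)))  /* guest runs device-free */
        return rc;
    if ((rc = migrate(domid, target)))  /* ordinary s/r/m, no device MFNs */
        return rc;
    return pci_attach(domid, bdf);      /* equivalent device on the target */
}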

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
