WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
Subject: [Xen-devel] Re: Re-using the x86_emulate_memop() to perform MMIO for HVM.
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Thu, 4 May 2006 14:49:24 +0100
Cc: Khoa Huynh <khoa@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 04 May 2006 06:49:37 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <907625E08839C4409CE5768403633E0BA7FC2C@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <907625E08839C4409CE5768403633E0BA7FC2C@xxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> > We need an emulator both in Xen and in the device model. The
> > current split decode-emulate is pretty barking. My plan for
> > now would be to copy x86_emulate.c and plumb it into qemu-dm:
> > so we do duplicate the code but it's actually only one source
> > file to maintain.

> That does indeed sound like a good plan.

> And it sounds like it would work. But isn't the "emulate within Xen"
> case going to have a problem with that? Or do we use the Xen version of
> x86_emulate for the Xen devices (as we can obviously read/write those
> without switching to another context, and thus don't have the problems
> I've been hitting)?

Yes, you'd only invoke qemu-dm for the non-Xen devices.
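
In other words, the top-level dispatch on the faulting MMIO address would be something like the sketch below (the function and range-check names are purely illustrative, not the real hooks):

  /* Purely illustrative: decide who emulates a faulting MMIO access.
   * xen_mmio_range(), xen_emulate_mmio() and send_ioreq_to_qemu() are
   * hypothetical names, not the actual Xen interfaces. */
  static int handle_mmio_fault(struct cpu_user_regs *regs, unsigned long gpa)
  {
      if ( xen_mmio_range(gpa) )
          /* Device modelled inside Xen (e.g. the local APIC): emulate it
           * directly with the in-hypervisor copy of x86_emulate. */
          return xen_emulate_mmio(regs, gpa);

      /* Everything else belongs to qemu-dm: send an ioreq and let the
       * device model run its own copy of the emulator. */
      return send_ioreq_to_qemu(regs, gpa);
  }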

> So what you're essentially saying is: make a soft link from the current
> x86_emulate.[ch] into tools/ioemu/, set up suitable read/write
> functions, and then use that in helper2.c, yes? [Sorry if I'm asking
> obvious questions, but it's usually better to ask first than to have to
> do things twice because you didn't ask...]

Pretty much. As you say below, we also need to provide access to segment bases. But actually most instructions are okay because we know the faulting linear address of one of the memory operands (and usually there is only one).
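
On the qemu-dm side the hookup would look roughly like the sketch below; the ops structure and return values are simplified stand-ins for whatever the copied x86_emulate.h ends up exporting, and the only real calls are qemu's cpu_physical_memory_read/write helpers:

  /* Sketch only: memory callbacks for the copied emulator that go through
   * qemu's physical-memory dispatch (which in turn finds the MMIO handler).
   * The ops type and return codes are stand-ins, not the real interface. */
  struct emul_mem_ops {
      int (*read)(unsigned long, unsigned long *, unsigned int, void *);
      int (*write)(unsigned long, unsigned long, unsigned int, void *);
  };

  static int qemu_mmio_read(unsigned long addr, unsigned long *val,
                            unsigned int bytes, void *ctxt)
  {
      *val = 0;
      /* Little-endian x86, so reading 'bytes' bytes into *val is fine. */
      cpu_physical_memory_read(addr, (uint8_t *)val, bytes);
      return 0;  /* success */
  }

  static int qemu_mmio_write(unsigned long addr, unsigned long val,
                             unsigned int bytes, void *ctxt)
  {
      cpu_physical_memory_write(addr, (uint8_t *)&val, bytes);
      return 0;  /* success */
  }

  static struct emul_mem_ops qemu_mmio_ops = {
      .read  = qemu_mmio_read,
      .write = qemu_mmio_write,
  };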

> That makes life a whole lot simpler, I should think - but for MMX/SSE
> support, it would mean that we need to FXSAVE/FXRSTOR that state around
> this code, right? Won't that be unnecessarily costly?

I think it's in the noise compared with the context-switch cost. However we could save the FPU state on demand, only if it turns out that qemu-dm needs it. FXSAVE will only be needed if the guest has actually been using the FPU. Otherwise we already have the up-to-date FPU state saved in memory.
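
i.e. something along these lines, where the dirty flag and the save area are hypothetical per-vcpu fields standing in for whatever we actually keep:

  /* Sketch: only execute FXSAVE if the guest has touched the FPU since we
   * last saved it; otherwise the in-memory copy is already current.
   * 'fpu_dirtied' and 'fxsave_area' are hypothetical field names. */
  static void sync_fpu_state_for_qemu(struct vcpu *v)
  {
      if ( v->arch.fpu_dirtied )
      {
          /* FXSAVE needs a 512-byte, 16-byte-aligned save area. */
          __asm__ __volatile__ ( "fxsave %0"
                                 : "=m" (v->arch.fxsave_area) );
          v->arch.fpu_dirtied = 0;
      }
  }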

> I guess we should also supply the segment base (and limit?) information
> in this memory block, so that x86_emulate can do things like
> big-real-mode and other segmented operations that may happen at some
> point.

Yes, definitely, and this will require some modifications to the emulator itself. Either an extra block of state passed in to the emulator, or adding a call-out function hook from the emulator to obtain segment bases on demand. I think the former is probably simpler.
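
Concretely, I'm thinking of an extra block the caller fills in from the VMCS/VMCB before invoking the emulator; the layout below is just to illustrate the idea, not a proposed interface:

  /* Illustration of the 'extra block of state' option: the caller loads
   * the guest's segment registers out of the VMCS/VMCB into this block,
   * and the emulator consults it whenever it needs a base (or limit) for
   * an effective-address calculation.  Names are illustrative only. */
  struct seg_state {
      unsigned long base;      /* segment base address  */
      unsigned long limit;     /* segment limit         */
      unsigned int  attr;      /* access rights / flags */
  };

  struct emul_extra_state {
      struct cpu_user_regs *regs;    /* GPRs, rIP, rFLAGS          */
      struct seg_state      seg[6];  /* CS, DS, ES, FS, GS and SS  */
  };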

> Anything else we should think of passing along as well, "while we're at it"?

No, I think that's it.

> Yes, I saw that discussion. I'm not sure it's much help to do that
> (for AMD at least; Intel has a problem because they don't support
> paged-real-mode, which of course is a bit of a nuisance for them...)

Well, sometimes device register accesses happen in clusters, and it would be nice to amortise the cost of switching to qemu-dm across emulation of an entire cluster of accesses. It might well be a win in many cases, but it's also a fair amount of work, I think.
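
The sort of thing that would mean is batching a run of accesses into one buffer and kicking qemu-dm once for the lot; an entirely hypothetical layout, just to show the idea:

  /* Hypothetical batched-request buffer: queue a cluster of device
   * register accesses and hand them to qemu-dm in one context switch. */
  #define MMIO_BATCH_MAX 8

  struct mmio_req {
      unsigned long addr;      /* guest-physical address                  */
      unsigned long data;      /* value to write, or slot for read result */
      unsigned int  bytes;     /* access size                             */
      unsigned int  is_write;  /* 1 = write, 0 = read                     */
  };

  struct mmio_batch {
      unsigned int    nr;                   /* requests queued so far */
      struct mmio_req req[MMIO_BATCH_MAX];  /* the cluster itself     */
  };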

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
