[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [Xen-users] Sharing PCI devices



> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> Didier Trosset
> Sent: 23 May 2007 13:21
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Sharing PCI devices
> 
> 
> I'd like to know if there is a prefered way to go for sharing 
> a PCI device 
> between a couple of domUs. Is it better going para-virtualized, or 
> full-virtualized ?

It is not (currently) possible to use hardware PCI devices inside an HVM
domain without major changes to the driver (and I'm not even sure the
hypercall interface supports the necessary operations as it stands today
- but for certain, the driver needs to be AWARE that it's running in a
virtual machine rather than on a regular "bare-metal" OS). 

This is basically because Xen hides the actual layout of the guest's
memory from the guest. When the guest THINKS it knows a physical address
(which it must pass to a PCI device for its bus-mastering operations),
it cannot give the right information - it just repeats the same lies
that Xen told it. This leads to completely incorrect memory accesses
from the device, which in turn leads to major mishaps in the device's
operation (e.g. it sends the wrong data out on the line, reads "garbage"
instructions from memory, or overwrites the wrong area of memory with
its data [network packet, disk data or whatever it may be]). In the end,
you'll probably end up with a VERY corrupted system that behaves nothing
like it should. 
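To make the address problem concrete, here is a tiny sketch (hypothetical Python, not real Xen code; the addresses and the p2m table are made up) of why a device doing DMA with a guest "physical" address lands in the wrong machine memory:

```python
# Machine (host) memory, one entry per "frame" for simplicity.
machine_memory = {0x1000: b"dom0 kernel", 0x2000: b"guest buffer"}

# Xen maps the guest's pseudo-physical frame 0x1000 to machine
# frame 0x2000; the guest never sees this table.
p2m = {0x1000: 0x2000}

def device_dma_write(machine_addr, data):
    """The device bus-masters directly on MACHINE addresses."""
    machine_memory[machine_addr] = data

# The guest driver programs the device with the address it *thinks*
# is physical (0x1000) -- so the device clobbers dom0's memory.
guest_addr = 0x1000
device_dma_write(guest_addr, b"packet")
assert machine_memory[0x1000] == b"packet"        # dom0 data overwritten!
assert machine_memory[0x2000] == b"guest buffer"  # guest buffer untouched

# A para-virtual (Xen-aware) driver translates first, so the DMA
# lands in the guest's actual buffer.
device_dma_write(p2m[guest_addr], b"packet")
assert machine_memory[0x2000] == b"packet"
```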

In the future, hardware with an IOMMU (I/O Memory Management Unit) will
be able to "redirect" the address the guest believes it is using to the
actual physical location known by Xen. But this will not happen very
soon. 

So the answer here is that you NEED to have a Para-virtual kernel, at
least until IOMMU is available. 
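The IOMMU idea, reduced to its essence (again a hypothetical sketch, not any real hardware interface): the remapping table sits between the device and memory, so even an unmodified driver's DMA ends up in the right place.

```python
machine_memory = {0x2000: b""}

# Xen programs the IOMMU with the guest's pseudo-physical -> machine
# mapping; the guest driver needs no changes at all.
iommu_table = {0x1000: 0x2000}

def device_dma_write(bus_addr, data):
    # Every device access is remapped by the IOMMU in hardware.
    machine_memory[iommu_table[bus_addr]] = data

# The driver still uses the address it believes is physical ...
device_dma_write(0x1000, b"packet")
# ... but the write lands in the correct machine frame.
assert machine_memory[0x2000] == b"packet"
```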


There is also a hardware side to this:
Almost every PCI device on the market has very complex state, consisting
of many different registers and internal states that may not even be
visible from the outside, which means that ONE driver/OS must be in
complete control of the device. Take an IDE controller as an example.
The IDE controller has several registers that must be written to start a
sector read/write (or other operation):
- Sector number
- Which disk (master/slave)
- Some control flags (LBA mode or not, and such)
- Command (read, write or something like that)
Let's say we have two OSes (A and B) sharing the same device to access
two different disks (A = disk 0, B = disk 1). OS A wants to write sector
14 with some data; OS B wants to read sector 54.
OS A: Set sector number = 14
OS A: Set disk to 0
OS A: Set control flags
---- Interrupt -> switch to OS B
OS B: Set sector number = 54
OS B: Set disk to 1
OS B: Set control flags
OS B: Set command to Read
---- Wait for command to complete... 
OS B: Read out 512 bytes
... Some other stuff goes on here for a while... 
---- OS B idle -> switch to OS A
OS A: Set command to Write
!!!! OOPS! We're writing over sector 54 on disk 1, not sector 14 on
disk 0 !!!!
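The interleaving above can be replayed as a small simulation (hypothetical Python, with the controller reduced to just the registers mentioned), showing that OS A's write ends up targeting OS B's disk and sector:

```python
class IdeController:
    """Shared register state -- the crux of the problem."""
    def __init__(self):
        self.sector = None
        self.disk = None
        self.log = []
    def set_sector(self, n): self.sector = n
    def set_disk(self, d):   self.disk = d
    def command(self, op):
        # The controller acts on whatever the registers hold NOW.
        self.log.append((op, self.disk, self.sector))

ide = IdeController()

# OS A starts setting up a write of sector 14 on disk 0 ...
ide.set_sector(14)
ide.set_disk(0)
# ---- interrupt: switch to OS B, which does a full read sequence
ide.set_sector(54)
ide.set_disk(1)
ide.command("read")
# ---- back to OS A, which only has the command left to issue
ide.command("write")

# OS A's write used B's register values: disk 1, sector 54.
assert ide.log[-1] == ("write", 1, 54)
```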

> 
> For instance the PCI device I'd like to share is made 
> in-house. I am writing 
> the device driver for Linux, and I'd like to run tests on the 
> devices from 
> different distributions/OSes.

Well, you can't SHARE a PCI device. You can hide it from Dom0 and assign
it to some other domain when that domain is started. 

It's certainly feasible to set up a set of virtual machines that are
each assigned the PCI device; as long as you shut one domain down before
you start another, this should work. 
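Roughly, the hide-and-assign setup looks like this (a sketch; the exact syntax depends on your Xen version, and the device address 0000:01:02.0 is just a placeholder for your own device's BDF as shown by lspci):

```shell
# In dom0: bind the device to the pciback driver so no dom0 driver
# claims it (with pciback built as a module).
echo 0000:01:02.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:01:02.0 > /sys/bus/pci/drivers/pciback/bind
```

Then whichever domU you start gets the device via a line like `pci = [ '0000:01:02.0' ]` in its config file; move that line (or keep it in each config but boot only one such domain at a time) to hand the device around.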

However, bear in mind that if you're testing the driver itself, you
won't gain that much value from running it in a para-virtual environment
when the customer will run it on bare metal, as drivers work differently
in para-virtual and bare-metal setups. Also, the traditional way to run
para-virtual domains is to use the standard Xen build of Linux - so your
domains use the same actual kernel whichever distribution you run - not
quite the same thing as running your driver inside the standard kernel
that came with "RHEL 5". 

Of course, for testing that the user-interface or user-level control
applications work together with your hardware under distro X, this will
work fine. 

And of course, several of the latest distributions come with
para-virtual kernels that you could use for testing purposes on Xen. 

> 
> Is there any place I could get information (other than Xen 
> Interface Manual) 
> or hands-on examples for writing the code needed for the 
> backend/frontend 
> device that I would have to write.

Frontend/backend drivers would be another way to share the device, but
that essentially hides the actual device from the guest OS and forwards
the guest's requests to Domain 0, so you're testing your device under
different conditions than in the bare-metal situation. Again, that works
OK for user-level applications you want to test, but not for how the
user would normally use the device in their daily work (most likely, at
least). 

I don't have any helping hints on how to write frontend/backend driver
pairs. 
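For what it's worth, the general shape of a split driver can be sketched conceptually (this is NOT the real Xen API - just an illustration of the request-forwarding idea, with a plain queue standing in for the shared-memory ring and event channel):

```python
from collections import deque

ring = deque()  # stands in for a shared ring between the domains

def frontend_submit(request):
    # Guest side: no hardware access at all, just queue the request.
    ring.append(request)

def backend_service(real_device):
    # Dom0 side: drain the ring and drive the actual hardware.
    responses = []
    while ring:
        req = ring.popleft()
        responses.append(real_device(req))
    return responses

frontend_submit({"op": "read", "sector": 54})
assert backend_service(lambda r: ("done", r["op"])) == [("done", "read")]
```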

--
Mats
> 
> Thanks for any help
> Didier
> 
> -- 
> Didier Trosset-Moreau
> Agilent Technologies
> Geneva, Switzerland
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
> 
> 
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

