WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

Re: [Xen-devel] Shared memory and event channel

To: Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Subject: Re: [Xen-devel] Shared memory and event channel
From: Ritu kaur <ritu.kaur.us@xxxxxxxxx>
Date: Mon, 22 Feb 2010 14:16:30 -0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 22 Feb 2010 14:17:11 -0800
In-reply-to: <1266874463.27288.57.camel@xxxxxxxxxxxxxxxxxxxxxxx>
References: <29b32d341002211058l7e283336pa4fdfd0dc0b7124b@xxxxxxxxxxxxxx> <1266787199.24577.18.camel@xxxxxxxxxxxxxxxxxxxxxxx> <29b32d341002211533k4956a129ifff18281cfa92e41@xxxxxxxxxxxxxx> <1266825344.4996.183.camel@xxxxxxxxxxxxxxxxxxx> <29b32d341002220936q2f6f3cdaif3cbb766d1e644d1@xxxxxxxxxxxxxx> <1266874463.27288.57.camel@xxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi Daniel,

Please see inline...

On Mon, Feb 22, 2010 at 1:34 PM, Daniel Stodden <daniel.stodden@xxxxxxxxxx> wrote:
On Mon, 2010-02-22 at 12:36 -0500, Ritu kaur wrote:

>
>         I'm not sure right now how easy the control plane in XCP will
>         make it
>         without other domU's notice, but maybe consider something
>         like:
>
>          1. Take the physical NIC out of the virtual network.
>          2. Take the driver down.
>          3. Pass access to the NIC to a domU.
>          4. Let domU do the unspeakable.
>          5.-7. Revert 3,2,1 to normal.
>
>         This won't mess with the PV drivers. Get PCI passthrough
>         to work for
>         3 and 4 and you save yourself a tedious ring protocol design.
>         If not,
>         consider doing the hardware programming in dom0, because
>         there's not
>         much left for domU anyway.
>
>         You need a split toolstack to get the dom0 network control
>         steps on
>         behalf of domU done. Might be just a scripted agent,
>         accessible to domU
>         via a couple RPCs. Could also turn out to be as simple as
>         talking
>         through the primary vif, because the connection between domU
>         and dom0
>         could remain unaffected.
>
>
>
> PCI passthrough is via config changes and no code changes. If that's
> the case, I am not sure how it would solve multiple domU accesses.

My understanding after catching up a little on the past of this thread
was that you want the network controller in some maintenance mode. Is
this correct?
 
All I need is to access NIC registers from the domUs (the network controller will still be working normally). Using PCI passthrough solves the problem for a single domU; however, it doesn't solve the case where multiple domUs want to read NIC registers (e.g. statistics).

To get it there you will need to temporarily remove it from the virtual
network topology.

The PCI passthrough mode might solve your second problem, which is how
the domU is supposed to access the device once it's been pulled off the
data path. 
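For concreteness, a passthrough setup along those lines might look roughly like the sketch below (the BDF 0000:03:00.0 and the pciback sysfs steps shown are illustrative; the exact module name and commands depend on the dom0 kernel and toolstack in use):

```
# dom0: detach the NIC from its driver and hand it to pciback
modprobe xen-pciback
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind

# domU config file: assign the device to the guest
# pci = [ '0000:03:00.0' ]
```

Once the guest boots with the device assigned, its own driver sees the real BARs and can program the hardware directly.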

> For the second paragraph, do you have recommended readings? Frankly, I
> don't completely understand the solution; any pointers appreciated.

> In addition, registers in the NIC are memory mapped (ioremap is used,
> and in ioctls memcpy_toio and memcpy_fromio are used to write/read
> registers), and I wanted to know if it's possible to map memory from
> dom0 into the domUs?

Yes. This is the third problem, which is how to program a device. I'd
recommend "Linux Device Drivers" on that subject. There are also free
books like http://tldp.org/LDP/tlk/tlk-title.html. The examples are
likely outdated, but the concepts remain.

If the device is memory mapped, that doesn't mean it's in memory. It
means it's in the machine memory address space. The difference should
become clear once you understand your driver.
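To illustrate the distinction: in a driver, register access usually looks something like the sketch below. The BAR holds bus/machine addresses, not RAM, and ioremap/pci_iomap only creates a kernel virtual mapping onto that range (the register offset here is invented):

```c
#include <linux/pci.h>
#include <linux/io.h>

#define STATS_REG_OFFSET 0x400  /* hypothetical statistics register */

static u32 read_stats(struct pci_dev *pdev)
{
        void __iomem *regs;
        u32 val;

        /* BAR 0 contains machine addresses, not memory; pci_iomap
         * maps them into the kernel's virtual address space. */
        regs = pci_iomap(pdev, 0, 0);
        if (!regs)
                return 0;
        val = ioread32(regs + STATS_REG_OFFSET);
        pci_iounmap(pdev, regs);
        return val;
}
```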

Is this the reason why you are so concerned about the memory sharing
mechanism?

No, not really. I wanted to use shared memory between doms as a solution for multiple-domU access (since PCI passthrough doesn't solve it).

The clarification I wanted here (given that NIC registers are memory mapped): can I take the machine memory address space (which is mapped in dom0) and remap it into the domUs, so that I can get multiple-domU access?

To summarize,

1. The PCI passthrough mechanism works for a single domU.
2. Shared memory rings between doms as a way to get multiple-domU access; not a workable solution, though.
3. Take the mapped machine address in dom0 and remap it into the domUs (just another thought, not sure it works); I wanted clarification here.
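On point 2, for what it's worth: the standard Xen primitive for sharing memory between domains is the grant table. A very rough dom0-side sketch (kernel code; error handling omitted, and the helper name share_stats_page is made up for illustration):

```c
#include <xen/grant_table.h>

/* Grant one domU read-only access to a page that dom0 periodically
 * fills with register snapshots.  Returns the grant reference the
 * domU needs in order to map the page on its side. */
static int share_stats_page(domid_t remote_domid, void *page)
{
        return gnttab_grant_foreign_access(remote_domid,
                                           virt_to_mfn(page),
                                           1 /* read-only */);
}
```

Note that this shares a dom0-owned RAM page (copied out of the registers by dom0), not the device's MMIO range itself; granting the MMIO machine frames directly to several guests is a different and much hairier question.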

Thanks
 
The good news is now you won't need to bother, that's only
for memory. :)

Daniel



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel