On Fri, 2010-02-19 at 21:15 -0500, Ritu kaur wrote:
> Hi Jeremy,
>
> Thanks for the clarification. However, what I don't understand is this
> (I have read the documents and looked into the driver code): both
> netfront and netback register with xenbus and monitor the "vif"
> interface. From the netback point of view I clearly understand its
> communication and other details, as I see vif<domid>.<intf-id> being
> created in dom0. However, when I look into domU, I do not see any vif
> interface created (I looked with ifconfig and ifconfig -a). Is it
> hidden from the user? In domU, I just see "eth*" interfaces created.
> How do the eth* interfaces interact with netfront?
These *are* the netfront devices. No need to look further.
The "vif" you see in netfront is the _xenbus_ name. It's a building
block of the driver, but it means nothing to the kernel network layer.
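If it helps to see it in code: a rough sketch (not the actual netfront
source; names and error handling simplified) of how a frontend registers
with xenbus under the device type "vif" while handing the kernel network
layer a plain eth<n> device. The driver would be registered at module
init via xenbus_register_frontend(&netfront_driver).

#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <xen/xenbus.h>

/* xenbus matches drivers to devices by type name; this is the "vif"
 * you see in the driver source and under xenstore. */
static const struct xenbus_device_id netfront_ids[] = {
	{ "vif" },
	{ "" }
};

static int netfront_probe(struct xenbus_device *dev,
			  const struct xenbus_device_id *id)
{
	struct net_device *netdev;
	int err;

	/* alloc_etherdev() names the interface "eth%d" by default, so
	 * the guest sees eth0, eth1, ...; the "vif" name never reaches
	 * the kernel network layer. */
	netdev = alloc_etherdev(0 /* private data size */);
	if (!netdev)
		return -ENOMEM;

	err = register_netdev(netdev);
	if (err)
		free_netdev(netdev);
	return err;
}

static struct xenbus_driver netfront_driver = {
	.name  = "vif",
	.ids   = netfront_ids,
	.probe = netfront_probe,
};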
> I looked under lib/modules/linux*/... for any pseudo drivers which
> might interact with eth*, but didn't find any answers. I am completely
> confused. By the way, I am using Debian 4.0 (etch) as the domU.
Network interfaces can have pretty arbitrary names, whether virtual or
not. I guess "eth<n>" in domU is mainly chosen because it gives users
and tools a warm fuzzy feeling: it makes everything look a little more
like a native system would.
As a rule of thumb, eth<n> is what connects each domain to its network
environment, whether that's primarily a virtual one (in domU, via
netfront) or a physical one (in dom0, driving your physical NIC).
The vifs in dom0 are network interfaces. Each is a netback instance.
Each could carry a separate IP, but that's normally not done. Instead,
they are used as the ports of a virtual switch: each is essentially the
local end of a point-to-point link, one vif per interface on each guest.
You should see one or more xenbr<n> devices. These are basically
software switches: each connects all guests on a common virtual
network, and each xenbr<n> also connects to eth<n> as the uplink.
Try 'brctl show'; it should show how all these interfaces are connected.
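On a host with two guests, each with a single interface, the output
might look something like this (bridge ids and interface names here are
made up):

bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.001122334455       no              eth0
                                                        vif1.0
                                                        vif2.0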
Daniel
> Jeremy/Ian, do you have any input on ioctl support?
>
> Thanks
>
>
> On Fri, Feb 19, 2010 at 4:22 PM, Jeremy Fitzhardinge <jeremy@xxxxxxxx>
> wrote:
> On 02/19/2010 02:30 PM, Ritu kaur wrote:
>
> Thanks for the clarification. In our team meeting we
> decided to drop the netback changes to support exclusive
> access and go with the xe command line or XenCenter way
> of doing it (we are using Citrix XenServer). I had a
> couple of follow-up questions related to Xen.
>
> 1. Is it correct that the netfront driver (or any *front
> driver) has to be explicitly integrated or compiled into
> the guest OS? The reason I ask this is:
>
>
> An HVM domain can be completely unmodified, but it will be
> using emulated hardware devices with its normal drivers.
>
>
> a. The documents I have read mention that a guest OS
> can run without any modification; however, if the above
> is true, we have to make sure the guest OSes we use are
> compiled with the relevant *front drivers.
>
>
> An HVM domain can use PV drivers to optimise its IO path by
> bypassing the emulated devices and talking directly to the
> backends. PV domains always use PV drivers (but they've
> already been modified).
>
>
> b. We made some changes to netback and netfront (as
> mentioned in the previous email). When compiling the
> kernel for dom0, it includes both netfront and netback,
> and we assumed that via some mechanism this netfront
> driver would be integrated/installed into guest domains
> when they are installed.
>
>
> No. A dom0 kernel doesn't have much use for frontends.
> They're usually present because a given kernel can run in
> either the dom0 or the domU role.
>
>
> 2. Is all front/back driver communication via
> xenbus only?
>
>
> Xenbus is used to pass small amounts of control/status/config
> information between front and backends. Bulk data transfer is
> usually handled with shared pages containing ring buffers, and
> event channels for event signalling.
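[To make that concrete: a bare-bones sketch of the frontend half of
such a ring, using the generic macros from xen/interface/io/ring.h.
The request/response types are invented for illustration; real drivers
define their own (netif_tx_request, blkif_request, ...).]

#include <linux/types.h>
#include <xen/interface/io/ring.h>
#include <xen/events.h>

struct my_request  { uint32_t id; uint32_t op; };
struct my_response { uint32_t id; int32_t  status; };

/* Generates struct my_sring plus the my_front_ring/my_back_ring types. */
DEFINE_RING_TYPES(my, struct my_request, struct my_response);

static struct my_front_ring front;

/* 'shared' is a single page, granted to the backend ahead of time. */
static void my_ring_init(void *shared)
{
	struct my_sring *sring = shared;

	SHARED_RING_INIT(sring);
	FRONT_RING_INIT(&front, sring, PAGE_SIZE);
}

static void my_send_request(int irq, uint32_t op)
{
	struct my_request *req;
	int notify;

	/* The request lands in the shared page; xenbus never sees it. */
	req = RING_GET_REQUEST(&front, front.req_prod_pvt);
	req->id = front.req_prod_pvt++;
	req->op = op;

	/* Only kick the event channel if the backend needs waking. */
	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&front, notify);
	if (notify)
		notify_remote_via_irq(irq);
}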
>
>
> 3. Supporting ioctl calls. Our driver has ioctl
> support to read/write hardware registers, and one
> solution was to use the PCI passthrough mechanism;
> however, that binds the NIC to a specific domU, and
> we do not want that. We would like multiple users to
> have access to the hardware registers (mainly stats
> and other data) via guest domains, and to be able to
> access them simultaneously. For this, we decided to
> go with a shared-memory/event-channel mechanism
> similar to the front and back drivers. Can you
> please provide some input on this?
>
>
>
> It's hard to make any suggestions without knowing what your
> hardware is or what the use-cases are for these ioctls. Are
> you saying that you want to give multiple domUs direct
> unrestricted (read only?) access to the same set of
> registers? What kind of stats? Do guests need to read them
> at a very high rate, or could they fetch accumulated results
> at a lower rate?
>
> J
>
>
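For what it's worth, a rough sketch of the shared-memory idea from the
thread above, seen from the dom0 side: snapshot the counters into an
ordinary page at a modest rate and give each guest a read-only grant,
so many domUs can poll the same data without ever touching the
registers. All names here (nic_stats, publish_stats, update_stats) are
hypothetical, and the virt-to-frame helper differs between kernel
versions:

#include <linux/gfp.h>
#include <linux/types.h>
#include <xen/grant_table.h>
#include <asm/xen/page.h>	/* virt_to_mfn(); name varies by kernel */

/* Hypothetical layout; whatever counters the hardware exposes. */
struct nic_stats {
	u64 rx_packets, tx_packets;
	u64 rx_errors,  tx_errors;
};

static struct nic_stats *stats_page;	/* one page, shared read-only */

/* Grant one guest read-only access to the stats page; returns the
 * grant reference to advertise (e.g. via xenstore) or -errno. */
static int publish_stats(domid_t guest)
{
	if (!stats_page) {
		stats_page = (void *)get_zeroed_page(GFP_KERNEL);
		if (!stats_page)
			return -ENOMEM;
	}
	/* Read-only: guests can watch the counters but never write,
	 * so simultaneous access from many domUs is safe. */
	return gnttab_grant_foreign_access(guest, virt_to_mfn(stats_page),
					   1 /* readonly */);
}

/* Called at a modest rate (e.g. from a timer) to snapshot the
 * hardware registers into the shared page. */
static void update_stats(const struct nic_stats *hw)
{
	*stats_page = *hw;
}

The guest side would map the grant (e.g. with xenbus_map_ring_valloc())
and simply read the structure; an event channel is only needed if
guests want change notifications instead of polling.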
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel