RE: [Xen-devel] [PATCH 2/2] Add VMDq support to ixgbe
Anna,
Since you have two devices (eth0 and eth1), you need to pass a comma-separated
list of values for the VMDQ parameter, one for each device.
Try using "VMDQ=8,8".
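Concretely, something like the following should work (a sketch, assuming root access, that no interfaces bound to ixgbe are in use, and that the values are applied in probe order, i.e. 0000:02:00.0 then 0000:02:00.1 as in your logs):

```shell
# Unload ixgbe first if it is already loaded
modprobe -r ixgbe
# One VMDQ value per port, comma-separated, in probe order
modprobe ixgbe VMDQ=8,8
```

After reloading, dmesg should report multiqueue enabled for both ports rather than only the first.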
Regards
Renato
> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> Fischer, Anna
> Sent: Friday, March 20, 2009 3:02 PM
> To: Mitch Williams
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] [PATCH 2/2] Add VMDq support to ixgbe
>
> > Subject: [Xen-devel] [PATCH 2/2] Add VMDq support to ixgbe
> >
> > This patch adds experimental VMDq support (AKA Netchannel2
> vmq) to the
> > ixgbe driver. This applies to the Netchannel2 tree, and
> should NOT be
> > applied to the "normal" development tree.
> >
> > To enable VMDq functionality, load the driver with the command-line
> > parameter VMDQ=<num queues>, as in:
> >
> > $ modprobe ixgbe VMDQ=8
>
> I have installed the latest netchannel2 tree. If I load ixgbe
> with modprobe VMDQ=x then it seems as if only the first NIC
> port has VMDQ enabled while the second stays disabled, or
> only enabled with 2 RX queues and 1 TX queue. Is this
> expected? Is it not possible to enable 16 queues on both NIC
> ports? I have listed some logs below.
>
>
> ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver -
> version 1.3.56.5-vmq-NAPI Copyright (c) 1999-2008 Intel Corporation.
> bus pci: add driver ixgbe
> pci: Matched Device 0000:02:00.0 with Driver ixgbe
> PCI: Enabling device 0000:02:00.0 (0100 -> 0103)
> ACPI: PCI Interrupt 0000:02:00.0[A] -> GSI 16 (level, low) -> IRQ 16
> PCI: Enabling bus mastering for device 0000:02:00.0
> PCI: Setting latency timer of device 0000:02:00.0 to 64
> ixgbe: Virtual Machine Device Queues (VMDQ) set to 16
> ixgbe: packet split disabled for Xen VMDQ
> ixgbe: 0000:02:00.0: ixgbe_init_interrupt_scheme: Multiqueue
> Enabled: Rx Queue count = 16, Tx Queue count = 16
> ixgbe: eth0: ixgbe_probe: (PCI Express:2.5Gb/s:Width x8)
> ixgbe: eth0: ixgbe_probe: MAC: 1, PHY: 0
> ixgbe: eth0: ixgbe_probe: Internal LRO is enabled
> ixgbe: eth0: ixgbe_probe: Intel(R) 10 Gigabit Network
> Connection bound device '0000:02:00.0' to driver 'ixgbe'
> pci: Bound Device 0000:02:00.0 to Driver ixgbe
> pci: Matched Device 0000:02:00.1 with Driver ixgbe
> PCI: Enabling device 0000:02:00.1 (0100 -> 0103)
> ACPI: PCI Interrupt 0000:02:00.1[B] -> GSI 17 (level, low) -> IRQ 20
> PCI: Enabling bus mastering for device 0000:02:00.1
> PCI: Setting latency timer of device 0000:02:00.1 to 64
> ixgbe: 0000:02:00.1: ixgbe_init_interrupt_scheme: Multiqueue
> Disabled: Rx Queue count = 1, Tx Queue count = 1
> ixgbe: eth1: ixgbe_probe: (PCI Express:2.5Gb/s:Width x8)
> ixgbe: eth1: ixgbe_probe: MAC: 1, PHY: 0
> ixgbe: eth1: ixgbe_probe: Internal LRO is enabled
> ixgbe: eth1: ixgbe_probe: Intel(R) 10 Gigabit Network
> Connection bound device '0000:02:00.1' to driver 'ixgbe'
> pci: Bound Device 0000:02:00.1 to Driver ixgbe
>
>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel