To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] swiotlb=force in Konrad's xen-pcifront-0.8.2 pvops domU kernel with PCI passthrough
From: Dante Cinco <dantecinco@xxxxxxxxx>
Date: Wed, 17 Nov 2010 17:09:24 -0800
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <20101116201349.GA18315@xxxxxxxxxxxx>
References: <20101112165541.GA10339@xxxxxxxxxxxx> <EB4C61A1A2501842A04B573FE42B14D601374FBFD2@xxxxxxxxxxxxxxxxx> <20101112223333.GD26189@xxxxxxxxxxxx> <AANLkTi=H6r2=-zJE+6eCtP4VXacYhd_e47+KRW5vdwjS@xxxxxxxxxxxxxx> <20101116185748.GA11549@xxxxxxxxxxxx> <AANLkTikw8reKXwd9CcXc3qqHuXKjbMEatAVfn19uwzs3@xxxxxxxxxxxxxx> <20101116201349.GA18315@xxxxxxxxxxxx>
On Tue, Nov 16, 2010 at 12:15 PM, Konrad Rzeszutek Wilk
<konrad.wilk@xxxxxxxxxx> wrote:
>> > Or is the issue that when you write the DMA address to your HBA
>> > register, the HBA register can _only_ deal with 32-bit values (4 bytes)?
>>
>> The HBA register that takes the address returned by pci_map_single
>> is limited to a 32-bit value.
>>
>> > In which case the PCI device seems to be limited to addressing only up to 
>> > 4GB, right?
>>
>> The HBA has some 32-bit registers and some that are 45-bit.
>
> Ugh. So, can you set up PCI coherent DMA pools at startup for the 32-bit
> registers? Then set the pci_dma_mask to 45 bits and use pci_map_single for
> all the others.
>>
>> >
>> >> returned 32 bits without explicitly setting the DMA mask. Once we set
>> >> the mask to 32 bits using pci_set_dma_mask, the NMIs stopped. However
>> >> with iommu=soft (and no more swiotlb=force), we're still stuck with
>> >> the abysmal I/O performance (same as when we had swiotlb=force).
>> >
>> > Right, that is expected.
>>
>> So with iommu=soft, all I/Os have to go through Xen-SWIOTLB, which
>> explains why we're seeing the abysmal I/O performance, right?
>
> You are oversimplifying it. You are seeing abysmal I/O performance because
> you are doing bounce buffering. You can fix this by having the driver
> allocate a 32-bit pool at startup and use it just for the
> HBA registers that can only do 32-bit, and then for the rest use
> pci_map_single with a 45-bit DMA mask.
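
For my own notes, here is how I read that suggestion -- a minimal
sketch against a 2.6.36-era kernel, where my_probe, hba_pool and
MY_REG_BUF_SIZE are made-up names for illustration, not code from
our actual driver:

#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>

#define MY_REG_BUF_SIZE 64	/* hypothetical per-command buffer size */

static struct pci_pool *hba_pool;	/* 32-bit-addressable pool */

static int my_probe(struct pci_dev *pdev)
{
	int err;

	/* Streaming (pci_map_single) mappings may land anywhere
	 * below 2^45, so the data path never bounces. */
	err = pci_set_dma_mask(pdev, DMA_BIT_MASK(45));
	if (err)
		return err;

	/* Coherent allocations -- i.e. the pool below -- must stay
	 * below 4GB so their bus addresses fit the 32-bit registers. */
	err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
	if (err)
		return err;

	/* Pre-allocate the pool at startup, as suggested. */
	hba_pool = pci_pool_create("hba32", pdev, MY_REG_BUF_SIZE, 8, 0);
	if (!hba_pool)
		return -ENOMEM;

	return 0;
}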

I wanted to confirm that bounce buffering was indeed occurring, so I
modified swiotlb.c in the kernel and added printks in the following
functions:
swiotlb_bounce
swiotlb_tbl_map_single
swiotlb_tbl_unmap_single
Sure enough, all three were being called five times per I/O. We took your
suggestion and replaced pci_map_single with pci_pool_alloc. The
swiotlb calls were gone, but I/O performance improved only 6% (from
29k IOPS to 31k IOPS), which is still abysmal.
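
Concretely, the per-I/O change was along these lines -- again a sketch
using the same made-up names as above (hba_pool, plus a hypothetical
hba_reg), not our actual driver source:

#include <linux/pci.h>
#include <linux/dmapool.h>
#include <linux/io.h>

/* Hypothetical submit path; hba_reg would be an ioremap()ed register. */
static int my_submit(struct pci_dev *pdev, void __iomem *hba_reg)
{
	dma_addr_t bus;
	void *buf;

	/* Before: pci_map_single() on a buffer above 4GB forced a
	 * swiotlb bounce (and our printks fired) on every transfer.
	 * After: carve the buffer out of the 32-bit pool instead, so
	 * the bus address always fits the 32-bit HBA register. */
	buf = pci_pool_alloc(hba_pool, GFP_ATOMIC, &bus);
	if (!buf)
		return -ENOMEM;

	writel(lower_32_bits(bus), hba_reg);
	/* ... wait for completion ... */
	pci_pool_free(hba_pool, buf, bus);
	return 0;
}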

Any suggestions on where to look next? I have one question about the
P2M array: does the P2M lookup occur on every DMA, or just during
allocation? What I'm getting at is this: is the Xen-SWIOTLB a central
resource that could be a bottleneck?
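
(For reference, this is my rough mental model of the per-map path -- a
simplified sketch of the pvops code as I understand it, not the actual
kernel source:)

/* Every streaming map translates phys -> machine through the P2M
 * array before deciding whether to bounce. Sketch only. */
static dma_addr_t sketch_xen_phys_to_bus(phys_addr_t paddr)
{
	unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));	/* P2M lookup */

	return ((dma_addr_t)mfn << PAGE_SHIFT) | (paddr & ~PAGE_MASK);
}

/* xen_swiotlb_map_page(), roughly:
 *	dev_addr = sketch_xen_phys_to_bus(phys);
 *	if (the device can reach dev_addr)
 *		return dev_addr;	// just the lookup, no bounce
 *	else
 *		bounce via swiotlb_tbl_map_single();	// the slow path
 */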

>
>>
>> Is it true then that an HVM domU kernel with PCI passthrough does
>> not use Xen-SWIOTLB and therefore gets better performance?
>
> Yes and no.
>
> If you allocate more than 4GB to your HVM guest, you are going to
> hit the same issues with the bounce buffer.
>
> If you give your guest less than 4GB, there is no SWIOTLB running in the guest,
> and QEMU along with the hypervisor end up using the hardware IOMMU (currently
> the Xen hypervisor supports AMD-Vi and Intel VT-d). In your case it is VT-d,
> at which point VT-d will remap your GMFNs to MFNs, and VT-d will
> be responsible for translating the DMA addresses that the PCI card
> tries to access into the real MFNs.
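
If I follow, that under-4GB HVM case corresponds to a guest config
along these lines -- an illustrative xm config fragment with made-up
values, not our actual setup:

# HVM guest with < 4GB RAM and a passed-through PCI device; in this
# layout there is no SWIOTLB in the guest and VT-d does the
# GMFN -> MFN translation in hardware.
builder = 'hvm'
memory  = 3072                # under 4GB, so no bounce buffering
pci     = [ '04:00.0' ]       # hypothetical BDF of the HBA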

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
