This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH] Simulates the MSIx table read operation

To: "Liu, Yuan B" <yuan.b.liu@xxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Simulates the MSIx table read operation
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Fri, 6 Aug 2010 11:01:48 -0400
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Delivery-date: Fri, 06 Aug 2010 11:50:41 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <BC00F5384FCFC9499AF06F92E8B78A9E0AFFFF0941@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <BC00F5384FCFC9499AF06F92E8B78A9E0AFFFF0941@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.20 (2009-12-10)
On Wed, Aug 04, 2010 at 10:35:26AM +0800, Liu, Yuan B wrote:
> Hi,
>          This patch simulates the MSI-X table read operation to avoid the read 
> traffic generated by the guest Linux kernel in an environment with multiple 
> guests running a high-interrupt-rate workload. (We tested 24 guests running 
> iperf over a 10Gb link.)
> [Background]
>                    The assumptions an OS makes about the underlying hardware 
> do not always hold in a virtual machine environment. This is particularly 
> true of CPU virtualization: a VCPU can be scheduled out, whereas a physical 
> CPU never is. This breaks corner cases in OS code that was designed with a 
> physical CPU in mind. We have already seen the _lock-holder preemption_ 
> case; this SR-IOV issue is yet another one.
>          [Issue]
>                    The Linux generic IRQ logic for edge-triggered interrupts 
> is written such that, in a high-interrupt-rate environment, if the previous 
> interrupt has not been handled by the time the next one arrives (e.g. because 
> the guest is scheduled out), the guest busily masks and unmasks the interrupt 
> during the 'writing EOI' period.
> Each mask/unmask operation is followed by a read, which flushes the preceding 
> posted PCI write and ensures it has completed. Xen does not handle this 
> corner case: it intercepts only the guest's mask/unmask operations and 
> forwards all other requests (table reads/writes) to qemu.
>                  This special case does not appear under light workloads, but 
> with many guests (e.g. 24) it drives the CPU utilization of Dom0 up to 140% 
> (proportional to the number of guests), which severely limits the scalability 
> and performance of the virtualization technology.
>        [Effect]
>                  This patch emulates the read operation inside Xen. Testing 
> showed that the abnormal MMIO read operations are eliminated completely 
> while iperf runs under heavy load; Dom0 CPU utilization dropped to 60% in 
> my test.

I am having a hard time understanding this.  Is the issue here that
read/write of the MSI-X table is being done in QEMU, and it is much
better to do so in the hypervisor, which already traps the mask/unmask
operation, so that QEMU is not overwhelmed by having to do this?

With this patch in place, wouldn't QEMU still do the read operation?

> Thanks,
> Yuan

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
