This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] [PATCH] Simulates the MSIx table read operation

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH] Simulates the MSIx table read operation
From: "Liu, Yuan B" <yuan.b.liu@xxxxxxxxx>
Date: Wed, 4 Aug 2010 10:35:26 +0800
Accept-language: en-US
Cc: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Delivery-date: Tue, 03 Aug 2010 19:37:26 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcszfbI6aiH3O44dR7u+2xk+QFZw3g==
Thread-topic: [PATCH] Simulates the MSIx table read operation


This patch simulates the MSI-X table read operation in Xen to avoid the MMIO read traffic generated by the guest Linux kernel when many guests run a high-interrupt-rate workload. (We tested 24 guests running iperf over a 10Gb link.)



The assumptions an OS makes about its underlying hardware do not always hold when it runs in a virtual machine. This is particularly visible with CPU virtualization: a VCPU can be scheduled out, whereas the physical CPU an OS was designed for never is. This breaks corner cases in OS code written around assumptions that are only valid on physical CPUs. We have already seen the _lock-holder preemption_ case; this SR-IOV issue is yet another one.


Linux's generic IRQ logic for edge-triggered interrupts is written so that, during the 'writing EOI' window, a subsequent interrupt in a high-rate environment causes the guest to busy-loop masking and unmasking the interrupt if the previous one is not handled immediately (e.g. because the guest has been scheduled out).

Each mask/unmask operation is followed by a read to flush the preceding PCI transactions and ensure the write has taken effect. Xen does not handle this corner case: it intercepts only the guests' mask/unmask operations and forwards all other requests (table reads and writes) to qemu.

This special case does not show up under light workloads, but with many guests (e.g. 24) it drives the CPU utilization of Dom0 up to 140% (proportional to the number of guests), which clearly limits the scalability and performance of virtualization.


This patch emulates the read operation inside Xen. Testing showed that the abnormal MMIO read operations are eliminated completely while iperf runs under heavy load; Dom0 CPU utilization dropped to 60% in my test.




Xen-devel mailing list