This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Introduction to VirtIO on Xen project

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Introduction to VirtIO on Xen project
From: Wei Liu <liuw@xxxxxxxxx>
Date: Wed, 27 Apr 2011 10:53:31 +0800
Delivery-date: Tue, 26 Apr 2011 19:54:24 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi, all.

I'm Wei Liu, a graduate student from Wuhan University, Hubei, China.
I have been accepted to GSoC 2011 with Xen and am responsible for the
VirtIO on Xen project. It's an honor to be accepted and to get
involved in this wonderful community. I've been doing Xen development
for my lab since late 2009.

As you all know, VirtIO is a generic paravirtualized I/O framework,
currently used mainly by KVM. It should not be too hard to port VirtIO
to Xen. Once that is done, Xen will have access to the Linux kernel's
VirtIO interfaces, and developers will have an alternative way to
deliver PV drivers besides the original ring-buffer flavor. The
project requires:

1. Modifying upstream QEMU, replacing KVM-specific interfaces with
   generic QEMU functions;
2. Modifying Xen / Xen tools to support VirtIO;
3. Modifying the Linux kernel's VirtIO interfaces.

We must take two usage scenarios into consideration:

1. PV-on-HVM;
2. Normal PV.

These two scenarios require work on different sets of functions:

1. XenBus vs. virtual PCI: how the channel between domains is created;
2. PV vs. HVM: how events are handled.

Most of the VirtIO code will be left as it is, but the notification
mechanism should be replaced with Xen's event channel. The same
applies to the QEMU port.

In the PV-on-HVM case, QEMU needs to use event channels to send and
receive notifications, and the foreign-mapping / grant-table functions
in libxc / libxl to map memory pages. A virtual PCI bus will be used
to establish the channel between Dom0 and DomU. In this case, little
changes on the Linux kernel side.

In the normal PV case, QEMU likewise needs event channels for
notifications and the foreign-mapping functions in libxc / libxl to
map memory pages. XenBus / Xenstore will be used to establish the
channel between Dom0 and DomU, and the Linux VirtIO driver should use
Xen's event channel as its kick / notify mechanism.

When the port is finished, I will carry out performance tests with
standard tools such as ioperf, netperf and kernbench. The test suites
will be run on five different configurations:

1. Native Linux
2. Xen with PV-on-HVM VirtIO support
3. Xen with normal PV VirtIO support
4. Xen with original PV driver support
5. KVM with VirtIO support

A short report will be written based on the results.

This is a brief introduction to the project. Any comments are welcome.

Best regards
Wei Liu
Twitter: @iliuw
Site: http://liuw.name

Xen-devel mailing list