
[Xen-devel] Does the Xen hypervisor override the O_DIRECT setting of the Linux 2.6 kernel?


I'm doing some I/O performance analysis on Xen and noticed something odd when comparing the I/O performance of Xen domain0 against native Linux. For sequential reads (whether small or large request sizes), native Linux outperforms Xen domain0, which is expected. However, for sequential writes with small request sizes (512 B and 1 KB), Xen domain0 consistently outperforms native Linux by a wide margin. I ran the tests several times and the results are quite consistent.

The performance data was collected on 8 SAS drives (used as physical drives), with IOMeter as the benchmark tool; the latest IOMeter version uses O_DIRECT. The Linux 2.6 kernel supports O_DIRECT, which makes all I/O requests bypass the buffer cache. The upside of O_DIRECT is that it reduces CPU utilization and cache pollution. The downside is that it forces all I/O requests to be synchronous and prevents I/O coalescing, so sequential writes of small requests are hurt the most. My theory is that the Xen hypervisor overrides this O_DIRECT setting for domain0, perhaps favoring throughput over CPU and FSB utilization, which would explain how Xen domain0 can show better small sequential-write performance than native Linux.

Is this correct?



Xen-devel mailing list


