[Xen-devel] Loopback Performance (Was Re: Disk naming)
Ian Pratt wrote:
> I think I'd prefer not to complicate blkback, unless something's
> fundamentally wrong with the design of the loopback device. Anyone know
> about this? The trick with this kind of thing is avoiding deadlock under
> low memory situations...
I poked through the loopback code and it seems to be doing the
reasonable thing, so I decided to investigate for myself what the
performance issues with the loopback device actually were. My theory was
that the real cost was the double inode lookup: inodes have to be looked
up once in the filesystem on the loopback device, and then again in the
host filesystem that backs the loop file.
To verify, I ran a series of primitive tests with dd. First I baselined
the performance of writing to a large file on the host filesystem
(running dd if=/dev/zero with conv=notrunc so the file wasn't
truncated). Then I attached the same file to a loopback device and ran
the same test writing directly to the loopback device.
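For concreteness, those two runs looked something like this (paths,
block sizes, and counts are illustrative; I don't remember the exact
parameters I used):

  # baseline: sequential write into an existing file on the host filesystem
  dd if=/dev/zero of=/var/images/test.img bs=64k count=16384 conv=notrunc

  # attach the same file to a loop device and write to the device directly
  losetup /dev/loop0 /var/images/test.img
  dd if=/dev/zero of=/dev/loop0 bs=64k count=16384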
I then created a filesystem on the loopback device, mounted it, and ran
the same test on a file within the mount.
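Again roughly (device name and mount point are illustrative):

  # put a filesystem on the loop device, mount it, and write to a file inside it
  mke2fs /dev/loop0
  mount /dev/loop0 /mnt/loop
  dd if=/dev/zero of=/mnt/loop/test.img bs=64k count=16384 conv=notrunc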
The results were what I expected. Writing directly to the loopback
device was equivalent to writing directly to the file (usually faster
actually--I attribute that to buffering). Writing to the file within
the filesystem on the loopback device was significantly slower: roughly
a 70% slowdown.
If my hypothesis is right and the slowdown is caused by the double
inode lookups, then I don't think there's anything we can do in the
blkback driver to help. This is another good reason to use LVM.
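For anyone who wants to avoid the file-backed path, the LVM setup is
straightforward; something along these lines (the volume group, volume,
and guest device names are made up):

  # carve a logical volume out of an existing volume group
  lvcreate -L 4G -n domU-disk vg0

  # then hand it to the guest through blkback as a physical device,
  # e.g. in the domain config file:
  #   disk = [ 'phy:vg0/domU-disk,sda1,w' ]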
This was all pretty primitive, so take it with a grain of salt.
Regards,
Anthony Liguori