Re: [Xen-devel] Solution for problems with HyperSCSI and vbds ?
> Thinking about just 3 DOM0 HyperSCSI clients connecting
> to the HyperSCSI server directly feels somehow more comfortable
> (e.g. much easier administration, fewer points of failure).
> The 3 DOM0s in this example can then export the HyperSCSI
> device(s) via whatever means to the domains > 0.
Of course, the absolutely proper solution is to put HyperSCSI
into Xen, so that Xen's block device interface could be used by
guest OSes to talk directly with the remote disk.
However, I wouldn't want to contemplate putting a big gob of code
like HyperSCSI into Xen until we have implemented the plan for
ring-1 loadable module support. That would give us a
shared-memory block device interface between guest OSes and the
HyperSCSI driver (also running in ring 1); the HyperSCSI driver
would in turn talk to the network interface, again over shared
memory.
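As a rough sketch of the shape such a shared-memory interface
might take (the struct names and layout below are made up purely
for illustration -- they don't correspond to anything in the
current tree), one shared page of request and response rings is
about all that's needed:

    /* Illustrative sketch only -- not the real Xen block interface.
     * A guest OS fills in requests; the ring-1 HyperSCSI module
     * consumes them and posts responses, all via one shared page. */

    #include <stdint.h>

    #define BLK_RING_SIZE 64            /* power of two */

    struct blk_request {
        uint64_t sector;         /* start sector on the remote disk */
        uint64_t buffer_phys;    /* machine address of data buffer  */
        uint16_t nr_sectors;     /* transfer length                 */
        uint8_t  write;          /* 0 = read, 1 = write             */
        uint8_t  pad;
        uint32_t id;             /* echoed back in the response     */
    };

    struct blk_response {
        uint32_t id;             /* matches blk_request.id          */
        int32_t  status;         /* 0 on success, negative on error */
    };

    struct blk_shared_ring {
        volatile uint32_t req_prod, req_cons;  /* guest produces requests   */
        volatile uint32_t rsp_prod, rsp_cons;  /* driver produces responses */
        struct blk_request  req[BLK_RING_SIZE];
        struct blk_response rsp[BLK_RING_SIZE];
    };

The guest would advance req_prod as it queues work and the ring-1
driver would advance rsp_prod as it completes it, so both sides
run asynchronously without needing a trap per request.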
> Thanks a lot for pointing me to this solution!
> I will look into it over the next few days (especially the performance ;-).
I'm looking forward to hearing how you get on.
> Apropos:
> Did you ever benchmark the average or maximum
> throughput of your VFR implementation in Xen?
The throughput between domains and the real network interface is
_very_ good, easily able to saturate a 1Gb/s NIC, probably good
for rather more.
However, I'm afraid to say that we recently discovered that our
inter-domain performance is pretty abysmal -- worse than our
performance over the real network, which is simultaneously
amusing and sad.
The problem is that we currently don't get the asynchronous
`pipelining' when doing inter-domain networking that gives good
performance when going to an external interface: because the
communication is synchronous, we don't get the back pressure
that lets a queue build up as it would with a real NIC. The net
result is that we end up bouncing in and out of Xen several
times for each packet.
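One way of getting that batching (purely a sketch with made-up
names, not the current VFR code) is to notify the other domain
only when the shared transmit ring goes from empty to non-empty:

    #include <stdint.h>

    #define NET_RING_SIZE 256             /* power of two */

    struct pkt_desc {
        uint64_t buffer_phys;    /* machine address of the frame data */
        uint16_t len;            /* frame length in bytes             */
    };

    struct net_tx_ring {
        volatile uint32_t prod;  /* advanced by the sending domain    */
        volatile uint32_t cons;  /* advanced by the receiving domain  */
        struct pkt_desc ring[NET_RING_SIZE];
    };

    /* Hypothetical stand-in for whatever event mechanism ends up
     * poking the peer domain. */
    extern void notify_peer_domain(void);

    static int tx_enqueue(struct net_tx_ring *tx, uint64_t buf,
                          uint16_t len)
    {
        uint32_t prod = tx->prod;

        if (prod - tx->cons == NET_RING_SIZE)
            return -1;           /* ring full: natural back pressure  */

        tx->ring[prod & (NET_RING_SIZE - 1)].buffer_phys = buf;
        tx->ring[prod & (NET_RING_SIZE - 1)].len = len;
        tx->prod = prod + 1;

        /* Only cross into Xen when the ring was previously empty;
         * while the consumer is still draining earlier packets we
         * just keep queueing, which is where the pipelining comes
         * from. */
        if (prod == tx->cons)
            notify_peer_domain();

        return 0;
    }

That way the cost of crossing into Xen is amortised over a whole
batch of packets rather than paid per packet, which is exactly
the queue-building behaviour a real NIC gives you.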
I volunteered to fix this, but I'm afraid I haven't had time as
yet. I'm confident we should end up with really good inter-domain
networking performance using pipelining and page flipping.
> Also, did you benchmark how much performance is lost
> by using vds/vbds for disk access
> compared with using the block device directly (tested in DOM0)?
Performance of vbds and raw partitions should be identical. Disks
are slow -- you have to really work at it to cock the performance
up ;-)
> Could mounting /dev/sda via enbd be faster than, or
> at least nearly as fast as, using vds and vbds,
> given the additional overhead of vd/vbd use... ??
Performance using enbd should be pretty good once we've sorted
out inter-domain networking.
Ian