On 15.10.2003 at 18:48 Ian Pratt wrote:
>> >[Ian:]The main thing would be turning the VFR into more of an L2 switch
>> >than a router, with each domain having its own MAC[*]. We could then
>> >add a rule to grant a domain TX permission for a particular 802
>> >protocol number. HyperSCSI presumably has some high-level
>> >server-based authentication and privilege verification? If so, it
Yes, it does (it even supports encryption, if needed).
>> >should be pretty straightforward.
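(Just to check that I picture the rule correctly: something roughly like
the sketch below is what I read into it. All names -- vfr_rule,
domain_may_tx, ETH_P_HYPERSCSI -- are invented for illustration, and
0x889A is only my assumption for the HyperSCSI EtherType; this is not
actual Xen/VFR code.)

  /* Hypothetical sketch of the per-domain TX rule: the VFR, acting as an
   * L2 switch, forwards a frame from a domain only if its EtherType is on
   * that domain's allow-list. */
  #include <stdint.h>
  #include <stddef.h>

  #define ETH_P_HYPERSCSI 0x889A      /* assumed EtherType for HyperSCSI */

  struct vfr_rule {                   /* one granted 802 protocol number */
      uint16_t allowed_ethertype;
      struct vfr_rule *next;
  };

  struct domain_netif {
      uint8_t mac[6];                 /* per-domain MAC, as proposed     */
      struct vfr_rule *tx_rules;      /* chain of granted TX ethertypes  */
  };

  /* Return 1 if this domain may transmit a frame of the given type. */
  static int domain_may_tx(const struct domain_netif *nif, uint16_t ethertype)
  {
      const struct vfr_rule *r;
      for (r = nif->tx_rules; r != NULL; r = r->next)
          if (r->allowed_ethertype == ethertype)
              return 1;
      return 0;
  }

  int main(void)
  {
      struct vfr_rule hs = { ETH_P_HYPERSCSI, NULL };
      struct domain_netif dom = { {0x02, 0, 0, 0, 0, 1}, &hs };
      return domain_may_tx(&dom, ETH_P_HYPERSCSI) ? 0 : 1;
  }

Since HyperSCSI does its own server-side authentication, a per-domain
rule like this would indeed seem to be enough on the Xen side.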
>>
>> This is much better, though more complicated too ;-)
>>
>> However, I wouldn't base this on protocols, on routing HyperSCSI
>> Ethernet packets, or on having to use HyperSCSI kernel modules in
>> domains > 0 (probably too complicated, and only a special-case
>> solution for this problem).
>
>I still like my proposal ;-)
:)) Sorry for being so rude about it ;-)
Besides the other points I mentioned, I just want to avoid the
necessity of loading a kernel module in domains > 0 in order to use
the /dev/sda device.
It should just be usable like a standard hardware device supported
by the kernel -- KISS principle, at least from the point of view of
domains > 0 or of clients using domains > 0.
(Yes, sometimes I am a very restrictive purist ;-).
>
>It's pretty straightforward to implement, is relatively clean,
>and will have good performance.
I would like to build up a "production strength" environment
with remote disk access performance (speed) as high as reasonably
possible.
But if I accept the idea of loading a kernel module in
domains > 0 in order to get HyperSCSI-attached devices
to work somehow, then your proposal (VFR routing of Ethernet packets
to and from domains > 0) is likely to result in better
performance than additionally using enbd devices.
However, I somehow don't like the thought of 100+ domains
from e.g. 3 different physical servers connecting to the HyperSCSI
physical server directly.
Thinking about just 3 DOM0 HyperSCSI clients connecting
to the HyperSCSI server directly feels somehow more comfortable
(e.g. much easier administration, fewer points of failure).
The 3 DOM0s in this example can then export the HyperSCSI
device(s) via whatever means to the domains > 0.
>
>However, if you're exporting a single disk from the HyperSCSI
>server it's not much help.
>
>> The virtual block device driver maps this to /dev/sda and forwards
>> the request to Xen (perhaps it also tags this request as a request
>> to a "special device" before forwarding it to Xen).
>> Xen realizes that there is no physical device connected to /dev/sda
>> (or registered with Xen? Maybe it can then also recognize that
>> the request was marked as targeting a "special device").
>> Because of that condition, it now forwards this block device request
>> to DOM0, where a "request handler" kernel module listens for
>> block device requests forwarded from Xen to be handled in DOM0
>> (it will need to register a callback function with Xen in order
>> to do so).
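(To make the hand-off I describe above a little more concrete: the rough
shape I had in mind is sketched below. Every name in it is invented --
none of this exists in Xen today.)

  /* Placeholders for the hypervisor-side dispatch and the DOM0 handler
   * module described above; purely illustrative. */
  #include <stdint.h>
  #include <stddef.h>

  typedef struct blk_request {
      uint16_t device;    /* e.g. /dev/sda as seen by the guest domain   */
      int      special;   /* tagged "special device" by the guest driver */
      /* ... sector number, length, buffer reference ...                 */
  } blk_request_t;

  /* Callback the DOM0 "request handler" module registers with Xen. */
  typedef void (*dom0_blk_handler_t)(unsigned int src_domain,
                                     const blk_request_t *req);

  static dom0_blk_handler_t dom0_handler;     /* set by the DOM0 module */

  void xen_register_dom0_blk_handler(dom0_blk_handler_t h)
  {
      dom0_handler = h;
  }

  /* Hypervisor side: if no physical device backs the request (or it is
   * tagged "special"), hand it to DOM0 instead of the real block layer. */
  void xen_dispatch_blk_request(unsigned int src_domain,
                                const blk_request_t *req)
  {
      int has_physical_backing = 0;   /* lookup for /dev/sda found nothing */
      if ((req->special || !has_physical_backing) && dom0_handler != NULL)
          dom0_handler(src_domain, req);
      /* else: pass the request on to the physical block device as usual */
  }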
>
>I think your best solution is not to use Xen vbd's at all. If
>you don't like NFS, how about having domains > 0 use "enhanced
>network block devices" which talk to a simple server running in
>domain0. The storage for the nbd server can be files, partitions
>or logical volumes on /dev/sda.
>
>This should require writing no code, and will give pretty good
>performance. It gives good control over storage allocations etc.
>
>http://www.it.uc3m.es/~ptb/nbd/
Thanks a lot for pointing me to this solution!
I will look into it over the next few days (especially the performance ;-).
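(If I understand the enbd setup correctly, the domain0 side is
conceptually little more than the sketch below -- this is not the enbd
protocol or its real server code, just the idea, with a made-up request
struct, serving a backing partition/file/LV via pread/pwrite.)

  /* Idea only: answer block reads/writes for domains > 0 out of a
   * backing file, partition or logical volume on /dev/sda. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <unistd.h>
  #include <stdint.h>
  #include <stdio.h>

  struct blk_req { uint8_t write; uint64_t offset; uint32_t len; };  /* made up */

  /* Answer one request against the backing store. */
  static ssize_t serve(int backing_fd, const struct blk_req *rq, void *buf)
  {
      if (rq->write)
          return pwrite(backing_fd, buf, rq->len, (off_t)rq->offset);
      return pread(backing_fd, buf, rq->len, (off_t)rq->offset);
  }

  int main(void)
  {
      /* The backing store could equally be a plain file or an LVM volume. */
      int fd = open("/dev/sda1", O_RDWR);
      if (fd < 0) { perror("open"); return 1; }
      /* ... accept enbd client connections, decode requests, call serve() ... */
      close(fd);
      return 0;
  }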
Apropos:
Did you ever run benchmarks of the average or maximum
throughput of your VFR implementation in Xen?
This would be interesting when routing enbd IP packets
from DOM0 to the other domains on the same
machine (in terms of the average/maximum performance
that could realistically be reached).
Also, did you run any benchmarks on how much performance is lost
by using vbds/vds for disk access
compared with using the block device directly (tested in DOM0)?
Could mounting /dev/sda via enbd be more performant than, or
at least nearly as performant as, using vds and vbds,
given the additional overhead of vd/vbd use...??
>
>[It appears to work as a rootfs, but I haven't verified]
I'll try it... (an initrd is required, I think :-( ) ;-)
Best regards,
Sven