Hi List,
I don't know if this is a simple matter of opinion or if there are strong reasons to take one route or the other. I have dom0 nodes with dedicated bond interfaces that connect to a storage-traffic-only VLAN. Currently I have a few domUs running with large-ish volumes on the iSCSI SAN; to present the volumes I connect the dom0s to the storage, then in the domU config I hand the iSCSI volumes over as 'phy:/dev/disk/by-path/ip-10.10.10.5-iqn....:volume1' and so on.
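For reference, the disk lines in that domU config look roughly like this (IQN and paths abbreviated as above; the xvd* names and the ',w' mode are just how I'd typically write them, not necessarily what you'd want):

    # dom0 owns the iSCSI session; the domU only ever sees plain block devices
    disk = [
        'phy:/dev/disk/by-path/ip-10.10.10.5-iqn....:volume1,xvdb,w',
        'phy:/dev/disk/by-path/ip-10.10.10.5-iqn....:volume2,xvdc,w',
    ]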
I'm wondering if it's considered better to instead have the domU make those iSCSI connections itself, by giving it a second interface that connects to a bridge with access to the iSCSI VLAN. I set one up that way and it works as expected, though it seems to perform less well in some admittedly simple tests using dd with 'oflag=direct' set. The dom0 can write at about 110MB/s sustained, but when the domU makes the iSCSI connection itself it's doing well to manage 60MB/s.
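For completeness, in the second setup the domU just gets an extra vif pointing at the storage bridge, and the tests were nothing fancier than something along these lines (the bridge name, mount point, block size, and count here are only illustrative):

    # extra interface in the domU config, attached to the storage bridge
    vif = [ 'bridge=xenbr0', 'bridge=xenbr-storage' ]

    # crude sequential write test, run in dom0 and then inside the domU
    dd if=/dev/zero of=/mnt/iscsi-vol/ddtest bs=1M count=2048 oflag=direct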
So I'm looking to the wisdom of the list: is it crazy to set things up one way or the other, or are both approaches OK? It seems like having the dom0 handle the iSCSI connection is a big win performance-wise, but perhaps that comes at a significant cost elsewhere? Having the domU make the connection simplifies the Pacemaker configuration, but with an apparent loss in disk throughput.
This setup is initially just some fairly busy MySQL servers. Running as they are with dom0-managed iSCSI volumes, they handle the load very easily, but I had to reboot one of them today because it started throwing errors accessing a volume:
Oct 16 12:00:36 slave3 kernel: end_request: I/O error, dev xvdc, sector 983424
The dom0 still showed the iSCSI connection as active and appeared fine, but the domU couldn't access the disk. A reboot fixed it right up, which further suggests the underlying iSCSI connection from dom0 was fine all along, since I did nothing with it at all. Perhaps there's some xm command that could have cleared the problem for the domU as well, and I just don't know it.
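If the right tool is xm's block-detach/block-attach pair, I'd guess the incantation is roughly the following, though I haven't tried it and don't know whether it would have cleared this particular state (domain and device names are just taken from my example above):

    # detach the misbehaving disk from the running domU, then reattach it
    xm block-detach slave3 xvdc
    xm block-attach slave3 phy:/dev/disk/by-path/ip-10.10.10.5-iqn....:volume2 xvdc w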
Thanks for any insight, Mark