xen-users

Re: [Xen-users] Sharing space on a SAN?

Both.

Degradation is inherent to the concept of shared storage. Either you trust 
that you are alone on a disk and can use disk caching, or you are not and you 
need some other mechanism to synchronise the changes among the participants. 
That synchronisation goes via I/O or via the network, so you always pay in 
latency and bandwidth.

So, if you compare ext3 on a SAN disk to a GFS or CLVM volume, you do take a 
performance hit. You could object that the two can't be compared, since GFS 
and CLVM do things that ext3 on a SAN disk can't. True. But do we _need_ 
those things? No. In fact, we do _not want_ the ability to start the same 
domU several times on different machines. And by letting the domU use the SAN 
disk as a dedicated disk, it can use its disk cache without having to take 
any other machine into account. Even live migration is not a problem, as the 
memory (and thus the disk cache) simply gets migrated along with it...
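
To make that concrete: a minimal sketch of what such a domU config could look 
like, assuming the LUN appears on every host under the same multipath name 
(the paths, names and kernel versions below are made up, not our real setup):

  # /etc/xen/domu1.cfg -- sketch only; all names here are assumptions.
  # The whole LUN is handed to the domU as a dedicated block device, so
  # the domU can cache freely: no other host ever touches this disk.
  name    = "domu1"
  memory  = 512
  kernel  = "/boot/vmlinuz-2.6.18-xen"
  ramdisk = "/boot/initrd-2.6.18-xen.img"
  disk    = [ 'phy:/dev/mapper/domu1-disk,xvda,w' ]
  vif     = [ 'bridge=xenbr0' ]
  root    = "/dev/xvda1 ro"

Since the config only names a block device, migration needs nothing special: 
the target host sees the same /dev/mapper path and just carries on.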

I simply do not see the added value of a clustered filesystem for a domU, and 
in that light any additional overhead is too much. Why make things complex? 
Complex setups have complex problems, in my experience. We solved the problem 
with a very simple script that checks all cluster members to see whether a 
specific domU is already running (see the sketch below). Does that weigh up 
against having to learn, administer, tune and possibly debug a clustered 
filesystem?
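
For what it's worth, the script boils down to something like this; a minimal 
sketch, assuming passwordless ssh as root to each member and relying on the 
fact that 'xm list <name>' exits non-zero when no such domain exists (the 
hostnames are hypothetical):

  #!/usr/bin/env python
  # check_domu.py -- sketch only; hostnames and ssh setup are assumptions.
  # Asks every cluster member whether a given domU is already running, so
  # the same domU is never started twice on different machines.
  import subprocess
  import sys

  HOSTS = ["xenhost1", "xenhost2", "xenhost3"]  # hypothetical members

  def running_on(domu):
      """Return the host that runs `domu`, or None if nobody does."""
      devnull = open("/dev/null", "w")
      for host in HOSTS:
          # 'xm list <name>' exits 0 only if that domain exists on the host
          rc = subprocess.call(["ssh", host, "xm", "list", domu],
                               stdout=devnull, stderr=devnull)
          if rc == 0:
              return host
      return None

  if __name__ == "__main__":
      host = running_on(sys.argv[1])
      if host is not None:
          print("%s is already running on %s" % (sys.argv[1], host))
          sys.exit(1)  # refuse: a second start would corrupt the disk
      sys.exit(0)      # safe to start it here

Wrap the start of a domU in that check and you get the one guarantee a 
clustered filesystem would have given us, without any of the overhead.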

Lastly, I really don't see the $/GB argument. A GB costs the same; it's just 
a bit slower on a clustered filesystem, that's all.

Peter.
PS: nice line-up of acronyms, btw 8-)

On Monday 08 September 2008 17:30:25 Javier Guerra wrote:
> On Mon, Sep 8, 2008 at 10:23 AM, Peter Van Biesen
> <peter.vanbiesen@xxxxxxx> wrote:
> > We tested that too. It is overkill and degrades performance.
> 
> do you mean GFS/OCFS, or CLVM/EVMS?
> 
> the former have obviously higher overhead, the latter shouldn't....
> 
> it's important because if you don't have netapp-level storage
> subsystems with great administration, then you can do iSCSI/AoE/gnbd
> and CLVM, and get great $/GB results with still really good
> administrability.
> 



-- 
Peter Van Biesen
Sysadmin VAPH

tel: +32 (0) 2 225 85 70
fax: +32 (0) 2 225 85 88
e-mail: peter.vanbiesen@xxxxxxx
PGP: http://www.vaph.be/pgpkeys

Please note! The domain name of the Vlaams Agentschap is now vaph.be. This
means you can reach your correspondent via voornaam.naam@xxxxxxxx (i.e.
firstname.lastname). Please update your address book accordingly.

DISCLAIMER
-------------------------------------------------------------------------------
The agency's staff do their best to provide reliable information in their
e-mails.
Nevertheless, no rights can be derived from their content.
Any position taken in an e-mail is not necessarily the official standpoint
of the agency.
Legally binding decisions and official positions are sent by letter only.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users