xen-users

Re: [Xen-users] Xen + SAN

On Fri, Jun 24, 2011 at 2:17 AM, Fajar A. Nugraha <list@xxxxxxxxx> wrote:
> However it's also the only filesystem that I know of that can
> use SSD to speed up both read and write. It's something that LVM/ext4
> can't do. So it just might be able to offset performance penalty of
> zfs + nfs + file-based-image.

There are a few SSD cache projects for Linux (flashcache:
https://github.com/facebook/flashcache, bcache:
http://bcache.evilpiepirate.org/, dm-cache:
http://users.cis.fiu.edu/~zhaom/dmcache/index.html). These operate at
the block level, so they can accelerate any block device. While the
obvious place to put them is in the target box, I've been thinking
about using them on the initiator boxes, to accelerate the iSCSI LUNs:

obvious:

hard disks ==> RAID/LVM ==> SSD cache ==> iSCSI target ----(network)--->
iSCSI initiator ==> Xen VM
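
In the target-side layout the cache sits on the storage box, below the
iSCSI export. A rough sketch with flashcache (all device names here are
made up for illustration):

    # on the storage box: wrap the backing LV in a writeback flashcache
    # device, with the local SSD as the cache
    flashcache_create -p back cached_vm1 /dev/ssd /dev/vg0/vm1

    # then export /dev/mapper/cached_vm1 over iSCSI instead of /dev/vg0/vm1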

food for thought:

hard disks ==> RAID/LVM ==> iSCSI target ----(network)--->
iSCSI initiator ==> SSD cache ==> Xen VM


This is feasible for VM images, since each LUN is used by only one
machine at a time. A clustered filesystem, where several initiators
touch the same LUN, would immediately choke on such a setup, because
each node's local cache would be incoherent with the others'.
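
As a sketch of how the initiator side might look with bcache (assuming
the iSCSI LUN shows up as /dev/sdc and the local SSD as /dev/sdb; both
names are placeholders):

    # on the Xen host: format the iSCSI LUN as a bcache backing device
    # and the local SSD as a cache device
    make-bcache -B /dev/sdc
    make-bcache -C /dev/sdb

    # register both with the kernel (udev may already do this for you)
    echo /dev/sdc > /sys/fs/bcache/register
    echo /dev/sdb > /sys/fs/bcache/register

    # attach the cache set (UUID printed by make-bcache -C) and enable
    # writeback so writes are accelerated too
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
    echo writeback > /sys/block/bcache0/bcache/cache_mode

The VM would then be given /dev/bcache0 instead of the raw LUN.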

Pros: could be _much_ faster, since the cache is local to the user.
Cons: you have to flush/disable the cache before live-migrating, since
the dirty blocks exist only on the source host's SSD.
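
With bcache, for example, the pre-migration flush could look roughly
like this (again just a sketch, reusing the bcache0 device from above):

    # stop accumulating new dirty data on the local SSD
    echo writethrough > /sys/block/bcache0/bcache/cache_mode

    # wait until everything dirty has been written back to the iSCSI LUN
    while [ "$(cat /sys/block/bcache0/bcache/state)" != "clean" ]; do
        sleep 1
    done

    # detach the cache so the LUN is safe to attach on the target host
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/detach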


It's still just an idea; I haven't found the time (nor the machines) to
test it. But it sounds doable, no?

-- 
Javier

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
