"Fajar A. Nugraha" <fajar@xxxxxxxxx> writes:
> With that in mind, it should be easier to simply use snapshot without
> the need of xm save/restore. It will save some domU "downtime" (the time
> needed to save and restore domU).
the idea is that 'xm save' saves your RAM, including any write-back
disk cache that has not been flushed. So if I do an 'xm save', keep the
savefile, and take a bit-for-bit copy of the backing device while the
domain is still frozen, I should be able to restore the bit-for-bit copy
of the backing device at some point in the future, then 'xm restore' the
savefile I saved, and end up exactly where I was, with no inconsistencies
or corruption, as all disk writes that had not been flushed to disk are
still in RAM.
(I can reduce downtime by taking only a snapshot while the domU is down, then
doing the bit-for-bit copy off the snapshot.)
Of course, xm save/restore is pretty picky about things like CPU architecture
(and, for that matter, the path to the disk), so as always, you want to test
restoring your backup on another server. A backup that isn't tested is
no backup at all.
> Another thing to consider: when the question "how to backup domU" arose
> on this list in the past (and it comes up quite often; search the list
> archive) I'd generally reply "try using zfs snapshot". Which means :
> - for backup in domU, you either need an opensolaris or zfs-fuse/linux
> running on domU
Yeah, that's great if you are using opensolaris in the DomU (or something
else that supports zfs well) but from what I understand, the linux zfs-fuse
stuff is pretty slow.
> - for backup in dom0, you need opensolaris dom0 (using zfs volume),
> whatever the OS/fs running on domU.
This does sound interesting, though I haven't tried it.
> Another alternative is to have an opensolaris server exporting zfs
> volumes via iscsi, have dom0/domU import it, and do all backups on the
> storage server.
this is also interesting. Software iSCSI is obviously going to be slower
than native disk, but how much slower? It's an interesting question.
Right now, all my storage is local to the Dom0, and many hosts have
excess disk. I've been thinking about exporting the excess disk via iscsi
or NFS so that customers who want to buy more storage can do so without me
worrying about balancing the local storage on various Dom0 hosts.
and it would be easy enough to do that from within an OpenSolaris DomU.
the big question in my mind is 'how many of the zfs benefits do I keep
if I export over iSCSI and format the block device ext3?'
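For what it's worth, the export side is only a couple of commands on
OpenSolaris (pool name, target host, and the device letter below are all
hypothetical):

```shell
# On the OpenSolaris box: carve a 50G zvol out of the pool and share it
# over iSCSI with the built-in shareiscsi property.
zfs create -V 50G tank/guest1
zfs set shareiscsi=on tank/guest1

# On the Linux dom0/domU: discover and log in with open-iscsi, then put
# ext3 on whatever device the LUN shows up as.
iscsiadm -m discovery -t sendtargets -p storagebox
iscsiadm -m node --login
mkfs.ext3 /dev/sdX
```

My guess is you keep the block-level wins (checksums, zvol snapshots,
send/receive) but a snapshot of an ext3-formatted zvol is only
crash-consistent, so you'd want a journal replay or fsck on restore, and you
obviously lose the per-file zfs features.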
> The benefit is that :
> - zfs snapshot is much faster than lvm snapshot (when using lvm snapshot
> disk writes will be doubled : to the original lv and the snapshot lv)
LVM snapshots do have... performance consequences.
> - subsequent zfs snapshot is much faster since zfs tracks changes
> between snapshots internally (compared to rsync/blocksync which needs to
> read all files/blocks and compare their stats/checksum, thus eating lots
> of disk read i/o during backup process)
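To make the incremental point concrete, a rough sketch (dataset and snapshot
names invented):

```shell
# Full backup once, then send only the blocks that changed between two
# snapshots; zfs tracks the deltas internally, so no full re-read.
zfs snapshot tank/guest1@monday
zfs send tank/guest1@monday > /backup/guest1-full.zfs
# next night:
zfs snapshot tank/guest1@tuesday
zfs send -i tank/guest1@monday tank/guest1@tuesday > /backup/guest1-incr.zfs
```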
>
> > An alternative solution would be to bring the domUs down for a cold
> > block-level backup each night, but that is just a little more downtime
> > than I would like.
>
> Your current backup solution uses lots of disk I/O, which might result
> in severe performance degradation during backup. Depending on your
> requirements, this might be okay, but you'll get better performance
> with zfs.
Unless you are willing to move to a system with good ZFS support, I doubt it.
I bet (though I don't know for sure) that the iscsi overhead is going to
be greater than the difference between zfs snapshots and lvm snapshots.
If the dd is causing performance problems, use ionice, and set it to the
'idle' class. your backup will be really, really slow but will not
interfere with other I/O. (I have tested that, and it does seem to work
as advertised.) Now, you're right about lvm snapshots being slow, so the
domain being backed up is going to be slow until the backup finishes and the
snapshot is cleared, but ionice makes a huge difference for the other
domains on the box.
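Something along these lines (the snapshot path is made up):

```shell
# -c3 puts dd in the idle I/O scheduling class: it only gets the disk
# when no other process wants it.
ionice -c3 dd if=/dev/vg0/myguest-snap of=/backup/myguest.img bs=1M
```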
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users