Re: [Xen-users] Block level domU backup

Luke S Crawford wrote:
> "Fajar A. Nugraha" <fajar@xxxxxxxxx> writes:
>   
>> With that in mind, it should be easier to simply use snapshot without
>> the need of xm save/restore. It will save some domU "downtime" (the time
>> needed to save and restore domU).
>>     
>
> the idea is that 'xm save' saves your ram, including any write-back
> disk cache that has not been flushed.  So if I do a 'xm save' and save the 
> savefile, and take a bit-for-bit copy of the backing device while the 
> domain is still frozen, I should be able to restore the bit-for-bit copy 
> of the backing device at some point in the future, and then 'xm restore' the
> savefile I saved, and end up exactly where I was, with no inconsistencies 
> or corruptions, as all disk writes that had not been flushed to disk are 
> still in ram.  
>
> (I can reduce downtime by only taking a snapshot while the domU is down, then
> doing the bit-for-bit copy off the snapshot.) 
>
>   
Even with a snapshot, there's still the time required for "xm save" and
"xm restore". I guess it's more about choice, really.

If I snapshot without xm save/restore, I get a "dirty" filesystem
backup, but services keep running as usual. If I do xm save/restore, I
get a "clean" backup, but that also means all services on that domU are
unavailable for (at least) the duration of the save/restore. I choose
the first one.
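
To spell out the two variants in command form (from memory and untested;
the domain, VG and LV names below are just examples):

    # "clean" backup: save the domU state, snapshot, then bring it back
    xm save domu1 /backup/domu1.save
    lvcreate -s -L 2G -n domu1-snap /dev/vg0/domu1-disk
    xm restore /backup/domu1.save
    # keep /backup/domu1.save together with the disk image

    # "dirty" backup: snapshot the live volume, no downtime
    lvcreate -s -L 2G -n domu1-snap /dev/vg0/domu1-disk

Either way the bit-for-bit copy is then taken from the snapshot rather
than from the live volume, and the snapshot is removed afterwards.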

> you want to test 
> restoring your backup to another server.   A backup that isn't tested is 
> no backup at all.  
>
>   

Good point on that.
>> Another thing to consider: when the question "how to backup domU" arose
>> on this list in the past (and it comes up quite often, search the list
>> archive) I'd generally reply "try using zfs snapshot". Which means:
>> - for backup in domU, you either need an opensolaris or zfs-fuse/linux
>> running on domU
>>     
>
> Yeah, that's great if you are using opensolaris in the DomU (or something
> else that supports zfs well)  but from what I understand, the linux zfs-fuse
> stuff is pretty slow.  
>
>   

Not really. zfs-fuse is slow if you let it handle RAID (about half the
lvm/md throughput). Since I mostly need the snapshot feature, I use
zfs-fuse on top of lvm. Performance-wise, depending on how you use it,
it's similar to ext3. In the best case, if you:
- disable checksum
- enable compression
- set the application block size to match the zfs block size (or vice versa)
you can actually get better read I/O performance (with a CPU usage tradeoff).
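
For reference, the tuning above is just the usual zfs properties, e.g.
(pool and dataset names made up):

    zpool create tank /dev/vg0/zfs-lv   # pool on top of an existing LV
    zfs create tank/data
    zfs set checksum=off tank/data      # skip checksumming, saves CPU
    zfs set compression=on tank/data
    zfs set recordsize=16K tank/data    # match the application block size
                                        # (e.g. 16K for InnoDB)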

>> - for backup in dom0, you need opensolaris dom0 (using zfs volume),
>> whatever the OS/fs running on domU.
>>     
>
> This does sound interesting, though I haven't tried it.  
>
>   

I'm using an opensolaris snv_98 dom0, and it works fine for the most
part. There are differences from a linux dom0 though, like the fact that
(for now) you can't bridge to a vlan interface, only to physical
interfaces.

>> Another alternative is to have an opensolaris server exporting zfs
>> volumes via iscsi, have dom0/domU import it, and do all backups on the
>> storage server.
>>     
>
> This is also interesting.  Software iSCSI is obviously going to be slower
> than native disk, but how much slower?  It is an interesting question.
>
>   

This thread might give some info:
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-October/051749.html

> Right now, all my storage is local to the Dom0, and many hosts have 
> excess disk.  I've been thinking about exporting the excess disk via iscsi
> or NFS so that customers who want to buy more storage can do so without me
> worrying about balancing the local storage on various Dom0 hosts.  
>
> and it would be easy enough to do that from within a OpenSolaris DomU.
>
> the big question in my mind is 'how many of the zfs benefits do I retain
> if I export over iscsi and format the block device ext3?'
>
>   
You can get:
- zfs checksums and raidz, which ensure data integrity (up to the
exported block level, anyway)
- transparent compression. Having compressed ext3 volumes is nice for
certain use cases.
- snapshots and clones. Similar to qcow, but with block-device benefits.
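
A rough sketch of what I mean, with made-up names/addresses (shareiscsi
is the old pre-COMSTAR way of exporting a zvol, which is what snv_98
still uses here):

    # on the opensolaris storage box
    zpool create tank raidz c1t1d0 c1t2d0 c1t3d0   # raidz for integrity
    zfs create -V 10G tank/domu1-disk              # a zvol per guest
    zfs set compression=on tank/domu1-disk
    zfs set shareiscsi=on tank/domu1-disk

    # snapshots/clones keep working under whatever the initiator puts on top
    zfs snapshot tank/domu1-disk@backup
    zfs clone tank/domu1-disk@backup tank/domu1-disk-test

    # on the linux dom0/domU side (open-iscsi), then format as ext3
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node --login
    mkfs.ext3 /dev/sdX    # whichever device the session shows up as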


> If the dd is causing performance problems, use ionice, and set it to the
> 'idle' class.  Your backup will be really, really slow but will not
> interfere with other I/O.  (I have tested that, and it does seem to work 
> as advertised.)  Now, you're right about lvm snapshots being slow, so the 
> domain being backed up is going to be slow until the backup finishes and the 
> snapshot is cleared,  but ionice makes a huge difference for the other 
> domains on the box.  
>
>   

Good hint on ionice. At least it confines the performance penalty to the
domU being backed up.
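
i.e. something along the lines of (names made up again):

    lvcreate -s -L 2G -n domu1-snap /dev/vg0/domu1-disk
    ionice -c3 dd if=/dev/vg0/domu1-snap of=/backup/domu1-disk.img bs=1M
    lvremove -f /dev/vg0/domu1-snap

ionice -c3 puts the dd in the idle class, so it only gets disk time when
nothing else wants it.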

Regards,

Fajar
