Submission: Ralf’s updated zfs backup script (with tutorial!)
Wed, 20 Jun 2007

The following comes to you from Ralf Ramge, who has graciously allowed me to post his script and all the instructions below:

“I have a small update. I’ve made the number of backups of each
filesystem easier to handle by replacing the hardcoded number with a
variable. I also added some comments, so everybody should be able to
adjust both the path of the snapshot directory and the number of
backups easily.

I decided to show you the disaster scenario for which this script is
being used.

Let’s take a simple server with the following ZFS file systems:


root@static:/> zfs list
NAME                   USED   AVAIL   REFER  MOUNTPOINT
tank                   8,78G  101G   28,5K   /export
tank/backup            4,93G  101G   27,5K   /export/backup
tank/backup/snapshots  4,93G  101G   4,93G   /export/backup/snapshots
tank/backup/sysdata    68K    101G   41,5K   /export/backup/sysdata
tank/repository        205M   101G   205M    repository/packages
tank/zones             3,65G  101G   25,5K   /export/zones
tank/zones/ffxi-sites  3,65G  101G   3,63G   /export/zones/ffxi-sites
root@static:/>

tank/backup: a normal file system, but with compression=on. Very useful
for snapshots; the compression ratio is 1.66:1.
tank/backup/snapshots: The snapshot directory I use in the scripts.
tank/backup/sysdata: A backup directory in which essential system
data is stored. Most important: the complete contents of /etc/zones, and
perhaps some stuff like /etc/netmasks, /etc/nsswitch.conf – whatever
could be of importance during reconstruction of the host.
tank/repository/packages: That’s where I keep my packages, scripts,
whatever. Just a repository which is shared (sharenfs=ro,anon=0) and
re-mounted in the local zone as `/var/spool/pkg` … makes things easier.
tank/zones/ffxi-sites: That’s the zonepath of the zone `ffxi-sites-zone`.

I bet it’s getting interesting now, because I’ll explain how to back up
this zone using the script … and how to reconstruct it from scratch on
different hardware with different network drivers and such.

Okay. I make backups of tank/backup/sysdata (we need the contents of
/etc/zones for reconstruction), tank/repository/packages and, of
course, the entire local zone itself by backing up
tank/zones/ffxi-sites. Use my script for it, e.g. by executing it for
each file system in your crontab and sending it to a backup server.
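
A rough sketch of what those crontab entries might look like, assuming
the snapshot script (the backup_zfssnap.sh from the other submission
further down) lives in /opt/scripts and the backup server is called
backuphost – both the path and the hostname are made up, so adjust them
to your setup:

# local backups between 1 and 3 am: filesystem to back up, then the backup host
0 1 * * * /opt/scripts/backup_zfssnap.sh tank/backup/sysdata backuphost
30 1 * * * /opt/scripts/backup_zfssnap.sh tank/repository/packages backuphost
0 2 * * * /opt/scripts/backup_zfssnap.sh tank/zones/ffxi-sites backuphost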

This will result in two copies: the local copy in
/export/backup/snapshots (you can edit this path in the script) and one
on a remote server. We use the local copy in case someone shoots the
database in the local zone or whatever, and the remote copy is needed in
case the entire server has to be re-installed.

Btw: My private ISP only offers plain FTP for accessing its backup
servers, and FTP isn’t supported by the backup script. I make the local
backups between 1 and 3 am, and I use the following (imperfect) script
at 5 am to transfer all backups to the FTP server:


#!/bin/bash
HOST='<ip of ftp server>'
USER='<user>'
PASSWD='<pass>'

BACKUPDIR="/export/backup/snapshots"

if [ ! -d $BACKUPDIR ]; then
    echo "Backup Directory doesn't exist"
    exit 1
fi

cd $BACKUPDIR

# cleanup ftp server, kamikaze style

ftp -n $HOST << END_CLEANUP > /dev/null
quote USER $USER
quote PASS $PASSWD
mdel *.zfs
bye
END_CLEANUP

# transfer all ZFS images to the ftp server

#for FILE in `ls -rt1 *.zfs`; do
# ftp -n $HOST << END_SCRIPT
# quote USER $USER
# quote PASS $PASSWD
# binary
# put $FILE
# bye
# END_SCRIPT
#done

ftp -n $HOST << END_SCRIPT >/dev/null
quote USER $USER
quote PASS $PASSWD
binary
mput *.zfs
quit
END_SCRIPT

exit 0

This script isn’t recommended for production use; there’s a plaintext
password in it. I only use it because the FTP server is firewalled and
cannot be accessed from other servers using this login – and if someone
hacked into my machine, he wouldn’t need the backups anyway.

Okay, now disaster strikes. We have to reconstruct *everything*. So
insert the Solaris DVD into the drive, install Solaris as usual, apply
the Recommended Patches cluster, and so on. Finally, we’re ready to
create the ZFS pool again; we create tank/backup/snapshots and copy the
ZFS images from our remote backup server into this directory. We have
our local copies back.
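
A rough sketch of that step, assuming the pool is rebuilt on a single
disk c0t1d0 (the device name is made up; use whatever the new hardware
provides):

zpool create -m /export tank c0t1d0
zfs create tank/backup
zfs set compression=on tank/backup
zfs create tank/backup/snapshots
# now copy the .zfs images from the remote backup server
# into /export/backup/snapshots (scp, ftp, whatever you have)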

Now deploy the three filesystems using `zfs receive`, e.g.
`zfs receive tank/backup/sysdata < tank_backup_sysdata-070611-033000.zfs`.
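
With image names following the script’s naming scheme (filesystem name
with slashes replaced by underscores, plus a timestamp), restoring all
three might look like this – the timestamps here are only examples:

cd /export/backup/snapshots
# the parent filesystems (tank/repository, tank/zones) must exist first
zfs receive tank/backup/sysdata < tank_backup_sysdata-070611-033000.zfs
zfs receive tank/repository/packages < tank_repository_packages-070611-033000.zfs
zfs receive tank/zones/ffxi-sites < tank_zones_ffxi-sites-070611-033000.zfs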

We have our zone configuration back now. We `cd` to
/export/backup/sysdata. Then we copy the `index` file plus the `*.xml`
files back to /etc/zones, replacing the default ones.
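
In other words, roughly:

cd /export/backup/sysdata
cp index /etc/zones/index
cp *.xml /etc/zones/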

We still have to fix the network interface, which changed due to the
(imaginary) new hardware we have to use now. Enter the zone
configuration using `zonecfg`, e.g.:

root@static:/> zonecfg -z ffxi-sites-zone
zonecfg:ffxi-sites-zone> info
zonename: ffxi-sites-zone
zonepath: /export/zones/ffxi-sites
autoboot: false
pool: pool_default
limitpriv:
net:
	address: <your zone ip>
	physical: rtls0
rctl:
	name: zone.cpu-shares
	value: (priv=privileged,limit=10,action=none)
attr:
	name: comment
	type: string
	value: ffxi-sites
zonecfg:ffxi-sites-zone>

We have to change the “physical” entry to something else; let’s say
we’re using an X4100 now, so we need `e1000g0` instead of `rtls0`.
`ifconfig -a` shows us the device name.

Type `select net address=<your zone ip>`, then `set physical=e1000g0`,
then `end`.

We still have to commit the changes. We do this by typing `commit` and
then we `exit`.
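
Put together, the whole zonecfg exchange looks roughly like this (with
<your zone ip> standing in for the real address):

root@static:/> zonecfg -z ffxi-sites-zone
zonecfg:ffxi-sites-zone> select net address=<your zone ip>
zonecfg:ffxi-sites-zone:net> set physical=e1000g0
zonecfg:ffxi-sites-zone:net> end
zonecfg:ffxi-sites-zone> commit
zonecfg:ffxi-sites-zone> exit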

All we have to do now is `zoneadm -z ffxi-sites-zone boot` and we’re
online again, without having had to deploy the zone from scratch.

Okay, done. We’re online again.

Only continue reading if you want to join some unsupported “hacking
procedures”.

Q: What to do in case our zonepath changed, or we want it to be changed?
A: That’s easy. Grab your `vi` and edit /etc/zones/index. Ignore the “DO
NOT EDIT” warning, that’s for girls only. See below on what to do.

Q: I only want to clone a zone, what do I do?
A: `zfs send <source filesystem> | zfs receive <destination filesystem>`.
Or clone and promote a filesystem, your choice. Then grab your `vi` and
edit /etc/zones/index again. Change the IP like I showed you earlier
before you boot the cloned zone. And don’t forget to chmod 700 your new
zonepath.

(Matthew Says: Another way to clone a zone, depending on your patch level, can be seen further down here: http://uadmin.blogspot.com/2006/08/day-in-life-of-solaris-11-admin.html)
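
A minimal sketch of the send/receive route, assuming the clone is called
ffxi-sites2 (the snapshot name and paths are made up):

zfs snapshot tank/zones/ffxi-sites@clone
zfs send tank/zones/ffxi-sites@clone | zfs receive tank/zones/ffxi-sites2
chmod 700 /export/zones/ffxi-sites2
# then add an entry for the new zone to /etc/zones/index (example below)
# and fix its IP with zonecfg before booting it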

Let’s have a look at the index file:


root@static:/> cat /etc/zones/index
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)zones-index 1.2 04/04/01 SMI"
#
# DO NOT EDIT: this file is automatically generated by zoneadm(1M)
# and zonecfg(1M). Any manual changes will be lost.
#
global:installed:/
ffxi-sites-zone:installed:/export/zones/ffxi-sites:6cffc060-de2f-c972-f548-f36320bcfccf
root@static:/>

“ffxi-sites-zone” is the name of my zone. Enter the name of your clone here.
“installed” is the zone’s status. Our filesystem contains a bootable
zone, so “installed” is okay. If you configured a new zone manually and
didn’t install it using `zoneadm` yet, it’ll be “configured” … in that
case, deploy a new, bootable zonepath and set the status to “installed”.
I use this because I deploy new zones from completely installed zone
templates, which is much faster than the official (and supported)
`zoneadm -z <zone> install` way.
/export/zones/ffxi-sites is the zonepath. You may change it. Just make
sure the new zonepath exists and has mode 700. The latter is very
important.

Make sure you make a backup of the original before editing it manually.
You edit it at your own risk.

The entry for a cloned zone may look like this:


ffxi-sites2-zone:installed:/export/zones/ffxi-sites2:6cffc060-de2f-c972-f548-f36320bcfccf

Okay, and now forget what I told you, because Sun won’t give you the
least bit of support if something goes wrong :-) The fact that I use
such methods in production won’t help you much if you have a typo
where you shouldn’t have made one.

Hope you had some fun.”

Thanks for the submission Ralf!

Submission: local/remote zfs snapshot script
Wed, 06 Jun 2007

Here’s a nifty little submission from Ralf Ramge. It will do a ZFS snapshot backup to a local directory and to a remote machine, and it can also clone and promote the filesystem on the remote machine. It keeps the last 7 backups around. Take a look:

#!/bin/bash
# backup_zfssnap.sh, (c) 2007 ralf [dot] ramge [at] webde [dot] de

BACKUPDIR="/export/backup/snapshots"
DSTAMP=`date '+%y%m%d-%H%M%S'`
FILESYS=$1
DEST=$2
REPLICA=$3
BACKUPNAME=`echo $FILESYS | sed 's/\//_/g'`
BACKUPFILE=$BACKUPNAME"-"$DSTAMP".zfs"
SNAPSHOT=$FILESYS"@backup-"$DSTAMP

if [ ! -d $BACKUPDIR ]; then
    echo "Backup Directory doesn't exist"
    exit 1
fi

cd $BACKUPDIR

# Check here if we have 7 backup files, create them if we don't
COUNT_FILES=`ls -1 $BACKUPNAME* | wc -l`
if [ $COUNT_FILES -le 1 ]; then
    for COUNT in 1 2 3 4 5 6 7
    do
        if [ ! -f $BACKUPNAME"-000000-00000"$COUNT".zfs" ]; then
            touch $BACKUPNAME"-000000-00000"$COUNT".zfs"
            sleep 1
        fi
    done
fi

# Check here that we have less than 8 backup files
COUNT_FILES=`ls -1 $BACKUPNAME* | wc -l`
if [ $COUNT_FILES -gt 7 ]; then
    # echo "More than 7 backup files exist"
    # exit 1
    while [ $COUNT_FILES -gt 7 ]
    do
        OLDEST_BACKUP_FILE=`ls -rt1 $BACKUPNAME* | head -1`
        rm $OLDEST_BACKUP_FILE
        let COUNT_FILES=COUNT_FILES-1
    done
fi

# Find the oldest backup file to delete
OLDEST_BACKUP_FILE=`ls -rt1 $BACKUPNAME* | head -1`

# Create the snapshot
zfs snapshot $SNAPSHOT

# Create a filesystem image in the local backup directory
zfs send $SNAPSHOT > $BACKUPDIR"/"$BACKUPFILE

# Check for $2 and, if it exists, create a second copy on a remote host for tape archival
if [ ! -z $2 ]; then
    zfs send $SNAPSHOT | ssh root@$2 "cat >$BACKUPDIR/$BACKUPFILE"
fi

# Check for $3 and, if it exists, mirror the filesystem on the remote host
if [ ! -z $3 ]; then
    ssh root@$2 "zfs receive $3 < $BACKUPDIR/$BACKUPFILE"
fi

# Check for $4 and, if it exists, clone and promote the filesystem on the remote host
if [ ! -z $4 ]; then
    ssh root@$2 "zfs clone $SNAPSHOT $4; sleep 30; zfs promote $4"
fi

# Get the trash out of the house
rm $OLDEST_BACKUP_FILE
if [ ! -z $2 ]; then
    ssh root@$2 "rm $BACKUPDIR/$OLDEST_BACKUP_FILE"
fi

# Destroy the local @backup snapshots created by this script for this filesystem
SNAPLIST=`zfs list -H | grep $FILESYS | grep @backup | cut -f1`
for i in $SNAPLIST; do
    zfs destroy $i
done

# Exit cleanly
exit 0
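
A few usage examples for the script above – the hostname and the clone
target are made up; the positional arguments are the filesystem to
snapshot, an optional remote host, an optional remote destination
filesystem, and an optional clone name on that host:

# local image only, kept in /export/backup/snapshots
./backup_zfssnap.sh tank/zones/ffxi-sites

# local image plus a copy of the image on a backup host
./backup_zfssnap.sh tank/zones/ffxi-sites backuphost

# additionally receive the image into a filesystem on the backup host
./backup_zfssnap.sh tank/zones/ffxi-sites backuphost tank/zones/ffxi-sites

# ... and clone and promote it there
./backup_zfssnap.sh tank/zones/ffxi-sites backuphost tank/zones/ffxi-sites tank/zones/ffxi-sites-clone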

Thanks for the submission Ralf! (I changed your email address in the script comments so you wouldn’t get spam)
