Submission: Ralf’s updated zfs backup script (with tutorial!)

June 20, 2007

The following comes to you from Ralf Ramge, who has graciously allowed me to post his script and all the instructions below:

“I have a small update. I’ve made the number of backups of each
filesystem easier to handle by replacing the hardcoded number with a
variable. I also added some comments, so everybody should be able to adjust
both the path of the snapshot directory and the number of backups easily.
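
The backup script itself isn’t reproduced in this post (see the link in the first comment below), so the following is only a rough, hypothetical sketch of the rotation idea it implements (a configurable dump directory and a configurable number of backups per filesystem), not Ralf’s actual code:


#!/bin/bash
# Hypothetical sketch only, NOT the real backup script: snapshot one
# filesystem, dump it into the snapshot directory, keep the newest $KEEP dumps.
FS="$1"                               # e.g. tank/zones/ffxi-sites
SNAPDIR="/export/backup/snapshots"    # edit to taste
KEEP=7                                # number of backups to keep per filesystem

STAMP=`date '+%y%m%d-%H%M%S'`
NAME=`echo "$FS" | tr '/' '_'`        # tank/zones/ffxi-sites -> tank_zones_ffxi-sites

# take a snapshot and dump it as a file into the snapshot directory
zfs snapshot "$FS@$STAMP"
zfs send "$FS@$STAMP" > "$SNAPDIR/$NAME-$STAMP.zfs"
zfs destroy "$FS@$STAMP"

# rotation: remove everything but the newest $KEEP dumps of this filesystem
COUNT=0
for FILE in `ls -t "$SNAPDIR/$NAME-"*.zfs 2>/dev/null`; do
    COUNT=`expr $COUNT + 1`
    [ "$COUNT" -gt "$KEEP" ] && rm -f "$FILE"
done

exit 0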

I decided to show you the disaster scenario for which this script is
being used.

Let’s take a simple server with the following ZFS file systems:


root@static:/> zfs list
NAME                   USED   AVAIL   REFER  MOUNTPOINT
tank                   8,78G  101G   28,5K   /export
tank/backup            4,93G  101G   27,5K   /export/backup
tank/backup/snapshots  4,93G  101G   4,93G   /export/backup/snapshots
tank/backup/sysdata    68K    101G   41,5K   /export/backup/sysdata
tank/repository        205M   101G   205M    repository/packages
tank/zones             3,65G  101G   25,5K   /export/zones
tank/zones/ffxi-sites  3,65G  101G   3,63G   /export/zones/ffxi-sites
root@static:/>

tank/backup: a usual filesystem, but with compression=on. Very useful for
snapshot dumps; the compression rate is 1.66:1.
tank/backup/snapshots: the snapshot directory I use in the scripts.
tank/backup/sysdata: a backup directory in which essential system
data is stored. Most important: the complete contents of /etc/zones, and
perhaps some stuff like /etc/netmasks, /etc/nsswitch.conf, whatever
could be of importance during reconstruction of the host.
tank/repository/packages: that’s where I keep my packages, scripts,
whatever. Just a repository which is shared (sharenfs=ro,anon=0) and
re-mounted in the local zone as `/var/spool/pkg` … makes things easier.
tank/zones/ffxi-sites: that’s the zonepath of the zone `ffxi-sites-zone`.
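
For reference, a layout like the one above can be set up roughly as follows. This is a sketch: the pool `tank` is mounted on /export as in the listing, the child filesystems simply inherit their mountpoints, and the exact properties are illustrative.


zfs create tank/backup
zfs set compression=on tank/backup          # snapshot dumps compress well (1.66:1 here)
zfs create tank/backup/snapshots
zfs create tank/backup/sysdata
zfs create tank/repository                  # package/script repository
zfs set sharenfs=ro,anon=0 tank/repository  # read-only NFS share, remounted in the zone
zfs create tank/zones
zfs create tank/zones/ffxi-sites            # will hold the zonepath
chmod 700 /export/zones/ffxi-sites          # zoneadm insists on mode 700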

I bet it’s getting interesting now, because I’ll explain how to
back up this zone using the script … and how to reconstruct it from
scratch, on different hardware with different network drivers and such.

Okay. I make backups of tank/backup/sysdata (we need the contents of
/etc/zones for reconstruction), tank/repository/packages and, of
course, the entire local zone itself by backing up
tank/zones/ffxi-sites. Use my script for it, e.g. by executing it for
each filesystem from your crontab and sending the dumps to a backup server.
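
Purely as an illustration (the backup script isn’t shown in this post, and the script names, paths and argument style below are assumptions), the crontab side could look like this:


# root's crontab (sketch): one local backup per filesystem between 1 and 3 am,
# then the FTP transfer script (shown below) at 5 am
0 1 * * * /root/bin/zfsbackup.sh tank/backup/sysdata
0 2 * * * /root/bin/zfsbackup.sh tank/repository/packages
0 3 * * * /root/bin/zfsbackup.sh tank/zones/ffxi-sites
0 5 * * * /root/bin/ftp-transfer.sh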

This results in two copies: the local copy in
/export/backup/snapshots (you can edit this path in the script) and one
on a remote server. We use the local copy in case someone shoots the
database in the local zone or whatever. The remote copy is needed in
case the entire server has to be re-installed.

Btw: my private ISP only offers plain FTP for accessing backup servers,
and FTP isn’t supported by the backup script. I make the local backups
between 1 and 3 am, and I use the following (imperfect) script at 5 am to
transfer all backups to the FTP server:


#!/bin/bash
# Push all local ZFS dumps to the (FTP-only) backup server.

HOST='<ip of ftp server>'
USER='<user>'
PASSWD='<pass>'

BACKUPDIR="/export/backup/snapshots"

if [ ! -d "$BACKUPDIR" ]; then
    echo "Backup directory doesn't exist"
    exit 1
fi

cd "$BACKUPDIR" || exit 1

# clean up the ftp server first, kamikaze style: delete the old dumps

ftp -n "$HOST" << END_CLEANUP > /dev/null
quote USER $USER
quote PASS $PASSWD
mdel *.zfs
bye
END_CLEANUP

# transfer all ZFS images to the ftp server in one session

ftp -n "$HOST" << END_SCRIPT > /dev/null
quote USER $USER
quote PASS $PASSWD
binary
mput *.zfs
quit
END_SCRIPT

exit 0

This script isn’t recommended for production use: there’s a plain-text
password in it. I only use it because the FTP server is firewalled and
cannot be accessed from other servers using this login – and if someone
hacked into my machine, he wouldn’t need the backups anyway.

Okay, now disaster happens. We have to reconstruct *everything*. So
insert the Solaris DVD into the drive, install Solaris as usual, apply
the Recommended Patches cluster, and so on. Finally, we’re ready to create
the ZFS pool again; we create tank/backup/snapshots and copy the ZFS
images from our remote backup server into this directory. We have our
local copies back.
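
A sketch of that step (the disk names are placeholders for whatever the new machine has):


zpool create tank mirror c1t0d0 c1t1d0   # placeholder disks on the new box
zfs set mountpoint=/export tank
zfs create tank/backup
zfs set compression=on tank/backup
zfs create tank/backup/snapshots
# now fetch the *.zfs images back from the remote backup server (e.g. per FTP)
# into /export/backup/snapshots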

Now: deploy the three filesystems using `zfs receive`, e.g.
`zfs receive tank/backup/sysdata < tank_backup_sysdata-070611-033000.zfs`
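
All three of them together, with illustrative filenames that follow the same naming pattern (the parent filesystems have to exist before you can receive into them):


zfs create tank/repository
zfs create tank/zones
cd /export/backup/snapshots
zfs receive tank/backup/sysdata       < tank_backup_sysdata-070611-033000.zfs
zfs receive tank/repository/packages  < tank_repository_packages-070611-033000.zfs
zfs receive tank/zones/ffxi-sites     < tank_zones_ffxi-sites-070611-033000.zfs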

We have our zone configuration back now. We `cd` to
/export/backup/sysdata. Then we copy the `index` file plus
the `*.xml` files back to /etc/zones, replacing the default ones.
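
Spelled out (assuming the sysdata backup holds the /etc/zones files at its top level):


cd /export/backup/sysdata
cp index *.xml /etc/zones/   # overwrite the freshly installed defaults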

We still have to fix the network interface, which changed due to the
(imaginary) new hardware we have to use now. Enter the zone
configuration using `zonecfg`, e.g.:

root@static:/> zonecfg -z ffxi-sites-zone
zonecfg:ffxi-sites-zone> info
zonename: ffxi-sites-zone
zonepath: /export/zones/ffxi-sites
autoboot: false
pool: pool_default
limitpriv:
net:
        address: <your zone ip>
        physical: rtls0
rctl:
        name: zone.cpu-shares
        value: (priv=privileged,limit=10,action=none)
attr:
        name: comment
        type: string
        value: ffxi-sites
zonecfg:ffxi-sites-zone>

We have to change the “physical” entry to something else; let’s say
we’re using an X4100 now, so we need `e1000g0` instead of `rtls0`.
`ifconfig -a` shows us the device name.

Type `select net address=<your zone ip>`. Then `set physical=e1000g0`.
Then `end`.

We still have to commit the changes. We do this by typing `commit` and
then we `exit`.
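
The whole session looks roughly like this:


root@static:/> zonecfg -z ffxi-sites-zone
zonecfg:ffxi-sites-zone> select net address=<your zone ip>
zonecfg:ffxi-sites-zone:net> set physical=e1000g0
zonecfg:ffxi-sites-zone:net> end
zonecfg:ffxi-sites-zone> commit
zonecfg:ffxi-sites-zone> exit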

All we have to do now is `zoneadm -z ffxi-sites-zone boot` and we’re online
again, without having to reinstall the zone.
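
If you want to double-check the result:


zoneadm list -cv            # the zone should go from "installed" to "running"
zlogin -C ffxi-sites-zone   # attach to the zone console and watch it come up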

Okay, done. We’re online again.

Only continue reading if you want to join some unsupported “hacking
procedures”.

Q: What to do in case our zonepath changed, or we want it to be changed?
A: That’s easy. Grab your `vi` and edit /etc/zones/index. Ignore the “DO
NOT EDIT” warning; that’s for girls only. See below for what to do.

Q: I only want to clone a zone, what to do now?
A: `zfs send <source snapshot> | zfs receive <destination filesystem>`.
Or clone and promote a filesystem, your choice. Then grab
your `vi` and edit /etc/zones/index again. Change the IP the way I showed
you earlier before you boot the cloned zone. And don’t forget to chmod 700
your new zonepath.
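
A rough sketch of that (the names and the snapshot label are illustrative):


zfs snapshot tank/zones/ffxi-sites@clone
zfs send tank/zones/ffxi-sites@clone | zfs receive tank/zones/ffxi-sites2
# or: zfs clone tank/zones/ffxi-sites@clone tank/zones/ffxi-sites2
#     (and zfs promote tank/zones/ffxi-sites2 to make it independent)
chmod 700 /export/zones/ffxi-sites2   # zoneadm insists on mode 700
vi /etc/zones/index                   # add an entry for the clone, see below
# then give the clone its own configuration and IP with zonecfg before booting it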

(Matthew Says: Another way to clone a zone, depending on your patch level, can be seen further down here: http://uadmin.blogspot.com/2006/08/day-in-life-of-solaris-11-admin.html)

Let’s have a look at the index file:


root@static:/> cat /etc/zones/index
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)zones-index 1.2 04/04/01 SMI"
#
# DO NOT EDIT: this file is automatically generated by zoneadm(1M)
# and zonecfg(1M). Any manual changes will be lost.
#
global:installed:/
ffxi-sites-zone:installed:/export/zones/ffxi-sites:6cffc060-de2f-c972-f548-f36320bcfccf
root@static:/>

“ffxi-sites-zone” is the name of my zone. Enter the name of your clone here.
“installed” is the zone’s status. Our filesystem contains a bootable
zone, so “installed” is okay. If you configured a new zone manually and
didn’t install it using `zoneadm` yet, it’ll be “configured” … in that case,
deploy a new, bootable zonepath and set the status to “installed”. I use
this because I deploy new zones from completely installed zone templates,
which is much faster than the official (and supported) `zoneadm -z <zone> install` way.
/export/zones/ffxi-sites is the zonepath. You may change it (see the sketch
below). Just make sure the new zonepath exists and has mode 700. The latter
is very important.
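
A sketch of such a zonepath change (the paths are illustrative):


cp /etc/zones/index /etc/zones/index.orig                            # safety copy
zfs set mountpoint=/export/zones2/ffxi-sites tank/zones/ffxi-sites   # move the dataset
chmod 700 /export/zones2/ffxi-sites                                  # mode 700 is mandatory
vi /etc/zones/index                                                  # point the zonepath at the new location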

Make sure you have a backup of the original before editing it manually.
You edit it at your own risk.

The entry for a cloned zone may look like this:


ffxi-sites2-zone:installed:/export/zones/ffxi-sites2:6cffc060-de2f-c972-f548-f36320bcfccf

Okay, and now forget what I told you, because Sun won’t give you the
least bit of support if something goes wrong :-) The fact that I use
such methods in production won’t help you much if you have a typo
where you shouldn’t have made one.

Hope you had some fun.”

Thanks for the submission Ralf!

posted in backup, bash, script, solaris, sun, tutorials, zfs by Lee

2 Comments to "Submission: Ralf’s updated zfs backup script (with tutorial!)"

  1. Ralf Ramge wrote:

    There’s a small bug in the script, concerning the number of dummy backup files the script deploys during its first run. And sadly, Solaris 10 doesn’t come with the `seq` command, so I have to provide you with that one, too. Of course, the old version of the script works too, just not as smoothly.

    You’ll find the new version here: http://sunstrokes.de/index.php?/archives/1-Transferring-ZFS-snapshots-to-remote-locations.html

  2. Jeff wrote:

    Very good post! Ralf’s backup script seems very interesting. Unfortunately, I can’t find the actual backup script in this post. I see an FTP script and zones information, and the link contained in Ralf’s comment is no longer valid. I’m trying to find a script that can create a snapshot (at a given time and/or after an rsync transfer) and keep a given number of snapshots (a certain number of hourlies, dailies, weeklies, etc.). I’m planning on using a Solaris box as a backup server, and would like to rsync to the Solaris server and then have a snapshot created following the rsync session. I’m not sure if Ralf’s script performs the snapshot management functionality I’m looking for, but it sounds like it would be a good starting point.

 