ZFS Uploader is a simple program for backing up full and incremental ZFS
snapshots to Amazon S3. It supports cron-based scheduling and can
automatically remove old snapshots and backups. A helpful CLI (`zfsup`) lets
you run jobs, restore, and list backups.
Features:

- Backup/restore ZFS file systems
- Create incremental and full backups
- Automatically remove old snapshots and backups
- Use any S3 storage class type
- Helpful CLI
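
As a quick sketch of a CLI session (only `zfsup list` appears later in this README; the other subcommand names and arguments are assumptions based on the feature list):

```bash
zfsup backup                   # run the configured backup jobs (subcommand name assumed)
zfsup restore pool/filesystem  # restore a file system from a backup (assumed)
zfsup list                     # list backups
```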
Requirements:

- Python 3.6 or higher
- ZFS 0.8.1 or higher (untested on earlier versions)
Commands should be run as root.

- Create a directory and virtual environment

  ```bash
  mkdir /etc/zfs_uploader
  cd /etc/zfs_uploader
  virtualenv --python python3 env
  ```

- Install ZFS Uploader

  ```bash
  source env/bin/activate
  pip install zfs_uploader
  ln -sf /etc/zfs_uploader/env/bin/zfsup /usr/local/sbin/zfsup
  ```

- Write the configuration file (see the Configuration File section below for helpful examples)

  ```bash
  vi config.cfg
  chmod 600 config.cfg
  ```

- Start the service (see the unit-file sketch after this list)

  ```bash
  cp zfs_uploader.service /etc/systemd/system/zfs_uploader.service
  systemctl enable --now zfs_uploader
  ```

- List backups

  ```bash
  zfsup list
  ```
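
The `zfs_uploader.service` file copied in the start-service step ships with the project and is authoritative. Purely as a sketch, a unit that runs the program from the virtualenv created above might look like this (the `ExecStart` subcommand and restart policy are assumptions):

```ini
# Sketch only; use the zfs_uploader.service shipped with the project.
[Unit]
Description=ZFS Uploader
After=network-online.target

[Service]
Type=simple
WorkingDirectory=/etc/zfs_uploader
# Entry point installed into the virtualenv by pip; subcommand assumed.
ExecStart=/etc/zfs_uploader/env/bin/zfsup backup
Restart=on-failure

[Install]
WantedBy=multi-user.target
```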
The program reads backup job parameters from a configuration file. Parameters set under `[DEFAULT]` apply to all backup jobs, and multiple backup jobs can be defined in a single file. The following parameters are available:
- `bucket_name`: S3 bucket name.
- `access_key`: S3 access key.
- `secret_key`: S3 secret key.
- `filesystem`: ZFS file system, given by the job's section name (e.g. `[pool/filesystem]`).
- `prefix`: Prefix to be prepended to the S3 key.
- `region`: S3 region.
- `endpoint`: S3 endpoint, for S3-compatible services such as Backblaze B2.
- `cron`: Cron schedule. Example: `* 0 * * *`
- `max_snapshots`: Maximum number of snapshots.
- `max_backups`: Maximum number of full and incremental backups.
- `max_incremental_backups_per_full`: Maximum number of incremental backups per full backup.
- `storage_class`: S3 storage class.
- `max_multipart_parts`: Maximum number of parts to use in a multipart S3 upload.
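
As a sketch, a job that exercises most of the parameters might look like the following (the B2 example below shows `endpoint`); the values are placeholders, and placing `prefix` and `max_multipart_parts` in the job section is an assumption:

```ini
# Illustrative values only.
[DEFAULT]
bucket_name = BUCKET_NAME
access_key = ACCESS_KEY
secret_key = SECRET_KEY
region = us-east-1
storage_class = STANDARD

[pool/filesystem]
prefix = host-a
cron = 0 2 * * *
max_snapshots = 7
max_backups = 7
max_incremental_backups_per_full = 6
max_multipart_parts = 1000
```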
Example with a cap on incremental backups per full backup:

```ini
[DEFAULT]
bucket_name = BUCKET_NAME
region = us-east-1
access_key = ACCESS_KEY
secret_key = SECRET_KEY
storage_class = STANDARD

[pool/filesystem]
cron = 0 2 * * *
max_snapshots = 7
max_incremental_backups_per_full = 6
max_backups = 7
```
The file system is backed up daily at 02:00. Only the most recent 7 snapshots are kept. Once there are more than 7 backups, the oldest backup without dependents is removed.
The same job can back up to an S3-compatible service such as Backblaze B2 by setting `endpoint`:

```ini
[DEFAULT]
bucket_name = BUCKET_NAME
region = eu-central-003
access_key = ACCESS_KEY
secret_key = SECRET_KEY
storage_class = STANDARD
endpoint = https://s3.eu-central-003.backblazeb2.com

[pool/filesystem]
cron = 0 2 * * *
max_snapshots = 7
max_incremental_backups_per_full = 6
max_backups = 7
```
Backup pattern: full backup (f), incremental backup (i)
- f
- f i
- f i i
- f i i i
- f i i i i
- f i i i i i
- f i i i i i i
- f i i i i i f
- f i i i i f i
- f i i i f i i
- f i i f i i i
- f i f i i i i
- f f i i i i i
- f i i i i i i
Example without `max_incremental_backups_per_full`:

```ini
[DEFAULT]
bucket_name = BUCKET_NAME
region = us-east-1
access_key = ACCESS_KEY
secret_key = SECRET_KEY
storage_class = STANDARD

[pool/filesystem]
cron = 0 2 * * *
max_snapshots = 7
max_backups = 7
```
The file system is backed up daily at 02:00. Only the most recent 7 snapshots are kept. Once there are more than 7 backups, the oldest incremental backup is removed. The full backup is never removed.
Backup pattern: full backup (f), incremental backup (i)
- f
- f i
- f i i
- f i i i
- f i i i i
- f i i i i i
- f i i i i i i
Example with only full backups:

```ini
[DEFAULT]
bucket_name = BUCKET_NAME
region = us-east-1
access_key = ACCESS_KEY
secret_key = SECRET_KEY
storage_class = STANDARD

[pool/filesystem]
cron = 0 2 * * *
max_snapshots = 7
max_incremental_backups_per_full = 0
max_backups = 7
```
The file system is backed up daily at 02:00. Only the most recent 7 snapshots are kept. Once there are more than 7 backups, the oldest full backup is removed. No incremental backups are taken.
Backup pattern: full backup (f)
- f
- f f
- f f f
- f f f f
- f f f f f
- f f f f f f
- f f f f f f f
Supported values for `storage_class`:

- STANDARD
- REDUCED_REDUNDANCY
- STANDARD_IA
- ONEZONE_IA
- INTELLIGENT_TIERING
- GLACIER
- DEEP_ARCHIVE
- OUTPOSTS
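
Any of these values can be set with the `storage_class` parameter, for example:

```ini
[DEFAULT]
storage_class = DEEP_ARCHIVE
```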
Release procedure:

- Increment the version in the `__init__.py` file
- Update `CHANGELOG.md` with the new version
- Tag the release in GitHub when ready. Add changelog items to the release description. A GitHub Actions workflow will automatically build and push the release to PyPI.
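
A sketch of that flow from a checkout (file paths, branch name, and version are placeholders and assumptions):

```bash
$EDITOR zfs_uploader/__init__.py  # bump the version string
$EDITOR CHANGELOG.md              # describe the new version
git commit -am "Release X.Y.Z"
git push origin main
# Tag/draft the release in GitHub; publishing it triggers the workflow
# that builds the package and pushes it to PyPI.
```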