If you run Proxmox in production, you already know the real enemy isn’t disk failure — it’s half‑finished backups, overlapping jobs, and sleeping hosts in the middle of a snapshot.
This post walks through a battle‑tested Bash script I use to back up large datasets to Proxmox Backup Server (PBS) safely, cleanly, and without interfering with vzdump VM backups.
No magic. No cron roulette. Just solid Linux hygiene.
The Problem
Typical setup:
- VMs live on fast SSDs
- Large datasets live elsewhere (ZFS, HDDs, media pools)
- VM backups run via vzdump
Dataset backups should:
- Never collide with VM backups
- Never run twice
- Never get interrupted by suspend / sleep
- Be logged properly
Most “simple” backup scripts ignore at least two of these.
The Design Goals
This script is built around a few non‑negotiables:
- PBS‑native backups using proxmox-backup-client
- Locking to prevent double runs
- Wait for vzdump (and abort if it starts mid‑backup)
- Suspend / sleep inhibition via systemd-inhibit
- Per‑run logs with timestamps
- Selective excludes for regenerable data
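The sleep‑inhibition goal deserves a concrete shape. One way to meet it is to have the script re‑exec itself under systemd-inhibit; a minimal sketch, where the --inhibited sentinel flag is my own convention (not a systemd option, and not part of the full script below):

```shell
# Sketch: re-exec under systemd-inhibit so the host cannot suspend or
# shut down mid-backup. The --inhibited sentinel prevents an infinite
# re-exec loop; it is a convention for this sketch, not a systemd flag.
inhibit_wrap() {
    if [[ "${1:-}" != "--inhibited" ]] && command -v systemd-inhibit >/dev/null 2>&1; then
        exec systemd-inhibit \
            --what=sleep:shutdown \
            --who="pbs-backup" \
            --why="PBS dataset backup in progress" \
            "$0" --inhibited "$@"
    fi
}
# inhibit_wrap "$@"   # call this first thing in the real script
```

On the re-exec'd run the sentinel short-circuits, so the backup proceeds normally while logind holds the sleep lock for us.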
Environment & Authentication
We export everything PBS needs explicitly — no interactive prompts, no surprises:
export PBS_REPOSITORY="user@[email protected]:pbs-store"
export PBS_PASSWORD_FILE="/root/pbs-password"
export PBS_FINGERPRINT="XXXX:f0:e9:cd:2e:be:a7:0c:98:3c:85:99:f4:8f:66:ad:52:9f"
export PBS_NAMESPACE="pve-data"
Why this matters:
- Tokens = no root passwords in scripts (see our post about creating a PBS API token)
- Fingerprint = MITM protection
- Namespace = clean separation inside PBS
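The token secret should never sit in a world‑readable file. A small sketch for creating it safely — the secret value is a placeholder, and PW_FILE falls back to $HOME/pbs-password, which is /root/pbs-password when run as root:

```shell
# Sketch: write the token secret under a restrictive umask so the file
# never exists with loose permissions, then lock it to 0600.
# The secret below is a placeholder — paste your real token secret.
PW_FILE="${PBS_PASSWORD_FILE:-$HOME/pbs-password}"
( umask 077 && printf '%s\n' 'PLACEHOLDER-TOKEN-SECRET' > "$PW_FILE" )
chmod 600 "$PW_FILE"
```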
Logging Like an Expert
Each run gets its own timestamped log:
LOG_DIR="/var/log/pbs"
DATE="$(date +'%Y-%m-%d_%H-%M-%S')"
exec > >(tee -a "$LOG_DIR/pbs-backup-${DATE}.log") 2>&1
If something breaks at 3 AM, you’ll thank yourself.
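Per‑run logs accumulate, so pair them with a retention sweep. A sketch, assuming a 30‑day window (tune to your audit needs) and the same LOG_DIR as above:

```shell
# Sketch: prune per-run logs older than 30 days so /var/log/pbs
# does not grow forever. The 30-day window is an assumption.
LOG_DIR="${LOG_DIR:-/var/log/pbs}"
mkdir -p "$LOG_DIR"
find "$LOG_DIR" -name 'pbs-backup-*.log' -mtime +30 -delete
```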
Hard Locking (No Double Runs)
Cron overlaps are silent killers.
We use flock:
exec 9>"/run/pbs-backup.lock"
flock -n 9 || exit 0
If the script is already running → exit cleanly.
Respecting vzdump
VM backups have priority. Period.
Wait if vzdump is running
wait_for_vzdump() {
    [[ -e "$VZDUMP_LOCK" ]] || return 0
    echo "Waiting for vzdump..."
    exec 8<"$VZDUMP_LOCK"
    flock 8
    exec 8>&-
}
This avoids I/O storms and half‑written PBS chunks.
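Waiting before the run covers the start; the "abort if it starts mid‑backup" half needs a check between datasets. A sketch, assuming (like wait_for_vzdump above) that vzdump holds an flock on its lock file:

```shell
# Sketch: call between datasets to bail out cleanly if vzdump has
# started since our backup began. Assumes vzdump flocks its lock file,
# matching the wait_for_vzdump logic.
VZDUMP_LOCK="${VZDUMP_LOCK:-/var/run/vzdump.lock}"
abort_if_vzdump() {
    [[ -e "$VZDUMP_LOCK" ]] || return 0
    if ! flock -n "$VZDUMP_LOCK" true; then
        echo "vzdump started mid-backup, aborting." >&2
        exit 1
    fi
}
```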
One Function to Rule Them All
The backup_dataset() function:
- Takes a dataset name
- Takes a path
- Accepts unlimited excludes
backup_dataset nextcloud "/mmd_server/data/nextcloud" \
    "appdata_*/preview/" \
    "data/*/files_versions/"
This keeps the script:
- DRY
- Readable
- Easy to extend
Smart Excludes (Huge Win)
Examples:
Nextcloud
Exclude regenerable junk:
- previews
- cache
- JS/CSS
- trashbin
- versions
Immich / Photos
Exclude:
- thumbnails
- encoded video
- caches
Result:
- Faster backups
- Smaller PBS datastore
- Same restore value
Final Backup Flow
Order matters:
- Lock
- Wait for vzdump
- Block sleep
- Backup datasets sequentially
- Abort safely if needed
- Log everything
No races. No corruption. No guessing.
Why This Beats “Just Use vzdump”
Because datasets are:
- Not VMs
- Often massive
- Often shared
- Often static
PBS + pxar is perfect for this use case.
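Restores use the same client. Illustrative commands only — the snapshot name and restore target below are placeholders to replace with real output from `snapshot list`, and PBS_REPOSITORY etc. must be exported as above:

```shell
# List snapshots in the namespace, then pull one archive back out.
# Snapshot path and restore target are placeholders.
proxmox-backup-client snapshot list --ns pve-data
proxmox-backup-client restore \
    "host/nextcloud/2024-01-01T00:00:00Z" nextcloud.pxar \
    /restore/nextcloud --ns pve-data
```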
Full Backup Script
Here is the complete production-ready Bash script for backing up your datasets to PBS. Replace PBS_REPOSITORY, PBS_PASSWORD_FILE, and dataset paths with your own values before running.
#!/usr/bin/env bash
set -euo pipefail
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
### ==============================
### Configuration
### ==============================
export PBS_REPOSITORY="pve@[email protected]:pbs-store"
export PBS_PASSWORD_FILE="/path/to/pbs-password"
export PBS_FINGERPRINT="XXXXX:cd:2e:be:a7:0c:98:3c:85:99:f4:8f:66:ad:52:9f"
export PBS_NAMESPACE="pve-data"
BASE="/data"
LOG_DIR="/var/log/pbs"
DATE="$(date -Is)"
SNAP_PREFIX="pbsbackup"
TTL_HOURS=24
LOCKFILE="/run/pbs-backup.lock"
VZDUMP_LOCK="/var/run/vzdump.lock"
mkdir -p "$LOG_DIR"
exec > >(tee -a "$LOG_DIR/pbs-backup-${DATE}.log") 2>&1
echo "=== PBS dataset backup started at $DATE ==="
### ==============================
### Locking
### ==============================
exec 9>"$LOCKFILE"
/usr/bin/flock -n 9 || {
    echo "Another PBS backup is running, exiting."
    exit 0
}
### ==============================
### Wait for vzdump
### ==============================
wait_for_vzdump() {
    [[ -e "$VZDUMP_LOCK" ]] || return 0
    echo "Waiting for vzdump..."
    exec 8<"$VZDUMP_LOCK"
    flock 8
    exec 8>&-
}
### ==============================
### TTL Cleanup (CRITICAL)
### ==============================
cleanup_old_snapshots() {
    echo "Running snapshot TTL cleanup (>${TTL_HOURS}h)..."
    local cutoff
    cutoff="$(date -d "-${TTL_HOURS} hours" +%s)"
    # -p prints creation as an epoch timestamp, so no date parsing is needed
    zfs list -H -p -t snapshot -o name,creation | \
    while read -r snap created; do
        [[ "$snap" == *@${SNAP_PREFIX}-* ]] || continue
        [[ "$created" -lt "$cutoff" ]] || continue
        echo "Destroying expired snapshot $snap"
        zfs destroy -f "$snap" || true
    done
}
cleanup_old_snapshots
### ==============================
### Backup function
### ==============================
backup_dataset() {
    local NAME="$1"
    local DATASET_PATH="$2"
    shift 2
    local EXCLUDES=("$@")
    wait_for_vzdump
    local ZFS_DATASET
    ZFS_DATASET=$(zfs list -H -o name "$DATASET_PATH" 2>/dev/null || true)
    local SNAP_NAME="${SNAP_PREFIX}-${NAME}-$(date +%s)"
    local SRC_PATH="$DATASET_PATH"
    if [[ -n "$ZFS_DATASET" ]]; then
        echo "Creating snapshot ${ZFS_DATASET}@${SNAP_NAME}"
        zfs snapshot "${ZFS_DATASET}@${SNAP_NAME}"
        SRC_PATH="${DATASET_PATH}/.zfs/snapshot/${SNAP_NAME}"
    fi
    local CMD ex
    CMD=(
        proxmox-backup-client backup
        "${NAME}.pxar:${SRC_PATH}"
        --repository "$PBS_REPOSITORY"
        --ns "$PBS_NAMESPACE"
        --backup-id "$NAME"
    )
    for ex in "${EXCLUDES[@]}"; do
        CMD+=(--exclude "$ex")
    done
    echo "Running: ${CMD[*]}"
    "${CMD[@]}"
    # The snapshot is no longer needed once the backup succeeds;
    # the TTL cleanup above catches anything left behind by failed runs.
    if [[ -n "$ZFS_DATASET" ]]; then
        zfs destroy "${ZFS_DATASET}@${SNAP_NAME}" || true
    fi
}
### ==============================
### Execution
### ==============================
backup_dataset documents "$BASE/documents"
backup_dataset git "$BASE/git"
backup_dataset src "$BASE/src"
backup_dataset music "$BASE/music"
backup_dataset videos "$BASE/videos"
backup_dataset iso "$BASE/iso"
backup_dataset programs "$BASE/programs"
backup_dataset nextcloud "$BASE/nextcloud" \
    "appdata_*/preview/" \
    "appdata_*/cache/" \
    "data/*/files_trashbin/" \
    "data/*/files_versions/"
backup_dataset photos "$BASE/photos" \
    "thumbs/" \
    "cache/"
echo "=== PBS dataset backup finished successfully ==="
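To schedule it, a systemd timer avoids the cron roulette mentioned up top. A sketch — the unit names and install path are assumptions, so adjust to wherever you keep the script:

```ini
# /etc/systemd/system/pbs-dataset-backup.service
[Unit]
Description=PBS dataset backup

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/pbs-dataset-backup.sh

# /etc/systemd/system/pbs-dataset-backup.timer
[Unit]
Description=Nightly PBS dataset backup

[Timer]
OnCalendar=*-*-* 02:30:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now pbs-dataset-backup.timer`. Even with a timer, keep the flock guard — belt and braces.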
Final Thoughts
This script has been running quietly, reliably, and boringly — which is exactly what backups should be.
If you:
- Run Proxmox seriously
- Care about data integrity
- Hate backup surprises
…this pattern is worth adopting.
Happy backing up 🚀
Mohammad Dahamshi is a skilled Embedded Software Engineer and web developer. With experience in C/C++, Linux, WordPress, and DevOps tools, he helps businesses solve technical challenges and build reliable digital solutions. Fluent in Arabic, Hebrew, and English, he also runs Saratec, offering web design and digital marketing services.
