Backup & Recovery Procedures
Overview
This comprehensive guide covers essential backup and recovery procedures for databases, file systems, and applications. Learn how to implement the 3-2-1 backup rule, configure automated backup solutions, and establish disaster recovery procedures to protect your critical data and ensure business continuity.
Quick Reference
- Backup Types: Full, incremental, differential backups
- Recovery Strategies: RTO, RPO, disaster recovery planning
- Tools: rsync, tar, database backup tools
- Testing: Regular backup validation and recovery testing
1. Backup Strategy Fundamentals
1.1 The 3-2-1 Backup Rule
Implementing the industry-standard 3-2-1 backup strategy for maximum data protection.
3-2-1 Rule Components:
- 3 Copies: Original data + 2 backup copies
- 2 Different Media: Different storage types (disk, tape, cloud)
- 1 Offsite: At least one copy stored offsite
Backup Types:
- Full Backup: Complete copy of all data
- Incremental Backup: Only changed data since last backup
- Differential Backup: All changes since last full backup
- Snapshot Backup: Point-in-time copy of data
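The difference between full and incremental backups can be demonstrated with GNU tar's --listed-incremental snapshot file. The paths below are throwaway temp directories used purely for illustration:

```shell
# Demonstrate full vs. incremental backups with GNU tar.
# All paths are throwaway temp locations, not real backup targets.
WORK=$(mktemp -d)
mkdir -p "$WORK/data"
echo "v1" > "$WORK/data/a.txt"

# Level-0 (full) backup: records every file's state in snapshot.snar
tar -czf "$WORK/full.tar.gz" -g "$WORK/snapshot.snar" -C "$WORK" data

# Change one file, then take a level-1 (incremental) backup
echo "v2" > "$WORK/data/b.txt"
tar -czf "$WORK/incr.tar.gz" -g "$WORK/snapshot.snar" -C "$WORK" data

# The incremental archive carries directory metadata plus only the new file
tar -tzf "$WORK/incr.tar.gz"
```

Restoring from incrementals means extracting the full archive first, then each incremental archive in order.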
1.2 Recovery Objectives
Defining recovery time and point objectives for business continuity.
Key Metrics:
- RTO (Recovery Time Objective): Maximum acceptable downtime
- RPO (Recovery Point Objective): Maximum acceptable data loss
- MTBF (Mean Time Between Failures): System reliability metric
- MTTR (Mean Time To Recovery): Average recovery time
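An RPO is only meaningful if it is measured: if the newest backup is older than the RPO allows, the objective is already being violated. A minimal sketch of such a check, where the directory and the one-hour RPO are illustrative assumptions:

```shell
# Alert when the newest backup file is older than the RPO allows.
# The directory and 1-hour RPO are illustrative assumptions; a real check
# would point at the production backup directory.
BACKUP_DIR=$(mktemp -d)
touch "$BACKUP_DIR/demo_backup.sql.gz"   # simulate a fresh backup
RPO_SECONDS=3600

# Newest file in the backup directory (GNU find)
LATEST=$(find "$BACKUP_DIR" -type f -printf '%T@ %p\n' | sort -rn | head -1 | cut -d' ' -f2-)
AGE=$(( $(date +%s) - $(stat -c %Y "$LATEST") ))

if [ "$AGE" -le "$RPO_SECONDS" ]; then
  echo "RPO met: newest backup is ${AGE}s old"
else
  echo "RPO violated: newest backup is ${AGE}s old"
fi
```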
2. Database Backup Procedures
2.1 MySQL Backup and Recovery
Comprehensive MySQL backup and recovery procedures.
MySQL Full Backup:
#!/bin/bash
# mysql_full_backup.sh
DB_USER="backup_user"
DB_PASS="backup_password"
BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="mysql_full_backup_${DATE}.sql"
# Create backup directory
mkdir -p $BACKUP_DIR
# Full database backup (--master-data is named --source-data in MySQL 8.0.26+)
mysqldump -u$DB_USER -p$DB_PASS \
--single-transaction \
--routines \
--triggers \
--events \
--all-databases \
--master-data=2 \
> $BACKUP_DIR/$BACKUP_FILE
# Verify mysqldump's exit status before compressing (gzip would overwrite $?)
if [ $? -eq 0 ]; then
gzip $BACKUP_DIR/$BACKUP_FILE
echo "Backup completed successfully: ${BACKUP_FILE}.gz"
else
echo "Backup failed!" | mail -s "MySQL Backup Failed" admin@company.com
fi
# Clean up old backups (keep 7 days)
find $BACKUP_DIR -name "mysql_full_backup_*.sql.gz" -mtime +7 -delete
MySQL Incremental Backup:
#!/bin/bash
# mysql_incremental_backup.sh
DB_USER="backup_user"
DB_PASS="backup_password"
BACKUP_DIR="/backup/mysql/incremental"
BINLOG_DIR="/var/lib/mysql"   # adjust to your log_bin location and base name
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="mysql_incremental_${DATE}.sql"
# Create backup directory
mkdir -p $BACKUP_DIR
# Rotate the binary log; the just-closed log holds all changes
# since the previous rotation
mysql -u$DB_USER -p$DB_PASS -e "FLUSH BINARY LOGS;"
# The second-newest log file is the one that was just closed
PREV_LOG=$(ls -1 $BINLOG_DIR/mysql-bin.[0-9]* | tail -2 | head -1)
# Convert it to replayable SQL
mysqlbinlog $PREV_LOG > $BACKUP_DIR/$BACKUP_FILE
# Compress backup
gzip $BACKUP_DIR/$BACKUP_FILE
echo "Incremental backup completed: ${BACKUP_FILE}.gz"
MySQL Recovery:
#!/bin/bash
# mysql_recovery.sh
BACKUP_FILE="/backup/mysql/mysql_full_backup_20231201_020000.sql.gz"
DB_USER="root"
DB_PASS="password"
# MySQL must be running to accept the restore stream
systemctl start mysql
# Restore from backup
gunzip -c $BACKUP_FILE | mysql -u$DB_USER -p$DB_PASS
if [ $? -ne 0 ]; then
echo "Recovery failed!"
exit 1
fi
# Verify recovery
mysql -u$DB_USER -p$DB_PASS -e "SHOW DATABASES;"
echo "Recovery completed successfully"
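A full restore only brings the database back to the moment of the dump; replaying binary logs up to just before the failure closes the gap (point-in-time recovery). The binlog path, stop time, and credentials below are illustrative assumptions, and the replay only runs when the log actually exists:

```shell
# Point-in-time recovery sketch: replay binary logs on top of a restored
# full dump, stopping just before the failure. The path, timestamp, and
# credentials are illustrative assumptions.
BINLOG="/var/lib/mysql/binlog.000042"
STOP_TIME="2023-12-01 14:59:00"

if command -v mysqlbinlog >/dev/null 2>&1 && [ -f "$BINLOG" ]; then
  mysqlbinlog --stop-datetime="$STOP_TIME" "$BINLOG" \
    | mysql -uroot -pbackup_password \
    || echo "binlog replay failed"
else
  echo "mysqlbinlog or $BINLOG unavailable; nothing replayed"
fi
```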
2.2 PostgreSQL Backup and Recovery
PostgreSQL backup and recovery procedures.
PostgreSQL Backup:
#!/bin/bash
# postgres_backup.sh
DB_NAME="your_database"
DB_USER="postgres"
BACKUP_DIR="/backup/postgres"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="postgres_backup_${DATE}.sql"
# Create backup directory
mkdir -p $BACKUP_DIR
# Full database backup
pg_dump -U $DB_USER -h localhost -d $DB_NAME \
--verbose \
--clean \
--no-acl \
--no-owner \
--format=custom \
--file=$BACKUP_DIR/$BACKUP_FILE
# Compress backup (custom-format dumps are already compressed; this extra
# gzip mainly keeps file naming consistent with the MySQL backups)
gzip $BACKUP_DIR/$BACKUP_FILE
echo "PostgreSQL backup completed: $BACKUP_FILE.gz"
PostgreSQL Recovery:
#!/bin/bash
# postgres_recovery.sh
BACKUP_FILE="/backup/postgres/postgres_backup_20231201_020000.sql.gz"
DB_NAME="your_database"
DB_USER="postgres"
# Drop existing database (skip quietly if it does not exist)
dropdb --if-exists -U $DB_USER $DB_NAME
# Create new database
createdb -U $DB_USER $DB_NAME
# Restore from backup
gunzip -c $BACKUP_FILE | pg_restore -U $DB_USER -d $DB_NAME
echo "PostgreSQL recovery completed"
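Logical dumps with pg_dump are one option; for large clusters, a physical base backup plus WAL archiving is the usual foundation for point-in-time recovery. A sketch assuming a local cluster and a postgres superuser, written so the command degrades to a message when no server is reachable:

```shell
# Physical base backup sketch with pg_basebackup (the foundation for
# WAL-based point-in-time recovery). Host, user, and target directory are
# illustrative assumptions; the command is allowed to fail gracefully here.
BASE_DIR="/backup/postgres/base_$(date +%Y%m%d)"

if command -v pg_basebackup >/dev/null 2>&1; then
  pg_basebackup -U postgres -h localhost \
    -D "$BASE_DIR" \
    --wal-method=stream \
    --checkpoint=fast \
    --progress \
    || echo "pg_basebackup could not reach a server (expected outside a real cluster)"
else
  echo "pg_basebackup not installed; skipping"
fi
```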
3. File System Backup
3.1 rsync Backup Solution
Using rsync for efficient file system backups.
rsync Backup Script:
#!/bin/bash
# rsync_backup.sh
SOURCE_DIR="/home"
DEST_DIR="/backup/files"
LOG_FILE="/var/log/backup.log"
DATE=$(date +%Y%m%d_%H%M%S)
# Create backup directory
mkdir -p $DEST_DIR
# rsync backup with progress
rsync -avz --progress \
--delete \
--exclude="*.tmp" \
--exclude="*.log" \
--exclude=".cache" \
--exclude=".thumbnails" \
$SOURCE_DIR/ $DEST_DIR/ \
--log-file=$LOG_FILE
# Create snapshot (exclude earlier snapshot archives so they do not nest)
tar -czf $DEST_DIR/snapshot_${DATE}.tar.gz --exclude="snapshot_*.tar.gz" -C $DEST_DIR .
echo "File backup completed: $DATE" >> $LOG_FILE
3.2 tar Backup Solution
Using tar for comprehensive file system backups.
tar Backup Script:
#!/bin/bash
# tar_backup.sh
SOURCE_DIR="/var/www"
BACKUP_DIR="/backup/tar"
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="www_backup_${DATE}.tar.gz"
# Create backup directory
mkdir -p $BACKUP_DIR
# Create tar backup
tar -czf $BACKUP_DIR/$BACKUP_FILE \
--exclude="*.log" \
--exclude="*.tmp" \
--exclude=".git" \
-C $SOURCE_DIR .
# Verify backup
if [ $? -eq 0 ]; then
echo "Tar backup completed successfully: $BACKUP_FILE"
else
echo "Tar backup failed!" | mail -s "Backup Failed" admin@company.com
fi
# Clean up old backups (keep 30 days)
find $BACKUP_DIR -name "www_backup_*.tar.gz" -mtime +30 -delete
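Beyond the exit-code check above, recording a checksum at backup time lets any later copy be verified byte-for-byte, and gzip -t confirms the compression stream is intact. A self-contained sketch using throwaway files:

```shell
# Verify archive integrity end-to-end: checksum recorded at backup time,
# re-checked before trusting the copy. Paths are throwaway temp files.
WORK2=$(mktemp -d)
echo "content" > "$WORK2/index.html"
tar -czf "$WORK2/www.tar.gz" -C "$WORK2" index.html

# Record the checksum alongside the archive
( cd "$WORK2" && sha256sum www.tar.gz > www.tar.gz.sha256 )

# Later, or after copying offsite: verify the stream and the bytes
gzip -t "$WORK2/www.tar.gz"
( cd "$WORK2" && sha256sum -c www.tar.gz.sha256 )
```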
4. Cloud Backup Solutions
4.1 AWS S3 Backup
Implementing AWS S3 for cloud backup storage.
AWS S3 Backup Script:
#!/bin/bash
# aws_s3_backup.sh
BUCKET_NAME="company-backups"
LOCAL_DIR="/backup/local"
DATE=$(date +%Y%m%d_%H%M%S)
# Sync local backup to S3 (caution: --delete mirrors local deletions into
# the bucket, so enable S3 versioning if deleted files must stay recoverable)
aws s3 sync $LOCAL_DIR s3://$BUCKET_NAME/backups/ \
--storage-class STANDARD_IA \
--delete \
--exclude "*.tmp" \
--exclude "*.log"
# Set lifecycle policy for old backups
aws s3api put-bucket-lifecycle-configuration \
--bucket $BUCKET_NAME \
--lifecycle-configuration '{
"Rules": [
{
"ID": "DeleteOldBackups",
"Status": "Enabled",
"Filter": {"Prefix": "backups/"},
"Expiration": {"Days": 90}
}
]
}'
echo "AWS S3 backup completed: $DATE"
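Restoring is the sync in reverse, and --dryrun previews what would be copied. The bucket name and restore path are illustrative assumptions, and the sketch degrades to a message when the CLI or credentials are missing:

```shell
# Restore sketch: pull the backup tree back down from S3. The bucket and
# restore directory are illustrative assumptions; --dryrun previews the
# copy without transferring anything.
BUCKET="company-backups"
RESTORE_DIR="/restore/local"

if command -v aws >/dev/null 2>&1; then
  aws s3 sync "s3://$BUCKET/backups/" "$RESTORE_DIR/" --dryrun \
    || echo "aws s3 sync failed (expected without credentials)"
else
  echo "aws CLI not installed; skipping"
fi
```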
4.2 Google Cloud Storage Backup
Using Google Cloud Storage for backup storage.
GCS Backup Script:
#!/bin/bash
# gcs_backup.sh
BUCKET_NAME="company-backups"
LOCAL_DIR="/backup/local"
DATE=$(date +%Y%m%d_%H%M%S)
# Sync local backup to GCS (caution: -d mirrors local deletions into the
# bucket, so enable object versioning if deleted files must stay recoverable)
gsutil -m rsync -r -d $LOCAL_DIR gs://$BUCKET_NAME/backups/
# Set lifecycle policy
gsutil lifecycle set lifecycle.json gs://$BUCKET_NAME
echo "GCS backup completed: $DATE"
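The script above references a lifecycle.json that the guide does not show. A minimal version that deletes objects after 90 days (the age threshold here is an assumption) can be generated like this:

```shell
# Write a minimal GCS lifecycle config: delete objects older than 90 days.
# The 90-day threshold is an illustrative assumption; adjust to your
# retention policy before applying with gsutil lifecycle set.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 90}
    }
  ]
}
EOF
```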
5. Disaster Recovery Planning
5.1 Disaster Recovery Procedures
Establishing comprehensive disaster recovery procedures.
Recovery Procedures Checklist:
- ✓ Assess the scope and impact of the disaster
- ✓ Activate the disaster recovery team
- ✓ Notify stakeholders and customers
- ✓ Restore critical systems from backups
- ✓ Verify data integrity and system functionality
- ✓ Test all critical business processes
- ✓ Document lessons learned and improvements
5.2 Recovery Testing
Regular testing of backup and recovery procedures.
Recovery Test Script:
#!/bin/bash
# recovery_test.sh
TEST_DATE=$(date +%Y%m%d_%H%M%S)
TEST_LOG="/var/log/recovery_test_${TEST_DATE}.log"
echo "Starting recovery test: $TEST_DATE" >> $TEST_LOG
# Test database recovery
echo "Testing database recovery..." >> $TEST_LOG
./mysql_recovery.sh >> $TEST_LOG 2>&1
if [ $? -eq 0 ]; then
echo "Database recovery test: PASSED" >> $TEST_LOG
else
echo "Database recovery test: FAILED" >> $TEST_LOG
fi
# Test file system recovery
echo "Testing file system recovery..." >> $TEST_LOG
./file_recovery.sh >> $TEST_LOG 2>&1
if [ $? -eq 0 ]; then
echo "File system recovery test: PASSED" >> $TEST_LOG
else
echo "File system recovery test: FAILED" >> $TEST_LOG
fi
# Send test results
mail -s "Recovery Test Results - $TEST_DATE" admin@company.com < $TEST_LOG
echo "Recovery test completed: $TEST_DATE"
6. Backup Monitoring and Alerting
6.1 Backup Monitoring Script
Monitoring backup success and failure with automated alerts.
Backup Monitoring Script:
#!/bin/bash
# backup_monitor.sh
ALERT_EMAIL="admin@company.com"
BACKUP_DIR="/backup"
LOG_FILE="/var/log/backup_monitor.log"
# Check backup age (alert if older than 25 hours)
for backup in $(find $BACKUP_DIR \( -name "*.sql.gz" -o -name "*.tar.gz" \)); do
if [ -n "$(find "$backup" -mmin +1500)" ]; then
echo "Backup is older than 25 hours: $backup" | mail -s "Backup Alert" $ALERT_EMAIL
fi
done
# Check backup size (fail if smaller than expected)
MIN_SIZE=1000000 # 1MB minimum
for backup in $(find $BACKUP_DIR -name "*.sql.gz" -o -name "*.tar.gz"); do
SIZE=$(stat -c%s "$backup")
if [ $SIZE -lt $MIN_SIZE ]; then
echo "Backup size is suspiciously small: $backup ($SIZE bytes)" | mail -s "Backup Alert" $ALERT_EMAIL
fi
done
# Check disk space
DISK_USAGE=$(df $BACKUP_DIR | awk 'NR==2 {print $5}' | sed 's/%//')
if [ $DISK_USAGE -gt 90 ]; then
echo "Backup disk space is low: $DISK_USAGE%" | mail -s "Backup Alert" $ALERT_EMAIL
fi
echo "Backup monitoring completed: $(date)" >> $LOG_FILE
6.2 Automated Backup Validation
Automated validation of backup integrity and completeness.
Backup Validation Script:
#!/bin/bash
# backup_validation.sh
BACKUP_FILE="/backup/mysql/mysql_full_backup_20231201_020000.sql.gz"
VALIDATION_LOG="/var/log/backup_validation.log"
echo "Starting backup validation: $(date)" >> $VALIDATION_LOG
# Test backup file integrity
if gunzip -t $BACKUP_FILE; then
echo "Backup file integrity: PASSED" >> $VALIDATION_LOG
else
echo "Backup file integrity: FAILED" >> $VALIDATION_LOG
exit 1
fi
# Test database restore into a scratch database
# (credentials are assumed to come from ~/.my.cnf)
TEST_DB="backup_test_$(date +%s)"
mysql -e "CREATE DATABASE $TEST_DB;"
gunzip -c $BACKUP_FILE | mysql $TEST_DB
if [ $? -eq 0 ]; then
echo "Database restore test: PASSED" >> $VALIDATION_LOG
mysql -e "DROP DATABASE $TEST_DB;"
else
echo "Database restore test: FAILED" >> $VALIDATION_LOG
fi
echo "Backup validation completed: $(date)" >> $VALIDATION_LOG
7. Backup Automation and Scheduling
7.1 Cron Job Configuration
Setting up automated backup schedules using cron.
Cron Job Setup:
# Add to crontab
crontab -e
# Daily full backup at 2 AM
0 2 * * * /path/to/mysql_full_backup.sh >> /var/log/backup.log 2>&1
# Hourly incremental backup
0 * * * * /path/to/mysql_incremental_backup.sh >> /var/log/backup.log 2>&1
# Weekly file system backup on Sunday at 3 AM
0 3 * * 0 /path/to/tar_backup.sh >> /var/log/backup.log 2>&1
# Daily backup monitoring at 6 AM
0 6 * * * /path/to/backup_monitor.sh >> /var/log/backup.log 2>&1
# Monthly recovery test on 1st at 4 AM
0 4 1 * * /path/to/recovery_test.sh >> /var/log/backup.log 2>&1
7.2 Backup Retention Policies
Implementing backup retention policies for optimal storage management.
Retention Policy Script:
#!/bin/bash
# backup_retention.sh
BACKUP_DIR="/backup"
RETENTION_LOG="/var/log/retention.log"
echo "Starting backup retention cleanup: $(date)" >> $RETENTION_LOG
# Keep daily backups for 7 days
find $BACKUP_DIR -name "*_daily_*" -mtime +7 -delete
# Keep weekly backups for 4 weeks
find $BACKUP_DIR -name "*_weekly_*" -mtime +28 -delete
# Keep monthly backups for 12 months
find $BACKUP_DIR -name "*_monthly_*" -mtime +365 -delete
# Keep yearly backups for 7 years
find $BACKUP_DIR -name "*_yearly_*" -mtime +2555 -delete
echo "Backup retention cleanup completed: $(date)" >> $RETENTION_LOG