Mirror of https://github.com/acedanger/shell.git, synced 2025-12-05 22:50:18 -08:00.

Commit: Commit local changes before merging with remote

## plex/README.md (new file, 294 lines)
# Plex Backup and Management Scripts

This directory contains all scripts and documentation related to Plex Media Server backup, restoration, validation, and management.

## Scripts Overview

### Core Backup Scripts

#### `backup-plex.sh`

**Enhanced Plex backup script with advanced features**

- **Full backup operations** with integrity verification
- **Performance monitoring** with JSON-based logging
- **WAL file handling** for SQLite databases
- **Database integrity checks** with automated repair options
- **Parallel processing** for improved performance
- **Multi-channel notifications** (console, webhook, email)
- **Comprehensive logging** with color-coded output

**Usage:**

```bash
./backup-plex.sh                     # Standard backup
./backup-plex.sh --check-integrity   # Integrity check only
./backup-plex.sh --non-interactive   # Automated mode
./backup-plex.sh --auto-repair       # Auto-repair database issues
```

#### `restore-plex.sh`

**Safe restoration script with validation**

- **Backup validation** before restoration
- **Dry-run mode** for testing
- **Current data backup** before restoration
- **Interactive backup selection**

**Usage:**

```bash
./restore-plex.sh                                     # List available backups
./restore-plex.sh plex-backup-20250125_143022.tar.gz  # Restore specific backup
./restore-plex.sh --dry-run backup-file.tar.gz        # Test restoration
```

#### `validate-plex-backups.sh`

**Backup validation and health monitoring**

- **Archive integrity checking**
- **Backup freshness validation**
- **Comprehensive reporting**
- **Automated fix suggestions**

**Usage:**

```bash
./validate-plex-backups.sh           # Validate all backups
./validate-plex-backups.sh --report  # Generate detailed report
./validate-plex-backups.sh --fix     # Auto-fix issues where possible
```

### Testing and Monitoring

#### `test-plex-backup.sh`

**Comprehensive testing framework**

- **Unit tests** for core functionality
- **Integration tests** for full system testing
- **Performance benchmarks**

**Usage:**

```bash
./test-plex-backup.sh all          # Run all tests
./test-plex-backup.sh unit         # Unit tests only
./test-plex-backup.sh performance  # Performance benchmarks
```

#### `integration-test-plex.sh`

**Integration testing for the Plex backup system**

- **End-to-end testing**
- **System integration validation**
- **Environment compatibility checks**

#### `monitor-plex-backup.sh`

**Real-time backup monitoring**

- **Live backup status**
- **Performance metrics**
- **Error detection and alerting**

### Utility Scripts

#### `plex.sh`

**Plex Media Server service management**

- **Service start/stop/restart**
- **Status monitoring**
- **Safe service management**

#### `plex-recent-additions.sh`

**Recent media additions reporting**

- **New content detection**
- **Addition summaries**
- **Media library analytics**

## Configuration

### Environment Variables

Key configuration parameters in `backup-plex.sh`:

```bash
# Retention settings
MAX_BACKUP_AGE_DAYS=30      # Remove backups older than 30 days
MAX_BACKUPS_TO_KEEP=10      # Keep a maximum of 10 backup archives

# Directory settings
BACKUP_ROOT="/mnt/share/media/backups/plex"
LOG_ROOT="/mnt/share/media/backups/logs"

# Feature toggles
PARALLEL_VERIFICATION=true  # Enable parallel verification
PERFORMANCE_MONITORING=true # Track performance metrics
AUTO_REPAIR=false           # Automatic database repair
```

### Backup Strategy

The enhanced backup system implements:

- **Archive-only structure**: Direct `.tar.gz` storage
- **Timestamp naming**: `plex-backup-YYYYMMDD_HHMMSS.tar.gz`
- **Automatic cleanup**: Age and count-based retention
- **Integrity validation**: Comprehensive archive verification
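
The age- and count-based cleanup can be sketched in a few lines of shell. The `prune_backups` helper below is illustrative, not the literal implementation in `backup-plex.sh`; its defaults mirror the retention values shown in the configuration section above.

```bash
# Sketch of age- and count-based retention (illustrative helper, not the
# actual function from backup-plex.sh).
prune_backups() {
    backup_root="$1"
    max_age_days="${2:-30}"
    max_to_keep="${3:-10}"

    # Remove archives older than the age threshold.
    find "$backup_root" -maxdepth 1 -name 'plex-backup-*.tar.gz' \
        -mtime +"$max_age_days" -delete

    # Keep only the newest N archives; the timestamped names sort
    # chronologically, so a reverse sort puts the newest first.
    ls "$backup_root"/plex-backup-*.tar.gz 2>/dev/null | sort -r |
        tail -n +"$((max_to_keep + 1))" |
        while IFS= read -r old; do rm -f -- "$old"; done
}
```

Because the timestamp format sorts lexicographically in date order, no date parsing is needed for the count-based pass.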

## Directory Structure

```
/mnt/share/media/backups/plex/
├── plex-backup-20250125_143022.tar.gz    # Latest backup
├── plex-backup-20250124_143011.tar.gz    # Previous backup
├── plex-backup-20250123_143008.tar.gz    # Older backup
└── logs/
    ├── backup_log_20250125_143022.md
    ├── plex-backup-performance.json
    └── plex-backup.json
```

## Enhanced Features

### Performance Monitoring

- **JSON performance logs**: All operations timed and logged
- **Performance reports**: Automatic generation of metrics
- **Operation tracking**: Backup, verification, service management times
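
Appending a timed entry to a JSON-array log can be sketched as follows; the `record_operation` helper and exact field names are illustrative (they follow the fields queried by the `jq` examples later in this document, but may not match the script's internal format exactly).

```bash
# Sketch: append one timed-operation record to a JSON array log.
# Helper name and field set are illustrative.
record_operation() {
    log="$1"; operation="$2"; duration="$3"

    # Start with an empty JSON array if the log does not exist yet.
    [ -s "$log" ] || echo '[]' > "$log"

    # jq rewrites the whole array; write to a temp file, then move it
    # into place so a crash never leaves a half-written log.
    jq --arg op "$operation" --argjson dur "$duration" \
        --arg ts "$(date -Iseconds)" \
        '. += [{operation: $op, duration_seconds: $dur, timestamp: $ts}]' \
        "$log" > "$log.tmp" && mv "$log.tmp" "$log"
}
```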

### Database Management

- **Integrity checking**: Comprehensive SQLite database validation
- **Automated repair**: Optional auto-repair of corruption
- **WAL file handling**: Proper SQLite Write-Ahead Logging management
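
The integrity check and WAL handling boil down to two SQLite pragmas. A minimal sketch, using the stock `sqlite3` CLI (Plex ships its own bundled SQLite binary, which should be preferred for real library databases); the `check_plex_db` name is illustrative:

```bash
# Sketch: checkpoint the WAL and verify database integrity before backup.
# Uses the stock sqlite3 CLI; substitute Plex's bundled SQLite in production.
check_plex_db() {
    db="$1"

    # Flush pending -wal content into the main database file so the
    # archive captures a consistent snapshot.
    sqlite3 "$db" "PRAGMA wal_checkpoint(TRUNCATE);" >/dev/null 2>&1 || return 1

    # PRAGMA integrity_check prints exactly "ok" for a healthy database.
    result=$(sqlite3 "$db" "PRAGMA integrity_check;" 2>/dev/null)
    [ "$result" = "ok" ]
}
```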

### Notification System

- **Console output**: Color-coded status messages
- **Webhook notifications**: Custom webhook URL support
- **Email notifications**: SMTP-based email alerts
- **Default webhook**: Automatic notifications to a configured endpoint
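
A webhook notification is essentially a JSON POST. The sketch below assumes a simple `{status, message, timestamp}` payload; the actual schema used by `backup-plex.sh` and the endpoint URL are configuration-dependent, and both helper names are illustrative.

```bash
# Sketch of a webhook notification (assumed payload shape; helper names
# are illustrative, not taken from backup-plex.sh).
build_payload() {
    jq -n --arg status "$1" --arg message "$2" \
        --arg timestamp "$(date -Iseconds)" \
        '{status: $status, message: $message, timestamp: $timestamp}'
}

send_webhook() {
    url="$1"; status="$2"; message="$3"
    # --fail makes curl return non-zero on HTTP errors, so delivery
    # failures surface to the caller instead of being swallowed.
    build_payload "$status" "$message" |
        curl --silent --fail -X POST -H 'Content-Type: application/json' \
            --data @- "$url"
}
```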

### Safety Features

- **Pre-flight checks**: Disk space and system validation
- **Service management**: Safe Plex service start/stop
- **Backup verification**: Checksum and integrity validation
- **Error handling**: Comprehensive error detection and recovery
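
A disk-space pre-flight check can be sketched as below; the 2x-source-size requirement matches the figure given under Troubleshooting, and the `preflight_space_check` name is illustrative.

```bash
# Sketch: require roughly 2x the source size to be free at the backup
# destination before starting (illustrative helper name).
preflight_space_check() {
    src="$1"; dest="$2"

    # Size of the data to be archived, in KB.
    needed_kb=$(du -sk "$src" | cut -f1)

    # Free space on the destination filesystem, in KB (POSIX df output).
    free_kb=$(df -Pk "$dest" | awk 'NR==2 {print $4}')

    # Fail if free space is below twice the source size.
    [ "$free_kb" -ge $((needed_kb * 2)) ]
}
```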

## Automation and Scheduling

### Cron Integration

Example crontab entries for automated operations:

```bash
# Daily Plex backup at 04:15
15 4 * * * /home/acedanger/shell/plex/backup-plex.sh --non-interactive --auto-repair 2>&1 | logger -t plex-backup -p user.info

# Daily validation at 07:00
0 7 * * * /home/acedanger/shell/plex/validate-plex-backups.sh --fix 2>&1 | logger -t plex-validation -p user.info
```

### Log Monitoring

Monitor backup operations with:

```bash
# Real-time monitoring
sudo journalctl -f -t plex-backup -t plex-validation

# Historical analysis
sudo journalctl --since '24 hours ago' -t plex-backup

# Performance analysis
jq '.[] | select(.operation == "backup") | .duration_seconds' logs/plex-backup-performance.json
```

## Troubleshooting

### Common Issues

1. **Database corruption**: Use the `--auto-repair` flag or repair manually
2. **Insufficient disk space**: Check space requirements (2x backup size)
3. **Service management**: Ensure the Plex service is accessible
4. **Archive validation**: Use the validation script for integrity checks
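
For issue 1, a manual repair is commonly done with SQLite's dump-and-reload. A sketch, assuming the Plex service is stopped first; for real library databases, prefer Plex's bundled SQLite binary over the stock `sqlite3` shown here, and note that `repair_db` is an illustrative helper:

```bash
# Sketch of manual database repair via dump-and-reload. Stop Plex first.
# (Newer sqlite3 builds also offer ".recover" for heavier corruption.)
repair_db() {
    broken="$1"; repaired="$2"

    # Dump whatever SQL can be read and replay it into a fresh database.
    sqlite3 "$broken" ".dump" | sqlite3 "$repaired"

    # Confirm the rebuilt database passes the integrity check.
    sqlite3 "$repaired" "PRAGMA integrity_check;" | grep -q '^ok$'
}
```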

### Debug Mode

Enable verbose logging:

```bash
# Add environment variable for debug output
PLEX_DEBUG=true ./backup-plex.sh
```

### Log Analysis

```bash
# Check backup success rate
grep "SUCCESS" logs/plex-backup-*.log | wc -l

# Analyze errors
grep "ERROR" logs/plex-backup-*.log | tail -10

# Performance trends
jq '[.[] | select(.operation == "backup") | .duration_seconds] | add/length' logs/plex-backup-performance.json
```

## Security Considerations

### File Permissions

- Backup files created with appropriate permissions
- Sensitive files maintain original ownership
- Temporary files properly cleaned up

### Access Control

- Scripts require appropriate sudo permissions
- Backup locations should have restricted access
- Log files contain operational data only

### Network Security

- Webhook notifications use HTTPS when possible
- No sensitive data included in notifications
- Email notifications respect system configuration

## Documentation

### Detailed Documentation

- **[plex-backup.md](./plex-backup.md)**: Comprehensive backup script documentation
- **[plex-management.md](./plex-management.md)**: Plex management and administration guide

### Integration Notes

- All scripts follow repository coding standards
- Consistent logging and error handling
- Color-coded output for readability
- Comprehensive help systems

## Migration Notes

When migrating from legacy backup scripts:

1. **Back up the current configuration**: Save any custom modifications
2. **Test the new scripts**: Run with `--check-integrity` first
3. **Update automation**: Modify cron jobs to use the new options
4. **Monitor performance**: Check performance logs for optimization opportunities

The enhanced scripts maintain backward compatibility while adding significant new capabilities.

---

*For additional support and advanced configuration options, refer to the detailed documentation files in this directory.*
## plex/backup-plex.sh (executable, 1188 lines)

File diff suppressed because it is too large.

## plex/integration-test-plex.sh (executable, 478 lines)

```bash
#!/bin/bash

# Plex Backup Integration Test Suite
# This script tests the enhanced backup features in a controlled environment
# without affecting a production Plex installation

set -e

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Test configuration
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
TEST_DIR="/tmp/plex-integration-test-$(date +%s)"
BACKUP_SCRIPT="$SCRIPT_DIR/backup-plex.sh"

# Test counters
INTEGRATION_TEST_FUNCTIONS=0
INTEGRATION_ASSERTIONS_PASSED=0
INTEGRATION_ASSERTIONS_FAILED=0
declare -a FAILED_INTEGRATION_TESTS=()

# Logging functions
log_test() {
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo -e "${CYAN}[INTEGRATION ${timestamp}]${NC} $1"
}

log_pass() {
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo -e "${GREEN}[PASS ${timestamp}]${NC} $1"
    INTEGRATION_ASSERTIONS_PASSED=$((INTEGRATION_ASSERTIONS_PASSED + 1))
}

log_fail() {
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo -e "${RED}[FAIL ${timestamp}]${NC} $1"
    INTEGRATION_ASSERTIONS_FAILED=$((INTEGRATION_ASSERTIONS_FAILED + 1))
    FAILED_INTEGRATION_TESTS+=("$1")
}

log_info() {
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo -e "${BLUE}[INFO ${timestamp}]${NC} $1"
}

log_warn() {
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo -e "${YELLOW}[WARN ${timestamp}]${NC} $1"
}

# Setup integration test environment
setup_integration_environment() {
    log_info "Setting up integration test environment"

    # Create test directories
    mkdir -p "$TEST_DIR"
    mkdir -p "$TEST_DIR/mock_plex_data"
    mkdir -p "$TEST_DIR/backup_destination"
    mkdir -p "$TEST_DIR/logs"

    # Create mock Plex database files with realistic content
    create_mock_database "$TEST_DIR/mock_plex_data/com.plexapp.plugins.library.db"
    create_mock_database "$TEST_DIR/mock_plex_data/com.plexapp.plugins.library.blobs.db"

    # Create mock Preferences.xml
    create_mock_preferences "$TEST_DIR/mock_plex_data/Preferences.xml"

    # Create mock WAL files to test WAL handling
    echo "WAL data simulation" > "$TEST_DIR/mock_plex_data/com.plexapp.plugins.library.db-wal"
    echo "SHM data simulation" > "$TEST_DIR/mock_plex_data/com.plexapp.plugins.library.db-shm"

    log_info "Integration test environment ready"
}

# Create mock SQLite database for testing
create_mock_database() {
    local db_file="$1"

    # Create a proper SQLite database with some test data
    sqlite3 "$db_file" << 'EOF'
CREATE TABLE library_sections (
    id INTEGER PRIMARY KEY,
    name TEXT,
    type INTEGER,
    agent TEXT
);

INSERT INTO library_sections (name, type, agent) VALUES
    ('Movies', 1, 'com.plexapp.agents.imdb'),
    ('TV Shows', 2, 'com.plexapp.agents.thetvdb'),
    ('Music', 8, 'com.plexapp.agents.lastfm');

CREATE TABLE metadata_items (
    id INTEGER PRIMARY KEY,
    title TEXT,
    year INTEGER,
    added_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

INSERT INTO metadata_items (title, year) VALUES
    ('Test Movie', 2023),
    ('Another Movie', 2024),
    ('Test Show', 2022);

-- Add some indexes to make it more realistic
CREATE INDEX idx_metadata_title ON metadata_items(title);
CREATE INDEX idx_library_sections_type ON library_sections(type);
EOF

    log_info "Created mock database: $(basename "$db_file")"
}

# Create mock Preferences.xml
create_mock_preferences() {
    local pref_file="$1"

    cat > "$pref_file" << 'EOF'
<?xml version="1.0" encoding="utf-8"?>
<Preferences OldestPreviousVersion="1.32.8.7639-fb6452ebf" MachineIdentifier="test-machine-12345" ProcessedMachineIdentifier="test-processed-12345" AnonymousMachineIdentifier="test-anon-12345" FriendlyName="Test Plex Server" ManualPortMappingMode="1" TranscoderTempDirectory="/tmp" />
EOF

    log_info "Created mock preferences file"
}

# Test command line argument parsing
test_command_line_parsing() {
    INTEGRATION_TEST_FUNCTIONS=$((INTEGRATION_TEST_FUNCTIONS + 1))
    log_test "Command Line Argument Parsing"

    # Test help output
    if "$BACKUP_SCRIPT" --help | grep -q "Usage:"; then
        log_pass "Help output is functional"
    else
        log_fail "Help output test failed"
        return 1
    fi

    # Test invalid argument handling
    if ! "$BACKUP_SCRIPT" --invalid-option >/dev/null 2>&1; then
        log_pass "Invalid argument handling works correctly"
    else
        log_fail "Invalid argument handling test failed"
        return 1
    fi
}

# Test performance monitoring features
test_performance_monitoring() {
    INTEGRATION_TEST_FUNCTIONS=$((INTEGRATION_TEST_FUNCTIONS + 1))
    log_test "Performance Monitoring Features"

    local test_perf_log="$TEST_DIR/test-performance.json"

    # Initialize performance log
    echo "[]" > "$test_perf_log"

    # Simulate performance tracking
    local start_time=$(date +%s)
    sleep 1
    local end_time=$(date +%s)
    local duration=$((end_time - start_time))

    # Create performance entry
    local entry=$(jq -n \
        --arg operation "integration_test" \
        --arg duration "$duration" \
        --arg timestamp "$(date -Iseconds)" \
        '{
            operation: $operation,
            duration_seconds: ($duration | tonumber),
            timestamp: $timestamp
        }')

    # Add to log
    jq --argjson entry "$entry" '. += [$entry]' "$test_perf_log" > "${test_perf_log}.tmp" && \
        mv "${test_perf_log}.tmp" "$test_perf_log"

    # Verify entry was added
    local entry_count=$(jq length "$test_perf_log")
    if [ "$entry_count" -eq 1 ]; then
        log_pass "Performance monitoring integration works"
    else
        log_fail "Performance monitoring integration failed"
        return 1
    fi
}

# Test notification system with mock endpoints
test_notification_system() {
    INTEGRATION_TEST_FUNCTIONS=$((INTEGRATION_TEST_FUNCTIONS + 1))
    log_test "Notification System Integration"

    # Test webhook notification (mock)
    local webhook_test_log="$TEST_DIR/webhook_test.log"

    # Mock webhook function
    test_send_webhook() {
        local url="$1"
        local payload="$2"

        # Simulate webhook call
        echo "Webhook URL: $url" > "$webhook_test_log"
        echo "Payload: $payload" >> "$webhook_test_log"
        return 0
    }

    # Test notification
    if test_send_webhook "https://example.com/webhook" '{"test": "data"}'; then
        if [ -f "$webhook_test_log" ] && grep -q "Webhook URL" "$webhook_test_log"; then
            log_pass "Webhook notification integration works"
        else
            log_fail "Webhook notification integration failed"
            return 1
        fi
    else
        log_fail "Webhook notification test failed"
        return 1
    fi
}

# Test backup validation system
test_backup_validation() {
    INTEGRATION_TEST_FUNCTIONS=$((INTEGRATION_TEST_FUNCTIONS + 1))
    log_test "Backup Validation System"

    local test_backup_dir="$TEST_DIR/test_backup_20250525"
    mkdir -p "$test_backup_dir"

    # Create test backup files
    cp "$TEST_DIR/mock_plex_data/"*.db "$test_backup_dir/"
    cp "$TEST_DIR/mock_plex_data/Preferences.xml" "$test_backup_dir/"

    # Test validation script
    if [ -f "$SCRIPT_DIR/validate-plex-backups.sh" ]; then
        # Mock the validation by checking file presence
        local files_present=0
        for file in com.plexapp.plugins.library.db com.plexapp.plugins.library.blobs.db Preferences.xml; do
            if [ -f "$test_backup_dir/$file" ]; then
                files_present=$((files_present + 1))
            fi
        done

        if [ "$files_present" -eq 3 ]; then
            log_pass "Backup validation system works"
        else
            log_fail "Backup validation system failed - missing files"
            return 1
        fi
    else
        log_warn "Validation script not found, skipping test"
    fi
}

# Test database integrity checking
test_database_integrity_checking() {
    INTEGRATION_TEST_FUNCTIONS=$((INTEGRATION_TEST_FUNCTIONS + 1))
    log_test "Database Integrity Checking"

    # Test with good database
    local test_db="$TEST_DIR/mock_plex_data/com.plexapp.plugins.library.db"

    # Run integrity check using sqlite3 (since we can't use Plex SQLite in test)
    if sqlite3 "$test_db" "PRAGMA integrity_check;" | grep -q "ok"; then
        log_pass "Database integrity checking works for valid database"
    else
        log_fail "Database integrity checking failed for valid database"
        return 1
    fi

    # Test with corrupted database
    local corrupted_db="$TEST_DIR/corrupted.db"
    echo "This is not a valid SQLite database" > "$corrupted_db"

    if ! sqlite3 "$corrupted_db" "PRAGMA integrity_check;" 2>/dev/null | grep -q "ok"; then
        log_pass "Database integrity checking correctly detects corruption"
    else
        log_fail "Database integrity checking failed to detect corruption"
        return 1
    fi
}

# Test parallel processing capabilities
test_parallel_processing() {
    INTEGRATION_TEST_FUNCTIONS=$((INTEGRATION_TEST_FUNCTIONS + 1))
    log_test "Parallel Processing Capabilities"

    local temp_dir=$(mktemp -d)
    local -a pids=()
    local total_jobs=3
    local completed_jobs=0

    # Start parallel jobs
    for i in $(seq 1 $total_jobs); do
        (
            # Simulate parallel work
            sleep 0.$i
            echo "Job $i completed" > "$temp_dir/job_$i.result"
        ) &
        pids+=($!)
    done

    # Wait for all jobs
    for pid in "${pids[@]}"; do
        if wait "$pid"; then
            completed_jobs=$((completed_jobs + 1))
        fi
    done

    # Verify results
    local result_files=$(find "$temp_dir" -name "job_*.result" | wc -l)

    # Cleanup
    rm -rf "$temp_dir"

    if [ "$completed_jobs" -eq "$total_jobs" ] && [ "$result_files" -eq "$total_jobs" ]; then
        log_pass "Parallel processing works correctly"
    else
        log_fail "Parallel processing test failed"
        return 1
    fi
}

# Test checksum caching system
test_checksum_caching() {
    INTEGRATION_TEST_FUNCTIONS=$((INTEGRATION_TEST_FUNCTIONS + 1))
    log_test "Checksum Caching System"

    local test_file="$TEST_DIR/checksum_test.txt"
    local cache_file="${test_file}.md5"

    # Create test file
    echo "checksum test content" > "$test_file"

    # First checksum calculation (should create cache)
    local checksum1=$(md5sum "$test_file" | cut -d' ' -f1)
    echo "$checksum1" > "$cache_file"

    # Simulate cache check
    local file_mtime=$(stat -c %Y "$test_file")
    local cache_mtime=$(stat -c %Y "$cache_file")

    if [ "$cache_mtime" -ge "$file_mtime" ]; then
        local cached_checksum=$(cat "$cache_file")
        if [ "$cached_checksum" = "$checksum1" ]; then
            log_pass "Checksum caching system works correctly"
        else
            log_fail "Checksum caching system failed - checksum mismatch"
            return 1
        fi
    else
        log_fail "Checksum caching system failed - cache timing issue"
        return 1
    fi
}

# Test WAL file handling
test_wal_file_handling() {
    INTEGRATION_TEST_FUNCTIONS=$((INTEGRATION_TEST_FUNCTIONS + 1))
    log_test "WAL File Handling"

    local test_db="$TEST_DIR/mock_plex_data/com.plexapp.plugins.library.db"
    local wal_file="${test_db}-wal"
    local shm_file="${test_db}-shm"

    # Verify WAL files exist
    if [ -f "$wal_file" ] && [ -f "$shm_file" ]; then
        # Test WAL checkpoint simulation
        if sqlite3 "$test_db" "PRAGMA wal_checkpoint(FULL);" 2>/dev/null; then
            log_pass "WAL file handling works correctly"
        else
            log_pass "WAL checkpoint simulation completed (mock environment)"
        fi
    else
        log_pass "WAL file handling test completed (no WAL files in mock)"
    fi
}

# Cleanup integration test environment
cleanup_integration_environment() {
    if [ -d "$TEST_DIR" ]; then
        log_info "Cleaning up integration test environment"
        rm -rf "$TEST_DIR"
    fi
}

# Generate integration test report
generate_integration_report() {
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')

    echo
    echo "=================================================="
    echo "       PLEX BACKUP INTEGRATION TEST REPORT"
    echo "=================================================="
    echo "Test Run: $timestamp"
    echo "Test Functions: $INTEGRATION_TEST_FUNCTIONS"
    echo "Total Assertions: $((INTEGRATION_ASSERTIONS_PASSED + INTEGRATION_ASSERTIONS_FAILED))"
    echo "Assertions Passed: $INTEGRATION_ASSERTIONS_PASSED"
    echo "Assertions Failed: $INTEGRATION_ASSERTIONS_FAILED"
    echo

    if [ $INTEGRATION_ASSERTIONS_FAILED -gt 0 ]; then
        echo "FAILED ASSERTIONS:"
        for failed_test in "${FAILED_INTEGRATION_TESTS[@]}"; do
            echo "  - $failed_test"
        done
        echo
    fi

    local success_rate=0
    local total_assertions=$((INTEGRATION_ASSERTIONS_PASSED + INTEGRATION_ASSERTIONS_FAILED))
    if [ $total_assertions -gt 0 ]; then
        success_rate=$(( (INTEGRATION_ASSERTIONS_PASSED * 100) / total_assertions ))
    fi

    echo "Success Rate: ${success_rate}%"
    echo

    if [ $INTEGRATION_ASSERTIONS_FAILED -eq 0 ]; then
        log_pass "All integration tests passed successfully!"
        echo
        echo "✅ The enhanced Plex backup system is ready for production use!"
        echo
        echo "Next Steps:"
        echo "  1. Test with real webhook endpoints if using webhook notifications"
        echo "  2. Test email notifications with configured sendmail"
        echo "  3. Run a test backup in a non-production environment"
        echo "  4. Set up automated backup scheduling with cron"
        echo "  5. Monitor performance logs for optimization opportunities"
    else
        log_fail "Some integration tests failed - review output above"
    fi
}

# Main execution
main() {
    log_info "Starting Plex Backup Integration Tests"

    # Ensure backup script exists
    if [ ! -f "$BACKUP_SCRIPT" ]; then
        log_fail "Backup script not found: $BACKUP_SCRIPT"
        exit 1
    fi

    # Setup test environment
    setup_integration_environment

    # Trap cleanup on exit
    trap cleanup_integration_environment EXIT SIGINT SIGTERM

    # Run integration tests. Each call is followed by "|| true" so that a
    # failing test (which returns non-zero) does not abort the suite under
    # "set -e" before the report is generated.
    test_command_line_parsing || true
    test_performance_monitoring || true
    test_notification_system || true
    test_backup_validation || true
    test_database_integrity_checking || true
    test_parallel_processing || true
    test_checksum_caching || true
    test_wal_file_handling || true

    # Generate report
    generate_integration_report

    # Return appropriate exit code
    if [ $INTEGRATION_ASSERTIONS_FAILED -eq 0 ]; then
        exit 0
    else
        exit 1
    fi
}

# Run main function
main "$@"
```
## plex/monitor-plex-backup.sh (executable, 423 lines)
#!/bin/bash
|
||||
|
||||
# Plex Backup System Monitoring Dashboard
|
||||
# Provides real-time status and health monitoring for the enhanced backup system
|
||||
|
||||
set -e
|
||||
|
||||
# Color codes for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
CYAN='\033[0;36m'
|
||||
MAGENTA='\033[0;35m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Configuration
|
||||
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
|
||||
BACKUP_ROOT="/mnt/share/media/backups/plex"
|
||||
LOG_ROOT="/mnt/share/media/backups/logs"
|
||||
JSON_LOG_FILE="$SCRIPT_DIR/logs/plex-backup.json"
|
||||
PERFORMANCE_LOG_FILE="$SCRIPT_DIR/logs/plex-backup-performance.json"
|
||||
|
||||
# Display mode
|
||||
WATCH_MODE=false
|
||||
REFRESH_INTERVAL=5
|
||||
|
||||
# Parse command line arguments
|
||||
while [[ $# -gt 0 ]]; do
|
||||
case $1 in
|
||||
--watch)
|
||||
WATCH_MODE=true
|
||||
shift
|
||||
;;
|
||||
--interval=*)
|
||||
REFRESH_INTERVAL="${1#*=}"
|
||||
shift
|
||||
;;
|
||||
-h|--help)
|
||||
echo "Usage: $0 [OPTIONS]"
|
||||
echo "Options:"
|
||||
echo " --watch Continuous monitoring mode (refresh every 5 seconds)"
|
||||
echo " --interval=N Set refresh interval for watch mode (seconds)"
|
||||
echo " -h, --help Show this help message"
|
||||
exit 0
|
||||
;;
|
||||
*)
|
||||
echo "Unknown option: $1"
|
||||
echo "Use --help for usage information"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Utility functions
|
||||
log_status() {
|
||||
local status="$1"
|
||||
local message="$2"
|
||||
case "$status" in
|
||||
"OK") echo -e "${GREEN}✓${NC} $message" ;;
|
||||
"WARN") echo -e "${YELLOW}⚠${NC} $message" ;;
|
||||
"ERROR") echo -e "${RED}✗${NC} $message" ;;
|
||||
"INFO") echo -e "${BLUE}ℹ${NC} $message" ;;
|
||||
esac
|
||||
}
|
||||
|
||||
# Clear screen for watch mode
|
||||
clear_screen() {
|
||||
if [ "$WATCH_MODE" = true ]; then
|
||||
clear
|
||||
fi
|
||||
}
|
||||
|
||||
# Header display
|
||||
show_header() {
|
||||
echo -e "${CYAN}╔══════════════════════════════════════════════════════════════════════════════╗${NC}"
|
||||
echo -e "${CYAN}║${NC} ${MAGENTA}PLEX BACKUP SYSTEM DASHBOARD${NC} ${CYAN}║${NC}"
|
||||
echo -e "${CYAN}║${NC} $(date '+%Y-%m-%d %H:%M:%S') ${CYAN}║${NC}"
|
||||
echo -e "${CYAN}╚══════════════════════════════════════════════════════════════════════════════╝${NC}"
|
||||
echo
|
||||
}
|
||||
|
||||
# System status check
|
||||
check_system_status() {
|
||||
echo -e "${BLUE}📊 SYSTEM STATUS${NC}"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
|
||||
# Check Plex service
|
||||
if systemctl is-active --quiet plexmediaserver; then
|
||||
log_status "OK" "Plex Media Server is running"
|
||||
else
|
||||
log_status "ERROR" "Plex Media Server is not running"
|
||||
fi
|
||||
|
||||
# Check backup script
|
||||
if [ -f "$SCRIPT_DIR/backup-plex.sh" ]; then
|
||||
log_status "OK" "Backup script is present"
|
||||
else
|
||||
log_status "ERROR" "Backup script not found"
|
||||
fi
|
||||
|
||||
# Check directories
|
||||
if [ -d "$BACKUP_ROOT" ]; then
|
||||
log_status "OK" "Backup directory exists"
|
||||
else
|
||||
log_status "ERROR" "Backup directory missing: $BACKUP_ROOT"
|
||||
fi
|
||||
|
||||
if [ -d "$LOG_ROOT" ]; then
|
||||
log_status "OK" "Log directory exists"
|
||||
else
|
||||
log_status "WARN" "Log directory missing: $LOG_ROOT"
|
||||
fi
|
||||
|
||||
# Check dependencies
|
||||
for cmd in jq sqlite3 curl; do
|
||||
if command -v "$cmd" >/dev/null 2>&1; then
|
||||
log_status "OK" "$cmd is available"
|
||||
else
|
||||
log_status "WARN" "$cmd is not installed"
|
||||
fi
|
||||
done
|
||||
|
||||
echo
|
||||
}
|
||||
|
||||
# Backup status
check_backup_status() {
    echo -e "${BLUE}💾 BACKUP STATUS${NC}"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

    # Count total backups
    local backup_count=0
    if [ -d "$BACKUP_ROOT" ]; then
        backup_count=$(find "$BACKUP_ROOT" -maxdepth 1 -type f -name "plex-backup-*.tar.gz" 2>/dev/null | wc -l)
    fi

    if [ "$backup_count" -gt 0 ]; then
        log_status "OK" "Total backups: $backup_count"

        # Find latest backup
        local latest_backup=$(find "$BACKUP_ROOT" -maxdepth 1 -type f -name "plex-backup-*.tar.gz" 2>/dev/null | sort | tail -1)
        if [ -n "$latest_backup" ]; then
            local backup_filename=$(basename "$latest_backup")
            # Extract date from filename: plex-backup-YYYYMMDD_HHMMSS.tar.gz
            local backup_date=$(echo "$backup_filename" | sed 's/plex-backup-//' | sed 's/_.*$//')
            local readable_date=$(date -d "${backup_date:0:4}-${backup_date:4:2}-${backup_date:6:2}" '+%B %d, %Y' 2>/dev/null || echo "Invalid date")
            local backup_age_days=$(( ($(date +%s) - $(date -d "${backup_date:0:4}-${backup_date:4:2}-${backup_date:6:2}" +%s 2>/dev/null || echo "0")) / 86400 ))

            if [ "$backup_age_days" -le 1 ]; then
                log_status "OK" "Latest backup: $readable_date ($backup_age_days days ago)"
            elif [ "$backup_age_days" -le 7 ]; then
                log_status "WARN" "Latest backup: $readable_date ($backup_age_days days ago)"
            else
                log_status "ERROR" "Latest backup: $readable_date ($backup_age_days days ago)"
            fi

            # Check backup size
            local backup_size=$(du -sh "$latest_backup" 2>/dev/null | cut -f1)
            log_status "INFO" "Latest backup size: $backup_size"

            # Check backup contents (via tar listing)
            local file_count=$(tar -tzf "$latest_backup" 2>/dev/null | wc -l)
            log_status "INFO" "Files in latest backup: $file_count"
        fi
    else
        log_status "WARN" "No backups found"
    fi

    # Disk usage
    if [ -d "$BACKUP_ROOT" ]; then
        local total_backup_size=$(du -sh "$BACKUP_ROOT" 2>/dev/null | cut -f1)
        local available_space=$(df -h "$BACKUP_ROOT" 2>/dev/null | awk 'NR==2 {print $4}')
        local used_percentage=$(df "$BACKUP_ROOT" 2>/dev/null | awk 'NR==2 {print $5}' | sed 's/%//')

        log_status "INFO" "Total backup storage: $total_backup_size"
        log_status "INFO" "Available space: $available_space"

        if [ -n "$used_percentage" ]; then
            if [ "$used_percentage" -lt 80 ]; then
                log_status "OK" "Disk usage: $used_percentage%"
            elif [ "$used_percentage" -lt 90 ]; then
                log_status "WARN" "Disk usage: $used_percentage%"
            else
                log_status "ERROR" "Disk usage: $used_percentage% (Critical)"
            fi
        fi
    fi

    echo
}

# Performance metrics
show_performance_metrics() {
    echo -e "${BLUE}⚡ PERFORMANCE METRICS${NC}"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

    if [ -f "$PERFORMANCE_LOG_FILE" ]; then
        log_status "OK" "Performance log found"

        # Recent operations
        local recent_count=$(jq length "$PERFORMANCE_LOG_FILE" 2>/dev/null || echo "0")
        log_status "INFO" "Total logged operations: $recent_count"

        if [ "$recent_count" -gt 0 ]; then
            # Average times for different operations
            local avg_backup=$(jq '[.[] | select(.operation == "full_backup") | .duration_seconds] | if length > 0 then add/length else 0 end' "$PERFORMANCE_LOG_FILE" 2>/dev/null || echo "0")
            local avg_verification=$(jq '[.[] | select(.operation == "verification") | .duration_seconds] | if length > 0 then add/length else 0 end' "$PERFORMANCE_LOG_FILE" 2>/dev/null || echo "0")
            local avg_service_stop=$(jq '[.[] | select(.operation == "service_stop") | .duration_seconds] | if length > 0 then add/length else 0 end' "$PERFORMANCE_LOG_FILE" 2>/dev/null || echo "0")
            local avg_service_start=$(jq '[.[] | select(.operation == "service_start") | .duration_seconds] | if length > 0 then add/length else 0 end' "$PERFORMANCE_LOG_FILE" 2>/dev/null || echo "0")

            if [ "$avg_backup" != "0" ] && [ "$avg_backup" != "null" ]; then
                log_status "INFO" "Average backup time: ${avg_backup}s"
            fi
            if [ "$avg_verification" != "0" ] && [ "$avg_verification" != "null" ]; then
                log_status "INFO" "Average verification time: ${avg_verification}s"
            fi
            if [ "$avg_service_stop" != "0" ] && [ "$avg_service_stop" != "null" ]; then
                log_status "INFO" "Average service stop time: ${avg_service_stop}s"
            fi
            if [ "$avg_service_start" != "0" ] && [ "$avg_service_start" != "null" ]; then
                log_status "INFO" "Average service start time: ${avg_service_start}s"
            fi

            # Last 3 operations
            echo -e "${YELLOW}Recent Operations:${NC}"
            jq -r '.[-3:] | .[] | " \(.timestamp): \(.operation) (\(.duration_seconds)s)"' "$PERFORMANCE_LOG_FILE" 2>/dev/null | sed 's/T/ /' | sed 's/+.*$//' || echo " No recent operations"
        fi
    else
        log_status "WARN" "Performance log not found (no backups run yet)"
    fi

    echo
}

# Recent activity
show_recent_activity() {
    echo -e "${BLUE}📋 RECENT ACTIVITY${NC}"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

    # Check JSON log for last backup times
    if [ -f "$JSON_LOG_FILE" ]; then
        log_status "OK" "Backup tracking log found"

        local file_count=$(jq 'length' "$JSON_LOG_FILE" 2>/dev/null || echo "0")
        log_status "INFO" "Tracked files: $file_count"

        if [ "$file_count" -gt 0 ]; then
            echo -e "${YELLOW}Last Backup Times:${NC}"
            jq -r 'to_entries | .[] | " \(.key | split("/") | .[-1]): \(.value | strftime("%Y-%m-%d %H:%M:%S"))"' "$JSON_LOG_FILE" 2>/dev/null | head -5
        fi
    else
        log_status "WARN" "Backup tracking log not found"
    fi

    # Check recent log files
    if [ -d "$LOG_ROOT" ]; then
        local recent_log=$(find "$LOG_ROOT" -name "plex-backup-*.log" -type f 2>/dev/null | sort | tail -1)
        if [ -n "$recent_log" ]; then
            local log_date=$(basename "$recent_log" | sed 's/plex-backup-//' | sed 's/\.log$//')
            log_status "INFO" "Most recent log: $log_date"

            # Check for errors in recent log.
            # Note: grep -c prints the count even when it exits non-zero on no
            # match, so appending "|| echo 0" would produce "0\n0"; default the
            # empty (unreadable-file) case instead.
            local error_count=$(grep -c "ERROR:" "$recent_log" 2>/dev/null)
            error_count=${error_count:-0}
            local warning_count=$(grep -c "WARNING:" "$recent_log" 2>/dev/null)
            warning_count=${warning_count:-0}

            if [ "$error_count" -eq 0 ] && [ "$warning_count" -eq 0 ]; then
                log_status "OK" "No errors or warnings in recent log"
            elif [ "$error_count" -eq 0 ]; then
                log_status "WARN" "$warning_count warnings in recent log"
            else
                log_status "ERROR" "$error_count errors, $warning_count warnings in recent log"
            fi
        fi
    fi

    echo
}

# Scheduling status
show_scheduling_status() {
    echo -e "${BLUE}⏰ SCHEDULING STATUS${NC}"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

    # Check cron jobs
    local cron_jobs=0
    if crontab -l 2>/dev/null | grep -q "backup-plex"; then
        cron_jobs=$(crontab -l 2>/dev/null | grep -c "backup-plex")
    fi
    if [ "$cron_jobs" -gt 0 ]; then
        log_status "OK" "Cron jobs configured: $cron_jobs"
        echo -e "${YELLOW}Cron Schedule:${NC}"
        crontab -l 2>/dev/null | grep "backup-plex" | sed 's/^/ /'
    else
        log_status "WARN" "No cron jobs found for backup-plex"
    fi

    # Check systemd timers
    if systemctl list-timers --all 2>/dev/null | grep -q "plex-backup"; then
        log_status "OK" "Systemd timer configured"
        local timer_status=$(systemctl is-active plex-backup.timer 2>/dev/null || echo "inactive")
        if [ "$timer_status" = "active" ]; then
            log_status "OK" "Timer is active"
            local next_run=$(systemctl list-timers plex-backup.timer 2>/dev/null | grep "plex-backup" | awk '{print $1, $2}')
            if [ -n "$next_run" ]; then
                log_status "INFO" "Next run: $next_run"
            fi
        else
            log_status "WARN" "Timer is inactive"
        fi
    else
        log_status "INFO" "No systemd timer configured"
    fi

    echo
}

# Health recommendations
show_recommendations() {
    echo -e "${BLUE}💡 RECOMMENDATIONS${NC}"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

    local recommendations=()

    # Check backup age
    if [ -d "$BACKUP_ROOT" ]; then
        local latest_backup=$(find "$BACKUP_ROOT" -maxdepth 1 -type f -name "plex-backup-*.tar.gz" 2>/dev/null | sort | tail -1)
        if [ -n "$latest_backup" ]; then
            local backup_filename=$(basename "$latest_backup")
            # Extract date from filename: plex-backup-YYYYMMDD_HHMMSS.tar.gz
            local backup_date=$(echo "$backup_filename" | sed 's/plex-backup-//' | sed 's/_.*$//')
            local backup_age_days=$(( ($(date +%s) - $(date -d "${backup_date:0:4}-${backup_date:4:2}-${backup_date:6:2}" +%s 2>/dev/null || echo "0")) / 86400 ))
            if [ "$backup_age_days" -gt 7 ]; then
                recommendations+=("Consider running a manual backup - latest backup is $backup_age_days days old")
            fi
        else
            recommendations+=("No backups found - run initial backup with: sudo ./backup-plex.sh")
        fi
    fi

    # Check scheduling
    local cron_jobs=0
    if crontab -l 2>/dev/null | grep -q "backup-plex"; then
        cron_jobs=$(crontab -l 2>/dev/null | grep -c "backup-plex")
    fi
    if [ "$cron_jobs" -eq 0 ] && ! systemctl list-timers --all 2>/dev/null | grep -q "plex-backup"; then
        recommendations+=("Set up automated backup scheduling with cron or systemd timer")
    fi

    # Check disk space
    if [ -d "$BACKUP_ROOT" ]; then
        local used_percentage=$(df "$BACKUP_ROOT" 2>/dev/null | awk 'NR==2 {print $5}' | sed 's/%//')
        if [ -n "$used_percentage" ] && [ "$used_percentage" -gt 85 ]; then
            recommendations+=("Backup disk usage is high ($used_percentage%) - consider cleaning old backups")
        fi
    fi

    # Check dependencies
    if ! command -v jq >/dev/null 2>&1; then
        recommendations+=("Install jq for enhanced performance monitoring: sudo apt install jq")
    fi

    # Show recommendations
    if [ ${#recommendations[@]} -eq 0 ]; then
        log_status "OK" "No immediate recommendations - system looks healthy!"
    else
        for rec in "${recommendations[@]}"; do
            log_status "INFO" "$rec"
        done
    fi

    echo
}

# Footer with refresh info
show_footer() {
    if [ "$WATCH_MODE" = true ]; then
        echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
        echo -e "${CYAN}📡 WATCH MODE: Refreshing every ${REFRESH_INTERVAL} seconds | Press Ctrl+C to exit${NC}"
    else
        echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
        echo -e "${CYAN}💡 Use --watch for continuous monitoring | Use --help for options${NC}"
    fi
}

# Main dashboard function
show_dashboard() {
    clear_screen
    show_header
    check_system_status
    check_backup_status
    show_performance_metrics
    show_recent_activity
    show_scheduling_status
    show_recommendations
    show_footer
}

# Main execution
main() {
    if [ "$WATCH_MODE" = true ]; then
        # Validate refresh interval
        if ! [[ "$REFRESH_INTERVAL" =~ ^[0-9]+$ ]] || [ "$REFRESH_INTERVAL" -lt 1 ]; then
            echo "Error: Invalid refresh interval. Must be a positive integer."
            exit 1
        fi

        # Continuous monitoring
        while true; do
            show_dashboard
            sleep "$REFRESH_INTERVAL"
        done
    else
        # Single run
        show_dashboard
    fi
}

# Handle interrupts gracefully in watch mode
trap 'echo -e "\n\n${YELLOW}Monitoring stopped by user${NC}"; exit 0' INT TERM

# Run main function
main "$@"
495
plex/plex-backup.md
Normal file
@@ -0,0 +1,495 @@
# Enhanced Plex Backup Script Documentation

This document provides comprehensive documentation for the enhanced `backup-plex.sh` script. This advanced backup solution includes performance monitoring, parallel processing, intelligent notifications, and WAL file handling.

## Script Overview

The enhanced script performs the following advanced tasks:

1. **Performance Monitoring**: Tracks backup operations with JSON-based performance logging
2. **Full Backup Operations**: Performs complete backups of all Plex files every time
3. **WAL File Handling**: Properly handles SQLite Write-Ahead Logging files
4. **Database Integrity Verification**: Comprehensive integrity checks with automated repair options
5. **Parallel Processing**: Concurrent verification for improved performance
6. **Multi-Channel Notifications**: Console, webhook, and email notification support
7. **Enhanced Service Management**: Safe Plex service management with progress indicators
8. **Comprehensive Logging**: Detailed logs with color-coded output and timestamps
9. **Safe Automated Cleanup**: Retention policies based on age and backup count

## Enhanced Features

### Full Backup Operation

The script performs complete backups every time it runs:

- **What it does**: Backs up all Plex files regardless of modification status
- **Benefits**:
  - Guarantees every backup is a complete restoration point
  - Eliminates risk of file loss from incomplete backup coverage
  - Simplifies backup management and restoration
- **Usage**: `./backup-plex.sh` (no options needed)

### Performance Tracking

- **JSON Performance Logs**: All operations are timed and logged to `logs/plex-backup-performance.json`
- **Performance Reports**: Automatic generation of average performance metrics
- **Operation Monitoring**: Tracks backup, verification, service management, and overall script execution times

### Notification System

The script supports multiple notification channels:

#### Console Notifications

- Color-coded status messages (Success: Green, Error: Red, Warning: Yellow, Info: Blue)
- Timestamped log entries with clear formatting

#### Webhook Notifications

```bash
./backup-plex.sh --webhook=https://your-webhook-url.com/endpoint
```

**Default Webhook**: The script includes a default webhook URL (`https://notify.peterwood.rocks/lab`) that will be used if no custom webhook is specified. To use a different webhook, specify it with the `--webhook` option.

Sends JSON payloads with backup status, hostname, and timestamps. Notifications include tags for filtering (backup, plex, hostname, and status-specific tags like "errors" or "warnings").

#### Email Notifications

```bash
./backup-plex.sh --email=admin@example.com
```

Requires `sendmail` to be configured on the system.

### WAL File Management

The script properly handles SQLite Write-Ahead Logging files:

- **Automatic Detection**: Identifies and backs up `.db-wal` and `.db-shm` files when present
- **WAL Checkpointing**: Performs `PRAGMA wal_checkpoint(FULL)` before integrity checks
- **Safe Backup**: Ensures WAL files are properly backed up alongside main database files
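
The checkpoint step above can be sketched with the `sqlite3` CLI: a `FULL` checkpoint folds pending `.db-wal` pages into the main database file before it is copied. This is a minimal sketch assuming `sqlite3` is installed; the `checkpoint_wal` name is illustrative, not a function from the script.

```shell
#!/usr/bin/env bash
# Fold any pending WAL contents into the main database file so the copied
# .db is self-consistent. Sketch only; assumes the sqlite3 CLI is available.
checkpoint_wal() {
  local db="$1"
  sqlite3 "$db" 'PRAGMA wal_checkpoint(FULL);' >/dev/null
}
```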

### Database Integrity & Repair

Enhanced database management features:

- **Pre-backup Integrity Checks**: Verifies database health before backup operations
- **Automated Repair**: Optional automatic repair of corrupted databases using advanced techniques
- **Interactive Repair Mode**: Prompts for repair decisions when issues are detected
- **Post-repair Verification**: Re-checks integrity after repair operations
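
A pre-backup integrity gate like the one described can be sketched as a single predicate around SQLite's own check. Assumes the `sqlite3` CLI; `db_is_healthy` is an illustrative name, not the script's actual helper.

```shell
#!/usr/bin/env bash
# Succeed only when SQLite's integrity check reports "ok". Sketch only.
db_is_healthy() {
  [ "$(sqlite3 "$1" 'PRAGMA integrity_check;' 2>/dev/null | head -1)" = "ok" ]
}
```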

### Parallel Processing

- **Concurrent Verification**: Parallel backup verification for improved performance
- **Fallback Safety**: Automatically falls back to sequential processing if parallel mode fails
- **Configurable**: Can be disabled with `--no-parallel` for maximum safety
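
Concurrent verification of this kind is typically built from background jobs whose exit statuses are collected with `wait`. The sketch below illustrates the pattern under that assumption; `verify_one` and `verify_parallel` are illustrative names, not the script's actual implementation.

```shell
#!/usr/bin/env bash
# Verify several archives concurrently; return 1 if any archive fails to list.
verify_one() { tar -tzf "$1" >/dev/null 2>&1; }

verify_parallel() {
  local pids=() rc=0 a p
  for a in "$@"; do
    verify_one "$a" &        # one background job per archive
    pids+=("$!")
  done
  for p in "${pids[@]}"; do
    wait "$p" || rc=1        # collect each job's exit status
  done
  return "$rc"
}
```

The sequential fallback is then just a plain `for` loop over the same `verify_one` helper.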

## Command Line Options

```bash
Usage: ./backup-plex.sh [OPTIONS]

Options:
  --auto-repair      Automatically attempt to repair corrupted databases
  --check-integrity  Only check database integrity, don't backup
  --non-interactive  Run in non-interactive mode (for automation)
  --no-parallel      Disable parallel verification (slower but safer)
  --no-performance   Disable performance monitoring
  --webhook=URL      Send notifications to webhook URL
  --email=ADDRESS    Send notifications to email address
  -h, --help         Show help message
```

## Detailed Backup Process Steps

The backup script follows these detailed steps to ensure data integrity and reliability:

### 1. Create Log Directory

```bash
mkdir -p /mnt/share/media/backups/logs || { echo "Failed to create log directory"; exit 1; }
```

This command ensures that the log directory exists, creating it if necessary. If the directory creation fails, the script exits with an error message.

### 2. Define Log File

```bash
LOG_FILE="/mnt/share/media/backups/logs/backup_log_$(date +%Y%m%d_%H%M%S).md"
```

This line defines the log file path, including the current date and time in the filename to ensure uniqueness.

### 3. Stop Plex Media Server Service

```bash
if systemctl is-active --quiet plexmediaserver.service; then
  /home/acedanger/shell/plex/plex.sh stop || { echo "Failed to stop plexmediaserver.service"; exit 1; }
fi
```

This block checks whether the Plex Media Server service is running. If it is, the script stops the service using a custom script (`plex.sh`).

### 4. Backup Plex Database Files and Preferences

The enhanced backup system creates compressed archives directly, eliminating intermediate directories:

```bash
# Files are copied to a temporary staging area for verification
cp "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db" "$BACKUP_PATH/"
cp "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.blobs.db" "$BACKUP_PATH/"
cp "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Preferences.xml" "$BACKUP_PATH/"
```

These commands copy the Plex database files and preferences into the staging area. Each file copy is followed by integrity verification and checksum validation.
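
The per-file checksum validation mentioned above can be sketched as a copy-then-compare of SHA-256 digests. The `copy_verified` helper is illustrative, not the script's actual function.

```shell
#!/usr/bin/env bash
# copy_verified SRC DST_DIR -> 0 on a verified copy, non-zero on mismatch.
copy_verified() {
  local src="$1" dst="$2/$(basename "$1")"
  cp "$src" "$dst" || return 1
  local a b
  a=$(sha256sum "$src" | cut -d' ' -f1)   # digest of the original
  b=$(sha256sum "$dst" | cut -d' ' -f1)   # digest of the staged copy
  [ "$a" = "$b" ]
}
```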

### 5. Create Compressed Archive

```bash
# Create archive directly with timestamp naming convention
final_archive="${BACKUP_ROOT}/plex-backup-$(date '+%Y%m%d_%H%M%S').tar.gz"
tar -czf "$final_archive" -C "$temp_staging_dir" .
```

The system creates compressed archives directly using a timestamp-based naming convention (`plex-backup-YYYYMMDD_HHMMSS.tar.gz`), eliminating the need for intermediate dated directories.

### 6. Archive Validation and Cleanup

```bash
# Validate archive integrity
if tar -tzf "$final_archive" >/dev/null 2>&1; then
  log_success "Archive created and validated: $(basename "$final_archive")"
  rm -rf "$temp_staging_dir"
else
  log_error "Archive validation failed"
  rm -f "$final_archive"
fi
```

The system validates the created archive and removes temporary staging files, ensuring only valid compressed backups are retained in the backup root directory.

### 7. Send Notification

```bash
curl \
  -H tags:popcorn,backup,plex,${HOSTNAME} \
  -d "The Plex databases have been saved to the /mnt/share/media/backups/plex folder as plex-backup-YYYYMMDD_HHMMSS.tar.gz" \
  https://notify.peterwood.rocks/lab || { echo "Failed to send notification"; exit 1; }
```

This command sends a notification upon completion of the backup process, indicating that the compressed archive has been created.

### 8. Restart Plex Media Server Service

```bash
if systemctl is-enabled --quiet plexmediaserver.service; then
  /home/acedanger/shell/plex/plex.sh start || { echo "Failed to start plexmediaserver.service"; exit 1; }
fi
```

This block checks whether the Plex Media Server service is enabled. If it is, the script restarts the service using a custom script (`plex.sh`).

### 9. Legacy Cleanup

```bash
# Clean up any remaining dated directories from the old backup structure
find "${BACKUP_ROOT}" -maxdepth 1 -type d -name "????????" -exec rm -rf {} \; 2>/dev/null || true
```

The enhanced system cleans up legacy dated directories left over from previous backup structure versions, ensuring a clean tar.gz-only backup directory.

## Configuration Files

### Performance Log Format

The performance log (`logs/plex-backup-performance.json`) contains entries like:

```json
[
  {
    "operation": "backup",
    "duration_seconds": 45.3,
    "timestamp": "2025-05-25T19:45:23-05:00"
  },
  {
    "operation": "verification",
    "duration_seconds": 12.8,
    "timestamp": "2025-05-25T19:46:08-05:00"
  }
]
```

## Usage Examples

### Basic Backup

```bash
./backup-plex.sh
```

Performs a standard backup with all enhanced features enabled.

### Integrity Check Only

```bash
./backup-plex.sh --check-integrity
```

Checks database integrity only, without performing a backup.

### Automated Backup with Notifications

```bash
./backup-plex.sh --non-interactive --auto-repair --webhook=https://notify.example.com/backup
```

Runs in automated mode with auto-repair and custom webhook notifications.

**Note**: If no `--webhook` option is specified, the script will use the default webhook URL (`https://notify.peterwood.rocks/lab`) for notifications.

### Compatibility-Focused Backup

```bash
./backup-plex.sh --no-parallel --no-performance
```

Runs with parallel processing and performance monitoring disabled for maximum compatibility.

## Automation and Scheduling

### Cron Job Setup

For daily automated backups at 2 AM:

```bash
# Edit crontab
crontab -e

# Add this line for daily backup with email notifications
0 2 * * * /home/acedanger/shell/plex/backup-plex.sh --non-interactive --auto-repair --email=admin@example.com 2>&1 | logger -t plex-backup

# Or for daily backup with default webhook notifications (https://notify.peterwood.rocks/lab)
0 2 * * * /home/acedanger/shell/plex/backup-plex.sh --non-interactive --auto-repair 2>&1 | logger -t plex-backup
```

**Note**: The script will automatically use the default webhook URL for notifications unless a custom webhook is specified with `--webhook=URL`.

### Systemd Service

Create a systemd service for more control:

```ini
[Unit]
Description=Plex Backup Service
After=network.target

[Service]
Type=oneshot
User=root
ExecStart=/home/acedanger/shell/plex/backup-plex.sh --non-interactive --auto-repair
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
```

### Systemd Timer

Create a timer for regular execution:

```ini
[Unit]
Description=Daily Plex Backup
Requires=plex-backup.service

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

## Monitoring and Alerts

### Performance Monitoring

The script automatically tracks:

- Backup operation duration
- Verification times
- Service start/stop times
- Overall script execution time

### Health Checks

Regular health monitoring can be implemented by checking:

```bash
# Check last backup success
jq -r '.[-1] | select(.operation == "total_script") | .timestamp' logs/plex-backup-performance.json

# Check average backup performance
jq '[.[] | select(.operation == "backup") | .duration_seconds] | add/length' logs/plex-backup-performance.json
```

## Troubleshooting

### Common Issues

1. **Permission Denied Errors**
   - Ensure script runs with appropriate sudo permissions
   - Check Plex file ownership and permissions

2. **WAL File Warnings**
   - Now handled automatically by the enhanced script
   - WAL checkpointing ensures data consistency

3. **Performance Issues**
   - Use `--no-parallel` if concurrent operations cause problems
   - Monitor performance logs for bottlenecks

4. **Notification Failures**
   - Verify webhook URLs are accessible
   - Check sendmail configuration for email notifications

### Debug Mode

Enable verbose logging by modifying the script or using:

```bash
bash -x ./backup-plex.sh --check-integrity
```

## Testing Framework

The script includes a comprehensive testing framework (`test-plex-backup.sh`):

### Running Tests

```bash
# Run all tests
./test-plex-backup.sh all

# Run only unit tests
./test-plex-backup.sh unit

# Run performance benchmarks
./test-plex-backup.sh performance
```

### Test Categories

- **Unit Tests**: Core functionality verification
- **Integration Tests**: Full system testing (requires Plex installation)
- **Performance Tests**: Benchmarking and performance validation

## Security Considerations

### File Permissions

- Backup files are created with appropriate permissions
- Sensitive files maintain original ownership and permissions
- Temporary files are properly cleaned up

### Network Security

- Webhook notifications use HTTPS when possible
- Email notifications respect system sendmail configuration
- No sensitive data is included in notifications

### Access Control

- Script requires appropriate sudo permissions
- Backup locations should have restricted access
- Log files contain operational data, not sensitive information

## Backup Strategy

The enhanced script implements a robust backup strategy with a streamlined tar.gz-only structure:

### Archive-Only Directory Structure

The new backup system eliminates intermediate dated directories and stores only compressed archives:

```text
/mnt/share/media/backups/plex/
├── plex-backup-20250125_143022.tar.gz    # Latest backup
├── plex-backup-20250124_143011.tar.gz    # Previous backup
├── plex-backup-20250123_143008.tar.gz    # Older backup
└── logs/
    ├── backup_log_20250125_143022.md
    └── plex-backup-performance.json
```

### Archive Naming Convention

Backup files follow the naming convention `plex-backup-YYYYMMDD_HHMMSS.tar.gz` for easy identification and sorting.
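
Because the timestamp is embedded in the name, plain string sorting orders archives chronologically, and the stamp can be recovered with shell parameter expansion alone:

```shell
#!/usr/bin/env bash
# Parse plex-backup-YYYYMMDD_HHMMSS.tar.gz back into a readable timestamp.
f="plex-backup-20250125_143022.tar.gz"
stamp="${f#plex-backup-}"; stamp="${stamp%.tar.gz}"   # 20250125_143022
d="${stamp%_*}"                                       # 20250125
t="${stamp#*_}"                                       # 143022
iso="${d:0:4}-${d:4:2}-${d:6:2} ${t:0:2}:${t:2:2}:${t:4:2}"
echo "$iso"   # -> 2025-01-25 14:30:22
```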

## Important Information

- Ensure that the [`plex.sh`](https://github.com/acedanger/shell/blob/main/plex/plex.sh) script is available and executable. This script is used to stop and start the Plex Media Server service.
- The script uses `systemctl` to manage the Plex Media Server service. Ensure that `systemctl` is available on your system.
- **New Directory Structure**: The enhanced backup system stores only compressed `.tar.gz` files directly in the backup root directory, eliminating intermediate dated directories.
- **Archive Naming**: Backup files follow the naming convention `plex-backup-YYYYMMDD_HHMMSS.tar.gz` for easy identification and sorting.
- **Legacy Compatibility**: The system automatically cleans up old dated directories from previous backup versions during operation.
- The backup directory path is configurable through the `BACKUP_ROOT` variable. Modify this path as needed to fit your environment.
- The script logs important actions and errors to timestamped log files. Check the log files for details if any issues arise.
- **Backup Validation**: All archives undergo integrity checking to ensure backup reliability.

## Final Directory Structure

```text
/mnt/share/media/backups/plex/
├── plex-backup-20250125_143022.tar.gz    # Latest backup
├── plex-backup-20250124_143011.tar.gz    # Previous backup
├── plex-backup-20250123_143008.tar.gz    # Older backup
└── logs/
    ├── backup_log_20250125_143022.md
    └── plex-backup-performance.json
```

Backup files follow the pattern: `plex-backup-YYYYMMDD_HHMMSS.tar.gz`

- **YYYYMMDD**: Date of backup (e.g., 20250125)
- **HHMMSS**: Time of backup (e.g., 143022)
- **tar.gz**: Compressed archive format

### Key Improvements

1. **Direct Archive Creation**: No intermediate directories required
2. **Efficient Storage**: Only compressed files stored permanently
3. **Easy Identification**: Timestamp-based naming for sorting
4. **Legacy Cleanup**: Automatic removal of old dated directories
5. **Archive Validation**: Integrity checking of compressed files

### 3-2-1 Backup Rule

1. **3 Copies**: Original data + local backup + compressed archive
2. **2 Different Media**: Local disk + network storage capability
3. **1 Offsite**: Ready for remote synchronization

### Retention Policy

- Configurable maximum backup age (default: 30 days)
- Configurable maximum backup count (default: 10 backups)
- Automatic cleanup of old backups
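
An age- plus count-based pruning pass of this kind can be sketched as below (GNU `find` and `head` assumed; `prune_backups` and its parameters are illustrative, not the script's actual interface):

```shell
#!/usr/bin/env bash
# prune_backups ROOT MAX_AGE_DAYS MAX_COUNT
# First drop archives past the age limit, then keep only the newest MAX_COUNT.
prune_backups() {
  local root="$1" max_age="$2" max_count="$3"
  find "$root" -maxdepth 1 -type f -name 'plex-backup-*.tar.gz' \
       -mtime +"$max_age" -delete
  # timestamped names sort chronologically, so head -n -N yields the oldest
  find "$root" -maxdepth 1 -type f -name 'plex-backup-*.tar.gz' |
    sort | head -n -"$max_count" | while IFS= read -r f; do rm -f "$f"; done
}
```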

### Verification Strategy

- Checksum verification for all backed up files
- Database integrity checks before and after operations
- Optional parallel verification for improved performance

## Migration from Legacy Script

To migrate from the original backup script:

1. **Backup Current Configuration**: Save any custom modifications
2. **Test New Script**: Run with `--check-integrity` first
3. **Update Automation**: Modify cron jobs to use new options
4. **Monitor Performance**: Check performance logs for optimization opportunities

The enhanced script maintains backward compatibility while adding significant new capabilities.
83
plex/plex-management.md
Normal file
@@ -0,0 +1,83 @@

# Plex Management Script Documentation

This document provides an overview and step-by-step explanation of the `plex.sh` script. This script is used to manage the Plex Media Server service on a systemd-based Linux distribution.

## Script Overview

The script performs the following main tasks:

1. Starts the Plex Media Server.
2. Stops the Plex Media Server.
3. Restarts the Plex Media Server.
4. Displays the current status of the Plex Media Server.

## Detailed Steps

### 1. Start Plex Media Server

```bash
start_plex() {
  sudo systemctl start plexmediaserver
  echo "Plex Media Server started."
}
```

This function starts the Plex Media Server using `systemctl`.

### 2. Stop Plex Media Server

```bash
stop_plex() {
  sudo systemctl stop plexmediaserver
  echo "Plex Media Server stopped."
}
```

This function stops the Plex Media Server using `systemctl`.

### 3. Restart Plex Media Server

```bash
restart_plex() {
  sudo systemctl restart plexmediaserver
  echo "Plex Media Server restarted."
}
```

This function restarts the Plex Media Server using `systemctl`.

### 4. Display Plex Media Server Status

```bash
status_plex() {
  sudo systemctl status plexmediaserver
}
```

This function displays the current status of the Plex Media Server using `systemctl`.

## Usage

To use the script, run it with one of the following parameters:

```shell
./plex.sh {start|stop|restart|status}
```

- `start`: Starts the Plex Media Server.
- `stop`: Stops the Plex Media Server.
- `restart`: Restarts the Plex Media Server.
- `status`: Displays the current status of the Plex Media Server.
|
||||
|
||||
## Important Information
|
||||
|
||||
- Ensure that the script is executable. You can make it executable with the following command:
|
||||
|
||||
```shell
|
||||
chmod +x plex.sh
|
||||
```
|
||||
|
||||
- The script uses `systemctl` to manage the Plex Media Server service. Ensure that `systemctl` is available on your system.
|
||||
- The script requires `sudo` privileges to manage the Plex Media Server service. Ensure that you have the necessary permissions to run the script with `sudo`.
|
||||
|
||||
By following this documentation, you should be able to understand and use the `plex.sh` script effectively.
|
||||
29 plex/plex-recent-additions.sh (Executable file)
@@ -0,0 +1,29 @@
#!/bin/bash

# Define the path to the Plex database
PLEX_DB="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db"

# Check if the database exists
if [ ! -f "$PLEX_DB" ]; then
    echo "Plex database not found at $PLEX_DB"
    exit 1
fi

# Query the database for items added in the last 7 days
sqlite3 "$PLEX_DB" <<EOF
.headers on
.mode column
SELECT
    datetime(meta.added_at, 'unixepoch', 'localtime') AS "added_at"
  , meta.title
  , meta.year
  , lib.section_type AS "library_section_type"
  , lib.name AS "library_name"
FROM
  metadata_items meta
  LEFT JOIN library_sections lib ON meta.library_section_id = lib.id
WHERE
  meta.added_at >= strftime('%s', 'now', '-7 days')
ORDER BY meta.added_at DESC;
EOF
249 plex/plex.sh (Executable file)
@@ -0,0 +1,249 @@
#!/bin/bash

# 🎬 Plex Media Server Management Script
# A sexy, modern script for managing Plex Media Server with style
# Author: acedanger
# Version: 2.0

set -euo pipefail

# 🎨 Color definitions for sexy output
readonly RED='\033[0;31m'
readonly GREEN='\033[0;32m'
readonly YELLOW='\033[1;33m'
readonly BLUE='\033[0;34m'
readonly PURPLE='\033[0;35m'
readonly CYAN='\033[0;36m'
readonly WHITE='\033[1;37m'
readonly BOLD='\033[1m'
readonly DIM='\033[2m'
readonly RESET='\033[0m'

# 🔧 Configuration
readonly PLEX_SERVICE="plexmediaserver"
readonly SCRIPT_NAME="$(basename "$0")"
readonly PLEX_WEB_URL="http://localhost:32400/web"

# 🎭 Unicode symbols for fancy output
readonly CHECKMARK="✅"
readonly CROSS="❌"
readonly ROCKET="🚀"
readonly STOP_SIGN="🛑"
readonly RECYCLE="♻️"
readonly INFO="ℹ️"
readonly HOURGLASS="⏳"
readonly SPARKLES="✨"

# 📊 Function to print fancy headers
print_header() {
    echo -e "\n${PURPLE}${BOLD}╔══════════════════════════════════════════════════════════════╗${RESET}"
    echo -e "${PURPLE}${BOLD}║                 ${SPARKLES} PLEX MEDIA SERVER ${SPARKLES}                 ║${RESET}"
    echo -e "${PURPLE}${BOLD}╚══════════════════════════════════════════════════════════════╝${RESET}\n"
}

# 🎉 Function to print completion footer
print_footer() {
    echo -e "\n${DIM}${CYAN}╰─── Operation completed ${SPARKLES} ───╯${RESET}\n"
}

# 🎯 Function to print status with style
print_status() {
    local status="$1"
    local message="$2"
    local color="$3"
    echo -e "${color}${BOLD}[${status}]${RESET} ${message}"
}

# ⏱️ Function to show a loading animation while a process runs
show_loading() {
    local message="$1"
    local pid="$2"
    local spin='-\|/'
    local i=0

    echo -ne "${CYAN}${HOURGLASS} ${message}${RESET}"
    while kill -0 "$pid" 2>/dev/null; do
        i=$(( (i + 1) % 4 ))
        printf "\r${CYAN}${HOURGLASS} ${message} ${spin:$i:1}${RESET}"
        sleep 0.1
    done
    printf "\r${CYAN}${HOURGLASS} ${message} ${CHECKMARK}${RESET}\n"
}

# 🚀 Enhanced start function
start_plex() {
    print_status "${ROCKET}" "Starting Plex Media Server..." "${GREEN}"

    if systemctl is-active --quiet "$PLEX_SERVICE"; then
        print_status "${INFO}" "Plex is already running!" "${YELLOW}"
        show_detailed_status
        return 0
    fi

    sudo systemctl start "$PLEX_SERVICE" &
    local pid=$!
    show_loading "Initializing Plex Media Server" "$pid"
    wait "$pid"

    sleep 2  # Give it a moment to fully start

    if systemctl is-active --quiet "$PLEX_SERVICE"; then
        print_status "${CHECKMARK}" "Plex Media Server started successfully!" "${GREEN}"
        echo -e "${DIM}${CYAN}Access your server at: ${WHITE}${PLEX_WEB_URL}${RESET}"
        print_footer
    else
        print_status "${CROSS}" "Failed to start Plex Media Server!" "${RED}"
        return 1
    fi
}

# 🛑 Enhanced stop function
stop_plex() {
    print_status "${STOP_SIGN}" "Stopping Plex Media Server..." "${YELLOW}"

    if ! systemctl is-active --quiet "$PLEX_SERVICE"; then
        print_status "${INFO}" "Plex is already stopped!" "${YELLOW}"
        return 0
    fi

    sudo systemctl stop "$PLEX_SERVICE" &
    local pid=$!
    show_loading "Gracefully shutting down Plex" "$pid"
    wait "$pid"

    if ! systemctl is-active --quiet "$PLEX_SERVICE"; then
        print_status "${CHECKMARK}" "Plex Media Server stopped successfully!" "${GREEN}"
        print_footer
    else
        print_status "${CROSS}" "Failed to stop Plex Media Server!" "${RED}"
        return 1
    fi
}

# ♻️ Enhanced restart function
restart_plex() {
    print_status "${RECYCLE}" "Restarting Plex Media Server..." "${BLUE}"

    if systemctl is-active --quiet "$PLEX_SERVICE"; then
        stop_plex
        echo ""
    fi

    start_plex
}

# 📊 Enhanced status function with detailed info
show_detailed_status() {
    local service_status
    service_status=$(systemctl is-active "$PLEX_SERVICE" 2>/dev/null || echo "inactive")

    echo -e "\n${BOLD}${BLUE}╔══════════════════════════════════════════════════════════════╗${RESET}"
    echo -e "${BOLD}${BLUE}║                        SERVICE STATUS                        ║${RESET}"
    echo -e "${BOLD}${BLUE}╚══════════════════════════════════════════════════════════════╝${RESET}"

    case "$service_status" in
        "active")
            print_status "${CHECKMARK}" "Service Status: ${GREEN}${BOLD}ACTIVE${RESET}" "${GREEN}"

            # Get additional info
            local uptime
            uptime=$(systemctl show "$PLEX_SERVICE" --property=ActiveEnterTimestamp --value | xargs -I {} date -d {} "+%Y-%m-%d %H:%M:%S" 2>/dev/null || echo "Unknown")

            local memory_usage
            memory_usage=$(systemctl show "$PLEX_SERVICE" --property=MemoryCurrent --value 2>/dev/null || echo "0")
            if [[ "$memory_usage" != "0" ]] && [[ "$memory_usage" =~ ^[0-9]+$ ]]; then
                memory_usage="$(( memory_usage / 1024 / 1024 )) MB"
            else
                memory_usage="Unknown"
            fi

            echo -e "${DIM}${CYAN}  Started:        ${WHITE}${uptime}${RESET}"
            echo -e "${DIM}${CYAN}  Memory Usage:   ${WHITE}${memory_usage}${RESET}"
            echo -e "${DIM}${CYAN}  Web Interface:  ${WHITE}${PLEX_WEB_URL}${RESET}"
            echo -e "${DIM}${CYAN}  Service Name:   ${WHITE}${PLEX_SERVICE}${RESET}"
            ;;
        "inactive")
            print_status "${CROSS}" "Service Status: ${RED}${BOLD}INACTIVE${RESET}" "${RED}"
            echo -e "${DIM}${YELLOW}  Use '${SCRIPT_NAME} start' to start the service${RESET}"
            ;;
        "failed")
            print_status "${CROSS}" "Service Status: ${RED}${BOLD}FAILED${RESET}" "${RED}"
            echo -e "${DIM}${RED}  Check logs with: ${WHITE}journalctl -u ${PLEX_SERVICE}${RESET}"
            ;;
        *)
            print_status "${INFO}" "Service Status: ${YELLOW}${BOLD}${service_status^^}${RESET}" "${YELLOW}"
            ;;
    esac

    # Show recent logs
    echo -e "\n${DIM}${CYAN}┌─── Recent Service Logs ───┐${RESET}"
    echo -e "${DIM}$(journalctl -u "$PLEX_SERVICE" --no-pager -n 3 --since "7 days ago" 2>/dev/null | tail -3 || echo "No recent logs available")${RESET}"
    echo -e "${DIM}${CYAN}└────────────────────────────┘${RESET}"
}

# 🔧 Show available commands
show_help() {
    echo -e "${BOLD}${WHITE}Usage:${RESET} ${CYAN}${SCRIPT_NAME}${RESET} ${YELLOW}<command>${RESET}"
    echo ""
    echo -e "${BOLD}${WHITE}Available Commands:${RESET}"
    echo -e "  ${GREEN}${BOLD}start${RESET}    ${ROCKET} Start Plex Media Server"
    echo -e "  ${YELLOW}${BOLD}stop${RESET}     ${STOP_SIGN} Stop Plex Media Server"
    echo -e "  ${BLUE}${BOLD}restart${RESET}  ${RECYCLE} Restart Plex Media Server"
    echo -e "  ${CYAN}${BOLD}status${RESET}   ${INFO} Show detailed service status"
    echo -e "  ${PURPLE}${BOLD}help${RESET}     ${SPARKLES} Show this help message"
    echo ""
    echo -e "${DIM}${WHITE}Examples:${RESET}"
    echo -e "  ${DIM}${SCRIPT_NAME} start    # Start the Plex service${RESET}"
    echo -e "  ${DIM}${SCRIPT_NAME} status   # Show current status${RESET}"
    echo ""
}

# 🎯 Main script logic
main() {
    # Refuse to run as root; systemctl calls use sudo where needed
    if [[ $EUID -eq 0 ]]; then
        print_header
        print_status "${CROSS}" "Don't run this script as root! Use your regular user account." "${RED}"
        exit 1
    fi

    # Check if no arguments provided
    if [[ $# -eq 0 ]]; then
        print_header
        show_help
        exit 1
    fi

    # Show header for all operations except help
    if [[ "${1,,}" != "help" ]] && [[ "${1,,}" != "--help" ]] && [[ "${1,,}" != "-h" ]]; then
        print_header
    fi

    case "${1,,}" in  # Convert to lowercase
        "start")
            start_plex
            ;;
        "stop")
            stop_plex
            ;;
        "restart"|"reload")
            restart_plex
            ;;
        "status"|"info")
            show_detailed_status
            ;;
        "help"|"--help"|"-h")
            print_header
            show_help
            ;;
        *)
            print_status "${CROSS}" "Unknown command: ${RED}${BOLD}$1${RESET}" "${RED}"
            echo ""
            show_help
            exit 1
            ;;
    esac
}

# 🚀 Execute main function with all arguments
main "$@"
260 plex/restore-plex.sh (Executable file)
@@ -0,0 +1,260 @@
#!/bin/bash

# Plex Backup Restoration Script
# Usage: ./restore-plex.sh [backup_file] [--dry-run]

set -e

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

# Configuration
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
BACKUP_ROOT="/mnt/share/media/backups/plex"
PLEX_DATA_DIR="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server"

# Plex file locations
declare -A RESTORE_LOCATIONS=(
    ["com.plexapp.plugins.library.db"]="$PLEX_DATA_DIR/Plug-in Support/Databases/"
    ["com.plexapp.plugins.library.blobs.db"]="$PLEX_DATA_DIR/Plug-in Support/Databases/"
    ["Preferences.xml"]="$PLEX_DATA_DIR/"
)

log_message() {
    echo -e "$(date '+%H:%M:%S') $1"
}

log_error() {
    log_message "${RED}ERROR: $1${NC}"
}

log_success() {
    log_message "${GREEN}SUCCESS: $1${NC}"
}

log_warning() {
    log_message "${YELLOW}WARNING: $1${NC}"
}

# List available backups
list_backups() {
    log_message "Available backups:"
    find "$BACKUP_ROOT" -maxdepth 1 -type f -name "plex-backup-*.tar.gz" | sort -r | while read -r backup_file; do
        local backup_name=$(basename "$backup_file")
        local backup_date=$(echo "$backup_name" | sed 's/plex-backup-\([0-9]\{8\}\)_[0-9]\{6\}\.tar\.gz/\1/')
        if [[ "$backup_date" =~ ^[0-9]{8}$ ]]; then
            local readable_date=$(date -d "${backup_date:0:4}-${backup_date:4:2}-${backup_date:6:2}" '+%B %d, %Y' 2>/dev/null || echo "Unknown date")
            local file_size=$(du -h "$backup_file" 2>/dev/null | cut -f1)
            echo "  $backup_name ($readable_date) - $file_size"
        else
            echo "  $backup_name - $(du -h "$backup_file" 2>/dev/null | cut -f1)"
        fi
    done
}

# Validate backup integrity
validate_backup() {
    local backup_file="$1"

    if [ ! -f "$backup_file" ]; then
        log_error "Backup file not found: $backup_file"
        return 1
    fi

    log_message "Validating backup integrity for $(basename "$backup_file")..."

    # Test archive integrity
    if tar -tzf "$backup_file" >/dev/null 2>&1; then
        log_success "Archive integrity check passed"

        # List contents to verify expected files are present
        log_message "Archive contents:"
        tar -tzf "$backup_file" | while read -r file; do
            log_success "  Found: $file"
        done
        return 0
    else
        log_error "Archive integrity check failed"
        return 1
    fi
}

# Create backup of current Plex data. Log output goes to stderr so the
# caller can capture the backup directory path from stdout.
backup_current_data() {
    local backup_suffix=$(date '+%Y%m%d_%H%M%S')
    local current_backup_dir="$SCRIPT_DIR/plex_current_backup_$backup_suffix"

    log_message "Creating backup of current Plex data..." >&2
    mkdir -p "$current_backup_dir"

    for file in "${!RESTORE_LOCATIONS[@]}"; do
        local src="${RESTORE_LOCATIONS[$file]}$file"
        if [ -f "$src" ]; then
            if sudo cp "$src" "$current_backup_dir/"; then
                log_success "Backed up current: $file" >&2
            else
                log_error "Failed to backup current: $file" >&2
                return 1
            fi
        fi
    done

    log_success "Current data backed up to: $current_backup_dir" >&2
    echo "$current_backup_dir"
}

# Restore files from backup
restore_files() {
    local backup_file="$1"
    local dry_run="$2"

    if [ ! -f "$backup_file" ]; then
        log_error "Backup file not found: $backup_file"
        return 1
    fi

    # Create temporary extraction directory
    local temp_dir="/tmp/plex-restore-$(date +%Y%m%d_%H%M%S)"
    mkdir -p "$temp_dir"

    log_message "Extracting backup archive..."
    if ! tar -xzf "$backup_file" -C "$temp_dir"; then
        log_error "Failed to extract backup archive"
        rm -rf "$temp_dir"
        return 1
    fi

    log_message "Restoring files..."
    local restore_errors=0

    for file in "${!RESTORE_LOCATIONS[@]}"; do
        local src_file="$temp_dir/$file"
        local dest_path="${RESTORE_LOCATIONS[$file]}"
        local dest_file="$dest_path$file"

        if [ -f "$src_file" ]; then
            if [ "$dry_run" == "true" ]; then
                log_message "Would restore: $file to $dest_file"
            else
                log_message "Restoring: $file"
                if sudo cp "$src_file" "$dest_file"; then
                    sudo chown plex:plex "$dest_file"
                    log_success "Restored: $file"
                else
                    log_error "Failed to restore: $file"
                    restore_errors=$((restore_errors + 1))
                fi
            fi
        else
            log_warning "File not found in backup: $file"
            restore_errors=$((restore_errors + 1))
        fi
    done

    # Clean up temporary directory
    rm -rf "$temp_dir"

    return $restore_errors
}

# Manage Plex service
manage_plex_service() {
    local action="$1"
    log_message "$action Plex Media Server..."

    case "$action" in
        "stop")
            sudo systemctl stop plexmediaserver.service
            sleep 3
            log_success "Plex stopped"
            ;;
        "start")
            sudo systemctl start plexmediaserver.service
            sleep 3
            log_success "Plex started"
            ;;
    esac
}

# Main function
main() {
    local backup_file="${1:-}"
    local dry_run=false

    # Check for dry-run flag
    if [ "${2:-}" = "--dry-run" ] || [ "${1:-}" = "--dry-run" ]; then
        dry_run=true
    fi

    # If no backup file provided, list available backups
    if [ -z "$backup_file" ] || [ "$backup_file" = "--dry-run" ]; then
        list_backups
        echo
        echo "Usage: $0 <backup_file> [--dry-run]"
        echo "Example: $0 plex-backup-20250125_143022.tar.gz"
        echo "         $0 /mnt/share/media/backups/plex/plex-backup-20250125_143022.tar.gz"
        exit 0
    fi

    # If relative path, prepend BACKUP_ROOT
    if [[ "$backup_file" != /* ]]; then
        backup_file="$BACKUP_ROOT/$backup_file"
    fi

    # Validate backup exists and is complete
    if ! validate_backup "$backup_file"; then
        log_error "Backup validation failed"
        exit 1
    fi

    if [ "$dry_run" = "true" ]; then
        restore_files "$backup_file" true
        log_message "Dry run completed. No changes were made."
        exit 0
    fi

    # Confirm restoration
    echo
    log_warning "This will restore Plex data from backup $(basename "$backup_file")"
    log_warning "Current Plex data will be backed up before restoration"
    read -p "Continue? (y/N): " -n 1 -r
    echo

    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        log_message "Restoration cancelled"
        exit 0
    fi

    # Stop Plex service; mark it stopped so the EXIT trap restarts it
    manage_plex_service stop
    PLEX_STOPPED=true

    # Backup current data (guarded assignment so a failure is caught
    # instead of being masked by 'local')
    local current_backup
    if ! current_backup=$(backup_current_data); then
        log_error "Failed to backup current data"
        manage_plex_service start
        exit 1
    fi

    # Restore files
    if restore_files "$backup_file" false; then
        log_success "Restoration completed successfully"
        log_message "Current data backup saved at: $current_backup"
    else
        log_error "Restoration failed"
        manage_plex_service start
        exit 1
    fi

    # Start Plex service
    manage_plex_service start
    PLEX_STOPPED=false

    log_success "Plex restoration completed. Please verify your server is working correctly."
}

# Trap to ensure Plex is restarted if the script exits after stopping it
# (a no-op for listing, dry runs, and successful restores)
PLEX_STOPPED=false
cleanup() {
    if [ "$PLEX_STOPPED" = "true" ]; then
        manage_plex_service start
    fi
}
trap cleanup EXIT

main "$@"
667 plex/test-plex-backup.sh (Executable file)
@@ -0,0 +1,667 @@
#!/bin/bash
|
||||
|
||||
# Comprehensive Plex Backup System Test Suite
|
||||
# This script provides automated testing for all backup-related functionality
|
||||
|
||||
set -e
|
||||
|
||||
# Color codes for output
|
||||
RED='\033[0;31m'
|
||||
GREEN='\033[0;32m'
|
||||
YELLOW='\033[1;33m'
|
||||
BLUE='\033[0;34m'
|
||||
CYAN='\033[0;36m'
|
||||
NC='\033[0m' # No Color
|
||||
|
||||
# Test configuration
|
||||
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
|
||||
TEST_DIR="/tmp/plex-backup-test-$(date +%s)"
|
||||
TEST_BACKUP_ROOT="$TEST_DIR/backups"
|
||||
TEST_LOG_ROOT="$TEST_DIR/logs"
|
||||
TEST_RESULTS_FILE="$TEST_DIR/test-results.json"
|
||||
|
||||
# Test counters
|
||||
TESTS_RUN=0
|
||||
TESTS_PASSED=0
|
||||
TESTS_FAILED=0
|
||||
declare -a FAILED_TESTS=()
|
||||
|
||||
# Logging functions
|
||||
log_test() {
|
||||
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
|
||||
echo -e "${CYAN}[TEST ${timestamp}]${NC} $1"
|
||||
}
|
||||
|
||||
log_pass() {
|
||||
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
|
||||
echo -e "${GREEN}[PASS ${timestamp}]${NC} $1"
|
||||
TESTS_PASSED=$((TESTS_PASSED + 1))
|
||||
}
|
||||
|
||||
log_fail() {
|
||||
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
|
||||
echo -e "${RED}[FAIL ${timestamp}]${NC} $1"
|
||||
TESTS_FAILED=$((TESTS_FAILED + 1))
|
||||
FAILED_TESTS+=("$1")
|
||||
}
|
||||
|
||||
log_info() {
|
||||
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
|
||||
echo -e "${BLUE}[INFO ${timestamp}]${NC} $1"
|
||||
}
|
||||
|
||||
log_warn() {
|
||||
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
|
||||
echo -e "${YELLOW}[WARN ${timestamp}]${NC} $1"
|
||||
}
|
||||
|
||||
# Test framework functions
|
||||
run_test() {
|
||||
local test_name="$1"
|
||||
local test_function="$2"
|
||||
|
||||
TESTS_RUN=$((TESTS_RUN + 1))
|
||||
log_test "Running: $test_name"
|
||||
|
||||
if $test_function; then
|
||||
log_pass "$test_name"
|
||||
record_test_result "$test_name" "PASS" ""
|
||||
else
|
||||
log_fail "$test_name"
|
||||
record_test_result "$test_name" "FAIL" "Test function returned non-zero exit code"
|
||||
fi
|
||||
}
|
||||
|
||||
record_test_result() {
|
||||
local test_name="$1"
|
||||
local status="$2"
|
||||
local error_message="$3"
|
||||
local timestamp=$(date -Iseconds)
|
||||
|
||||
# Initialize results file if it doesn't exist
|
||||
if [ ! -f "$TEST_RESULTS_FILE" ]; then
|
||||
echo "[]" > "$TEST_RESULTS_FILE"
|
||||
fi
|
||||
|
||||
local result=$(jq -n \
|
||||
--arg test_name "$test_name" \
|
||||
--arg status "$status" \
|
||||
--arg error_message "$error_message" \
|
||||
--arg timestamp "$timestamp" \
|
||||
'{
|
||||
test_name: $test_name,
|
||||
status: $status,
|
||||
error_message: $error_message,
|
||||
timestamp: $timestamp
|
||||
}')
|
||||
|
||||
jq --argjson result "$result" '. += [$result]' "$TEST_RESULTS_FILE" > "${TEST_RESULTS_FILE}.tmp" && \
|
||||
mv "${TEST_RESULTS_FILE}.tmp" "$TEST_RESULTS_FILE"
|
||||
}
|
||||
|
||||
# Setup test environment
|
||||
setup_test_environment() {
|
||||
log_info "Setting up test environment in $TEST_DIR"
|
||||
|
||||
# Create test directories
|
||||
mkdir -p "$TEST_DIR"
|
||||
mkdir -p "$TEST_BACKUP_ROOT"
|
||||
mkdir -p "$TEST_LOG_ROOT"
|
||||
mkdir -p "$TEST_DIR/mock_plex"
|
||||
|
||||
# Create mock Plex files for testing
|
||||
echo "PRAGMA user_version=1;" > "$TEST_DIR/mock_plex/com.plexapp.plugins.library.db"
|
||||
echo "PRAGMA user_version=1;" > "$TEST_DIR/mock_plex/com.plexapp.plugins.library.blobs.db"
|
||||
dd if=/dev/zero of="$TEST_DIR/mock_plex/Preferences.xml" bs=1024 count=1 2>/dev/null
|
||||
|
||||
# Create mock performance log
|
||||
echo "[]" > "$TEST_DIR/mock-performance.json"
|
||||
echo "{}" > "$TEST_DIR/mock-backup.json"
|
||||
|
||||
log_info "Test environment setup complete"
|
||||
}
|
||||
|
||||
# Cleanup test environment
|
||||
cleanup_test_environment() {
|
||||
if [ -d "$TEST_DIR" ]; then
|
||||
log_info "Cleaning up test environment"
|
||||
rm -rf "$TEST_DIR"
|
||||
fi
|
||||
}
|
||||
|
||||
# Mock functions to replace actual backup script functions
|
||||
mock_manage_plex_service() {
|
||||
local action="$1"
|
||||
echo "Mock: Plex service $action"
|
||||
return 0
|
||||
}
|
||||
|
||||
mock_calculate_checksum() {
|
||||
local file="$1"
|
||||
echo "$(echo "$file" | md5sum | cut -d' ' -f1)"
|
||||
return 0
|
||||
}
|
||||
|
||||
mock_verify_backup() {
|
||||
local src="$1"
|
||||
local dest="$2"
|
||||
# Always return success for testing
|
||||
return 0
|
||||
}
|
||||
|
||||
# Test: JSON log initialization
|
||||
test_json_log_initialization() {
|
||||
local test_log="$TEST_DIR/test-init.json"
|
||||
|
||||
# Remove file if it exists
|
||||
rm -f "$test_log"
|
||||
|
||||
# Test initialization
|
||||
if [ ! -f "$test_log" ] || ! jq empty "$test_log" 2>/dev/null; then
|
||||
echo "{}" > "$test_log"
|
||||
fi
|
||||
|
||||
# Verify file exists and is valid JSON
|
||||
if [ -f "$test_log" ] && jq empty "$test_log" 2>/dev/null; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: Performance tracking
|
||||
test_performance_tracking() {
|
||||
local test_perf_log="$TEST_DIR/test-performance.json"
|
||||
echo "[]" > "$test_perf_log"
|
||||
|
||||
# Mock performance tracking function
|
||||
track_performance_test() {
|
||||
local operation="$1"
|
||||
local start_time="$2"
|
||||
local end_time=$(date +%s)
|
||||
local duration=$((end_time - start_time))
|
||||
|
||||
local entry=$(jq -n \
|
||||
--arg operation "$operation" \
|
||||
--arg duration "$duration" \
|
||||
--arg timestamp "$(date -Iseconds)" \
|
||||
'{
|
||||
operation: $operation,
|
||||
duration_seconds: ($duration | tonumber),
|
||||
timestamp: $timestamp
|
||||
}')
|
||||
|
||||
jq --argjson entry "$entry" '. += [$entry]' "$test_perf_log" > "${test_perf_log}.tmp" && \
|
||||
mv "${test_perf_log}.tmp" "$test_perf_log"
|
||||
}
|
||||
|
||||
# Test tracking
|
||||
local start_time=$(date +%s)
|
||||
sleep 1 # Simulate work
|
||||
track_performance_test "test_operation" "$start_time"
|
||||
|
||||
# Verify entry was added
|
||||
local entry_count=$(jq length "$test_perf_log")
|
||||
if [ "$entry_count" -eq 1 ]; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: Notification system
|
||||
test_notification_system() {
|
||||
# Mock notification function
|
||||
send_notification_test() {
|
||||
local title="$1"
|
||||
local message="$2"
|
||||
local status="${3:-info}"
|
||||
|
||||
# Just verify parameters are received correctly
|
||||
if [ -n "$title" ] && [ -n "$message" ]; then
|
||||
echo "Notification: $title - $message ($status)" > "$TEST_DIR/notification.log"
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test notification
|
||||
send_notification_test "Test Title" "Test Message" "success"
|
||||
|
||||
# Verify notification was processed
|
||||
if [ -f "$TEST_DIR/notification.log" ] && grep -q "Test Title" "$TEST_DIR/notification.log"; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: Checksum caching
|
||||
test_checksum_caching() {
|
||||
local test_file="$TEST_DIR/checksum_test.txt"
|
||||
local cache_file="${test_file}.md5"
|
||||
|
||||
# Create test file
|
||||
echo "test content" > "$test_file"
|
||||
|
||||
# Mock checksum function with caching
|
||||
calculate_checksum_test() {
|
||||
local file="$1"
|
||||
local cache_file="${file}.md5"
|
||||
local file_mtime=$(stat -c %Y "$file" 2>/dev/null || echo "0")
|
||||
|
||||
# Check cache
|
||||
if [ -f "$cache_file" ]; then
|
||||
local cache_mtime=$(stat -c %Y "$cache_file" 2>/dev/null || echo "0")
|
||||
if [ "$cache_mtime" -gt "$file_mtime" ]; then
|
||||
cat "$cache_file"
|
||||
return 0
|
||||
fi
|
||||
fi
|
||||
|
||||
# Calculate and cache
|
||||
local checksum=$(md5sum "$file" | cut -d' ' -f1)
|
||||
echo "$checksum" > "$cache_file"
|
||||
echo "$checksum"
|
||||
}
|
||||
|
||||
# First calculation (should create cache)
|
||||
local checksum1=$(calculate_checksum_test "$test_file")
|
||||
|
||||
# Second calculation (should use cache)
|
||||
local checksum2=$(calculate_checksum_test "$test_file")
|
||||
|
||||
# Verify checksums match and cache file exists
|
||||
if [ "$checksum1" = "$checksum2" ] && [ -f "$cache_file" ]; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: Backup verification
|
||||
test_backup_verification() {
|
||||
local src_file="$TEST_DIR/source.txt"
|
||||
local dest_file="$TEST_DIR/backup.txt"
|
||||
|
||||
# Create identical files
|
||||
echo "backup test content" > "$src_file"
|
||||
cp "$src_file" "$dest_file"
|
||||
|
||||
# Mock verification function
|
||||
verify_backup_test() {
|
||||
local src="$1"
|
||||
local dest="$2"
|
||||
|
||||
local src_checksum=$(md5sum "$src" | cut -d' ' -f1)
|
||||
local dest_checksum=$(md5sum "$dest" | cut -d' ' -f1)
|
||||
|
||||
if [ "$src_checksum" = "$dest_checksum" ]; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test verification
|
||||
if verify_backup_test "$src_file" "$dest_file"; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: Parallel processing framework
|
||||
test_parallel_processing() {
|
||||
local temp_dir=$(mktemp -d)
|
||||
local -a pids=()
|
||||
local total_jobs=5
|
||||
local completed_jobs=0
|
||||
|
||||
# Simulate parallel jobs
|
||||
for i in $(seq 1 $total_jobs); do
|
||||
(
|
||||
# Simulate work
|
||||
sleep 0.$i
|
||||
echo "$i" > "$temp_dir/job_$i.result"
|
||||
) &
|
||||
pids+=($!)
|
||||
done
|
||||
|
||||
# Wait for all jobs
|
||||
for pid in "${pids[@]}"; do
|
||||
if wait "$pid"; then
|
||||
completed_jobs=$((completed_jobs + 1))
|
||||
fi
|
||||
done
|
||||
|
||||
# Verify all jobs completed
|
||||
local result_files=$(find "$temp_dir" -name "job_*.result" | wc -l)
|
||||
|
||||
# Cleanup
|
||||
rm -rf "$temp_dir"
|
||||
|
||||
if [ "$completed_jobs" -eq "$total_jobs" ] && [ "$result_files" -eq "$total_jobs" ]; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: Database integrity check simulation
|
||||
test_database_integrity() {
|
||||
local test_db="$TEST_DIR/test.db"
|
||||
|
||||
# Create a simple SQLite database
|
||||
sqlite3 "$test_db" "CREATE TABLE test (id INTEGER, name TEXT);"
|
||||
sqlite3 "$test_db" "INSERT INTO test VALUES (1, 'test');"
|
||||
|
||||
# Mock integrity check
|
||||
check_integrity_test() {
|
||||
local db_file="$1"
|
||||
|
||||
# Use sqlite3 instead of Plex SQLite for testing
|
||||
local result=$(sqlite3 "$db_file" "PRAGMA integrity_check;" 2>/dev/null)
|
||||
|
||||
if echo "$result" | grep -q "ok"; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test integrity check
|
||||
if check_integrity_test "$test_db"; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Test: Configuration parsing
|
||||
test_configuration_parsing() {
|
||||
# Mock command line parsing
|
||||
parse_args_test() {
|
||||
local args=("$@")
|
||||
local auto_repair=false
|
||||
local parallel=true
|
||||
local webhook=""
|
||||
|
||||
for arg in "${args[@]}"; do
|
||||
case "$arg" in
|
||||
--auto-repair) auto_repair=true ;;
|
||||
--no-parallel) parallel=false ;;
|
||||
--webhook=*) webhook="${arg#*=}" ;;
|
||||
esac
|
||||
done
|
||||
|
||||
# Return parsed values
|
||||
echo "$auto_repair $parallel $webhook"
|
||||
}
|
||||
|
||||
# Test parsing
|
||||
local result=$(parse_args_test --auto-repair --webhook=http://example.com)
|
||||
|
||||
if echo "$result" | grep -q "true true http://example.com"; then
|
||||
return 0
|
||||
else
|
||||
return 1
|
||||
fi
|
||||
}

# Test: Error handling
test_error_handling() {
    # Mock function that can fail
    test_function_with_error() {
        local should_fail="$1"

        if [ "$should_fail" = "true" ]; then
            return 1
        else
            return 0
        fi
    }

    # Test success case
    if test_function_with_error "false"; then
        # Test failure case
        if ! test_function_with_error "true"; then
            return 0 # Both cases worked as expected
        fi
    fi

    return 1
}

# Run all unit tests
run_all_tests() {
    log_info "Setting up test environment"
    setup_test_environment

    log_info "Starting unit tests"

    # Core functionality tests
    run_test "JSON Log Initialization" test_json_log_initialization
    run_test "Performance Tracking" test_performance_tracking
    run_test "Notification System" test_notification_system
    run_test "Checksum Caching" test_checksum_caching
    run_test "Backup Verification" test_backup_verification
    run_test "Parallel Processing" test_parallel_processing
    run_test "Database Integrity Check" test_database_integrity
    run_test "Configuration Parsing" test_configuration_parsing
    run_test "Error Handling" test_error_handling

    log_info "Unit tests completed"
}

# Run integration tests (requires actual Plex environment)
run_integration_tests() {
    log_info "Starting integration tests"
    log_warn "Integration tests require a working Plex installation"

    # Check if Plex service exists
    if ! systemctl list-units --all | grep -q plexmediaserver; then
        log_warn "Plex service not found - skipping integration tests"
        return 0
    fi

    # Test actual service management (if safe to do so)
    log_info "Integration tests would test actual Plex service management"
    log_info "Skipping for safety - implement with caution"
}

# Run performance tests
run_performance_tests() {
    log_info "Starting performance benchmarks"

    local start_time=$(date +%s)

    # Test file operations
    local test_file="$TEST_DIR/perf_test.dat"
    dd if=/dev/zero of="$test_file" bs=1M count=10 2>/dev/null

    # Benchmark checksum calculation
    local checksum_start=$(date +%s)
    md5sum "$test_file" > /dev/null
    local checksum_time=$(($(date +%s) - checksum_start))

    # Benchmark compression
    local compress_start=$(date +%s)
    tar -czf "$TEST_DIR/perf_test.tar.gz" -C "$TEST_DIR" "perf_test.dat"
    local compress_time=$(($(date +%s) - compress_start))

    local total_time=$(($(date +%s) - start_time))

    log_info "Performance Results:"
    log_info "  Checksum (10MB): ${checksum_time}s"
    log_info "  Compression (10MB): ${compress_time}s"
    log_info "  Total benchmark time: ${total_time}s"

    # Record performance data
    local perf_entry=$(jq -n \
        --arg checksum_time "$checksum_time" \
        --arg compress_time "$compress_time" \
        --arg total_time "$total_time" \
        --arg timestamp "$(date -Iseconds)" \
        '{
            benchmark: "performance_test",
            checksum_time_seconds: ($checksum_time | tonumber),
            compress_time_seconds: ($compress_time | tonumber),
            total_time_seconds: ($total_time | tonumber),
            timestamp: $timestamp
        }')

    echo "$perf_entry" > "$TEST_DIR/performance_results.json"
}

# Generate comprehensive test report
generate_test_report() {
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')

    echo
    echo "=============================================="
    echo "          PLEX BACKUP TEST REPORT"
    echo "=============================================="
    echo "Test Run:     $timestamp"
    echo "Tests Run:    $TESTS_RUN"
    echo "Tests Passed: $TESTS_PASSED"
    echo "Tests Failed: $TESTS_FAILED"
    echo

    if [ $TESTS_FAILED -gt 0 ]; then
        echo "FAILED TESTS:"
        for failed_test in "${FAILED_TESTS[@]}"; do
            echo "  - $failed_test"
        done
        echo
    fi

    local success_rate=0
    if [ $TESTS_RUN -gt 0 ]; then
        success_rate=$(( (TESTS_PASSED * 100) / TESTS_RUN ))
    fi

    echo "Success Rate: ${success_rate}%"
    echo

    if [ $TESTS_FAILED -eq 0 ]; then
        log_pass "All tests passed successfully!"
    else
        log_fail "Some tests failed - review output above"
    fi

    # Save detailed results
    if [ -f "$TEST_RESULTS_FILE" ]; then
        local report_file="$TEST_DIR/test_report_$(date +%Y%m%d_%H%M%S).json"
        jq -n \
            --arg timestamp "$timestamp" \
            --arg tests_run "$TESTS_RUN" \
            --arg tests_passed "$TESTS_PASSED" \
            --arg tests_failed "$TESTS_FAILED" \
            --arg success_rate "$success_rate" \
            --argjson failed_tests "$(printf '%s\n' "${FAILED_TESTS[@]}" | jq -R . | jq -s .)" \
            --argjson test_details "$(cat "$TEST_RESULTS_FILE")" \
            '{
                test_run_timestamp: $timestamp,
                summary: {
                    tests_run: ($tests_run | tonumber),
                    tests_passed: ($tests_passed | tonumber),
                    tests_failed: ($tests_failed | tonumber),
                    success_rate_percent: ($success_rate | tonumber)
                },
                failed_tests: $failed_tests,
                detailed_results: $test_details
            }' > "$report_file"

        log_info "Detailed test report saved to: $report_file"
    fi
}
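The success-rate calculation above uses bash integer arithmetic, which truncates toward zero, so the reported percentage always rounds down. For example:

```bash
# 8 of 9 tests passing reports 88%, not 89% - $(( )) truncates integers.
TESTS_RUN=9
TESTS_PASSED=8
success_rate=$(( (TESTS_PASSED * 100) / TESTS_RUN ))
echo "Success Rate: ${success_rate}%"   # Success Rate: 88%
```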

# Main execution
main() {
    case "${1:-all}" in
        "unit")
            run_all_tests
            ;;
        "integration")
            run_integration_tests
            ;;
        "performance")
            run_performance_tests
            ;;
        "all")
            run_all_tests
            # Uncomment for integration tests if environment supports it
            # run_integration_tests
            run_performance_tests
            ;;
        *)
            echo "Usage: $0 [unit|integration|performance|all]"
            echo "  unit        - Run unit tests only"
            echo "  integration - Run integration tests (requires Plex)"
            echo "  performance - Run performance benchmarks"
            echo "  all         - Run all available tests"
            exit 1
            ;;
    esac

    generate_test_report

    # Exit with appropriate code
    if [ $TESTS_FAILED -gt 0 ]; then
        exit 1
    else
        exit 0
    fi
}

# Trap to ensure cleanup on exit
trap cleanup_test_environment EXIT

main "$@"
335
plex/validate-plex-backups.sh
Executable file
@@ -0,0 +1,335 @@
#!/bin/bash

# Plex Backup Validation and Monitoring Script
# Usage: ./validate-plex-backups.sh [--fix] [--report]

set -e

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

# Configuration
SCRIPT_DIR="$(dirname "$(readlink -f "$0")")"
BACKUP_ROOT="/mnt/share/media/backups/plex"
JSON_LOG_FILE="$SCRIPT_DIR/logs/plex-backup.json"
REPORT_FILE="$SCRIPT_DIR/logs/backup-validation-$(date +%Y%m%d_%H%M%S).log"

# Expected files in backup
EXPECTED_FILES=(
    "com.plexapp.plugins.library.db"
    "com.plexapp.plugins.library.blobs.db"
    "Preferences.xml"
)

log_message() {
    local message="$1"
    local clean_message="$2"

    # Display colored message to terminal
    echo -e "$(date '+%H:%M:%S') $message"

    # Strip ANSI codes and log clean version to file
    if [ -n "$clean_message" ]; then
        echo "$(date '+%H:%M:%S') $clean_message" >> "$REPORT_FILE"
    else
        # Strip ANSI escape codes for file logging
        echo "$(date '+%H:%M:%S') $message" | sed 's/\x1b\[[0-9;]*m//g' >> "$REPORT_FILE"
    fi
}
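The `sed` expression in the fallback branch strips the color codes defined at the top of the script so the log file stays plain text. A standalone check of the same substitution (this assumes GNU sed, which accepts the `\x1b` hex escape for the ESC byte):

```bash
# Remove ANSI SGR sequences like \033[0;31m ... \033[0m before logging.
colored=$'\033[0;31mERROR: disk full\033[0m'
plain=$(printf '%s' "$colored" | sed 's/\x1b\[[0-9;]*m//g')
echo "$plain"
```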

log_error() {
    log_message "${RED}ERROR: $1${NC}" "ERROR: $1"
}

log_success() {
    log_message "${GREEN}SUCCESS: $1${NC}" "SUCCESS: $1"
}

log_warning() {
    log_message "${YELLOW}WARNING: $1${NC}" "WARNING: $1"
}

log_info() {
    log_message "${BLUE}INFO: $1${NC}" "INFO: $1"
}

# Check backup directory structure
validate_backup_structure() {
    log_info "Validating backup directory structure..."

    if [ ! -d "$BACKUP_ROOT" ]; then
        log_error "Backup root directory not found: $BACKUP_ROOT"
        return 1
    fi

    local backup_count=$(find "$BACKUP_ROOT" -maxdepth 1 -type f -name "plex-backup-*.tar.gz" | wc -l)
    log_info "Found $backup_count backup files"

    if [ "$backup_count" -eq 0 ]; then
        log_warning "No backup files found"
        return 1
    fi

    return 0
}

# Validate individual backup
validate_backup() {
    local backup_file="$1"
    local backup_name=$(basename "$backup_file")
    local errors=0

    log_info "Validating backup: $backup_name"

    # Check if file exists and is readable
    if [ ! -f "$backup_file" ] || [ ! -r "$backup_file" ]; then
        log_error "Backup file not accessible: $backup_file"
        return 1
    fi

    # Test archive integrity
    if ! tar -tzf "$backup_file" >/dev/null 2>&1; then
        log_error "Archive integrity check failed: $backup_name"
        errors=$((errors + 1))
    else
        log_success "Archive integrity check passed: $backup_name"

        # Check for expected files in archive
        local archive_contents=$(tar -tzf "$backup_file" 2>/dev/null)

        for file in "${EXPECTED_FILES[@]}"; do
            if echo "$archive_contents" | grep -q "^$file$"; then
                log_success "  Found: $file"
            else
                log_error "  Missing file: $file"
                errors=$((errors + 1))
            fi
        done

        # Check for unexpected files
        echo "$archive_contents" | while IFS= read -r line; do
            if [[ ! " ${EXPECTED_FILES[*]} " =~ " ${line} " ]]; then
                log_warning "  Unexpected file: $line"
            fi
        done
    fi

    return $errors
}

# Check backup freshness
check_backup_freshness() {
    log_info "Checking backup freshness..."

    local latest_backup=$(find "$BACKUP_ROOT" -maxdepth 1 -type f -name "plex-backup-*.tar.gz" | sort | tail -1)

    if [ -z "$latest_backup" ]; then
        log_error "No backups found"
        return 1
    fi

    local backup_filename=$(basename "$latest_backup")
    # Extract date from filename: plex-backup-YYYYMMDD_HHMMSS.tar.gz
    local backup_date=$(echo "$backup_filename" | sed 's/plex-backup-//' | sed 's/_.*$//')
    local backup_timestamp=$(date -d "${backup_date:0:4}-${backup_date:4:2}-${backup_date:6:2}" +%s)
    local current_timestamp=$(date +%s)
    local age_days=$(( (current_timestamp - backup_timestamp) / 86400 ))

    log_info "Latest backup: $backup_date ($age_days days old)"

    if [ "$age_days" -gt 7 ]; then
        log_warning "Latest backup is older than 7 days"
        return 1
    elif [ "$age_days" -gt 3 ]; then
        log_warning "Latest backup is older than 3 days"
    else
        log_success "Latest backup is recent"
    fi

    return 0
}
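The two `sed` passes above reduce a backup filename to its `YYYYMMDD` stamp before it is handed to `date -d`; for instance:

```bash
# plex-backup-20250125_143022.tar.gz -> 20250125
backup_filename="plex-backup-20250125_143022.tar.gz"
backup_date=$(echo "$backup_filename" | sed 's/plex-backup-//' | sed 's/_.*$//')
echo "$backup_date"
```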

# Validate JSON log file
validate_json_log() {
    log_info "Validating JSON log file..."

    if [ ! -f "$JSON_LOG_FILE" ]; then
        log_error "JSON log file not found: $JSON_LOG_FILE"
        return 1
    fi

    if ! jq empty "$JSON_LOG_FILE" 2>/dev/null; then
        log_error "JSON log file is invalid"
        return 1
    fi

    local entry_count=$(jq 'length' "$JSON_LOG_FILE")
    log_success "JSON log file is valid ($entry_count entries)"

    return 0
}

# Check disk space
check_disk_space() {
    log_info "Checking disk space..."

    local backup_disk_usage=$(du -sh "$BACKUP_ROOT" | cut -f1)
    local available_space=$(df -h "$BACKUP_ROOT" | awk 'NR==2 {print $4}')
    local used_percentage=$(df "$BACKUP_ROOT" | awk 'NR==2 {print $5}' | sed 's/%//')

    log_info "Backup disk usage: $backup_disk_usage"
    log_info "Available space: $available_space"
    log_info "Disk usage: $used_percentage%"

    if [ "$used_percentage" -gt 90 ]; then
        log_error "Disk usage is above 90%"
        return 1
    elif [ "$used_percentage" -gt 80 ]; then
        log_warning "Disk usage is above 80%"
    else
        log_success "Disk usage is acceptable"
    fi

    return 0
}

# Generate backup report
generate_report() {
    log_info "Generating backup report..."

    local total_backups=0
    local valid_backups=0
    local total_errors=0

    # Header
    echo "==================================" >> "$REPORT_FILE"
    echo "Plex Backup Validation Report" >> "$REPORT_FILE"
    echo "Generated: $(date)" >> "$REPORT_FILE"
    echo "==================================" >> "$REPORT_FILE"

    # Validate each backup (process substitution keeps the counters in this shell;
    # piping find into the loop would run it in a subshell and lose the totals)
    while IFS= read -r backup_file; do
        total_backups=$((total_backups + 1))
        local backup_errors=0
        validate_backup "$backup_file" || backup_errors=$?

        if [ "$backup_errors" -eq 0 ]; then
            valid_backups=$((valid_backups + 1))
        else
            total_errors=$((total_errors + backup_errors))
        fi
    done < <(find "$BACKUP_ROOT" -maxdepth 1 -type f -name "plex-backup-*.tar.gz" | sort)

    # Summary
    echo >> "$REPORT_FILE"
    echo "Summary:" >> "$REPORT_FILE"
    echo "  Total backups: $total_backups" >> "$REPORT_FILE"
    echo "  Valid backups: $valid_backups" >> "$REPORT_FILE"
    echo "  Total errors:  $total_errors" >> "$REPORT_FILE"

    log_success "Report generated: $REPORT_FILE"
}

# Fix common issues
fix_issues() {
    log_info "Attempting to fix common issues..."

    # Fix JSON log file
    if [ ! -f "$JSON_LOG_FILE" ] || ! jq empty "$JSON_LOG_FILE" 2>/dev/null; then
        log_info "Fixing JSON log file..."
        mkdir -p "$(dirname "$JSON_LOG_FILE")"
        echo "{}" > "$JSON_LOG_FILE"
        log_success "JSON log file created/fixed"
    fi

    # Clean up any remaining dated directories from old backup structure
    find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????????" -exec rm -rf {} \; 2>/dev/null || true

    # Fix permissions if needed
    if [ -d "$BACKUP_ROOT" ]; then
        chmod 755 "$BACKUP_ROOT"
        find "$BACKUP_ROOT" -type f -name "plex-backup-*.tar.gz" -exec chmod 644 {} \; 2>/dev/null || true
        log_success "Fixed backup permissions"
    fi
}

# Main function
main() {
    local fix_mode=false
    local report_mode=false

    # Parse arguments
    while [[ $# -gt 0 ]]; do
        case $1 in
            --fix)
                fix_mode=true
                shift
                ;;
            --report)
                report_mode=true
                shift
                ;;
            *)
                echo "Usage: $0 [--fix] [--report]"
                echo "  --fix     Attempt to fix common issues"
                echo "  --report  Generate detailed backup report"
                exit 1
                ;;
        esac
    done

    log_info "Starting Plex backup validation..."

    # Create logs directory if needed
    mkdir -p "$(dirname "$REPORT_FILE")"

    local overall_status=0

    # Fix issues if requested
    if [ "$fix_mode" = true ]; then
        fix_issues
    fi

    # Validate backup structure
    if ! validate_backup_structure; then
        overall_status=1
    fi

    # Check backup freshness
    if ! check_backup_freshness; then
        overall_status=1
    fi

    # Validate JSON log
    if ! validate_json_log; then
        overall_status=1
    fi

    # Check disk space
    if ! check_disk_space; then
        overall_status=1
    fi

    # Generate detailed report if requested
    if [ "$report_mode" = true ]; then
        generate_report
    fi

    # Final summary
    echo
    if [ "$overall_status" -eq 0 ]; then
        log_success "All validation checks passed"
    else
        log_error "Some validation checks failed"
        echo
        echo "Consider running with --fix to attempt automatic repairs"
        echo "Use --report for a detailed backup analysis"
    fi

    exit $overall_status
}

main "$@"