# Integration Guide: Adding Real-time JSON Metrics to Backup Scripts
This guide shows the minimal changes needed to integrate real-time JSON metrics into existing backup scripts.
## Quick Integration Steps
### 1. Add the JSON Logger Library
Add this line near the top of your backup script (after setting `BACKUP_ROOT`):
```bash
# Load JSON logging library
source "$(dirname "$0")/lib/backup-json-logger.sh"
```
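If the library might not be present on every host, a guarded variant (in the spirit of the `2>/dev/null || true` fallback used in the real-world example below) lets the backup continue without metrics; the `JSON_LOGGER` variable name here is just illustrative:

```bash
# Optional: only source the library if it exists; otherwise continue
# without JSON metrics instead of aborting the backup.
JSON_LOGGER="$(dirname "$0")/lib/backup-json-logger.sh"
if [ -f "$JSON_LOGGER" ]; then
    source "$JSON_LOGGER"
else
    echo "Warning: backup-json-logger.sh not found - JSON metrics disabled"
fi
```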
### 2. Initialize JSON Logging
Add this at the start of your main backup function:
```bash
# Initialize JSON logging session
local session_id="backup_$(date +%Y%m%d_%H%M%S)"
if ! json_backup_init "your_service_name" "$BACKUP_ROOT" "$session_id"; then
    echo "Warning: JSON logging initialization failed, continuing without metrics"
else
    json_backup_start
    echo "JSON metrics enabled - session: $session_id"
fi
```
### 3. Update Status During Backup
Keep the existing log messages and pair each one with a JSON status update:
```bash
# Before: Simple log message
echo "Stopping service..."
# After: Log message + JSON status update
echo "Stopping service..."
json_backup_update_status "stopping_service"
```
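If a script has many of these pairs, a small wrapper keeps the log line and the JSON status in sync; `log_and_update_status` below is a hypothetical helper, not part of the library:

```bash
# Hypothetical helper: print a log line and mirror it to the JSON status.
log_and_update_status() {
    local message="$1"
    local status="$2"
    echo "$message"
    json_backup_update_status "$status"
}

log_and_update_status "Stopping service..." "stopping_service"
```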
### 4. Track Individual Files
When processing each backup file:
```bash
# After successful file backup
if cp "$source_file" "$backup_file"; then
    local file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo "0")
    local checksum=$(md5sum "$backup_file" 2>/dev/null | cut -d' ' -f1 || echo "")
    json_backup_add_file "$source_file" "success" "$file_size" "$checksum"
    echo "✓ Backed up: $(basename "$source_file")"
else
    json_backup_add_file "$source_file" "failed" "0" "" "Copy operation failed"
    echo "✗ Failed to backup: $(basename "$source_file")"
fi
```
```
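In a script that iterates over several sources, the same pattern fits naturally inside the loop. This is only a sketch: `files_to_backup` and `BACKUP_DIR` are placeholders for your script's own variables, and the error counter feeds the final status decision in step 6:

```bash
# Sketch: per-file tracking inside a backup loop (variable names are placeholders)
local backup_errors=0
for source_file in "${files_to_backup[@]}"; do
    local backup_file="$BACKUP_DIR/$(basename "$source_file")"
    if cp "$source_file" "$backup_file"; then
        local file_size checksum
        file_size=$(stat -c%s "$backup_file" 2>/dev/null || echo "0")
        checksum=$(md5sum "$backup_file" 2>/dev/null | cut -d' ' -f1 || echo "")
        json_backup_add_file "$source_file" "success" "$file_size" "$checksum"
    else
        json_backup_add_file "$source_file" "failed" "0" "" "Copy operation failed"
        backup_errors=$((backup_errors + 1))
    fi
done
```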
### 5. Track Performance Phases
Wrap major operations with timing:
```bash
# Start of backup phase
local phase_start=$(date +%s)
json_backup_update_status "backing_up_files"
# ... backup operations ...
# End of backup phase
json_backup_time_phase "backup" "$phase_start"
```
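Several phases can be timed back to back. In the sketch below the phase names are chosen to line up with the `performance` keys shown in the example `metrics.json` later in this guide; the exact names your logger accepts may differ:

```bash
# Sketch: timing consecutive phases (phase names are illustrative)
local phase_start

phase_start=$(date +%s)
json_backup_update_status "stopping_service"
# ... stop the service ...
json_backup_time_phase "service_stop" "$phase_start"

phase_start=$(date +%s)
json_backup_update_status "compressing"
# ... create the archive ...
json_backup_time_phase "compression" "$phase_start"
```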
### 6. Complete the Session
At the end of your backup function:
```bash
# Determine final status
local final_status="success"
local completion_message="Backup completed successfully"
if [ "$backup_errors" -gt 0 ]; then
final_status="partial"
completion_message="Backup completed with $backup_errors errors"
fi
# Complete JSON session
json_backup_complete "$final_status" "$completion_message"
```
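If your script can exit early (for example under `set -e` or on a signal), a trap can still close the session. This is a sketch that assumes `json_backup_complete` is safe to call from an exit handler; adapt it to any trap handling your script already has:

```bash
# Sketch: close the JSON session even on an unexpected exit.
finalize_json_session() {
    local exit_code=$?
    if [ "$exit_code" -ne 0 ]; then
        json_backup_complete "failed" "Backup aborted with exit code $exit_code"
    fi
}
trap finalize_json_session EXIT
```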
## Real-World Example Integration
Here's how to modify the existing `/home/acedanger/shell/plex/backup-plex.sh`:
### Minimal Changes Required:
1. **Add library import** (line ~60):
```bash
# Load JSON logging library for real-time metrics
source "$(dirname "$0")/../lib/backup-json-logger.sh" 2>/dev/null || true
```
2. **Initialize in main() function** (line ~1150):
```bash
# Initialize JSON logging
local json_enabled=false
if json_backup_init "plex" "$BACKUP_ROOT" "backup_$(date +%Y%m%d_%H%M%S)"; then
json_backup_start
json_enabled=true
log_message "Real-time JSON metrics enabled"
fi
```
3. **Update status calls** throughout the script (a wrapper to shorten these guarded calls is sketched after this list):
```bash
# Before: manage_plex_service stop
# After:  update the JSON status, then stop the service
[ "$json_enabled" = true ] && json_backup_update_status "stopping_service"
manage_plex_service stop
```
4. **Track file operations** in the backup loop (line ~1200):
```bash
if verify_backup "$file" "$backup_file"; then
    # Existing success logic
    [ "$json_enabled" = true ] && json_backup_add_file "$file" "success" "$file_size" "$checksum"
else
    # Existing error logic
    [ "$json_enabled" = true ] && json_backup_add_file "$file" "failed" "0" "" "Verification failed"
fi
```
5. **Complete session** at the end (line ~1460):
```bash
if [ "$json_enabled" = true ]; then
local final_status="success"
[ "$backup_errors" -gt 0 ] && final_status="partial"
json_backup_complete "$final_status" "Backup completed with $backup_errors errors"
fi
```
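Since every `json_backup_*` call in the example above is guarded by `$json_enabled`, a thin wrapper can keep the call sites short; `json_update` is a hypothetical helper, not part of the library:

```bash
# Hypothetical helper: forward a status update only when JSON logging is active.
json_update() {
    [ "$json_enabled" = true ] && json_backup_update_status "$1"
    return 0
}

json_update "stopping_service"
manage_plex_service stop
```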
## JSON Output Structure
The integration produces these files:
```
/mnt/share/media/backups/metrics/
├── plex/
│   ├── metrics.json      # Current status & latest backup info
│   └── history.json      # Historical backup sessions
├── immich/
│   ├── metrics.json
│   └── history.json
└── env-files/
    ├── metrics.json
    └── history.json
```
### Example metrics.json content:
```json
{
  "service_name": "plex",
  "backup_path": "/mnt/share/media/backups/plex",
  "current_session": {
    "session_id": "backup_20250605_143022",
    "status": "success",
    "start_time": {"epoch": 1749148222, "iso": "2025-06-05T14:30:22-04:00"},
    "end_time": {"epoch": 1749148302, "iso": "2025-06-05T14:31:42-04:00"},
    "duration_seconds": 80,
    "files_processed": 3,
    "files_successful": 3,
    "files_failed": 0,
    "total_size_bytes": 157286400,
    "total_size_human": "150MB",
    "performance": {
      "backup_phase_duration": 45,
      "compression_phase_duration": 25,
      "service_stop_duration": 5,
      "service_start_duration": 5
    }
  },
  "latest_backup": {
    "path": "/mnt/share/media/backups/plex/plex-backup-20250605_143022.tar.gz",
    "filename": "plex-backup-20250605_143022.tar.gz",
    "status": "success",
    "size_bytes": 157286400,
    "checksum": "abc123def456"
  },
  "generated_at": "2025-06-05T14:31:42-04:00"
}
```
## Benefits of This Approach
1. **Real-time Updates**: JSON files are updated during backup operations, not after
2. **Minimal Changes**: Existing scripts need only small modifications
3. **Backward Compatible**: Scripts continue to work even if JSON logging fails
4. **Standardized**: All backup services use the same JSON structure
5. **Web Ready**: JSON format is immediately usable by web applications (see the sketch after this list)
6. **Performance Tracking**: Detailed timing of each backup phase
7. **Error Handling**: Comprehensive error tracking and reporting
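
As a quick illustration of the "Standardized" and "Web Ready" points, the shared structure makes cross-service summaries trivial. A sketch using only `jq` against the directory layout shown above:

```bash
# Sketch: one-line status roll-up across all services
METRICS_ROOT="/mnt/share/media/backups/metrics"
for metrics_file in "$METRICS_ROOT"/*/metrics.json; do
    jq -r '"\(.service_name): \(.current_session.status) (\(.current_session.files_successful)/\(.current_session.files_processed) files)"' \
        "$metrics_file"
done
```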
## Testing the Integration
1. **Test with existing script**:
```bash
# Enable debug logging
export JSON_LOGGER_DEBUG=true
# Run backup
./your-backup-script.sh
# Check JSON output
cat /mnt/share/media/backups/metrics/your_service/metrics.json | jq '.'
```
2. **Monitor real-time updates**:
```bash
# Watch metrics file during backup
watch -n 2 'cat /mnt/share/media/backups/metrics/plex/metrics.json | jq ".current_session.status, .current_session.files_processed"'
```
This integration approach provides real-time backup monitoring while requiring minimal changes to existing, well-tested backup scripts.