Mirror of https://github.com/acedanger/shell.git (synced 2026-03-24 19:11:48 -07:00)

Commit: Merge branch 'main' of https://github.com/acedanger/shell
@@ -6,9 +6,10 @@ UPLOAD_LOCATION=/mnt/share/media/immich/uploads
 # Notification settings
 WEBHOOK_URL="https://notify.peterwood.rocks/lab"
 
+SHARED_BACKUP_DIR=/mnt/share/immich-backup
+
 # Backblaze B2 settings
 # Get these from your B2 account: https://secure.backblaze.com/app_keys.htm
-K005YB4icG3edh5Z9o64ieXvepEYWoA
 # B2_APPLICATION_KEY_ID=your_key_id_here
 # B2_APPLICATION_KEY=your_application_key_here
 # B2_BUCKET_NAME=your_bucket_name_here
@@ -636,4 +636,7 @@ def main():
 
 
 if __name__ == "__main__":
-    main()
+    try:
+        main()
+    finally:
+        console.show_cursor(True)
@@ -56,7 +56,7 @@ Complete backup script for Immich installation that creates backups of:
 
 **Backup Location:**
 
-**Primary Storage:** `/mnt/share/media/backups/immich/` (shared storage)
+**Primary Storage:** Configurable via `SHARED_BACKUP_DIR` in `.env` (default: `/mnt/share/media/backups/immich/`)
 
 - Database: `immich_db_backup_YYYYMMDD_HHMMSS.sql.gz`
 - Uploads: `immich_uploads_YYYYMMDD_HHMMSS.tar.gz`
@@ -66,17 +66,20 @@ Complete backup script for Immich installation that creates backups of:
 **Backup Workflow:**
 
 1. **Create local backups** in temporary directory (`../immich_backups/`)
-2. **Copy to shared storage** (`/mnt/share/media/backups/immich/`)
+2. **Copy to shared storage** (configured via `SHARED_BACKUP_DIR`)
 3. **Upload to Backblaze B2** (if configured)
 4. **Delete local copies** (shared storage copies retained)
 
 **Features:**
 
 - **Smart backup workflow**: Creates → Copies to shared storage → Uploads to B2 → Cleans up locally
+- **Configurable Storage**: Support for custom shared storage paths via `.env`
+- **Robust B2 Support**: Automatically detects system-installed `b2` CLI or local binary
 - Command-line options for flexible operation (--help, --dry-run, --no-upload, --verbose)
 - Dry-run mode to preview operations without executing
 - Option to skip B2 upload for local-only backups
-- **Shared storage integration**: Automatically copies backups to `/mnt/share/media/backups/immich/`
+- **Shared storage integration**: Automatically copies backups to shared storage
+- **Safety Checks**: Verifies mount points and storage availability before writing
 - **Local cleanup**: Removes temporary files after successful copy to shared storage
 - Automatic container pausing/resuming during backup
 - Comprehensive error handling and cleanup
@@ -158,11 +161,11 @@ Container Status Check:
 
 B2 Upload Configuration:
   ✓ B2 configured - would upload to bucket: my-immich-backups
-  ✓ B2 CLI found at: /home/acedanger/shell/immich/b2-linux
+  ✓ B2 CLI found at: /usr/bin/b2
 
 Shared Storage Check:
   ✓ Shared storage accessible: /mnt/share/media/backups
-  ✓ Shared storage writable - would copy backups before B2 upload
+  ✓ Shared storage writable - would copy backups to /mnt/share/media/backups/immich/
 
 === DRY RUN COMPLETE - No files were created or modified ===
 ```
@@ -53,13 +53,13 @@ cleanup() {
 trap cleanup EXIT SIGINT SIGTERM
 
 # Load environment variables from the .env file
-ENV_FILE="$(dirname "$0")/../.env"
+ENV_FILE="${SCRIPT_DIR}/../.env"
 if [ -f "$ENV_FILE" ]; then
     echo "Loading environment variables from $ENV_FILE"
     # shellcheck source=/dev/null
     source "$ENV_FILE"
 else
-    echo "Error: .env file not found in $(dirname "$0")/.."
+    echo "Error: .env file not found in ${SCRIPT_DIR}/.."
     exit 1
 fi
 
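The hunk above swaps `$(dirname "$0")` for `${SCRIPT_DIR}`, which the script is assumed to define near the top (the definition itself is outside this diff). A minimal sketch of the common pattern, assuming `bash`:

```shell
#!/bin/bash
# Resolve the directory containing this script, independent of the caller's
# working directory. BASH_SOURCE[0] is preferred over $0 so the path is also
# correct when the file is sourced rather than executed.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Because the result is absolute, relative lookups such as
# "${SCRIPT_DIR}/../.env" work no matter where the script is invoked from.
echo "$SCRIPT_DIR"
```

Unlike `$(dirname "$0")`, this yields an absolute path even when the script is started as `./backup.sh`, which is why the later `.env` and log-directory lookups become reliable.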
@@ -109,6 +109,7 @@ EXAMPLES:
     $(basename "$0") --help          # Show this help
     $(basename "$0") --dry-run       # Preview backup without executing
     $(basename "$0") --no-upload     # Backup locally only (skip B2)
+    $(basename "$0") --upload-only   # Only upload the latest existing backup to B2
 
 RESTORE INSTRUCTIONS:
     https://immich.app/docs/administration/backup-and-restore/
@@ -119,6 +120,7 @@ EOF
 # Parse command line arguments
 DRY_RUN=false
 NO_UPLOAD=false
+UPLOAD_ONLY=false
 VERBOSE=false
 
 while [[ $# -gt 0 ]]; do
@@ -135,6 +137,10 @@ while [[ $# -gt 0 ]]; do
         NO_UPLOAD=true
         shift
         ;;
+    --upload-only)
+        UPLOAD_ONLY=true
+        shift
+        ;;
     --verbose)
         VERBOSE=true
         shift
@@ -148,7 +154,13 @@ while [[ $# -gt 0 ]]; do
 done
 
 # B2 CLI tool path
-B2_CLI="$(dirname "$0")/b2-linux"
+if [ -f "${SCRIPT_DIR}/b2-linux" ]; then
+    B2_CLI="${SCRIPT_DIR}/b2-linux"
+elif command -v b2 &> /dev/null; then
+    B2_CLI=$(command -v b2)
+else
+    B2_CLI="${SCRIPT_DIR}/b2-linux"
+fi
 
 # Notification function
 send_notification() {
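The detection logic in the hunk above (prefer a bundled binary, fall back to whatever is on `PATH`, and default back to the bundled path so later errors name the expected location) generalizes to any tool. A sketch under those assumptions — `resolve_tool` is a hypothetical helper, not part of the actual script:

```shell
#!/bin/bash
# Prefer a local copy of a tool; otherwise fall back to the system PATH.
# Mirrors the B2_CLI selection logic: a bundled binary next to the script
# wins, a system-wide install is second choice, and the bundled path is
# returned as a last resort so failure messages point at the expected file.
resolve_tool() {
    local local_copy="$1" name="$2"
    if [ -f "$local_copy" ]; then
        echo "$local_copy"
    elif command -v "$name" > /dev/null 2>&1; then
        command -v "$name"
    else
        echo "$local_copy"
    fi
}

# "sh" stands in for "b2" here purely so the sketch is runnable anywhere.
TOOL=$(resolve_tool "/nonexistent/b2-linux" "sh")
echo "$TOOL"
```

The final `else` branch deliberately returns the missing local path rather than an empty string: a later `"$B2_CLI" ...` invocation then fails with a filename the user can act on.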
@@ -198,17 +210,40 @@ upload_to_b2() {
     log_message "Uploading $filename to B2 bucket: $B2_BUCKET_NAME"
 
     # Authorize B2 account
-    if ! "$B2_CLI" authorize-account "$B2_APPLICATION_KEY_ID" "$B2_APPLICATION_KEY" 2>/dev/null; then
+    local auth_output
+    if ! auth_output=$("$B2_CLI" authorize-account "$B2_APPLICATION_KEY_ID" "$B2_APPLICATION_KEY" 2>&1); then
         log_message "Error: Failed to authorize B2 account"
+        log_message "B2 Output: $auth_output"
         return 1
     fi
 
     # Upload file to B2
-    if "$B2_CLI" upload-file "$B2_BUCKET_NAME" "$file_path" "immich-backups/$filename" 2>/dev/null; then
+    local temp_log
+    temp_log=$(mktemp)
+
+    # Enable pipefail to catch b2 exit code through tee
+    set -o pipefail
+
+    # Use --threads 4 to avoid "More than one concurrent upload using auth token" error
+    # which can happen with default thread count on large files
+    if "$B2_CLI" file upload --threads 4 "$B2_BUCKET_NAME" "$file_path" "immich-backups/$filename" 2>&1 | tee "$temp_log"; then
+        set +o pipefail
+        rm "$temp_log"
         log_message "✅ Successfully uploaded $filename to B2"
         return 0
     else
+        local exit_code=$?
+        set +o pipefail
         log_message "❌ Failed to upload $filename to B2"
+
+        # Log the last few lines of output to capture the error message
+        # avoiding the progress bar spam
+        local error_msg
+        error_msg=$(tail -n 20 "$temp_log")
+        log_message "B2 Output (last 20 lines):"
+        log_message "$error_msg"
+
+        rm "$temp_log"
         return 1
     fi
 }
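The `set -o pipefail` added around the upload pipeline matters because a pipeline's exit status is normally that of its *last* command: `b2 ... | tee log` would report success whenever `tee` succeeds, masking an upload failure. A minimal sketch of the difference, using `false` as a stand-in for a failing upload:

```shell
#!/bin/bash
# Without pipefail, the pipeline's status is tee's status (success),
# so the failure of `false` is invisible to the if-statement.
if false | tee /dev/null; then without_pipefail=ok; else without_pipefail=fail; fi

# With pipefail, any failing stage makes the whole pipeline fail,
# which is what lets the script's error branch (and the temp log) fire.
set -o pipefail
if false | tee /dev/null; then with_pipefail=ok; else with_pipefail=fail; fi
set +o pipefail

echo "without=$without_pipefail with=$with_pipefail"
```

Scoping `pipefail` tightly around the pipeline, as the diff does, avoids surprising other pipelines in the script whose non-final stages may legitimately fail.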
@@ -217,7 +252,7 @@ upload_to_b2() {
 IMMICH_SERVER_RUNNING=true
 
 # Set up logging to central logs directory
-LOG_DIR="$(dirname "$0")/../logs"
+LOG_DIR="${SCRIPT_DIR}/../logs"
 mkdir -p "$LOG_DIR"
 LOG_FILE="${LOG_DIR}/immich-backup.log"
 
@@ -226,15 +261,48 @@ log_message() {
     echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
 }
 
-# Function to log without timestamp (for progress/status)
+# Function to log status (wrapper for log_message)
 log_status() {
-    echo "$1" | tee -a "$LOG_FILE"
+    log_message "$1"
 }
 
 # Create backup directory if it doesn't exist
 BACKUP_DIR="$(dirname "$0")/../immich_backups"
 mkdir -p "$BACKUP_DIR"
 
+# Shared backup directory (can be overridden in .env)
+SHARED_BACKUP_DIR="${SHARED_BACKUP_DIR:-/mnt/share/media/backups/immich}"
+
+# Handle upload-only mode
+if [ "$UPLOAD_ONLY" = true ]; then
+    log_message "=== UPLOAD ONLY MODE ==="
+    log_message "Skipping backup creation, looking for latest backups in $SHARED_BACKUP_DIR"
+
+    # Find latest database backup
+    LATEST_DB=$(ls -t "$SHARED_BACKUP_DIR"/immich_db_backup_*.sql.gz 2>/dev/null | head -n1)
+    if [ -f "$LATEST_DB" ]; then
+        log_message "Found latest database backup: $LATEST_DB"
+        upload_to_b2 "$LATEST_DB"
+    else
+        log_message "Warning: No database backup found in $SHARED_BACKUP_DIR"
+    fi
+
+    # Find latest uploads backup
+    LATEST_UPLOADS=$(ls -t "$SHARED_BACKUP_DIR"/immich_uploads_*.tar.gz 2>/dev/null | head -n1)
+    if [ -f "$LATEST_UPLOADS" ]; then
+        log_message "Found latest uploads backup: $LATEST_UPLOADS"
+        upload_to_b2 "$LATEST_UPLOADS"
+    else
+        log_message "Warning: No uploads backup found in $SHARED_BACKUP_DIR"
+    fi
+
+    log_message "Upload only mode completed."
+    exit 0
+fi
+
+# Create backup directory if it doesn't exist
+BACKUP_DIR="${SCRIPT_DIR}/../immich_backups"
+mkdir -p "$BACKUP_DIR"
+
 # Generate timestamp for the backup filename
 TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
 DB_BACKUP_FILENAME="immich_db_backup_${TIMESTAMP}.sql"
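The upload-only mode above selects the newest backup with `ls -t ... | head -n1`. A runnable sketch of that selection against fabricated timestamps (the directory and filenames here are throwaway stand-ins; note this pattern is fine for the script's own predictable filenames, but `ls` parsing is fragile for arbitrary names):

```shell
#!/bin/bash
# Demonstrate "pick the most recent matching file" as used by upload-only mode.
demo_dir=$(mktemp -d)

# touch -t backdates the mtime so the sort order is deterministic.
touch -t 202001010000 "$demo_dir/immich_db_backup_20200101_000000.sql.gz"
touch -t 202401010000 "$demo_dir/immich_db_backup_20240101_000000.sql.gz"

# ls -t sorts newest-first; head -n1 keeps only the most recent match.
# 2>/dev/null keeps things quiet when no backup exists yet, in which case
# LATEST_DB is empty and the caller's [ -f ... ] check fails gracefully.
LATEST_DB=$(ls -t "$demo_dir"/immich_db_backup_*.sql.gz 2>/dev/null | head -n1)
echo "$LATEST_DB"

rm -rf "$demo_dir"
```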
@@ -307,15 +375,16 @@ if [ "$DRY_RUN" = true ]; then
     # Check shared storage directory
     echo ""
     echo "Shared Storage Check:"
-    if [ -d "/mnt/share/media/backups" ]; then
-        echo "  ✓ Shared storage accessible: /mnt/share/media/backups"
-        if [ -w "/mnt/share/media/backups" ]; then
-            echo "  ✓ Shared storage writable - would copy backups before B2 upload"
+    SHARED_PARENT=$(dirname "$SHARED_BACKUP_DIR")
+    if [ -d "$SHARED_PARENT" ]; then
+        echo "  ✓ Shared storage accessible: $SHARED_PARENT"
+        if [ -w "$SHARED_PARENT" ]; then
+            echo "  ✓ Shared storage writable - would copy backups to $SHARED_BACKUP_DIR"
         else
             echo "  ⚠ Shared storage not writable - backups would remain in ${BACKUP_DIR}"
         fi
     else
-        echo "  ⚠ Shared storage not accessible: /mnt/share/media/backups"
+        echo "  ⚠ Shared storage not accessible: $SHARED_PARENT"
         echo "    Backups would remain in ${BACKUP_DIR}"
     fi
 
@@ -380,7 +449,7 @@ fi
 
 log_message "Taking database backup using pg_dumpall as recommended by Immich documentation..."
 # Use pg_dumpall with recommended flags: --clean and --if-exists
-if ! docker exec -t immich_postgres pg_dumpall \
+if ! docker exec immich_postgres pg_dumpall \
     --clean \
     --if-exists \
     --username="${DB_USERNAME}" \
@@ -431,8 +500,9 @@ log_message "Creating compressed archive of upload directory..."
 log_message "This may take a while depending on the size of your media library..."
 
 # Use tar with progress indication and exclude any existing backup files in the upload location
-if ! tar --exclude="${UPLOAD_LOCATION}/backups/*.tar.gz" \
-    --exclude="${UPLOAD_LOCATION}/backups/*.sql.gz" \
+# Note: Exclude patterns must match the relative path structure used by -C
+if ! tar --exclude="$(basename "${UPLOAD_LOCATION}")/backups/*.tar.gz" \
+    --exclude="$(basename "${UPLOAD_LOCATION}")/backups/*.sql.gz" \
     -czf "${UPLOAD_BACKUP_PATH}" \
     -C "$(dirname "${UPLOAD_LOCATION}")" \
     "$(basename "${UPLOAD_LOCATION}")"; then
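The fix above works because with `-C`, tar records member names *relative* to that directory, and `--exclude` patterns are matched against those recorded names, not the absolute source paths. A self-contained sketch (throwaway directory names, assuming GNU tar):

```shell
#!/bin/bash
# Show that excludes must match the -C-relative member names.
work=$(mktemp -d)
mkdir -p "$work/uploads/backups" "$work/uploads/library"
echo data > "$work/uploads/library/photo.jpg"
echo old  > "$work/uploads/backups/old.tar.gz"

# With -C "$work", members are stored as "uploads/...", so the exclude
# pattern must be relative ("uploads/backups/*.tar.gz"); an absolute
# pattern like "$work/uploads/backups/*.tar.gz" would never match.
tar --exclude="uploads/backups/*.tar.gz" \
    -czf "$work/archive.tar.gz" \
    -C "$work" \
    uploads

listing=$(tar -tzf "$work/archive.tar.gz")
echo "$listing"
rm -rf "$work"
```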
@@ -454,22 +524,22 @@ if [ "${IMMICH_SERVER_RUNNING:-true}" = true ]; then
         log_message "Note: No need to unpause immich_server container."
     fi
 fi
 
-echo ""
-echo "=== COPYING BACKUPS TO SHARED STORAGE ==="
 
 # Update metrics for shared storage phase
 if [[ "$METRICS_ENABLED" == "true" ]]; then
     metrics_update_status "running" "Copying backups to shared storage"
 fi
 
-SHARED_BACKUP_DIR="/mnt/share/media/backups/immich"
-
 # Initialize COPY_SUCCESS before use
 COPY_SUCCESS=false
 
+# Check if the parent directory of the shared backup dir exists (basic mount check)
+SHARED_PARENT=$(dirname "$SHARED_BACKUP_DIR")
+if [ ! -d "$SHARED_PARENT" ]; then
+    log_message "Warning: Shared storage parent directory not found: $SHARED_PARENT"
+    log_message "Backup files remain only in: $BACKUP_DIR"
+    COPY_SUCCESS=false
 # Create shared backup directory if it doesn't exist
-if ! mkdir -p "$SHARED_BACKUP_DIR"; then
+elif ! mkdir -p "$SHARED_BACKUP_DIR"; then
     log_message "Warning: Failed to create shared backup directory: $SHARED_BACKUP_DIR"
     log_message "Backup files remain only in: $BACKUP_DIR"
     COPY_SUCCESS=false
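The mount check above deliberately tests the *parent* of the backup directory: if the share is not mounted, `mkdir -p "$SHARED_BACKUP_DIR"` would silently create the whole tree on the local root filesystem, and the "backups" would never reach the share. Checking `dirname` of the target first catches that. A minimal runnable sketch:

```shell
#!/bin/bash
# dirname strips the last path component, giving the mount point to test.
SHARED_BACKUP_DIR="/mnt/share/media/backups/immich"
SHARED_PARENT=$(dirname "$SHARED_BACKUP_DIR")
echo "$SHARED_PARENT"

# The script then only runs `mkdir -p "$SHARED_BACKUP_DIR"` when
# [ -d "$SHARED_PARENT" ] holds, i.e. when the share appears mounted.
```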
@@ -533,12 +603,12 @@ if [ "$NO_UPLOAD" = true ]; then
     B2_UPLOAD_SUCCESS="skipped"
 else
     echo "=== UPLOADING TO BACKBLAZE B2 ==="
 
     # Update metrics for B2 upload phase
     if [[ "$METRICS_ENABLED" == "true" ]]; then
         metrics_update_status "running" "Uploading backups to Backblaze B2"
     fi
 
     B2_UPLOAD_SUCCESS=true
 
     # Upload database backup from local location
|||||||
97
jellyfin/repair_jellyfin_db.sh
Executable file
97
jellyfin/repair_jellyfin_db.sh
Executable file
@@ -0,0 +1,97 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
set -e
|
||||||
|
|
||||||
|
# Configuration
|
||||||
|
CONTAINER_NAME="jellyfin"
|
||||||
|
DB_PATH_IN_CONTAINER="/config/data"
|
||||||
|
DB_FILES=("library.db" "jellyfin.db")
|
||||||
|
BACKUP_DIR="/tmp/jellyfin_db_backup_$(date +%Y%m%d_%H%M%S)"
|
||||||
|
REPAIR_DIR="/tmp/jellyfin_db_repair"
|
||||||
|
|
||||||
|
# --- Functions ---
|
||||||
|
|
||||||
|
# Function to print messages
|
||||||
|
log() {
|
||||||
|
echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Function to stop the Jellyfin container
|
||||||
|
stop_container() {
|
||||||
|
log "Stopping Jellyfin container..."
|
||||||
|
docker stop "$CONTAINER_NAME"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Function to start the Jellyfin container
|
||||||
|
start_container() {
|
||||||
|
log "Starting Jellyfin container..."
|
||||||
|
docker start "$CONTAINER_NAME"
|
||||||
|
}
|
||||||
|
|
||||||
|
# Function to create a backup of the database files
|
||||||
|
backup_database() {
|
||||||
|
log "Backing up database files to $BACKUP_DIR..."
|
||||||
|
mkdir -p "$BACKUP_DIR"
|
||||||
|
for db_file in "${DB_FILES[@]}"; do
|
||||||
|
docker cp "${CONTAINER_NAME}:${DB_PATH_IN_CONTAINER}/${db_file}" "$BACKUP_DIR/"
|
||||||
|
done
|
||||||
|
}
|
||||||
|
|
||||||
|
# Function to repair a database file
|
||||||
|
repair_database() {
|
||||||
|
local db_file="$1"
|
||||||
|
local db_path_in_repair_dir="${REPAIR_DIR}/${db_file}"
|
||||||
|
local sql_dump_file="${REPAIR_DIR}/${db_file}.sql"
|
||||||
|
local new_db_file="${REPAIR_DIR}/${db_file}.new"
|
||||||
|
|
||||||
|
log "Repairing ${db_file}..."
|
||||||
|
|
||||||
|
# Check for corruption
|
||||||
|
log "Running integrity check on ${db_file}..."
|
||||||
|
if sqlite3 "$db_path_in_repair_dir" "PRAGMA integrity_check;" | grep -q "ok"; then
|
||||||
|
log "${db_file} is not corrupted. Skipping repair."
|
||||||
|
return
|
||||||
|
fi
|
||||||
|
|
||||||
|
log "Dumping ${db_file} to SQL file..."
|
||||||
|
sqlite3 "$db_path_in_repair_dir" .dump > "$sql_dump_file"
|
||||||
|
|
||||||
|
log "Creating new database from SQL dump..."
|
||||||
|
sqlite3 "$new_db_file" < "$sql_dump_file"
|
||||||
|
|
||||||
|
log "Replacing old database with the new one..."
|
||||||
|
mv "$new_db_file" "$db_path_in_repair_dir"
|
||||||
|
}
|
||||||
|
|
||||||
|
# --- Main Script ---
|
||||||
|
|
||||||
|
# Stop the container
|
||||||
|
stop_container
|
||||||
|
|
||||||
|
# Create repair directory
|
||||||
|
mkdir -p "$REPAIR_DIR"
|
||||||
|
|
||||||
|
# Copy database files to repair directory
|
||||||
|
log "Copying database files to repair directory..."
|
||||||
|
for db_file in "${DB_FILES[@]}"; do
|
||||||
|
docker cp "${CONTAINER_NAME}:${DB_PATH_IN_CONTAINER}/${db_file}" "$REPAIR_DIR/"
|
||||||
|
done
|
||||||
|
|
||||||
|
# Repair each database file
|
||||||
|
for db_file in "${DB_FILES[@]}"; do
|
||||||
|
repair_database "$db_file"
|
||||||
|
done
|
||||||
|
|
||||||
|
# Copy repaired files back to the container
|
||||||
|
log "Copying repaired files back to the container..."
|
||||||
|
for db_file in "${DB_FILES[@]}"; do
|
||||||
|
docker cp "${REPAIR_DIR}/${db_file}" "${CONTAINER_NAME}:${DB_PATH_IN_CONTAINER}/${db_file}"
|
||||||
|
done
|
||||||
|
|
||||||
|
# Clean up repair directory
|
||||||
|
rm -rf "$REPAIR_DIR"
|
||||||
|
|
||||||
|
# Start the container
|
||||||
|
start_container
|
||||||
|
|
||||||
|
log "Database repair process completed."
|
||||||