Mirror of https://github.com/acedanger/shell.git (synced 2025-12-06 05:40:11 -08:00)

Compare commits: 13 commits, c9d13b940b...main
| Author | SHA1 | Date |
|---|---|---|
| | ddf83a6564 | |
| | 144dc6acb5 | |
| | aa1cfbebf9 | |
| | 6aa087cf0a | |
| | bb945ebd42 | |
| | e2112206a5 | |
| | 1287168961 | |
| | 645d10d548 | |
| | fa44ab2e45 | |
| | 40cbecdebf | |
| | 8ceeeda560 | |
| | deb66207b3 | |
| | 5b17022856 | |
.github/prompts/removefabric.prompt.md (vendored, Normal file, 11 lines)
@@ -0,0 +1,11 @@
Create a portable bash shell script to safely uninstall the Fabric AI CLI and related packages on Debian, Ubuntu, and Fedora systems. The script must:

- Detect the operating system and select the appropriate package manager (`apt`, `dnf`, or `yum`).
- Uninstall Fabric packages installed via system package managers and Python package managers (`pip`, `pip3`).
- Check for errors after each removal step; abort the script if a critical error occurs.
- Prompt the user for confirmation before making any changes.
- Advise the user to reboot the system if required after uninstallation.
- Log all actions and errors to a user-specified log file.
- Be fully self-contained and compatible with bash.

Reference the official [Fabric documentation](https://github.com/danielmiessler/Fabric) and your distribution’s package manager documentation for implementation details.
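The detection requirement in the prompt above can be sketched as follows. This is a minimal illustration only: the helper name `pkg_mgr_for_id` and the reliance on the `ID` field of `/etc/os-release` are assumptions of this sketch, not part of the prompt or the repository.

```shell
#!/bin/bash
# Sketch only: map an os-release ID to the package manager the prompt asks for.
# pkg_mgr_for_id is a hypothetical helper name, not from the repository.
pkg_mgr_for_id() {
    case "$1" in
        debian|ubuntu) echo "apt" ;;
        fedora)
            # Newer Fedora ships dnf; fall back to yum on older systems.
            if command -v dnf >/dev/null 2>&1; then echo "dnf"; else echo "yum"; fi
            ;;
        *) echo "unknown"; return 1 ;;
    esac
}

# ID is normally sourced from /etc/os-release on the target machine.
OS_ID=$(. /etc/os-release 2>/dev/null && echo "$ID" || true)
echo "Detected package manager: $(pkg_mgr_for_id "${OS_ID:-unknown}")"
```

The `case` keeps the mapping in one place, so supporting another distribution is a one-line change.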
.gitignore (vendored, 2 changes)
@@ -23,6 +23,7 @@ _book

# Runtime generated files
logs/
uninstall-fabric.log
immich_backups/*.gz
# Backup files - ignore most backups but keep current state files
crontab/crontab-backups/*/archive/
@@ -36,6 +37,7 @@ crontab/crontab-backups/*/archive/
# can be downloaded from <https://github.com/Backblaze/B2_Command_Line_Tool/releases/latest/download/b2-linux>
immich/b2-linux

# Generated dotfiles - these are created dynamically by bootstrap process
dotfiles/my-aliases.zsh
README.md (62 changes)
@@ -7,7 +7,6 @@ This repository contains various shell scripts for managing media-related tasks

- **[Backup Scripts](#backup-scripts)** - Enterprise-grade backup solutions
- **[Management Scripts](#management-scripts)** - System and service management
- **[Security](#security)** - Comprehensive security framework and standards
- **[AI Integration](#ai-integration)** - Fabric setup for AI-assisted development
- **[Tab Completion](#tab-completion)** - Intelligent command-line completion
- **[Documentation](#comprehensive-documentation)** - Complete guides and references
- **[Testing](#testing)** - Docker-based validation framework

@@ -72,51 +71,6 @@ All scripts undergo comprehensive security validation:

For security-related changes, refer to the security documentation and follow the established security checklist.

## AI Integration

This repository includes a complete AI development environment with Fabric integration for AI-assisted development tasks.

### Fabric Setup

The system includes:

- **Fabric v1.4.195** with 216+ AI patterns for text processing
- **Google Gemini 2.5 Pro** as primary AI provider
- **External AI providers** support for flexibility
- **Custom shell configuration** for optimal development experience

### Basic Fabric Usage

```bash
# List all available patterns
fabric -l

# Use a pattern (configure your preferred AI provider)
echo "Your text here" | fabric -p summarize

# Use with specific model
echo "Your text here" | fabric -p summarize -m gemini-2.0-flash-exp

# Update patterns
fabric -U
```

### Popular AI Patterns

- `summarize` - Summarize text content
- `explain_code` - Explain code snippets and logic
- `improve_writing` - Enhance writing quality and clarity
- `extract_wisdom` - Extract key insights from content
- `create_quiz` - Generate quiz questions from text
- `analyze_claims` - Analyze and fact-check claims

### Configuration Files

- **Fabric config**: `~/.config/fabric/.env` - AI provider settings and API keys
- **Shell config**: `~/.zshrc` - Main shell configuration

For complete setup instructions, see the setup documentation.

### Development Projects

- **[Telegram Backup Monitoring Bot](./telegram/github-issues/README.md)**: Comprehensive Telegram bot project for monitoring and managing all backup systems with real-time notifications and control capabilities.

@@ -423,22 +377,6 @@ This installs:

- Tab completion for all scripts
- Development tools (Node.js via nvm, VS Code, etc.)

### AI Development Environment

For AI-assisted development, the system includes:

- **Fabric** with 216+ AI patterns for text processing
- **Google Gemini integration** as primary AI provider
- **External AI provider support** for flexibility
- **Custom configuration** for easy management

Test the AI setup:

```bash
# Test Fabric integration
echo "Test text" | fabric -p summarize
```

## Dotfiles

The repository includes dotfiles for system configuration in the `dotfiles` directory. These can be automatically set up using the bootstrap script:
backup-gitea.sh (376 changes, Normal file → Executable file)
@@ -1,8 +1,7 @@
#!/bin/bash

# backup-gitea.sh - Backup Gitea data and PostgreSQL database
# Author: Shell Repository
# Description: Comprehensive backup solution for Gitea with PostgreSQL database
# backup-gitea.sh - Backup Gitea, Postgres, and Runner
# Enhanced for NAS support and Runner integration

set -e

@@ -13,255 +12,214 @@ RED='\033[0;31m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Configuration
# ==========================================
# 1. CONFIGURATION
# ==========================================
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

COMPOSE_DIR="/home/acedanger/docker/gitea"

BACKUP_DIR="/home/acedanger/backups/gitea"
COMPOSE_DIR="/home/acedanger/docker/gitea"
NAS_DIR="/mnt/share/media/backups/gitea"
COMPOSE_FILE="$COMPOSE_DIR/docker-compose.yml"
LOG_FILE="$SCRIPT_DIR/logs/gitea-backup.log"
DATE=$(date +%Y%m%d_%H%M%S)

# Ensure logs directory exists
# Ensure directories exist
mkdir -p "$(dirname "$LOG_FILE")"
mkdir -p "$BACKUP_DIR"

# Logging function
# Load .env variables from the COMPOSE_DIR to ensure DB credentials match
if [ -f "$COMPOSE_DIR/.env" ]; then
    export $(grep -v '^#' "$COMPOSE_DIR/.env" | xargs)
fi

# Logging function (Fixed to interpret colors correctly)
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE"
    # Print to console with colors (interpreting escapes with -e)
    echo -e "$(date '+%Y-%m-%d %H:%M:%S') - $1"
    # Strip colors for the log file to keep it clean
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | sed 's/\x1b\[[0-9;]*m//g' >> "$LOG_FILE"
}
# Display usage information
usage() {
    echo "Usage: $0 [OPTIONS]"
    echo ""
    echo "Backup Gitea data and PostgreSQL database"
    echo ""
    echo "Backup Gitea data, Runner, and PostgreSQL database"
    echo "Options:"
    echo "  -h, --help           Show this help message"
    echo "  -d, --dry-run        Show what would be backed up without doing it"
    echo "  -f, --force          Force backup even if one was recently created"
    echo "  -r, --restore FILE   Restore from specified backup directory"
    echo "  -l, --list           List available backups"
    echo "  -c, --cleanup        Clean up old backups (keeps last 7 days)"
    echo "  --keep-days DAYS     Number of days to keep backups (default: 7)"
    echo "  -l, --list           List available local backups"
    echo "  -n, --no-nas         Skip copying to NAS (Local only)"
    echo ""
    echo "Examples:"
    echo "  $0                             # Regular backup"
    echo "  $0 --dry-run                   # See what would be backed up"
    echo "  $0 --list                      # List available backups"
    echo "  $0 --restore /path/to/backup   # Restore from backup"
}
# Check dependencies
check_dependencies() {
    local missing_deps=()

    command -v docker >/dev/null 2>&1 || missing_deps+=("docker")
    command -v docker-compose >/dev/null 2>&1 || missing_deps+=("docker-compose")

    if [ ${#missing_deps[@]} -ne 0 ]; then
        echo -e "${RED}Error: Missing required dependencies: ${missing_deps[*]}${NC}"
        echo "Please install the missing dependencies and try again."
    if ! command -v docker &> /dev/null; then
        log "${RED}Error: docker is not installed.${NC}"
        exit 1
    fi

    # Check if docker-compose file exists

    # Verify the compose file exists where we expect it
    if [ ! -f "$COMPOSE_FILE" ]; then
        echo -e "${RED}Error: Docker compose file not found at $COMPOSE_FILE${NC}"
        log "${RED}Error: Docker compose file not found at: $COMPOSE_FILE${NC}"
        log "${YELLOW}Please update the COMPOSE_DIR variable in this script.${NC}"
        exit 1
    fi

    # Check if we can access Docker
    if ! docker info >/dev/null 2>&1; then
        echo -e "${RED}Error: Cannot access Docker. Check if Docker is running and you have permissions.${NC}"
        exit 1
    fi
}
# Check if Gitea services are running
check_gitea_services() {
    cd "$COMPOSE_DIR"

    if ! docker-compose ps | grep -q "Up"; then
        echo -e "${YELLOW}Warning: Gitea services don't appear to be running${NC}"
        echo "Some backup operations may fail if services are not running."
        read -p "Continue anyway? (y/N): " -n 1 -r
        echo
        if [[ ! $REPLY =~ ^[Yy]$ ]]; then
            echo "Backup cancelled"
            exit 1
        fi
    fi
}
# List available backups
list_backups() {
    echo -e "${BLUE}=== Available Gitea Backups ===${NC}"

    if [ ! -d "$BACKUP_DIR" ]; then
        echo -e "${YELLOW}No backup directory found at $BACKUP_DIR${NC}"
        return 0
    fi

    local count=0

    # Find backup directories
    for backup_path in "$BACKUP_DIR"/gitea_backup_*; do
        if [ -d "$backup_path" ]; then
            local backup_name
            backup_name=$(basename "$backup_path")
            local backup_date
            backup_date=$(echo "$backup_name" | sed 's/gitea_backup_//' | sed 's/_/ /')
            local size
            size=$(du -sh "$backup_path" 2>/dev/null | cut -f1)
            local info_file="$backup_path/backup_info.txt"

            echo -e "${GREEN}📦 $backup_name${NC}"
            echo "   Date: $backup_date"
            echo "   Size: $size"
            echo "   Path: $backup_path"

            if [ -f "$info_file" ]; then
                local gitea_version
                gitea_version=$(grep "Gitea Version:" "$info_file" 2>/dev/null | cut -d: -f2- | xargs)
                if [ -n "$gitea_version" ]; then
                    echo "   Version: $gitea_version"
                fi
            fi

            echo ""
            count=$((count + 1))
        fi
    done

    if [ $count -eq 0 ]; then
        echo -e "${YELLOW}No backups found in $BACKUP_DIR${NC}"
        echo "Run a backup first to create one."
    else
        echo -e "${BLUE}Total backups found: $count${NC}"
    fi
    ls -lh "$BACKUP_DIR"/*.tar.gz 2>/dev/null || echo "No backups found."
}
# Change to compose directory
cd "$COMPOSE_DIR"
# ==========================================
# 2. BACKUP LOGIC
# ==========================================
perform_backup() {
    local SKIP_NAS=$1

    log "Starting backup process..."

    # Create timestamped backup directory
    BACKUP_PATH="$BACKUP_DIR/gitea_backup_$DATE"
    mkdir -p "$BACKUP_PATH"
    # Switch context to the directory where Gitea is actually running
    cd "$COMPOSE_DIR" || { log "${RED}Could not change to directory $COMPOSE_DIR${NC}"; exit 1; }

    # Backup PostgreSQL database
    echo "Backing up PostgreSQL database..."
    docker-compose exec -T db pg_dump -U ${POSTGRES_USER:-gitea} ${POSTGRES_DB:-gitea} > "$BACKUP_PATH/database.sql"
    # PRE-FLIGHT CHECK: Is the DB actually running?
    if ! docker compose ps --services --filter "status=running" | grep -q "db"; then
        log "${RED}CRITICAL ERROR: The 'db' service is not running in $COMPOSE_DIR${NC}"
        log "${YELLOW}Docker sees these running services:$(docker compose ps --services --filter "status=running" | xargs)${NC}"
        log "Aborting backup to prevent empty files."
        exit 1
    fi

    # Backup Gitea data volume
    echo "Backing up Gitea data volume..."
    docker run --rm \
        -v gitea_gitea:/data:ro \
        -v "$BACKUP_PATH":/backup \
        alpine:latest \
        tar czf /backup/gitea_data.tar.gz -C /data .
    # Create a temporary staging directory for this specific backup
    TEMP_BACKUP_PATH="$BACKUP_DIR/temp_$DATE"
    mkdir -p "$TEMP_BACKUP_PATH"

    # Backup PostgreSQL data volume (optional, as we have the SQL dump)
    echo "Backing up PostgreSQL data volume..."
    docker run --rm \
        -v gitea_postgres:/data:ro \
        -v "$BACKUP_PATH":/backup \
        alpine:latest \
        tar czf /backup/postgres_data.tar.gz -C /data .
    # 1. Backup Database
    log "Step 1/5: Dumping PostgreSQL database..."
    # Using -T to disable TTY allocation (fixes some cron issues)
    if docker compose exec -T db pg_dump -U "${POSTGRES_USER:-gitea}" "${POSTGRES_DB:-gitea}" > "$TEMP_BACKUP_PATH/database.sql"; then
        echo -e "${GREEN}Database dump successful.${NC}"
    else
        log "${RED}Database dump failed!${NC}"
        rm -rf "$TEMP_BACKUP_PATH"
        exit 1
    fi

    # Copy docker-compose configuration
    echo "Backing up configuration files..."
    cp "$COMPOSE_FILE" "$BACKUP_PATH/"
    if [ -f ".env" ]; then
        cp ".env" "$BACKUP_PATH/"
    fi
    # 2. Backup Gitea Data
    log "Step 2/5: Backing up Gitea data volume..."
    docker run --rm \
        --volumes-from gitea \
        -v "$TEMP_BACKUP_PATH":/backup \
        alpine tar czf /backup/gitea_data.tar.gz -C /data .

    # Create a restore script
    cat > "$BACKUP_PATH/restore.sh" << 'EOF'
    # 3. Backup Runner Data
    log "Step 3/5: Backing up Runner data..."
    # Check if runner exists before backing up to avoid errors if you removed it
    if docker compose ps --services | grep -q "runner"; then
        docker run --rm \
            --volumes-from gitea-runner \
            -v "$TEMP_BACKUP_PATH":/backup \
            alpine tar czf /backup/runner_data.tar.gz -C /data .
    else
        log "${YELLOW}Runner service not found, skipping runner backup.${NC}"
    fi

    # 4. Config Files & Restore Script
    log "Step 4/5: Archiving configurations and generating restore script..."
    cp "$COMPOSE_FILE" "$TEMP_BACKUP_PATH/"
    [ -f ".env" ] && cp ".env" "$TEMP_BACKUP_PATH/"

    # Generate the Restore Script inside the backup folder
    create_restore_script "$TEMP_BACKUP_PATH"

    # 5. Final Archive Creation
    log "Step 5/5: Compressing full backup..."
    FINAL_ARCHIVE_NAME="gitea_backup_$DATE.tar.gz"

    # Tar the temp folder into one final file
    tar -czf "$BACKUP_DIR/$FINAL_ARCHIVE_NAME" -C "$TEMP_BACKUP_PATH" .

    # Remove temp folder
    rm -rf "$TEMP_BACKUP_PATH"

    log "${GREEN}Local Backup completed: $BACKUP_DIR/$FINAL_ARCHIVE_NAME${NC}"

    # 6. NAS Transfer
    if [[ "$SKIP_NAS" != "true" ]]; then
        if [ -d "$NAS_DIR" ]; then
            log "Copying to NAS ($NAS_DIR)..."
            cp "$BACKUP_DIR/$FINAL_ARCHIVE_NAME" "$NAS_DIR/"
            if [ $? -eq 0 ]; then
                log "${GREEN}NAS Copy Successful.${NC}"
            else
                log "${RED}NAS Copy Failed. Check permissions on $NAS_DIR${NC}"
            fi
        else
            log "${YELLOW}NAS Directory $NAS_DIR not found. Skipping NAS copy.${NC}"
        fi
    else
        log "NAS copy skipped by user request."
    fi

    # 7. Cleanup Old Local Backups (Keep 7 Days)
    find "$BACKUP_DIR" -name "gitea_backup_*.tar.gz" -mtime +7 -exec rm {} \;
    log "Cleanup of old local backups complete."
}
# Function to generate the restore script
create_restore_script() {
    local TARGET_DIR=$1
    cat > "$TARGET_DIR/restore.sh" << 'EOF'
#!/bin/bash
# Restore script for Gitea backup
# RESTORE SCRIPT
echo "WARNING: This will overwrite your current Gitea/DB/Runner data."
read -p "Are you sure? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 1; fi

set -e
docker compose down

RESTORE_DIR="$(dirname "$0")"
COMPOSE_DIR="/home/acedanger/docker/gitea"
echo "Restoring Database Volume..."
docker compose up -d db
echo "Waiting for DB to initialize..."
sleep 15
cat database.sql | docker compose exec -T db psql -U ${POSTGRES_USER:-gitea} -d ${POSTGRES_DB:-gitea}

echo "WARNING: This will stop Gitea and replace all data!"
read -p "Are you sure you want to continue? (yes/no): " confirm
echo "Restoring Gitea Files..."
docker run --rm --volumes-from gitea -v $(pwd):/backup alpine tar xzf /backup/gitea_data.tar.gz -C /data

if [ "$confirm" != "yes" ]; then
    echo "Restore cancelled"
    exit 1
echo "Restoring Runner Files..."
docker run --rm --volumes-from gitea-runner -v $(pwd):/backup alpine tar xzf /backup/runner_data.tar.gz -C /data

echo "Restarting stack..."
docker compose up -d
echo "Restore Complete."
EOF
    chmod +x "$TARGET_DIR/restore.sh"
}
# ==========================================
# 3. EXECUTION FLOW
# ==========================================

check_dependencies

# Parse Arguments
if [ $# -eq 0 ]; then
    perform_backup "false"
    exit 0
fi

cd "$COMPOSE_DIR"

# Stop services
echo "Stopping Gitea services..."
docker-compose down

# Remove existing volumes
echo "Removing existing volumes..."
docker volume rm gitea_gitea gitea_postgres || true

# Recreate volumes
echo "Creating volumes..."
docker volume create gitea_gitea
docker volume create gitea_postgres

# Restore Gitea data
echo "Restoring Gitea data..."
docker run --rm \
    -v gitea_gitea:/data \
    -v "$RESTORE_DIR":/backup:ro \
    alpine:latest \
    tar xzf /backup/gitea_data.tar.gz -C /data

# Start database for restore
echo "Starting database for restore..."
docker-compose up -d db

# Wait for database to be ready
echo "Waiting for database to be ready..."
sleep 10

# Restore database
echo "Restoring database..."
docker-compose exec -T db psql -U ${POSTGRES_USER:-gitea} -d ${POSTGRES_DB:-gitea} < "$RESTORE_DIR/database.sql"

# Start all services
echo "Starting all services..."
docker-compose up -d

echo "Restore completed!"
EOF

chmod +x "$BACKUP_PATH/restore.sh"

# Create info file
cat > "$BACKUP_PATH/backup_info.txt" << EOF
Gitea Backup Information
========================
Backup Date: $(date)
Backup Location: $BACKUP_PATH
Gitea Version: $(docker-compose exec -T server gitea --version | head -1)
PostgreSQL Version: $(docker-compose exec -T db postgres --version)

Files included:
- database.sql: PostgreSQL database dump
- gitea_data.tar.gz: Gitea data volume
- postgres_data.tar.gz: PostgreSQL data volume
- docker-compose.yml: Docker compose configuration
- .env: Environment variables (if exists)
- restore.sh: Restore script

To restore this backup, run:
cd $BACKUP_PATH
./restore.sh
EOF

# Cleanup old backups (keep last 7 days)
echo "Cleaning up old backups..."
find "$BACKUP_DIR" -type d -name "gitea_backup_*" -mtime +7 -exec rm -rf {} + 2>/dev/null || true

echo "Backup completed successfully!"
echo "Backup saved to: $BACKUP_PATH"
echo "Backup size: $(du -sh "$BACKUP_PATH" | cut -f1)"
while [[ "$#" -gt 0 ]]; do
    case $1 in
        -h|--help) usage; exit 0 ;;
        -l|--list) list_backups; exit 0 ;;
        -n|--no-nas) perform_backup "true"; exit 0 ;;
        *) echo "Unknown parameter: $1"; usage; exit 1 ;;
    esac
    shift
done
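The rewritten log() in this diff prints colored output to the console and strips the ANSI escapes before appending to the log file. The sed expression can be exercised on its own; this standalone sketch assumes GNU sed (which accepts the `\x1b` escape) and the same color format as the script:

```shell
#!/bin/bash
# Sketch: the color-stripping half of the new log(), in isolation (GNU sed assumed).
RED='\033[0;31m'
NC='\033[0m'

strip_ansi() {
    # Remove any SGR escape sequence: ESC [ digits/semicolons m
    sed 's/\x1b\[[0-9;]*m//g'
}

# Colored message for the console; stripped copy for the log file.
msg=$(printf "${RED}Error: disk full${NC}")
printf '%s\n' "$msg" | strip_ansi
```

Keeping escapes out of the file means `grep` and `less` over the log behave normally, while the console still gets colors.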
@@ -18,3 +18,8 @@
[core]
    autocrlf = input
    eol = lf
[credential "https://git.ptrwd.com"]
    username = peterwood
    provider = generic
[credential]
    helper = cache
@@ -3,7 +3,7 @@
export PATH=$PATH:$HOME/.local/bin

# Path to your oh-my-zsh installation.
export ZSH="/home/acedanger/.oh-my-zsh"
export ZSH="$HOME/.oh-my-zsh"

# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
@@ -100,24 +100,27 @@ export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion

# Automatically use node version specified in .nvmrc if present
autoload -U add-zsh-hook
load-nvmrc() {
  local nvmrc_path="$(nvm_find_nvmrc)"
  if [ -n "$nvmrc_path" ]; then
    local nvmrc_node_version=$(nvm version "$(cat "${nvmrc_path}")")
    if [ "$nvmrc_node_version" = "N/A" ]; then
      nvm install
    elif [ "$nvmrc_node_version" != "$(nvm version)" ]; then
      nvm use
# Only enable if nvm is loaded
if command -v nvm_find_nvmrc > /dev/null 2>&1; then
  autoload -U add-zsh-hook
  load-nvmrc() {
    local nvmrc_path="$(nvm_find_nvmrc)"
    if [ -n "$nvmrc_path" ]; then
      local nvmrc_node_version=$(nvm version "$(cat "${nvmrc_path}")")
      if [ "$nvmrc_node_version" = "N/A" ]; then
        nvm install
      elif [ "$nvmrc_node_version" != "$(nvm version)" ]; then
        nvm use
      fi
    elif [ -n "$(PWD=$OLDPWD nvm_find_nvmrc)" ] && [ "$(nvm version)" != "$(nvm version default)" ]; then
      nvm use default
    fi
  elif [ -n "$(PWD=$OLDPWD nvm_find_nvmrc)" ] && [ "$(nvm version)" != "$(nvm version default)" ]; then
    nvm use default
  fi
}
add-zsh-hook chpwd load-nvmrc
load-nvmrc
  }
  add-zsh-hook chpwd load-nvmrc
  load-nvmrc
fi

[[ -s /home/acedanger/.autojump/etc/profile.d/autojump.sh ]] && source /home/acedanger/.autojump/etc/profile.d/autojump.sh
[[ -s $HOME/.autojump/etc/profile.d/autojump.sh ]] && source $HOME/.autojump/etc/profile.d/autojump.sh

# Enable bash completion compatibility in zsh
autoload -U +X bashcompinit && bashcompinit
@@ -138,69 +141,32 @@ if [ -f "$HOME/shell/completions/env-backup-completion.bash" ]; then
source "$HOME/shell/completions/env-backup-completion.bash"
fi

# Go environment variables (required for Fabric and other Go tools)
# Go environment variables
# GOROOT is auto-detected by Go when installed via package manager
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$PATH

# Fabric AI - Pattern aliases and helper functions
if command -v fabric &> /dev/null; then
  # Loop through all directories in the ~/.config/fabric/patterns directory to create aliases
  if [ -d "$HOME/.config/fabric/patterns" ]; then
    for pattern_dir in $HOME/.config/fabric/patterns/*/; do
      if [ -d "$pattern_dir" ]; then
        # Get the base name of the directory (i.e., remove the directory path)
        pattern_name=$(basename "$pattern_dir")

        # Create an alias in the form: alias pattern_name="fabric --pattern pattern_name"
        alias_command="alias $pattern_name='fabric --pattern $pattern_name'"

        # Evaluate the alias command to add it to the current shell
        eval "$alias_command"
      fi
    done
  fi

  # YouTube transcript helper function
  yt() {
    if [ "$#" -eq 0 ] || [ "$#" -gt 2 ]; then
      echo "Usage: yt [-t | --timestamps] youtube-link"
      echo "Use the '-t' flag to get the transcript with timestamps."
      return 1
    fi

    transcript_flag="--transcript"
    if [ "$1" = "-t" ] || [ "$1" = "--timestamps" ]; then
      transcript_flag="--transcript-with-timestamps"
      shift
    fi

    local video_link="$1"
    fabric -y "$video_link" $transcript_flag
  }
fi

# SSH Agent Management - Start only if needed and working properly
ssh_agent_start() {
  local ssh_agent_env="$HOME/.ssh-agent-env"

  # Function to check if ssh-agent is running and responsive
  ssh_agent_running() {
    [ -n "$SSH_AUTH_SOCK" ] && [ -S "$SSH_AUTH_SOCK" ] && ssh-add -l >/dev/null 2>&1
  }

  # Load existing agent environment if it exists
  if [ -f "$ssh_agent_env" ]; then
    source "$ssh_agent_env" >/dev/null 2>&1
  fi

  # Check if agent is running and responsive
  if ! ssh_agent_running; then
    # Start new agent only if ssh key exists
    if [ -f "$HOME/.ssh/id_ed25519" ]; then
      # Clean up any stale agent environment
      [ -f "$ssh_agent_env" ] && rm -f "$ssh_agent_env"

      # Start new agent and save environment
      ssh-agent -s > "$ssh_agent_env" 2>/dev/null
      if [ $? -eq 0 ]; then
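The pattern-alias loop in the Fabric block of this .zshrc boils down to one string template per pattern directory. A reduced, testable sketch of that template (the helper name `make_alias_cmd` is illustrative, not part of the dotfiles):

```shell
#!/bin/bash
# Sketch: build the alias command the .zshrc loop eval's for each pattern
# directory. make_alias_cmd is a hypothetical helper, not in the repository.
make_alias_cmd() {
    local pattern_name=$1
    printf "alias %s='fabric --pattern %s'" "$pattern_name" "$pattern_name"
}

make_alias_cmd summarize
```

Generating the string first (rather than calling `alias` directly) is what lets the .zshrc loop `eval` it in the current shell.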
restore-gitea.sh (Executable file, 124 lines)
@@ -0,0 +1,124 @@
#!/bin/bash

# restore-gitea.sh
# Usage: ./restore-gitea.sh <path_to_backup.tar.gz> <destination_directory>

set -e

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
RED='\033[0;31m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Check Arguments
if [ "$#" -ne 2 ]; then
    echo -e "${RED}Usage: $0 <path_to_backup_file> <destination_directory>${NC}"
    echo "Example: $0 ./backups/gitea_backup.tar.gz ~/docker/gitea_restore"
    exit 1
fi

BACKUP_FILE=$(realpath "$1")
DEST_DIR="$2"

# 1. Validation
if [ ! -f "$BACKUP_FILE" ]; then
    echo -e "${RED}Error: Backup file not found at $BACKUP_FILE${NC}"
    exit 1
fi

if [ -d "$DEST_DIR" ]; then
    echo -e "${YELLOW}Warning: Destination directory '$DEST_DIR' already exists.${NC}"
    echo -e "${RED}This process will overwrite files and STOP containers in that directory.${NC}"
    read -p "Are you sure you want to continue? (y/N): " -n 1 -r
    echo
    if [[ ! $REPLY =~ ^[Yy]$ ]]; then
        echo "Restore cancelled."
        exit 1
    fi
else
    echo -e "${BLUE}Creating destination directory: $DEST_DIR${NC}"
    mkdir -p "$DEST_DIR"
fi

# Switch to destination directory
cd "$DEST_DIR" || exit 1

# 2. Extract Backup Archive
echo -e "${BLUE}Step 1/6: Extracting backup archive...${NC}"
tar -xzf "$BACKUP_FILE"
echo "Extraction complete."

# Load environment variables from the extracted .env (if it exists)
if [ -f ".env" ]; then
    echo "Loading .env configuration..."
    export $(grep -v '^#' .env | xargs)
fi

# 3. Stop Existing Services & Clean Volumes
echo -e "${BLUE}Step 2/6: Preparing Docker environment...${NC}"
# We stop containers and remove volumes to ensure a clean restore state
docker compose down -v 2>/dev/null || true
echo "Environment cleaned."

# 4. Restore Volume Data (Files)
echo -e "${BLUE}Step 3/6: Restoring Gitea Data Volume...${NC}"
# We must create the containers (no-start) first so the volume exists
docker compose create gitea

# Helper container to extract data into the volume
docker run --rm \
    --volumes-from gitea \
    -v "$DEST_DIR":/backup \
    alpine tar xzf /backup/gitea_data.tar.gz -C /data

echo "Gitea data restored."

# Restore Runner Data (if present)
if [ -f "runner_data.tar.gz" ]; then
    echo -e "${BLUE}Step 4/6: Restoring Runner Data Volume...${NC}"
    docker compose create runner 2>/dev/null || true
    if docker compose ps -a | grep -q "runner"; then
        docker run --rm \
            --volumes-from gitea-runner \
            -v "$DEST_DIR":/backup \
            alpine tar xzf /backup/runner_data.tar.gz -C /data
        echo "Runner data restored."
    else
        echo -e "${YELLOW}Runner service not defined in compose file. Skipping.${NC}"
    fi
else
    echo "No runner backup found. Skipping."
fi

# 5. Restore Database
echo -e "${BLUE}Step 5/6: Restoring Database...${NC}"
# Start only the DB container
docker compose up -d db

# Wait for Postgres to be ready
echo "Waiting for Database to initialize (15s)..."
sleep 15

if [ -f "database.sql" ]; then
    echo "Importing SQL dump..."
    cat database.sql | docker compose exec -T db psql -U "${POSTGRES_USER:-gitea}" -d "${POSTGRES_DB:-gitea}"
    echo "Database import successful."
else
    echo -e "${RED}Error: database.sql not found in backup!${NC}"
    exit 1
fi

# 6. Start All Services
echo -e "${BLUE}Step 6/6: Starting Gitea...${NC}"
docker compose up -d

# Cleanup extracted files (Optional - comment out if you want to inspect them)
# echo "Cleaning up temporary extraction files..."
# rm database.sql gitea_data.tar.gz runner_data.tar.gz

echo -e "${GREEN}=======================================${NC}"
echo -e "${GREEN}✅ Restore Complete!${NC}"
echo -e "${GREEN}Gitea is running at: $DEST_DIR${NC}"
echo -e "${GREEN}=======================================${NC}"
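The fixed `sleep 15` before the SQL import in restore-gitea.sh is the simplest approach; polling readiness is more robust when the database takes longer to start. A sketch under stated assumptions: the compose service is named `db` (as in this script) and runs the official postgres image, which ships `pg_isready`.

```shell
#!/bin/bash
# Sketch: poll for Postgres readiness instead of a fixed sleep.
# Assumes a compose service named "db" running the official postgres image.
wait_for_db() {
    local tries=${1:-30}
    local i
    for i in $(seq 1 "$tries"); do
        if docker compose exec -T db pg_isready -U "${POSTGRES_USER:-gitea}" >/dev/null 2>&1; then
            echo "Database ready after ${i}s."
            return 0
        fi
        sleep 1
    done
    echo "Database did not become ready in ${tries}s." >&2
    return 1
}

# Usage (in place of "sleep 15"):
# wait_for_db 30 && cat database.sql | docker compose exec -T db psql -U "${POSTGRES_USER:-gitea}" -d "${POSTGRES_DB:-gitea}"
```

This caps the wait at a known bound and fails loudly instead of importing into a database that never came up.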
@@ -61,14 +61,24 @@ if ! command -v git &>/dev/null; then
    esac
fi

# Create shell directory if it doesn't exist
mkdir -p "$HOME/shell"

# Clone or update repository
if [ -d "$DOTFILES_DIR" ]; then
if [ -d "$DOTFILES_DIR/.git" ]; then
    echo -e "${YELLOW}Updating existing shell repository...${NC}"
    cd "$DOTFILES_DIR"
    git pull origin $DOTFILES_BRANCH
elif [ -d "$DOTFILES_DIR" ]; then
    echo -e "${YELLOW}Directory exists but is not a git repository.${NC}"
    # Check if directory is empty
    if [ -z "$(ls -A "$DOTFILES_DIR")" ]; then
        echo -e "${YELLOW}Directory is empty. Cloning...${NC}"
        git clone "https://github.com/$DOTFILES_REPO.git" "$DOTFILES_DIR"
    else
        echo -e "${YELLOW}Backing up existing directory...${NC}"
        mv "$DOTFILES_DIR" "${DOTFILES_DIR}.bak.$(date +%s)"
        echo -e "${YELLOW}Cloning shell repository...${NC}"
        git clone "https://github.com/$DOTFILES_REPO.git" "$DOTFILES_DIR"
    fi
    cd "$DOTFILES_DIR"
else
    echo -e "${YELLOW}Cloning shell repository...${NC}"
    git clone "https://github.com/$DOTFILES_REPO.git" "$DOTFILES_DIR"
@@ -20,5 +20,4 @@ eza // Modern ls alternative
// Note: lazygit, lazydocker, and fabric require special installation (GitHub releases/scripts)
// These are handled separately in the setup script
// lazygit
// lazydocker
fabric
// lazydocker
@@ -185,14 +185,6 @@ for pkg in "${pkgs[@]}"; do
continue
fi

# Handle fabric installation
if [ "$pkg" = "fabric" ]; then
special_installs+=("$pkg")
continue
fi

# Handle lazygit - available in COPR for Fedora, special install for Debian/Ubuntu
if [ "$pkg" = "lazygit" ] && [ "$OS_NAME" != "fedora" ]; then
special_installs+=("$pkg")
@@ -245,28 +237,6 @@ esac

echo -e "${GREEN}Package installation completed for $OS_NAME $OS_VERSION.${NC}"

# Install Go if not present (required for Fabric and other Go tools)
echo -e "${YELLOW}Checking Go installation...${NC}"
if ! command -v go &> /dev/null; then
echo -e "${YELLOW}Installing Go programming language...${NC}"
GO_VERSION="1.21.5" # Stable version that works well with Fabric

# Download and install Go
wget -q "https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz" -O /tmp/go.tar.gz

# Remove any existing Go installation
sudo rm -rf /usr/local/go

# Extract Go to /usr/local
sudo tar -C /usr/local -xzf /tmp/go.tar.gz
rm /tmp/go.tar.gz

echo -e "${GREEN}Go ${GO_VERSION} installed successfully!${NC}"
echo -e "${YELLOW}Go PATH will be configured in shell configuration${NC}"
else
echo -e "${GREEN}Go is already installed: $(go version)${NC}"
fi

# Handle special installations that aren't available through package managers
echo -e "${YELLOW}Installing special packages...${NC}"
for pkg in "${special_installs[@]}"; do
@@ -285,44 +255,6 @@ for pkg in "${special_installs[@]}"; do
echo -e "${GREEN}Lazydocker is already installed${NC}"
fi
;;
"fabric")
if ! command -v fabric &> /dev/null; then
echo -e "${YELLOW}Installing Fabric from GitHub releases...${NC}"
# Download and install the latest Fabric binary for Linux AMD64
curl -L https://github.com/danielmiessler/fabric/releases/latest/download/fabric-linux-amd64 -o /tmp/fabric
chmod +x /tmp/fabric
sudo mv /tmp/fabric /usr/local/bin/fabric
echo -e "${GREEN}Fabric binary installed successfully!${NC}"

# Verify installation
if fabric --version; then
echo -e "${GREEN}Fabric installation verified!${NC}"
echo -e "${YELLOW}Running Fabric setup...${NC}"

# Create fabric config directory
mkdir -p "$HOME/.config/fabric"

# Run fabric setup with proper configuration
echo -e "${YELLOW}Setting up Fabric patterns and configuration...${NC}"

# Initialize fabric with default patterns
fabric --setup || echo -e "${YELLOW}Initial fabric setup did not finish cleanly; continuing${NC}"

# Update patterns to get the latest
echo -e "${YELLOW}Updating Fabric patterns...${NC}"
fabric --updatepatterns || echo -e "${YELLOW}Pattern update did not finish cleanly; continuing${NC}"

echo -e "${GREEN}Fabric setup completed successfully!${NC}"
echo -e "${YELLOW}You can test fabric with: fabric --list-patterns${NC}"
else
echo -e "${RED}Fabric installation verification failed${NC}"
fi
else
echo -e "${GREEN}Fabric is already installed${NC}"
# Still try to update patterns
echo -e "${YELLOW}Updating Fabric patterns...${NC}"
fabric --updatepatterns || echo -e "${YELLOW}Pattern update did not finish cleanly; continuing${NC}"
fi
;;
"lazygit")
if ! command -v lazygit &> /dev/null; then
echo -e "${YELLOW}Installing Lazygit from GitHub releases...${NC}"
@@ -635,30 +567,8 @@ echo -e "${GREEN}OS: $OS_NAME $OS_VERSION${NC}"
echo -e "${GREEN}Package Manager: $PKG_MANAGER${NC}"
echo -e "${GREEN}Shell: $(basename "$SHELL") → zsh${NC}"

echo -e "\n${YELLOW}Testing Fabric installation...${NC}"
if command -v fabric &> /dev/null; then
echo -e "${GREEN}✓ Fabric is installed${NC}"

# Test fabric patterns
echo -e "${YELLOW}Testing Fabric patterns...${NC}"
if fabric --list-patterns >/dev/null 2>&1; then
echo -e "${GREEN}✓ Fabric patterns are available${NC}"
echo -e "${YELLOW}Number of patterns: $(fabric --list-patterns 2>/dev/null | wc -l)${NC}"
else
echo -e "${YELLOW}⚠ Fabric patterns may need to be updated${NC}"
fi
else
echo -e "${RED}✗ Fabric is not installed${NC}"
fi

echo -e "\n${GREEN}=== Post-Installation Instructions ===${NC}"
echo -e "${YELLOW}1. Restart your shell or run: source ~/.zshrc${NC}"
echo -e "${YELLOW}2. Test Fabric: fabric --list-patterns${NC}"
echo -e "${YELLOW}3. Try a Fabric pattern: echo 'Hello world' | fabric --pattern summarize${NC}"

echo -e "\n${GREEN}=== Useful Commands ===${NC}"
echo -e "${YELLOW}• Fabric help: fabric --help${NC}"
echo -e "${YELLOW}• Update patterns: fabric --updatepatterns${NC}"

echo -e "\n${GREEN}Setup completed successfully for $OS_NAME $OS_VERSION!${NC}"
echo -e "${YELLOW}Note: You may need to log out and log back in for all changes to take effect.${NC}"
214
uninstall-fabric.sh
Executable file
@@ -0,0 +1,214 @@
#!/bin/bash

# uninstall-fabric.sh
#
# Description: Safely uninstalls the Fabric AI CLI (Daniel Miessler) and related configuration.
# Avoids removing the 'fabric' Python deployment library.
# Detects OS and uses appropriate package managers if applicable.
# Logs all actions to a file.
#
# Usage: ./uninstall-fabric.sh
#
# Author: GitHub Copilot

set -u

# Configuration
LOG_FILE="uninstall-fabric.log"
CURRENT_DATE=$(date +'%Y-%m-%d %H:%M:%S')

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Initialize log file
echo "Fabric AI CLI Uninstallation Log - Started at $CURRENT_DATE" > "$LOG_FILE"

# Logging functions
log() {
local message="$1"
echo -e "[$(date +'%H:%M:%S')] $message" | tee -a "$LOG_FILE"
}

info() {
local message="$1"
echo -e "${BLUE}[INFO]${NC} $message" | tee -a "$LOG_FILE"
}

success() {
local message="$1"
echo -e "${GREEN}[SUCCESS]${NC} $message" | tee -a "$LOG_FILE"
}

warning() {
local message="$1"
echo -e "${YELLOW}[WARNING]${NC} $message" | tee -a "$LOG_FILE"
}

error() {
local message="$1"
echo -e "${RED}[ERROR]${NC} $message" | tee -a "$LOG_FILE"
exit 1
}

# Function to detect Operating System
detect_os() {
if [[ -f /etc/os-release ]]; then
# shellcheck source=/dev/null
. /etc/os-release
OS_NAME=$ID
VERSION_ID=$VERSION_ID
info "Detected OS: $NAME ($ID) $VERSION_ID"
else
error "Could not detect operating system. /etc/os-release file not found."
fi
}

# Function to check for root privileges
check_privileges() {
if [[ $EUID -ne 0 ]]; then
warning "This script is not running as root."
warning "System package removal might fail or require sudo password."
else
info "Running with root privileges."
fi
}

# Function to confirm action
confirm_execution() {
echo -e "\n${YELLOW}WARNING: This script will attempt to uninstall the Fabric AI CLI (Daniel Miessler).${NC}"
echo -e "It will NOT remove the 'fabric' Python deployment library."
echo -e "It will remove the 'fabric' binary if identified as the AI tool, and configuration files."
echo -e "Please ensure you have backups if necessary.\n"

read -p "Do you want to proceed? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
info "Operation cancelled by user."
exit 0
fi
}

# Function to check if a binary is the Fabric AI tool
is_fabric_ai_tool() {
local bin_path="$1"
# Check help output for keywords
# The AI tool usually mentions 'patterns', 'context', 'session', 'model'
if "$bin_path" --help 2>&1 | grep -qE "Daniel Miessler|patterns|context|session|model"; then
return 0
fi
return 1
}

# Function to uninstall binary
uninstall_binary() {
local bin_path
bin_path=$(command -v fabric)

if [[ -n "$bin_path" ]]; then
info "Found 'fabric' binary at: $bin_path"

if is_fabric_ai_tool "$bin_path"; then
info "Identified as Fabric AI CLI."

# Check if owned by system package
local pkg_owner=""
if [[ "$OS_NAME" =~ (debian|ubuntu|linuxmint|pop|kali) ]]; then
if dpkg -S "$bin_path" &> /dev/null; then
pkg_owner=$(dpkg -S "$bin_path" | cut -d: -f1)
fi
elif [[ "$OS_NAME" =~ (fedora|centos|rhel|almalinux|rocky) ]]; then
if rpm -qf "$bin_path" &> /dev/null; then
pkg_owner=$(rpm -qf "$bin_path")
fi
fi

if [[ -n "$pkg_owner" ]]; then
info "Binary is owned by system package: $pkg_owner"
info "Removing package $pkg_owner..."
local sudo_prefix=""
[[ $EUID -ne 0 ]] && sudo_prefix="sudo"

if [[ "$OS_NAME" =~ (debian|ubuntu|linuxmint|pop|kali) ]]; then
$sudo_prefix apt-get remove -y "$pkg_owner" >> "$LOG_FILE" 2>&1 || error "Failed to remove package $pkg_owner"
else
$sudo_prefix dnf remove -y "$pkg_owner" >> "$LOG_FILE" 2>&1 || error "Failed to remove package $pkg_owner"
fi
success "Removed system package $pkg_owner."
else
info "Binary is not owned by a system package. Removing manually..."
rm -f "$bin_path" || error "Failed to remove $bin_path"
success "Removed binary $bin_path."
fi
else
warning "The binary at $bin_path does not appear to be the Fabric AI CLI. Skipping removal to be safe."
warning "Run '$bin_path --help' to verify what it is."
fi
else
info "'fabric' binary not found in PATH."
fi
}

# Function to uninstall from pipx
uninstall_pipx() {
if command -v pipx &> /dev/null; then
info "Checking pipx for 'fabric'..."
if pipx list | grep -q "package fabric"; then
info "Found 'fabric' installed via pipx. Uninstalling..."
pipx uninstall fabric >> "$LOG_FILE" 2>&1 || error "Failed to uninstall fabric via pipx"
success "Uninstalled fabric via pipx."
else
info "'fabric' not found in pipx."
fi
fi
}

# Function to remove configuration files
remove_config() {
local config_dirs=(
"$HOME/.config/fabric"
"$HOME/.fabric"
"$HOME/.local/share/fabric"
)

for dir in "${config_dirs[@]}"; do
if [[ -d "$dir" ]]; then
info "Found configuration directory: $dir"
rm -rf "$dir" || error "Failed to remove $dir"
success "Removed $dir."
fi
done
}

# Main execution flow
main() {
detect_os
check_privileges
confirm_execution

info "Starting uninstallation process..."

# Check pipx first as it manages its own binaries
uninstall_pipx

# Check binary
uninstall_binary

# Remove config
remove_config

echo -e "\n----------------------------------------------------------------"
success "Uninstallation steps completed."
info "A log of this operation has been saved to: $LOG_FILE"
echo -e "${YELLOW}Note: If you removed system-level components, a reboot might be recommended.${NC}"
echo -e "----------------------------------------------------------------"
}

# Trap interrupts
trap 'echo -e "\n${RED}Script interrupted by user.${NC}"; exit 1' INT TERM

# Run main
main
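The `is_fabric_ai_tool` check above shells out to `--help` and greps for keywords. The same heuristic can be factored so it is testable without a real binary; a hedged sketch follows (the function name and the string-based interface are illustrative, not part of the script):

```shell
#!/bin/bash
# Illustrative refactor (not in the script): take the help text as a string,
# so the keyword check can be exercised without invoking an installed binary.
matches_fabric_ai_keywords() {
  # Same extended-regex keyword list the script greps for in `--help` output.
  printf '%s' "$1" | grep -qE "Daniel Miessler|patterns|context|session|model"
}
```

Note the keyword list is deliberately broad ("context" and "model" are common words in CLI help text), so a match is a hint rather than proof; the script accordingly treats a non-match as a reason to skip removal.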
59
update.sh
@@ -59,7 +59,13 @@ readonly CYAN='\033[0;36m'
readonly NC='\033[0m' # No Color

# Configuration
readonly LOG_FILE="/var/log/system-update.log"
if [[ -w "/var/log" ]]; then
LOG_FILE="/var/log/system-update.log"
else
LOG_FILE="$HOME/.local/share/system-update.log"
mkdir -p "$(dirname "$LOG_FILE")"
fi
readonly LOG_FILE

# Global variables
ERRORS_DETECTED=0
@@ -516,6 +522,9 @@ perform_system_update() {
increment_error "Failed to upgrade packages with nala"
return 1
fi

log_message "INFO" "Cleaning up unused packages with nala..."
sudo nala autoremove -y
;;
dnf)
log_message "INFO" "Checking for updates with dnf..."
@@ -526,6 +535,9 @@ perform_system_update() {
increment_error "Failed to upgrade packages with dnf"
return 1
fi

log_message "INFO" "Cleaning up unused packages with dnf..."
sudo dnf autoremove -y
;;
apt)
log_message "INFO" "Updating package lists with apt..."
@@ -539,12 +551,49 @@
increment_error "Failed to upgrade packages with apt"
return 1
fi

log_message "INFO" "Cleaning up unused packages with apt..."
sudo apt autoremove -y && sudo apt autoclean
;;
esac

# Universal packages
if command -v flatpak &> /dev/null; then
log_message "INFO" "Updating Flatpak packages..."
flatpak update -y

log_message "INFO" "Cleaning up unused Flatpak runtimes..."
flatpak uninstall --unused -y
fi

if command -v snap &> /dev/null; then
log_message "INFO" "Updating Snap packages..."
sudo snap refresh
fi

log_message "INFO" "System package update completed successfully"
}

update_signal() {
# check if hostname is `mini`
if [[ "$(hostname)" != "mini" ]]; then
debug_log "Signal update is only available on host 'mini'"
return 0
fi

# check if distrobox is installed
if ! command -v distrobox-upgrade &> /dev/null; then
debug_log "distrobox is not installed"
return 0
fi

# Capture failure to prevent script exit due to set -e
# Known issue: distrobox-upgrade may throw a stat error at the end despite success
if ! distrobox-upgrade signal; then
log_message "WARN" "Signal update reported an error (likely benign 'stat' issue). Continuing..."
fi
}

################################################################################
# Main Execution
################################################################################
@@ -583,6 +632,9 @@ main() {
upgrade_oh_my_zsh
perform_system_update

# signal is made available using distrobox and is only available on `mini`
update_signal

# Restart services
if [[ "$SKIP_SERVICES" != true ]]; then
if [[ "$SKIP_PLEX" != true ]]; then
@@ -594,6 +646,11 @@ main() {
debug_log "Skipping all service management due to --skip-services flag"
fi

# Check for reboot requirement
if [[ -f /var/run/reboot-required ]]; then
log_message "WARN" "A system reboot is required to complete the update."
fi

# Final status
if [[ $ERRORS_DETECTED -eq 0 ]]; then
log_message "INFO" "System update completed successfully!"
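The log-path change to update.sh above prefers a system path when `/var/log` is writable and falls back to a per-user path otherwise. That decision can be captured in one small function; this is a hedged sketch (the function name and parameterized directories are illustrative, not the script's actual code):

```shell
#!/bin/bash
# Illustrative helper: choose a log file location, preferring a system
# directory when it is writable and otherwise creating and using a
# per-user fallback directory.
pick_log_file() {
  local sys_dir="$1" user_dir="$2"
  if [ -w "$sys_dir" ]; then
    printf '%s/system-update.log\n' "$sys_dir"
  else
    mkdir -p "$user_dir"
    printf '%s/system-update.log\n' "$user_dir"
  fi
}
```

Parameterizing the directories keeps the writability check trivially testable, while the script itself hardcodes `/var/log` and `$HOME/.local/share`.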