Compare commits

...

15 Commits

Author SHA1 Message Date
Peter Wood  ddf83a6564  fix: Prevent script exit on benign errors from distrobox-upgrade during signal update  2025-12-01 09:52:08 -05:00
Peter Wood  144dc6acb5  feat: Add cleanup for unused Flatpak runtimes after updates  2025-11-30 19:45:54 -05:00
Peter Wood  aa1cfbebf9  remove mention of fabric ai cli  2025-11-30 19:42:28 -05:00
Peter Wood  6aa087cf0a  feat: Add uninstall script for Fabric AI CLI with logging and OS detection  2025-11-30 19:39:45 -05:00
Peter Wood  bb945ebd42  refactor: Remove Fabric installation and testing from setup scripts  2025-11-30 19:26:09 -05:00
Peter Wood  e2112206a5  feat: Enhance update script with dynamic log file location and additional package management features  2025-11-30 19:19:20 -05:00
Peter Wood  1287168961  fix: Ensure nvm loading only occurs if nvm is available  2025-11-29 10:05:55 -05:00
Peter Wood  645d10d548  fix: Correct echo placement after installing Fabric binary  2025-11-29 09:52:32 -05:00
Peter Wood  fa44ab2e45  use $HOME var instead of a hardcoded username  2025-11-29 09:48:51 -05:00
Peter Wood  40cbecdebf  feat: Improve bootstrap script to handle existing directories and cloning logic  2025-11-29 09:37:00 -05:00
Peter Wood  8ceeeda560  Merge branch 'main' of git.ptrwd.com:peterwood/shell  2025-11-18 20:39:55 -05:00
Peter Wood  deb66207b3  feat: Update .gitconfig to add credentials for git.ptrwd.com  2025-11-18 20:39:50 -05:00
Peter Wood  5b17022856  feat: Enhance backup-gitea.sh for NAS support and Runner integration; add restore-gitea.sh script  2025-11-18 20:36:02 -05:00
Peter Wood  c9d13b940b  add download script for Gitea Tea CLI installation  2025-10-29 21:48:53 -04:00
Peter Wood  e1535c00df  add helper configuration for git.ptrwd.com in .gitconfig  2025-10-29 21:48:45 -04:00
13 changed files with 960 additions and 427 deletions

11
.github/prompts/removefabric.prompt.md vendored Normal file
View File

@@ -0,0 +1,11 @@
Create a portable bash shell script to safely uninstall the Fabric AI CLI and related packages on Debian, Ubuntu, and Fedora systems. The script must:
- Detect the operating system and select the appropriate package manager (`apt`, `dnf`, or `yum`).
- Uninstall Fabric packages installed via system package managers and Python package managers (`pip`, `pip3`).
- Check for errors after each removal step; abort the script if a critical error occurs.
- Prompt the user for confirmation before making any changes.
- Advise the user to reboot the system if required after uninstallation.
- Log all actions and errors to a user-specified log file.
- Be fully self-contained and compatible with bash.
Reference the official [Fabric documentation](https://github.com/danielmiessler/Fabric) and your distribution's package manager documentation for implementation details.
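
A minimal sketch of the OS-detection and package-manager-selection step this prompt asks for, assuming the generated script reads `/etc/os-release`; the variable name `PKG_MANAGER` and the exact distribution list are illustrative only, not code from this repository:

```bash
#!/bin/bash
# Illustrative sketch only: detect the OS and pick apt, dnf, or yum.
set -euo pipefail

if [[ ! -f /etc/os-release ]]; then
    echo "Cannot detect operating system: /etc/os-release not found" >&2
    exit 1
fi
. /etc/os-release

case "$ID" in
    debian|ubuntu)
        PKG_MANAGER="apt"
        ;;
    fedora)
        # Older Fedora/EL systems may only ship yum
        PKG_MANAGER=$(command -v dnf >/dev/null 2>&1 && echo "dnf" || echo "yum")
        ;;
    *)
        echo "Unsupported distribution: $ID" >&2
        exit 1
        ;;
esac

echo "Detected $ID $VERSION_ID; packages would be removed with $PKG_MANAGER"
```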

2
.gitignore vendored
View File

@@ -23,6 +23,7 @@ _book
# Runtime generated files
logs/
uninstall-fabric.log
immich_backups/*.gz
# Backup files - ignore most backups but keep current state files
crontab/crontab-backups/*/archive/
@@ -36,6 +37,7 @@ crontab/crontab-backups/*/archive/
# can be downloaded from <https://github.com/Backblaze/B2_Command_Line_Tool/releases/latest/download/b2-linux>
immich/b2-linux
# Generated dotfiles - these are created dynamically by bootstrap process
dotfiles/my-aliases.zsh

View File

@@ -7,7 +7,6 @@ This repository contains various shell scripts for managing media-related tasks
- **[Backup Scripts](#backup-scripts)** - Enterprise-grade backup solutions
- **[Management Scripts](#management-scripts)** - System and service management
- **[Security](#security)** - Comprehensive security framework and standards
- **[AI Integration](#ai-integration)** - Fabric setup for AI-assisted development
- **[Tab Completion](#tab-completion)** - Intelligent command-line completion
- **[Documentation](#comprehensive-documentation)** - Complete guides and references
- **[Testing](#testing)** - Docker-based validation framework
@@ -72,51 +71,6 @@ All scripts undergo comprehensive security validation:
For security-related changes, refer to the security documentation and follow the established security checklist.
## AI Integration
This repository includes a complete AI development environment with Fabric integration for AI-assisted development tasks.
### Fabric Setup
The system includes:
- **Fabric v1.4.195** with 216+ AI patterns for text processing
- **Google Gemini 2.5 Pro** as primary AI provider
- **External AI providers** support for flexibility
- **Custom shell configuration** for optimal development experience
### Basic Fabric Usage
```bash
# List all available patterns
fabric -l
# Use a pattern (configure your preferred AI provider)
echo "Your text here" | fabric -p summarize
# Use with specific model
echo "Your text here" | fabric -p summarize -m gemini-2.0-flash-exp
# Update patterns
fabric -U
```
### Popular AI Patterns
- `summarize` - Summarize text content
- `explain_code` - Explain code snippets and logic
- `improve_writing` - Enhance writing quality and clarity
- `extract_wisdom` - Extract key insights from content
- `create_quiz` - Generate quiz questions from text
- `analyze_claims` - Analyze and fact-check claims
### Configuration Files
- **Fabric config**: `~/.config/fabric/.env` - AI provider settings and API keys
- **Shell config**: `~/.zshrc` - Main shell configuration
For complete setup instructions, see the setup documentation.
### Development Projects
- **[Telegram Backup Monitoring Bot](./telegram/github-issues/README.md)**: Comprehensive Telegram bot project for monitoring and managing all backup systems with real-time notifications and control capabilities.
@@ -423,22 +377,6 @@ This installs:
- Tab completion for all scripts
- Development tools (Node.js via nvm, VS Code, etc.)
### AI Development Environment
For AI-assisted development, the system includes:
- **Fabric** with 216+ AI patterns for text processing
- **Google Gemini integration** as primary AI provider
- **External AI provider support** for flexibility
- **Custom configuration** for easy management
Test the AI setup:
```bash
# Test Fabric integration
echo "Test text" | fabric -p summarize
```
## Dotfiles
The repository includes dotfiles for system configuration in the `dotfiles` directory. These can be automatically set up using the bootstrap script:

376
backup-gitea.sh Normal file → Executable file
View File

@@ -1,8 +1,7 @@
#!/bin/bash
# backup-gitea.sh - Backup Gitea data and PostgreSQL database
# Author: Shell Repository
# Description: Comprehensive backup solution for Gitea with PostgreSQL database
# backup-gitea.sh - Backup Gitea, Postgres, and Runner
# Enhanced for NAS support and Runner integration
set -e
@@ -13,255 +12,214 @@ RED='\033[0;31m'
BLUE='\033[0;34m' BLUE='\033[0;34m'
NC='\033[0m' # No Color NC='\033[0m' # No Color
# Configuration # ==========================================
# 1. CONFIGURATION
# ==========================================
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
COMPOSE_DIR="/home/acedanger/docker/gitea"
BACKUP_DIR="/home/acedanger/backups/gitea" BACKUP_DIR="/home/acedanger/backups/gitea"
COMPOSE_DIR="/home/acedanger/docker/gitea" NAS_DIR="/mnt/share/media/backups/gitea"
COMPOSE_FILE="$COMPOSE_DIR/docker-compose.yml" COMPOSE_FILE="$COMPOSE_DIR/docker-compose.yml"
LOG_FILE="$SCRIPT_DIR/logs/gitea-backup.log" LOG_FILE="$SCRIPT_DIR/logs/gitea-backup.log"
DATE=$(date +%Y%m%d_%H%M%S)
# Ensure logs directory exists # Ensure directories exist
mkdir -p "$(dirname "$LOG_FILE")" mkdir -p "$(dirname "$LOG_FILE")"
mkdir -p "$BACKUP_DIR"
# Logging function # Load .env variables from the COMPOSE_DIR to ensure DB credentials match
if [ -f "$COMPOSE_DIR/.env" ]; then
export $(grep -v '^#' "$COMPOSE_DIR/.env" | xargs)
fi
# Logging function (Fixed to interpret colors correctly)
log() { log() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE" # Print to console with colors (interpreting escapes with -e)
echo -e "$(date '+%Y-%m-%d %H:%M:%S') - $1"
# Strip colors for the log file to keep it clean
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | sed 's/\x1b\[[0-9;]*m//g' >> "$LOG_FILE"
} }
# Display usage information # Display usage information
usage() { usage() {
echo "Usage: $0 [OPTIONS]" echo "Usage: $0 [OPTIONS]"
echo "" echo ""
echo "Backup Gitea data and PostgreSQL database" echo "Backup Gitea data, Runner, and PostgreSQL database"
echo ""
echo "Options:" echo "Options:"
echo " -h, --help Show this help message" echo " -h, --help Show this help message"
echo " -d, --dry-run Show what would be backed up without doing it" echo " -l, --list List available local backups"
echo " -f, --force Force backup even if one was recently created" echo " -n, --no-nas Skip copying to NAS (Local only)"
echo " -r, --restore FILE Restore from specified backup directory"
echo " -l, --list List available backups"
echo " -c, --cleanup Clean up old backups (keeps last 7 days)"
echo " --keep-days DAYS Number of days to keep backups (default: 7)"
echo "" echo ""
echo "Examples:"
echo " $0 # Regular backup"
echo " $0 --dry-run # See what would be backed up"
echo " $0 --list # List available backups"
echo " $0 --restore /path/to/backup # Restore from backup"
} }
# Check dependencies # Check dependencies
check_dependencies() { check_dependencies() {
local missing_deps=() if ! command -v docker &> /dev/null; then
log "${RED}Error: docker is not installed.${NC}"
command -v docker >/dev/null 2>&1 || missing_deps+=("docker")
command -v docker-compose >/dev/null 2>&1 || missing_deps+=("docker-compose")
if [ ${#missing_deps[@]} -ne 0 ]; then
echo -e "${RED}Error: Missing required dependencies: ${missing_deps[*]}${NC}"
echo "Please install the missing dependencies and try again."
exit 1 exit 1
fi fi
# Check if docker-compose file exists # Verify the compose file exists where we expect it
if [ ! -f "$COMPOSE_FILE" ]; then if [ ! -f "$COMPOSE_FILE" ]; then
echo -e "${RED}Error: Docker compose file not found at $COMPOSE_FILE${NC}" log "${RED}Error: Docker compose file not found at: $COMPOSE_FILE${NC}"
log "${YELLOW}Please update the COMPOSE_DIR variable in this script.${NC}"
exit 1 exit 1
fi fi
# Check if we can access Docker
if ! docker info >/dev/null 2>&1; then
echo -e "${RED}Error: Cannot access Docker. Check if Docker is running and you have permissions.${NC}"
exit 1
fi
}
# Check if Gitea services are running
check_gitea_services() {
cd "$COMPOSE_DIR"
if ! docker-compose ps | grep -q "Up"; then
echo -e "${YELLOW}Warning: Gitea services don't appear to be running${NC}"
echo "Some backup operations may fail if services are not running."
read -p "Continue anyway? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Backup cancelled"
exit 1
fi
fi
} }
# List available backups # List available backups
list_backups() { list_backups() {
echo -e "${BLUE}=== Available Gitea Backups ===${NC}" echo -e "${BLUE}=== Available Gitea Backups ===${NC}"
ls -lh "$BACKUP_DIR"/*.tar.gz 2>/dev/null || echo "No backups found."
if [ ! -d "$BACKUP_DIR" ]; then
echo -e "${YELLOW}No backup directory found at $BACKUP_DIR${NC}"
return 0
fi
local count=0
# Find backup directories
for backup_path in "$BACKUP_DIR"/gitea_backup_*; do
if [ -d "$backup_path" ]; then
local backup_name
backup_name=$(basename "$backup_path")
local backup_date
backup_date=$(echo "$backup_name" | sed 's/gitea_backup_//' | sed 's/_/ /')
local size
size=$(du -sh "$backup_path" 2>/dev/null | cut -f1)
local info_file="$backup_path/backup_info.txt"
echo -e "${GREEN}📦 $backup_name${NC}"
echo " Date: $backup_date"
echo " Size: $size"
echo " Path: $backup_path"
if [ -f "$info_file" ]; then
local gitea_version
gitea_version=$(grep "Gitea Version:" "$info_file" 2>/dev/null | cut -d: -f2- | xargs)
if [ -n "$gitea_version" ]; then
echo " Version: $gitea_version"
fi
fi
echo ""
count=$((count + 1))
fi
done
if [ $count -eq 0 ]; then
echo -e "${YELLOW}No backups found in $BACKUP_DIR${NC}"
echo "Run a backup first to create one."
else
echo -e "${BLUE}Total backups found: $count${NC}"
fi
} }
# Change to compose directory # ==========================================
cd "$COMPOSE_DIR" # 2. BACKUP LOGIC
# ==========================================
perform_backup() {
local SKIP_NAS=$1
log "Starting backup process..."
# Create timestamped backup directory # Switch context to the directory where Gitea is actually running
BACKUP_PATH="$BACKUP_DIR/gitea_backup_$DATE" cd "$COMPOSE_DIR" || { log "${RED}Could not change to directory $COMPOSE_DIR${NC}"; exit 1; }
mkdir -p "$BACKUP_PATH"
# Backup PostgreSQL database # PRE-FLIGHT CHECK: Is the DB actually running?
echo "Backing up PostgreSQL database..." if ! docker compose ps --services --filter "status=running" | grep -q "db"; then
docker-compose exec -T db pg_dump -U ${POSTGRES_USER:-gitea} ${POSTGRES_DB:-gitea} > "$BACKUP_PATH/database.sql" log "${RED}CRITICAL ERROR: The 'db' service is not running in $COMPOSE_DIR${NC}"
log "${YELLOW}Docker sees these running services:$(docker compose ps --services --filter "status=running" | xargs)${NC}"
log "Aborting backup to prevent empty files."
exit 1
fi
# Backup Gitea data volume # Create a temporary staging directory for this specific backup
echo "Backing up Gitea data volume..." TEMP_BACKUP_PATH="$BACKUP_DIR/temp_$DATE"
docker run --rm \ mkdir -p "$TEMP_BACKUP_PATH"
-v gitea_gitea:/data:ro \
-v "$BACKUP_PATH":/backup \
alpine:latest \
tar czf /backup/gitea_data.tar.gz -C /data .
# Backup PostgreSQL data volume (optional, as we have the SQL dump) # 1. Backup Database
echo "Backing up PostgreSQL data volume..." log "Step 1/5: Dumping PostgreSQL database..."
docker run --rm \ # Using -T to disable TTY allocation (fixes some cron issues)
-v gitea_postgres:/data:ro \ if docker compose exec -T db pg_dump -U "${POSTGRES_USER:-gitea}" "${POSTGRES_DB:-gitea}" > "$TEMP_BACKUP_PATH/database.sql"; then
-v "$BACKUP_PATH":/backup \ echo -e "${GREEN}Database dump successful.${NC}"
alpine:latest \ else
tar czf /backup/postgres_data.tar.gz -C /data . log "${RED}Database dump failed!${NC}"
rm -rf "$TEMP_BACKUP_PATH"
exit 1
fi
# Copy docker-compose configuration # 2. Backup Gitea Data
echo "Backing up configuration files..." log "Step 2/5: Backing up Gitea data volume..."
cp "$COMPOSE_FILE" "$BACKUP_PATH/" docker run --rm \
if [ -f ".env" ]; then --volumes-from gitea \
cp ".env" "$BACKUP_PATH/" -v "$TEMP_BACKUP_PATH":/backup \
fi alpine tar czf /backup/gitea_data.tar.gz -C /data .
# Create a restore script # 3. Backup Runner Data
cat > "$BACKUP_PATH/restore.sh" << 'EOF' log "Step 3/5: Backing up Runner data..."
# Check if runner exists before backing up to avoid errors if you removed it
if docker compose ps --services | grep -q "runner"; then
docker run --rm \
--volumes-from gitea-runner \
-v "$TEMP_BACKUP_PATH":/backup \
alpine tar czf /backup/runner_data.tar.gz -C /data .
else
log "${YELLOW}Runner service not found, skipping runner backup.${NC}"
fi
# 4. Config Files & Restore Script
log "Step 4/5: Archiving configurations and generating restore script..."
cp "$COMPOSE_FILE" "$TEMP_BACKUP_PATH/"
[ -f ".env" ] && cp ".env" "$TEMP_BACKUP_PATH/"
# Generate the Restore Script inside the backup folder
create_restore_script "$TEMP_BACKUP_PATH"
# 5. Final Archive Creation
log "Step 5/5: Compressing full backup..."
FINAL_ARCHIVE_NAME="gitea_backup_$DATE.tar.gz"
# Tar the temp folder into one final file
tar -czf "$BACKUP_DIR/$FINAL_ARCHIVE_NAME" -C "$TEMP_BACKUP_PATH" .
# Remove temp folder
rm -rf "$TEMP_BACKUP_PATH"
log "${GREEN}Local Backup completed: $BACKUP_DIR/$FINAL_ARCHIVE_NAME${NC}"
# 6. NAS Transfer
if [[ "$SKIP_NAS" != "true" ]]; then
if [ -d "$NAS_DIR" ]; then
log "Copying to NAS ($NAS_DIR)..."
cp "$BACKUP_DIR/$FINAL_ARCHIVE_NAME" "$NAS_DIR/"
if [ $? -eq 0 ]; then
log "${GREEN}NAS Copy Successful.${NC}"
else
log "${RED}NAS Copy Failed. Check permissions on $NAS_DIR${NC}"
fi
else
log "${YELLOW}NAS Directory $NAS_DIR not found. Skipping NAS copy.${NC}"
fi
else
log "NAS copy skipped by user request."
fi
# 7. Cleanup Old Local Backups (Keep 7 Days)
find "$BACKUP_DIR" -name "gitea_backup_*.tar.gz" -mtime +7 -exec rm {} \;
log "Cleanup of old local backups complete."
}
# Function to generate the restore script
create_restore_script() {
local TARGET_DIR=$1
cat > "$TARGET_DIR/restore.sh" << 'EOF'
#!/bin/bash #!/bin/bash
# Restore script for Gitea backup # RESTORE SCRIPT
echo "WARNING: This will overwrite your current Gitea/DB/Runner data."
read -p "Are you sure? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then exit 1; fi
set -e docker compose down
RESTORE_DIR="$(dirname "$0")" echo "Restoring Database Volume..."
COMPOSE_DIR="/home/acedanger/docker/gitea" docker compose up -d db
echo "Waiting for DB to initialize..."
sleep 15
cat database.sql | docker compose exec -T db psql -U ${POSTGRES_USER:-gitea} -d ${POSTGRES_DB:-gitea}
echo "WARNING: This will stop Gitea and replace all data!" echo "Restoring Gitea Files..."
read -p "Are you sure you want to continue? (yes/no): " confirm docker run --rm --volumes-from gitea -v $(pwd):/backup alpine tar xzf /backup/gitea_data.tar.gz -C /data
if [ "$confirm" != "yes" ]; then echo "Restoring Runner Files..."
echo "Restore cancelled" docker run --rm --volumes-from gitea-runner -v $(pwd):/backup alpine tar xzf /backup/runner_data.tar.gz -C /data
exit 1
echo "Restarting stack..."
docker compose up -d
echo "Restore Complete."
EOF
chmod +x "$TARGET_DIR/restore.sh"
}
# ==========================================
# 3. EXECUTION FLOW
# ==========================================
check_dependencies
# Parse Arguments
if [ $# -eq 0 ]; then
perform_backup "false"
exit 0
fi fi
cd "$COMPOSE_DIR" while [[ "$#" -gt 0 ]]; do
case $1 in
# Stop services -h|--help) usage; exit 0 ;;
echo "Stopping Gitea services..." -l|--list) list_backups; exit 0 ;;
docker-compose down -n|--no-nas) perform_backup "true"; exit 0 ;;
*) echo "Unknown parameter: $1"; usage; exit 1 ;;
# Remove existing volumes esac
echo "Removing existing volumes..." shift
docker volume rm gitea_gitea gitea_postgres || true done
# Recreate volumes
echo "Creating volumes..."
docker volume create gitea_gitea
docker volume create gitea_postgres
# Restore Gitea data
echo "Restoring Gitea data..."
docker run --rm \
-v gitea_gitea:/data \
-v "$RESTORE_DIR":/backup:ro \
alpine:latest \
tar xzf /backup/gitea_data.tar.gz -C /data
# Start database for restore
echo "Starting database for restore..."
docker-compose up -d db
# Wait for database to be ready
echo "Waiting for database to be ready..."
sleep 10
# Restore database
echo "Restoring database..."
docker-compose exec -T db psql -U ${POSTGRES_USER:-gitea} -d ${POSTGRES_DB:-gitea} < "$RESTORE_DIR/database.sql"
# Start all services
echo "Starting all services..."
docker-compose up -d
echo "Restore completed!"
EOF
chmod +x "$BACKUP_PATH/restore.sh"
# Create info file
cat > "$BACKUP_PATH/backup_info.txt" << EOF
Gitea Backup Information
========================
Backup Date: $(date)
Backup Location: $BACKUP_PATH
Gitea Version: $(docker-compose exec -T server gitea --version | head -1)
PostgreSQL Version: $(docker-compose exec -T db postgres --version)
Files included:
- database.sql: PostgreSQL database dump
- gitea_data.tar.gz: Gitea data volume
- postgres_data.tar.gz: PostgreSQL data volume
- docker-compose.yml: Docker compose configuration
- .env: Environment variables (if exists)
- restore.sh: Restore script
To restore this backup, run:
cd $BACKUP_PATH
./restore.sh
EOF
# Cleanup old backups (keep last 7 days)
echo "Cleaning up old backups..."
find "$BACKUP_DIR" -type d -name "gitea_backup_*" -mtime +7 -exec rm -rf {} + 2>/dev/null || true
echo "Backup completed successfully!"
echo "Backup saved to: $BACKUP_PATH"
echo "Backup size: $(du -sh "$BACKUP_PATH" | cut -f1)"

View File

@@ -6,6 +6,8 @@
helper = !/usr/bin/gh auth git-credential
[credential "https://git.ptrwd.com"]
username = peterwood
helper =
helper = !tea login helper
[user]
email = peter@peterwood.dev
name = Peter Wood
@@ -16,3 +18,8 @@
[core]
autocrlf = input
eol = lf
[credential "https://git.ptrwd.com"]
username = peterwood
provider = generic
[credential]
helper = cache

View File

@@ -3,7 +3,7 @@
export PATH=$PATH:$HOME/.local/bin
# Path to your oh-my-zsh installation.
export ZSH="/home/acedanger/.oh-my-zsh"
export ZSH="$HOME/.oh-my-zsh"
# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
@@ -100,24 +100,27 @@ export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
# Automatically use node version specified in .nvmrc if present # Automatically use node version specified in .nvmrc if present
autoload -U add-zsh-hook # Only enable if nvm is loaded
load-nvmrc() { if command -v nvm_find_nvmrc > /dev/null 2>&1; then
local nvmrc_path="$(nvm_find_nvmrc)" autoload -U add-zsh-hook
if [ -n "$nvmrc_path" ]; then load-nvmrc() {
local nvmrc_node_version=$(nvm version "$(cat "${nvmrc_path}")") local nvmrc_path="$(nvm_find_nvmrc)"
if [ "$nvmrc_node_version" = "N/A" ]; then if [ -n "$nvmrc_path" ]; then
nvm install local nvmrc_node_version=$(nvm version "$(cat "${nvmrc_path}")")
elif [ "$nvmrc_node_version" != "$(nvm version)" ]; then if [ "$nvmrc_node_version" = "N/A" ]; then
nvm use nvm install
elif [ "$nvmrc_node_version" != "$(nvm version)" ]; then
nvm use
fi
elif [ -n "$(PWD=$OLDPWD nvm_find_nvmrc)" ] && [ "$(nvm version)" != "$(nvm version default)" ]; then
nvm use default
fi fi
elif [ -n "$(PWD=$OLDPWD nvm_find_nvmrc)" ] && [ "$(nvm version)" != "$(nvm version default)" ]; then }
nvm use default add-zsh-hook chpwd load-nvmrc
fi load-nvmrc
} fi
add-zsh-hook chpwd load-nvmrc
load-nvmrc
[[ -s /home/acedanger/.autojump/etc/profile.d/autojump.sh ]] && source /home/acedanger/.autojump/etc/profile.d/autojump.sh [[ -s $HOME/.autojump/etc/profile.d/autojump.sh ]] && source $HOME/.autojump/etc/profile.d/autojump.sh
# Enable bash completion compatibility in zsh # Enable bash completion compatibility in zsh
autoload -U +X bashcompinit && bashcompinit autoload -U +X bashcompinit && bashcompinit
@@ -138,69 +141,32 @@ if [ -f "$HOME/shell/completions/env-backup-completion.bash" ]; then
source "$HOME/shell/completions/env-backup-completion.bash" source "$HOME/shell/completions/env-backup-completion.bash"
fi fi
# Go environment variables (required for Fabric and other Go tools) # Go environment variables
# GOROOT is auto-detected by Go when installed via package manager # GOROOT is auto-detected by Go when installed via package manager
export GOPATH=$HOME/go export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$PATH export PATH=$GOPATH/bin:$PATH
# Fabric AI - Pattern aliases and helper functions
if command -v fabric &> /dev/null; then
# Loop through all directories in the ~/.config/fabric/patterns directory to create aliases
if [ -d "$HOME/.config/fabric/patterns" ]; then
for pattern_dir in $HOME/.config/fabric/patterns/*/; do
if [ -d "$pattern_dir" ]; then
# Get the base name of the directory (i.e., remove the directory path)
pattern_name=$(basename "$pattern_dir")
# Create an alias in the form: alias pattern_name="fabric --pattern pattern_name"
alias_command="alias $pattern_name='fabric --pattern $pattern_name'"
# Evaluate the alias command to add it to the current shell
eval "$alias_command"
fi
done
fi
# YouTube transcript helper function
yt() {
if [ "$#" -eq 0 ] || [ "$#" -gt 2 ]; then
echo "Usage: yt [-t | --timestamps] youtube-link"
echo "Use the '-t' flag to get the transcript with timestamps."
return 1
fi
transcript_flag="--transcript"
if [ "$1" = "-t" ] || [ "$1" = "--timestamps" ]; then
transcript_flag="--transcript-with-timestamps"
shift
fi
local video_link="$1"
fabric -y "$video_link" $transcript_flag
}
fi
# SSH Agent Management - Start only if needed and working properly
ssh_agent_start() {
local ssh_agent_env="$HOME/.ssh-agent-env"
# Function to check if ssh-agent is running and responsive
ssh_agent_running() {
[ -n "$SSH_AUTH_SOCK" ] && [ -S "$SSH_AUTH_SOCK" ] && ssh-add -l >/dev/null 2>&1
}
# Load existing agent environment if it exists
if [ -f "$ssh_agent_env" ]; then
source "$ssh_agent_env" >/dev/null 2>&1
fi
# Check if agent is running and responsive
if ! ssh_agent_running; then
# Start new agent only if ssh key exists
if [ -f "$HOME/.ssh/id_ed25519" ]; then
# Clean up any stale agent environment
[ -f "$ssh_agent_env" ] && rm -f "$ssh_agent_env"
# Start new agent and save environment
ssh-agent -s > "$ssh_agent_env" 2>/dev/null
if [ $? -eq 0 ]; then

337
download-tea.sh Executable file
View File

@@ -0,0 +1,337 @@
#!/bin/bash
#
# download-tea.sh - Downloads and installs the Gitea Tea CLI tool for Linux
#
# SYNOPSIS
# ./download-tea.sh [OPTIONS]
#
# DESCRIPTION
# This script automatically downloads, verifies, and installs the Tea CLI (Gitea command-line tool).
# It supports automatic detection of the latest version, SHA256 checksum verification, and intelligent
# installation location based on user privileges.
#
# OPTIONS
# -v, --version VERSION Specifies the version of Tea to install. If not provided, the script will
# automatically fetch the latest version from the Gitea releases page.
# -f, --force Bypasses the overwrite confirmation prompt if Tea is already installed.
# -h, --help Display this help message and exit.
#
# EXAMPLES
# ./download-tea.sh
# Automatically detects and installs the latest version of Tea.
#
# ./download-tea.sh -v 0.11.1
# Installs a specific version of Tea (v0.11.1).
#
# ./download-tea.sh --force
# Installs the latest version and overwrites any existing installation without prompting.
#
# ./download-tea.sh -v 0.11.1 -f
# Installs version 0.11.1 and overwrites existing installation without prompting.
#
# NOTES
# - Requires internet connection to download from https://gitea.com
# - Automatically detects system architecture (amd64, arm64, 386, arm)
# - Installation location:
# * With root/sudo privileges: /usr/local/bin
# * Without privileges: ~/.local/bin
# - Automatically updates PATH environment variable if needed
# - After installation, restart your terminal or reload shell configuration
#
# LINK
# https://gitea.com/gitea/tea
#
# Color codes for output
readonly GREEN='\033[0;32m'
readonly RED='\033[0;31m'
readonly YELLOW='\033[1;33m'
readonly CYAN='\033[0;36m'
readonly GRAY='\033[0;37m'
readonly NC='\033[0m' # No Color
# Variables
TEA_VERSION=""
FORCE=false
# Show usage information
show_usage() {
sed -n '3,37p' "$0" | sed 's/^# \?//'
exit 0
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-v|--version)
TEA_VERSION="$2"
shift 2
;;
-f|--force)
FORCE=true
shift
;;
-h|--help)
show_usage
;;
*)
echo -e "${RED}Error: Unknown option: $1${NC}" >&2
echo "Use -h or --help for usage information."
exit 1
;;
esac
done
# 1. Determine version to install
if [[ -z "$TEA_VERSION" ]]; then
echo "No version specified. Fetching latest release..."
# Try to fetch the latest version using curl
if command -v curl &> /dev/null; then
RELEASE_PAGE=$(curl -sL "https://gitea.com/gitea/tea/releases" 2>&1)
if [[ $? -ne 0 ]]; then
echo -e "${RED}Error: Failed to fetch latest version: $RELEASE_PAGE${NC}" >&2
echo "Please specify a version manually using -v parameter."
exit 1
fi
# Fallback to wget if curl is not available
elif command -v wget &> /dev/null; then
RELEASE_PAGE=$(wget -qO- "https://gitea.com/gitea/tea/releases" 2>&1)
if [[ $? -ne 0 ]]; then
echo -e "${RED}Error: Failed to fetch latest version: $RELEASE_PAGE${NC}" >&2
echo "Please specify a version manually using -v parameter."
exit 1
fi
else
echo -e "${RED}Error: Neither curl nor wget is available${NC}" >&2
echo "Please install curl or wget, or specify a version manually using -v parameter."
exit 1
fi
# Parse the HTML to find the latest version tag (format: /gitea/tea/releases/tag/v0.x.x)
if [[ "$RELEASE_PAGE" =~ /gitea/tea/releases/tag/v([0-9]+\.[0-9]+\.[0-9]+) ]]; then
TEA_VERSION="${BASH_REMATCH[1]}"
echo -e "${GREEN}Latest version found: v$TEA_VERSION${NC}"
else
echo -e "${RED}Error: Could not determine the latest version from the releases page.${NC}" >&2
echo "Please specify a version manually using -v parameter."
exit 1
fi
else
echo "Using specified version: v$TEA_VERSION"
fi
# 2. Define variables for the download
# Detect system architecture
ARCH=$(uname -m)
case "$ARCH" in
x86_64)
ARCHITECTURE="amd64"
;;
aarch64|arm64)
ARCHITECTURE="arm64"
;;
i386|i686)
ARCHITECTURE="386"
;;
armv7l|armv6l)
ARCHITECTURE="arm"
;;
*)
echo -e "${RED}Error: Unsupported architecture: $ARCH${NC}" >&2
exit 1
;;
esac
# Construct download URLs
DOWNLOAD_URL="https://gitea.com/gitea/tea/releases/download/v${TEA_VERSION}/tea-${TEA_VERSION}-linux-${ARCHITECTURE}"
CHECKSUM_URL="https://gitea.com/gitea/tea/releases/download/v${TEA_VERSION}/checksums.txt"
# Determine installation directory based on privileges
# Check if we can write to /usr/local/bin (requires root/sudo)
if [[ -w "/usr/local/bin" ]] || [[ "$EUID" -eq 0 ]]; then
INSTALL_DIR="/usr/local/bin"
else
INSTALL_DIR="$HOME/.local/bin"
fi
# Define file names and temp locations
FILE_NAME="tea"
TEMP_FILE="/tmp/tea-${TEA_VERSION}"
TEMP_CHECKSUM="/tmp/tea-checksums.txt"
# 3. Check if file already exists
if [[ -f "$INSTALL_DIR/$FILE_NAME" ]] && [[ "$FORCE" != true ]]; then
# Prompt user before overwriting existing installation
echo -n "tea already exists in $INSTALL_DIR. Overwrite? (Y/N): "
read -r response
if [[ ! "$response" =~ ^[Yy]$ ]]; then
echo "Installation cancelled."
exit 0
fi
fi
# 4. Create the installation directory if it doesn't exist
echo "Creating installation directory: $INSTALL_DIR"
if [[ ! -d "$INSTALL_DIR" ]]; then
if ! mkdir -p "$INSTALL_DIR"; then
echo -e "${RED}Error: Failed to create installation directory${NC}" >&2
exit 1
fi
fi
# 5. Download the binary and checksum
echo "Downloading tea CLI v$TEA_VERSION for $ARCHITECTURE..."
# Use curl if available, otherwise use wget
if command -v curl &> /dev/null; then
if ! curl -fL "$DOWNLOAD_URL" -o "$TEMP_FILE"; then
echo -e "${RED}Error: Failed to download the file. Please check the URL and version number.${NC}" >&2
rm -f "$TEMP_FILE"
exit 1
fi
elif command -v wget &> /dev/null; then
if ! wget -q "$DOWNLOAD_URL" -O "$TEMP_FILE"; then
echo -e "${RED}Error: Failed to download the file. Please check the URL and version number.${NC}" >&2
rm -f "$TEMP_FILE"
exit 1
fi
else
echo -e "${RED}Error: Neither curl nor wget is available${NC}" >&2
exit 1
fi
echo "Binary downloaded successfully."
# Try to download checksum file for verification (optional but recommended)
CHECKSUM_AVAILABLE=false
echo "Attempting to download checksum file..."
if command -v curl &> /dev/null; then
if curl -fL "$CHECKSUM_URL" -o "$TEMP_CHECKSUM" 2>/dev/null; then
CHECKSUM_AVAILABLE=true
echo "Checksum file downloaded."
fi
elif command -v wget &> /dev/null; then
if wget -q "$CHECKSUM_URL" -O "$TEMP_CHECKSUM" 2>/dev/null; then
CHECKSUM_AVAILABLE=true
echo "Checksum file downloaded."
fi
fi
if [[ "$CHECKSUM_AVAILABLE" != true ]]; then
echo -e "${YELLOW}Warning: Checksum file not available for this version. Skipping verification.${NC}"
fi
# 6. Verify checksum (if available)
if [[ "$CHECKSUM_AVAILABLE" == true ]]; then
echo "Verifying file integrity..."
# Check if sha256sum is available
if ! command -v sha256sum &> /dev/null; then
echo -e "${YELLOW}Warning: sha256sum not available. Skipping checksum verification.${NC}"
else
# Calculate SHA256 hash of downloaded file
DOWNLOADED_HASH=$(sha256sum "$TEMP_FILE" | awk '{print $1}')
# Parse checksums.txt to find the hash for our specific file
# The checksums file format is: "hash filename" (two spaces)
EXPECTED_FILE_NAME="tea-${TEA_VERSION}-linux-${ARCHITECTURE}"
EXPECTED_HASH=$(grep -E "^[a-fA-F0-9]+\s+${EXPECTED_FILE_NAME}$" "$TEMP_CHECKSUM" | awk '{print $1}')
if [[ -z "$EXPECTED_HASH" ]]; then
echo -e "${YELLOW}Warning: Could not find checksum for $EXPECTED_FILE_NAME in checksums.txt${NC}"
echo -n "Continue without checksum verification? (Y/N): "
read -r response
if [[ ! "$response" =~ ^[Yy]$ ]]; then
rm -f "$TEMP_FILE" "$TEMP_CHECKSUM"
exit 1
fi
elif [[ "$DOWNLOADED_HASH" == "$EXPECTED_HASH" ]]; then
echo -e "${GREEN}Checksum verification passed.${NC}"
else
echo -e "${RED}Error: Checksum verification failed! Downloaded file may be corrupted or tampered with.${NC}" >&2
echo "Expected: $EXPECTED_HASH"
echo "Got: $DOWNLOADED_HASH"
rm -f "$TEMP_FILE" "$TEMP_CHECKSUM"
exit 1
fi
fi
fi
# 7. Move file to installation directory and make it executable
if ! mv "$TEMP_FILE" "$INSTALL_DIR/$FILE_NAME"; then
echo -e "${RED}Error: Failed to move file to installation directory${NC}" >&2
rm -f "$TEMP_FILE" "$TEMP_CHECKSUM"
exit 1
fi
if ! chmod +x "$INSTALL_DIR/$FILE_NAME"; then
echo -e "${RED}Error: Failed to make file executable${NC}" >&2
rm -f "$INSTALL_DIR/$FILE_NAME" "$TEMP_CHECKSUM"
exit 1
fi
echo "Installed to $INSTALL_DIR/$FILE_NAME"
rm -f "$TEMP_CHECKSUM"
# 8. Add the directory to PATH if necessary
echo ""
echo "Updating PATH environment variable..."
# Check if the directory is already in PATH
if [[ ":$PATH:" == *":$INSTALL_DIR:"* ]]; then
echo "Directory is already in PATH."
else
echo -e "${YELLOW}Directory is not in PATH.${NC}"
# Determine which shell configuration file to update
if [[ -n "$ZSH_VERSION" ]]; then
SHELL_CONFIG="$HOME/.zshrc"
elif [[ -n "$BASH_VERSION" ]]; then
if [[ -f "$HOME/.bashrc" ]]; then
SHELL_CONFIG="$HOME/.bashrc"
elif [[ -f "$HOME/.bash_profile" ]]; then
SHELL_CONFIG="$HOME/.bash_profile"
fi
fi
if [[ -n "$SHELL_CONFIG" ]]; then
# Check if PATH export already exists in config file
if ! grep -q "export PATH=\"\$PATH:$INSTALL_DIR\"" "$SHELL_CONFIG" 2>/dev/null; then
echo "" >> "$SHELL_CONFIG"
echo "# Added by tea installer" >> "$SHELL_CONFIG"
echo "export PATH=\"\$PATH:$INSTALL_DIR\"" >> "$SHELL_CONFIG"
echo -e "${GREEN}Added to PATH in $SHELL_CONFIG${NC}"
echo "You may need to restart your terminal or run: source $SHELL_CONFIG"
else
echo "PATH entry already exists in $SHELL_CONFIG"
fi
else
echo -e "${YELLOW}Warning: Could not determine shell configuration file.${NC}"
echo "You may need to add '$INSTALL_DIR' to your PATH manually."
fi
fi
# 9. Verification
echo ""
echo "Verification:"
if command -v tea &> /dev/null; then
VERSION_OUTPUT=$(tea --version 2>&1)
echo -e "${GREEN}$VERSION_OUTPUT${NC}"
elif [[ -x "$INSTALL_DIR/$FILE_NAME" ]]; then
VERSION_OUTPUT=$("$INSTALL_DIR/$FILE_NAME" --version 2>&1)
echo -e "${GREEN}$VERSION_OUTPUT${NC}"
else
echo -e "${YELLOW}Warning: Could not execute tea. PATH may not be updated in this session.${NC}"
fi
echo ""
echo -e "${CYAN}Installation complete! Run 'tea login add' to start configuring your Gitea instance.${NC}"
if [[ ":$PATH:" != *":$INSTALL_DIR:"* ]]; then
echo -e "${YELLOW}Note: If 'tea' command is not found, restart your terminal or reload PATH with:${NC}"
if [[ -n "$SHELL_CONFIG" ]]; then
echo -e "${GRAY} source $SHELL_CONFIG${NC}"
else
echo -e "${GRAY} export PATH=\"\$PATH:$INSTALL_DIR\"${NC}"
fi
fi

124
restore-gitea.sh Executable file
View File

@@ -0,0 +1,124 @@
#!/bin/bash
# restore-gitea.sh
# Usage: ./restore-gitea.sh <path_to_backup.tar.gz> <destination_directory>
set -e
# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[0;33m'
RED='\033[0;31m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Check Arguments
if [ "$#" -ne 2 ]; then
echo -e "${RED}Usage: $0 <path_to_backup_file> <destination_directory>${NC}"
echo "Example: $0 ./backups/gitea_backup.tar.gz ~/docker/gitea_restore"
exit 1
fi
BACKUP_FILE=$(realpath "$1")
DEST_DIR="$2"
# 1. Validation
if [ ! -f "$BACKUP_FILE" ]; then
echo -e "${RED}Error: Backup file not found at $BACKUP_FILE${NC}"
exit 1
fi
if [ -d "$DEST_DIR" ]; then
echo -e "${YELLOW}Warning: Destination directory '$DEST_DIR' already exists.${NC}"
echo -e "${RED}This process will overwrite files and STOP containers in that directory.${NC}"
read -p "Are you sure you want to continue? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Restore cancelled."
exit 1
fi
else
echo -e "${BLUE}Creating destination directory: $DEST_DIR${NC}"
mkdir -p "$DEST_DIR"
fi
# Switch to destination directory
cd "$DEST_DIR" || exit 1
# 2. Extract Backup Archive
echo -e "${BLUE}Step 1/6: Extracting backup archive...${NC}"
tar -xzf "$BACKUP_FILE"
echo "Extraction complete."
# Load environment variables from the extracted .env (if it exists)
if [ -f ".env" ]; then
echo "Loading .env configuration..."
export $(grep -v '^#' .env | xargs)
fi
# 3. Stop Existing Services & Clean Volumes
echo -e "${BLUE}Step 2/6: Preparing Docker environment...${NC}"
# We stop containers and remove volumes to ensure a clean restore state
docker compose down -v 2>/dev/null || true
echo "Environment cleaned."
# 4. Restore Volume Data (Files)
echo -e "${BLUE}Step 3/6: Restoring Gitea Data Volume...${NC}"
# We must create the containers (no-start) first so the volume exists
docker compose create gitea
# Helper container to extract data into the volume
docker run --rm \
--volumes-from gitea \
-v "$DEST_DIR":/backup \
alpine tar xzf /backup/gitea_data.tar.gz -C /data
echo "Gitea data restored."
# Restore Runner Data (if present)
if [ -f "runner_data.tar.gz" ]; then
echo -e "${BLUE}Step 4/6: Restoring Runner Data Volume...${NC}"
docker compose create runner 2>/dev/null || true
if docker compose ps -a | grep -q "runner"; then
docker run --rm \
--volumes-from gitea-runner \
-v "$DEST_DIR":/backup \
alpine tar xzf /backup/runner_data.tar.gz -C /data
echo "Runner data restored."
else
echo -e "${YELLOW}Runner service not defined in compose file. Skipping.${NC}"
fi
else
echo "No runner backup found. Skipping."
fi
# 5. Restore Database
echo -e "${BLUE}Step 5/6: Restoring Database...${NC}"
# Start only the DB container
docker compose up -d db
# Wait for Postgres to be ready
echo "Waiting for Database to initialize (15s)..."
sleep 15
if [ -f "database.sql" ]; then
echo "Importing SQL dump..."
cat database.sql | docker compose exec -T db psql -U "${POSTGRES_USER:-gitea}" -d "${POSTGRES_DB:-gitea}"
echo "Database import successful."
else
echo -e "${RED}Error: database.sql not found in backup!${NC}"
exit 1
fi
# 6. Start All Services
echo -e "${BLUE}Step 6/6: Starting Gitea...${NC}"
docker compose up -d
# Cleanup extracted files (Optional - comment out if you want to inspect them)
# echo "Cleaning up temporary extraction files..."
# rm database.sql gitea_data.tar.gz runner_data.tar.gz
echo -e "${GREEN}=======================================${NC}"
echo -e "${GREEN}✅ Restore Complete!${NC}"
echo -e "${GREEN}Gitea is running at: $DEST_DIR${NC}"
echo -e "${GREEN}=======================================${NC}"

View File

@@ -61,14 +61,24 @@ if ! command -v git &>/dev/null; then
esac
fi
# Create shell directory if it doesn't exist
mkdir -p "$HOME/shell"
# Clone or update repository
if [ -d "$DOTFILES_DIR" ]; then
if [ -d "$DOTFILES_DIR/.git" ]; then
echo -e "${YELLOW}Updating existing shell repository...${NC}"
cd "$DOTFILES_DIR"
git pull origin $DOTFILES_BRANCH
elif [ -d "$DOTFILES_DIR" ]; then
echo -e "${YELLOW}Directory exists but is not a git repository.${NC}"
# Check if directory is empty
if [ -z "$(ls -A "$DOTFILES_DIR")" ]; then
echo -e "${YELLOW}Directory is empty. Cloning...${NC}"
git clone "https://github.com/$DOTFILES_REPO.git" "$DOTFILES_DIR"
else
echo -e "${YELLOW}Backing up existing directory...${NC}"
mv "$DOTFILES_DIR" "${DOTFILES_DIR}.bak.$(date +%s)"
echo -e "${YELLOW}Cloning shell repository...${NC}"
git clone "https://github.com/$DOTFILES_REPO.git" "$DOTFILES_DIR"
fi
cd "$DOTFILES_DIR"
else
echo -e "${YELLOW}Cloning shell repository...${NC}"
git clone "https://github.com/$DOTFILES_REPO.git" "$DOTFILES_DIR"

View File

@@ -20,5 +20,4 @@ eza // Modern ls alternative
// Note: lazygit, lazydocker, and fabric require special installation (GitHub releases/scripts)
// These are handled separately in the setup script
// lazygit
// lazydocker
fabric

View File

@@ -185,14 +185,6 @@ for pkg in "${pkgs[@]}"; do
continue
fi
# Handle fabric installation
if [ "$pkg" = "fabric" ]; then
special_installs+=("$pkg")
continue
fi
# Handle lazygit - available in COPR for Fedora, special install for Debian/Ubuntu
if [ "$pkg" = "lazygit" ] && [ "$OS_NAME" != "fedora" ]; then
special_installs+=("$pkg")
@@ -245,28 +237,6 @@ esac
echo -e "${GREEN}Package installation completed for $OS_NAME $OS_VERSION.${NC}" echo -e "${GREEN}Package installation completed for $OS_NAME $OS_VERSION.${NC}"
# Install Go if not present (required for Fabric and other Go tools)
echo -e "${YELLOW}Checking Go installation...${NC}"
if ! command -v go &> /dev/null; then
echo -e "${YELLOW}Installing Go programming language...${NC}"
GO_VERSION="1.21.5" # Stable version that works well with Fabric
# Download and install Go
wget -q "https://go.dev/dl/go${GO_VERSION}.linux-amd64.tar.gz" -O /tmp/go.tar.gz
# Remove any existing Go installation
sudo rm -rf /usr/local/go
# Extract Go to /usr/local
sudo tar -C /usr/local -xzf /tmp/go.tar.gz
rm /tmp/go.tar.gz
echo -e "${GREEN}Go ${GO_VERSION} installed successfully!${NC}"
echo -e "${YELLOW}Go PATH will be configured in shell configuration${NC}"
else
echo -e "${GREEN}Go is already installed: $(go version)${NC}"
fi
# Handle special installations that aren't available through package managers
echo -e "${YELLOW}Installing special packages...${NC}"
for pkg in "${special_installs[@]}"; do
@@ -285,44 +255,6 @@ for pkg in "${special_installs[@]}"; do
echo -e "${GREEN}Lazydocker is already installed${NC}" echo -e "${GREEN}Lazydocker is already installed${NC}"
fi fi
;; ;;
"fabric")
if ! command -v fabric &> /dev/null; then
echo -e "${YELLOW}Installing Fabric from GitHub releases...${NC}"
# Download and install the latest Fabric binary for Linux AMD64
curl -L https://github.com/danielmiessler/fabric/releases/latest/download/fabric-linux-amd64 -o /tmp/fabric
chmod +x /tmp/fabric
sudo mv /tmp/fabric /usr/local/bin/fabric
echo -e "${GREEN}Fabric binary installed successfully!${NC}"
# Verify installation
if fabric --version; then
echo -e "${GREEN}Fabric installation verified!${NC}"
echo -e "${YELLOW}Running Fabric setup...${NC}"
# Create fabric config directory
mkdir -p "$HOME/.config/fabric"
# Run fabric setup with proper configuration
echo -e "${YELLOW}Setting up Fabric patterns and configuration...${NC}"
# Initialize fabric with default patterns
fabric --setup || echo -e "${YELLOW}Initial fabric setup completed${NC}"
# Update patterns to get the latest
echo -e "${YELLOW}Updating Fabric patterns...${NC}"
fabric --updatepatterns || echo -e "${YELLOW}Pattern update completed${NC}"
echo -e "${GREEN}Fabric setup completed successfully!${NC}"
echo -e "${YELLOW}You can test fabric with: fabric --list-patterns${NC}"
else
echo -e "${RED}Fabric installation verification failed${NC}"
fi
else
echo -e "${GREEN}Fabric is already installed${NC}"
# Still try to update patterns
echo -e "${YELLOW}Updating Fabric patterns...${NC}"
fabric --updatepatterns || echo -e "${YELLOW}Pattern update completed${NC}"
fi
;;
"lazygit") "lazygit")
if ! command -v lazygit &> /dev/null; then if ! command -v lazygit &> /dev/null; then
echo -e "${YELLOW}Installing Lazygit from GitHub releases...${NC}" echo -e "${YELLOW}Installing Lazygit from GitHub releases...${NC}"
@@ -635,30 +567,8 @@ echo -e "${GREEN}OS: $OS_NAME $OS_VERSION${NC}"
echo -e "${GREEN}Package Manager: $PKG_MANAGER${NC}" echo -e "${GREEN}Package Manager: $PKG_MANAGER${NC}"
echo -e "${GREEN}Shell: $(basename "$SHELL") → zsh${NC}" echo -e "${GREEN}Shell: $(basename "$SHELL") → zsh${NC}"
echo -e "\n${YELLOW}Testing Fabric installation...${NC}"
if command -v fabric &> /dev/null; then
echo -e "${GREEN}✓ Fabric is installed${NC}"
# Test fabric patterns
echo -e "${YELLOW}Testing Fabric patterns...${NC}"
if fabric --list-patterns >/dev/null 2>&1; then
echo -e "${GREEN}✓ Fabric patterns are available${NC}"
echo -e "${YELLOW}Number of patterns: $(fabric --list-patterns 2>/dev/null | wc -l)${NC}"
else
echo -e "${YELLOW}⚠ Fabric patterns may need to be updated${NC}"
fi
else
echo -e "${RED}✗ Fabric is not installed${NC}"
fi
echo -e "\n${GREEN}=== Post-Installation Instructions ===${NC}" echo -e "\n${GREEN}=== Post-Installation Instructions ===${NC}"
echo -e "${YELLOW}1. Restart your shell or run: source ~/.zshrc${NC}" echo -e "${YELLOW}1. Restart your shell or run: source ~/.zshrc${NC}"
echo -e "${YELLOW}2. Test Fabric: fabric --list-patterns${NC}"
echo -e "${YELLOW}3. Try a Fabric pattern: echo 'Hello world' | fabric --pattern summarize${NC}"
echo -e "\n${GREEN}=== Useful Commands ===${NC}"
echo -e "${YELLOW}• Fabric help: fabric --help${NC}"
echo -e "${YELLOW}• Update patterns: fabric --updatepatterns${NC}"
echo -e "\n${GREEN}Setup completed successfully for $OS_NAME $OS_VERSION!${NC}" echo -e "\n${GREEN}Setup completed successfully for $OS_NAME $OS_VERSION!${NC}"
echo -e "${YELLOW}Note: You may need to log out and log back in for all changes to take effect.${NC}" echo -e "${YELLOW}Note: You may need to log out and log back in for all changes to take effect.${NC}"

214
uninstall-fabric.sh Executable file
View File

@@ -0,0 +1,214 @@
#!/bin/bash
# uninstall-fabric.sh
#
# Description: Safely uninstalls the Fabric AI CLI (Daniel Miessler) and related configuration.
# Avoids removing the 'fabric' Python deployment library.
# Detects OS and uses appropriate package managers if applicable.
# Logs all actions to a file.
#
# Usage: ./uninstall-fabric.sh
#
# Author: GitHub Copilot
set -u
# Configuration
LOG_FILE="uninstall-fabric.log"
CURRENT_DATE=$(date +'%Y-%m-%d %H:%M:%S')
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Initialize log file
echo "Fabric AI CLI Uninstallation Log - Started at $CURRENT_DATE" > "$LOG_FILE"
# Logging functions
log() {
local message="$1"
echo -e "[$(date +'%H:%M:%S')] $message" | tee -a "$LOG_FILE"
}
info() {
local message="$1"
echo -e "${BLUE}[INFO]${NC} $message" | tee -a "$LOG_FILE"
}
success() {
local message="$1"
echo -e "${GREEN}[SUCCESS]${NC} $message" | tee -a "$LOG_FILE"
}
warning() {
local message="$1"
echo -e "${YELLOW}[WARNING]${NC} $message" | tee -a "$LOG_FILE"
}
error() {
local message="$1"
echo -e "${RED}[ERROR]${NC} $message" | tee -a "$LOG_FILE"
exit 1
}
# Function to detect Operating System
detect_os() {
if [[ -f /etc/os-release ]]; then
# shellcheck source=/dev/null
. /etc/os-release
OS_NAME=$ID
VERSION_ID=$VERSION_ID
info "Detected OS: $NAME ($ID) $VERSION_ID"
else
error "Could not detect operating system. /etc/os-release file not found."
fi
}
# Function to check for root privileges
check_privileges() {
if [[ $EUID -ne 0 ]]; then
warning "This script is not running as root."
warning "System package removal might fail or require sudo password."
else
info "Running with root privileges."
fi
}
# Function to confirm action
confirm_execution() {
echo -e "\n${YELLOW}WARNING: This script will attempt to uninstall the Fabric AI CLI (Daniel Miessler).${NC}"
echo -e "It will NOT remove the 'fabric' Python deployment library."
echo -e "It will remove the 'fabric' binary if identified as the AI tool, and configuration files."
echo -e "Please ensure you have backups if necessary.\n"
read -p "Do you want to proceed? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
info "Operation cancelled by user."
exit 0
fi
}
# Function to check if a binary is the Fabric AI tool
is_fabric_ai_tool() {
local bin_path="$1"
# Check help output for keywords
# The AI tool usually mentions 'patterns', 'context', 'session', 'model'
if "$bin_path" --help 2>&1 | grep -qE "Daniel Miessler|patterns|context|session|model"; then
return 0
fi
return 1
}
# Function to uninstall binary
uninstall_binary() {
local bin_path
bin_path=$(command -v fabric)
if [[ -n "$bin_path" ]]; then
info "Found 'fabric' binary at: $bin_path"
if is_fabric_ai_tool "$bin_path"; then
info "Identified as Fabric AI CLI."
# Check if owned by system package
local pkg_owner=""
if [[ "$OS_NAME" =~ (debian|ubuntu|linuxmint|pop|kali) ]]; then
if dpkg -S "$bin_path" &> /dev/null; then
pkg_owner=$(dpkg -S "$bin_path" | cut -d: -f1)
fi
elif [[ "$OS_NAME" =~ (fedora|centos|rhel|almalinux|rocky) ]]; then
if rpm -qf "$bin_path" &> /dev/null; then
pkg_owner=$(rpm -qf "$bin_path")
fi
fi
if [[ -n "$pkg_owner" ]]; then
info "Binary is owned by system package: $pkg_owner"
info "Removing package $pkg_owner..."
local sudo_prefix=""
[[ $EUID -ne 0 ]] && sudo_prefix="sudo"
if [[ "$OS_NAME" =~ (debian|ubuntu|linuxmint|pop|kali) ]]; then
$sudo_prefix apt-get remove -y "$pkg_owner" >> "$LOG_FILE" 2>&1 || error "Failed to remove package $pkg_owner"
else
$sudo_prefix dnf remove -y "$pkg_owner" >> "$LOG_FILE" 2>&1 || error "Failed to remove package $pkg_owner"
fi
success "Removed system package $pkg_owner."
else
info "Binary is not owned by a system package. Removing manually..."
rm -f "$bin_path" || error "Failed to remove $bin_path"
success "Removed binary $bin_path."
fi
else
warning "The binary at $bin_path does not appear to be the Fabric AI CLI. Skipping removal to be safe."
warning "Run '$bin_path --help' to verify what it is."
fi
else
info "'fabric' binary not found in PATH."
fi
}
# Function to uninstall from pipx
uninstall_pipx() {
if command -v pipx &> /dev/null; then
info "Checking pipx for 'fabric'..."
if pipx list | grep -q "package fabric"; then
info "Found 'fabric' installed via pipx. Uninstalling..."
pipx uninstall fabric >> "$LOG_FILE" 2>&1 || error "Failed to uninstall fabric via pipx"
success "Uninstalled fabric via pipx."
else
info "'fabric' not found in pipx."
fi
fi
}
# Function to remove configuration files
remove_config() {
local config_dirs=(
"$HOME/.config/fabric"
"$HOME/.fabric"
"$HOME/.local/share/fabric"
)
for dir in "${config_dirs[@]}"; do
if [[ -d "$dir" ]]; then
info "Found configuration directory: $dir"
rm -rf "$dir" || error "Failed to remove $dir"
success "Removed $dir."
fi
done
}
# Main execution flow
main() {
detect_os
check_privileges
confirm_execution
info "Starting uninstallation process..."
# Check pipx first as it manages its own binaries
uninstall_pipx
# Check binary
uninstall_binary
# Remove config
remove_config
echo -e "\n----------------------------------------------------------------"
success "Uninstallation steps completed."
info "A log of this operation has been saved to: $LOG_FILE"
echo -e "${YELLOW}Note: If you removed system-level components, a reboot might be recommended.${NC}"
echo -e "----------------------------------------------------------------"
}
# Trap interrupts
trap 'echo -e "\n${RED}Script interrupted by user.${NC}"; exit 1' INT TERM
# Run main
main

View File

@@ -59,7 +59,13 @@ readonly CYAN='\033[0;36m'
readonly NC='\033[0m' # No Color
# Configuration
readonly LOG_FILE="/var/log/system-update.log"
if [[ -w "/var/log" ]]; then
LOG_FILE="/var/log/system-update.log"
else
LOG_FILE="$HOME/.local/share/system-update.log"
mkdir -p "$(dirname "$LOG_FILE")"
fi
readonly LOG_FILE
# Global variables
ERRORS_DETECTED=0
@@ -516,6 +522,9 @@ perform_system_update() {
increment_error "Failed to upgrade packages with nala" increment_error "Failed to upgrade packages with nala"
return 1 return 1
fi fi
log_message "INFO" "Cleaning up unused packages with nala..."
sudo nala autoremove -y
;;
dnf)
log_message "INFO" "Checking for updates with dnf..."
@@ -526,6 +535,9 @@ perform_system_update() {
increment_error "Failed to upgrade packages with dnf" increment_error "Failed to upgrade packages with dnf"
return 1 return 1
fi fi
log_message "INFO" "Cleaning up unused packages with dnf..."
sudo dnf autoremove -y
;;
apt)
log_message "INFO" "Updating package lists with apt..."
@@ -539,12 +551,49 @@ perform_system_update() {
increment_error "Failed to upgrade packages with apt" increment_error "Failed to upgrade packages with apt"
return 1 return 1
fi fi
log_message "INFO" "Cleaning up unused packages with apt..."
sudo apt autoremove -y && sudo apt autoclean
;;
esac
# Universal packages
if command -v flatpak &> /dev/null; then
log_message "INFO" "Updating Flatpak packages..."
flatpak update -y
log_message "INFO" "Cleaning up unused Flatpak runtimes..."
flatpak uninstall --unused -y
fi
if command -v snap &> /dev/null; then
log_message "INFO" "Updating Snap packages..."
sudo snap refresh
fi
log_message "INFO" "System package update completed successfully" log_message "INFO" "System package update completed successfully"
} }
update_signal() {
# check if hostname is `mini`
if [[ "$(hostname)" != "mini" ]]; then
debug_log "Signal update is only available on host 'mini'"
return 0
fi
# check if distrobox is installed
if ! command -v distrobox-upgrade &> /dev/null; then
debug_log "distrobox is not installed"
return 0
fi
# Capture failure to prevent script exit due to set -e
# Known issue: distrobox-upgrade may throw a stat error at the end despite success
if ! distrobox-upgrade signal; then
log_message "WARN" "Signal update reported an error (likely benign 'stat' issue). Continuing..."
fi
}
################################################################################
# Main Execution
################################################################################
@@ -583,6 +632,9 @@ main() {
upgrade_oh_my_zsh
perform_system_update
# signal is made available using distrobox and is only available on `mini`
update_signal
# Restart services
if [[ "$SKIP_SERVICES" != true ]]; then
if [[ "$SKIP_PLEX" != true ]]; then
@@ -594,6 +646,11 @@ main() {
debug_log "Skipping all service management due to --skip-services flag" debug_log "Skipping all service management due to --skip-services flag"
fi fi
# Check for reboot requirement
if [[ -f /var/run/reboot-required ]]; then
log_message "WARN" "A system reboot is required to complete the update."
fi
# Final status
if [[ $ERRORS_DETECTED -eq 0 ]]; then
log_message "INFO" "System update completed successfully!"