# Docker Bootstrap Testing Framework
This document describes the comprehensive Docker-based testing framework for validating the bootstrap and setup process across different environments.
## Overview
The testing framework consists of three main components:
1. **test-setup.sh**: The main test script that validates the bootstrap and setup process
2. **run-docker-tests.sh**: A runner script that executes tests in Docker containers
3. **Dockerfile**: Definition of test environments (Ubuntu and Debian)
## Testing Features
- **Cross-platform testing**: Test on both Ubuntu and Debian environments
- **Isolated environments**: All tests run in fresh Docker containers
- **Comprehensive validation**: Tests both the bootstrap and setup processes
- **Continuous testing**: Tests all packages regardless of individual failures
- **Detailed reporting**: Summary of all successful and failed components
## How Testing Works
### The Docker Test Environment
The `Dockerfile` defines two testing environments:
- **ubuntu-test**: Based on Ubuntu 24.04
- **debian-test**: Based on Debian 12
Each environment (see the Dockerfile sketch after this list):
1. Installs minimal dependencies (curl, git, sudo, wget)
2. Creates a test user with sudo permissions
3. Sets up the directory structure for testing
4. Copies the test script and packages list
5. Runs the test script when the container starts
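The repository's actual Dockerfile isn't reproduced here, but a multi-stage file implementing those five steps could look roughly like this (the base images match the targets above; the user name, paths, and `COPY` sources are assumptions):
```dockerfile
# Sketch only: user name, paths, and COPY sources are assumptions.
FROM ubuntu:24.04 AS ubuntu-test
RUN apt-get update && apt-get install -y curl git sudo wget \
    && rm -rf /var/lib/apt/lists/*
RUN useradd -m -s /bin/bash testuser \
    && echo 'testuser ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/testuser
USER testuser
WORKDIR /home/testuser/dotfiles
COPY --chown=testuser:testuser test-setup.sh setup/packages.list ./
CMD ["./test-setup.sh"]

# The Debian target repeats the same steps on a different base image.
FROM debian:12 AS debian-test
RUN apt-get update && apt-get install -y curl git sudo wget \
    && rm -rf /var/lib/apt/lists/*
RUN useradd -m -s /bin/bash testuser \
    && echo 'testuser ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/testuser
USER testuser
WORKDIR /home/testuser/dotfiles
COPY --chown=testuser:testuser test-setup.sh setup/packages.list ./
CMD ["./test-setup.sh"]
```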
### The Test Script (test-setup.sh)
The test script validates:
1. **Script Syntax**: Checks if bootstrap.sh and setup.sh have valid syntax
2. **Core Tools**: Verifies that git, curl, and wget are available
3. **Package Availability**: Checks if packages in packages.list are available in repositories
4. **Package Installation**: Tests if each package is installed
5. **Shell Setup**: Validates Oh My Zsh and plugin installation
6. **Dotfiles**: Checks if dotfiles are properly symlinked
The script tracks all missing or misconfigured components and provides a summary at the end, including suggestions for fixing issues.
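In outline, the script follows a continue-on-failure pattern like the sketch below; the `check` helper, the messages, and the `packages.list` path are illustrative rather than the script's actual code:
```bash
#!/usr/bin/env bash
# Illustrative excerpt; the real test-setup.sh may structure this differently.
ERRORS=0
MISSING=()

check() {            # check <description> <command...>
  local desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    printf '\e[32mOK\e[0m   %s\n' "$desc"
  else
    printf '\e[31mFAIL\e[0m %s\n' "$desc"
    MISSING+=("$desc")
    ERRORS=$((ERRORS + 1))
  fi
}

check "bootstrap.sh syntax" bash -n bootstrap.sh
check "setup.sh syntax"     bash -n setup.sh

for tool in git curl wget; do
  check "core tool: $tool" command -v "$tool"
done

while read -r pkg; do
  [[ -z $pkg || $pkg == \#* ]] && continue   # skip blanks and comments
  check "available in repos: $pkg" apt-cache show "$pkg"
  check "installed: $pkg"          dpkg -s "$pkg"
done < packages.list

printf '\n%d issue(s) found: %s\n' "$ERRORS" "${MISSING[*]}"
```
Because `check` records a failure and keeps going rather than exiting, every entry in `packages.list` is exercised even when an earlier one fails, which is what the "Testing Without Stopping on Failures" section below relies on.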
### Test Runner (run-docker-tests.sh)
The runner script provides several test modes (a dispatch sketch follows the list):
- **ubuntu**: Run test on Ubuntu container
- **debian**: Run test on Debian container
- **full-ubuntu**: Run full bootstrap test on Ubuntu
- **full-debian**: Run full bootstrap test on Debian
- **all**: Run tests on both Ubuntu and Debian
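A minimal sketch of how that dispatch might be wired up; the image tags, target names, and the `Dockerfile.full-test` name are assumptions:
```bash
#!/usr/bin/env bash
# Sketch of the mode dispatch; the real runner may differ.
set -euo pipefail

run_target() {   # build the named Dockerfile target, then run it
  docker build --target "${1}-test" -t "bootstrap-test-${1}" .
  docker run --rm "bootstrap-test-${1}"
}

run_full() {     # full test: temporary Dockerfile that bootstraps from GitHub
  docker build -f Dockerfile.full-test -t "bootstrap-full-${1}" .
  docker run --rm "bootstrap-full-${1}"
}

case "${1:-}" in
  ubuntu|debian)           run_target "$1" ;;
  full-ubuntu|full-debian) run_full "${1#full-}" ;;
  all)                     run_target ubuntu; run_target debian ;;
  *) echo "usage: $0 {ubuntu|debian|full-ubuntu|full-debian|all}" >&2; exit 1 ;;
esac
```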
## Testing Without Stopping on Failures
A key feature is the ability to test all packages in `packages.list` without stopping at the first failure. This ensures:
1. Complete coverage of all requirements
2. Comprehensive reporting of all issues
3. Better debugging experience when multiple components need attention
## Running Tests
```bash
# Test on Ubuntu
./run-docker-tests.sh ubuntu
# Test on Debian
./run-docker-tests.sh debian
# Full bootstrap test on Ubuntu
./run-docker-tests.sh full-ubuntu
# Full bootstrap test on Debian
./run-docker-tests.sh full-debian
# Test on both Ubuntu and Debian
./run-docker-tests.sh all
```
### Choosing the Right Test Option
The testing framework offers different options for different testing needs:
| If you want to... | Use this command | Why |
|-------------------|------------------|-----|
| Quickly check if specific packages are available | `./run-docker-tests.sh ubuntu` or `debian` | Fast validation of packages without full installation |
| Test changes to the test-setup.sh script | `./run-docker-tests.sh ubuntu` or `debian` | Executes only the test script in a clean environment |
| Validate a fix for a package installation issue | `./run-docker-tests.sh ubuntu` or `debian` | Tests package availability and installation |
| Test the complete user experience | `./run-docker-tests.sh full-ubuntu` or `full-debian` | Executes the actual bootstrap script like a real user would |
| Ensure bootstrap.sh works correctly | `./run-docker-tests.sh full-ubuntu` or `full-debian` | Tests the entire installation process from scratch |
| Verify cross-platform compatibility | `./run-docker-tests.sh all` | Tests on both supported platforms |
| Validate everything before pushing to main | `./run-docker-tests.sh all`, plus both full tests | Complete validation across environments |
### Key Differences
**Standard Tests** (`ubuntu`, `debian`):
- Use the Docker targets defined in the main Dockerfile
- Run the `test-setup.sh` script to check components
- Faster execution, focused on component validation
- Don't perform the actual bootstrap installation
**Full Tests** (`full-ubuntu`, `full-debian`):
- Create a temporary Dockerfile for comprehensive testing (sketched after this list)
- Execute the bootstrap script directly from GitHub
- Complete end-to-end testing of the actual installation process
- Simulate the real user experience
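The generated temporary Dockerfile could look roughly like this (the raw GitHub URL is a placeholder; substitute the repository's actual path):
```dockerfile
# Sketch of a generated full-test Dockerfile; the URL is a placeholder.
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y curl sudo \
    && rm -rf /var/lib/apt/lists/*
RUN useradd -m -s /bin/bash testuser \
    && echo 'testuser ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/testuser
USER testuser
WORKDIR /home/testuser
# Fetch and run bootstrap.sh exactly as a new user would
CMD ["bash", "-c", "curl -fsSL https://raw.githubusercontent.com/<user>/dotfiles/main/bootstrap.sh | bash"]
```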
## Test Output
Each test run provides:
1. A color-coded console output showing success/failure of each component
2. A list of missing packages at the end
3. A detailed log file with all test results (saved to `/tmp`; see the snippet after this list)
4. Suggestions for fixing detected issues
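A common way to produce both the console output and the `/tmp` log from a single stream is to redirect everything through `tee` near the top of the script; the file-name pattern below is an assumption:
```bash
# Mirror all stdout/stderr to a timestamped log under /tmp
LOG_FILE="/tmp/test-setup-$(date +%Y%m%d-%H%M%S).log"
exec > >(tee -a "$LOG_FILE") 2>&1
```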
## Adding New Tests
To add new package tests:
1. Add the package name to `setup/packages.list`
2. The test framework will automatically validate its availability and installation
For more complex components (a sketch follows these steps):
1. Add a new test function in `test-setup.sh`
2. Call the function in the main testing sequence
3. Increment the error counter if the test fails
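For instance, a hypothetical symlink check added this way might look like the following; the file names and the `ERRORS` counter mirror the earlier sketch, not the script's actual internals:
```bash
# Hypothetical custom test: verify dotfiles are symlinked into $HOME
test_dotfile_symlinks() {
  local f
  for f in .zshrc .gitconfig; do      # file names assumed for illustration
    if [[ -L "$HOME/$f" ]]; then
      echo "OK   symlink: $f"
    else
      echo "FAIL symlink: $f"
      ERRORS=$((ERRORS + 1))          # step 3: increment the error counter
    fi
  done
}

test_dotfile_symlinks                 # step 2: call it from the main sequence
```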