feat: Add CI/CD setup guide with Gitea Actions for trading analysis application

feat: Implement multi-user support with separate brokerage accounts and user authentication

feat: Configure SSO authentication setup using Google OAuth 2.0 for secure access

refactor: Update index page to reflect new Trading Analysis Dashboard features and descriptions

docs: Enhance quickstart guide for deploying Trading Analysis Dashboard with detailed steps

chore: Add runner configuration for Gitea Actions with logging and container settings
Author: Peter Wood
Date: 2025-11-14 12:43:09 -05:00
Parent: 2f5e59b40f
Commit: c6eb26037b
24 changed files with 3594 additions and 169 deletions

guides/deployment/caddy.mdx
@@ -0,0 +1,393 @@
---
title: 'Caddy Configuration'
description: 'Configure Caddy reverse proxy for different deployment scenarios'
---
## Overview
Caddy is a powerful web server that automatically handles HTTPS with Let's Encrypt. This guide explains how to configure Caddy for different deployment scenarios.
## Local Development
The default `Caddyfile` is configured for local development:
```caddy Caddyfile
localhost {
reverse_proxy trading_app:5000
encode gzip
header {
X-Content-Type-Options nosniff
X-Frame-Options DENY
X-XSS-Protection "1; mode=block"
Referrer-Policy "strict-origin-when-cross-origin"
-Server
}
}
```
<Info>
Access your app at: `http://localhost`
</Info>
## Production Deployment
### Step 1: Domain Setup
<Steps>
<Step title="Configure DNS">
Point your domain's DNS A record to your server's IP
</Step>
<Step title="Copy Production Template">
```bash
cp Caddyfile.production Caddyfile
```
</Step>
<Step title="Edit Caddyfile">
Replace `your-domain.com` with your actual domain
</Step>
</Steps>
### Step 2: Environment Configuration
Update your `.env` file:
```env .env
DOMAIN=your-domain.com
FLASK_ENV=production
```
### Step 3: Deploy
```bash
docker-compose up -d
```
<Check>
Caddy will automatically:
- Obtain SSL certificates from Let's Encrypt
- Handle HTTP to HTTPS redirects
- Renew certificates automatically
</Check>
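Once DNS resolves and the stack is up, you can spot-check the redirect and the issued certificate from any machine with `curl` (replace `your-domain.com` with your domain):
```bash
# Expect a 308 permanent redirect from HTTP to HTTPS
curl -sI http://your-domain.com | head -n 3
# Inspect the certificate Caddy obtained from Let's Encrypt
curl -vI https://your-domain.com 2>&1 | grep -E 'subject:|issuer:|expire'
```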
## Configuration Options
### Basic Reverse Proxy
```caddy
your-domain.com {
reverse_proxy trading_app:5000
}
```
### With Compression and Security Headers
```caddy
your-domain.com {
reverse_proxy trading_app:5000
encode gzip
header {
X-Content-Type-Options nosniff
X-Frame-Options DENY
Strict-Transport-Security "max-age=31536000"
}
}
```
### Static File Caching
```caddy
your-domain.com {
reverse_proxy trading_app:5000
@static path /static/*
handle @static {
header Cache-Control "public, max-age=3600"
reverse_proxy trading_app:5000
}
}
```
### Rate Limiting
The `rate_limit` directive is not part of the standard Caddy build; it comes from the third-party [caddy-ratelimit](https://github.com/mholt/caddy-ratelimit) module, so you need a Caddy image built with that module (for example via `xcaddy`). With the module included, a basic per-client limit looks roughly like this:
```caddy
your-domain.com {
    rate_limit {
        zone general {
            key    {remote_host}
            events 10
            window 1s
        }
    }
    reverse_proxy trading_app:5000
}
```
### Basic Authentication
```caddy
admin.your-domain.com {
basicauth {
admin $2a$14$hashed_password_here
}
reverse_proxy trading_app:5000
}
```
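The second value in `basicauth` must be a bcrypt hash, not the plaintext password. Caddy ships a helper for generating it, which you can run inside the container:
```bash
# Generate a bcrypt hash to paste into the basicauth directive
docker-compose exec caddy caddy hash-password --plaintext 'your-password'
```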
## SSL/TLS Configuration
### Automatic HTTPS (Default)
Caddy automatically obtains certificates from Let's Encrypt:
```caddy
your-domain.com {
reverse_proxy trading_app:5000
}
```
<Note>
No additional configuration needed! Caddy handles everything automatically.
</Note>
### Custom Certificates
```caddy
your-domain.com {
tls /path/to/cert.pem /path/to/key.pem
reverse_proxy trading_app:5000
}
```
### Internal/Self-Signed Certificates
```caddy
your-domain.com {
tls internal
reverse_proxy trading_app:5000
}
```
## Monitoring and Logging
### Access Logs
```caddy
your-domain.com {
reverse_proxy trading_app:5000
log {
output file /var/log/caddy/access.log
format json
}
}
```
### Error Handling
```caddy
your-domain.com {
reverse_proxy trading_app:5000
handle_errors {
@404 expression {http.error.status_code} == 404
handle @404 {
rewrite * /404.html
reverse_proxy trading_app:5000
}
}
}
```
## Advanced Features
### Multiple Domains
```caddy
site1.com, site2.com {
reverse_proxy trading_app:5000
}
```
### Subdomain Routing
Upstream addresses in `reverse_proxy` cannot include a path, so rewrite the request first when the API is served under `/api`:
```caddy
api.your-domain.com {
    rewrite * /api{uri}
    reverse_proxy trading_app:5000
}
app.your-domain.com {
    reverse_proxy trading_app:5000
}
```
### Load Balancing
```caddy
your-domain.com {
reverse_proxy trading_app1:5000 trading_app2:5000 {
lb_policy round_robin
health_uri /health
}
}
```
## Troubleshooting
### Check Caddy Status
```bash
docker-compose logs caddy
```
### Certificate Issues
```bash
# Inspect the certificates Caddy has obtained (default data directory in the official image)
docker-compose exec caddy ls -R /data/caddy/certificates
# Reload the configuration; Caddy handles renewals on its own
docker-compose exec caddy caddy reload --config /etc/caddy/Caddyfile
```
### Configuration Validation
```bash
# Validate Caddyfile syntax
docker-compose exec caddy caddy validate --config /etc/caddy/Caddyfile
```
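Caddy can also normalize the file's formatting, which keeps diffs small when the Caddyfile is under version control:
```bash
# Print a formatted Caddyfile (add --overwrite to rewrite it in place)
docker-compose exec caddy caddy fmt /etc/caddy/Caddyfile
```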
### Common Issues
<AccordionGroup>
<Accordion title="Port 80/443 already in use">
```bash
# Check what's using the ports
netstat -tlnp | grep :80
netstat -tlnp | grep :443
```
Stop the conflicting service or change Caddy's ports in docker-compose.yml
</Accordion>
<Accordion title="DNS not pointing to server">
```bash
# Check DNS resolution
nslookup your-domain.com
```
Verify your domain's A record points to the correct IP address
</Accordion>
<Accordion title="Let's Encrypt rate limits">
Use staging environment for testing:
```caddy
your-domain.com {
tls {
ca https://acme-staging-v02.api.letsencrypt.org/directory
}
reverse_proxy trading_app:5000
}
```
</Accordion>
<Accordion title="Certificate validation fails">
- Ensure port 80 is accessible from the internet
- Verify DNS is propagated: `dig your-domain.com`
- Check firewall rules allow incoming connections
- Review Caddy logs for specific errors
</Accordion>
</AccordionGroup>
## Performance Tuning
### HTTP/2 and HTTP/3
HTTP/2 is on by default, and HTTP/3 is enabled by default since Caddy 2.6. To set the protocols explicitly, use the global `servers` option rather than a site block:
```caddy
{
    servers {
        protocols h1 h2 h3
    }
}
your-domain.com {
    reverse_proxy trading_app:5000
}
```
### Connection Limits
```caddy
your-domain.com {
reverse_proxy trading_app:5000 {
transport http {
max_conns_per_host 100
}
}
}
```
### Timeout Configuration
```caddy
your-domain.com {
reverse_proxy trading_app:5000 {
transport http {
read_timeout 30s
write_timeout 30s
}
}
}
```
## Security Best Practices
<CardGroup cols={2}>
<Card title="Strong TLS" icon="lock">
Use TLS 1.2+ with strong cipher suites (Caddy's default)
</Card>
<Card title="Security Headers" icon="shield-halved">
Add security headers like CSP, HSTS, X-Frame-Options
</Card>
<Card title="Rate Limiting" icon="gauge-high">
Implement rate limiting to prevent abuse
</Card>
<Card title="Access Control" icon="user-shield">
Use basic auth or OAuth for sensitive routes
</Card>
</CardGroup>
### Recommended Security Configuration
```caddy
your-domain.com {
reverse_proxy trading_app:5000
encode gzip
header {
# Security headers
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
X-Content-Type-Options "nosniff"
X-Frame-Options "DENY"
X-XSS-Protection "1; mode=block"
Referrer-Policy "strict-origin-when-cross-origin"
Permissions-Policy "geolocation=(), microphone=(), camera=()"
# Hide server info
-Server
-X-Powered-By
}
}
```
## Additional Resources
<CardGroup cols={2}>
<Card title="Caddy Documentation" icon="book" href="https://caddyserver.com/docs/">
Official Caddy documentation
</Card>
<Card title="Caddyfile Syntax" icon="code" href="https://caddyserver.com/docs/caddyfile">
Learn Caddyfile syntax
</Card>
<Card title="Automatic HTTPS" icon="certificate" href="https://caddyserver.com/docs/automatic-https">
How Caddy handles HTTPS automatically
</Card>
<Card title="Docker Deployment" icon="docker" href="/guides/deployment/docker">
Back to Docker deployment guide
</Card>
</CardGroup>

guides/deployment/docker.mdx
@@ -0,0 +1,426 @@
---
title: 'Docker Deployment'
description: 'Deploy the Trading Analysis Dashboard using Docker containers'
---
## Quick Start
<Steps>
<Step title="Install Prerequisites">
Install [Docker Desktop](https://www.docker.com/products/docker-desktop/) (includes Docker Compose)
</Step>
<Step title="Run Deployment Script">
<Tabs>
<Tab title="Windows">
```batch
deploy.bat
```
</Tab>
<Tab title="Linux/macOS">
```bash
chmod +x deploy.sh
./deploy.sh
```
</Tab>
</Tabs>
</Step>
<Step title="Manual Deployment (Alternative)">
```bash
# Copy environment file
cp .env.docker .env
# Build and start services
docker compose up -d
# Check status
docker compose ps
```
</Step>
</Steps>
## Services Overview
The deployment includes these services:
| Service | Port | Description |
|---------|------|-------------|
| **trading_app** | 8080 | Main Flask application |
| **postgres** | 5432 | PostgreSQL database |
| **caddy** | 80, 443 | Reverse proxy with automatic HTTPS |
## Access URLs
<CardGroup cols={2}>
<Card title="Production" icon="globe">
https://performance.miningwood.com
</Card>
<Card title="Main Application" icon="laptop">
http://localhost:8080
</Card>
<Card title="Via Caddy" icon="server">
http://localhost
</Card>
<Card title="Database" icon="database">
localhost:5432
</Card>
</CardGroup>
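Once `docker compose up -d` has finished, a quick check that each service answers on the ports from the table above:
```bash
# Main application
curl -sI http://localhost:8080 | head -n 1
# Through the Caddy reverse proxy
curl -sI http://localhost | head -n 1
# Database readiness
docker compose exec postgres pg_isready -U trading_user
```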
## Docker Compose Configuration
The following `docker-compose.yml` defines the Gitea server, its PostgreSQL database, and the Gitea Actions runner that back the CI/CD pipeline:
```yaml docker-compose.yml
services:
server:
image: docker.gitea.com/gitea:latest
container_name: gitea
environment:
- USER_UID=${USER_UID}
- USER_GID=${USER_GID}
- GITEA__database__DB_TYPE=postgres
- GITEA__database__HOST=db:5432
- GITEA__database__NAME=${POSTGRES_USER}
- GITEA__database__USER=${POSTGRES_USER}
- GITEA__database__PASSWD=${POSTGRES_PASSWORD}
restart: always
networks:
- gitea
volumes:
- gitea:/data
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- ${GITEA_HTTP_PORT:-3500}:3000
- ${GITEA_SSH_PORT:-2229}:22
depends_on:
- db
labels:
- diun.enable=true
healthcheck:
test:
- CMD
- curl
- -f
- http://localhost:3000
interval: 10s
retries: 3
start_period: 30s
timeout: 10s
db:
image: docker.io/library/postgres:14
restart: always
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
networks:
- gitea
volumes:
- postgres:/var/lib/postgresql/data
runner:
image: gitea/act_runner:latest
container_name: gitea-runner
restart: always
networks:
- gitea
volumes:
- runner:/data
- /var/run/docker.sock:/var/run/docker.sock
- ./runner-config.yaml:/data/config.yaml:ro
environment:
- GITEA_INSTANCE_URL=http://server:3000
- GITEA_RUNNER_REGISTRATION_TOKEN=${GITEA_RUNNER_REGISTRATION_TOKEN}
- GITEA_RUNNER_NAME=docker-runner
- CONFIG_FILE=/data/config.yaml
command: >
sh -c "
if [ ! -f /data/.runner ]; then
act_runner register --no-interactive --instance http://server:3000 --token $${GITEA_RUNNER_REGISTRATION_TOKEN} --name docker-runner;
fi;
act_runner --config /data/config.yaml daemon
"
depends_on:
- server
labels:
- diun.enable=true
networks:
gitea:
external: false
volumes:
gitea:
postgres:
runner:
```
## Configuration
### Environment Variables
Edit the `.env` file to customize your deployment:
```env .env
# Database Configuration
DB_HOST=postgres
DB_PORT=5432
DB_NAME=mining_wood
DB_USER=trading_user
DB_PASSWORD=your_secure_password
# Flask Configuration
FLASK_SECRET_KEY=your-super-secret-key-change-this
FLASK_ENV=production
# Gitea Configuration
USER_UID=1000
USER_GID=1000
POSTGRES_USER=gitea
POSTGRES_PASSWORD=gitea_password
POSTGRES_DB=gitea
GITEA_HTTP_PORT=3500
GITEA_SSH_PORT=2229
GITEA_RUNNER_REGISTRATION_TOKEN=your_token_here
```
<Warning>
Always change default passwords before deploying to production!
</Warning>
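One quick way to generate strong values for `POSTGRES_PASSWORD` and `FLASK_SECRET_KEY`:
```bash
# Random database password
openssl rand -base64 24
# Random Flask secret key
python3 -c "import secrets; print(secrets.token_hex(32))"
```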
### SSL/HTTPS Setup with Caddy
Caddy provides automatic HTTPS with Let's Encrypt:
<Tabs>
<Tab title="Local Development">
No setup needed - uses HTTP by default
</Tab>
<Tab title="Production with Domain">
```bash
# Edit Caddyfile and replace localhost with your domain
cp Caddyfile.production Caddyfile
# Edit the domain in Caddyfile: your-domain.com
```
Caddy will automatically get and renew SSL certificates!
</Tab>
</Tabs>
## Database Setup
The PostgreSQL database is automatically initialized with:
- **Database**: `mining_wood`
- **Schema**: `trading_analysis`
- **User**: `trading_user`
### Import Your Trading Data
After deployment, import your trading data:
<Steps>
<Step title="Access the database">
```bash
docker compose exec postgres psql -U trading_user -d mining_wood
```
</Step>
<Step title="Import your data">
```bash
# Copy your CSV files to the container
docker cp your-data.csv trading_app:/app/data/
# Run your import script
docker compose exec trading_app python your_import_script.py
```
</Step>
</Steps>
## Management Commands
### View Logs
```bash
# All services
docker compose logs -f
# Specific service
docker compose logs -f trading_app
docker compose logs -f postgres
docker compose logs -f caddy
```
### Restart Services
```bash
# Restart all services
docker compose restart
# Restart specific service
docker compose restart trading_app
```
### Stop/Start
```bash
# Stop all services
docker compose down
# Start services
docker compose up -d
# Stop and remove volumes (⚠️ removes database data)
docker compose down -v
```
### Update Application
```bash
# Pull latest images and restart
docker compose pull
docker compose up -d
```
### Database Backup
```bash
# Backup database
docker compose exec postgres pg_dump -U trading_user mining_wood > backup.sql
# Restore database
docker compose exec -T postgres psql -U trading_user mining_wood < backup.sql
```
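To run the same backup on a schedule rather than ad hoc, a minimal cron entry could look like this (a sketch; it assumes the compose project lives in `/opt/stocks-trading-analysis` and that `/opt/backups/stocks-app` exists):
```bash
# Nightly backup at 02:00, appended to the deploy user's crontab
(crontab -l 2>/dev/null; echo '0 2 * * * cd /opt/stocks-trading-analysis && docker compose exec -T postgres pg_dump -U trading_user mining_wood | gzip > /opt/backups/stocks-app/mining_wood_$(date +\%Y\%m\%d).sql.gz') | crontab -
```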
## Security Considerations
### For Production Deployment
<CardGroup cols={2}>
<Card title="Change Passwords" icon="key">
Update `POSTGRES_PASSWORD` and `FLASK_SECRET_KEY` in `docker-compose.yml` / `.env`
</Card>
<Card title="Enable HTTPS" icon="lock">
Configure SSL certificates and enable HTTPS redirect
</Card>
<Card title="Firewall" icon="shield">
Only expose necessary ports (80, 443). Restrict database access (5432)
</Card>
<Card title="Regular Updates" icon="rotate">
Keep Docker images updated and monitor security advisories
</Card>
</CardGroup>
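For the firewall card, a minimal `ufw` setup on an Ubuntu-style host might look like the sketch below. Note that Docker publishes ports through its own iptables rules, so the most reliable way to restrict PostgreSQL is to avoid publishing port 5432 to the host at all:
```bash
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw deny 5432/tcp   # belt-and-braces; prefer not publishing this port in compose
sudo ufw enable
```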
## Production Deployment
### Domain Setup
<Steps>
<Step title="DNS Configuration">
- Point your domain to your server's IP address
- For performance.miningwood.com: Create an A record pointing to your server IP
</Step>
<Step title="Automatic SSL">
```bash
# Caddy handles SSL automatically with Let's Encrypt
# The domain is already configured for performance.miningwood.com
# Just deploy and Caddy will handle the rest
docker compose up -d
```
</Step>
<Step title="Environment">
- Domain is already set to `performance.miningwood.com` in `.env.docker`
- Set `FLASK_ENV=production`
- Use strong passwords
</Step>
</Steps>
### Monitoring
Consider adding monitoring services:
```yaml docker-compose.yml
# Add to docker-compose.yml
prometheus:
image: prom/prometheus
ports:
- "9090:9090"
grafana:
image: grafana/grafana
ports:
- "3000:3000"
```
## Troubleshooting
<AccordionGroup>
<Accordion title="Application Won't Start">
```bash
# Check logs
docker compose logs trading_app
# Common issues:
# - Database connection failure
# - Missing environment variables
# - Port conflicts
```
</Accordion>
<Accordion title="Database Connection Issues">
```bash
# Check database status
docker compose exec postgres pg_isready -U trading_user
# Reset database
docker compose down -v
docker compose up -d
```
</Accordion>
<Accordion title="Performance Issues">
```bash
# Check resource usage
docker stats
# Scale services
docker compose up -d --scale trading_app=2
```
</Accordion>
<Accordion title="SSL Certificate Issues">
- Ensure DNS is pointing to correct server
- Wait a few minutes for certificate provisioning
- Check Caddy logs: `docker compose logs caddy`
</Accordion>
</AccordionGroup>
## Development Mode
To run in development mode:
```bash
# Use development override
docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
```
This enables:
- Live code reloading
- Debug mode
- Development tools
## Next Steps
<CardGroup cols={2}>
<Card title="Caddy Configuration" icon="server" href="/guides/deployment/caddy">
Learn more about Caddy reverse proxy setup
</Card>
<Card title="CI/CD Setup" icon="rocket" href="/guides/setup/cicd">
Automate deployments with CI/CD
</Card>
</CardGroup>

guides/setup/cicd.mdx
@@ -0,0 +1,283 @@
---
title: 'CI/CD Setup with Gitea'
description: 'Set up continuous integration and deployment using Gitea Actions'
---
## Overview
This guide will help you set up continuous integration and continuous deployment (CI/CD) for your trading analysis application using Gitea Actions.
## Prerequisites
Before starting, ensure you have:
<CardGroup cols={2}>
<Card title="Gitea Server" icon="server">
Running and accessible Gitea instance
</Card>
<Card title="Production Server" icon="cloud">
Docker, Docker Compose, SSH access, and Git installed
</Card>
<Card title="Domain Name" icon="globe">
Domain pointing to your production server
</Card>
<Card title="SSH Keys" icon="key">
SSH key pair for deployment access
</Card>
</CardGroup>
## Step 1: Repository Setup
Push your code to Gitea and enable Actions:
```bash
git remote add origin https://your-gitea-instance.com/your-username/stocks-trading-analysis.git
git push -u origin main
```
<Steps>
<Step title="Enable Gitea Actions">
Go to Repository Settings → Actions and enable Actions for this repository
</Step>
</Steps>
## Step 2: Configure Repository Secrets
Navigate to your repository → Settings → Secrets and add the following secrets:
### Required Secrets
| Secret Name | Description | Example |
|-------------|-------------|---------|
| `SSH_PRIVATE_KEY` | SSH private key for production server access | `-----BEGIN OPENSSH PRIVATE KEY-----\n...` |
| `PRODUCTION_HOST` | Production server IP or hostname | `203.0.113.1` or `server.example.com` |
| `PRODUCTION_USER` | SSH username for production server | `ubuntu`, `root`, or your username |
| `DOMAIN` | Your production domain | `performance.miningwood.com` |
### Application Secrets
| Secret Name | Description | Example |
|-------------|-------------|---------|
| `FLASK_SECRET_KEY` | Flask session secret key | `your-very-secure-secret-key-here` |
| `POSTGRES_PASSWORD` | Production database password | `secure-database-password` |
| `GOOGLE_CLIENT_ID` | OAuth Google Client ID | `123456789.apps.googleusercontent.com` |
| `GOOGLE_CLIENT_SECRET` | OAuth Google Client Secret | `GOCSPX-your-client-secret` |
| `AUTHORIZED_USERS` | Comma-separated authorized emails | `admin@example.com,user@example.com` |
### Optional Notification Secrets
| Secret Name | Description |
|-------------|-------------|
| `SLACK_WEBHOOK_URL` | Slack webhook for notifications |
| `DISCORD_WEBHOOK_URL` | Discord webhook for notifications |
## Step 3: Production Server Setup
### Create Application Directory
```bash
# SSH into your production server
ssh your-user@your-production-server
# Create application directory
sudo mkdir -p /opt/stocks-trading-analysis
sudo chown $USER:$USER /opt/stocks-trading-analysis
cd /opt/stocks-trading-analysis
# Clone the repository
git clone https://your-gitea-instance.com/your-username/stocks-trading-analysis.git .
```
### Configure Environment Variables
```bash
# Copy the production environment template
cp .gitea/deployment/production.env .env
# Edit the environment file with your actual values
nano .env
```
Update the following values in `.env`:
- `POSTGRES_PASSWORD`: Set a secure database password
- `FLASK_SECRET_KEY`: Generate a secure secret key
- `GOOGLE_CLIENT_ID` & `GOOGLE_CLIENT_SECRET`: Your OAuth credentials
- `AUTHORIZED_USERS`: List of authorized email addresses
- `DOMAIN`: Your production domain name
### Initial Deployment
```bash
# Make deployment script executable
chmod +x .gitea/deployment/deploy.sh
# Run initial deployment
./.gitea/deployment/deploy.sh
```
## Step 4: SSH Key Setup
### Generate SSH Key Pair (if needed)
```bash
# On your local machine or CI/CD runner
ssh-keygen -t ed25519 -C "gitea-actions-deployment" -f ~/.ssh/gitea_deploy_key
```
### Add Public Key to Production Server
```bash
# Copy public key to production server
ssh-copy-id -i ~/.ssh/gitea_deploy_key.pub your-user@your-production-server
# Or manually add to authorized_keys
cat ~/.ssh/gitea_deploy_key.pub | ssh your-user@your-production-server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
```
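Before adding the private key to Gitea, confirm it actually authenticates:
```bash
# Should print "ok" without prompting for a password
ssh -i ~/.ssh/gitea_deploy_key your-user@your-production-server 'echo ok'
```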
### Add Private Key to Gitea Secrets
```bash
# Copy private key content
cat ~/.ssh/gitea_deploy_key
# Add this content to the SSH_PRIVATE_KEY secret in Gitea
```
## Step 5: Test the CI/CD Pipeline
### Trigger First Pipeline
<Steps>
<Step title="Make a change">
Make a small change to your code
</Step>
<Step title="Commit and push">
```bash
git add .
git commit -m "Test CI/CD pipeline"
git push origin main
```
</Step>
<Step title="Monitor the pipeline">
Check the Actions tab in your Gitea repository to see the pipeline running
</Step>
</Steps>
### Verify Deployment
<Tabs>
<Tab title="Web Check">
Visit `https://your-domain.com` to verify the application is running
</Tab>
<Tab title="Logs">
SSH to server and run `docker compose logs -f`
</Tab>
<Tab title="Services">
Run `docker compose ps` to check service status
</Tab>
</Tabs>
## Workflow Overview
### Automatic Triggers
- **Push to main/master**: Triggers full CI/CD pipeline with production deployment
- **Push to develop**: Triggers CI/CD pipeline with staging deployment (if configured)
- **Pull requests**: Triggers testing and build validation only
- **Schedule**: Security scans run weekly, cleanup runs weekly
### Manual Triggers
Navigate to the Actions tab in your repository, click "Run workflow" on the desired workflow, select a branch, and run it.
## Monitoring and Maintenance
### Check Application Health
```bash
# SSH to production server
ssh your-user@your-production-server
# Check service status
docker compose ps
# View logs
docker compose logs -f trading_app
# Check resource usage
docker stats
```
### Database Backups
Backups are automatically created during deployments and stored in `/opt/backups/stocks-app/`.
```bash
# Manual backup
docker compose exec postgres pg_dump -U trading_user mining_wood | gzip > backup_$(date +%Y%m%d_%H%M%S).sql.gz
# Restore from backup
gunzip -c backup_file.sql.gz | docker compose exec -T postgres psql -U trading_user mining_wood
```
### SSL Certificate
Caddy automatically handles SSL certificates. Check certificate status:
```bash
# Check certificate
echo | openssl s_client -servername your-domain.com -connect your-domain.com:443 2>/dev/null | openssl x509 -noout -dates
```
## Troubleshooting
<AccordionGroup>
<Accordion title="Pipeline fails at SSH step">
- Verify SSH key is correctly formatted in secrets
- Check server SSH configuration
- Ensure server is accessible from internet
</Accordion>
<Accordion title="Docker build fails">
- Check Dockerfile syntax
- Verify all dependencies in requirements.txt
- Check for file permission issues
</Accordion>
<Accordion title="Application doesn't start">
- Check environment variables in .env
- Verify database is running: `docker compose logs postgres`
- Check application logs: `docker compose logs trading_app`
</Accordion>
<Accordion title="SSL certificate issues">
- Ensure DNS is pointing to correct server
- Wait a few minutes for certificate provisioning
- Check Caddy logs: `docker compose logs caddy`
</Accordion>
</AccordionGroup>
## Security Best Practices
<Warning>
Remember to regularly rotate secrets and monitor deployment logs for suspicious activity.
</Warning>
1. **Regularly rotate secrets** (SSH keys, database passwords)
2. **Monitor deployment logs** for suspicious activity
3. **Keep dependencies updated** (run security scans)
4. **Use strong passwords** for all services
5. **Backup regularly** and test restore procedures
6. **Monitor server resources** and set up alerts
## Customization
You can customize the CI/CD pipeline by modifying files in `.gitea/workflows/`:
- `main.yml`: Main CI/CD pipeline
- `security.yml`: Security scanning
- `cleanup.yml`: Resource cleanup and maintenance
<Note>
Remember to test changes in a staging environment before deploying to production!
</Note>

guides/setup/multi-user.mdx
@@ -0,0 +1,262 @@
---
title: 'Multi-User Support'
description: 'Configure multi-user support with separate brokerage accounts'
---
## Overview
The application supports multiple users, each with their own brokerage account numbers and transaction data. Users authenticate via Google OAuth and can set up their brokerage account number in their profile.
## Database Schema Changes
### New Tables
#### `trading_analysis.users`
Stores user information from OAuth:
| Column | Type | Description |
|--------|------|-------------|
| `id` | Primary Key | User identifier |
| `email` | Unique | User email address |
| `name` | String | User's full name |
| `google_sub` | String | Google OAuth subject ID |
| `picture_url` | String | Profile picture URL |
| `brokerage_account_number` | String | User's primary account |
| `is_active` | Boolean | Account active status |
| `created_at` | Timestamp | Creation date |
| `updated_at` | Timestamp | Last update date |
#### `trading_analysis.brokerage_accounts`
Cross-reference table for account numbers:
| Column | Type | Description |
|--------|------|-------------|
| `id` | Primary Key | Account identifier |
| `account_number` | Unique | Brokerage account number |
| `account_display_name` | String | Optional friendly name |
| `user_id` | Foreign Key | Links to users table |
| `is_primary` | Boolean | Primary account flag |
| `created_at` | Timestamp | Creation date |
| `updated_at` | Timestamp | Last update date |
### Updated Tables
All existing tables have been updated with a `brokerage_account_id` foreign key:
- `raw_transactions`
- `matched_trades`
- `dividend_transactions`
- `monthly_trading_summary`
- `monthly_dividend_summary`
- `monthly_combined_summary`
- `processing_log`
## Migration Process
To migrate an existing database to support multiple users:
### Step 1: Run the Migration Script
```bash
python migrate_to_multiuser.py
```
### Step 2: Set Environment Variables (optional)
```bash
export DEFAULT_MIGRATION_EMAIL="your-admin@example.com"
export DEFAULT_MIGRATION_NAME="Admin User"
export DEFAULT_BROKERAGE_ACCOUNT="YOUR_ACCOUNT_NUMBER"
```
<Info>
The migration script will create default values if these environment variables are not set.
</Info>
### What the Migration Does
<Steps>
<Step title="Create new tables">
Creates `users` and `brokerage_accounts` tables
</Step>
<Step title="Add foreign keys">
Adds `brokerage_account_id` columns to existing tables
</Step>
<Step title="Create default user">
Creates a default user and account for existing data
</Step>
<Step title="Update transactions">
Updates all existing transactions to reference the default account
</Step>
<Step title="Recreate views">
Recreates database views to work with the new schema
</Step>
</Steps>
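After the script finishes, a quick sanity check from `psql` confirms every existing row was linked to the default account (a sketch using the table and user names from this guide):
```bash
# Should report zero unlinked transactions
docker compose exec postgres psql -U trading_user -d mining_wood -c \
  "SELECT COUNT(*) AS unlinked FROM trading_analysis.raw_transactions WHERE brokerage_account_id IS NULL;"
# The default user and account created by the migration
docker compose exec postgres psql -U trading_user -d mining_wood -c \
  "SELECT email, brokerage_account_number FROM trading_analysis.users;"
```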
## Application Changes
### User Profile Management
<CardGroup cols={2}>
<Card title="Profile Page" icon="user">
Users can now set their brokerage account number in their profile
</Card>
<Card title="Account Validation" icon="check">
CSV uploads require a valid brokerage account number
</Card>
<Card title="Multiple Accounts" icon="building-columns">
Users can have multiple brokerage accounts (future feature)
</Card>
<Card title="Data Isolation" icon="lock">
Users only see their own transaction data
</Card>
</CardGroup>
### Upload Process
1. **User Validation**: Checks that user has a brokerage account before allowing uploads
2. **Account Association**: All uploaded transactions are associated with the user's account
3. **Processing**: Modified `trading_analysis.py` to accept `--account-id` parameter
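For reference, a hypothetical manual invocation of the processing script with the new flag (the container name and the rest of the interface are assumptions; check the script's `--help` for the real options):
```bash
# Re-run processing for brokerage account ID 1 (illustrative only)
docker compose exec trading_app python trading_analysis.py --account-id 1
```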
### Authentication Flow
<Steps>
<Step title="Login">
User logs in via Google OAuth
</Step>
<Step title="User Creation">
User record is created/updated in the database
</Step>
<Step title="Set Account">
User sets their brokerage account number in profile
</Step>
<Step title="Link Account">
Brokerage account record is created and linked to user
</Step>
<Step title="Upload Data">
CSV uploads are associated with the user's account
</Step>
</Steps>
## Database Queries
### User-Specific Data
All queries now need to filter by `brokerage_account_id`:
```sql
-- Get user's transactions
SELECT * FROM trading_analysis.raw_transactions rt
JOIN trading_analysis.brokerage_accounts ba ON rt.brokerage_account_id = ba.id
WHERE ba.user_id = ?;
-- Get user's trading performance
SELECT * FROM trading_analysis.v_trading_performance
WHERE user_email = 'user@example.com';
```
### Updated Views
Views now include user context:
- `v_current_positions` - Shows account and user information
- `v_trading_performance` - Includes user email and account number
## Configuration
### Environment Variables
```bash
# Migration Configuration
DEFAULT_MIGRATION_EMAIL=your-admin@example.com
DEFAULT_MIGRATION_NAME=Admin User
DEFAULT_BROKERAGE_ACCOUNT=YOUR_ACCOUNT_NUMBER
# OAuth Configuration (existing)
GOOGLE_CLIENT_ID=your-client-id
GOOGLE_CLIENT_SECRET=your-client-secret
AUTHORIZED_USERS=user1@example.com,user2@example.com
```
## Security Considerations
<Warning>
User data isolation is critical for multi-user environments. Always verify queries filter by the correct account ID.
</Warning>
1. **User Isolation**: Users can only see their own transaction data
2. **Account Validation**: Brokerage account numbers are validated before processing
3. **OAuth Integration**: User authentication is handled by Google OAuth
4. **Data Protection**: User data is isolated by account ID in all database operations
## Future Enhancements
<CardGroup cols={2}>
<Card title="Multiple Accounts" icon="layer-group">
Support for users with multiple brokerage accounts
</Card>
<Card title="Account Sharing" icon="share-nodes">
Allow users to share specific accounts with other users
</Card>
<Card title="Admin Interface" icon="user-shield">
Administrative interface for managing users and accounts
</Card>
<Card title="Data Export" icon="download">
User-specific data export functionality
</Card>
</CardGroup>
## Troubleshooting
<AccordionGroup>
<Accordion title="Migration Fails">
- Ensure database connection is working
- Verify you have proper permissions
- Check for existing foreign key constraints
- Review migration logs for specific errors
</Accordion>
<Accordion title="User Profile Issues">
- Check that OAuth is configured correctly
- Verify user email is in AUTHORIZED_USERS
- Check application logs for authentication errors
</Accordion>
<Accordion title="Upload Failures">
- Verify user has set brokerage account number in profile
- Check CSV format matches expected schema
- Review processing logs in `trading_analysis.log`
</Accordion>
<Accordion title="Data Not Showing">
- Ensure queries are filtering by correct account ID
- Verify user-account association is correct
- Check database views are updated
</Accordion>
</AccordionGroup>
### Database Verification
```sql
-- Check user-account associations
SELECT u.email, u.brokerage_account_number, ba.account_number, ba.is_primary
FROM trading_analysis.users u
LEFT JOIN trading_analysis.brokerage_accounts ba ON u.id = ba.user_id;
-- Check transaction associations
SELECT COUNT(*) as transaction_count, ba.account_number, u.email
FROM trading_analysis.raw_transactions rt
JOIN trading_analysis.brokerage_accounts ba ON rt.brokerage_account_id = ba.id
JOIN trading_analysis.users u ON ba.user_id = u.id
GROUP BY ba.account_number, u.email;
```
## Next Steps
<CardGroup cols={2}>
<Card title="Portfolio Management" icon="chart-line" href="/features/portfolio-management">
Set up portfolio tracking for your account
</Card>
<Card title="CSV Upload" icon="file-csv" href="/features/csv-upload">
Learn how to upload transaction data
</Card>
</CardGroup>

guides/setup/sso.mdx
@@ -0,0 +1,234 @@
---
title: 'SSO Authentication Setup'
description: 'Configure Google OAuth 2.0 authentication for your Trading Analysis Dashboard'
---
## Overview
This guide will help you configure Google OAuth 2.0 authentication for secure access to your Trading Analysis Dashboard.
## Step 1: Create Google OAuth Application
<Steps>
<Step title="Access Google Cloud Console">
Visit [Google Cloud Console](https://console.cloud.google.com/) and sign in with your Google account
</Step>
<Step title="Create a New Project">
- Click "Select a project" → "New Project"
- Name: "Trading Dashboard"
- Click "Create"
</Step>
<Step title="Enable Google+ API">
- Go to "APIs & Services" → "Library"
- Search for "Google+ API" and enable it
- Also enable "Google Identity" if available
</Step>
<Step title="Create OAuth 2.0 Credentials">
- Go to "APIs & Services" → "Credentials"
- Click "Create Credentials" → "OAuth 2.0 Client IDs"
- Choose "Web application"
- Name: "Trading Dashboard Auth"
</Step>
<Step title="Configure Authorized URLs">
Add the following URLs:
**Authorized JavaScript origins:**
- `https://performance.miningwood.com`
- `http://localhost:8080` (for testing)
**Authorized redirect URIs:**
- `https://performance.miningwood.com/auth/callback`
- `http://localhost:8080/auth/callback` (for testing)
</Step>
<Step title="Copy Credentials">
Copy the "Client ID" and "Client Secret" for the next step
</Step>
</Steps>
## Step 2: Configure Environment Variables
Update your `.env.docker` file with the OAuth credentials:
```bash .env.docker
# OAuth Configuration
GOOGLE_CLIENT_ID=your-actual-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-actual-client-secret
# Authorized Users (your email addresses)
AUTHORIZED_USERS=your-email@gmail.com,admin@company.com
```
<Warning>
Never commit your `.env` files to version control. Keep them secure and out of your repository.
</Warning>
## Step 3: Update and Deploy
### Rebuild the application
```bash
docker compose build trading_app
docker compose restart trading_app
```
### Test the authentication
<Steps>
<Step title="Visit your application">
Navigate to `https://performance.miningwood.com`
</Step>
<Step title="Login">
You should be redirected to the login page. Click "Sign in with Google"
</Step>
<Step title="Authorize">
Authorize the application when prompted by Google
</Step>
<Step title="Access granted">
You should be redirected back and logged in successfully
</Step>
</Steps>
## Security Features
<CardGroup cols={2}>
<Card title="OAuth 2.0 with Google" icon="shield-check">
Industry standard authentication protocol
</Card>
<Card title="User Authorization" icon="users">
Only specific email addresses can access
</Card>
<Card title="Session Management" icon="clock">
Secure server-side sessions with expiration
</Card>
<Card title="HTTPS Enforcement" icon="lock">
All authentication over encrypted connections
</Card>
</CardGroup>
## User Management
### Add Users
Add email addresses to `AUTHORIZED_USERS` in `.env.docker`, separated by commas:
```bash
AUTHORIZED_USERS=user1@example.com,user2@example.com,user3@example.com
```
Then restart the application:
```bash
docker compose restart trading_app
```
### Remove Users
Remove email addresses from `AUTHORIZED_USERS` and restart the application.
<Note>
Leave `AUTHORIZED_USERS` empty to allow all users (not recommended for production)
</Note>
## Troubleshooting
<AccordionGroup>
<Accordion title="Authentication failed">
- Check that Client ID and Secret are correct in `.env.docker`
- Verify redirect URLs match exactly in Google Cloud Console
- Ensure Google+ API is enabled
- Check application logs: `docker compose logs trading_app`
</Accordion>
<Accordion title="Access denied">
- Verify your email is in `AUTHORIZED_USERS`
- Ensure email case matches exactly
- Check for extra spaces in the email list
</Accordion>
<Accordion title="Login loop">
- Clear browser cookies for your domain
- Verify Flask secret key is set in `.env.docker`
- Check session configuration in application logs
</Accordion>
<Accordion title="Callback URL mismatch">
Ensure the redirect URIs in Google Cloud Console match your deployment:
- Use `https://` for production
- Include the exact domain and path
- No trailing slashes
</Accordion>
</AccordionGroup>
## Alternative OAuth Providers
You can also configure other OAuth providers:
<Tabs>
<Tab title="GitHub OAuth">
```bash .env.docker
GITHUB_CLIENT_ID=your-github-client-id
GITHUB_CLIENT_SECRET=your-github-client-secret
```
1. Create OAuth App at https://github.com/settings/developers
2. Set Authorization callback URL to `https://your-domain.com/auth/callback`
</Tab>
<Tab title="Microsoft OAuth">
```bash .env.docker
MICROSOFT_CLIENT_ID=your-microsoft-client-id
MICROSOFT_CLIENT_SECRET=your-microsoft-client-secret
```
1. Register app at https://portal.azure.com
2. Add redirect URI in Authentication settings
</Tab>
</Tabs>
<Info>
Contact your administrator if you need help configuring alternative providers.
</Info>
## Testing OAuth Configuration
To test your OAuth setup locally:
```bash
# Start the application locally
docker compose up -d
# Check logs for any OAuth errors
docker compose logs -f trading_app
# Visit localhost
open http://localhost:8080
```
## Security Checklist
- [ ] OAuth credentials are stored in `.env` files, not in code
- [ ] `.env` files are in `.gitignore`
- [ ] `AUTHORIZED_USERS` list is properly configured
- [ ] HTTPS is enabled in production
- [ ] Strong `FLASK_SECRET_KEY` is set
- [ ] Redirect URIs are exact matches in Google Cloud Console
- [ ] Google+ API is enabled
## Next Steps
<CardGroup cols={2}>
<Card title="Multi-User Setup" icon="users" href="/guides/setup/multi-user">
Configure multi-user support with brokerage accounts
</Card>
<Card title="Deployment" icon="rocket" href="/guides/deployment/docker">
Deploy your application to production
</Card>
</CardGroup>