Deployment¶
This guide covers deploying RotaCC to a production server. It assumes you are deploying with Docker Compose, which is the supported production method.
Architecture¶
The production deployment runs four containers via Docker Compose:
```
            +-----------+
            |  Reverse  |
            |   Proxy   |
            | (external)|
            +-----+-----+
                  |
            +-----v-----+
            |    web    |  Gunicorn (4 workers, 120s timeout)
            | port 8000 |  WhiteNoise serves static files
            +-----+-----+
                  |
       +----------+----------+
       |                     |
+------v--------+     +------v------+
|   postgres    |     |    redis    |
| PostgreSQL 15 |     |   Redis 7   |
+------^--------+     +------^------+
       |                     |
+------+--------+     +------+------+
| celery_worker |     |   (shared   |
| Celery + Beat |     |   Redis)    |
+---------------+     +-------------+
```
Services:
| Service | Image | Purpose |
|---|---|---|
| `web` | Built from `Dockerfile` | Django app served by Gunicorn on port 8000 |
| `celery_worker` | Built from `Dockerfile` | Background task processing with Celery Beat scheduler |
| `postgres` | `postgres:15-alpine` | PostgreSQL database |
| `redis` | `redis:7-alpine` | Cache, session store, and Celery broker |
All containers share a Docker network. The web container exposes port 8000. Static files are served by WhiteNoise (no separate static file server required). A reverse proxy (Nginx, Caddy, or a managed load balancer) should sit in front of web to terminate TLS.
Two Compose files are provided:
- `docker-compose.yml` -- full stack including PostgreSQL and Redis. Use this for self-hosted deployments where all services run on one machine.
- `docker-compose.app-only.yml` -- only the `web` and `celery_worker` containers. Use this when PostgreSQL and Redis are managed externally (e.g., managed database services, hosted Redis).
Prerequisites¶
Server Requirements¶
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 1 core | 2+ cores |
| RAM | 2 GB | 4 GB |
| Disk | 20 GB | 40 GB SSD |
| Docker | 20.10+ | Latest |
| Docker Compose | v2.0+ | Latest |
The 120-second Gunicorn timeout accommodates long-running rota generation tasks. Celery workers also need sufficient memory for rota algorithm computation.
External Requirements¶
- DNS: An A record pointing your domain to the server IP
- TLS certificate: Use Let's Encrypt (certbot), a cloud provider's certificate manager, or Caddy's built-in ACME
- Reverse proxy: Nginx, Caddy, or a cloud load balancer to terminate TLS and forward to port 8000
Software on the Server¶
Only Docker Engine (20.10+) and the Docker Compose plugin (v2.0+) need to be installed on the host; the application, database, and broker all run in containers. Git is required to clone and update the repository.
Deployment: Full Stack¶
Use this when running PostgreSQL and Redis on the same server as the application.
1. Clone the repository¶
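A sketch of the clone step. The repository URL is a placeholder (substitute your actual remote); `/opt/rota-cc` is the install path used later in this guide.

```shell
# URL is illustrative -- replace with your actual repository remote
sudo git clone https://github.com/your-org/rota-cc.git /opt/rota-cc
cd /opt/rota-cc
```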
2. Create the environment file¶
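If the repository ships an example environment file (a common convention; the exact filename is an assumption), copy it as a starting point:

```shell
# .env.example is an assumed filename -- check the repository root
cp .env.example .env
```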
Edit .env with your production values. The minimum required changes are shown below -- see the Environment Variables section for the full reference.
# Required: generate a new secret key
SECRET_KEY=<your-generated-secret-key>
# Required: your production domain
ALLOWED_HOSTS=rota.yoursurgery.example.com
CSRF_TRUSTED_ORIGINS=https://rota.yoursurgery.example.com
SITE_URL=https://rota.yoursurgery.example.com
# Required: change the default database password
DB_PASSWORD=<strong-password>
# Required: set up initial admin user
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=admin@yoursurgery.example.com
DJANGO_SUPERUSER_PASSWORD=<strong-password>
# Required for production email
EMAIL_PROVIDER=smtp
EMAIL_HOST=smtp.your-provider.example.com
EMAIL_PORT=587
EMAIL_HOST_USER=your-smtp-user
EMAIL_HOST_PASSWORD=your-smtp-password
EMAIL_USE_TLS=True
DEFAULT_FROM_EMAIL=rota@yoursurgery.example.com
3. Generate a secret key¶
Generate a cryptographically secure secret key:
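Either of the following works; the first needs only Python on the host, the second uses Django's own helper inside the application image:

```shell
# Standard-library option (no Django required on the host)
python3 -c "import secrets; print(secrets.token_urlsafe(64))"

# Django's helper, run inside the image
docker compose run --rm web python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
```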
Paste the output as the SECRET_KEY value in .env.
4. Build and start services¶
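From the repository root, build and start everything in detached mode (the same command the update procedure uses later):

```shell
docker compose up --build -d
```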
This builds the Docker image and starts all four containers in the background. The entrypoint script (docker-entrypoint.sh) handles first-time setup automatically:
- Waits for PostgreSQL to be ready
- Runs database migrations
- Creates the superuser from `DJANGO_SUPERUSER_*` variables (idempotent)
- Collects static files into `/app/staticfiles`
5. Verify services are running¶
# Check all containers are up and healthy
docker compose ps
# Expected output: all services showing "healthy" or "running"
# web running (healthy)
# celery_worker  running (healthy)
# postgres running (healthy)
# redis running (healthy)
# Check the application responds
curl http://localhost:8000/
# Check logs if anything is wrong
docker compose logs web
docker compose logs celery_worker
6. Set up the reverse proxy¶
Point your reverse proxy to localhost:8000. Example Nginx configuration:
```nginx
server {
    listen 443 ssl http2;
    server_name rota.yoursurgery.example.com;

    ssl_certificate /etc/letsencrypt/live/rota.yoursurgery.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rota.yoursurgery.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 120s;
    }
}

server {
    listen 80;
    server_name rota.yoursurgery.example.com;
    return 301 https://$host$request_uri;
}
```
The `proxy_read_timeout 120s` directive matches the Gunicorn timeout so long-running rota generation requests do not produce premature 504 errors.
Deployment: App Only (External Database)¶
If PostgreSQL and Redis are managed externally (e.g., AWS RDS, Azure Database, Upstash), use the app-only Compose file:
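Point Compose at that file explicitly when starting the stack:

```shell
docker compose -f docker-compose.app-only.yml up --build -d
```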
This file uses ${VARIABLE:?} syntax for critical variables, meaning Docker will refuse to start if any required variable is missing from .env.
Configure these additional variables to point to your external services:
DB_HOST=your-postgres-host.example.com
DB_PORT=5432
DB_USER=rota_user
DB_PASSWORD=<external-db-password>
REDIS_HOST=your-redis-host.example.com
CELERY_BROKER_URL=redis://your-redis-host.example.com:6379/0
CELERY_RESULT_BACKEND=redis://your-redis-host.example.com:6379/0
Environment Variables¶
The getting-started guide covers the full variable reference. This section focuses on production-specific concerns.
SECRET_KEY¶
Required in production. The application will refuse to start with the insecure fallback key.
Never commit the secret key to version control. If it is compromised, rotate it immediately -- this invalidates all active sessions.
DEBUG¶
Must be False in production. The production settings module (rota/settings/prod.py) enforces this explicitly, so even if .env says True, production will use False.
ALLOWED_HOSTS and CSRF_TRUSTED_ORIGINS¶
Set these to your production domain(s):
ALLOWED_HOSTS=rota.yoursurgery.example.com
CSRF_TRUSTED_ORIGINS=https://rota.yoursurgery.example.com
Both accept comma-separated values for multiple domains. CSRF_TRUSTED_ORIGINS must include the scheme (https://).
Database Credentials¶
For the full-stack Compose file, these must match between the PostgreSQL container environment and the Django application:
DB_ENGINE=django.db.backends.postgresql
DB_NAME=rota_db
DB_USER=rota_user
DB_PASSWORD=<strong-password>
DB_HOST=postgres # Docker service name
DB_PORT=5432
The PostgreSQL container creates the database and user on first start using the POSTGRES_DB, POSTGRES_USER, and POSTGRES_PASSWORD variables, which the Compose file maps from DB_NAME, DB_USER, and DB_PASSWORD.
Redis¶
For the full-stack Compose file, use the Docker service name:
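For example, mirroring the variable names used elsewhere in this guide:

```
REDIS_HOST=redis
CELERY_BROKER_URL=redis://redis:6379/0
CELERY_RESULT_BACKEND=redis://redis:6379/0
```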
For authenticated Redis providers (e.g., Redis Cloud, Upstash), add:
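For example (the `REDIS_PASSWORD` variable name is an assumption; the password-in-URL form is standard `redis://` syntax):

```
REDIS_PASSWORD=<redis-password>
CELERY_BROKER_URL=redis://:<redis-password>@your-redis-host.example.com:6379/0
CELERY_RESULT_BACKEND=redis://:<redis-password>@your-redis-host.example.com:6379/0
```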
Email Provider¶
The system sends emails for account verification, leave notifications, and rota updates. Choose a provider:
For SMTP (most common):
EMAIL_HOST=smtp.your-provider.example.com
EMAIL_PORT=587
EMAIL_HOST_USER=your-smtp-user
EMAIL_HOST_PASSWORD=your-smtp-password
EMAIL_USE_TLS=True
DEFAULT_FROM_EMAIL=rota@yoursurgery.example.com
For transactional email services, set the appropriate API key variable (MAILGUN_API_KEY, SENDGRID_API_KEY, POSTMARK_SERVER_TOKEN, or RESEND_API_KEY).
DEPLOYMENT_ENV¶
Controls an environment banner shown to users:
DEPLOYMENT_ENV=PROD # No banner shown (default)
DEPLOYMENT_ENV=STAGING # Yellow "STAGING" banner
DEPLOYMENT_ENV=DEV # Red "DEV" banner
Set this to STAGING or DEV on non-production environments to prevent accidental data changes by users who think they are in production.
Superuser Auto-Creation¶
The entrypoint script runs manage.py ensure_superuser on every start. This command creates the superuser if one does not already exist with the given username. It is safe to leave these variables set -- running it again on an existing superuser is a no-op.
DJANGO_SUPERUSER_USERNAME=admin
DJANGO_SUPERUSER_EMAIL=admin@yoursurgery.example.com
DJANGO_SUPERUSER_PASSWORD=<strong-password>
To skip auto-creation entirely, leave all three empty.
Production Settings¶
The production settings module (rota/settings/prod.py) extends base.py with these overrides:
Security¶
| Setting | Value | Purpose |
|---|---|---|
| `DEBUG` | `False` | Disables debug mode (enforced, not configurable) |
| `CSRF_COOKIE_SECURE` | `True` | CSRF cookies only sent over HTTPS |
| `SESSION_COOKIE_SECURE` | `True` | Session cookies only sent over HTTPS |
| `SECURE_SSL_REDIRECT` | `True` (configurable) | Redirects HTTP to HTTPS |
| `SECURE_HSTS_SECONDS` | `31536000` (1 year, configurable) | HTTP Strict Transport Security |
| `SECURE_HSTS_INCLUDE_SUBDOMAINS` | `True` | HSTS applies to subdomains |
| `SECURE_HSTS_PRELOAD` | `True` | Allows browser HSTS preloading |
| `SECURE_CONTENT_TYPE_NOSNIFF` | `True` | Prevents MIME-type sniffing |
| `SECURE_BROWSER_XSS_FILTER` | `True` | Browser XSS filter |
| `USE_X_FORWARDED_HOST` | `True` | Trust X-Forwarded-Host from reverse proxy |
| `SECURE_PROXY_SSL_HEADER` | `('HTTP_X_FORWARDED_PROTO', 'https')` | Detects HTTPS from proxy |
These settings assume the application sits behind a reverse proxy that terminates TLS. SECURE_SSL_REDIRECT can be disabled via the environment variable of the same name if the HTTP-to-HTTPS redirect is handled at a different layer.
Cache¶
Production uses Redis via django-redis instead of the in-memory cache used in development:
- Backend: `django_redis.cache.RedisCache`
- Key prefix: `rota` (avoids collisions if Redis is shared)
- Default timeout: 300 seconds (5 minutes)
Sessions are stored in the same Redis instance using django.contrib.sessions.backends.cache.
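A settings sketch consistent with the above (illustrative; the project's actual prod.py may differ in detail, and the Redis database index is an assumption):

```python
# Cache and session configuration sketch -- values mirror the bullets above.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://redis:6379/1",  # database index is an assumption
        "KEY_PREFIX": "rota",
        "TIMEOUT": 300,  # seconds (5 minutes)
    }
}
# Sessions ride on the same cache backend
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
```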
Static Files¶
Static files are served by WhiteNoise middleware, configured in base.py. The collectstatic command copies files to /app/staticfiles, which is a named Docker volume. No separate static file server is required.
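A typical WhiteNoise arrangement in a Django settings module looks like this (a sketch; the project's base.py may differ):

```python
# WhiteNoise sits directly after SecurityMiddleware, ahead of everything else.
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    # ... remaining middleware
]
STATIC_ROOT = "/app/staticfiles"  # collectstatic target, the named volume's mount point
# Compressed, cache-busted storage is the usual WhiteNoise pairing:
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```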
Logging¶
Production configures structured logging to stdout/stderr (Docker captures these):
| Logger | Level | Purpose |
|---|---|---|
| Root | `INFO` | Catch-all |
| `django` | `INFO` (configurable via `DJANGO_LOG_LEVEL`) | Django framework |
| `rota_generation` | `INFO` | Rota algorithm and generation tasks |
| `celery` | `WARNING` | Celery worker messages |
To increase verbosity for debugging, set DJANGO_LOG_LEVEL=DEBUG in .env and restart the web container.
Initial Setup¶
The Docker entrypoint handles most first-time setup automatically. After the containers are running and healthy, complete these manual steps.
1. Verify the superuser was created¶
docker compose exec web uv run python manage.py shell -c \
"from django.contrib.auth.models import User; print(User.objects.filter(is_superuser=True).count(), 'superusers exist')"
2. Configure SystemConfiguration¶
Log in to the admin interface at https://your-domain/django-admin/ using the superuser credentials. Navigate to Configuration > System Config and configure:
- Practice name and contact details
- Rota generation parameters
- Notification preferences
The system requires at least one SystemConfiguration record to function correctly.
3. Create clinician users¶
Create clinicians either through the admin interface or with the project's interactive management command, which prompts for each clinician's details.
4. Verify email delivery¶
Send a test email using the Django shell to confirm the email provider is configured correctly:
docker compose exec web uv run python manage.py shell -c \
"from django.core.mail import send_mail; send_mail('Test', 'RotaCC test email', 'noreply@rota.system', ['admin@yoursurgery.example.com'])"
5. Verify Celery is processing tasks¶
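Using the same inspection command shown later in the Monitoring section:

```shell
docker compose exec celery_worker uv run celery -A rota inspect active
```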
This should return an empty dict {} (no tasks currently running) rather than an error.
Updating a Running Deployment¶
Standard update¶
cd /opt/rota-cc
# 1. Pull the latest code
git pull origin main
# 2. Rebuild the Docker image and restart containers
docker compose up --build -d
# 3. The entrypoint automatically runs migrations and collects static files
# Watch the logs to confirm:
docker compose logs -f web
The entrypoint script runs migrate and collectstatic on every start, so these steps are handled automatically.
Manual update (if needed)¶
If you need to run steps manually (e.g., to inspect migration plans before applying):
cd /opt/rota-cc
git pull origin main
docker compose build
# Check what migrations will run
docker compose run --rm web python manage.py showmigrations
# Apply migrations
docker compose run --rm web python manage.py migrate
# Collect static files
docker compose run --rm web python manage.py collectstatic --noinput
# Restart services with the new code
docker compose up -d
Rolling back¶
If an update causes problems:
# 1. Find the previous working commit
git log --oneline -10
# 2. Check out the previous commit
git checkout <commit-hash>
# 3. Rebuild and restart
docker compose up --build -d
# 4. If migrations were applied, roll them back:
docker compose run --rm web python manage.py migrate <app_name> <previous_migration_number>
Always back up the database before running backward migrations. See Backups for backup procedures.
Monitoring¶
Container health¶
Each container has a health check configured in the Compose file:
# View health status of all containers
docker compose ps
# View details for a specific container
# docker inspect needs container IDs; resolve them via docker compose ps -q
docker inspect --format='{{.State.Health.Status}}' $(docker compose ps -q web)
docker inspect --format='{{.State.Health.Status}}' $(docker compose ps -q celery_worker)
docker inspect --format='{{.State.Health.Status}}' $(docker compose ps -q postgres)
docker inspect --format='{{.State.Health.Status}}' $(docker compose ps -q redis)
Health check configuration:
| Container | Check | Interval | Timeout | Retries |
|---|---|---|---|---|
| `web` | `curl -f http://localhost:8000/` | 30s | 5s | 3 |
| `celery_worker` | `python manage.py run_celery --help` | 30s | 10s | 3 |
| `postgres` | `pg_isready -U <db_user>` | 5s | 5s | 5 |
| `redis` | `redis-cli ping` | 30s | 10s | 3 |
Celery worker status¶
# Check active tasks
docker compose exec celery_worker uv run celery -A rota inspect active
# Check registered tasks
docker compose exec celery_worker uv run celery -A rota inspect registered
# Check worker stats (uptime, pool size, etc.)
docker compose exec celery_worker uv run celery -A rota inspect stats
Logs¶
All services log to stdout/stderr, captured by Docker:
# Follow all logs
docker compose logs -f
# Follow a specific service
docker compose logs -f web
docker compose logs -f celery_worker
docker compose logs -f postgres
# Last 100 lines from web
docker compose logs --tail 100 web
# Logs since a specific time
docker compose logs --since 1h web
Scheduled tasks¶
Celery Beat runs alongside the worker (the --beat flag in the Compose command). Two scheduled tasks are configured:
| Task | Schedule | Purpose |
|---|---|---|
| `daily-pg-dump-backup` | 03:00 UTC daily | PostgreSQL dump backup |
| `cleanup-old-backups` | 04:00 UTC daily | Remove backups past retention period |
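In Celery, a schedule like this is typically declared via `app.conf.beat_schedule`; a sketch consistent with the schedules above (the task module paths are hypothetical, not taken from the project):

```python
from celery import Celery
from celery.schedules import crontab

app = Celery("rota")

app.conf.beat_schedule = {
    "daily-pg-dump-backup": {
        "task": "rota.tasks.pg_dump_backup",        # hypothetical task path
        "schedule": crontab(hour=3, minute=0),      # 03:00 UTC daily
    },
    "cleanup-old-backups": {
        "task": "rota.tasks.cleanup_old_backups",   # hypothetical task path
        "schedule": crontab(hour=4, minute=0),      # 04:00 UTC daily
    },
}
```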
Volume usage¶
# Check volume disk usage
docker system df -v
# Check specific volume mount points
docker compose exec web df -h /app/staticfiles
docker compose exec web df -h /app/backups
Named volumes:
| Volume | Mount point | Purpose |
|---|---|---|
| `pgdata` | `/var/lib/postgresql/data` | PostgreSQL data |
| `static_data` | `/app/staticfiles` | Collected static files |
| `media_data` | `/app/media` | User-uploaded files |
| `backup_data` | `/app/backups` | Database backups (shared between `web` and `celery_worker`) |
Troubleshooting¶
Container fails to start¶
Check the logs for the specific container:
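For example:

```shell
docker compose logs --tail 100 web
docker compose logs --tail 100 celery_worker
```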
Common causes:
- Missing or invalid .env file
- Database migration failure
- Port 8000 already in use on the host
Database connection errors¶
# Check PostgreSQL is healthy
docker compose exec postgres pg_isready -U rota_user
# Verify the database exists
docker compose exec postgres psql -U rota_user -d rota_db -c "SELECT 1;"
# Check the connection from the web container
docker compose exec web python manage.py dbshell -- -c "SELECT 1;"
If using the app-only Compose file, verify DB_HOST, DB_PORT, DB_USER, and DB_PASSWORD match your external database credentials.
Celery tasks not running¶
# Check the worker is running
docker compose exec celery_worker uv run celery -A rota inspect active
# Check Redis connectivity from the worker
docker compose exec celery_worker python -c \
"import redis; r=redis.Redis(host='redis',port=6379); print(r.ping())"
Common causes:
- Redis is not running or unreachable
- The Celery worker crashed (check docker compose logs celery_worker)
- Tasks are stuck in the queue (check with inspect reserved)
Static files not loading¶
The entrypoint runs collectstatic on every start. If static files are missing:
# Re-run manually
docker compose exec web python manage.py collectstatic --noinput
# Verify the files exist
docker compose exec web ls -la /app/staticfiles/
If using a reverse proxy, ensure it is not intercepting /static/ requests. WhiteNoise handles these directly through the Django application.
502 / 504 errors from reverse proxy¶
- 502 Bad Gateway: the `web` container is not running or not listening on port 8000. Check `docker compose ps web`.
- 504 Gateway Timeout: a request took longer than the reverse proxy's timeout. Rota generation can take over 60 seconds; set `proxy_read_timeout 120s` (or equivalent) in the reverse proxy to match the Gunicorn timeout.
Email not sending¶
# Test SMTP connectivity from the web container
docker compose exec web python -c "
import smtplib
s = smtplib.SMTP('smtp.your-provider.example.com', 587)
s.starttls()
s.login('your-user', 'your-password')
print('SMTP connection successful')
s.quit()
"
Common causes:
- Incorrect EMAIL_HOST or EMAIL_PORT
- EMAIL_USE_TLS not set to True when required
- SMTP credentials are wrong
- Firewall blocking outbound port 587
Permission errors in containers¶
The Dockerfile creates an appuser (UID 1000). The entrypoint script runs as root initially to fix volume permissions, then switches to appuser. If you see permission denied errors:
# Check the volume permissions
docker compose exec web ls -la /app/staticfiles /app/media /app/backups
# Fix ownership manually if needed
docker compose exec web chown -R appuser:appuser /app/staticfiles /app/media
Resetting the database¶
To start fresh (destroys all data):
docker compose down -v # Stops containers and removes volumes
docker compose up --build -d # Rebuilds and starts fresh
The -v flag removes the named volumes including pgdata, static_data, media_data, and backup_data.