Database Configuration
PostgreSQL Setup
Automagik Spark requires PostgreSQL 12 or higher for production use.

Connection String Format
- postgresql+asyncpg:// - required protocol and driver
- username - PostgreSQL username
- password - PostgreSQL password (URL-encoded if it contains special characters)
- host - database server hostname or IP
- port - PostgreSQL port (default: 5432)
- database - database name
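For example (credentials, host, and database name are illustrative, and the environment variable name is an assumption; confirm it against your .env template):

```bash
# Illustrative values only -- note the URL-encoded "@" (%40) in the password.
# The variable name is an assumption; check the project's .env template.
AUTOMAGIK_SPARK_DATABASE_URL=postgresql+asyncpg://spark_user:s3cret%40pass@localhost:5432/automagik_spark
```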
Create Database
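The exact commands depend on your environment; a typical sequence with psql looks like this (role and database names are illustrative):

```bash
# Create a dedicated role and database (names are illustrative)
sudo -u postgres psql -c "CREATE USER spark_user WITH PASSWORD 'change-me';"
sudo -u postgres psql -c "CREATE DATABASE automagik_spark OWNER spark_user;"
```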
Run Migrations
After configuring the database URL, initialize the database schema with the migration command shown below.
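For example, assuming the automagik-spark CLI is on your PATH:

```bash
# Apply all pending migrations to the configured database
automagik-spark db upgrade
```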
Migration Commands:
- automagik-spark db upgrade - apply all pending migrations
- automagik-spark db downgrade - roll back the last migration
- automagik-spark db current - show the current migration version
Connection Pooling
For production deployments, configure PostgreSQL connection pooling:
- pool_size=20 - base connection pool size
- max_overflow=10 - additional connections allowed during peak load
- Adjust both values based on worker concurrency and expected load
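If Spark exposes these settings through environment variables, a sketch might look like the following; the variable names are hypothetical, so confirm them against your .env template (the values map onto SQLAlchemy's pool_size and max_overflow engine arguments):

```bash
# Hypothetical variable names -- confirm against your .env template.
AUTOMAGIK_SPARK_POOL_SIZE=20       # base connection pool size
AUTOMAGIK_SPARK_MAX_OVERFLOW=10    # extra connections during peak load
```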
Backup Strategy
Automated Backups: Create a daily backup script such as the sketch below. Always test your backup restoration process before you need it in production!
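A minimal daily pg_dump sketch; database name, role, paths, and retention are illustrative, and it assumes peer authentication or a configured .pgpass:

```bash
#!/usr/bin/env bash
# Daily PostgreSQL backup sketch -- adjust names, paths, and retention.
set -euo pipefail

BACKUP_DIR=/var/backups/automagik-spark
DB_NAME=automagik_spark
DB_USER=spark_user
STAMP=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"
pg_dump -U "$DB_USER" -d "$DB_NAME" -F c -f "$BACKUP_DIR/${DB_NAME}_${STAMP}.dump"

# Keep the last 7 days of backups
find "$BACKUP_DIR" -name "${DB_NAME}_*.dump" -mtime +7 -delete
```

Schedule it with cron (for example daily at 02:00) and periodically verify restores with pg_restore.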
Redis Configuration
Redis Setup
Redis is used for the Celery task queue and result storage.

Connection String Format
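The troubleshooting section below gives the broker URL format as redis://:password@host:port/db; for example (the variable name is an assumption):

```bash
# Illustrative value -- the variable name is an assumption; check your .env template.
AUTOMAGIK_SPARK_CELERY_BROKER_URL=redis://:your_password@localhost:6379/0
```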
Redis Database Numbers
Use different database numbers for organization, as in the sketch below.
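One common layout, shown with hypothetical variable names (these database assignments are a convention, not documented Spark defaults):

```bash
# Hypothetical variable names and db assignments -- adjust to your deployment.
AUTOMAGIK_SPARK_CELERY_BROKER_URL=redis://localhost:6379/0      # broker on db 0
AUTOMAGIK_SPARK_CELERY_RESULT_BACKEND=redis://localhost:6379/1  # results on db 1
```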
Memory Management

Configure Redis memory limits in redis.conf, for example:
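A sketch; the 512mb limit is illustrative, and noeviction is chosen so queued tasks are never silently evicted:

```bash
# Append memory settings to redis.conf -- values are illustrative.
sudo tee -a /etc/redis/redis.conf > /dev/null <<'EOF'
maxmemory 512mb
maxmemory-policy noeviction
EOF
sudo systemctl restart redis
```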
Persistence
For production, enable Redis persistence (append-only file), for example:
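A sketch enabling the append-only file (the fsync policy is illustrative):

```bash
# Enable append-only persistence in redis.conf
sudo tee -a /etc/redis/redis.conf > /dev/null <<'EOF'
appendonly yes
appendfsync everysec
EOF
sudo systemctl restart redis
```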
Health Check

Verify Redis connectivity:
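For example:

```bash
# Should print PONG; add -a <password> if requirepass is enabled
redis-cli -h localhost -p 6379 ping
```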
Celery Configuration

Worker Configuration
Concurrency
Set worker concurrency based on workload, as sketched below.
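The troubleshooting section below references CELERY_WORKER_CONCURRENCY; for example:

```bash
# Number of worker processes -- start near the number of CPU cores and tune from there.
CELERY_WORKER_CONCURRENCY=4
```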
Task Queues

Spark uses three priority queues:

| Queue | Priority | Use Case |
|---|---|---|
| high_priority | High | Urgent tasks, manual runs |
| default | Normal | Scheduled workflows |
| low_priority | Low | Background jobs, cleanup |
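To start a worker that consumes all three queues explicitly, a sketch (the Celery app path is an assumption; match it to the project's celery_app.py):

```bash
# The app path (automagik_spark) is an assumption.
celery -A automagik_spark worker -Q high_priority,default,low_priority --loglevel=INFO
```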
Retry Policies
The built-in retry configuration is defined in celery_app.py; review that file for the current retry settings.
Task Timeouts
Tasks exceeding task_time_limit are killed. Increase it for long-running workflows by modifying celery_app.py.

Worker Memory Limits
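Celery can recycle worker children before they grow too large; a sketch using standard Celery flags (values and the app path are illustrative):

```bash
# Recycle each child after 100 tasks and restart any child exceeding ~200 MB (value in KiB).
# The app path (automagik_spark) is an assumption.
celery -A automagik_spark worker --max-tasks-per-child=100 --max-memory-per-child=200000
```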
Beat Scheduler Configuration
Celery Beat monitors schedules and creates tasks.

Database Scheduler
Spark uses a database-backed scheduler (no file-based schedule): Beat reads schedules from the database and creates tasks as they come due.

Monitor Beat Logs
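The commands referenced in the troubleshooting section below apply here:

```bash
# Confirm Beat is running and watch DatabaseScheduler activity in the worker log
ps aux | grep celery | grep beat
grep "DatabaseScheduler" logs/worker.log
tail -f logs/worker.log
```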
Security Configuration
API Key Generation
Generate secure API keys and set them in .env, for example:
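One way to generate a random key (openssl is illustrative; any cryptographically secure generator works), then set it under the variable named in the security checklist:

```bash
# Generate a random 32-byte hex key
openssl rand -hex 32

# .env
AUTOMAGIK_SPARK_API_KEY=<paste-generated-key-here>
```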
Encryption Key Setup
Generate a Fernet encryption key for sensitive data using Python, then set it in .env:
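As the troubleshooting section notes, the key must be 32 url-safe base64-encoded bytes (44 characters, usually ending in =). With the cryptography package installed:

```bash
# Generate a Fernet key and place it in .env
python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())"

# .env
AUTOMAGIK_SPARK_ENCRYPTION_KEY=<generated-key>
```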
Encryption Key Best Practices:
- Generate unique key per environment (dev, staging, production)
- Never commit keys to version control
- Store production keys in secrets manager (AWS Secrets Manager, HashiCorp Vault, Azure Key Vault)
- Back up encryption keys securely (separate from database backups)
- Rotate keys periodically (requires data re-encryption)
- Document key recovery procedures
CORS Configuration
Restrict CORS origins in production, for example:
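The variable name below is hypothetical (confirm against your .env template); the point is to replace any wildcard with an explicit origin list:

```bash
# Hypothetical variable name -- avoid "*" in production.
AUTOMAGIK_SPARK_API_CORS=https://app.example.com,https://admin.example.com
```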
Production Security Checklist

Pre-Deployment Security Checks
Essential Security Steps:
- Generate a unique AUTOMAGIK_SPARK_API_KEY
- Generate a unique AUTOMAGIK_SPARK_ENCRYPTION_KEY
- Change the default PostgreSQL password
- Enable Redis password authentication
- Configure restrictive CORS origins
- Set ENVIRONMENT=production
- Use HTTPS for AUTOMAGIK_SPARK_REMOTE_URL
- Enable PostgreSQL SSL connections
- Configure firewall rules (allow only necessary ports)
- Set up log rotation
- Enable Redis persistence (appendonly)
- Configure database backups
- Use environment-specific .env files (not committed to git)
- Audit file permissions on log directories
- Review worker concurrency and memory limits
Network Security
Firewall Rules:
Configure UFW (Ubuntu firewall) so that only the ports each service needs are reachable; a sketch follows the table below.
| Service | Port | Access Level |
|---|---|---|
| PostgreSQL | 5432 | Internal network only |
| Redis | 6379 | Internal network only |
| Spark API | 8883 | Public or internal (depends on use case) |
| LangFlow | 7860 | Internal network only |
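A UFW sketch matching the table (drop the Spark API rule if it should be internal-only):

```bash
# Default deny, then open only what is needed -- adjust to your topology.
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw allow 8883/tcp   # Spark API (public or internal, per the table above)
# PostgreSQL (5432), Redis (6379), and LangFlow (7860) stay internal: no public rules.
sudo ufw enable
```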
Reverse Proxy with Nginx:

Use nginx to:
- Terminate SSL/TLS
- Add security headers
- Rate limiting
- Request filtering
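A minimal sketch covering SSL termination and proxying to the API port from the table above (server name, certificate paths, and file locations are illustrative; rate limiting and security headers would also be configured here):

```bash
# Paths, server_name, and certificates are illustrative.
sudo tee /etc/nginx/sites-available/spark > /dev/null <<'EOF'
server {
    listen 443 ssl;
    server_name spark.example.com;

    ssl_certificate     /etc/ssl/certs/spark.crt;
    ssl_certificate_key /etc/ssl/private/spark.key;

    location / {
        proxy_pass http://127.0.0.1:8883;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/spark /etc/nginx/sites-enabled/spark
sudo nginx -t && sudo systemctl reload nginx
```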
Access Control
Database Security: use a dedicated, least-privilege PostgreSQL role and enable SSL connections.

Redis Security: enable requirepass and bind Redis to internal interfaces only.

Sketches for both follow the best-practices list below.

Best Practices:
- Use strong PostgreSQL passwords (20+ characters)
- Enable Redis requirepass
- Rotate API keys regularly (quarterly recommended)
- Implement API rate limiting (coming in a future version)
- Monitor failed authentication attempts
- Use network segmentation (VPC/subnets)
- Implement least privilege principle
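Sketches for both; role name, password placeholders, and config paths are illustrative (the postgresql.conf location shown is the Debian/Ubuntu default):

```bash
# PostgreSQL: strong password for the application role, and SSL enabled
sudo -u postgres psql -c "ALTER USER spark_user WITH PASSWORD 'use-a-20-plus-character-password';"
sudo sed -i 's/^#\?ssl = .*/ssl = on/' /etc/postgresql/*/main/postgresql.conf
sudo systemctl restart postgresql

# Redis: require a password and listen only on internal interfaces
sudo tee -a /etc/redis/redis.conf > /dev/null <<'EOF'
requirepass use-a-strong-redis-password
bind 127.0.0.1
EOF
sudo systemctl restart redis
```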
Logging Configuration
Log Levels
Choose an appropriate log level for each environment:

| Level | Use Case | Output Volume | Performance Impact |
|---|---|---|---|
| DEBUG | Development, troubleshooting | Very High | Medium |
| INFO | Production monitoring | Medium | Low |
| WARNING | Production (quiet) | Low | Very Low |
| ERROR | Critical issues only | Very Low | Minimal |
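The variable name below is hypothetical (confirm against your .env template); the point is simply to pick one level per environment:

```bash
# Hypothetical variable name -- confirm against your .env template.
AUTOMAGIK_SPARK_LOG_LEVEL=INFO   # DEBUG in development, WARNING for quiet production
```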
Log File Locations
Development: logs are written to the local logs/ directory (e.g. logs/worker.log, as referenced in the troubleshooting section below).

Log Rotation
Use logrotate for production log management, for example:
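A sketch assuming logs live under /var/log/automagik-spark (path, schedule, and retention are illustrative):

```bash
# Logrotate policy sketch -- the log path is an assumption.
sudo tee /etc/logrotate.d/automagik-spark > /dev/null <<'EOF'
/var/log/automagik-spark/*.log {
    daily
    rotate 14
    compress
    missingok
    notifempty
    copytruncate
}
EOF
```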
Centralized Logging
For production, send logs to a centralized system, for example via rsyslog:
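A minimal rsyslog forwarding sketch; the watched log path and the central host are assumptions:

```bash
# Forward Spark logs to a central syslog host -- hostname and paths are illustrative.
sudo tee /etc/rsyslog.d/30-automagik-spark.conf > /dev/null <<'EOF'
module(load="imfile")
input(type="imfile" File="/var/log/automagik-spark/worker.log" Tag="automagik-spark:")
*.* @@logs.example.com:514
EOF
sudo systemctl restart rsyslog
```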
Complete Configuration Examples

Development .env File
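A development sketch: ENVIRONMENT, AUTOMAGIK_TIMEZONE, AUTOMAGIK_SPARK_API_KEY, AUTOMAGIK_SPARK_ENCRYPTION_KEY, AUTOMAGIK_SPARK_REMOTE_URL, and CELERY_WORKER_CONCURRENCY appear elsewhere in this guide; the database and broker variable names are assumptions, so confirm every name against the project's .env template:

```bash
# Development sketch -- confirm all names and values against the project's .env template.
ENVIRONMENT=development
AUTOMAGIK_TIMEZONE=UTC
AUTOMAGIK_SPARK_API_KEY=dev-only-api-key
AUTOMAGIK_SPARK_ENCRYPTION_KEY=<fernet-key-from-the-step-above>
AUTOMAGIK_SPARK_REMOTE_URL=http://localhost:8883
CELERY_WORKER_CONCURRENCY=4
# The two names below are assumptions:
AUTOMAGIK_SPARK_DATABASE_URL=postgresql+asyncpg://spark_user:password@localhost:5432/automagik_spark
AUTOMAGIK_SPARK_CELERY_BROKER_URL=redis://localhost:6379/0
```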
Production .env File
Docker Compose .env File
For Docker Compose deployments, add container-specific variables, for example:
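A sketch assuming the Compose services are named postgres and redis (service and variable names are assumptions):

```bash
# Inside containers, point at Compose service names rather than localhost.
AUTOMAGIK_SPARK_DATABASE_URL=postgresql+asyncpg://spark_user:password@postgres:5432/automagik_spark
AUTOMAGIK_SPARK_CELERY_BROKER_URL=redis://redis:6379/0
```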
Configuration Validation

Verify Configuration
After setting up your .env file, verify the configuration:
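Two quick checks built from commands used elsewhere in this guide:

```bash
# Confirm the database is reachable and migrated
automagik-spark db current

# Confirm the broker is reachable (should print PONG)
redis-cli -h localhost -p 6379 ping
```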
Health Check Commands
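Useful once the services are up; the /health path is an assumption, while 8883 is the API port from the network table above:

```bash
# API health endpoint -- the /health path is an assumption.
curl -s http://localhost:8883/health

# Worker and Beat processes
ps aux | grep celery
```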
Troubleshooting Configuration Issues
Database Connection Failed
Error: Connection refused or Could not connect to database

Diagnostic Steps and Solutions:
- Verify PostgreSQL is running: sudo systemctl start postgresql
- Check that the database exists: psql -U postgres -l
- Verify the connection string format: postgresql+asyncpg://user:pass@host:port/db
- Check that firewall rules allow port 5432/5402
- Ensure the user has correct permissions (see the grant example below)
- Check PostgreSQL logs: sudo tail -f /var/log/postgresql/postgresql-*.log
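For the permissions step, a typical grant looks like this (role and database names are illustrative):

```bash
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE automagik_spark TO spark_user;"
```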
Redis Connection Failed
Error: Error connecting to Redis or Connection refused

Diagnostic Steps and Solutions:
- Verify Redis is running: sudo systemctl start redis
- Check the Redis port: netstat -tuln | grep 6379
- Verify the password if one is set: redis-cli -a your_password ping
- Check the broker URL format: redis://:password@host:port/db
- Ensure Redis accepts connections: check bind in /etc/redis/redis.conf
- Check Redis logs: sudo tail -f /var/log/redis/redis-server.log
Worker Not Processing Tasks
Error: Tasks stuck in queue, not executing

Diagnostic Steps and Solutions:
- Check that the worker is running: ps aux | grep celery
- Verify broker connectivity: redis-cli -h localhost -p 5412 ping
- Check worker logs: tail -f logs/worker.log
- Ensure queue names match: the default queue is default
- Restart the worker (see the sketch below)
- Check worker concurrency: CELERY_WORKER_CONCURRENCY=4
- Monitor worker memory: ps aux | grep celery | awk '{print $2, $4}'
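For the restart step, the mechanism depends on how the worker is supervised; with systemd it might look like this (the unit name is an assumption):

```bash
# Unit name is an assumption -- adjust to however the worker is supervised.
sudo systemctl restart automagik-spark-worker
```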
Schedules Not Triggering
Error: Scheduled tasks not executing at the expected time

Diagnostic Steps and Solutions:
- Verify the Beat scheduler is running: ps aux | grep celery | grep beat
- Check the timezone configuration: AUTOMAGIK_TIMEZONE=UTC
- Verify the schedule is enabled: automagik-spark schedules list
- Check Beat logs: grep "DatabaseScheduler" logs/worker.log
- Ensure the schedule expression is valid: cron or interval format
- Restart Beat (see the sketch below)
- Check the system clock: timedatectl (ensure NTP is synchronized)
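As with the worker, the Beat unit name below is an assumption:

```bash
# Unit name is an assumption -- adjust to however Beat is supervised.
sudo systemctl restart automagik-spark-beat
```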
Encryption Errors
Error: Fernet key must be 32 url-safe base64-encoded bytes

Diagnostic Steps and Solutions:
- Generate a proper key (see Encryption Key Setup above)
- Set it in .env: AUTOMAGIK_SPARK_ENCRYPTION_KEY=<generated-key>
- Ensure the key is base64-encoded (often ending with =)
- Don't use spaces or newlines in the key
- Verify the key format: it should be exactly 44 characters (32 bytes, base64-encoded)
- If changing keys, back up the database first (encrypted data will be lost)
Permission Errors
Error: Permission denied when accessing logs or database

Diagnostic Steps and Solutions:
- Create the log directory (see the sketch below)
- Set ownership
- Set permissions
- Run services as the correct user in systemd service files
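A sketch for the first three steps, assuming logs live under /var/log/automagik-spark and services run as a spark user (both are assumptions):

```bash
# Path and user name are assumptions -- match them to your deployment.
sudo mkdir -p /var/log/automagik-spark
sudo chown -R spark:spark /var/log/automagik-spark
sudo chmod -R 750 /var/log/automagik-spark
```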

