Troubleshooting guide for the top 10 Spark errors with practical solutions
This guide covers the most common errors you’ll encounter with Spark and how to fix them. Each error includes the exact message, what it means, and step-by-step solutions.
Quick Tip: Most errors come down to one of three things: services not running, wrong configuration, or missing environment variables. Start there.
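If you want a quick triage before diving into a specific error, the checks below run through those three causes in order. This is a rough sketch that assumes the local PostgreSQL/Redis setup used in the examples below.
# Are the backing services running?
ps aux | grep -E "postgres|redis" | grep -v grep
# Are the Spark environment variables set?
env | grep AUTOMAGIK
# Are the worker and Beat scheduler alive?
automagik-spark worker status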
Your shell cannot find the automagik-spark command. This usually means it’s not installed, not in your PATH, or your virtual environment isn’t activated.
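A quick way to narrow it down is the sequence below. This is a sketch: the virtual environment path and the pip package name are examples, so adjust them to your installation.
# Is the command on your PATH?
which automagik-spark
# If you installed into a virtual environment, activate it first:
source .venv/bin/activate   # example path - use your venv's location
# Is the package installed in the active environment? (package name assumed)
pip show automagik-spark
# If pip installed it with --user, make sure ~/.local/bin is on PATH:
export PATH="$HOME/.local/bin:$PATH"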
Step 1: Check if the .env file exists
# Check if .env exists in your project root:
ls -la .env
# If it doesn't exist, create it:
touch .env
Step 2: Add the database URL
# Edit .env file and add:
AUTOMAGIK_SPARK_DATABASE_URL=postgresql://user:password@localhost:5432/spark_db

# Example for development:
AUTOMAGIK_SPARK_DATABASE_URL=postgresql://postgres:postgres@localhost:5432/automagik_spark
Step 3: Verify the variable is set
# Load the .env file (if not using dotenv):
export $(cat .env | xargs)
# Check if it's set:
echo $AUTOMAGIK_SPARK_DATABASE_URL
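Note that the export $(cat .env | xargs) one-liner breaks on comments and quoted values. If your .env contains either, this plain-shell alternative is more robust:
set -a        # auto-export every variable the file defines
source .env
set +a
echo $AUTOMAGIK_SPARK_DATABASE_URL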
Step 4: Test database connection
# Try to run migrations:
automagik-spark db migrate
Step 1: Check if PostgreSQL is running
# On Linux (systemd):
sudo systemctl status postgresql
# On macOS (Homebrew):
brew services list | grep postgresql
# Or check for the process:
ps aux | grep postgres
Step 2: Start PostgreSQL if not running
# On Linux (systemd):
sudo systemctl start postgresql
# On macOS (Homebrew):
brew services start postgresql
# On Docker:
docker start postgres
# OR if using docker-compose:
docker-compose up -d postgres
Step 3: Verify PostgreSQL is listening
# Check if port 5432 is open:
netstat -an | grep 5432
# OR
lsof -i :5432
# Try connecting with psql:
psql -h localhost -U postgres -d postgres
# Connect to PostgreSQL:
psql -U postgres

-- Create database:
CREATE DATABASE automagik_spark;
-- Grant permissions:
GRANT ALL PRIVILEGES ON DATABASE automagik_spark TO postgres;
-- Exit:
\q
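Before re-running migrations, it helps to confirm the new database actually accepts connections. The connection details below mirror the development example above; swap in your own user and database name.
psql -h localhost -U postgres -d automagik_spark -c "SELECT 1;"
# A result of "1" means the database exists and the credentials work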
Step 1: Check if Redis is running
# Test Redis connection:
redis-cli ping
# Expected output: PONG

# If that fails, check if Redis is running:
ps aux | grep redis
Step 2: Start Redis if not running
# On Linux (systemd):
sudo systemctl start redis
# On macOS (Homebrew):
brew services start redis
# On Docker:
docker start redis
# OR if using docker-compose:
docker-compose up -d redis
# Direct start (for testing):
redis-server
Step 3: Verify Redis is accessible
# Test connection with redis-cli:
redis-cli -h localhost -p 6379 ping
# Check if port is open:
netstat -an | grep 6379
Step 4: Check broker URL configuration
# Verify CELERY_BROKER_URL in .env:
echo $AUTOMAGIK_SPARK_CELERY_BROKER_URL
# Should be:
# redis://localhost:6379/0
# OR with password:
# redis://:password@localhost:6379/0
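If your Redis instance requires a password, confirm that the password embedded in the broker URL actually works. A minimal check using redis-cli's -a flag:
redis-cli -h localhost -p 6379 -a your-redis-password ping
# Expected output: PONG
# (redis-cli may warn about passing -a on the command line; that warning is harmless)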
Step 5: Test Celery connection
# Try starting a worker (will fail immediately if Redis is down):
automagik-spark worker start
# Check for connection errors in logs
Step 1: Check if the source service is running
# For LangFlow (default port 7860):
curl http://localhost:7860/health
# OR
curl http://localhost:7860/api/v1/version
# For Hive (default port 8881):
curl http://localhost:8881/health
Step 2: Check source configuration
# List all configured sources:
automagik-spark sources list
# Check the URL in the output - should match where your service is running
Step 3: Test connection manually
# Try to reach the API directly:
curl -v http://localhost:7860/api/v1/flows
# Look for connection errors in the output
# URLs must:
# - Start with http:// or https://
# - Not have trailing slashes (Spark handles this)
# - Use correct port number
# - Use correct hostname (localhost vs 127.0.0.1 vs 0.0.0.0)

# ✅ Good examples:
http://localhost:7860
https://langflow.example.com
http://192.168.1.100:8881

# ❌ Bad examples:
localhost:7860           # Missing protocol
http://localhost:7860/   # Trailing slash can cause issues
htp://localhost:7860     # Typo in protocol
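A quick sanity check before adding a source URL, reusing the health endpoint shown earlier. This is a sketch; substitute your own URL.
URL="http://localhost:7860"
case "$URL" in
  http://*|https://*) echo "protocol looks OK" ;;
  *)                  echo "missing http:// or https:// prefix" ;;
esac
curl -sf -o /dev/null "$URL/health" && echo "reachable" || echo "not reachable"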
Step 1: Check the configured API key
# Check what API key is configured:
automagik-spark sources list
# (API keys are encrypted in storage, so you won't see the actual value)

# Test the API key manually:
# For LangFlow:
curl -H "x-api-key: your-api-key" http://localhost:7860/api/v1/flows
# For Hive:
curl -H "Authorization: Bearer your-api-key" http://localhost:8881/api/v1/agents
Step 2: Generate new API key from source
# For LangFlow:
# 1. Open LangFlow UI (http://localhost:7860)
# 2. Go to Settings → API Keys
# 3. Create new API key
# 4. Copy the key immediately (won't be shown again)

# For Hive:
# Use the Hive API or UI to generate a new key
Step 3: Update API key in Spark
# Update the source with new API key:
automagik-spark sources update <source-id> \
  --api-key your-new-api-key

# OR delete and re-add the source:
automagik-spark sources delete <source-id>
automagik-spark sources add \
  --name my-langflow \
  --type langflow \
  --url http://localhost:7860 \
  --api-key your-new-api-key
Step 4: Verify authentication works
# Try listing workflows from the source:
automagik-spark workflows list --source-url http://localhost:7860
# If this works, authentication is successful
# 1. Get your API key from LangFlow
LANGFLOW_KEY="sk-..."

# 2. Test it directly:
curl -H "x-api-key: $LANGFLOW_KEY" http://localhost:7860/api/v1/flows

# 3. If that works, update Spark:
automagik-spark sources update <source-id> --api-key $LANGFLOW_KEY
Step 1: List remote workflows
# List remote workflows:
automagik-spark workflows list --source-url http://localhost:7860
# Check if your workflow ID appears in the list
Step 2: Verify workflow exists in source UI
# For LangFlow:
# 1. Open http://localhost:7860
# 2. Go to Flows section
# 3. Find your flow and note the ID in the URL

# For Hive:
# 1. Open http://localhost:8881
# 2. Check Agents/Teams/Workflows section
# 3. Verify the resource exists
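You can also confirm from the command line instead of the UI. This is a sketch: it assumes jq is installed and that the LangFlow list endpoint returns flow objects with id and name fields, which may vary between versions.
curl -s -H "x-api-key: your-api-key" http://localhost:7860/api/v1/flows \
  | jq -r '.[] | "\(.id)  \(.name)"'
# Scan the output for your workflow's ID or name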
Step 3: Sync a different workflow
# If the workflow was deleted, sync a new one:
automagik-spark workflows sync <new-flow-id> \
  --source-url http://localhost:7860

# Or look the workflow up by name first:
automagik-spark workflows list --source-url http://localhost:7860
# Find the correct ID, then sync it
Step 4: Clean up orphaned workflows
# List synced workflows in Spark:
automagik-spark workflows list
# Delete the missing workflow:
automagik-spark workflows delete <workflow-id>
# This also removes associated schedules
Step 5: Check source type matches
# Verify you're using the correct source URL:
automagik-spark sources list
# Make sure the source type (langflow/automagik-hive) matches
# the workflow you're trying to sync
For LangFlow workflows, Spark requires explicit input and output components to be specified. This error means the workflow structure doesn’t have identifiable components or they weren’t configured during sync.
Step 1: Identify the input and output components
# Open your flow in LangFlow UI:
# http://localhost:7860

# Identify which components should be:
# - Input: Usually ChatInput, TextInput, or similar
# - Output: Usually ChatOutput, TextOutput, or similar
Step 2: Get component IDs from flow
# List the flow details to see components:
automagik-spark workflows list --source-url http://localhost:7860
# Look for the flow structure in the output
# Component IDs are usually shown in the flow data
Step 3: Sync with explicit component specification
# For LangFlow, you may need to specify components:
# (Note: Current CLI may not support this directly)
# In this case, use the API:
curl -X POST http://localhost:8883/api/v1/workflows/sync/<flow-id> \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "source_url": "http://localhost:7860",
    "input_component": "ChatInput-xxxxx",
    "output_component": "ChatOutput-yyyyy"
  }'
Step 4: Simplify your workflow
# If the flow is too complex, simplify it:
# 1. Create a simpler version with clear input/output
# 2. Test the simple version first
# 3. Add complexity incrementally
# 1. Open flow in LangFlow
# 2. Click on the input component (e.g., ChatInput)
# 3. Look in the component settings panel for the ID
# 4. Note it down (format: ComponentType-uuid)
# 5. Repeat for output component
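A rough API-side alternative to clicking through the UI: fetch the flow JSON and list its node IDs. This sketch assumes jq is installed, that LangFlow exposes GET /api/v1/flows/<flow-id>, and that the flow keeps its nodes under data.nodes; adjust to your LangFlow version.
curl -s -H "x-api-key: your-api-key" \
  http://localhost:7860/api/v1/flows/<flow-id> \
  | jq -r '.data.nodes[].id'
# Expect IDs like ChatInput-xxxxx and ChatOutput-yyyyy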
# If you need exact times or more complex patterns, use cron:
automagik-spark schedules create \
  --workflow-id <workflow-id> \
  --cron "*/5 * * * *"   # Every 5 minutes

# More cron examples:
0 * * * *     # Every hour (on the hour)
0 0 * * *     # Every day at midnight
0 9 * * 1-5   # Weekdays at 9 AM
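To sanity-check a cron expression before creating a schedule, a one-liner like this works. It assumes Python plus the third-party croniter package, which is not part of Spark.
pip install croniter
python3 -c "from croniter import croniter; print(croniter.is_valid('*/5 * * * *'))"
# Prints True for a valid expression, False otherwise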
Schedules are configured but tasks are not being created at the scheduled times. This is usually because Celery Beat (the scheduler) is not running or not connected to the database.
Step 1: Check worker and Beat status
# Check worker and beat status:
automagik-spark worker status
# Expected output should show:
# - Worker is running (PID: xxxxx)
# - Beat scheduler is running (PID: yyyyy)
Step 2: Start Beat if not running
# Start worker and beat together:
automagik-spark worker start
# OR start in daemon mode (background):
automagik-spark worker start --daemon
# Check logs:
automagik-spark worker logs --follow
Step 3: Check if schedule is active
# List all schedules:
automagik-spark schedules list
# Look for "active: true" in the output
# If inactive, enable it:
automagik-spark schedules enable <schedule-id>
Step 4: Verify schedule configuration
# View schedule details:
automagik-spark schedules list
# Check:
# - active: should be true
# - next_run_at: should be in the future
# - schedule_type: should be set (interval/cron)
# - expression: should be valid
# Stop worker and beat:
automagik-spark worker stop
# Wait a few seconds, then restart:
automagik-spark worker start
# Beat will reload schedules from database on startup
Step 7: Check timezone configuration
# Verify timezone setting in .env:
echo $AUTOMAGIK_TIMEZONE
# Should match your database timezone
# Default is UTC

# If not set, add to .env:
AUTOMAGIK_TIMEZONE=UTC
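When these clocks disagree, schedules fire at unexpected times, so compare them side by side. The connection details are the development example from earlier; adjust to your setup.
psql -h localhost -U postgres -d automagik_spark -c "SHOW timezone;"   # database timezone
date                                                                   # host timezone
echo $AUTOMAGIK_TIMEZONE                                               # Spark's setting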
# Search all logs for a specific error:
grep -r "error message" logs/
# Search for failed tasks:
grep -i "failed" logs/worker.log
# Search for connection errors:
grep -i "connection" logs/*.log
# Enable debug logging:
export AUTOMAGIK_SPARK_LOG_LEVEL=DEBUG
# Restart services:
automagik-spark worker stop
automagik-spark worker start
# Logs will now be much more verbose