AutoMagik Hive is a multi-agent AI system that provides three distinct execution entity types: Agents, Teams, and Workflows. This guide covers everything you need to integrate Hive with Spark for scheduled automation of AI agent tasks.
Prerequisites:
Hive installed and running on port 8000 (default)
Spark installed with PostgreSQL and Redis running
At least one agent, team, or workflow created in Hive
Hive provides three types of execution entities, and all three become “workflows” in Spark terminology:
Agents
Single AI agents with specific roles and tools. Best for focused, individual tasks like Q&A or simple automation.
Teams
Multi-agent groups that coordinate to solve complex problems. Multiple agents working together with specialized roles.
Workflows
Structured multi-step processes with defined sequences and logic. Linear task execution with dependencies.
Important: In Spark, agents, teams, and workflows are all treated as “workflows” that can be scheduled and executed. The distinction is preserved in metadata but the interface is unified.
Spark handles all three types identically from a scheduling perspective. The adapter automatically detects the entity type and calls the appropriate Hive API endpoint.
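To make that routing concrete, here is a minimal sketch of type-based dispatch against Hive's run endpoints. This is illustrative only, not Spark's actual adapter code: the helper name and the /workflows/{id}/runs path are assumptions; the agent and team paths match the curl examples later in this guide.

import requests

HIVE_URL = "http://localhost:8000"   # default Hive port
HEADERS = {"x-api-key": "your-hive-api-key"}

# Run endpoint per entity type. The agent and team paths appear in the curl
# examples below; the workflow path is assumed to follow the same pattern.
RUN_PATHS = {
    "hive_agent": "/agents/{id}/runs",
    "hive_team": "/teams/{id}/runs",
    "hive_workflow": "/workflows/{id}/runs",
}

def run_entity(entity_type: str, entity_id: str, message: str) -> dict:
    """Send a run request to the endpoint that matches the entity type."""
    path = RUN_PATHS[entity_type].format(id=entity_id)
    resp = requests.post(
        HIVE_URL + path,
        data={"message": message, "stream": "false"},  # form-encoded, not JSON
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()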
When you add your Hive instance as a source, Spark validates the connection. Expected output:
Health check passed: status success
Version check passed: Automagik Hive Multi-Agent System
Successfully added source: http://localhost:8000
The source type must be automagik-hive (not just hive). This tells Spark to use the correct adapter.
Verify the connection:
automagik-spark sources list
You should see your Hive source with status “active”.
3. Discover Available Workflows
List all agents, teams, and workflows from your Hive instance:
# View all available Hive entities
automagik-spark workflows sync
Expected output:
┌────────────────────┬──────────────┬────────────────────────────────┬─────────────┐
│ ID                 │ Name         │ Description                    │ Source      │
├────────────────────┼──────────────┼────────────────────────────────┼─────────────┤
│ researcher         │ researcher   │ AutoMagik Hive Agent: research │ my-hive     │
│ dev-team           │ dev-team     │ Hive Team with 3 members       │ my-hive     │
│ data-pipeline      │ data-pipeline│ AutoMagik Hive Workflow: data  │ my-hive     │
└────────────────────┴──────────────┴────────────────────────────────┴─────────────┘
You’ll see:
Agents - Individual AI agents
Teams - Multi-agent teams
Workflows - Structured processes
4. Sync an Entity to Spark
Import the Hive entity you want to schedule:
# Sync a specific agent/team/workflow
automagik-spark workflows sync researcher
Expected output:
Successfully synced flow researcher
Verify it was synced:
automagik-spark workflows list
Expected output:
┌──────────┬────────────┬────────────┬─────────┬───────────┬──────────┬──────┬──────────────┐
│ ID       │ Name       │ Latest Run │ Tasks   │ Schedules │ Instance │ Type │ Last Updated │
├──────────┼────────────┼────────────┼─────────┼───────────┼──────────┼──────┼──────────────┤
│ wf-123   │ researcher │ NEW        │ 0 (0)   │ 0         │ my-hive  │ ...  │ 2 mins ago   │
└──────────┴────────────┴────────────┴─────────┴───────────┴──────────┴──────┴──────────────┘
5. Test Manual Execution
Before scheduling, test the entity manually:
# Run the workflow with a test message
automagik-spark workflows run wf-123 \
  --input "Research the latest AI trends in 2024"
Expected output:
Task task-abc-123 completed successfully
Input: Research the latest AI trends in 2024
Output: {
  "content": "Here are the key AI trends for 2024:...",
  "session_id": "...",
  "run_id": "..."
}
6. Schedule Automated Execution
Create a schedule for the synced workflow. The interactive prompt asks for the schedule type, a cron expression (0 9 * * 1-5 runs at 9:00 AM on weekdays), and the input to send on each run:
Available Workflows:
0: researcher (0 schedules)

Select a workflow: 0

Schedule Type:
  0: Interval (e.g., every 30 minutes)
  1: Cron (e.g., every day at 8 AM)
  2: One-time (run once at a specific time)

Select schedule type: 1
Enter cron expression: 0 9 * * 1-5
Enter input value: Daily market research report
Verify the schedule:
automagik-spark schedules list
Expected output:
┌──────────────┬────────────┬──────┬─────────────┬────────────┬────────┬───────┬────────┐
│ ID           │ Workflow   │ Type │ Expression  │ Next Run   │ Tasks  │ Input │ Status │
├──────────────┼────────────┼──────┼─────────────┼────────────┼────────┼───────┼────────┤
│ schedule-123 │ researcher │ cron │ 0 9 * * 1-5 │ Tomorrow   │ 0 (0)  │ {...} │ ACTIVE │
│              │            │      │             │ 09:00 AM   │        │       │        │
└──────────────┴────────────┴──────┴─────────────┴────────────┴────────┴───────┴────────┘
7. Monitor Execution Results
Check the execution history and results:
# View all task executions
automagik-spark tasks list
Expected output:
┌──────────────┬────────────┬─────────────┬────────┬────────────┬──────────┐
│ ID           │ Workflow   │ Schedule    │ Status │ Created    │ Duration │
├──────────────┼────────────┼─────────────┼────────┼────────────┼──────────┤
│ task-abc-123 │ researcher │ schedule-12 │ ✓ OK   │ 5 mins ago │ 2.3s     │
│ task-def-456 │ researcher │ schedule-12 │ ✓ OK   │ 1 day ago  │ 1.8s     │
└──────────────┴────────────┴─────────────┴────────┴────────────┴──────────┘
View detailed output:
automagik-spark tasks view task-abc-123
This shows the full response from the Hive agent/team/workflow, including the content, session ID, run ID, and execution metrics.
Critical: Content-Type Must Be Form-Urlencoded
All Hive execution endpoints use application/x-www-form-urlencoded, NOT JSON. This is the most common mistake when integrating with Hive.
Running an agent:
curl -X POST "http://localhost:8000/agents/support-agent/runs" \
  -H "x-api-key: your-hive-api-key" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "message=What is the status of order 12345?&stream=false&session_id=user-456"
Response:
{ "run_id": "run-abc123", "agent_id": "support-agent", "session_id": "user-456", "status": "COMPLETED", "content": "Order 12345 is currently in transit and will arrive on Monday.", "metrics": { "tokens_used": 245, "duration_ms": 1234 }}
Running a team:
curl -X POST "http://localhost:8000/teams/research-team/runs" \
  -H "x-api-key: your-hive-api-key" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "message=Research the impact of AI on healthcare&stream=false&mode=coordinate"
Response:
{ "run_id": "run-def456", "team_id": "research-team", "session_id": "session-789", "status": "COMPLETED", "coordinator_response": { "content": "Based on research from our team, AI is transforming healthcare through...", "agent_id": "coordinator" }, "member_responses": [ { "agent_id": "researcher", "response": "Found 15 recent studies on AI in healthcare..." }, { "agent_id": "analyst", "response": "Analysis shows 45% improvement in diagnostic accuracy..." } ]}
For conversation-based agents, Spark automatically manages session IDs:
# First run creates a session
automagik-spark workflows run wf-123 --input "Who are you?"

# Subsequent runs with the same workflow maintain session
automagik-spark workflows run wf-123 --input "What was my first question?"
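Spark handles this for you. If you are calling Hive directly, the same pattern is to reuse the session_id from the first response so the agent keeps conversational context. A minimal sketch, assuming the ask helper below (hypothetical; the parameters match the agent curl example above):

import requests

BASE = "http://localhost:8000"
HEADERS = {"x-api-key": "your-hive-api-key"}

def ask(agent_id: str, message: str, session_id: str | None = None) -> dict:
    """Run an agent; pass a session_id to continue an existing conversation."""
    data = {"message": message, "stream": "false"}
    if session_id:
        data["session_id"] = session_id
    resp = requests.post(f"{BASE}/agents/{agent_id}/runs", headers=HEADERS, data=data)
    resp.raise_for_status()
    return resp.json()

first = ask("researcher", "Who are you?")
follow_up = ask("researcher", "What was my first question?",
                session_id=first["session_id"])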
Troubleshooting
Error: 'Connection refused' or 'Failed to validate'
Problem: Spark cannot connect to Hive.
Solutions:
# 1. Check if Hive is running
curl http://localhost:8000/api/v1/health

# 2. Verify the port number (default is 8000)
# If Hive runs on a different port, update the source:
automagik-spark sources update \
  --url "http://localhost:YOUR_PORT"

# 3. Check firewall/network settings
# Ensure port 8000 is accessible

# 4. Test with API key
curl -X GET "http://localhost:8000/api/v1/health" \
  -H "x-api-key: your-hive-api-key"
Error: 'Flow not found' after sync
Problem: Entity ID doesn't match between Hive and Spark.
Solutions:
# 1. List available entities from Hive
automagik-spark workflows sync

# 2. Use the exact ID shown in the list
# Hive prioritizes: 'id' field > 'agent_id'/'team_id'/'workflow_id' > 'name'

# 3. Check Hive directly
curl http://localhost:8000/agents \
  -H "x-api-key: your-key"

# 4. Re-sync workflows
automagik-spark workflows delete {workflow_id}
automagik-spark workflows list --source my-hive --sync
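The priority noted in the comment above can be expressed as a tiny helper (a sketch of the described precedence, not Spark's actual code):

def resolve_entity_id(entity: dict) -> str | None:
    """Return the first usable identifier: id > agent_id/team_id/workflow_id > name."""
    for key in ("id", "agent_id", "team_id", "workflow_id", "name"):
        if entity.get(key):
            return entity[key]
    return None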
Error: 'Invalid API key' or 401 Unauthorized
Problem: API key is incorrect or missing.
Solutions:
# 1. Verify your Hive API key
hive config show | grep API_KEY

# 2. Update Spark source with correct key
automagik-spark sources update \
  --api-key "correct-api-key"

# 3. Test authentication directly
curl http://localhost:8000/api/v1/health \
  -H "x-api-key: your-key"
Error: 422 Unprocessable Entity - Wrong Content Type
Problem: Sending JSON instead of form-urlencoded data.
Cause: This is the most common Hive integration mistake.
Solution: Spark automatically uses the correct content type. If manually testing:
# From hive_adapter.py
response = client.post(
    f"/agents/{agent_id}/runs",
    data=payload,  # Form data, not JSON
    headers={"Content-Type": "application/x-www-form-urlencoded"}
)
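For comparison, this is the failure mode and its fix when writing your own client (illustrative only; the URL and key are placeholders):

import requests

url = "http://localhost:8000/agents/support-agent/runs"
headers = {"x-api-key": "your-hive-api-key"}

# WRONG: json= sends application/json, and Hive answers with 422
bad = requests.post(url, json={"message": "test", "stream": False}, headers=headers)

# RIGHT: data= sends application/x-www-form-urlencoded
good = requests.post(url, data={"message": "test", "stream": "false"}, headers=headers)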
Task completes but output is empty
Problem: Hive entity executed but produced no output.
Solutions:
# 1. Check task details for Hive-specific errors
automagik-spark tasks view task-abc-123

# 2. Verify the agent/team/workflow works in Hive directly
hive task assign researcher "Test message"

# 3. Check Hive logs
hive logs --tail 100
# Or if using Docker:
docker logs automagik-hive

# 4. Ensure the agent has proper tools and API keys
hive agents status researcher

# 5. Test entity manually
curl -X POST "http://localhost:8000/agents/{id}/runs" \
  -H "x-api-key: your-key" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "message=test&stream=false"
Wrong entity type detected
Problem: Spark treats a team as an agent or vice versa.
Solutions:
# 1. Check the entity type in Hive's API response
curl http://localhost:8000/agents \
  -H "x-api-key: your-key"
curl http://localhost:8000/teams \
  -H "x-api-key: your-key"

# 2. Ensure your Hive configuration uses correct format
# agents.yaml should have proper 'agents' vs 'teams' sections

# 3. Re-sync the entity after fixing Hive config
automagik-spark workflows sync entity-id --force
Entity type detection:
// Agent response includes:
{
  "id": "researcher",
  "data": {
    "type": "hive_agent"  // <-- Determines execution path
  }
}

// Team response includes:
{
  "id": "dev-team",
  "data": {
    "type": "hive_team"  // <-- Different type
  }
}
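A detection sketch based on the type field shown above (assumed logic, not Spark's actual code; the hive_workflow value is an assumption mirroring the other two):

def detect_entity_type(entity: dict) -> str:
    """Map Hive's type marker to an entity category."""
    type_marker = entity.get("data", {}).get("type", "")
    if type_marker == "hive_team":
        return "team"
    if type_marker == "hive_workflow":
        return "workflow"
    return "agent"  # hive_agent (or unknown) defaults to agent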
For example, with a market-researcher workflow synced from Hive:
# Run a test research task
automagik-spark workflows run market-researcher \
  --input "Research AI agent trends in Q4 2024"

# Check output
automagik-spark tasks list
automagik-spark tasks view <task-id>
You now know how to:
✅ Understand the differences between agents, teams, and workflows
✅ Navigate Hive’s API endpoints and content-type requirements
✅ Sync Hive entities to Spark
✅ Test and schedule automated executions
✅ Monitor results and troubleshoot issues
✅ Handle Hive-specific API behaviors and execution patterns
✅ Debug common integration problems
Key Takeaway: Hive’s agents, teams, and workflows all become “workflows” in Spark, but Spark’s adapter automatically handles the different API endpoints, content types, and response formats for each entity type. The most critical detail is using form-urlencoded data instead of JSON for all execution requests.