Overview
Spark’s adapter system provides a unified interface for executing workflows across different sources (LangFlow, Hive, etc.). Adapters handle the complexity of source-specific APIs, input/output formats, and authentication, allowing Spark to work with multiple workflow platforms seamlessly.

What Adapters Do
Adapters serve as translators between Spark’s generic task model and source-specific APIs.

Key Responsibilities
- API Communication: Handle HTTP requests to workflow sources
- Input Transformation: Convert Spark’s input data to source-specific format
- Output Normalization: Convert source responses to unified format
- Authentication: Manage API keys and authentication headers
- Error Handling: Catch and normalize errors across sources
- Flow Discovery: List and retrieve workflows from sources
Common Adapter Interface
All adapters inherit from BaseWorkflowAdapter (source):
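A minimal sketch of that interface, assuming the four methods exercised in the Testing Strategy section below; the real base class in base.py may define more:

```python
# Sketch only: method names match those referenced later on this page;
# exact signatures in base.py may differ.
from abc import ABC, abstractmethod
from typing import Any


class BaseWorkflowAdapter(ABC):
    """Translates between Spark's generic task model and a source-specific API."""

    source_type: str  # e.g. "langflow" or "automagik-hive"

    @abstractmethod
    def validate(self) -> bool:
        """Verify connectivity and credentials against the source."""

    @abstractmethod
    def list_flows_sync(self) -> list[dict[str, Any]]:
        """List the workflows available on the source."""

    @abstractmethod
    def get_flow_sync(self, flow_id: str) -> dict[str, Any]:
        """Fetch a single workflow definition."""

    @abstractmethod
    def run_flow_sync(self, flow_id: str, input_data: Any) -> "WorkflowExecutionResult":
        """Execute a workflow and return a normalized result."""
```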
Unified Result Format
All adapters return a WorkflowExecutionResult:
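A hedged sketch of the result shape; the field names below are assumptions chosen to stay consistent with the examples on this page, not the exact source definition:

```python
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class WorkflowExecutionResult:
    success: bool
    result: Optional[Any] = None        # normalized output payload
    session_id: Optional[str] = None    # source-side session identifier, if any
    error: Optional[str] = None         # normalized error message on failure
    metadata: dict[str, Any] = field(default_factory=dict)  # e.g. coordinator, members, steps
```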
LangFlow Adapter
The LangFlow adapter (source) handles component-based workflows where inputs and outputs are tied to specific components.

Component-Based I/O
LangFlow flows consist of connected components (nodes):

- input_component: ID of the component that receives input (e.g., ChatInput-abc123)
- output_component: ID of the component that produces output (e.g., ChatOutput-xyz789)
Auto-Detection
The LangFlow adapter automatically detects input/output components:
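A sketch of how that detection can work, scanning the flow graph for chat input/output nodes; the node-type names and flow JSON shape are assumptions based on LangFlow conventions:

```python
def detect_components(flow: dict) -> tuple[str | None, str | None]:
    """Return (input_component, output_component) IDs found in a flow definition."""
    input_id = output_id = None
    for node in flow["data"]["nodes"]:
        node_type = node["data"]["type"]
        if "ChatInput" in node_type and input_id is None:
            input_id = node["id"]    # e.g. "ChatInput-abc123"
        elif "ChatOutput" in node_type and output_id is None:
            output_id = node["id"]   # e.g. "ChatOutput-xyz789"
    return input_id, output_id
```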
Flow Execution

When executing a LangFlow workflow, the adapter resolves the input/output component IDs, sends the task’s input to the input component, and extracts the result from the output component’s response.
API Communication

The underlying LangFlowManager makes HTTP requests:
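A sketch of the underlying call, assuming LangFlow’s public /api/v1/run/{flow_id} endpoint; Spark’s exact request may differ:

```python
import httpx


def run_langflow(api_url: str, api_key: str, flow_id: str, text: str) -> dict:
    response = httpx.post(
        f"{api_url}/api/v1/run/{flow_id}",
        headers={"x-api-key": api_key},   # LangFlow auth, per the comparison table below
        json={
            "input_value": text,          # the message delivered to the input component
            "input_type": "chat",
            "output_type": "chat",
            "tweaks": {},                 # optional per-component overrides
        },
        timeout=60.0,
    )
    response.raise_for_status()
    return response.json()
```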
Input/Output Example
From Spark input, through the LangFlow response, to the normalized output stored in task.output_data:
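An illustrative round trip; the field names and response nesting are assumptions, not Spark’s exact schema:

```python
# Spark input (task.input_data):
spark_input = {"input_value": "What is the capital of France?"}

# Raw LangFlow response (abridged): the answer is nested inside component outputs.
langflow_response = {
    "session_id": "sess-123",
    "outputs": [{"outputs": [{"results": {"message": {"text": "Paris."}}}]}],
}

# Normalized output stored in task.output_data:
task_output = {"result": "Paris.", "session_id": "sess-123"}
```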
Hive Adapter
The Hive adapter (source) handles three different entity types: agents, teams, and workflows, each with distinct APIs.

Agent vs Team vs Workflow
Hive supports three types of executable entities:

| Entity | Description | API Endpoint | Use Case |
|---|---|---|---|
| Agent | Single AI agent with tools | /api/v1/agents/{id}/run | Individual task execution |
| Team | Multiple agents working together | /api/v1/teams/{id}/run | Collaborative task solving |
| Workflow | Multi-step orchestrated process | /api/v1/workflows/{id}/run | Complex multi-stage tasks |
API Variations
Each entity type has a different execution endpoint:
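A sketch of endpoint selection; the paths come straight from the table above:

```python
ENDPOINTS = {
    "agent": "/api/v1/agents/{id}/run",
    "team": "/api/v1/teams/{id}/run",
    "workflow": "/api/v1/workflows/{id}/run",
}


def hive_run_url(hive_url: str, entity_type: str, entity_id: str) -> str:
    """Build the execution URL for any of the three Hive entity types."""
    return hive_url.rstrip("/") + ENDPOINTS[entity_type].format(id=entity_id)
```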
Unified Execution

The Hive adapter handles all three types through a single interface:
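A sketch of that single entry point, reusing hive_run_url from above; the payload field follows the comparison table below, but Spark’s exact call may differ:

```python
import httpx


def run_hive_entity(
    client: httpx.Client,
    hive_url: str,
    api_key: str,
    entity_type: str,
    entity_id: str,
    prompt: str,
) -> dict:
    # One code path serves agents, teams, and workflows; only the URL differs.
    response = client.post(
        hive_run_url(hive_url, entity_type, entity_id),
        headers={"Authorization": f"Bearer {api_key}"},  # Bearer auth, per the table below
        json={"prompt": prompt},                         # no component IDs needed
        timeout=60.0,
    )
    response.raise_for_status()
    return response.json()
```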
Response Normalization

Hive returns a different response structure for each entity type; the adapter normalizes each one into a WorkflowExecutionResult that Spark can process uniformly:
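A hedged normalization sketch; the response field names are assumptions, but the metadata keys match the comparison table below:

```python
from typing import Any


def normalize_hive_response(body: dict[str, Any]) -> WorkflowExecutionResult:
    # Entity-specific extras: teams report coordinator/members, workflows report steps.
    metadata = {k: body[k] for k in ("coordinator", "members", "steps") if k in body}
    return WorkflowExecutionResult(
        success=True,
        result=body.get("content"),        # response field name is an assumption
        session_id=body.get("session_id"),
        metadata=metadata,
    )
```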
Default Sync Parameters
Unlike LangFlow, Hive doesn’t use component IDs; the task input is passed directly, with default parameters for synchronous execution:
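An illustrative payload; only the prompt field and session tracking are documented in the table below, and the other defaults are assumptions:

```python
payload = {
    "prompt": "Summarize today's tickets",  # the task input, sent as-is
    "stream": False,                        # assumed default: wait for the full response
    "session_id": "task-42",                # recommended for session tracking
}
```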
Comparison Table

LangFlow vs Hive Adapter Behavior
| Feature | LangFlow Adapter | Hive Adapter |
|---|---|---|
| Source Type | langflow | automagik-hive |
| Entity Types | Single type (Flow) | Three types (Agent, Team, Workflow) |
| Component IDs | Required (auto-detected) | Not used |
| Input Format | input_value + component ID | prompt or input |
| Output Structure | Component outputs array | Entity-specific response |
| Session Tracking | Optional | Recommended |
| Metadata | Minimal | Rich (coordinator, members, steps) |
| API Authentication | x-api-key header | Authorization: Bearer header |
API Request Comparison
The two sources expect different request shapes:
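A side-by-side sketch with placeholder URLs, keys, and IDs; the payload fields and auth headers follow the comparison table above:

```python
import httpx

with httpx.Client(timeout=60.0) as client:
    # LangFlow: component-addressed input, x-api-key auth.
    client.post(
        f"{langflow_url}/api/v1/run/{flow_id}",
        headers={"x-api-key": langflow_key},
        json={"input_value": "Hello", "input_type": "chat", "output_type": "chat"},
    )

    # Hive: plain prompt, Bearer auth, entity-typed endpoint.
    client.post(
        f"{hive_url}/api/v1/agents/{agent_id}/run",
        headers={"Authorization": f"Bearer {hive_key}"},
        json={"prompt": "Hello"},
    )
```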
Adapter Registration

Adapters are registered in the AdapterRegistry (source):
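A hedged sketch of the registry pattern; the real AdapterRegistry API in registry.py may differ, but the source-type strings match the comparison table above:

```python
ADAPTERS: dict[str, type[BaseWorkflowAdapter]] = {
    "langflow": LangFlowAdapter,
    "automagik-hive": HiveAdapter,
}


def get_adapter_class(source_type: str) -> type[BaseWorkflowAdapter]:
    try:
        return ADAPTERS[source_type]
    except KeyError:
        raise ValueError(f"No adapter registered for source type {source_type!r}")
```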
When to Create Custom Adapters
You should create a custom adapter when:

- Integrating a new workflow platform (e.g., n8n, Zapier, or a custom system)
- The source uses authentication not covered by standard API keys
- Complex input/output transformation is required
- Custom error handling is needed for source-specific behavior
- The source requires special configuration (webhooks, polling, etc.)
Custom Adapter Example
Here’s a skeleton for a custom adapter:
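A hedged skeleton; the endpoint paths (/health, /flows, ...) are hypothetical, and the method names follow the BaseWorkflowAdapter sketch above:

```python
import httpx


class CustomSourceAdapter(BaseWorkflowAdapter):
    source_type = "custom-source"

    def __init__(self, api_url: str, api_key: str):
        self.api_url = api_url.rstrip("/")
        self._client = httpx.Client(
            headers={"Authorization": f"Bearer {api_key}"}, timeout=60.0
        )

    def validate(self) -> bool:
        """Return True if the source is reachable and the key is accepted."""
        try:
            return self._client.get(f"{self.api_url}/health").status_code == 200
        except httpx.HTTPError:
            return False

    def list_flows_sync(self) -> list[dict]:
        return self._client.get(f"{self.api_url}/flows").json()

    def get_flow_sync(self, flow_id: str) -> dict:
        return self._client.get(f"{self.api_url}/flows/{flow_id}").json()

    def run_flow_sync(self, flow_id: str, input_data: dict) -> WorkflowExecutionResult:
        response = self._client.post(
            f"{self.api_url}/flows/{flow_id}/run", json=input_data
        )
        body = response.json()
        return WorkflowExecutionResult(success=True, result=body.get("output"))
```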
type="custom-source":
Error Handling Patterns
Adapters should catch and normalize errors:

Connection Errors
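For example, inside run_flow_sync (a sketch; the exception types are httpx’s real ones):

```python
try:
    response = self._client.post(url, json=payload)
except (httpx.ConnectError, httpx.ConnectTimeout) as exc:
    # Normalize transport failures instead of letting httpx exceptions escape.
    return WorkflowExecutionResult(success=False, error=f"Cannot reach source: {exc}")
```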
Authentication Errors
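A sketch of surfacing 401/403 responses as clear, source-agnostic errors:

```python
if response.status_code in (401, 403):
    return WorkflowExecutionResult(
        success=False,
        error="Authentication failed: check the API key configured for this source",
    )
```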
Workflow Errors
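A sketch for runs the source accepted but that failed internally; the status/message field names are assumptions:

```python
body = response.json()
if body.get("status") == "error":
    # The source reached the workflow, but the run itself failed.
    return WorkflowExecutionResult(
        success=False, error=body.get("message", "unknown workflow error")
    )
```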
Testing Strategy
When creating custom adapters, test:

- Connection: validate() works
- Discovery: list_flows_sync() returns flows
- Retrieval: get_flow_sync() fetches individual flows
- Execution: run_flow_sync() executes and returns results
- Error handling: all error types are caught and normalized
Example Test
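A minimal pytest sketch against the hypothetical CustomSourceAdapter skeleton above:

```python
def test_validate_fails_cleanly_when_source_is_unreachable():
    # Nothing listens on this port, so validate() should return False, not raise.
    adapter = CustomSourceAdapter(api_url="http://localhost:9", api_key="test-key")
    assert adapter.validate() is False


def test_run_flow_sync_normalizes_output(monkeypatch):
    adapter = CustomSourceAdapter(api_url="http://localhost:9000", api_key="test-key")

    class FakeResponse:
        def json(self):
            return {"output": "hello"}

    # Stub the HTTP layer so the test runs without a live source.
    monkeypatch.setattr(adapter._client, "post", lambda url, json: FakeResponse())
    result = adapter.run_flow_sync("flow-123", {"input_value": "hi"})
    assert result.success and result.result == "hello"
```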
Performance Considerations
HTTP Client Reuse
Adapters should reuse HTTP clients:
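A sketch, assuming an httpx-based adapter:

```python
import httpx


class LangFlowHTTP:
    """One shared client per adapter instance keeps TCP connections alive."""

    def __init__(self, api_url: str, api_key: str):
        self.api_url = api_url
        # Reused for every request instead of constructing a client per call.
        self._client = httpx.Client(headers={"x-api-key": api_key})

    def run(self, flow_id: str, payload: dict) -> dict:
        return self._client.post(
            f"{self.api_url}/api/v1/run/{flow_id}", json=payload
        ).json()
```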
Timeout Configuration

Always set timeouts:
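For example, with httpx (the values are illustrative, not Spark’s defaults):

```python
import httpx

# Separate budgets for each phase of the request.
timeout = httpx.Timeout(connect=5.0, read=60.0, write=10.0, pool=5.0)
client = httpx.Client(timeout=timeout)
```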
Connection Pooling

For high-volume workloads, configure connection pools:
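Again with httpx; the limits are illustrative:

```python
import httpx

# Cap total and idle connections so bursts of tasks don't exhaust sockets.
limits = httpx.Limits(max_connections=100, max_keepalive_connections=20)
client = httpx.Client(limits=limits, timeout=httpx.Timeout(60.0))
```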
Source Code References

- BaseWorkflowAdapter: automagik_spark/core/workflows/adapters/base.py
- LangFlowAdapter: automagik_spark/core/workflows/adapters/langflow_adapter.py
- HiveAdapter: automagik_spark/core/workflows/adapters/hive_adapter.py
- AdapterRegistry: automagik_spark/core/workflows/adapters/registry.py
Next Steps
- Learn about Task Execution to see how adapters are invoked
- See Scheduling Internals for the complete workflow execution flow
- Explore Custom Adapters for a detailed custom adapter development guide

