Overview
LangFlow is a visual workflow builder for creating AI agents and chatbots using a drag-and-drop interface. Spark integrates with LangFlow to schedule and automate flow executions. This guide covers everything from initial setup to advanced troubleshooting.
Prerequisites:
- LangFlow installed and running (default port 7860)
- Spark installed with PostgreSQL and Redis running
- Basic understanding of LangFlow’s visual workflow builder
Architecture Overview
How LangFlow and Spark Work Together
The Flow:
- Create workflows visually in LangFlow
- Spark discovers them via LangFlow API
- Create schedules in Spark
- Workers execute workflows automatically
- Results stored in Spark’s task history
LangFlow Architecture
LangFlow workflows (called “flows”) are built using a node-based graph system:
- Nodes: Individual components like ChatInput, ChatOutput, LLM, Prompt, etc.
- Edges: Connections between nodes that define data flow
- Projects/Folders: Containers for organizing flows
- Components: Reusable templates (excluded from Spark sync)
Integration Sequence
Prerequisites
Installing LangFlow
If you don’t have LangFlow running yet:
Alternative: Run LangFlow with Docker
Verify LangFlow is Running
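The quickest check is an HTTP probe of the flows endpoint used throughout this guide. A minimal sketch in Python (standard library only; a 401/403 response still proves the server is up):

```python
import urllib.error
import urllib.request

def check_langflow(base_url: str, timeout: float = 10) -> tuple[bool, str]:
    """Probe the LangFlow API and report whether it is reachable."""
    url = f"{base_url.rstrip('/')}/api/v1/flows/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return True, f"HTTP {resp.status}"
    except urllib.error.HTTPError as exc:
        # The server answered; a 401/403 just means an API key is required.
        return True, f"HTTP {exc.code}"
    except (urllib.error.URLError, OSError) as exc:
        return False, str(exc)

print(check_langflow("http://localhost:7860"))
```

If this returns `False`, start LangFlow (or fix the URL/port) before continuing.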
Creating a LangFlow Flow
Important: Spark only syncs flows that are saved in a Project or Folder. Template flows (without a project) are ignored.
1. Create a new project
- Open LangFlow UI at http://localhost:7860
- Click “New Project” or select an existing project
- Name your project (e.g., “Spark Workflows”)
2. Create a simple flow
- Click “New Flow” inside your project
- Name it (e.g., “Daily Report”)
- Add components from the left sidebar:
- ChatInput component (for input data)
- ChatOutput component (for results)
- Add any processing components in between
- Connect components by dragging from output to input nodes
3. Identify component IDs (Important!)
Spark needs to know which components handle input/output:
- Click on the ChatInput component
- Look at the URL or component properties panel
- Note the component ID (e.g., `ChatInput-abc123`)
- Do the same for ChatOutput (e.g., `ChatOutput-def456`)
Pro tip: Spark attempts to auto-detect ChatInput and ChatOutput components during sync, but knowing the IDs helps with troubleshooting.
4. Save and test the flow
- Click “Save” in the top right
- Click “Run” to test the flow
- Verify it produces the expected output
Step-by-Step Integration
1. Get a LangFlow API Key
Finding your API key
Option 1: Via LangFlow UI
- Click your profile icon in the top right
- Navigate to “Settings” or “API Keys”
- Generate a new API key or copy an existing one
- Format: `sk-...` or similar
Security: Never commit API keys to version control. Use environment variables or `.env` files.
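A sketch of reading the key from the environment when calling the API (the variable name `LANGFLOW_API_KEY` is illustrative; the `x-api-key` header is the one listed in the API notes later in this guide):

```python
import os
import urllib.request

def langflow_request(base_url: str, path: str) -> urllib.request.Request:
    """Build an authenticated LangFlow API request without hard-coding the key."""
    api_key = os.environ.get("LANGFLOW_API_KEY", "")  # illustrative variable name
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1{path}",
        headers={"x-api-key": api_key},
    )

req = langflow_request("http://localhost:7860", "/flows/")
print(req.full_url)  # http://localhost:7860/api/v1/flows/
```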
2. Add LangFlow as a Source
Add LangFlow to Spark’s list of workflow sources, then verify that the source was registered.
3. Discover Flows
List all available flows from your LangFlow instance.
What you’re seeing:
- ID: LangFlow’s flow ID (needed for sync)
- Name: Flow name from LangFlow
- Source: Which LangFlow instance it came from
Don’t see your flow? Check:
- Flow is saved in a Project/Folder (not a template)
- LangFlow API is accessible: `curl http://localhost:7860/api/v1/flows/`
- Flow is not marked as a “Component” (`is_component=True`)
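Those three checks are exactly what a discovery filter amounts to. A hedged sketch (the field names `is_component`, `project_id`, and `folder_id` follow the API notes elsewhere in this guide):

```python
def syncable_flows(flows: list[dict]) -> list[dict]:
    """Keep only flows Spark will sync: saved in a project/folder, not components."""
    return [
        f for f in flows
        if not f.get("is_component")                     # skip reusable components
        and (f.get("project_id") or f.get("folder_id"))  # skip template flows
    ]

flows = [
    {"id": "1", "name": "Daily Report", "project_id": "p1", "is_component": False},
    {"id": "2", "name": "Template", "project_id": None, "is_component": False},
    {"id": "3", "name": "Helper", "project_id": "p1", "is_component": True},
]
print([f["name"] for f in syncable_flows(flows)])  # ['Daily Report']
```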
4. Sync the Flow
Import a specific flow into Spark. What happens during sync:
- Spark fetches flow metadata from LangFlow
- Identifies ChatInput/ChatOutput components
- Stores component IDs for execution
- Normalizes flow data to Spark’s format
- Creates a local workflow record
5. Test Execution
Before scheduling, test the workflow manually and check whether the output indicates success or failure.
Understanding LangFlow execution results
LangFlow returns a deeply nested response structure, which Spark normalizes to a flat result.
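As an illustration, the nested shape below is typical of LangFlow 1.x `/run` responses (treat the exact key path as an assumption and verify against your version); normalization then reduces to a defensive lookup:

```python
def extract_output(response: dict):
    """Pull the final message text out of a LangFlow /run response.

    The key path is an assumption based on typical LangFlow 1.x payloads.
    """
    try:
        return response["outputs"][0]["outputs"][0]["results"]["message"]["text"]
    except (KeyError, IndexError, TypeError):
        return None

raw = {
    "session_id": "abc",
    "outputs": [{"outputs": [{"results": {"message": {"text": "Report generated."}}}]}],
}
print(extract_output(raw))              # Report generated.
print(extract_output({"outputs": []}))  # None
```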
6. Create a Schedule
Schedule the workflow to run automatically, following the prompts. Common schedule patterns:
- Every 5 minutes: `*/5 * * * *`
- Every hour: `0 * * * *`
- Daily at 9 AM: `0 9 * * *`
- Weekdays at 9 AM: `0 9 * * 1-5`
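To sanity-check a pattern before creating a schedule, a tiny matcher covering the syntax used above (`*`, `*/n`, ranges, and plain numbers) can help. This is a simplified sketch, not a full cron implementation (for example, it ignores comma lists and cron's day-of-month/day-of-week OR rule):

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*', '*/n', 'a-b', or a plain number."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    if "-" in field:
        lo, hi = (int(x) for x in field.split("-"))
        return lo <= value <= hi
    return int(field) == value

def cron_matches(expr: str, dt: datetime) -> bool:
    """True if a 5-field cron expression fires at the given minute."""
    minute, hour, dom, month, dow = expr.split()
    return (
        _field_matches(minute, dt.minute)
        and _field_matches(hour, dt.hour)
        and _field_matches(dom, dt.day)
        and _field_matches(month, dt.month)
        and _field_matches(dow, (dt.weekday() + 1) % 7)  # cron: 0 = Sunday
    )

print(cron_matches("0 9 * * 1-5", datetime(2024, 1, 1, 9, 0)))  # True (a Monday)
```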
7. Monitor Tasks
View execution history and results, then inspect the detailed output of individual tasks.
Component Identification
How Spark Finds Components
When syncing a flow, Spark’s adapter:
- Fetches flow data from the LangFlow API
- Scans nodes in the flow structure
- Identifies components by type:
  - `ChatInput` - user message input
  - `ChatOutput` - bot response output
- Stores component IDs for execution
Finding Component IDs Manually
Method 1: LangFlow UI
- Open your flow in LangFlow
- Click on a component (e.g., ChatInput)
- Look for the component ID in the browser URL or component properties
- Format: `ComponentType-randomId` (e.g., `ChatInput-abc123`)
Method 2: API Inspection
Method 3: Using jq to Extract Components
Component ID Extraction Flow
Manual Component Specification
If auto-detection fails, you can specify components manually (advanced).
Custom Component Types
Limitation: Spark currently only supports ChatInput/ChatOutput components. If your flow uses custom input/output components, it may not work correctly.
Input/Output Mapping
How Spark Maps Data to LangFlow
When Spark executes a flow:
- Input Data → sent to `input_component` (usually ChatInput)
- Flow Execution → LangFlow processes the flow
- Output Data ← retrieved from `output_component` (usually ChatOutput)
Understanding LangFlow Inputs
LangFlow’s ChatInput component accepts various input formats, the simplest being a plain string.
Input Data Format
Spark sends input as a FlowExecuteRequest.
Configuring Schedule Inputs
When creating a schedule, provide input in the format your flow expects.
Input Transformation
The LangFlow adapter automatically wraps your input.
Tweaks and Advanced Configuration
Tweaks allow runtime customization of component parameters.
Future enhancement: Spark may support flow tweaks via schedule configuration for advanced use cases.
Common LangFlow Patterns
Pattern 1: Simple Chat Bot
- input_component: ChatInput node ID
- output_component: ChatOutput node ID
Pattern 2: RAG (Retrieval-Augmented Generation)
- Same as simple chatbot
- Input/output remain at endpoints
- Middle components handled internally
Pattern 3: Multi-Step Agent
- input_component: ChatInput
- output_component: ChatOutput
- Session ID used for memory continuity
Pattern 4: Conditional Flow
- Multiple ChatOutput nodes possible
- Spark captures the active output path
Troubleshooting
Common Issues
Error: Could not find chat input and output components in flow
What it means
Spark couldn’t identify ChatInput/ChatOutput components in your flow.
Common causes
- Flow doesn’t have ChatInput or ChatOutput components
- Components have custom types
- Flow structure changed after sync
- Using a template flow (not in a project)
How to fix
Step 1: Verify components in LangFlow
- Open your flow in LangFlow UI
- Check for ChatInput and ChatOutput nodes
- Ensure they’re connected to your processing chain
Step 2: Add missing components
- Add them to your flow in LangFlow
- Connect them appropriately
- Save and re-sync
Error: Flow {flow_id} not found
What it means
LangFlow API returned 404 for the flow ID.
Common causes
- Flow was deleted in LangFlow
- Wrong flow ID
- Flow moved to different project
- LangFlow database reset
How to fix
Step 1: List available flows
Error: HTTP error 401 or 403
What it means
Authentication failed with the LangFlow API.
Common causes
- Invalid or expired API key
- API key not set in source
- LangFlow authentication settings changed
- CORS or security policy blocking requests
How to fix
Step 1: Generate a new API key
- Open LangFlow UI
- Go to Settings → API Keys
- Create a new key
Step 2: Check that `LANGFLOW_AUTO_LOGIN` is configured correctly.
Flow structure changed after sync
What it means
You modified the flow in LangFlow after syncing to Spark.
Common causes
- Added/removed components
- Changed component IDs
- Renamed flow
- Restructured flow logic
How to fix
Step 1: Re-sync the flow
Prevention
- Re-sync after major flow changes
- Test manually before relying on schedules
- Use version control for flow exports
Execution successful but no output
What it means
LangFlow executed the flow but returned no output data.
Common causes
- ChatOutput not connected to processing chain
- Flow error not captured by LangFlow
- Output component configured incorrectly
- Async execution not completed
How to fix
Step 1: Test in LangFlow UI
- Run the flow manually in LangFlow
- Verify output appears
- Check for any warnings/errors
"result": null→ Flow returned nothing"result": {}→ Empty result"result": {"outputs": []}→ No outputs generated
API Connection Errors
Error: Connection Refused or Timeout
Common causes:
- LangFlow not running
- Wrong URL or port
- Network/firewall issues
- Execution timeout
Debugging Tips
Check LangFlow is accessible:
- Timeout errors → increase `AUTOMAGIK_SPARK_HTTP_TIMEOUT` (default: 30s)
- Connection refused → verify the LangFlow URL and port
- SSL errors → the LangFlow adapter uses `verify=False` for development
Retry Failed Tasks
If a task failed due to temporary issues, retry it.
Troubleshooting Decision Tree
API Endpoint Reference
Base URL Configuration
Endpoints Used by Spark
| Method | Endpoint | Purpose | Spark Usage |
|---|---|---|---|
| GET | /flows/ | List all flows | Discovery/sync |
| GET | /flows/{flow_id} | Get flow details | Component detection |
| GET | /flows/{flow_id}/components/ | Get flow components | Component mapping |
| POST | /run/{flow_id}?stream=false | Execute flow | Task execution |
| GET | /folders/ | List folders (legacy) | Filter templates |
| GET | /projects/ | List projects (modern) | Filter templates |
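The most important call in the table is the execution endpoint. A sketch of building that request with the standard library (payload field names follow the LangFlow 1.x run API; verify against your version):

```python
import json
import urllib.request

def execute_flow_request(base_url: str, flow_id: str, api_key: str,
                         input_value: str) -> urllib.request.Request:
    """Build the POST /run/{flow_id}?stream=false call from the table above."""
    body = json.dumps({
        "input_value": input_value,
        "input_type": "chat",
        "output_type": "chat",
    }).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1/run/{flow_id}?stream=false",
        data=body,
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = execute_flow_request("http://localhost:7860", "my-flow-id", "sk-example",
                           "Generate the daily report")
print(req.get_method(), req.full_url)
```

Sending it with `urllib.request.urlopen(req)` requires a running LangFlow instance and a valid key.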
Request/Response Examples
List Flows
Request: `GET /api/v1/flows/`
Get Flow Details
Request: `GET /api/v1/flows/{flow_id}`
Execute Flow
Request: `POST /api/v1/run/{flow_id}?stream=false`
Advanced Configuration
Flow-specific Settings
When syncing flows, Spark stores these metadata fields.
Session Tracking
LangFlow supports session IDs for conversation continuity:
- Track conversation history
- Debug execution flow
- Correlate multiple task runs
Custom Component Handling
For flows with non-standard components, manual component configuration may be required.
Session Management
LangFlow supports session-based conversations.
Performance Optimization
Retry Configuration
Version Compatibility
Spark Version Support
| Spark Version | LangFlow Versions | Notes |
|---|---|---|
| 0.1.x - 0.5.x | 0.6.x - 1.0.x | Legacy folders endpoint |
| 0.6.x+ | 1.0.x+ | Projects endpoint support |
API Endpoint Evolution
Legacy (LangFlow < 1.0): flows are organized with the `/folders/` endpoint.
Breaking Changes
LangFlow 1.0 Changes:
- `folder_id` → `project_id` (field rename)
- Flow structure changes in some component types
- New authentication methods (still supports `x-api-key`)
Complete Example
A full end-to-end run combines the steps above: verify LangFlow, add it as a source, discover and sync a flow, test execution, then create a schedule and monitor tasks.
Best Practices
Flow Design
- Always include ChatInput and ChatOutput
- Keep flows simple and focused
- Test in LangFlow before syncing
- Use descriptive component names
Scheduling
- Start with manual tests before scheduling
- Use appropriate intervals (avoid every minute)
- Monitor task success rates
- Set up alerts for failures
Maintenance
- Re-sync after flow changes
- Update API keys regularly
- Monitor LangFlow version updates
- Clean up old tasks periodically
Security
- Never commit API keys
- Use environment variables
- Restrict LangFlow network access
- Validate input data
Quick Reference
Essential Commands
LangFlow API Endpoints
Next Steps
Simple Schedule Example
Learn basic scheduling concepts
Hive Integration
Connect Automagik Hive agents
Production Deployment
Deploy Spark to production
CLI Reference
Complete command reference
Congratulations! Your LangFlow workflows are now automated with Spark. Build once, schedule everywhere.

