Sources Commands
Manage workflow sources (LangFlow, Automagik Hive) that provide workflows to Spark.

automagik-spark sources add
Add a new workflow source to Spark.

Usage:

- `--name, -n` (required): Human-readable name for the source
- `--type, -t` (required): Source type (`langflow` or `automagik-agents`)
- `--url, -u` (required): Base URL of the source (e.g., `http://localhost:7860`)
- `--api-key, -k` (required): API key for authentication
- `--status, -s` (optional): Initial status (`active` or `inactive`, default: `active`)
- The command validates the source by checking its health and version endpoints
- If a source with the same URL exists, it will be updated instead
- API keys are encrypted before storage
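For example, registering a local LangFlow instance might look like this (the URL and API key below are placeholders):

```bash
# Register a local LangFlow instance as a workflow source
automagik-spark sources add \
  --name "Local LangFlow" \
  --type langflow \
  --url http://localhost:7860 \
  --api-key "sk-your-key" \
  --status active
```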
automagik-spark sources list
List all configured workflow sources.

Usage:

- `--status, -s` (optional): Filter by status (`active` or `inactive`)
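For instance, to show only active sources:

```bash
automagik-spark sources list --status active
```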
automagik-spark sources update
Update an existing workflow source.

Usage:

- `URL`: Source URL to update
- `--status, -s` (optional): New status (`active` or `inactive`)
- `--api-key, -k` (optional): New API key
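For example, rotating a source's API key (placeholder values):

```bash
# Point at the source by URL and swap in a new key
automagik-spark sources update http://localhost:7860 \
  --api-key "sk-new-key" \
  --status active
```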
automagik-spark sources delete
Delete a workflow source by ID or URL.

Usage:

- `ID_OR_URL`: Source UUID or URL
- `--force, -f`: Force deletion even if workflows exist
- Cannot delete source with workflows: use the `--force` flag or delete workflows first
- Source not found: verify the URL or ID is correct
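A quick sketch of a forced deletion (placeholder URL):

```bash
# Remove the source even though workflows still reference it
automagik-spark sources delete http://localhost:7860 --force
```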
automagik-spark sources run-agent
Run an agent from an `automagik-agents` source.

Usage:

- `SOURCE_ID`: Source UUID
- `AGENT_NAME`: Name of the agent to run
- `--input, -i` (required): Input message for the agent
- `--session-id, -s` (optional): Session ID for conversation history
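For example (the source UUID, agent name, and session ID below are placeholders):

```bash
automagik-spark sources run-agent \
  123e4567-e89b-12d3-a456-426614174000 my-agent \
  --input "Summarize yesterday's failed tasks" \
  --session-id support-chat-42
```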
Workflows Commands
Manage workflows synced from sources.

automagik-spark workflows list
List all synced workflows.

Usage:

- `--folder` (optional): Filter by folder name
automagik-spark workflows sync
Sync workflows from remote sources.

Usage:

- `FLOW_ID` (optional): Specific flow ID to sync
- `--source` (optional): Source URL or ID to sync from (syncs all sources if not specified)
- `--page` (optional): Page number for listing flows (default: 1)
- `--page-size` (optional): Number of flows per page (default: 20)
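For example (the flow UUID is a placeholder):

```bash
# Sync every flow from every configured source
automagik-spark workflows sync

# Sync from one source, 20 flows per page
automagik-spark workflows sync --source http://localhost:7860 --page 1 --page-size 20

# Sync a single flow by ID
automagik-spark workflows sync 123e4567-e89b-12d3-a456-426614174000
```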
automagik-spark workflows run
Run a workflow directly (ad-hoc execution).

Usage:

- `WORKFLOW_ID`: Local workflow ID or remote flow ID
- `--input, -i` (optional): Input data for the workflow (string or JSON)
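For example, passing either a plain string or a JSON payload (placeholder ID):

```bash
# Plain-string input
automagik-spark workflows run 123e4567-e89b-12d3-a456-426614174000 --input "hello"

# JSON input
automagik-spark workflows run 123e4567-e89b-12d3-a456-426614174000 \
  --input '{"message": "hello", "user": "alice"}'
```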
automagik-spark workflows delete
Delete a workflow from Spark.

Usage:

- `WORKFLOW_ID`: Workflow UUID
- Cannot delete workflow with active schedules: Stop or delete schedules first
Schedules Commands
Create and manage workflow schedules.

automagik-spark schedules create
Create a new schedule (interactive command). The command prompts for:

- Workflow selection
- Schedule type (interval, cron, or one-time)
- Schedule expression
- Input data
Interval expressions:

- `5m`: Every 5 minutes
- `30m`: Every 30 minutes
- `1h`: Every hour
- `4h`: Every 4 hours
- `1d`: Every day

Cron expressions:

- `* * * * *`: Every minute
- `*/5 * * * *`: Every 5 minutes
- `0 * * * *`: Every hour
- `0 0 * * *`: Every day at midnight
- `0 8 * * *`: Every day at 8 AM
- `0 8 * * 1-5`: Every weekday at 8 AM

One-time expressions:

- `now`: Run immediately
- `2025-02-15 08:00`: Run at a specific time
- `2025-02-15T08:00:00`: ISO format
automagik-spark schedules list
List all workflow schedules.

automagik-spark schedules update
Update schedule status (pause, resume, or stop).

Usage:

- `SCHEDULE_ID`: Schedule UUID
- `ACTION`: One of `pause`, `resume`, or `stop`

Actions:

- `pause`: Temporarily stop schedule execution (can be resumed)
- `resume`: Resume a paused schedule
- `stop`: Permanently stop the schedule (cannot be resumed)
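For example (placeholder UUID):

```bash
# Pause a schedule, then bring it back later
automagik-spark schedules update 123e4567-e89b-12d3-a456-426614174000 pause
automagik-spark schedules update 123e4567-e89b-12d3-a456-426614174000 resume
```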
automagik-spark schedules set-expression
Update the schedule expression.

Usage:

- `SCHEDULE_ID`: Schedule UUID
- `EXPRESSION`: New schedule expression (interval, cron, or datetime)
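For example, using the expression formats listed under schedules create (the UUID in the shell variable is a placeholder):

```bash
SCHEDULE_ID=123e4567-e89b-12d3-a456-426614174000  # placeholder

# Switch to a 30-minute interval
automagik-spark schedules set-expression "$SCHEDULE_ID" 30m

# Switch to a cron expression: weekdays at 8 AM
automagik-spark schedules set-expression "$SCHEDULE_ID" "0 8 * * 1-5"
```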
automagik-spark schedules set-input
Update the input data for a schedule.

Usage:

- `SCHEDULE_ID`: Schedule UUID
- `INPUT_DATA`: New input data (string or JSON)
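For example (placeholder UUID; the JSON payload is illustrative):

```bash
# Replace the schedule's input with a JSON payload
automagik-spark schedules set-input 123e4567-e89b-12d3-a456-426614174000 \
  '{"message": "daily report"}'
```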
automagik-spark schedules delete
Delete a schedule.

Usage:

- `SCHEDULE_ID`: Schedule UUID
Tasks Commands
View and manage workflow execution tasks.

automagik-spark tasks list
List workflow execution tasks.

Usage:

- `--workflow-id` (optional): Filter by workflow ID
- `--status` (optional): Filter by status (`pending`, `running`, `completed`, `failed`, `error`)
- `--limit` (optional): Limit the number of results (default: 50)
- `--show-logs` (optional): Display task logs
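For example (placeholder workflow UUID):

```bash
# Ten most recent failed tasks for one workflow, with logs
automagik-spark tasks list \
  --workflow-id 123e4567-e89b-12d3-a456-426614174000 \
  --status failed --limit 10 --show-logs
```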
automagik-spark tasks view
View detailed information about a specific task.

Usage:

- `TASK_ID`: Task UUID or prefix
automagik-spark tasks retry
Retry a failed task.

Usage:

- `TASK_ID`: Task UUID or prefix
- Only failed tasks can be retried
- A new task will be created with the same input data
- Retry count limits still apply
automagik-spark tasks create
Create a new task for a workflow.

Usage:

- `WORKFLOW_ID`: Workflow UUID or prefix
- `--input-data` (optional): Input data for the task (JSON)
- `--max-retries` (optional): Maximum retry attempts (default: 3)
- `--run` (optional): Run the task immediately (synchronous)

With the `--run` flag, the task executes immediately and the command waits for the result; without it, the task is queued for a worker to pick up.
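For example (placeholder workflow UUID):

```bash
# Queue a task with JSON input and up to 5 retries
automagik-spark tasks create 123e4567-e89b-12d3-a456-426614174000 \
  --input-data '{"message": "hello"}' --max-retries 5

# Create the task and run it immediately, waiting for the result
automagik-spark tasks create 123e4567-e89b-12d3-a456-426614174000 --run
```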
Worker Commands
Manage Celery workers that execute scheduled tasks.

automagik-spark worker start
Start the Celery worker and beat scheduler.

Usage:

- `--threads` (optional): Number of worker threads (default: 2)
- `--daemon` (optional): Run in background mode
- Foreground mode: Shows logs directly in terminal, blocks until stopped (Ctrl+C)
- Daemon mode: Runs in background, logs to a file, can be stopped with `worker stop`
- Both worker and beat scheduler start together
- Worker processes tasks from the Celery queue
- Beat scheduler monitors schedules and creates tasks
Log files:

- Default: `/var/log/automagik/worker.log` and `/var/log/automagik/beat.log`
- Configurable via the `AUTOMAGIK_SPARK_WORKER_LOG` environment variable
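For example (the custom log path is illustrative):

```bash
# Foreground, with 4 worker threads; logs stream to the terminal
automagik-spark worker start --threads 4

# Background daemon, writing logs to a custom path
AUTOMAGIK_SPARK_WORKER_LOG=/tmp/spark-worker.log \
  automagik-spark worker start --daemon
```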
automagik-spark worker stop
Stop the running worker and beat scheduler.

- Sends a graceful SIGTERM first, then SIGKILL if needed
- Only works for daemon mode workers
- Foreground workers are stopped with Ctrl+C
automagik-spark worker status
Show worker and scheduler status.

- Shows process IDs for both worker and beat scheduler
- Displays active, scheduled, and reserved tasks
- Uses Celery’s inspection API
automagik-spark worker logs
View worker logs.

Usage:

- `--tail, -t`: Show the last N lines (default: 50)
- `--lines, -n`: Number of lines to show with `--tail` (default: 50)
- `--follow, -f`: Follow log output in real time (like `tail -f`)
- Log location: `/var/log/automagik/worker.log` (or configured path)
- Use `--follow` for debugging active workers
- Press Ctrl+C to stop following
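For example (this sketch assumes `--tail` acts as a flag whose line count comes from `--lines`):

```bash
# Show the last 100 lines
automagik-spark worker logs --tail --lines 100

# Follow the log in real time; Ctrl+C to stop
automagik-spark worker logs --follow
```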
API Commands
Manage the FastAPI REST server.

automagik-spark api start
Start the REST API server.

Usage:

- `--host` (optional): Host to bind to (default: `0.0.0.0`, overrides `AUTOMAGIK_HOST`)
- `--port` (optional): Port to bind to (default: `8883`, overrides `AUTOMAGIK_PORT`)
- `--reload` (optional): Enable auto-reload on code changes (development only)
- The API server must be running for REST API requests to work
- Use `--reload` only in development (slower performance)
- API key authentication is required (set via `AUTOMAGIK_API_KEY`)
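For example (the API key value is a placeholder):

```bash
# API key that clients must present
export AUTOMAGIK_API_KEY="your-api-key"

# Development server on localhost with auto-reload
automagik-spark api start --host 127.0.0.1 --port 8883 --reload
```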
Database Commands
Manage database schema and migrations.

automagik-spark db init
Initialize the database and migration system.

- Only needs to be run once per project
- Creates the `migrations/` directory and `alembic.ini` file
- Skips if already initialized
automagik-spark db migrate
Generate a new database migration.

Usage:

- `--message, -m` (optional): Description of the migration
- Auto-generates migration by comparing models to database schema
- Review generated migration before applying
- Use descriptive messages for easier tracking
automagik-spark db upgrade
Apply all pending database migrations.

- Applies all unapplied migrations
- Idempotent - safe to run multiple times
- Always backup before upgrading in production
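Putting the db commands together, a typical workflow might look like this (the migration message is illustrative):

```bash
# One-time setup per project
automagik-spark db init

# Generate a migration, review the file it creates, then apply it
automagik-spark db migrate --message "add source status column"
automagik-spark db upgrade
```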
automagik-spark db downgrade
Revert the last database migration.

- Only reverts ONE migration
- May result in data loss
- Test downgrade scripts thoroughly
automagik-spark db clear
Clear all data from the database (preserves schema).

- DESTRUCTIVE: Deletes ALL data from all tables
- Schema and migrations remain intact
- Use for development/testing only
- No confirmation prompt - use carefully

