Deploy Your First Agent Team in 5 Minutes
This guide will walk you through creating a multi-agent team with Hive, complete with memory, RAG, and MCP integration.
Step 1: Initialize Hive
# Initialize Hive workspace
hive init
# This creates:
# - .hive/ directory
# - agents.yaml (your agent configuration)
# - .env (API keys)
Step 2: Define Your Team
Edit agents.yaml to define your team:
version: "1.0"

agents:
  - name: researcher
    role: "Research information and gather data"
    llm: claude
    model: "claude-3-5-sonnet-20241022"
    memory: true
    tools:
      - web-search
      - wikipedia

  - name: coder
    role: "Write and review code"
    llm: claude
    model: "claude-3-5-sonnet-20241022"
    memory: true
    tools:
      - bash
      - edit-file
      - read-file

  - name: analyst
    role: "Analyze data and create reports"
    llm: claude
    model: "claude-3-5-sonnet-20241022"
    memory: true
    tools:
      - python
      - data-viz

memory:
  type: vector
  provider: qdrant
  persist: true
  path: ./.hive/memory

mcp:
  enabled: true
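If you want a quick programmatic sanity check on this file before starting Hive, you can validate the parsed structure yourself. The sketch below is illustrative and not part of the Hive CLI; it assumes the config has already been loaded into a dict (e.g. with `yaml.safe_load`) and checks only the fields shown in the example above.

```python
# Illustrative sketch: validate agent definitions from agents.yaml after
# parsing (e.g. with yaml.safe_load). Not part of the Hive CLI itself.
REQUIRED_FIELDS = {"name", "role"}

def validate_agents(config: dict) -> list[str]:
    """Return agent names; raise ValueError if a required field is missing."""
    names = []
    for agent in config.get("agents", []):
        missing = REQUIRED_FIELDS - agent.keys()
        if missing:
            raise ValueError(f"agent {agent.get('name', '?')} is missing {sorted(missing)}")
        names.append(agent["name"])
    return names

config = {
    "agents": [
        {"name": "researcher", "role": "Research information and gather data"},
        {"name": "coder", "role": "Write and review code"},
        {"name": "analyst", "role": "Analyze data and create reports"},
    ]
}
print(validate_agents(config))  # ['researcher', 'coder', 'analyst']
```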
Step 3: Set API Keys
Edit .env with your credentials:
# Required
ANTHROPIC_API_KEY=sk-ant-your-key-here

# Optional (based on your needs)
OPENAI_API_KEY=sk-your-openai-key
GOOGLE_API_KEY=your-google-key

# Memory (if using Qdrant)
QDRANT_URL=http://localhost:6333
Start with in-memory storage for quick testing! Just set provider: in-memory in agents.yaml.
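It is worth failing fast if a required key is absent. The snippet below is a minimal sketch of such a check using `os.environ` (Hive's own startup validation may differ); the key names come from the `.env` example above.

```python
# Illustrative sketch: fail fast when a required API key is not set.
# Key names follow the .env example above.
import os

REQUIRED = ["ANTHROPIC_API_KEY"]

def check_keys(env) -> list[str]:
    """Return the required keys that are missing or empty."""
    return [key for key in REQUIRED if not env.get(key)]

missing = check_keys(os.environ)
if missing:
    print(f"Missing required keys: {missing}")
```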
Step 4: Start Hive
# Start Hive with hot-reload enabled
hive start --hot-reload
# You should see:
# 🐝 Hive v1.0.0
# ✓ Agents spawned: researcher, coder, analyst
# ✓ Memory system initialized
# ✓ Hot-reload enabled
Step 5: Give Your Team a Task
# Assign a task to your research agent
hive task assign researcher "Research the latest trends in AI agents and summarize key findings"
# Or use the team coordinator
hive task create "Build a simple web scraper for tech news" --assign-to coder
Real-World Example: Research & Development Workflow
Research Phase
hive task assign researcher \
"Research best practices for building REST APIs in Python"
The researcher agent:
Searches the web
Reads documentation
Stores findings in memory
Development Phase
hive task assign coder \
"Build a REST API based on the research findings" \
--context "Use the research from the previous task"
The coder agent:
Accesses research from memory
Writes implementation code
Creates tests
Analysis Phase
hive task assign analyst \
"Analyze the API implementation and create performance recommendations"
The analyst agent:
Reviews the code
Analyzes architecture
Generates report
Common Agent Team Configurations
Content Creation Team
agents:
  - name: researcher
    role: "Research topics and gather sources"
    tools: [web-search, wikipedia]
  - name: writer
    role: "Create engaging content"
    tools: [write-file]
  - name: editor
    role: "Review and improve content"
    tools: [read-file, edit-file]
Using Memory and RAG
Agents automatically remember context:
# Agent stores information
hive task assign researcher "Research Python async best practices"
# Later, another agent can access this knowledge
hive task assign coder "Write async code using best practices" \
--use-memory
# Query memory directly
hive memory search "async best practices"
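Under the hood, `hive memory search` is a vector similarity lookup. Hive's actual implementation uses embeddings and the configured backend (e.g. Qdrant), but the idea can be sketched with a toy bag-of-words cosine similarity over stored findings:

```python
# Toy sketch of vector-memory search: bag-of-words vectors and cosine
# similarity stand in for real embeddings and a vector DB like Qdrant.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

memory = [
    "Python async best practices prefer asyncio.gather for fan-out",
    "REST API design version your endpoints",
]

def search(query: str) -> str:
    """Return the stored entry most similar to the query."""
    return max(memory, key=lambda doc: cosine(vectorize(query), vectorize(doc)))

print(search("async best practices"))
```

A real memory backend replaces `vectorize` with an embedding model and `max(...)` with an approximate nearest-neighbor index, but the query contract is the same.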
Hot-Reload in Action
Update agents.yaml while Hive is running:
agents:
  - name: researcher
    role: "Research information and gather data"
    llm: claude
    temperature: 0.7  # Added parameter
    tools:
      - web-search
      - wikipedia
      - arxiv  # Added new tool
Save the file - changes apply instantly without restart!
# Check updated configuration
hive agents list
# researcher now has arxiv tool available
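Hot-reload boils down to detecting that agents.yaml changed on disk and re-applying the configuration. Hive's watcher is likely event-based, but the core contract can be sketched with a simple modification-time check (this class is illustrative, not Hive's code):

```python
# Illustrative sketch of change detection for hot-reload: compare the
# config file's modification time between checks. A real watcher would
# typically use filesystem events instead of polling.
import os

class ConfigWatcher:
    """Detects changes to a file by comparing modification times."""

    def __init__(self, path: str):
        self.path = path
        self.last_mtime = os.path.getmtime(path)

    def changed(self) -> bool:
        mtime = os.path.getmtime(self.path)
        if mtime != self.last_mtime:
            self.last_mtime = mtime
            return True  # caller re-parses the config and respawns agents
        return False
```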
Multi-Agent Collaboration
Have agents work together on complex tasks:
# Create a workflow
hive workflow create product-launch.yaml
product-launch.yaml:
name: product-launch
description: Complete product launch workflow

steps:
  - agent: researcher
    task: "Research target market and competitors"
    output: market-research

  - agent: writer
    task: "Create product description based on research"
    input: market-research
    output: product-copy

  - agent: analyst
    task: "Analyze product positioning and suggest improvements"
    input: product-copy
    output: analysis-report

parallel:
  - agent: coder
    task: "Build landing page"
  - agent: writer
    task: "Write blog announcement"
Execute the workflow:
hive workflow run product-launch
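The sequential part of a workflow like this is straightforward to reason about: each step's result is stored under its `output` name, and any later step naming it as `input` receives it as context. Here is a minimal sketch of that chaining; `run_agent` and `fake_agent` are hypothetical stand-ins, not Hive APIs:

```python
# Illustrative sketch of sequential step chaining as in product-launch.yaml:
# each step's result is saved under its `output` name and passed to the
# next step that declares it as `input`. run_agent is a hypothetical stand-in.
def run_workflow(steps, run_agent):
    artifacts = {}
    for step in steps:
        context = artifacts.get(step.get("input"))
        result = run_agent(step["agent"], step["task"], context)
        if "output" in step:
            artifacts[step["output"]] = result
    return artifacts

steps = [
    {"agent": "researcher", "task": "Research target market", "output": "market-research"},
    {"agent": "writer", "task": "Create product description",
     "input": "market-research", "output": "product-copy"},
]

def fake_agent(name, task, context):
    # Stand-in for a real LLM-backed agent call.
    return f"{name} did '{task}' using {context!r}"

print(run_workflow(steps, fake_agent)["product-copy"])
```

Steps under `parallel` have no data dependency on each other, so a runner can dispatch them concurrently and join before continuing.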
Pro Tips
Begin with 2-3 specialized agents, then add more as needed:

# Start with
agents:
  - name: assistant
    role: "General purpose helper"

# Then expand to
agents:
  - name: researcher
  - name: coder
  - name: writer
  - name: analyst
Enable memory for agents that need context:

- name: researcher
  memory: true   # Needs to remember findings
- name: calculator
  memory: false  # Pure computation, no context needed
Next Steps
Advanced Workflows
Learn to:
Create complex multi-step workflows
Use conditional logic
Handle errors and retries
Custom Tools
Build your own tools:
MCP tool integration
Custom Python functions
External API connectors
Memory & RAG
Deep dive into:
Vector databases
Knowledge graphs
Semantic search
Production Deployment
Scale to production:
Docker deployment
Kubernetes orchestration
Monitoring and logging
Troubleshooting
Check API keys and connectivity:
hive config validate
hive agents test researcher
Verify the memory backend is running:
# If using Qdrant
docker run -p 6333:6333 qdrant/qdrant
# Or switch to in-memory
# In agents.yaml: provider: in-memory
Ensure file watching is enabled:
hive start --hot-reload --verbose
Congratulations! 🎉 You’ve deployed your first AI agent team with Hive. Your agents are ready to collaborate on complex tasks!
Join the Community: share your agent configurations and learn from others.