
Overview

Feature development with Forge follows the Vibe Coding++™ approach: you plan and orchestrate, AI agents execute, you review and ship. This workflow ensures you ship production-ready code that you understand and can maintain.

The Feature Development Cycle


Step 1: Plan Your Feature

Before jumping into code, plan your feature at a high level.

Example: Building a Real-Time Notification System

Feature Goal: Add real-time notifications with WebSocket support, browser notifications, and notification history.

Initial Planning Questions:
  • What are the core components needed?
  • What are the technical dependencies?
  • What’s the order of implementation?
  • Are there any risks or unknowns?

Step 2: Break Down Into Tasks

Decompose your feature into small, focused tasks. Each task should be completable in 15-30 minutes of AI execution.
Golden Rule: If you can’t describe a task in 1-2 sentences, it’s too big. Break it down further.

Example Task Breakdown

Epic: Real-Time Notification System

Tasks:
  1. Design notification data model and database schema
  2. Create WebSocket server with Socket.io
  3. Build notification API endpoints (CRUD)
  4. Implement browser notification permissions
  5. Create notification UI component
  6. Add notification history view
  7. Write integration tests for WebSocket
  8. Add notification settings panel
  9. Document notification API

Step 3: Create Task Cards in Forge

Open Forge and create your task cards via the UI, the CLI, or MCP. For example, using the UI:
# Start Forge UI
npx automagik-forge

# Open http://localhost:3000
# Click "New Task" button
# Fill in task details

Task Card Template:
  • Title: Short, action-oriented (e.g., “Create WebSocket server”)
  • Description: Detailed requirements and acceptance criteria
  • Labels: Type (feature, refactor, etc.) and priority
  • Dependencies: Link related tasks

Step 4: Choose Your Agents Strategically

Different AI agents have different strengths. Choose based on the task type:

Backend/API Work

Best Agents:
  • Claude Code: Excellent for complex logic
  • Cursor CLI: Great for boilerplate
  • Gemini: Fast iterations

Frontend/UI Work

Best Agents:
  • Cursor CLI: Modern React patterns
  • Claude Code: Accessible UI
  • Gemini: Quick prototypes

Database/Schema

Best Agents:
  • Claude Code: Complex migrations
  • GPT-4 Codex: SQL optimization
  • Gemini: Schema design

Testing

Best Agents:
  • Claude Code: Comprehensive tests
  • Cursor CLI: Integration tests
  • Specialized “test-writer” agent

Example: WebSocket Server Task

# Try multiple agents to compare approaches
forge task create \
  --title "Create WebSocket server" \
  --agent claude-code

# Create alternative attempt with different agent
forge task fork task-123 --agent gemini

Step 5: Experiment & Iterate

Forge’s git worktree isolation lets you experiment fearlessly.

Running Multiple Attempts

1. Start First Attempt

# Claude Code attempts the task
forge task start task-123 --agent claude-code

# Forge creates: .forge/worktrees/task-123-attempt-1/
2. Create Alternative Approach

# Try Gemini's approach
forge task fork task-123 --agent gemini

# Forge creates: .forge/worktrees/task-123-attempt-2/
3. Monitor Progress

# Watch real-time execution
forge task watch task-123

# Or check status
forge task status task-123

When to Create Multiple Attempts

Use multiple attempts when:
  • Task is complex or critical
  • You want to compare different approaches
  • Previous attempt didn’t meet requirements
  • Learning which agent works best for task type
Single attempt is fine when:
  • Task is straightforward
  • You trust the chosen agent for this task type
  • Time is constrained

Step 6: Review and Compare Results

Never merge without understanding what changed.

Review Checklist

Code Quality:
  • Does the code follow project conventions?
  • Are there any code smells or anti-patterns?
  • Is error handling comprehensive?
  • Are edge cases covered?

Functionality:
  • Does it meet all acceptance criteria?
  • Are there any missing features?
  • Does it handle the happy path AND edge cases?
  • Are there any security concerns?

Testing:
  • Are there tests for new functionality?
  • Do existing tests still pass?
  • Is test coverage adequate?
  • Are tests meaningful (not just for coverage)?

Documentation:
  • Is the code self-documenting?
  • Are complex parts commented?
  • Is API documentation updated?
  • Are there usage examples?

Comparing Multiple Attempts

# Side-by-side comparison
forge task compare task-123

# See detailed diff between attempts
forge diff \
  .forge/worktrees/task-123-attempt-1 \
  .forge/worktrees/task-123-attempt-2

# Run tests on both attempts
forge task test task-123-attempt-1
forge task test task-123-attempt-2

Example Comparison:

| Aspect        | Claude Code Attempt                | Gemini Attempt       | Winner   |
| ------------- | ---------------------------------- | -------------------- | -------- |
| Code Quality  | Comprehensive, well-structured     | Simple, concise      | Claude ✅ |
| Performance   | Optimized with connection pooling  | Basic implementation | Claude ✅ |
| Testing       | Full test suite                    | Basic tests          | Claude ✅ |
| Documentation | Extensive JSDoc                    | Minimal comments     | Claude ✅ |
| Time Taken    | 8 minutes                          | 4 minutes            | Gemini ⚡ |
Decision: Choose Claude’s approach for production quality.

Step 7: Merge & Ship

Once you’re satisfied with an attempt, merge it to your main branch.
# Review one final time
forge task review task-123-attempt-1

# Merge to main
forge task merge task-123-attempt-1

# Clean up other attempts
forge task cleanup task-123

Merge Best Practices

Before merging, always:
  1. Run the full test suite
  2. Check for conflicts with main branch
  3. Verify no secrets or sensitive data committed
  4. Update CHANGELOG if applicable

Step 8: Document & Test Integration

After merging individual tasks, test how they work together.

Integration Testing

# Create integration test task
forge task create \
  --title "Integration test: Notification system end-to-end" \
  --description "Test full flow: send notification → WebSocket → browser → history" \
  --labels "testing,integration" \
  --agent claude-code
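The flow in that task description can be pinned down with assertions before any agent starts. A hypothetical in-memory stand-in; a real integration test would drive the live WebSocket server and a browser context, and `deliver` here is an illustrative placeholder, not a Forge or project API:

```javascript
// Hypothetical in-memory stand-in for the real pipeline:
// send notification → transport → history.
const history = [];

function deliver(notification) {
  const delivered = { ...notification, deliveredAt: Date.now() };
  history.push(delivered); // history view reads from here
  return delivered;
}

deliver({ userId: "u1", title: "Deploy complete" });
deliver({ userId: "u1", title: "New comment" });

console.log(history.length); // 2 — both notifications reached history
```

Writing the expected end state first gives the agent (and your Step 6 review) an unambiguous definition of "done" for the integration task.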

Documentation Task

# Document the feature
forge task create \
  --title "Document notification API and usage" \
  --description "Update API docs, add usage examples, create user guide" \
  --labels "documentation" \
  --agent gemini  # Gemini is fast for docs

Real-World Example: Complete Feature

Here’s a complete workflow for building a user dashboard:
1. Day 1 - Planning & Backend

# Morning: Plan and create tasks
forge task create --title "Design dashboard data model" --agent claude-code
forge task create --title "Create dashboard API endpoints" --agent claude-code
forge task create --title "Add caching layer with Redis" --agent gemini

# Afternoon: Execute and review
forge task start-batch task-1 task-2 task-3
forge task review-all
forge task merge-selected task-1 task-2  # Merge what's ready
2. Day 2 - Frontend

# Morning: UI components
forge task create --title "Build dashboard layout component" --agent cursor-cli
forge task create --title "Create chart components" --agent cursor-cli
forge task create --title "Add real-time data updates" --agent claude-code

# Try multiple approaches for charts
forge task fork task-5 --agent gemini  # Compare approaches
3. Day 3 - Polish & Ship

# Morning: Testing and docs
forge task create --title "Write dashboard integration tests" --agent claude-code
forge task create --title "Add dashboard user guide" --agent gemini

# Afternoon: Final review and ship
forge task review-all
forge task merge-all-approved

# Create PR
git checkout -b feature/user-dashboard
git push origin feature/user-dashboard
gh pr create --title "Add user dashboard with real-time charts"

Pro Tips for Feature Development

Build the simplest version first, then enhance:
# Iteration 1: Basic functionality
forge task create --title "Basic dashboard - static data"

# Iteration 2: Add real-time
forge task create --title "Add real-time data updates"

# Iteration 3: Polish
forge task create --title "Add animations and loading states"

Link tasks that must be completed in order:
forge task create --title "Create API" --id api-task
forge task create --title "Build UI" --depends-on api-task

Good labeling makes filtering and tracking easier:
--labels "feature,frontend,priority:high,sprint-12"

Never merge half-finished work. Use Forge’s isolation:
  • Each task in its own worktree
  • Only merge when tests pass
  • Review every change before merging

Common Pitfalls to Avoid

Don’t Do This:

  1. Creating Tasks That Are Too Big
    • ❌ “Build entire authentication system”
    • ✅ “Create user registration endpoint”
  2. Not Reviewing Before Merging
    • ❌ Blindly merging AI-generated code
    • ✅ Understanding every change
  3. Using Only One Agent
    • ❌ Always using the same agent
    • ✅ Experimenting to find best fit
  4. Forgetting Tests
    • ❌ “I’ll add tests later”
    • ✅ Tests are part of the feature
  5. Poor Task Descriptions
    • ❌ “Fix the thing”
    • ✅ “Add validation to email field in signup form”

Remember: Vibe Coding++™ means you’re always in control. The AI agents are powerful tools, but you’re the maestro orchestrating them to create production-ready features.