
Overview

Each Forge task can have multiple attempts - different executions with different AI agents, prompts, or approaches. This is Forge’s superpower: experiment until you find what works!

What Are Attempts?

An attempt is a single execution of a task by an AI coding agent in an isolated Git worktree.
Task: "Add user authentication"
├── Attempt 1: Claude Sonnet → Too complex
├── Attempt 2: Gemini Flash → Missing edge cases
├── Attempt 3: Claude Haiku → Perfect! ✅
└── Result: You choose Attempt 3 to merge
Each attempt runs in complete isolation - no conflicts, no mess!

Creating Attempts

First Attempt (via UI)

1. Create Task

Create a task in Forge with a title and description.

2. Click 'Start Task'

Click the task card, then the Start Task button.

3. Choose Agent

Select your AI coding agent (Claude, Gemini, etc.).

4. Monitor Progress

Watch real-time logs as the agent works.

Additional Attempts (via CLI)

# Try the same task with a different agent
forge task fork 1 --llm gemini

# Try with a specialized agent profile
forge task fork 1 --llm claude --agent "test-writer"

# Try with different context
forge task fork 1 --llm cursor --description "Focus on performance"

Via MCP

You: "Try that authentication task again but with Gemini this time"

Claude Code: I'll create a new attempt with Gemini...

[Uses MCP to fork task]

Attempt #2 started with Gemini

Attempt Lifecycle

Status Explained

| Status       | Meaning                     | Actions Available        |
| ------------ | --------------------------- | ------------------------ |
| Created      | Attempt queued, not started | Start, Delete            |
| Running      | Agent is working            | Monitor, Cancel          |
| Completed    | Finished successfully       | Review, Merge, Fork      |
| Failed       | Execution failed            | Retry, View Logs         |
| Under Review | Awaiting your approval      | Approve, Reject, Compare |
| Merged       | Changes merged to main      | View, Rollback           |
| Rejected     | You decided not to use it   | Delete, Retry            |
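The table can be read as a small lookup of which actions are valid in which status. A minimal sketch, transcribed directly from the table (the `can` helper is illustrative, not part of Forge):

```python
# Allowed actions per attempt status, transcribed from the table above.
ACTIONS = {
    "Created": {"Start", "Delete"},
    "Running": {"Monitor", "Cancel"},
    "Completed": {"Review", "Merge", "Fork"},
    "Failed": {"Retry", "View Logs"},
    "Under Review": {"Approve", "Reject", "Compare"},
    "Merged": {"View", "Rollback"},
    "Rejected": {"Delete", "Retry"},
}

def can(status: str, action: str) -> bool:
    """Check whether an action is valid for an attempt in the given status."""
    return action in ACTIONS.get(status, set())
```

For example, `can("Running", "Cancel")` holds, while merged attempts cannot be deleted.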

Running Attempts

Sequential Execution

# Run one at a time
forge task start 1 --llm claude
# Wait for completion...

forge task start 1 --llm gemini
# Wait for completion...

Parallel Execution

# Run multiple attempts simultaneously!
forge task start 1 --llm claude &
forge task start 1 --llm gemini &
forge task start 1 --llm cursor &

# All three run in isolated worktrees
wait
Parallel execution is perfect for comparing approaches quickly!
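If you drive Forge from a script rather than the shell, the same fan-out can be done with a thread pool. A sketch under stated assumptions: `run_attempt` is a hypothetical wrapper, and harmless placeholder commands stand in for the real `forge task start` invocations so the snippet runs anywhere:

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess
import sys

def run_attempt(cmd: list[str]) -> int:
    """Run one attempt command to completion and return its exit code."""
    return subprocess.run(cmd).returncode

# Each entry would be a real `forge task start ... --llm ...` command;
# placeholders keep this sketch runnable without Forge installed.
commands = [
    [sys.executable, "-c", "pass"],  # e.g. ["forge", "task", "start", "1", "--llm", "claude"]
    [sys.executable, "-c", "pass"],  # e.g. ["forge", "task", "start", "1", "--llm", "gemini"]
]

# Fan out: all attempts start concurrently; map() waits for every result,
# mirroring the shell's `&` + `wait` pattern.
with ThreadPoolExecutor() as pool:
    exit_codes = list(pool.map(run_attempt, commands))
```

This mirrors the shell version: submit everything, then block until all attempts report an exit code.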

Monitoring Attempts

View Attempt Status

# List all attempts for a task
forge task attempts 1

# Output:
# Attempt #1 (claude)     [Completed]  5 minutes ago
# Attempt #2 (gemini)     [Running]    Started 30s ago
# Attempt #3 (cursor)     [Failed]     2 minutes ago

Real-Time Logs

# Follow logs as agent works
forge task logs 1 --follow

# View specific attempt
forge task logs 1 --attempt 2 --follow

Web UI Monitoring

The Forge UI shows live progress:
  • Current file being edited
  • Lines added/removed
  • Test results (if running)
  • Agent’s thinking process

Comparing Attempts

See Comparing Results for a detailed guide. Quick comparison:
# Compare two attempts
forge diff 1-claude 1-gemini

# Compare all attempts
forge task compare 1

# See just the stats
forge task compare 1 --stats-only
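Conceptually, a per-attempt diff stat is just a count of added and removed lines between two versions of a file. A minimal illustration with Python's difflib (not Forge's actual implementation):

```python
import difflib

def diff_stats(old: str, new: str) -> tuple[int, int]:
    """Count lines added and removed between two versions of a file."""
    added = removed = 0
    for line in difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm=""):
        # Skip the "---"/"+++" file headers; count real +/- lines.
        if line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1
    return added, removed

# One line changed (counts as -1/+1) and one line appended.
stats = diff_stats("a\nb\nc\n", "a\nB\nc\nd\n")
```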

Specialized Agent Profiles

Use the same task with different “personas”:

Test Writer

forge task fork 1 \
  --llm claude \
  --agent "test-writer" \
  --description "Focus on comprehensive test coverage"

Security Expert

forge task fork 1 \
  --llm gemini \
  --agent "security-expert" \
  --description "Add input validation and sanitization"

Performance Optimizer

forge task fork 1 \
  --llm cursor \
  --agent "performance-optimizer" \
  --description "Optimize for speed, add caching"

Documentation Writer

forge task fork 1 \
  --llm claude \
  --agent "docs-writer" \
  --description "Add comprehensive documentation and examples"
Specialized agents are just different system prompts applied to the base LLM!

Attempt Metadata

Each attempt stores:
{
  "id": "attempt-abc123",
  "taskId": "task-1",
  "llm": "claude",
  "agent": "default",
  "status": "completed",
  "worktreePath": ".forge/worktrees/task-1-claude",
  "startedAt": "2024-01-15T10:30:00Z",
  "completedAt": "2024-01-15T10:35:22Z",
  "duration": 322000,
  "filesChanged": 8,
  "linesAdded": 245,
  "linesRemoved": 12,
  "cost": {
    "inputTokens": 15420,
    "outputTokens": 3890,
    "totalCost": 0.23
  }
}
View metadata:
forge task attempt-info 1 --attempt 2
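Note that duration is in milliseconds and should agree with the two timestamps. A quick consistency check in Python, using only the fields shown above:

```python
from datetime import datetime
import json

# The relevant fields from the metadata record shown above.
record = json.loads("""{
  "startedAt": "2024-01-15T10:30:00Z",
  "completedAt": "2024-01-15T10:35:22Z",
  "duration": 322000
}""")

def parse(ts: str) -> datetime:
    """Parse an ISO-8601 timestamp with a trailing Z."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# 10:30:00 to 10:35:22 is 5m22s = 322 s = 322,000 ms.
elapsed_ms = (parse(record["completedAt"]) - parse(record["startedAt"])).total_seconds() * 1000
```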

Handling Failed Attempts

View Failure Reason

# Check logs
forge task logs 1 --attempt 3

# Get error summary
forge task attempt-info 1 --attempt 3 --errors-only

Common Failure Reasons

Issue: Agent generated invalid code
Solutions:
  • Retry with more specific instructions
  • Try a different agent
  • Add example code to the task description

Issue: Tests didn’t pass
Solutions:
  • Check the test output in the logs
  • Add “make tests pass” to the description
  • Use the --agent "test-aware" profile

Issue: Agent took too long
Solutions:
  • Increase the timeout: --timeout 600
  • Break the task into smaller pieces
  • Use a faster agent (Haiku, Flash)

Issue: LLM API failed
Solutions:
  • Check that the API key is valid
  • Verify rate limits
  • Check network connectivity
  • Retry after a few minutes
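For transient API failures, "retry after a few minutes" is usually implemented as exponential backoff. A generic sketch of the pattern (this is not a Forge feature, just the standard technique):

```python
import time

def retry_with_backoff(fn, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call fn, retrying on failure with exponentially growing delays."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of retries: surface the error
            sleep(base_delay * 2 ** i)  # 1s, 2s, 4s, ...

# Demo: a call that fails twice (e.g. rate limited), then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

delays = []  # capture the sleeps instead of actually waiting
result = retry_with_backoff(flaky, sleep=delays.append)
```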

Retry Failed Attempt

# Retry with same configuration
forge task retry 1 --attempt 3

# Retry with modifications
forge task retry 1 --attempt 3 \
  --llm gemini \
  --description "Try simpler approach"

Cost Tracking

View Attempt Costs

# Cost for specific attempt
forge task cost 1 --attempt 2

# Output:
# Input tokens:  15,420 ($0.046)
# Output tokens:  3,890 ($0.058)
# Total cost:            $0.104

# Total cost for all attempts
forge task cost 1 --all

# Output:
# Attempt 1 (claude):  $0.234
# Attempt 2 (gemini):  $0.104
# Attempt 3 (cursor):  $0.187
# Total:               $0.525
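The per-attempt math is just token counts times per-token rates. The rates below are assumptions chosen to roughly reproduce the sample output above (about $3 per million input tokens and $15 per million output tokens); real pricing depends on the model:

```python
# Assumed USD rates per token; actual pricing varies by model.
INPUT_RATE = 3.00 / 1_000_000    # ~$3 per million input tokens
OUTPUT_RATE = 15.00 / 1_000_000  # ~$15 per million output tokens

def attempt_cost(input_tokens: int, output_tokens: int) -> float:
    """Total cost of one attempt at the assumed rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Token counts from the sample attempt above.
total = attempt_cost(15_420, 3_890)  # ~$0.104
```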

Cost Optimization Strategies

Start Cheap

Use fast, cheap models first:
  • Gemini Flash (free tier!)
  • Claude Haiku
  • GPT-3.5 Turbo
Save expensive models for complex tasks

Cancel Bad Attempts

If the agent is heading in the wrong direction, cancel early:
forge task cancel 1 --attempt 2
This saves tokens and money.

Reuse Good Results

Once you find an approach that works, save it as a template:
forge template save 1 --attempt 2 \
  --name "user-auth-pattern"

Track Spending

Monitor total costs:
forge cost summary --month january
Set budgets per project

Attempt Best Practices

1. Start with One Agent

Don’t create five attempts immediately. Start with your preferred agent (Claude or Gemini).

2. Review Before More Attempts

If the first attempt is close but not perfect, fork it with specific feedback rather than starting fresh.

3. Use Different Agents for Different Strengths

  • Claude: complex logic, architecture
  • Gemini: fast iterations, simple tasks
  • Cursor: UI/UX-focused work
  • Local models: privacy-sensitive work

4. Clean Up Rejected Attempts

forge task cleanup 1 --remove-rejected
Keeps your workspace tidy.

Advanced: Attempt Hooks

Run custom scripts on attempt events:
.forge/hooks.yaml
on_attempt_complete:
  - run: "npm test"
  - run: "npm run lint"
  - notify: "slack"

on_attempt_fail:
  - run: "cat logs/error.log"
  - notify: "email"
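Conceptually, the hook runner just walks the list for the fired event and executes each entry in order. A simplified sketch, with the config inlined as a dict instead of parsed from hooks.yaml and a harmless echo standing in for the real commands (the `fire` helper is illustrative, not Forge's implementation):

```python
import subprocess

# Inlined equivalent of the hooks.yaml above; a real runner would parse
# the YAML file instead.
HOOKS = {
    "on_attempt_complete": [
        {"run": "echo tests-passed"},  # stand-in for "npm test"
        {"notify": "slack"},
    ],
}

def fire(event: str, notify=print) -> list[str]:
    """Execute each hook registered for the event; return outputs in order."""
    outputs = []
    for hook in HOOKS.get(event, []):
        if "run" in hook:
            out = subprocess.run(hook["run"], shell=True,
                                 capture_output=True, text=True).stdout
            outputs.append(out.strip())
        elif "notify" in hook:
            notify(f"notify via {hook['notify']}")
            outputs.append(hook["notify"])
    return outputs

results = fire("on_attempt_complete", notify=lambda msg: None)
```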

Troubleshooting

Error: “Maximum attempts reached for task”
Solution:
  • The default limit is 10 attempts per task
  • Clean up old attempts: forge task cleanup 1
  • Or increase the limit: forge config set max-attempts 20

Error: “Worktree path already exists”
Solution:
# Clean up orphaned worktrees
forge worktree cleanup

# Or remove the specific worktree
git worktree remove .forge/worktrees/task-1-old

Issue: Attempt shows as running but nothing is happening
Solution:
# Cancel the attempt
forge task cancel 1 --attempt 2

# Check for zombie processes
ps aux | grep forge

# Restart Forge
forge restart

Next Steps