The Power of Structured Task Management
In Forge, every piece of work is organized as a Task - a persistent, trackable unit of work that maintains complete context, history, and results. Unlike chat-based AI interactions that disappear, Forge tasks live forever in your Kanban board.

What is a Task?
A Forge Task is a structured work unit that contains the following (a rough data shape is sketched after this list):

- Title & Description: Clear statement of what needs to be done
- Context: Files, screenshots, diagrams attached for AI understanding
- Agent Assignment: Which AI coding agent will execute it
- Attempts: Multiple execution attempts with different agents or approaches
- Git Worktree: Isolated environment for each attempt
- Diffs & Results: Complete history of changes made
- Status: Lifecycle state (planning, ready, in-progress, review, merged, archived)
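As a rough mental model, a task might be shaped like the sketch below; the field names are illustrative only, not Forge's actual data model:

```typescript
// Illustrative shape only; field names are assumptions, not Forge's real schema.
type TaskStatus = "planning" | "ready" | "in-progress" | "review" | "merged" | "archived";

interface Attempt {
  agent: string;          // e.g. "Claude Code", "Gemini", "Cursor CLI"
  worktreePath: string;   // isolated Git worktree for this attempt
  diff: string;           // changes produced by the agent
  result: "pending" | "approved" | "rejected";
}

interface Task {
  title: string;
  description: string;
  context: string[];      // attached files, screenshots, diagrams
  assignedAgent?: string;
  attempts: Attempt[];
  status: TaskStatus;
}
```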
Task Lifecycle
Every task moves through a well-defined lifecycle.

Lifecycle States
| State | Description | Actions Available |
|---|---|---|
| Planning | Task created, context being gathered | Add context, assign agent, edit description |
| Ready | Ready for execution | Start attempt, assign different agent |
| In Progress | AI agent actively working | Monitor logs, cancel if needed |
| Review | Attempt completed, awaiting human review | View diffs, compare with other attempts, approve/reject |
| Merged | Approved changes merged to main branch | Archive task, create follow-up tasks |
| Archived | Task completed and stored | View history, reference in future tasks |
State Transitions
- Planning → Ready: When you’ve added enough context and assigned an agent
- Ready → In Progress: When you start an attempt
- In Progress → Review: When the agent completes execution
- Review → In Progress: If you reject and want to retry
- Review → Merged: When you approve and merge changes
- Merged → Archived: After verification and cleanup
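One way to picture these rules is as a map of allowed transitions. This is an illustrative sketch only, not Forge's internal implementation:

```typescript
// Sketch of the lifecycle rules above; not Forge's actual implementation.
type TaskStatus = "planning" | "ready" | "in-progress" | "review" | "merged" | "archived";

const allowedTransitions: Record<TaskStatus, TaskStatus[]> = {
  "planning": ["ready"],               // enough context added, agent assigned
  "ready": ["in-progress"],            // an attempt is started
  "in-progress": ["review"],           // the agent finishes execution
  "review": ["in-progress", "merged"], // retry after rejection, or approve and merge
  "merged": ["archived"],              // verification and cleanup done
  "archived": [],
};

function canTransition(from: TaskStatus, to: TaskStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```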
Multiple Attempts Strategy
The killer feature of Forge: every task can have multiple attempts with different AI agents, configurations, or approaches.

Why Multiple Attempts?
Different AI models excel at different things:

| Agent | Strengths | Best For |
|---|---|---|
| Claude Code | Complex logic, architecture | Large refactors, system design |
| Gemini | Simplicity, speed | Quick features, straightforward implementations |
| Cursor CLI | Balance, pragmatism | Production code, balanced solutions |
| OpenAI Codex | Broad knowledge | General-purpose tasks |
With only a single attempt:

- You’re stuck with one agent’s approach
- No way to compare quality
- Can’t learn which agent works best for which task type
- Miss better solutions from other models
Isolation is Key
Each attempt runs in its own Git worktree (a sketch of the setup follows this list):

- No conflicts between attempts
- Main branch stays clean
- Easy to compare approaches side-by-side
- Safe to experiment without fear
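For a rough sense of how this isolation works under the hood, here is a sketch of creating one worktree per attempt with plain git; the directory layout, branch naming, and `main` start point are assumptions, not Forge's actual scheme:

```typescript
// Hypothetical sketch of per-attempt worktree creation; not Forge's real layout.
import { execSync } from "node:child_process";
import * as path from "node:path";

function createAttemptWorktree(repoRoot: string, taskId: string, attempt: number): string {
  const branch = `task-${taskId}-attempt-${attempt}`;                  // assumed naming convention
  const worktreePath = path.join(repoRoot, "..", "forge-worktrees", branch);

  // `git worktree add -b <branch> <path> <start-point>` creates an isolated
  // checkout on a fresh branch, leaving the main working tree untouched.
  execSync(`git worktree add -b ${branch} ${worktreePath} main`, {
    cwd: repoRoot,
    stdio: "inherit",
  });
  return worktreePath;
}
```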
When to Use Multiple Attempts
Not every task needs multiple attempts. Here’s when to use them:

✅ Use Multiple Attempts For:
Critical Features
Features affecting many users or core business logic.
Example: Payment processing, authentication systems
Complex Refactoring
Major architectural changes with high risk.
Example: Migrating from REST to GraphQL
Performance Optimization
When you need the best possible solution.
Example: Database query optimization, API response times
Learning Opportunities
When you want to see different approaches.
Example: Learning new patterns or best practices
❌ Single Attempt is Fine For:
- Simple bug fixes
- Documentation updates
- Minor UI tweaks
- Quick configuration changes
- Low-risk experiments
The Cost-Benefit Analysis
Multiple Attempts Cost:

- More time (each agent runs separately)
- More API usage (multiple LLM calls)
- More review time (comparing results)

Multiple Attempts Benefit:

- Higher quality solutions
- Better understanding of the problem
- Learning which agents work best
- Reduced risk of bugs in production
Rule of Thumb: If a bug would cost more than 30 minutes to fix in production, use multiple attempts during development.
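If you like encoding heuristics, the rule of thumb could be expressed roughly like this; the threshold and attempt counts are illustrative only, not a Forge feature:

```typescript
// Illustrative heuristic only; thresholds and counts are assumptions.
function recommendedAttempts(estimatedBugFixMinutesInProd: number, isCriticalPath: boolean): number {
  if (isCriticalPath) return 3;                     // payments, auth, core business logic
  if (estimatedBugFixMinutesInProd > 30) return 2;  // expensive to fix later
  return 1;                                         // docs, tweaks, low-risk changes
}
```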
Comparing Attempts
Forge makes it easy to compare different attempts:

Comparison Features
1. Side-by-Side Diffs

| Metric | Attempt 1 | Attempt 2 | Attempt 3 |
|---|---|---|---|
| Lines Changed | 250 | 120 | 180 |
| Files Modified | 8 | 4 | 5 |
| Tests Added | 15 | 8 | 12 |
| Execution Time | 8 min | 3 min | 5 min |
| Complexity | High | Low | Medium |
2. Quality Assessment

Beyond the raw metrics, compare:

- Test coverage added
- Documentation quality
- Code complexity metrics
- Performance impact
- Security considerations
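As a loose illustration of how you might triage attempts using the metrics table above, here is a tiny scoring pass. The weighting is entirely illustrative and not a Forge feature:

```typescript
// Illustrative only; Forge surfaces these metrics, but this scoring heuristic is not built in.
interface AttemptMetrics {
  linesChanged: number;
  filesModified: number;
  testsAdded: number;
  executionMinutes: number;
}

// Prefer attempts that add tests while keeping the change surface small.
function score(m: AttemptMetrics): number {
  return m.testsAdded * 10 - m.linesChanged * 0.1 - m.filesModified;
}

const attempts: Record<string, AttemptMetrics> = {
  "Attempt 1": { linesChanged: 250, filesModified: 8, testsAdded: 15, executionMinutes: 8 },
  "Attempt 2": { linesChanged: 120, filesModified: 4, testsAdded: 8, executionMinutes: 3 },
  "Attempt 3": { linesChanged: 180, filesModified: 5, testsAdded: 12, executionMinutes: 5 },
};

for (const [name, metrics] of Object.entries(attempts)) {
  console.log(name, score(metrics).toFixed(1));
}
```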
Making the Choice
1. Review All Attempts: Read through each attempt’s changes carefully.
2. Run Tests: Verify all attempts pass your test suite.
3. Consider Maintainability: Which code will you understand in 6 months?
4. Check Performance: Run benchmarks if relevant.
5. Choose Winner: Select the best attempt or cherry-pick from multiple.
6. Merge: Merge the chosen attempt to your main branch.
Cherry-Picking Best Parts
Sometimes you want parts from multiple attempts combined into a single final solution.
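As a sketch of what this can look like with plain git (the branch name and commit hashes below are placeholders, not real values):

```typescript
// Hypothetical sketch: combining commits from two attempt branches with plain git.
import { execSync } from "node:child_process";

// Placeholder commit hashes from two different attempts (assumptions for illustration).
const uploadCommitFromAttempt1 = "abc1234";
const errorHandlingCommitFromAttempt3 = "def5678";

const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

// Start a fresh branch from main to hold the combined result.
run("git checkout -b file-upload-combined main");

// Pull the pieces you liked from each attempt's branch.
run(`git cherry-pick ${uploadCommitFromAttempt1}`);
run(`git cherry-pick ${errorHandlingCommitFromAttempt3}`);
```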
Best Practices

Anti-Pattern: Creating 10 attempts for every tiny task. This wastes time and API credits.

Best Practice: Use judgment - critical features get 2-3 attempts, simple tasks get 1.
Task Creation Tips
- Clear Titles: “Add user auth” ✅ vs “Do stuff” ❌
- Detailed Descriptions: Include acceptance criteria, edge cases, examples
- Attach Context: Screenshots, diagrams, related code files
- Label Appropriately: feature, bug, refactor, docs, etc.
- Set Realistic Scope: Smaller tasks = better AI results
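Putting these tips together, a well-scoped task might look roughly like this; the field names are assumptions, not Forge's real task schema:

```typescript
// Illustrative only: field names are assumptions, not Forge's actual schema.
const task = {
  title: "Add email/password user authentication",            // clear, specific title
  description: [
    "Acceptance criteria:",
    "- Users can register with email + password",
    "- Passwords are hashed before storage",
    "- Invalid credentials return a 401 with a generic message",
    "Edge cases: duplicate email, empty password, rate limiting",
  ].join("\n"),
  labels: ["feature", "auth"],                                 // label appropriately
  attachments: ["docs/auth-flow.png", "src/routes/login.ts"],  // context for the agent
};
```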
Attempt Management Tips
- Start with Your Best Agent: Use the agent you trust most first
- Try Different Approaches: Second attempt should use a different strategy
- Cancel Bad Attempts Early: Don’t waste time on obviously wrong directions
- Document Learnings: Note which agents work best for which task types
- Review Before Merging: Never auto-merge without human review
Real-World Example
Scenario: Adding File Upload Feature
Task Created: the file upload feature described in the scenario above. Three attempts were run:

Attempt 1:
- Generated comprehensive solution with chunked uploads
- Added retry logic, error handling
- 300 lines of code, very robust
- Review: Too complex for MVP, over-engineered

Attempt 2:
- Simple FormData upload with progress event
- Basic error handling
- 80 lines of code
- Review: Too simple, missing edge cases

Attempt 3:
- Balanced approach with FormData + basic chunking
- Good error handling without complexity
- 150 lines of code
- Review: ✅ Perfect balance - merged!
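For a loose sense of the merged attempt's approach, here is a minimal FormData upload with a progress event; the endpoint and field names are assumptions, and the chunking logic is omitted for brevity:

```typescript
// Illustrative sketch only; the /api/upload endpoint and field names are assumptions.
function uploadFile(file: File, onProgress: (percent: number) => void): Promise<void> {
  return new Promise((resolve, reject) => {
    const formData = new FormData();
    formData.append("file", file, file.name);

    const xhr = new XMLHttpRequest();
    xhr.open("POST", "/api/upload");

    // Report upload progress to the caller (the "progress event" mentioned above).
    xhr.upload.onprogress = (event) => {
      if (event.lengthComputable) {
        onProgress(Math.round((event.loaded / event.total) * 100));
      }
    };

    xhr.onload = () =>
      xhr.status >= 200 && xhr.status < 300
        ? resolve()
        : reject(new Error(`Upload failed with status ${xhr.status}`));
    xhr.onerror = () => reject(new Error("Network error during upload"));

    xhr.send(formData);
  });
}
```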

