Overview
One of Forge’s most powerful features is comparing results from multiple AI agents side by side. See which approach works best, cherry-pick the best parts, or combine solutions.

Why Compare?
Different AI agents have different strengths:

| Agent | Strength | Weakness |
|---|---|---|
| Claude Sonnet | Complex logic, architecture | Can be verbose |
| Gemini Flash | Fast, concise | May miss edge cases |
| Cursor | UI/UX intuition | Less depth on algorithms |
| GPT-4 | Comprehensive | Expensive |
Quick Comparison
Via CLI
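A minimal sketch, assuming a `forge compare` subcommand (the exact name may differ; check `forge --help`). The attempt identifiers are placeholders, and the `--file` flag is the one documented under Troubleshooting below:

```bash
# Hypothetical subcommand: compare two attempts side by side
forge compare attempt-1 attempt-2

# Narrow the comparison to a single file
forge compare attempt-1 attempt-2 --file src/auth/login.ts
```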
Via Web UI
1. Open Task: click the task card in the Kanban board.
2. View Attempts Tab: click the Attempts tab to see all attempts.
3. Select Attempts to Compare: check the boxes next to two or more attempts.
4. Click 'Compare': this opens the split-view comparison interface.
Comparison Views
Forge provides several views for a comparison:

- File-by-File Diff (a plain-git equivalent is sketched below)
- Side-by-Side Code Comparison
- Statistics Comparison

Detailed Comparison

- Test Results
- Code Quality Metrics
- Performance Analysis
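Because each attempt runs in its own git worktree (see Troubleshooting), you can also get a file-by-file diff with plain git. The branch names below are assumptions; substitute the branches Forge created for your attempts:

```bash
# Summary of which files changed between two attempt branches
git diff --stat attempt-1..attempt-2

# Full diff, limited to one directory
git diff attempt-1..attempt-2 -- src/auth/
```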
Visual Comparison (Web UI)
The Forge UI provides rich visual comparisons.

Split-Screen Editor
- Left pane: Attempt 1 code
- Right pane: Attempt 2 code
- Synchronized scrolling
- Inline diff highlighting
Architecture Diagram
Forge auto-generates an architecture comparison for each attempt.

Decision Matrix
Build a decision matrix to choose systematically; a worked example follows.
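Score each attempt 1 to 10 per criterion and weight the criteria. The criteria, weights, and scores here are illustrative only:

| Criterion | Weight | Attempt 1 | Attempt 2 |
|---|---|---|---|
| Tests pass | 0.4 | 9 | 7 |
| Readability | 0.3 | 6 | 9 |
| Performance | 0.2 | 8 | 8 |
| Diff size | 0.1 | 5 | 7 |
| Weighted total | 1.0 | 7.5 | 7.8 |

Here Attempt 2 edges out Attempt 1 (0.4×7 + 0.3×9 + 0.2×8 + 0.1×7 = 7.8 vs. 7.5), even though Attempt 1 scores higher on tests alone; tune the weights to match what your team values.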
Cherry-Picking Best Parts

Sometimes you want to combine approaches.

Manual Cherry-Pick
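On the command line, assuming each attempt lives on its own branch in a separate worktree, plain git can do the combining (branch names and the commit hash are placeholders):

```bash
# Start a combined branch from the attempt you prefer overall
git checkout -b combined attempt-1

# Pull one commit over from the other attempt
git cherry-pick <sha-from-attempt-2>

# Or take a single file wholesale from the other attempt
git checkout attempt-2 -- src/auth/login.ts
```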
Via Web UI
1. Open the comparison view
2. Select blocks of code from different attempts
3. Click “Create Combined Attempt”
4. Forge creates a new attempt merging the selected parts
Common Comparison Scenarios
Feature Implementation
Question: Which agent implemented the feature most completely?

Bug Fix

Question: Which fix actually solves the bug without breaking anything?

Refactoring

Question: Which refactor improves the code without changing behavior?

Export Comparison Reports
Generate Report
Share with Team
Best Practices
Test All Attempts
Don’t just read the code; run the tests. Code that looks good but fails tests is useless. For example:
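A quick loop over every attempt's worktree; the `.forge/worktrees` path is an assumption, so point it at wherever Forge creates worktrees in your setup:

```bash
# Install dependencies and run the suite in each attempt's worktree
for dir in .forge/worktrees/*/; do
  echo "=== $dir ==="
  (cd "$dir" && npm install --silent && npm test)
done
```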
Check Edge Cases
Specifically test edge cases. Simple cases are easy; edges matter. For example:
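One way, assuming a Mocha-style runner that supports `--grep` (substitute your runner's filter flag):

```bash
# Run only the tests whose names mention edge cases
npm test -- --grep "edge case"
```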
Consider Maintainability
Today: all attempts work.
Six months from now: which one will you still understand?

Prefer clear code over clever code.
Document Your Choice
Advanced: A/B Testing in Production
For critical features, deploy multiple attempts for A/B testing and monitor (a quick latency check is sketched after this list):
- Response times
- Error rates
- User feedback
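For response times, a quick spot check with curl works against any two deployments; the URLs below are placeholders:

```bash
# Compare total response time for the two deployed attempts
for url in https://attempt-a.example.com/api https://attempt-b.example.com/api; do
  printf '%s\t' "$url"
  curl -o /dev/null -s -w '%{time_total}s\n' "$url"
done
```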
Troubleshooting
Comparison takes forever
Issue: Comparing large attempts is slow.

Solutions:
- Use `--files-only` to skip detailed diffs
- Compare specific files: `--file src/auth/login.ts`
- Increase the timeout: `--timeout 300`
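Put together, a faster comparison of a large attempt might look like this (same hypothetical `forge compare` subcommand as in Quick Comparison; the flags are the ones listed above):

```bash
# Skip detailed diffs and raise the timeout
forge compare attempt-1 attempt-2 --files-only --timeout 300
```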
Can't see differences
Issue: Attempts look identical.

Solutions:
- Check if you're comparing the same attempt twice
- Use `--ignore-whitespace` to see real changes
- Try `--context 10` for more surrounding lines
Test comparison fails
Issue: Can’t run tests in worktrees.

Solutions:
- Ensure dependencies are installed in each worktree
- Check that test paths are correct
- Use `--setup-cmd "npm install"` first

