# Use Cases
This page focuses on high-ROI, real-world CCCC workflows.
## How to Read This Page
Each scenario includes:
- Goal
- Minimal setup
- Execution flow
- Success criteria
- Common failure points
## Use Case 1: Builder + Reviewer Pair

### Goal
Increase delivery quality without adding human review bottlenecks.
### Minimal Setup

```bash
cd /path/to/repo
cccc attach .
cccc setup --runtime claude
cccc setup --runtime codex
cccc actor add builder --runtime claude
cccc actor add reviewer --runtime codex
cccc group start
```

### Execution Flow
- Send the implementation task to `@builder`.
- Send review criteria to `@reviewer` (bug risk, regression risk, tests).
- Require `@builder` to reply with changed files + rationale.
- Require `@reviewer` to reply with findings (severity + evidence).
- Use a human decision for the final merge.
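The reviewer ask above works best as an explicit, criteria-driven brief. A minimal sketch of composing one as plain text — the delivery channel (CLI, Web UI, or IM) is left to you, and no CCCC-specific send command is assumed:

```shell
# Compose a reviewer brief as plain text. The criteria mirror the flow
# above; nothing here assumes a particular CCCC command for delivery.
brief=$(cat <<'EOF'
@reviewer: review the attached diff against these criteria:
1. Bug risk: logic errors, unhandled edge cases.
2. Regression risk: behavior changes outside the task scope.
3. Tests: new and changed code covered; existing tests still meaningful.
Reply with findings as (severity, file:line, evidence).
EOF
)
printf '%s\n' "$brief"
```

Keeping the criteria enumerated makes the reviewer's reply checkable line by line instead of a generic "looks good".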
### Success Criteria
- Faster implementation feedback loop.
- Fewer missed regressions.
- Review output is actionable, not generic.
### Common Failure Points
- Task scope too broad.
- Reviewer lacks explicit acceptance criteria.
- Team skips obligation semantics (`reply_required`) for critical asks.
## Use Case 2: Foreman-Led Multi-Agent Delivery

### Goal
Split one medium project into parallel tracks while keeping alignment.
### Minimal Setup

```bash
cccc actor add foreman --runtime claude
cccc actor add frontend --runtime codex
cccc actor add backend --runtime gemini
cccc actor add qa --runtime copilot
cccc group start
```

### Execution Flow
- Foreman defines the shared goal in Context (`vision`, `sketch`, `milestones`).
- Assign focused tasks via direct recipients.
- Enforce checkpoint reminders through Automation rules.
- Foreman integrates and resolves conflicts.
- QA agent validates key acceptance criteria before handoff.
### Success Criteria
- Parallel execution without major rework churn.
- Clear ownership per track.
- Traceable decision history in ledger.
### Common Failure Points
- Missing shared architecture baseline.
- Agents editing same surfaces without ownership rules.
- No explicit integration checkpoints.
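The shared-surface failure mode above can be caught mechanically before integration. A minimal sketch, assuming a per-actor path prefix — the `frontend/` prefix and the file list are illustrative examples, not a CCCC convention:

```shell
# Check a list of changed files against one actor's owned path prefix.
# "frontend/" and the changed-file list below are illustrative only.
owned_prefix="frontend/"
changed_files="frontend/app.ts
backend/api.go"

violations=0
for f in $changed_files; do
  case "$f" in
    "$owned_prefix"*) ;;  # inside the owned surface: fine
    *)
      echo "violation: $f is outside $owned_prefix"
      violations=$((violations + 1))
      ;;
  esac
done
echo "violations=$violations"
```

Running a check like this at each integration checkpoint gives the foreman a concrete signal instead of discovering overlap during conflict resolution.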
## Use Case 3: Mobile Ops with IM Bridge

### Goal
Operate long-running groups from your phone while keeping a reliable audit trail.
### Minimal Setup

```bash
cccc im set telegram --token-env TELEGRAM_BOT_TOKEN
cccc im start
```

Then send `/subscribe` in your IM chat.
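Before starting the bridge, it helps to confirm the token variable named above is actually exported. A small preflight sketch — the variable name comes from the setup command; the check itself is generic shell:

```shell
# Preflight: confirm TELEGRAM_BOT_TOKEN is exported before `cccc im start`.
if [ -n "${TELEGRAM_BOT_TOKEN:-}" ]; then
  im_preflight=ok
  echo "token present (${#TELEGRAM_BOT_TOKEN} chars)"
else
  im_preflight=missing
  echo "TELEGRAM_BOT_TOKEN is unset; export it before 'cccc im start'" >&2
fi
```

A missing token tends to surface as a silent bridge failure, so failing fast here saves a debugging round trip from your phone.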
### Execution Flow
- Receive progress/error notifications in IM.
- Send escalation commands to `@foreman` from mobile.
- Switch to the Web UI for deep debugging only when needed.
- Keep all critical decisions in CCCC messages (not only in IM thread).
### Success Criteria
- You can intervene without laptop access.
- Critical context remains in ledger.
- Downtime is reduced for overnight or offsite ops.
### Common Failure Points
- Exposing the Web UI without a proper token/gateway.
- Using IM as the only source of truth.
- No restart/recovery playbook.
## Use Case 4: Repeatable Agent Benchmark Harness

### Goal
Run comparable multi-agent sessions with stable logging and replayability.
### Minimal Setup
- Define fixed task prompts and evaluation criteria.
- Use the same group template and runtime setup for every run.
- Keep automation policies deterministic.
### Execution Flow
- Create baseline group/template.
- Run multiple sessions with different runtime combinations.
- Collect ledger and terminal evidence.
- Evaluate outcome quality and operational stability.
### Success Criteria
- Comparable runs with low setup variance.
- Reproducible evidence set (ledger, state artifacts, logs).
- Clear model/runtime tradeoff signals.
### Common Failure Points
- Hidden prompt drift between runs.
- Uncontrolled environment differences.
- Missing run metadata in messages.
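The prompt-drift and missing-metadata failure modes above can both be mitigated by stamping each run. A minimal sketch, assuming illustrative paths (`prompts/task.md`, `runs/`) and field names that are not CCCC conventions; the stub prompt file is created here only so the sketch runs standalone:

```shell
# Stamp one benchmark run with enough metadata to detect drift later.
# Paths, field names, and the runtime combination are illustrative.
mkdir -p prompts
[ -f prompts/task.md ] || echo "Fixed benchmark task prompt." > prompts/task.md  # stub so this runs standalone

run_id="run-$(date -u +%Y%m%d-%H%M%S)"
mkdir -p "runs/$run_id"
{
  echo "run_id=$run_id"
  echo "prompt_sha256=$(sha256sum prompts/task.md | cut -d' ' -f1)"  # hash exposes prompt drift between runs
  echo "runtimes=claude,codex"   # the combination under test this run
  echo "started_at=$(date -u +%Y-%m-%dT%H:%M:%SZ)"
} > "runs/$run_id/metadata.env"
cat "runs/$run_id/metadata.env"
```

Comparing `prompt_sha256` across runs turns "hidden prompt drift" from a suspicion into a one-line diff.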
## Recommended Next Reads

- docs/guide/operations.md
- docs/reference/positioning.md
- docs/reference/features.md