Agent Workflow — The Daily Loop

What an AI agent actually does in a Claude Code session with ido4. Every session starts with full project understanding; every action is governed; every outcome is recorded.

1. Orient: Understand the project state
Morning standup: what happened since last session, what's blocked, what's the compliance score, where's the leverage?
/ido4dev:standup → get_standup_data
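The standup payload can be sketched in a few lines. The field names (`since_last_session`, `blocked`, `compliance_score`) are illustrative assumptions, not the real get_standup_data schema:

```python
# Hypothetical shape of the standup payload; field names are assumptions.
def summarize_standup(data):
    """Condense the morning-standup data into a one-line status."""
    return (f"{len(data['since_last_session'])} events, "
            f"{len(data['blocked'])} blocked, "
            f"compliance {data['compliance_score']:.0%}")
```

An agent would print this summary before deciding where the leverage is.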
2. Pick Task: Intelligent work selection
4-dimension scoring: cascade value (what unblocks the most), epic momentum, capability match, dependency freshness.
get_next_task → lock_task
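The 4-dimension scoring can be sketched as a weighted sum over the dimensions listed above. The weights and field names here are assumptions, not ido4's actual formula:

```python
# Illustrative 4-dimension task scoring; weights and keys are assumptions.
def score_task(task, weights=(0.4, 0.25, 0.2, 0.15)):
    """Combine the four scoring dimensions into one priority value."""
    dims = (
        task["cascade_value"],         # how much downstream work this unblocks (0-1)
        task["epic_momentum"],         # progress of the parent epic (0-1)
        task["capability_match"],      # fit between task and agent skills (0-1)
        task["dependency_freshness"],  # how recently upstream work landed (0-1)
    )
    return sum(w * d for w, d in zip(weights, dims))

def get_next_task(tasks):
    """Pick the highest-scoring task that is not locked by another agent."""
    candidates = [t for t in tasks if not t.get("locked")]
    return max(candidates, key=score_task, default=None)
```

Weighting cascade value highest reflects the "what unblocks the most" priority; the real implementation may normalize or weight differently.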
3. Load Context: Full context package in one call
Task spec, acceptance criteria, upstream decisions, downstream dependents, sibling progress, epic status, structured comments from prior sessions.
get_task_execution_data
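The one-call context package might look like this as a dataclass. Field names mirror the list above, but the real get_task_execution_data schema may differ:

```python
# Sketch of the context package returned in one call; field names are
# assumptions mirroring the description, not the real schema.
from dataclasses import dataclass, field

@dataclass
class TaskExecutionData:
    spec: str
    acceptance_criteria: list[str]
    upstream_decisions: list[str]       # decisions inherited from dependencies
    downstream_dependents: list[str]    # task ids blocked on this one
    sibling_progress: dict[str, str]    # task id -> status within the epic
    epic_status: str
    context_comments: list[str] = field(default_factory=list)  # prior sessions' notes
```

Bundling all of this into one structure is what lets a fresh agent start with full context instead of re-discovering it.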
4. Develop: Specs-driven implementation
8-phase execution: read spec → understand architecture → implement → test against success conditions → verify acceptance criteria.
start_task → code → test
5. Transition: BRE validates every move
Every state change goes through the 34-step validation pipeline. Dependencies checked. Integrity enforced. Audit trail records actor + result.
review_task → approve_task
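A fail-fast validation pipeline with an audit trail can be sketched as an ordered list of deterministic checks. The real BRE runs 34 steps; the validator signature here is an assumption:

```python
# Sketch of a deterministic validation pipeline with an audit trail.
# The real BRE has 34 steps; this validator signature is an assumption.
def validate_transition(task, transition, validators):
    """Run each validator in order; fail fast and record actor-visible results."""
    audit = []
    for check in validators:
        ok, reason = check(task, transition)
        audit.append({"check": check.__name__, "ok": ok, "reason": reason})
        if not ok:
            return False, audit  # nothing commits on failure
    return True, audit
```

Because each check is a plain function returning pass/fail, violations are caught deterministically rather than by LLM reasoning.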
6. Write Context: Institutional memory
Structured context comment: what was built, what decisions were made, what the next agent needs to know. Knowledge accumulates.
add_context_comment
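A structured context comment could be assembled and rendered like this. The keys and section headings are illustrative, not the real add_context_comment payload:

```python
# Illustrative structured context comment; keys and headings are assumptions.
def build_context_comment(what_was_built, decisions, for_next_agent):
    """Assemble the three parts of a structured context comment."""
    return {
        "what_was_built": what_was_built,
        "decisions": decisions,          # decision -> rationale
        "for_next_agent": for_next_agent,
    }

def render_context_comment(comment):
    """Render the comment so the next agent can read it in one pass."""
    lines = ["## Built", comment["what_was_built"], "## Decisions"]
    lines += [f"- {d}: {why}" for d, why in comment["decisions"].items()]
    lines += ["## For the next agent"]
    lines += [f"- {note}" for note in comment["for_next_agent"]]
    return "\n".join(lines)
```

Keeping the structure fixed is what makes the knowledge accumulate: every session writes the same three sections, so every later session knows where to look.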
7. Merge: Quality gate before merge
6-check readiness: workflow compliance, PR reviews, dependency completion, integrity, security, compliance threshold.
check_merge_readiness
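The 6-check gate can be sketched as an all-must-pass aggregation. Check names follow the list above, while the field names and threshold are assumptions:

```python
# Sketch of the 6-check merge gate; field names and the default
# threshold are assumptions, the check names follow the list above.
def check_merge_readiness(task, compliance_threshold=0.9):
    """Return (ready, failing_checks); merge only when every check passes."""
    checks = {
        "workflow_compliance": task["workflow_compliant"],
        "pr_reviews": task["approvals"] >= 1,
        "dependency_completion": all(task["dependency_status"].values()),
        "integrity": task["integrity_ok"],
        "security": task["security_scan_clean"],
        "compliance_threshold": task["compliance_score"] >= compliance_threshold,
    }
    return all(checks.values()), [name for name, ok in checks.items() if not ok]
```

Returning the list of failing checks, not just a boolean, tells the agent exactly what to fix before retrying.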
8. Handoff: Complete + recommend next
Atomically: approve task, release lock, find newly unblocked tasks, suggest which agent should pick each up, recommend next task.
complete_and_handoff
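The atomic handoff can be sketched as one transaction over an in-memory task graph. Statuses, field names, and the recommendation rule here are illustrative assumptions:

```python
# Sketch of complete_and_handoff as one atomic step: approve, unlock,
# discover newly unblocked tasks, and recommend the next one.
# Statuses and the recommendation rule are assumptions.
def complete_and_handoff(task_id, tasks):
    tasks[task_id]["status"] = "approved"
    tasks[task_id]["locked"] = False
    unblocked = [
        tid for tid, t in tasks.items()
        if t["status"] == "blocked"
        and all(tasks[dep]["status"] == "approved" for dep in t["depends_on"])
    ]
    for tid in unblocked:
        tasks[tid]["status"] = "ready"
    return {
        "completed": task_id,
        "unblocked": unblocked,
        "recommended_next": unblocked[0] if unblocked else None,
    }
```

Doing all four moves in one call is what closes the loop: the return value is exactly the input step 2 needs.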
Loop: step 8 feeds back to step 2 — next task, full context, continuous flow
Context, Not From Scratch
Every session starts with accumulated project knowledge. A fresh agent with the right 100K tokens of context outperforms a "specialized" agent with the wrong context. ido4 assembles the right context.
Governance, Not Gatekeeping
The BRE doesn't slow agents down — it catches mistakes before they cascade. Dependency violations, integrity breaks, and missing criteria are caught deterministically, not by LLM reasoning.
Memory Across Sessions
Structured context comments + audit trail = institutional memory. Agent A writes what it built. Agent B reads it next week. The project's knowledge never resets to zero.