Open Questions
Things we haven't fully figured out yet. Documenting them is step one.
- Sub-agent quality variance. Some agents nail it on the first try; others need multiple rounds. Can we predict or prevent bad runs?
- Cross-project context. When working on multiple projects in one session, how do we manage context switching without overload?
- Scaling to multiple pairs. CSDLC works for an individual human-AI Lead pair. What changes when multiple pairs share design docs and a process, and how do they stay in sync?
- Measuring process health. How do we know CSDLC is actually working? Candidate metrics: PR first-pass rate, time to merge.
- Context budget optimization. How much startup context is "just right"? Can we measure the ROI of each loaded file?
- Reconciling conflicting improvements. How do multiple AI Leads coordinate when they discover conflicting process improvements? One pair finds that X works better; another finds that Y works better. How do we reconcile?
- Design doc cadence. Design docs capture decisions at a point in time. As the system drifts, how often should the docs be updated: continuously, per epic, per milestone?
- Tooling scale threshold. The current stack of markdown files and git is lightweight and works. At what scale does CSDLC need a real platform instead?
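If we ever commit to metrics for the process-health question, the plumbing is trivial. A minimal sketch in Python; the PR record fields (`opened`, `merged`, `review_rounds`) are made-up placeholders, not a real hosting-API schema:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical PR records. In practice these would be pulled from the
# Git hosting API; the field names here are assumptions for illustration.
prs = [
    {"opened": datetime(2024, 5, 1), "merged": datetime(2024, 5, 2), "review_rounds": 1},
    {"opened": datetime(2024, 5, 3), "merged": datetime(2024, 5, 7), "review_rounds": 3},
    {"opened": datetime(2024, 5, 4), "merged": datetime(2024, 5, 4), "review_rounds": 1},
]

def first_pass_rate(prs):
    """Fraction of PRs merged after a single review round."""
    return sum(1 for p in prs if p["review_rounds"] == 1) / len(prs)

def median_time_to_merge(prs):
    """Median wall-clock time from open to merge (timedelta)."""
    return median(p["merged"] - p["opened"] for p in prs)
```

Both numbers are cheap to compute per week or per epic, so trend direction (is the first-pass rate rising?) may matter more than any absolute target.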
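For the context-budget question, the cost half of ROI is measurable today even if the "return" half isn't. A rough sketch, assuming the common 4-characters-per-token rule of thumb rather than a real tokenizer, and leaving the usefulness signal as an open placeholder:

```python
CHARS_PER_TOKEN = 4  # rough rule of thumb; an assumption, not a real tokenizer

def estimated_tokens(path):
    """Approximate token cost of loading one file into startup context."""
    with open(path, encoding="utf-8") as f:
        return len(f.read()) // CHARS_PER_TOKEN

def context_budget_report(paths):
    """Rank startup files by estimated token cost, largest first.

    Cost is measurable now; the 'return' side of ROI still needs a
    usefulness signal (e.g. how often each file is actually referenced
    during a session), which we don't track yet.
    """
    costs = {p: estimated_tokens(p) for p in paths}
    return sorted(costs.items(), key=lambda kv: -kv[1]), sum(costs.values())
```

Even without the usefulness signal, a cost-ranked report makes the biggest startup files obvious candidates for trimming or lazy loading.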