What's the right way to use AI for architecture reviews?
Ground the model in your ADRs and module map first, then ask it to critique. bRRAIn exposes your architecture as a graph the model can walk — dependencies, risks, decisions — and generates reviews that cite sources.
The failure mode of ungrounded AI reviews
Ask a raw LLM to review your architecture and you get a confident essay about the CAP theorem, microservice anti-patterns, and the importance of observability. None of it is wrong; none of it is useful. The model has no idea which trade-offs you have already made, which modules are legacy, or which risks the team has already accepted. Architecture review is a context-heavy task, and context is what stateless models lack. The right way is to ground the model before you ask for the critique.
Grounding through ADRs and a module map
bRRAIn ingests your ADRs, module map, and risk registry into a traversable graph. Each ADR becomes a node with links to the modules it affects and the risks it mitigates. Each module carries its dependencies, owners, and open issues. The POPE graph stores these relationships so an agent can walk from "proposed change" to "every decision and constraint it touches". The Document Portal is where ADRs land; the Consolidator keeps the graph fresh as new ones merge.
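To make the idea concrete, here is a minimal sketch of what a traversable ADR/module/risk graph could look like. bRRAIn's actual POPE schema is not public, so every node type, id, and field name here is a hypothetical illustration:

```python
from dataclasses import dataclass, field

# Hypothetical node shape; the real POPE graph schema may differ entirely.
@dataclass
class Node:
    id: str
    kind: str                                       # "adr", "module", or "risk"
    title: str
    edges: list[str] = field(default_factory=list)  # ids of linked nodes

# Toy graph: an ADR links to the module it constrains; the module
# links to its open risks. All ids and titles are invented examples.
graph = {
    "ADR-0042": Node("ADR-0042", "adr", "Avoid synchronous inter-service calls",
                     edges=["billing-core"]),
    "billing-core": Node("billing-core", "module", "Billing core service",
                         edges=["RISK-7", "RISK-9"]),
    "RISK-7": Node("RISK-7", "risk", "Payment retry storms"),
    "RISK-9": Node("RISK-9", "risk", "Double-charge on timeout"),
}

def walk(start: str, depth: int = 2) -> list[str]:
    """Collect every node reachable from `start` within `depth` hops."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        frontier = [e for n in frontier for e in graph[n].edges if e not in seen]
        seen.update(frontier)
    return sorted(seen)
```

Starting a walk at a proposed change's entry node surfaces every decision and risk it touches, e.g. `walk("ADR-0042")` reaches `billing-core` and both of its open risks in two hops.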
How the model generates a cited review
With the graph in place, the review flow changes. The agent receives a proposal, walks the graph for relevant ADRs and dependencies, and writes a critique that cites specific nodes: "this conflicts with ADR-0042 on synchronous calls"; "module billing-core has two open risks that this change amplifies". The Handler assembles the prompt with the cited sources inline, so the output reads like a senior reviewer's memo. The MCP Gateway is how Cursor, Claude Code, or any agent fetches that bundle.
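The prompt-assembly step can be sketched as follows. This is not the Handler's real interface; the function name, the source format, and the bracketed-id citation convention are all assumptions made for illustration:

```python
# Hypothetical sketch of Handler-style prompt assembly: cited graph nodes
# are inlined as sources so the model's critique can reference them by id.
def build_review_prompt(proposal: str, cited_nodes: list[dict]) -> str:
    sources = "\n".join(
        f"[{n['id']}] {n['title']}: {n['summary']}" for n in cited_nodes
    )
    return (
        "You are reviewing an architecture proposal. Cite sources by id.\n\n"
        f"PROPOSAL:\n{proposal}\n\n"
        f"SOURCES:\n{sources}\n\n"
        "Critique the proposal. Every claim must cite a source id in brackets."
    )

# Example: the graph walk found one conflicting ADR for this proposal.
prompt = build_review_prompt(
    "Add a synchronous call from checkout to billing-core.",
    [{"id": "ADR-0042", "title": "Avoid synchronous inter-service calls",
      "summary": "All cross-service communication is async via events."}],
)
```

Because the sources travel inside the prompt with stable ids, claims like "this conflicts with ADR-0042" are checkable: a reviewer can jump straight from the citation to the node.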
How humans stay in the loop
The AI review is a first pass, not the verdict. Senior engineers read the cited critique, confirm or overrule each point, and file any new decisions back into the graph. Over time the graph gets denser, the reviews get sharper, and the human effort shifts from writing critiques to curating the decisions that power them. The architecture review becomes a collaborative artefact between humans and agents, with every claim traceable to an ADR node. Governance and speed stop being trade-offs.
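The confirm-or-overrule pass can also be sketched in a few lines. The point structure and verdict labels below are invented for illustration, not a real bRRAIn API; the idea is simply that overruled points become candidate decisions to file back into the graph:

```python
# Hypothetical triage of an AI review: each cited point carries a human
# verdict; overruled points are turned into candidate ADR drafts so the
# graph captures the team's actual decision.
def triage(points: list[dict]) -> list[dict]:
    """Return overruled review points as candidate decisions for the graph."""
    return [
        {"title": f"Decision: {p['claim']}", "supersedes": p["cites"]}
        for p in points if p["verdict"] == "overruled"
    ]

candidates = triage([
    {"claim": "Sync call conflicts with ADR-0042", "cites": "ADR-0042",
     "verdict": "confirmed"},
    {"claim": "RISK-7 applies to this change", "cites": "RISK-7",
     "verdict": "overruled"},
])
# `candidates` now holds the overruled point, ready to file as a new ADR draft.
```

This is the densification loop in miniature: every overrule produces a new decision node, so the next review starts from a richer graph.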
Relevant bRRAIn products and services
- POPE Graph RAG / Memory Engine — the traversable graph of ADRs, modules, and risks the agent walks for cited reviews.
- Handler — assembles the prompt with cited sources inline so output reads like a senior memo.
- Document Portal — the landing zone for ADRs that feed the review graph.
- Consolidator — keeps ADRs, modules, and risks in sync as PRs merge.
- MCP Gateway — standards-based access for the IDE agents running the review.
- Book a demo — see a cited architecture review generated from a real proposal.