How do I know which PMs are adding value in an AI era?
Measure graph contribution. Decisions logged, risks raised, blockers resolved, context written. bRRAIn's Ontology Viewer surfaces a contribution graph per person. Performance reviews get objective.
From activity theater to graph contribution
Traditional PM reviews measure activity theater — meetings attended, reports written, tickets touched. None of that reflects value in an AI era where agents handle the activity. The honest metric is graph contribution: decisions the PM logged that other teams used, risks they raised that the org acted on, blockers they resolved, context they wrote into memory. bRRAIn's Ontology Viewer surfaces all four as a per-person contribution graph. The PM's output becomes visible and comparable without counting status meetings.
What the contribution graph shows
The contribution graph is built from POPE events in the graph layer. Each person is a node with edges to the artefacts they authored, the decisions they ratified, the risks they raised, and the blockers they closed. The Audit Log attaches a timestamp and source to each edge so nothing is inferred — every credit is traceable to a ratified change in the graph. A reviewer opens a PM's page and sees four months of contribution in one view, filtered by project or scope if needed.
Why this survives gaming attempts
Every metric gets gamed. A naive contribution graph would reward volume: whoever logs the most decisions wins. bRRAIn's Consolidator deduplicates and weights: a decision that three other teams cited is worth more than a decision nobody read, and a risk that was escalated and acted on is worth more than one filed and forgotten. The Handler can produce a weighted score, but the raw graph matters most: reviewers see texture, not just a number. That keeps the metric honest and the review conversation substantive.
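The dedup-then-weight idea can be shown in a minimal sketch. The field names (`id`, `kind`, `citations`, `acted_on`) and the weights are illustrative assumptions, not the Consolidator's actual logic; the point is only the shape of the technique: collapse duplicates first, then score by downstream use rather than raw count.

```python
def weighted_score(contributions: list[dict]) -> float:
    """Citation-weighted scoring sketch (illustrative fields, not
    the product schema): dedupe identical entries by id, then weight
    each survivor by downstream use instead of logging volume."""
    seen: dict[str, dict] = {}
    for c in contributions:
        # Deduplicate: keep the copy with the most downstream citations,
        # so logging the same decision twice earns nothing extra.
        prev = seen.get(c["id"])
        if prev is None or c["citations"] > prev["citations"]:
            seen[c["id"]] = c

    score = 0.0
    for c in seen.values():
        # A decision three teams cited outweighs one nobody read.
        base = 1.0 + c["citations"]
        # A risk that was acted on outweighs one filed and forgotten.
        if c["kind"] == "risk" and c["acted_on"]:
            base *= 2.0
        score += base
    return score
```

Under this scheme, flooding the log with uncited decisions adds almost nothing, which is why the raw graph stays readable even when a score is produced on top of it.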
Fairness for PMs doing invisible work
Some of the best PM work is invisible — defusing a political risk, coaching a junior lead, translating a stakeholder's vague ask into scope. A contribution graph surfaces those too, because the workspace captures the decision log, the coaching notes in Sessions.md, and the scope translations in the charter. The PM who does the invisible work well has a rich graph; the PM who performs meetings without writing memory has a thin one. Reviews based on this data produce fairer outcomes than gut-feel assessments. Book a demo to see it.
Relevant bRRAIn products and services
- Ontology Viewer — per-person contribution graph the reviewer opens to see four months of PM output.
- POPE Graph RAG — graph layer where decisions, risks, blockers, and context writes become measurable edges.
- Audit Log — timestamps and sources every graph edge so contribution credit is traceable.
- Consolidator — weights contributions by downstream citation, defeating volume gaming.
- Handler — produces a weighted score when a reviewer wants a quick summary view.
- bRRAIn Workspaces — where the invisible PM work (coaching notes, scope translations) gets captured and credited.