20-minute read · 12 sections · For VP+ readers

RADAR. Five signals every AI transformation must transmit.

Reimagination · Agentification · Data and Context · Absorption · Rails. Miss any one, and the outcome is predictable. The reverse-lookup is the diagnostic; the named pathology is what makes a leader stop and act.

The claim

“The reverse-lookup is the diagnostic. The score is the score. The pathology name is what makes a leader stop and think.”

01

Why this exists

AI programs stall for five specific reasons. Generic diagnostics don't name them.

If you are reading the briefings inside any large enterprise today, you are reading the same diagnosis: “change management,” “data readiness,” “talent gaps,” “culture.” The diagnoses are not wrong. They are also not actionable. A VP cannot direct change management; they can direct programs and capital. RADAR exists because the named pathology (Faster Horses, PowerPoint AI, Hallucination, Organ Rejection, Pilot Purgatory) is what a leader can act on. “We are in PowerPoint AI” tells you what's broken and what to fix. “We have change management challenges” tells you nothing.

The framework was built from the pattern that shows up in nearly every stalled AI program. Five signals must transmit; if even one is silent, the program quietly underperforms while looking active. The signals are independently observable, the pathologies are independently nameable, and the tests are independently runnable. A leader can read their own program in fifteen minutes and walk into a staff meeting with a decision that didn't exist before they started.

02

What RADAR is

Five signals an AI transformation must transmit

RADAR is a transmission diagnostic. The metaphor is deliberate: a program is a signal source, the organization is a receiver, and outcomes are the lock that confirms transmission. If the receiver returns nothing, the natural assumption is to fix the receiver: train more people, restructure more teams, communicate more. The framework's claim is that, more often than not, one of the five signals at the source is not transmitting at all. Fixing the receiver doesn't fix that.

The five signals are Reimagination (is the work being redesigned?), Agentification (are agents actually deployed and accountable?), Data and Context (do agents have proprietary grounding?), Absorption (does the org adopt what works?), and Rails (is the production substrate trustworthy enough for sign-off?). Sections 04 through 08 take each in turn. The shorthand is RADAR. The five letters are the five signals; the audience is everyone deciding where to push next.

The reverse-lookup

“When a signal is missing, the organization presents with a named pathology. The pathology is the diagnostic. Faster Horses · PowerPoint AI · Hallucination · Organ Rejection · Pilot Purgatory.”

03

Why naming pathologies matters

A name is half the intervention.

Consultants and frameworks are full of dimensions, scores, and gauges. Most of it slides off senior leaders, not because they're inattentive, but because the dimension is too abstract to act on. “Your data dimension scores 47 out of 100” doesn't tell a VP what to do on Monday. “You are in Hallucination: your agent outputs are indistinguishable from a competitor running on the same vanilla model” tells the VP exactly what to do on Monday. Same data, different framing, very different consequence.

RADAR's load-bearing innovation is not the five letters. It is the reverse-lookup table that converts a missing signal into a named pathology a leader recognizes in their own org inside a minute. Faster Horses, PowerPoint AI, Hallucination, Organ Rejection, Pilot Purgatory: each name is designed to be quotable in a staff meeting and uncomfortable enough that the room stops to think. The score is the score. The name is what moves the program.

04

R · Reimagination · Faster Horses

Are you redesigning the work, or layering AI on the workflow you already have?

Reimagination is the willingness to retire a workflow before deploying AI inside it. The reflex of every operating leader is to take the workflow as given and ask where AI fits. The reflex is wrong. The single highest-leverage move in every AI transformation we have studied is the decision to retire a workflow that AI made obsolete and redesign the next one around the capability AI now provides. Layering on top is not transformation; it is automation of the prior operating model.

The pathology when this signal is missing is Faster Horses, after the Henry Ford line. The org accelerates the workflow that should have been retired and reports the savings as transformation. The savings are real and small. The cost is the forgone chance to absorb the new operating model two years before the competitor that went directly to redesign. Faster Horses is the most common pathology in established enterprises because it is the pathology that doesn't feel like one. Every output looks like progress.

60-second leader test

Map the top twenty workflows by labor hours. Mark each: redesign, retire, or augment. If more than 70% are 'augment', the program is layering, not transforming. The number to drive over the next four quarters is the share of the top twenty marked retire or redesign.
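The audit arithmetic in the test above can be sketched in a few lines. This is a hypothetical helper (the name `reimagination_read` and the return labels are illustrative, not part of the framework's tooling), using the 70% threshold stated in the test:

```python
def reimagination_read(marks):
    """marks: one of 'redesign' | 'retire' | 'augment' per top-20 workflow.

    Returns 'layering' when the augment share exceeds the 70% threshold
    from the 60-second test, else 'transforming'. The number to drive is
    the complement: the share marked retire or redesign.
    """
    augment_share = marks.count("augment") / len(marks)
    return "layering" if augment_share > 0.70 else "transforming"

# 15 of 20 workflows marked augment is a 75% share: layering.
print(reimagination_read(["augment"] * 15 + ["redesign"] * 3 + ["retire"] * 2))
```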

05

A · Agentification · PowerPoint AI

Are AI agents actually deployed and accountable, or stuck in pilot decks?

Agentification is the move from AI as feature to AI as colleague. An agent in this framework is software that owns an outcome (a quota, an SLA, a measurable lag indicator), not a step in someone else's workflow. The named owner of the agent is the person whose performance review reflects the agent's outcome. Without that ownership wiring, an agent is a demo. With it, an agent is a unit of organizational capacity.

The pathology when this signal is missing is PowerPoint AI. Strategy decks proliferate; production agents don't. The giveaway is in the metrics: outcome targets quietly become activity targets (pilots launched, models trained, hours saved per task), and the leader cannot point to a single agent that owns a quarterly outcome. PowerPoint AI is what most large enterprises ship as the first three quarters of their AI program. It is endemic in companies whose default operating language is the deck.

60-second leader test

Count agents in production with a named owner and a quarterly outcome target. If the count is zero, you are at PowerPoint AI regardless of slide volume. If the count is non-zero, the next test is whether the count is rising quarter over quarter; flatlining is the leading indicator that PowerPoint AI is reasserting itself.

06

D · Data and Context · Hallucination

Do your agents have the proprietary context that makes them yours, not generic?

Data and Context is the layer that turns a vanilla model into a specific company's intelligence. It is not a data lake; it is the operational substrate the agent reasons over: customer history, product catalog, decision precedent, support transcripts, internal documents, the things that make the company itself recognizable in the agent's output. Without this substrate, every agent in the company is pulling from the same public model the competitor is pulling from. The moat is the model's, not yours, and switching costs are zero.

The pathology when this signal is missing is Hallucination, used here in a broader sense than the term of art. The agent isn't necessarily fabricating facts. It is producing outputs indistinguishable from competitors using the same models. The customer experience is generic. The strategic asset is non-existent. This is the most expensive pathology to live with quietly because it doesn't look broken; it looks like AI is working. It is working: for everyone, equally.

60-second leader test

Read three live agent outputs side-by-side with the same prompt run on a vanilla public model. If a customer can't tell which is yours, the context layer isn't loaded. The next move is not better prompts; it is investing in the data product that grounds the agent in the company's specific context.

07

A · Absorption · Organ Rejection

When something new works, how quickly does the rest of the organization actually adopt it?

Absorption is the rate at which a working pilot multiplies. The first team makes it work; the second team adopts it; the tenth team adopts it faster than the second. Absorption is the multiplier on every other RADAR signal: without it, even a perfect pilot returns linear value. With it, AI returns compounding value. Absorption is also the easiest signal to fake, because launching a second team that adopts on paper looks identical to launching a second team that adopts in practice.

The pathology when this signal is missing is Organ Rejection. A pilot that worked six months ago in one team has not been picked up by anyone else, and each new team starts from zero. The cause is rarely technical. It is almost always that the absorber (the rest of the organization) has no low-resistance path to adoption: no sponsor, no integration playbook, no incentive aligned to the metric the pilot improved. The failure looks like a tooling issue and is in fact an immune-system issue. (This is where RADAR meets EMI; see section 11.)

60-second leader test

Pick a pilot that worked six or more months ago. Count how many other teams now run it. If the answer is fewer than three, the absorber is rejecting, not absorbing. The lever is rarely a better pilot. It is a sponsor and an incentive that match the work the pilot makes easier.

08

R · Rails · Pilot Purgatory

Trust, governance, and kill-switches: the conditions a leader needs before signing off on production.

Rails is the production substrate every agent has to ride on: evals, logging, kill-switches, accountability lines, escalation paths, the safety architecture that lets a regulated leader sign off on shipping. Rails is the most under-budgeted line item in enterprise AI, and the most over-blamed when programs stall. Leaders blame talent or capability; the actual block, almost always, is that no one drafted the rails the leader needs in order to take the production decision.

The pathology when this signal is missing is Pilot Purgatory: endless POCs that never graduate because leadership won't sign off without governance, and governance is treated as a Phase 2 problem. Pilot Purgatory is the single largest hidden cost in enterprise AI today. It does not show up as failure because nothing was killed; it shows up as time, and the time compounds. Every quarter spent in Pilot Purgatory is a quarter the competitor with rails-by-default is shipping.

60-second leader test

For every active pilot, list its production gate: the specific evidence that ships it. If 'Phase 2 governance review' appears more than once, you've named the bottleneck. The remediation is to fund the rails team to ship a thin governance product (eval harness, kill-switch, audit log) that is reusable across pilots, not a per-pilot review.

09

The matrix

Five signals × healthy / missing / 60-second test

The at-a-glance view across all five signals. The full matrix with rendered table is on the one-pager. The named pathology is what most leaders take away in the first read.

R · Reimagination · When missing: Faster Horses
A · Agentification · When missing: PowerPoint AI
D · Data and Context · When missing: Hallucination
A · Absorption · When missing: Organ Rejection
R · Rails · When missing: Pilot Purgatory
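The reverse-lookup at the heart of the matrix is just a mapping from silent signal to named pathology. A minimal sketch, assuming the signal and pathology names above (the function `reverse_lookup` is illustrative, not an artifact of the framework):

```python
# Signal -> pathology, exactly as the matrix reads.
PATHOLOGY = {
    "Reimagination": "Faster Horses",
    "Agentification": "PowerPoint AI",
    "Data and Context": "Hallucination",
    "Absorption": "Organ Rejection",
    "Rails": "Pilot Purgatory",
}

def reverse_lookup(missing_signals):
    """Name the pathology for each signal that is not transmitting."""
    return [PATHOLOGY[signal] for signal in missing_signals]

# A program with no accountable production agents and no governance
# substrate presents as PowerPoint AI and Pilot Purgatory.
print(reverse_lookup(["Agentification", "Rails"]))
```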
10

The diagnostic instrument

Five minutes. Twenty-five questions. One answer.

The RADAR diagnostic is twenty-five Likert questions, five per signal, calibrated against the named pathology. It returns three things: a composite RADAR score (0-100), an archetype placement (the closest of five archetypes the org's transmission profile maps to), and the named pathology the program is currently closest to. The instrument is designed to be taken by an operator who runs an AI program, not distributed to a workforce as a survey.
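The shape of the computation can be sketched as follows. The actual instrument's weighting and archetype mapping are not published here; this illustrates only the structure under simple assumptions: 25 answers on a 1-5 Likert scale in RADAR signal order, an unweighted per-signal mean rescaled to 0-100, and the pathology read off the weakest signal.

```python
SIGNALS = ["Reimagination", "Agentification", "Data and Context",
           "Absorption", "Rails"]
PATHOLOGY = {"Reimagination": "Faster Horses",
             "Agentification": "PowerPoint AI",
             "Data and Context": "Hallucination",
             "Absorption": "Organ Rejection",
             "Rails": "Pilot Purgatory"}

def score(answers):
    """answers: 25 Likert responses (1-5), five per signal in RADAR order.

    Returns (composite 0-100, named pathology of the weakest signal).
    Unweighted means are an assumption; the real instrument's
    calibration is not reproduced here.
    """
    assert len(answers) == 25 and all(1 <= a <= 5 for a in answers)
    # Mean of each signal's five answers, rescaled from the 1-5 band to 0-100.
    per_signal = {s: (sum(answers[i * 5:(i + 1) * 5]) / 5 - 1) * 25
                  for i, s in enumerate(SIGNALS)}
    composite = sum(per_signal.values()) / len(per_signal)
    weakest = min(per_signal, key=per_signal.get)
    return round(composite), PATHOLOGY[weakest]

# Strong everywhere except Agentification: the diagnostic names PowerPoint AI.
print(score([5] * 5 + [1] * 5 + [5] * 15))
```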

The diagnostic is the entry point. The action is downstream: pathology-specific intervention scaffolds, an 8-week pilot blueprint, and a paired metabolism-and-immunity read (OMI + EMI) for orgs that have already run RADAR and want to understand the absorber side of the equation.

11

RADAR in the Adaptive Org stack

Transmission, metabolism, immunity

RADAR is the third leg of a stack. OMI (Organizational Metabolism Index) measures the organism: how fast the receiver can absorb structural change at all. EMI (Emotional Metabolism Index) measures the immune system: whether absorption is healthy or autoimmune. RADAR measures transmission: whether the AI input the receiver is asked to absorb is even complete.

OMI · Metabolism
How fast the org structurally absorbs change.
Six dimensions. Archetypes from Cosmetic to Adaptive. The receiver-side rate constant.
EMI · Immunity
Whether absorption is healthy or autoimmune.
Five dimensions, five named pathologies. The receiver-side response calibration.
RADAR · Transmission
Whether the AI input itself is transmitting.
Five signals, five named pathologies. The transmitter-side completeness check.

Most stalls are misdiagnosed at the receiver. A program that scores poorly on RADAR will look like it has “adoption challenges” or “culture issues” if you are reading from the absorber side, because the absorber is doing exactly what an absorber does when fed an incomplete signal: nothing. RADAR is the discipline of checking the transmitter before redesigning the receiver.

12

How a VP starts

Three moves that fit in the next 90 days

Run RADAR on your program

Take the diagnostic with your operating leader and your AI program lead. 5 minutes each, 30 minutes to compare. The output is a score, an archetype, and the named pathology your program is closest to. The conversation is the value; the score is the artifact.

Pick the named pathology to fix

Don't fix all five. Pick the most acute pathology and commit one quarter to extinguishing it. Faster Horses → top-20 workflow audit. PowerPoint AI → shipping one production agent with a named owner. Hallucination → standing up the context layer behind a single agent. Organ Rejection → multiplying one pilot to three new teams. Pilot Purgatory → funding the rails team to ship a thin reusable governance product.

Pair it with the absorber read

After RADAR, run OMI and EMI on the same program. Most VPs find that the program is healthier on RADAR than they expected and weaker on EMI than they expected. The absorber side is where the second quarter of work lives.

Run the diagnostic

Five minutes. Twenty-five questions. A score, an archetype, and the named pathology you're closest to.

Author: Rahul Jindal · byrxj.com · jindal.rahul@gmail.com. Stack: RADAR (transmission), OMI (metabolism), EMI (immunity).