You’re Reviewing The Wrong Thing

This week’s prompt isn’t about getting better output. It’s about deciding where you’re willing to take responsibility for it.

Your Prompt

I want to evaluate how I’m currently using AI in my workflow.

Start by asking me to describe one real task I’ve recently used AI for (or plan to use it for).

Then help me break that task into three phases:

1. Before the work (setup, context, instructions)

2. During the work (generation, iteration)

3. After the work (review, approval, delivery)

For each phase, ask me:

- Where is the human currently involved?

- What specifically are they responsible for?

- What would happen if they weren’t there?

Then push me further:

- Which parts of this process am I reviewing out of habit rather than necessity?

- Where am I over-verifying just to feel safe?

- Where could I shift responsibility from reviewing outputs to improving the system instead?

Finally, help me redesign the process:

- Where should the human actually sit if the goal is to scale this?

- What mistakes am I willing to tolerate?

- What would I need to put in place to stand behind the system instead of every individual output?

Challenge my assumptions. Don’t accept vague answers. Force me to be specific.

Pushing Against The Grain

Everyone says we’ll still need a human in the loop.

Safe answer. Responsible answer. Signals you’re not about to let a chatbot run payroll while you book a flight to Cabo.

But the real question isn’t whether a human should be in the loop.

It’s who decides where they go.

And right now… that decision isn’t being made by the people building the systems.

It’s being made after something goes wrong.

Take driverless cars.

The technology is improving. Arguably already safer than the average human driver. Millions of miles… most of them uneventful.

But that doesn’t matter.

Because the standard isn’t being set by the miles that worked.

It’s being set by the one that didn’t.

One accident becomes a headline. A lawsuit. A referendum on the entire system.

So we respond the only way that protects us…

Put a human back in the seat. Hands on the wheel. Eyes forward. Ready to take over at any moment.

Not because it’s the most efficient design.

Because it’s the most defensible one.

We’re seeing the same thing play out with Generative AI.

An attorney submits a brief citing a fake case. Advisory firms give guidance that doesn’t hold up.

And the reaction isn’t…

“Where should the human have been in that system?”

It’s… “Why didn’t you catch this?”

No one writes headlines about the advice that didn’t collapse. The returns that didn’t get challenged. The thousands of daily decisions that worked.

Different question. Different standard.

We’re not asking whether the system was designed responsibly.

We’re enforcing an expectation that every output should have been verified anyway.

100% review… after the fact.

We didn’t reject AI.

We rejected system-level accountability.

We pushed responsibility all the way back down to the output.

Check everything. Every time.

Because that’s what holds up when someone asks, “Who’s at fault?”

And that’s why we're hearing about AI fatigue already.

Because operationally… that standard doesn’t scale.

“AI isn’t saving me any time.”

Of course it isn’t.

You didn’t remove work… you relocated it to the only place that feels legally safe.

Review.

Less creating. More verifying.

And if you swap in 40 hours of review time for 40 hours of creation time… nothing changed.

You just built a faster way to generate work… that still requires full manual validation on the back end.

That’s not leverage.

That’s mid-level management… without the human interactions that make it worth doing.

Meanwhile, in other sectors… we’ve already figured out a different tradeoff.

Manufacturing doesn’t inspect every unit.

Not because defects don’t matter.

Because they’ve decided where responsibility lives.

In the system.

In the controls. The tolerances. The process design.

And when something slips through…

They don’t pretend every unit should have been individually inspected.

They fix the system.

Same underlying idea.

Different liability model.

And that’s the tension showing up in professional services right now.

Because we’ve always lived at the output level.

Signing off on 100% because we reviewed 110%.

That standard made sense when humans produced everything, when prep took more time than review.

It breaks when systems do.

Not because the work is worse…

Because the expectation didn’t move with it.

So now we’re stuck between two models:

One where you sign off on every output… and stay the bottleneck.

And one where you sign off on the system… and accept that not every miss is caught in real time.

It’s the same sticking point that has always separated firms that can build systems and scale a team… from firms that stay trapped in silos around one professional reviewing everything.

That’s not a technical decision.

That’s a risk decision.

And more importantly…

A reputation decision.

Because stories of the failures will travel.

The successes won’t.

So the pressure will always push you toward over-reviewing.

Toward keeping the human everywhere.

Toward defending against the visible mistake… even if it kills the invisible upside.

Because your name is still on the work.

That hasn’t changed.

What’s changing is what that signature means.

It can mean you verified everything. Safe. Defensible.

Or it can mean you stand behind the system that produced it.

Those are not the same promise.

One says you will catch every error.

The other says you have decided which errors are acceptable so you can move faster.

One leads to leverage. The other leads to you in more loops. Forever.
