Who’s Responsible For This?

Today we're going to evaluate a process and think through how another team member could work through it without supervision.

Your Prompt

I want to evaluate how I’m using AI in a real workflow in my business.

First, ask me questions to help identify a specific process where AI is currently being used (or could be used soon). Do not assume the process… help me define it clearly.

Once we’ve identified the workflow, map it step by step.

For each step, break down:

- What AI is doing

- What a human is responsible for

- Where judgment is required

- Where errors, bad assumptions, or low-quality thinking could slip through unnoticed

Then analyze the workflow and identify:

- Where I am relying on ‘it looks right’ instead of actual understanding

- Where different team members might make inconsistent decisions

- Where I am implicitly trusting AI output without enough verification

Next, challenge me:

- What assumptions am I making about how my team will use AI in this process?

- Where are those assumptions likely to fail?

- If a weaker or less experienced team member ran this process, what would go wrong?

Finally, help me define:

- What must be checked or verified before anything goes out

- Where human judgment must override AI

- What standards would need to exist so this process produces work I would confidently stand behind every time

Do not give generic advice. Use my specific workflow and push for clarity, edge cases, and risks I might be overlooking.

Pushing Against The Grain

We're asking the wrong question right now.

“How much of this can AI do?”

Reasonable… but incomplete.

Because the real shift isn’t about how much work gets done. That's going to remain a fast-moving target.

It’s about who carries responsibility for the decisions behind that work.

AI doesn’t eliminate the need for a team…

it changes what the team is accountable for.

Implementation is moving at light speed.

The draft comes faster.

The numbers tie quicker.

The summary reads clean and authoritative on the first pass.

So it’s easy to assume the work is shrinking.

It’s not. It’s moving.

From creating

to judging

From building

to approving

From effort

to responsibility

And that shift doesn’t reduce pressure… it concentrates it.

Because now the question isn’t “Can we produce this?”

It’s “Should we stand behind it?”

That’s a different standard.

And most teams aren't aligned on it yet.

Everyone has their own line.

What they’re comfortable pasting into AI.

How much they verify.

What feels “good enough” to send.

Individually, those decisions might be reasonable.

Collectively, they’re a mess.

Which means the output of your business is inconsistent… even though it looks polished.

And that’s where this stops being about efficiency, and starts being about trust.

Every deliverable that leaves your business now carries an invisible question:

How much of this was generated… and how much will you stand behind?

Your client doesn’t care which tool you used.

They care whether the answer is right.

Whether it applies to them.

Whether you thought about it.

And they’re getting better at spotting when you didn’t.

You’ve probably done this yourself.

You start reading something…

realize pretty quickly it was generated and not really worked through…

And you’re out. Maybe you click away entirely, maybe your eyes just gloss over as you skim.

Not because AI was involved.

Because you can tell no one actually cared enough to think.

That’s the part that should set off alarm bells.

AI doesn’t just make work faster.

It makes it easier to ship something that looks finished… without doing the thinking that used to be required to get there.

And when that happens often enough…

Trust erodes.

I’ve played this game myself, long ago.

Early in my career, I would point to the tax software and say, “That’s the number it gave me.”

I knew my inputs were right.

I just couldn’t fully explain the output yet.

And I could feel that wasn't a good answer.

It took years to close that gap… to understand what the software was actually doing, and to stand behind it without hesitation. To even get to the point of knowing when the software was wrong.

That’s the same shift happening today.

Except now the tool is more powerful… and the gap will take more intention to close. Because it's so easy to stay complacent.

Which is why adoption isn’t really a platform problem.

It’s a standards problem.

Not just your individual standards… your team’s.

Because the future isn’t one person managing a stack of AI tools and catching everything.

It’s a group of people, each using AI,

each making judgment calls,

each deciding what is good enough to send.

And those decisions don’t stay isolated.

They are your work product. Your brand.

Across your team.

Across your clients.

Across everything that leaves your business.

We're only as good as the least-thought-through product that leaves the office.

Which means this only works if the line of responsibility is clear.

Not in your head… but in a way that holds up across the entire organization and thousands of microdecisions.

Because at the end of the day, everything… AI-generated or not… still goes out with your name on it.
