Looks Good, Thanks

This week's prompt is a template you can customize for any purpose: emails, proposals, financial statements, social posts, scripts, etc.

Your Prompt

I’m using AI to review or help produce work, but I do not want surface-level approval. I want you to assume this output may be weak, incomplete, or wrong in a way that is not obvious at first glance.

Do not tell me whether it is generally “good.” Do not hype me up. Instead, analyze it like a careful reviewer trying to find where polished output is hiding weak reasoning.

For the content I paste below:

- Identify the parts that look finished but may rest on weak logic, untested assumptions, missing context, or shallow analysis.

- Point out where an experienced reviewer would want to go 2 to 3 layers deeper.

- Highlight any number, claim, conclusion, recommendation, or paragraph that should be traced back to its source or logic before being trusted.

- Tell me what I may have accepted too quickly because it sounded right, looked clean, or fit the format I expected.

- For each major issue you find, explain where the error or weakness may be hiding, why it would be easy to miss, and what I should verify, question, or test next.

Be skeptical, specific, and direct. Do not rewrite the piece unless I ask. Make your argument for what you would change, and why.

I've pasted the work below:

Pushing Against The Grain

Have you ever caught yourself approving something you didn’t fully understand?

The numbers tied. The report was clean. The copy looked thorough. Nothing jumped out as wrong.

So you moved on… you shipped.

And the feedback came back… fine.

Nothing broke. Nothing took off. Maybe a few sales, or a “looks good, thanks.”

But nothing that told you what actually worked. Nothing that told you what to repeat, or what to avoid next time.

Just… passable.

It’s getting easier to feel like the work is done. And this will only accelerate.

Not because the work is simpler, but because the output clears the bar faster. The numbers tie, the payroll runs, the paragraph reads clean.

So you assume there was thought behind it… because there used to be.

That’s the problem.

We’re still using the old bar… the final pass. The version of “done” that used to sit at the end of a process.

Now it shows up at the beginning.

This isn’t new. We’ve been moving in this direction for a while.

QuickBooks categorizes transactions, so the books look right. But categorization won’t tell you if an owner’s vacation got buried in expenses, or if something is sitting on the balance sheet that shouldn’t be there.

The payroll software calculates the taxes and files the reports, so payroll feels complete. But if a pay type is set up incorrectly, Social Security and Medicare can be off in a way that still looks normal… even when you try to trace it.
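To make that concrete, here is a minimal sketch of how a mis-configured pay type can silently skew FICA withholding. The 6.2% Social Security and 1.45% Medicare employee rates are real; the pay-type flags and amounts are hypothetical, and real payroll systems involve wage caps and many more rules than this.

```python
# Employee-side FICA rates (Social Security + Medicare).
SOCIAL_SECURITY_RATE = 0.062
MEDICARE_RATE = 0.0145

def fica_withholding(pay_items):
    """Sum FICA tax over pay items, honoring each item's taxable flag.

    pay_items: list of (amount, is_fica_taxable) tuples.
    """
    taxable = sum(amount for amount, is_fica_taxable in pay_items if is_fica_taxable)
    return round(taxable * (SOCIAL_SECURITY_RATE + MEDICARE_RATE), 2)

# Correct setup: a $500 bonus is FICA-taxable alongside $4,000 regular pay.
correct = [(4000, True), (500, True)]

# Mis-configured: the bonus pay type was flagged like a non-taxable reimbursement.
wrong = [(4000, True), (500, False)]

print(fica_withholding(correct))  # 344.25
print(fica_withholding(wrong))    # 306.00
```

Both paychecks show the same gross, the reports still file, and the $38.25 gap looks like just another plausible withholding number… which is exactly why it survives a surface review.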

On the surface… everything works.

AI accelerates the same pattern.

It produces work that feels finished. Fast. The structure is there, the tone is there, the gaps are filled just enough that nothing looks obviously wrong.

So you read it through the old lens.

You don’t pick it apart. You don’t ask what’s missing. You assume the thinking already happened… because it looks like it did.

We’re moving from a world where bad work was visible… to one where average work is indistinguishable at a glance.

And when everything clears that first bar, the feedback changes.

“Looks good.”

“Nice job.”

“Ship it.”

So what does that look like ten years from now?

Teams who have only ever worked on outputs that looked finished. Teams who never had to build something from scratch, never had to chase down why a number doesn’t tie, never had to pass drafts back and forth that weren’t meeting the mark.

Teams who were trained on an inherited definition of done:

If it looks right… it’s done.

If nothing stands out… it’s done.

If it passes a quick review… it’s done.

But the real work was never on the surface.

The real work was the investigation we developed while getting there. The judgment. The discernment. The taste.

How did this number tie?

What assumption is driving this result?

What is this actually saying… and is it true?

Those questions don’t go away.

They just get easier to skip when everything looks like it’s already been answered.

And most people will skip them.

Not because they’re lazy. Because the system teaches “good enough” faster than it teaches professional skepticism.

So the extra time doesn’t go to a second layer of thinking. It goes to more volume, more output, more things that look right.

That’s always been the risk.

Not that the tools get better. We should want that.

But if your profession has always positioned itself around production… and not around the levers that actually change the outcome…

What’s left when no one needs you to produce anymore?
