
Dunning-Kruger as a Service

Dylan Moore
· 6 min read

The 30% Knowledge Problem

We've productised the Dunning-Kruger effect. You can subscribe to it monthly now.

I've been watching this unfold for a while. Someone asks ChatGPT about database indexing and walks away knowing that indexes make queries faster and B-trees are involved somehow. They now know more than they did ten minutes ago. They also know just enough to be dangerous in a production environment.
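To make that concrete, here's the kind of edge the ten-minute version skips. A minimal sketch with an invented schema, nothing to do with any real system:

```python
import sqlite3

# A minimal sketch (invented schema) of the gap between "indexes make
# queries faster" and knowing where the edges are: the index exists, but
# wrap the column in a function and SQLite quietly stops using it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("CREATE INDEX idx_users_email ON users (email)")

for sql in (
    "SELECT * FROM users WHERE email = 'a@example.com'",         # SEARCH ... USING INDEX
    "SELECT * FROM users WHERE lower(email) = 'a@example.com'",  # SCAN users
):
    plan = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    print(plan[0][-1])  # the plan detail: index search vs full table scan
```

Both queries look fine. One of them ignores the index entirely. The 30% version of you can't tell which.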

The problem isn't that people are using LLMs to learn. I use them constantly. The problem is the output feels like understanding. It's fluent, structured, confident. It reads like something a knowledgeable person would say. So you pattern-match your way into thinking you've actually learned the thing, when what you've really done is consumed a very convincing summary.

Summaries aren't knowledge. They're the menu, not the meal.

Vibe Everything

"Vibe coding" entered the lexicon recently and people treat it like it's just a software thing. It's not. It's everywhere now. Vibe lawyering. Vibe doctoring. Vibe financial planning. People prompting their way to surface-level fluency in every domain and mistaking the output for expertise.

I've watched people copy-paste legal clause analysis from ChatGPT into actual contracts. I've seen founders make architectural decisions because Claude said it was a good idea. I've seen people on Twitter explain complex geopolitical situations with the confidence of a foreign policy analyst because they spent fifteen minutes with a chatbot. The tech industry just noticed it first because we're the ones building the bloody thing.

Everyone's a domain expert on demand. Nobody did the reading.

Confidence Without Reps

Real knowledge has a texture to it. You get it from getting things wrong, repeatedly, and building an intuition for where the edges are. The scar tissue is the knowledge.

LLMs skip the entire failure loop. You get the answer without the pain. The result is people who can articulate a position but can't defend it. They can explain a concept but can't apply it. Ten thousand hours gets collapsed into ten minutes, and people genuinely believe they got the same result.

They didn't.

The Dangerous Middle

There are three kinds of people in any domain:

The person who knows nothing. They know they know nothing. They defer to experts, ask questions, stay humble. They're fine.

The person who knows everything (or close to it). They are the expert. They know the landscape, the trade-offs, the history of why things are the way they are. They're fine too.

Then there's the person at 30%. They've got just enough vocabulary to participate in the conversation. They know the key terms. They can pattern-match on the right phrases. They sound like they belong in the room. But they don't have the depth to know when they're wrong, and critically, they don't have the self-awareness to realise they're at 30%.

LLMs are a 30% factory. That's exactly where they put you. You walk away knowing the shape of the answer without knowing its substance. And because the shape looks right — well-formatted, articulate, uses the correct terminology — you have no signal that you're missing the other 70%.

Everyone's a Senior Now

In software this is getting particularly painful. Juniors are shipping code they can't debug. They can generate it, prompt for it, get something that passes the tests. But when it breaks in production, they're staring at a stack trace they've never seen before because they never learned what the code was actually doing. They learned what it looked like, not how it worked.

Founders are making architectural decisions based on a conversation with an LLM. "We should use event sourcing" — mate, do you understand the operational overhead of that? The debugging complexity? The read model synchronisation issues? "No, but Claude said it was good for our use case." Right.
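For what it's worth, the overhead is real. Here's a minimal sketch, with invented event types, of the bit the chatbot summary skims past:

```python
from dataclasses import dataclass

# A minimal sketch (invented event types) of the part the summary glosses
# over: with event sourcing there is no "current state" row to look at.
# State is a fold over the whole history, and every read model is code
# you now own, keep in sync, and rebuild when its shape changes.

@dataclass
class Deposited:
    amount: int

@dataclass
class Withdrawn:
    amount: int

event_log = [Deposited(100), Withdrawn(30), Deposited(50)]

def balance(events) -> int:
    # Read model: replay every event from the beginning of time.
    total = 0
    for event in events:
        total += event.amount if isinstance(event, Deposited) else -event.amount
    return total

print(balance(event_log))  # 120 -- and debugging means replaying history too
```

Three events is cute. Three hundred million events, plus a projection whose shape changed last sprint, is an operational problem you now own.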

Managers now "understand the technical trade-offs" because they pasted the RFC into ChatGPT and got a summary. They don't understand the trade-offs. They understand a description of the trade-offs. These are not the same thing.

The levelling effect of LLMs isn't upward. Nobody's getting smarter. What's happening is a flattening of perceived competence. Everyone sounds like they're at 70% and nobody can tell the difference between someone who genuinely is and someone who's been doing AI-assisted cosplay.

The Ego Problem

None of this would matter if people had the ego control to say "I used an LLM to get a starting point, but I don't actually know this topic well." But almost nobody says it. The output felt like understanding. Your brain goes: "I clearly understand this. Look at how well I can explain it." The fact that you're just replaying something else's explanation doesn't register.

Knowing that you don't know something is a skill. LLMs don't teach it. They actively undermine it. Every interaction makes you feel slightly more knowledgeable than you actually are. Multiply that across millions of people, thousands of interactions per person, across every domain, and you've got an epistemic crisis that makes social media misinformation look quaint.

The Fiverr Line

Planes mostly fly themselves. So why don't we have vibe pilots? Because when something goes wrong at 35,000 feet, you need someone who actually understands how the aircraft works. The autopilot doesn't replace expertise. It assumes it.

Why do we think software is different? You're vibing your way through database migrations, backups, security, infrastructure. Misconfigure your Firebase bucket like the Tea app did and you leak 72,000 user images, including government IDs, plus 1.1 million private messages, all because someone left a storage bucket wide open. "Not life or death" doesn't mean "no consequences."
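For the curious, here's roughly what "wide open" means in practice. A minimal sketch from the outside, assuming the standard Firebase storage listing endpoint; the bucket name is hypothetical:

```python
import json
import urllib.error
import urllib.request

# A minimal sketch of the Tea-app class of mistake, seen from the attacker's
# side: if storage rules are left wide open, the standard Firebase listing
# endpoint will enumerate the bucket for anyone, no auth required.
# The bucket name here is hypothetical.
BUCKET = "some-app.appspot.com"
url = f"https://firebasestorage.googleapis.com/v0/b/{BUCKET}/o"

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        listing = json.load(resp)
    print(f"Wide open: {len(listing.get('items', []))} objects listed anonymously")
except urllib.error.HTTPError as err:
    # A 401/403 is the answer you want: the rules are actually doing their job.
    print(f"Locked down (HTTP {err.code})")
```

That's the entire attack. No exploit, no auth bypass, just a GET request against rules nobody read.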

And plenty of software is life or death. Therac-25 killed cancer patients because of a race condition. Boeing's 737 MAX fell out of the sky twice — 346 people dead — because MCAS relied on a single sensor. Knight Capital lost $440 million in 45 minutes because old test code was left running during a deployment.

The distinction isn't AI vs no AI. It's expert with AI vs AI without expert.

A senior engineer using Copilot is like a pilot using autopilot — they know when to trust it, when to override it, and what to do when it fails. A vibe coder using Copilot is like someone who watched a YouTube video about planes trying to fly one because the autopilot "mostly handles it."

AI doesn't replace understanding. It amplifies it. If you understand nothing, it amplifies nothing.

DKaaS

LLMs aren't the problem. They're tools. The problem is the complete absence of intellectual humility in how people interact with them. The tool is neutral. The user rarely is.

We've democratised access to information and completely failed to democratise the wisdom to know what to do with it. That's not a technology problem. It's a human one — ego, with a much more effective delivery mechanism.

Do the reps. Learn the edges. Earn the scar tissue. Or don't — there's always Fiverr.

Written by Dylan Moore

Self-taught developer since age 13. Sold first software company at 16 for $60K, second for mid-six figures. Founded multiple ventures. Currently founding developer at PodFirst.
