Why the smartest-sounding people in the room are often contributing the least.

Everyone wants to sound smart. Very few people want to communicate.

These are not the same thing, and the gap between them is where most professional conversations go to die.

I’ve been running an informal experiment for years. When someone uses a buzzword confidently, I ask them to define it. Not to embarrass them. Not as a gotcha. I just want to see what happens when someone has to turn a familiar noise into a clear thought.

I started with “Big Data.” Back when every company in America was convinced they needed a Big Data strategy, I’d ask a simple question: “What makes data big? How do you know yours qualifies? Is it volume? Velocity? Variety? All three? And at what threshold does regular data become Big Data?”

Most people would stall. Some would get annoyed, which was more revealing than they realized. Almost nobody had a clear answer. Yet they’d been building strategies, hiring teams, and buying platforms around a phrase they couldn’t define.

Then it was “digital transformation.” Same pattern.

Now it’s “AI” and “agents.”

I’ve heard hundreds of people talk about AI agents in the last year alone. Conferences, meetings, LinkedIn posts, podcasts. Maybe five of those people could give me a working definition of what an agent actually is. Not a textbook definition. Not a vendor pitch. Just their definition. What does it mean to them? What are they actually describing when they use that word?

Most couldn’t answer. Because they weren’t communicating an idea. They were performing familiarity with one.

The 95/4/1 Problem

After years of watching this pattern repeat, I’ve landed on a rough breakdown that holds pretty consistently across every hype cycle:

95% adopt the language without understanding it. They hear a term gaining traction. They see it in headlines, in conference titles, in job postings. They start using it. They nod along in meetings when others use it. They can arrange the vocabulary into sentences that sound directionally correct. But they have no working mental model underneath the words.

This isn’t stupidity. It’s social learning doing what social learning does. Humans are wired to mirror the language of their environment. It’s how we signal belonging. The problem isn’t that people pick up new terms. The problem is that professional culture treats vocabulary adoption as a proxy for understanding, and it absolutely is not.

4% understand the concepts but weaponize the complexity. These are the people who could explain it simply. They choose not to. They’ve learned that jargon is a competitive moat. If you’re the only person in the room who can talk fluently about retrieval-augmented generation or stochastic gradient descent, and you make no effort to translate that into plain language, you’ve created job security through confusion. It’s a rational strategy. It’s also a betrayal of what communication is supposed to do.

1% can define their terms, adjust those definitions when challenged, and use the concepts to make better decisions. These are the people actually moving things forward. You can spot them because they do something the other 99% rarely do: they volunteer definitions without being asked. They say things like “when I say agent, what I mean specifically is…” They’re not threatened by follow-up questions. They welcome them, because clarity is the whole point.

The Incentive Problem

Here’s what makes this so persistent: the system doesn’t punish the 95%. It promotes them.

If you never attempt a clear definition, you never risk being wrong. Vagueness is safe. You can say “we need to leverage AI to drive transformation” in a meeting and nobody will challenge you, because challenging you would require them to define the terms, and they can’t either. So everyone nods. The meeting ends. Nothing was communicated, but everyone performed competence successfully.

Precision, on the other hand, is dangerous. The person who says “here is exactly what I mean by this, and here is what I think it does and doesn’t apply to” is the most exposed person in the room. They’ve staked a claim. They can be wrong. They can be questioned. They can be corrected.

They’re also the only person who actually said anything.

And it gets worse. In toxic cultures, the 1% don’t just risk being questioned. They risk being punished. When someone clearly and simply defines a concept that influential people in the 95% have been confidently misusing, that clarity becomes a threat. It makes powerful people look uninformed. And in organizations where status is built on sounding authoritative rather than being correct, the person who exposes the gap between performance and understanding isn’t rewarded for their contribution. They’re reprimanded for making the wrong people uncomfortable.

This is how organizations calcify. The few people capable of cutting through the noise learn that cutting through the noise is a career risk. So they stop. Or they leave. And the feedback loop breaks even further.

The professional world has evolved a conversational equilibrium where vagueness is rewarded and clarity is punished. The more precise you are, the more surface area you expose. The more vague you are, the safer you feel. And because most people are optimizing for safety rather than communication, the vague middle wins almost every time.

Karaoke Knowledge

Think about the crypto boom. For about 18 months, everyone was a blockchain expert. Your Uber driver had opinions on tokenomics. Your cousin was explaining DAOs at Thanksgiving.

But ask most of those “experts” to explain proof of work vs. proof of stake in a way that actually teaches you something. Not recite the difference. Teach it. Help someone who knows nothing walk away knowing something.

They couldn’t. What they could do was reproduce phrases they’d memorized, arranged in an order that sounded approximately correct. Words in the right sequence, delivering zero understanding.

That’s not knowledge. That’s karaoke. You’re performing someone else’s song. You’ve memorized the sounds. You might even hit the notes. But you didn’t write it, you don’t understand the theory behind it, and if someone changed the key, you’d be lost.
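
To make the contrast concrete, here’s roughly what the non-karaoke version of proof of work looks like. This is a toy sketch, nowhere near a real consensus implementation, but it captures the one idea that matters: finding a valid hash is expensive, and checking one is instant.

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int) -> int:
    """Find a nonce so the block's hash starts with `difficulty` zero hex digits.

    The brute-force search is the point: the burned effort is what makes a
    block expensive to forge. Verifying a proof, by contrast, is one hash.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # anyone can re-hash block_data + nonce to confirm
        nonce += 1

print(proof_of_work("some block data", difficulty=4))  # slow to find, instant to check
```

Proof of stake swaps that burned computation for locked-up capital as the thing that makes cheating expensive. If you can explain why both of those deter bad actors, you can teach it. If you can only recite the phrase, you can’t.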

Professional culture is full of karaoke knowledge. People performing fluency in concepts they’ve never actually internalized. And the standing ovation they get from the equally uninformed audience reinforces the behavior.

This is Dunning-Kruger operating at an industrial scale. The 95% are clustered at the left side of the curve, where confidence is high and competence is low. But they never get the feedback that would push them into the valley of “oh, I actually don’t understand this.” Because in most professional settings, that feedback doesn’t exist. The game doesn’t punish you for not understanding. It rewards you for sounding like you do.

What Games Get Right That Work Gets Wrong

I spend a lot of time thinking about strategic games. Not as entertainment, though they are that. As decision-making environments.

Here’s something games get right that professional culture gets catastrophically wrong: games have honest feedback loops.

In Factorio, you build factories. Complex, interdependent production systems where raw materials flow through processing chains to produce increasingly sophisticated outputs. It’s essentially supply chain management as a game.

Now, you can absolutely go online and copy someone else’s factory blueprint. Paste it into your world. Watch it run. For a while, it’ll work. You’ll feel like you understand what you built.

Then something changes. A resource patch runs dry. Demand shifts. A bottleneck forms somewhere upstream. And suddenly your copied blueprint is failing, and you have no idea why, because you never understood the logic behind the design. You memorized the layout. You didn’t learn the system.
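
The “logic behind the design” is rarely mysterious, either. It’s mostly ratio arithmetic, which is exactly the part that copying a blueprint lets you skip. Here’s a toy bottleneck check, with invented stage names and rates rather than actual game numbers:

```python
# Toy supply-chain bottleneck check. The stages and rates are invented
# for illustration; they are not Factorio's actual numbers.
# Each stage: (machine count, output per machine per second).
chain = {
    "iron_plates": (10, 0.9),
    "gears": (4, 0.5),
    "belts": (6, 0.3),
}

def throughput(stages):
    """A chain can only run as fast as its slowest stage."""
    rates = {name: count * per_machine for name, (count, per_machine) in stages.items()}
    bottleneck = min(rates, key=rates.get)
    return rates, bottleneck

rates, bottleneck = throughput(chain)
print(rates)       # {'iron_plates': 9.0, 'gears': 2.0, 'belts': 1.8}
print(bottleneck)  # belts: the stage where everything upstream backs up
```

Change any number and rerun it, and you know where the chain will choke next. That’s the understanding the pasted blueprint never gave you.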

Hearts of Iron IV does the same thing with military strategy. You can copy a division template from a YouTube guide. It’ll work fine in the specific scenario the guide was designed for. But the moment the situation changes and you need to adapt, you’re exposed. If you don’t understand why that template works, you can’t modify it for new conditions. You’ll get encircled and destroyed, and you’ll blame the game instead of your own lack of understanding.

This is exactly what’s happening in professional settings, just without the honest feedback.

When someone says “we need an agentic AI solution” in a meeting, there’s no equivalent of a Factorio belt backing up or a Hearts of Iron encirclement. Nobody’s factory stops producing. Nobody’s divisions get destroyed. The meeting just continues. The vague language goes unchallenged. The decision gets made on vibes dressed up as strategy.

In games, form without function fails visibly, immediately, and undeniably. You can’t talk your way out of a supply chain collapse. In professional settings, form without function can succeed for years because the feedback loop is broken or nonexistent.

This is one of the core ideas behind what I call Gaming Is Good. Strategic games aren’t just recreation. They’re training environments for a specific and undervalued skill: building real understanding of systems, not just the vocabulary to describe them. The game forces you out of the 95% because the 95% approach doesn’t work when the system talks back.

The Real Payoff of Understanding

There’s a flip side to all of this that makes the effort worth it, and it goes beyond “being a more honest communicator.”

When you actually understand a topic, not just its vocabulary but its mechanics, every tool at your disposal becomes exponentially more powerful.

Take AI itself as an example. The 95% use LLMs to generate text they can’t evaluate. They paste the output, skim it, and ship it. If it sounds right, it must be right. They’re using the tool the same way they use buzzwords: performing competence without verifying it.

The 1%? They use the same tool and get radically different results. Because they understand the domain, they can evaluate what the AI produces. They can spot where it’s wrong. They can take a good-but-incomplete output and connect it to adjacent concepts in ways the tool never would have on its own. They’re not just copying. They’re composing. The AI output becomes raw material that they can shape, extend, and combine because they understand the underlying system well enough to see connections that aren’t obvious on the surface.
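
Here’s one concrete shape that takes in software work: treating generated code as a claim to be tested, not an answer to be shipped. A minimal sketch, where the “model output” is a hypothetical stand-in rather than a real LLM API:

```python
import random

def model_generated_sort(xs):
    # Pretend this was pasted straight from an LLM. It looks right,
    # reads clean, and silently drops duplicates.
    return sorted(set(xs))

def spot_check(candidate, trials=1000):
    """Property test: compare the candidate against a known-good oracle."""
    for _ in range(trials):
        xs = [random.randint(0, 50) for _ in range(random.randint(0, 20))]
        if candidate(list(xs)) != sorted(xs):
            return xs  # a counterexample: the output can't be trusted
    return None

counterexample = spot_check(model_generated_sort)
print("looks correct" if counterexample is None else f"fails on {counterexample}")
```

A few lines of checking, and a plausible-looking output turns out to be wrong. That checking is the evaluation step the 95% skip.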

This applies to everything, not just AI. Blueprints in Factorio. Division templates in Hearts of Iron. Frameworks borrowed from a conference talk. Code snippets from Stack Overflow. Best practices from a consulting deck.

For the person who doesn’t understand the domain, borrowed material is a ceiling. It’s the best they can do, and they can’t adapt it when conditions change.

For the person who does understand, borrowed material is a floor. It’s a starting point they can build on, remix, challenge, and improve. Understanding turns copying into leverage. And that leverage compounds, because every new connection you make deepens the understanding that lets you make more connections.

This is the part nobody talks about when they encourage people to “just use the tools.” The tools are incredible. But they’re incredible in proportion to your understanding of the domain they’re operating in. A chainsaw in the hands of an experienced woodworker is transformative. The same chainsaw in the hands of someone who’s never worked with wood is just dangerous.

From Buzzwords to Decisions

This all connects to something I’ve been building toward for a while in my work around Decision Systems Management.

The fundamental unit of business value isn’t data. It isn’t AI. It isn’t any specific technology. It’s the decision. Everything else is infrastructure in service of making better decisions more consistently.

But here’s the thing: if you can’t define the terms you’re using, you can’t define the decisions you’re trying to improve. And if you can’t define the decisions, you’re not optimizing anything. You’re just adding complexity and calling it progress.

I’ve seen this firsthand. Teams that can’t clearly articulate what “optimization” means in their context. Leaders who say they want “AI-driven decision-making” but can’t tell you which specific decisions they want to improve, what “better” looks like for those decisions, or how they’d measure improvement.

The buzzword problem isn’t a communication problem in isolation. It’s a decision quality problem. Fuzzy language produces fuzzy thinking. Fuzzy thinking produces fuzzy decisions. Fuzzy decisions produce results that nobody can evaluate because nobody defined what success looked like in the first place.

Defining your terms is the first act of intellectual honesty in any professional context. It’s also the first requirement of building anything that actually works.

The Challenge

So here’s what I’d ask you to do, and I’m asking genuinely, not as a rhetorical device.

Pick the buzzword you use most. AI. Agents. Optimization. Strategy. Digital transformation. Whatever it is.

Write a two- or three-sentence definition. Your definition. In your own words. Not something you’d copy from a vendor’s website or a Wikipedia article. What does it mean to you? What are you actually describing when you use that word?

If you can do it clearly, you’re in good shape. You’re probably in the 1%, and you should keep pushing for clarity every chance you get.

If you struggle, that’s actually great news. It means you’ve found the exact edge of your understanding, and now you know where to push. The valley of Dunning-Kruger is uncomfortable, but it’s where real learning begins.

But if you can’t define it and you keep using it anyway? You’re not communicating. You’re performing. And the audience clapping for you is performing too.

The smartest-sounding people in the room are often contributing the least. The ones willing to say simple, clear, precise things, even when it makes them look less sophisticated, are the ones actually moving the conversation forward.

Stop trying to sound clever. Start trying to communicate.

If this resonated, find me on LinkedIn, where I write about Decision Systems Management, gaming as a decision-making tool, and why most of what passes for strategy is just vibes in a slide deck.