Is AI Failing? Or Are We Using It Wrong?
You've probably seen a wave of negative headlines this week about AI. They're all getting a lot of clicks, and they're all pointing back to one source: a "State of AI in Business" report from MIT that was published back in July.
The main takeaway that's getting all this new attention? "High Adoption, Low Transformation." Across the board, companies are investing billions... and seeing very little in return.
For many in the media, this is a “gotcha” moment, a chance to finally write the headline, "See? AI doesn't work!"
But they're missing the real story.
The MIT study itself is more nuanced than that, and most headlines miss the nuance entirely. The researchers found that 95% of companies are failing not because AI doesn’t work, but because they’re using it the wrong way: their tools aren’t learning, adapting, or integrating into real workflows.
The problem isn’t that AI is failing. It’s that we’re applying it like a faster calculator, and that’s not what it is.
We're Using the Wrong Tool for the Job
For 50 years, we've used "deterministic" computers. You give them a precise input (like 2+2) and you get a single, correct, factual answer: the output is fully determined by the input.
Generative AI (GenAI) is not a calculator. It's a "probabilistic" engine that “generates” content. ChatGPT, Gemini, Copilot, Claude, etc. are all GenAI tools that provide results using probabilities.
In simple terms: it's an incredibly advanced prediction machine. It doesn't calculate the answer; it predicts the most statistically likely next word. This makes it a powerful tool for subjective, "fuzzy" tasks, but a poor tool for objective, factual ones.
We get frustrated when it "hallucinates" (makes things up), but we're just asking the wrong questions. We're asking a creative brainstorming partner to do our accounting.
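To make "prediction machine" concrete, here's a toy sketch of the core idea. This is not how production LLMs work (they use neural networks trained on vast corpora, not word counts), but it shows the principle: the model doesn't look up an answer, it picks the statistically most likely next word based on what it has seen before. All words and counts here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data"; a real model trains on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, not a 'correct' one."""
    counts = following[word]
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, beating "mat" and "fish"
```

Note that the model confidently answers "cat" even though "mat" and "fish" were also valid continuations. That is the probabilistic behavior in miniature: great for fuzzy, open-ended tasks, unreliable as a source of single factual answers.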
Stop "Using AI," Start Solving Problems
The study is right: companies with a vague goal to "Use AI" are failing.
Why? Because "Use AI" isn't a goal. It's like a 1990s goal to "Use The Internet."
A successful AI strategy doesn't start with the tool. It starts with a real, human bottleneck. Instead of "Use AI," a good strategy targets specific, high-value goals that are uniquely suited to AI's probabilistic power:
Bad Goal: "Use AI."
Good Goal: "Unlock Our Trapped Knowledge." (e.g., "Read all 5,000 of our customer support tickets from this quarter and tell us the 3 biggest themes.")
Bad Goal: "Make our team use AI."
Good Goal: "Make Better Decisions." (e.g., "You are a 'red team' critic. Read our new marketing plan and tell us the 3 biggest flaws.")
Bad Goal: "Get everyone on ChatGPT."
Good Goal: "Explore Diverse Perspectives." (e.g., "Draft three versions of this email: one for a skeptical CEO, one for a busy manager, and one for a new intern.")
The Missing Piece: The Anchor
Here's the most critical part. A probabilistic model, left to itself, is just a very confident guesser.
MIT’s researchers call the root cause the “learning gap”: the divide between tools that adapt to their environment and those that don’t.
In practice, closing that learning gap means giving AI something real to learn from — your company’s own knowledge, context, and values. That’s what I call anchoring.
To be a truly effective business partner, AI must be grounded in your organization's reality.
If you just let it pull answers from the public internet, it will give you generic, average, and sometimes just wrong answers.
To get real transformation, the AI's "brain" must be anchored to your mission, your values, your strategic goals, and your internal knowledge base.
When your AI understands your company context, it stops being a public-facing gimmick and starts being a powerful internal partner, one that can write in your voice, access your data, and make recommendations that actually align with your strategy.
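In practice, this anchoring pattern is often implemented as retrieval-augmented generation (RAG): before the model answers, you retrieve relevant internal knowledge and put it into the prompt. Here's a minimal sketch under stated assumptions — the document names, contents, and keyword-overlap scoring are all invented for illustration (real systems typically rank documents with vector embeddings, not word overlap):

```python
# Hypothetical internal knowledge base: the "anchor" the article describes.
company_docs = {
    "refund-policy": "Refunds are issued within 14 days for annual plans.",
    "brand-voice": "We write in plain language and never overpromise.",
    "q3-strategy": "Q3 priority: reduce the support ticket backlog.",
}

def retrieve(question, docs, top_k=1):
    """Rank docs by shared words with the question (toy stand-in for embeddings)."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_anchored_prompt(question):
    """Ground the model in company context instead of the public internet."""
    context = "\n".join(retrieve(question, company_docs))
    return f"Company context:\n{context}\n\nQuestion: {question}"

print(build_anchored_prompt("What is our refund policy for annual plans?"))
```

The design point is simple: the model's generic "guessing" machinery stays the same, but the prompt now carries your reality, so the most probable answer is also the one that matches your business.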
The GenAI Gap isn't between companies that use AI and those that don't. It's between companies that are just tossing a generic tool at their team... and those that are building a strategic, grounded, and purposeful AI framework.
Want to build your framework?
The MIT report warns that the window for crossing the GenAI Gap is rapidly closing because large enterprises are already locking in adaptive systems that learn from their own data.
If you're stuck on the “low transformation” side of the divide, you’re not alone. Most organizations are. The difference is whether you close the learning gap by anchoring your AI in your own data and goals.
That’s where real transformation starts, and that’s what we help organizations do.