By Edward Meyman, FERZ LLC
In the realm of technological discourse, few phrases have gained as much uncritical acceptance as “AI – First.” This seemingly innocuous slogan has infiltrated boardrooms, strategy documents, and keynote addresses across the technology landscape. It sounds visionary yet demands nothing specific. It promises transformation while avoiding accountability. And it’s fundamentally dangerous to clear thinking about technology’s role in human enterprises.
When “AI – First” Goes Wrong: Real-World Consequences
The abstract problems with “AI – First” become painfully concrete when we examine real failures:
In one telling scenario, a regional hospital network adopted an “AI – First” diagnostic system in its radiology department. Enthralled by the slogan’s promise, executives fast-tracked the rollout and cut radiologist staffing by 30%. Within months, the system flagged a 42-year-old mother of three with what it interpreted as terminal lung cancer—a misclassification of a treatable fungal infection. Fortunately, a second opinion from a remaining radiologist caught the error before chemotherapy began. The AI had simply never encountered that fungal pattern in its training data—something basic validation would have revealed had implementation been guided by rigor rather than rhetoric.
Similarly, a mid-sized financial services firm embraced “AI – First” for loan approvals, replacing its careful multi-factor process with an algorithmically driven system. Six months into implementation, the firm discovered the system was systematically rejecting qualified applicants from certain postal codes – neighborhoods with historically low loan volumes but acceptable risk profiles. The “AI – First” mindset had allowed them to skip the rigorous fairness evaluations that a more measured approach would have included.
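The kind of fairness evaluation the firm skipped does not require exotic tooling. A basic pre-launch audit can simply compare approval rates across groups and flag large disparities for human review. The sketch below is illustrative, not the firm’s actual process; the grouping key, input format, and disparity threshold are assumptions for the example:

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def flag_disparities(decisions, max_gap=0.2):
    """Return groups whose approval rate trails the overall rate by more than max_gap."""
    rates = approval_rates_by_group(decisions)
    overall = sum(approved for _, approved in decisions) / len(decisions)
    return {g: r for g, r in rates.items() if overall - r > max_gap}
```

Run over a sample of historical decisions keyed by postal code, a check like this would have surfaced the systematic rejections before launch rather than six months after.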
These aren’t failures of AI technology itself. They’re failures of the “AI – First” mentality that positions a tool as a panacea, bypassing the crucial work of problem definition, systematic evaluation, and proper human oversight.
The Problem with Tool-Centrism
The fundamental flaw with any “[Tool] – First” mentality is its inversion of the proper relationship between tools and objectives. Tools exist to serve human needs, goals, and values—not the other way around. When we declare “AI – First,” we inappropriately position artificial intelligence as the central organizing principle rather than as one of many potential means to achieve well-defined ends.
Consider the absurdity in other contexts: “Hammer – First” in construction or “Scalpel – First” in medicine. Such framings would rightly be rejected as putting the cart before the horse. No craftsperson begins by selecting a tool; they begin with a clear understanding of the problem at hand.
The Linguistic Trap: How Words Shape Thought
The imprecision of “AI – First” isn’t merely an aesthetic concern—it has concrete cognitive consequences. Decades of research in cognitive linguistics confirm that language shapes how we think and act.
The Sapir-Whorf hypothesis, even in its more moderate forms, suggests that the language we use influences our perception and decision-making. Research by Lera Boroditsky at Stanford has shown that even subtle linguistic differences can shape how people conceptualize problems and approach solutions.
This isn’t abstract theory. In high-stakes environments, linguistic precision is treated as mission-critical. NASA’s mission control uses a carefully developed technical vocabulary with precisely defined terms to prevent life-threatening misunderstandings. Air traffic controllers employ specific phraseology designed to eliminate ambiguity that could result in disaster.
By contrast, “AI – First” introduces dangerous ambiguity at the foundation of technological strategy. When language lacks precision, thinking follows suit—and in complex technological implementations, imprecise thinking creates real hazards.
“AI – First” as a Tool for Dodging Accountability
Beyond its logical flaws, “AI – First” serves a more insidious function: it enables the systematic evasion of responsibility. The slogan’s vagueness is not a bug; it’s a feature.
When an executive declares “AI – First,” they can claim visionary leadership without defining specific goals, metrics, or ethical guardrails. When implementations fail, the same vagueness provides convenient cover—”We were implementing AI correctly; the problem must be elsewhere.”
The slogan shifts accountability away from decision-makers and toward abstract technological forces. This manipulation benefits technology evangelists and vendors while leaving organizations holding the bag for failed implementations.
A hospital administrator who says, “We will reduce diagnostic errors by 50% through carefully validated machine learning systems with radiologist oversight” makes a testable claim for which they can be held accountable. One who simply declares “AI – First for our diagnostic approach” does not.
Systems That Work Every Time vs. “AI – First”
Rather than hiding behind fancy mathematical terms, let’s talk straight: what organizations need are systems they can trust to work correctly every time—regardless of whether those systems use AI.
When a self-driving car approaches a pedestrian crossing, we don’t want it to “probably” make the right decision. We want absolute certainty it will detect the pedestrian and stop—every single time.
This is what we mean by predictable systems. They’re reliable, transparent, and consistently deliver the same output for the same input. “AI – First” ignores this fundamental need by prioritizing a technology approach over the more important question: “Will this system work flawlessly where it matters most?”
The answer often lies not in blindly applying AI, but in designing systems that combine different technologies with appropriate human oversight, all built on a foundation of rigorous testing and verification.
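The “rigorous testing and verification” that predictable systems demand can start with something as basic as a determinism check: run the system repeatedly on the same inputs and confirm the output never varies. This is a minimal sketch, assuming the system under test is callable as a plain function; the harness and run count are illustrative, not a specific product’s test suite:

```python
def is_deterministic(fn, inputs, runs=5):
    """Return True if fn yields an identical output for each input across repeated runs."""
    for x in inputs:
        first = fn(x)
        for _ in range(runs - 1):
            if fn(x) != first:
                # Same input produced a different output: the system is not predictable.
                return False
    return True
```

A check like this is deliberately technology-agnostic: it holds an AI component to the same same-input/same-output standard as a rules engine or a lookup table, which is exactly the point.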
The Innovation Cost: When “AI – First” Narrows Your Vision
“AI – First” doesn’t just lead to implementation failures—it actively stifles innovation by narrowing the solution space. When organizations default to AI for everything, they stop exploring the full range of potential solutions.
Consider a manufacturing company facing quality control issues. An “AI – First” approach immediately pushes toward computer vision systems for defect detection. But what if the better solution is a redesigned production process that prevents defects in the first place? Or a simple mechanical guide that physically prevents assembly errors? Or enhanced training for quality control personnel?
By making AI the default solution, organizations develop tunnel vision. They’re not innovating—they’re limiting themselves to one tool while ignoring the rest of the toolbox. True innovation comes from considering the full spectrum of possibilities, from the simplest manual process to the most sophisticated AI system, and everything in between.
Beyond Empty Slogans: A Better Approach
Organizations genuinely committed to technological excellence would be better served by abandoning the empty calories of “AI – First” in favor of more substantive principles:
- Problem First: Begin with a precise definition of the problem before considering any technology.
- Outcome Focus: Define clear, measurable goals that any solution must achieve.
- Right-Tool Approach: Evaluate multiple potential solutions, both AI and non-AI.
- Trustworthy Systems: Insist on solutions that work reliably every time, especially for critical applications.
- Human-Technology Partnership: Design systems where humans and technology complement each other’s strengths.
These principles require more thought than a simplistic slogan, but that additional cognitive work is precisely the point. Technological implementation should never be reducible to a two-word catchphrase if we are serious about its consequences.
What You Can Do: A Practical Next Step
If your organization is currently operating under an “AI – First” mandate or something similar, here’s a concrete next step:
Before launching any new AI initiative, complete this simple exercise:
- Write down your specific goal in one clear sentence. (Not “implement AI” but something like “reduce order processing errors by 50%.”)
- List three ways AI might help achieve this goal.
- List three ways AI might fail to achieve this goal or create new problems.
- List two non-AI approaches that could achieve the same goal.
If you can’t complete this exercise, you’re not ready to implement AI—regardless of any “AI – First” mandate. Share the results with stakeholders to start a more substantive conversation about tools and objectives.
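The four-step exercise above can be captured as a lightweight template that stakeholders fill in and review before any launch decision. The sketch below is one illustrative way to structure it; the class name, field names, and completeness rule are assumptions for the example, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AIInitiativeBrief:
    goal: str                                            # one clear, measurable sentence
    ways_ai_helps: list = field(default_factory=list)    # at least three entries
    ways_ai_fails: list = field(default_factory=list)    # at least three entries
    non_ai_alternatives: list = field(default_factory=list)  # at least two entries

    def is_complete(self):
        """True only when every part of the exercise has been filled in."""
        return (bool(self.goal.strip())
                and len(self.ways_ai_helps) >= 3
                and len(self.ways_ai_fails) >= 3
                and len(self.non_ai_alternatives) >= 2)
```

Treating the brief as a gate—no complete brief, no initiative—turns the exercise from a suggestion into an enforceable step in the approval process.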
Conclusion
The phrase “AI – First” should be relegated to the dustbin of empty corporate slogans. As practitioners committed to effective technology implementation, we have a responsibility to insist on language that promotes clear thinking rather than obscuring it.
Technology is too important to be reduced to simplistic mantras. AI systems are powerful tools that, when properly understood and appropriately deployed, can solve significant problems. But they must always remain tools in service of human objectives—never the organizing principle around which those objectives are defined.
For organizations truly committed to excellence, the principles should always be “Problem First,” “Human First,” or “Value First”—with AI and other technologies taking their proper place as means rather than ends.
May 14, 2025
© 2025 FERZ LLC. All rights reserved.
No part of this publication may be reproduced, distributed, or transmitted in any form without prior written permission from FERZ LLC.