Above is a version of my talk given at the OKC Tech++ event at The Verge OKC.
The AI Gap
I have to be honest: I’ve resisted becoming an "AI talking head." I’ve held off writing or speaking publicly on the topic, because AI is, in many ways, still an enigma: a vast, fast-moving, and emerging technology.
AI drives cars. It automates emails. Just yesterday, I was on a call where an AI assistant took notes, summarized discussion points, and delivered follow-ups. AI plays therapist, matchmaker, news curator, recipe helper, and ghostwriter. It helps me code websites and applications for clients. It even created my LinkedIn profile photo—which my daughter says looks more like skinny Sam with a big forehead.
AI is the new thing: the new buzzword, the new pitch deck. It is everywhere, doing everything, all at once. But let’s not pretend it’s brand new. AI has been with us for decades under another name: Machine Learning. Algorithms and queries help us clean, filter, categorize, and make sense of data. If This Then That (IFTTT) logic has long been the way we work through complex automation and optimized workflows using some form of algorithmic/artificial intelligence.
The reason for the renewed excitement is understandable. Large Language Models and generative AI have brought us closer to the sci-fi dream—closer to Data from Star Trek, C-3PO, R2-D2, Rosey the robot maid, or Will Robinson’s guardian bot. My personal favorite? Isaac, the Kaylon emissary aboard The Orville.
We are, in a sense, living out every young nerd’s fantasy: interfacing with a potentially conscious, maybe sentient system of bits, wires, nodes, networks, and memory chips. We could sidebar here on the topic of consciousness and meta-awareness, but we’ll save that for a later essay.
The Philosophical Gap
It seems that, at this moment in time, we are entering the next great wave of human exploration. Some believe it leads to utopia. Others, to extinction or some other type of dystopia. I’m not here to speculate on human extinction at the hands of AI. We’ll leave that to the always entertaining screenwriters and science-fiction authors (of whose work I am a connoisseur).
I want to focus instead on where AI fits in the broader context of truth and reality. Maybe I’m leaning too much into my rather quixotic educational background, with a graduate degree in Business and another in Biblical Studies, but with these two influences I often find myself interpreting current trends and signals through the larger interconnected web of Western Christianity and Western Capitalism. In this instance, AI can’t seem to fit into a clean four-cornered box; instead it seeps into all types of classical educational subjects, touching on philosophy, religion, history, and the humanities. Western society is obsessed with obtaining knowledge and truth, utilizing our systems of economics and belief to discover it.
In Capitalism, we seek knowledge that helps us create wealth in the short term; in Christianity, we seek truth that will set us free from earthly constraints into the riches beyond this short-lived existence.
As a species, we are obsessed with truth. We chase it, shape it, declare it, and—unfortunately—manipulate it to gain an advantage.
You could say truth was one of America’s earliest competitive advantages: the idea that you could find truth here, in these lands, and under our laws.
From Luther’s 95 Theses in 1517 and the Reformation that followed, to the First Great Awakening of the 1730s and 1740s, we see a historical throughline: the personalizing of truth via the Gutenberg press, which printed Bibles and, more impactful still, Bible commentaries: opinions (doctrines) from unsanctioned authors.
If the printing press spread truth, the Awakening made it intimate and individualized, impacting our forefathers, who wrote in the Declaration of Independence in 1776:
“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”
This country was founded on the premise of natural rights and self-governance, mirroring Protestantism’s fracture from Catholicism, which removed the intermediary between God and Man. With this new doctrine of faith, you could know God directly, and know Truth intimately.
Then came postmodernism. If Protestantism believed in one absolute truth, postmodernism introduced relativism: the idea that truth is relative, subjective, and dependent on the individual, the culture, and the moment.
Reality, then, is made of many truths, many interpretations of what is and was right. Truth is no longer one absolute reality; it is an amalgamation of many different realities, each asserting its own truth.
Let me show you what I mean with a simple math problem:
1 + 1 = 2, right?
Well… sometimes. Let’s take another perspective: if it rains and you have two puddles, what happens when they combine?
1 puddle + 1 puddle = 2 puddles, right?
Nope. 1 puddle + 1 puddle = 1 big puddle.
In this example, 1 + 1 = 1.
If math doesn’t math, what happens in the age of AI? Do we understand whose perspective we are aligning to? Are we seeing the problem within the same reality? Or are the foundational truths underneath the model not the right truths?
Whose truth—China’s truth? Russia’s? Facebook’s? Google’s? Elon’s? Sam Altman’s?
These models are modern-day oracles: they predict based on the data they’ve consumed. An AI model’s dogma—its founding set of principles, its "truth"—is shaped entirely by its training and its trainer.
Just like the Great Awakening led to a proliferation of doctrines within the United States, we are witnessing an explosion of AI models, each with its own worldview, its own rules, and its own sense of history and truth.
AI is not one truth. It is an amalgamation of many truths—each carrying historical baggage, cultural bias, virtues, and blind spots. In a democratic state, I would expect a diverse set of AI models, as democracy thrives on diversity of thought, language, and tradition.
Authoritarian systems will likely opt for control. Centralized media, curated history, managed reality. And therefore, a less diverse, maybe even singular AI model.
The Technological Gap
If this is true… AI is only as GOOD or BAD as the data we feed it. What can we do, as builders, to narrow this gap?
If our models are built on our understanding of truth, then our intent becomes the fulcrum by which we measure success.
If we build AI with the same spirit as the Declaration of Independence—believing all people are created equal and endowed with unalienable rights—then the solution is not just a better codebase. It's better data, better processes, and better intentions (not just capitalistic ones).
Where do we begin?
If you break your hip, leg, or back—who do you want to fix it?
A) Your family doctor (a generalist)
B) An orthopedic surgeon (a specialist)
A generalist might recognize the problem. They may offer a short-term fix. But the specialist has the depth of expertise to solve it. You and I would both pick the orthopedic surgeon to operate on a broken bone.
Now take this sentiment into our conversation on AI (here I’m talking about Generative Large Language Models, like ChatGPT, Gemini, Grok, DeepSeek).
Generative AI models—LLMs—are generalists. They provide answers across a wide surface area. But they don’t always have the depth.
SLMs—Small Language Models—are the specialists. They are curated. Controlled. Tuned to a specific domain, trained on relevant experience.
At Stripe’s conference last year, I listened to Patrick Collison interview Jensen Huang, the CEO of NVIDIA. One line from that conversation stuck with me.
“It would be great to have super models that help you reason about things in general, but... For us, for all companies that have very specific, domain-specific expertise, we’re going to have to train our own models. And the reason for that is because we have a proprietary language. That difference between 99% and 99.3% is the difference between life and death for us. So it’s too valuable to us.”
- Jensen Huang, CEO and Founder of NVIDIA
That’s the AI Gap—the technological gap is the missing specialist.
LLMs need more SLMs.
Domain-specific, proprietary, copyrighted, contextualized data is what will transform AI from a novel toy into an industry-shaping tool.
I’ve been working on an SLM for coffee, but there are countless other domains still waiting for their specialist. What domain are you an expert in? What domains need a facilitator of truth? How can we build a bridge that closes the gap between the generalists and the specialists?
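One way to picture that bridge is a router that hands domain questions to a specialist and everything else to a generalist. The sketch below is purely illustrative: the stub functions and the keyword-matching rule are my assumptions, standing in for real LLM/SLM API calls, not any particular product.

```python
# A minimal sketch of generalist/specialist routing.
# All names here are hypothetical stand-ins for real model APIs.

def generalist_answer(question: str) -> str:
    """Stand-in for a broad LLM: wide surface area, shallow depth."""
    return f"[LLM] general answer to: {question}"

def specialist_answer(question: str, domain: str) -> str:
    """Stand-in for a domain-tuned SLM: narrow scope, deep expertise."""
    return f"[{domain} SLM] expert answer to: {question}"

# Each specialist advertises the vocabulary of its domain.
SPECIALISTS = {
    "coffee": {"roast", "espresso", "grind", "extraction", "brew"},
    "orthopedics": {"hip", "femur", "fracture", "joint", "bone"},
}

def route(question: str) -> str:
    """Send the question to a specialist when its domain vocabulary
    matches; otherwise fall back to the generalist."""
    words = set(question.lower().split())
    for domain, vocab in SPECIALISTS.items():
        if words & vocab:
            return specialist_answer(question, domain)
    return generalist_answer(question)

print(route("what grind suits espresso"))
print(route("who wrote the declaration of independence"))
```

A production system would replace the keyword match with a learned classifier and the stubs with actual model calls, but the shape of the bridge is the same: generalists for breadth, specialists for depth.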
Just like Jesus said, it’s not so much about finding truth as it is about living it, building it, and contributing toward truth in everything that we build.