LatchKey.ai

Issue #17: Mind The Gap

During my years at JPL, when the media production pipeline was really humming, the first question I asked in every new project meeting was always the same.
It didn't matter what the subject of the video was. A three-minute explainer about a Mars instrument. A five-minute feature on a Europa Lander design. An update on a cold atom experiment aboard the ISS. The scientists and engineers would walk into the pre-production meeting with months, sometimes years, of work we were tasked with compressing into a coherent three-minute story.
They had charts. Data. Diagrams. Visualizations. An entire career of context behind the thing they were excited to share with the world.
And the first thing I would ask was: Who's the audience for this video?
The answer was almost always the same.
…Everyone?
Sure. That sounds generous. Admirable, even. We do want the work to reach as many people as possible. We want the discovery to matter beyond the conference room. We want the general public, the science nerds, the educators, the mission fans, the kids in classrooms, the already-converted, the not-yet-curious.
Everyone.
But in the production of a three-minute video, "everyone" is a trap.
Because everyone does not start from the same place.
A room full of NASA diehards already knows what a spectrometer does. You don't burn forty-five seconds of your three-minute runtime budget defining it for them. But an audience of the newly science-curious? They need that context. Without it, the next two minutes, the actual payload, lands on ears that don't have the scaffolding to hold it. The impact is lost. Maybe the interest, too.
But if we start too far back, the more mission-literate fans, often the most engaged viewers, start to feel like we're moving too slowly or, worse, that we're insulting their intelligence.
The obvious compromise is to split the difference. Aim for the middle.
That almost never works. Everybody gets a little of what they need. Nobody gets enough.
The better move was always to pick your audience deliberately. Not demographically. Not cynically. Specifically.
Identify the archetypal viewer we were really making the piece for. Understand what they know, and don't know, and build around that person's starting point.
Once you've chosen an archetype, you stop speaking in the language of the material and start speaking in the language of the person. You inhabit their demeanor, their frame of reference, their register. The piece gets to have a personality because it knows who it's talking to. That is the language that builds shared meaning.
That single question — who's the audience? — would often halve the time available in a video for new information. Because the gap between what the speaker knows and what the audience knows is almost always bigger than the speaker thinks it is.
This sort of communication gap isn't exclusive to video production, of course. We all fall victim to it regularly without even noticing.
My wife occasionally brings it to my attention when she's excited to recall something that happened in her day. I want to hear it. I really do. I'm ready. I'm seated. I'm emotionally available.
And then she starts.
There are qualifiers. There are characters mentioned. A name I sense I'm supposed to remember. I think I can infer a location. Ah! There was a thing that happened before the thing that happened. There's some color commentary about why this person is like this, except not always, except in this case apparently yes.
I'm making my Twin Peaks face.
After a bit, I'll pause her:
"I really want to enjoy this story as much as you're enjoying telling it. But right now… I still don't know what we're talking about."
Which is my husbandly way of saying: you buried the lede.
After a quick recalibration, she realizes that if she'd just opened with, "I finally talked to Linda about what happened at the pool last week," everything that followed would have had somewhere to land. I'd have had the background. I'd have known the scene. I would understand how to process the supporting cast, the caveats, the tonal shifts. The whole director's commentary track.
Without that key frame, though, my brain is too busy grasping for a footing to follow the plot.
To be clear, this is not a wife problem. I bring my own fully outfitted suite of confusing communication nonsense to the table as a matter of course.
We all do. Because this is a human problem.
We get too close to our own work, our own stories, our own thinking. Small assumptions accumulate until they feel like the floor. We start explaining from the room we are standing in. Meanwhile, the other person is still looking for an unlocked door.
You know how this goes at work. We brief a colleague on a meeting they missed and give them our notes on the deliverable. They listen. Then the deliverable comes back sideways.
Was it them? Were they not listening? Or was it us? Did we start too deep? Did we assume they had more background context?
We can't really know. With people, the view into their thinking, exactly how they arrived at their interpretation, is forever opaque.
There is a name for this space. The distance between what we mean and what they hear. The gap that lives in every conversation, every brief, every production meeting, every story that starts without the opening line.
The Interpretation Gap.
We have been navigating it our entire lives. Intuiting it. Compensating for it. Getting burned by it. We brief. We hope. We find out later.
Sound familiar?
As much as my wife continues to believe that I will eventually learn to read her mind, I still have to ask for the missing context myself. Because, like the rest of us, I am only human.
And then along comes AI.

At this point we've all settled into some regular habits with our chats. We type a prompt. We read the answer. We evaluate. Perhaps we engage in a longer dialog.
And then we move on. For most, that's the whole interaction loop.
But, there's a door in your chat window to a deeper level. Almost every platform has one. It's subtle. Semi-hidden, even, so it's easy to scroll past. But it's there, and it's unlocked.
Most of us never open it.
That makes sense. Your brain skipped it: Looks technical. Probably for developers. One more thing I'd have to learn before I can use the thing I was already using. The word "thinking" alone carries the whiff of a computer science lecture. So we scroll past.
But it's not a lecture. It's a doorway. And what's inside isn't code. It's words.
You can generally find it in your chat thread, just above the AI's reply. Right there on its forehead. And you can click on it.
What's inside, you ask? Its Thinking.
Or "reasoning." Or "thoughts." Or whatever label your tool of choice calls it.
And when we enter, we are not reading its reply. We are reading its translation. We are reading its thoughts.
We see the model deciding what kind of request we made. We see it checking our words against everything it knows about how we work: our preferences, our custom instructions, our prior exchanges. We see it finding the ambiguities we left in our phrasing and quietly making choices to resolve them. We see it weighing constraints we didn't know were in the room: standing preferences, safety parameters, system rules, model habits. We watch it decide what "good" means in this specific moment, without asking us first.
That is the Interpretation Gap, rendered in text, in real time, before the reply ever ships.
But honestly, how useful is that?
If I tried to click on my wife's forehead, I would not need to read her thoughts. She would definitely be sharing them out loud.
But with AI, the click is right there, waiting. Ready to reveal something no audience has ever shown us. Its thoughts. Before its response.
But hang on… what do we even mean by thoughts here? Am I suggesting the model is truly thinking?
A couple of issues ago we talked about how every word we use to describe AI is smuggled in from human experience, because we have no other frame of reference for a thing that has never existed before. But none of those words fit exactly. They are useful, but they are metaphors. If we stopped to hammer out a rigorously vetted position on whether computers can actually "think," we'd never get around to making use of them for anything else.
Legendary physicist Richard Feynman took this on during a Q&A with students all the way back in 1985.
Asked whether machines would ever really think like human beings, he said no. Not because machines would fail, but because imitation is the wrong metric, and the wrong goal. Airplanes fly, he said. But they do not fly like birds. They use different materials, different mechanics, different principles. That's not a failure of flight. It's just how flight works for the machine.
Same idea here.
Whether we decide what the machines do is "thinking" or not, they are never going to think like us. They can't. They don't have a childhood, a nervous system, a spouse, a mortgage, a bad knee, a weird bias for a bad song they heard at the right time in their life.
They have computation. Probabilities. Context windows. Training data. Instructions stacked on instructions. Patterns moving at a speed and scale our brains cannot fathom.
Our thinking is a byproduct of our human experience. You can't separate one from the other.
Marshall McLuhan famously argued that every medium is an extension of ourselves, and that what matters is not just what a medium does, but what it reveals about the people using it. Sociologist Sherry Turkle spent four decades studying humans and computers together. Her work pushed that idea into even sharper focus. Our tools don't just extend us, she argued. They reflect us. The way we talk to them, the assumptions we carry in, the things we reach for. All of it gets handed back.
That is exactly what is happening when we open the door to the thinking trace.
We're not only reading how the machine interpreted our prompt. We're seeing the shape of our own communication, revealed by the gap between what we meant and what it heard.
Do this enough over time, and patterns emerge. That bit of context we always forget to give. The assumptions we bake in without realizing they were assumptions. The places our thinking gets sharp, and the places it gets drunk.
And here is the part that matters for the long game: the Interpretation Gap does not live only in the chat window. Once we start seeing it there, we start seeing it everywhere. In how we brief a team. In how we scope a project. In how we explain what we need to a colleague, a client, a contractor, a barista.
The thinking trace trains the muscle. AI is the first audience in history that shows us the gap in real time. Every time we read one, we are running a rep. And every rep makes the next brief, human or machine, a little sharper.
That is not the AI getting smarter. That is cognitive amplification. Our skill, our instinct, our years of 'reading the room', amplified by a machine that finally shows us the room we are standing in, reflected back to us.
The thinking trace is not a report card. It's a mirror.
Because sometimes the audience we really need to know is ourselves.
Try This
Next time you work with AI, any model, any task, do this before you read the answer.
1. Write the prompt you were going to write anyway.
2. Make sure you're using an extended-thinking or reasoning model, not the fast default. Then expand the thinking section before you read the answer.
3. Read the interpretation first. Ask: what did it think I meant? What job did it think I was hiring it to do? Where did it guess?
4. Now read the answer. Notice where those guesses show up in the output.
5. Rewrite the prompt once, adding only the context the model was missing. Not more words for the sake of more words. Just the missing frame.
6. Run it again. Compare.
That's it.
Three minutes.
You'll feel the difference immediately. Not because the AI became more intelligent. Because the brief became more honest.
And unlike every human audience you've ever had, this one showed you where the gap was.

You Just Don't Understand (Deborah Tannen, 1990): NYT bestseller on why people who care about each other still talk past each other.
Stuck on a problem? Talking to a rubber duck might unlock the solution (The Conversation, 2025): Articulating a problem out loud to an inanimate object forces the brain to organize thoughts logically and identify discrepancies between intent and execution.
Treaty of Wuchale: How a Bad Translation Caused a War (The Collector): One verb — "could" in Amharic, "must" in Italian — turned a diplomatic agreement into a war that killed thousands.
The Illusion of Transparency (The Decision Lab): Cognitive scientists have a name for our certainty that our meaning landed — and the research says it's a wiring issue, not a personal one.
Why I Write (Joan Didion, 1976): "I write entirely to find out what I'm thinking."
Heads Up!
Something New is Coming!
I'm excited to announce the launch of LatchKey Learnings.
Live online classes aimed at demystifying the tasks, tools, and techniques that are rapidly reshaping our work and our lives.
75-minute, fully interactive, small-group sessions (20 max.)
All sessions led by me!
$29 for newsletter subscribers (Regularly $49)
Full details on the first slate of topics coming soon!
In the meantime, reply to this newsletter with the topics you would find most helpful to you. I want to build the curriculum around your needs.
Or just reply "I'm in" to get early access notice.
Private 1:1 tutoring also available — $200/hr. Get in touch.
Thanks for reading,
-Ep
Miss any past issues? Find them here: CTRL-ALT-ADAPT Archive
Know someone still looking for the unlocked door? Forward this. Help them decompress.
Did this newsletter find you? If you liked what you read and want to join the conversation, CTRL-ALT-ADAPT is a weekly newsletter for experienced professionals navigating AI without the hype. Subscribe here: latchkey.ai
