
Issue #18: Living In The Abstract

I have noticed an argument cropping up with increasing frequency lately. We've all heard it. Some of us have used it. AI is making us dumber. We're outsourcing our thinking. Atrophying our ability to reason, to write, to hold a thought without reaching for a chatbot.
The future of independent human thought is doomed.
The online debates are heated and relentless. The rage-bait is real. And in the abstract, the concern sounds reasonable enough.
But I see it differently. Let me explain.
What do you call that thing you live in? A house, right? Or an apartment? A domicile?
I bet a thing you never call it is an organized assembly of wood, glass, concrete, wire, and nails working together to form a protective barrier between you, your stuff, and the elements. Because that would be awkward to say with any regularity. And it would make inviting friends over a monologue every time.
Instead, we just call it a house.
The moment the word leaves our mouths, all those structural details — the frame, the wiring, the paint, the plumbing — they dissolve into the background. Made invisible behind our concept of a "house". What it means to us. The role it plays. The people and things it protects.
We aren't thinking about the nails and the drywall and the pipes running under the foundation. We're thinking about the larger function it all serves. The ends, not the means.
That's called abstraction. And we do it a thousand times a day without a second thought. We navigate layers of abstraction in every domain of our lives. It's a behavior so old, so embedded in how humans process the world, that nobody even notices it happening. Nobody writes op-eds about how the word "house" is making us architecturally illiterate.
But with the computer, things seem to be different.
A modern computer, at its essence, is actually a system of increasingly complex layers of abstraction, stacked one atop the other, enabling the experience we all recognize as computing today.
But despite our fancy devices, high-resolution screens, and wireless signals, computing today is very much the same at its core as it was 80 years ago. A computer has only ever done one thing: it routes electrons through microscopic gates to determine if a switch is open or closed.
On or off. One or zero. Everything else is just an illusion.
What makes our modern computing experience feel so much more advanced is that, for eight decades, layer after layer of abstractions have been stacked on top of the metal at the heart of the machine. Each abstraction a staggering feat of engineering. Each a magic trick that deliberately renders the one below it invisible.
The result is that today we can sit in the middle of an open field, tapping glass screens, swiping through apps, arguing about whether AI is making us stupid, all while easily forgetting — or never even knowing — the unbroken ladder of technical heritage that still links it all together.
The Ladder
In the 1940s, "programming" a computer meant physically rewiring its circuits on a large plugboard. Not typing. Not clicking. It was blue-collar work. Programmers (mainly women, by the way) hauled thick, heavy cables across a room, plugging them into sockets to configure the massive machine for a particular calculation. They didn't tell the machine what to do. They literally rebuilt it into a different machine for each task.
In 1949, a team at Cambridge built EDSAC, one of the first computers that could store its own instructions in memory. Instead of rewiring the hardware for every task, you could now feed the machine a sequence of coded instructions on punched paper tape and it would execute them. The small built-in routine that read those programs into memory was called the "initial orders", and the programs it loaded are the ancestors of every piece of software that has existed since. Programming went from electrical engineering to writing things down.
But writing in raw machine code, pure numbers representing circuit-level operations, was brutal and highly specialized. So by the early 1950s, programmers began developing Assembly Language: a system of short, human-readable abbreviations that stood in for the dense numeric codes. MOV instead of one string of ones and zeros. ADD instead of another. It all still operated intimately close to the actual hardware, though. And the programmer was still responsible for tracking which register held which number.
But the abstraction to Assembly code was progress. A layer of human-readable text now stood between the programmer and the circuits. The wiring became invisible. ‘Computing’ as a skill expanded.
Then, in the late 1950s, newer languages arrived that actually read like language. Fortran, short for "Formula Translation", came out of IBM in 1957 and let scientists write calculations that looked like math instead of machine instructions. COBOL followed in 1959, designed so business logic could be written in something resembling English sentences. C arrived in the early 1970s at Bell Labs and became the backbone of nearly everything that followed, including the mighty Unix and, eventually, the internet itself.
The big advance among all of these languages was that now one line of code could trigger dozens of machine instructions underneath. You still had to learn the particular language's strict, unforgiving syntax to tell it exactly what to do, but you no longer needed to track which register held which number, because a compiler, itself another abstraction, handled that translation for you.
And so a new floor appeared beneath the programmer's feet. Below it though, the compiled code still spoke to the hardware, which still routed the electrons through the gates. The actual ‘computing’ was unchanged.
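To make that concrete, here is a minimal sketch in C (the variable names and numbers are mine, purely illustrative): one human-readable line that a compiler quietly expands into a series of loads, multiplies, adds, and stores, handling all the register bookkeeping the Assembly programmer once did by hand.

```c
#include <stdio.h>

int main(void) {
    int price = 12, quantity = 3, tax = 4;

    /* One line of high-level code. Beneath it, the compiler emits
       several machine instructions: load price and quantity into
       registers, multiply them, add tax, store the result. The
       compiler alone decides which register holds which value. */
    int total = price * quantity + tax;

    printf("total = %d\n", total);
    return 0;
}
```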
Then came perhaps the most significant abstraction of all. The Operating System.
In 1969, Ken Thompson and Dennis Ritchie at Bell Labs created Unix — one of the first operating systems to treat the computer's resources as a single, manageable whole. Before the OS, every program you ran had to manage its own memory, its own access to the printer, its own relationship with the screen. It was like every tenant in an apartment building having to maintain their own plumbing, electrical, and elevator. The operating system changed all that. It became the building superintendent: abstracting all the housekeeping up into one unified layer. Programs could now simply request what they needed. The machinery managed itself.
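Here is a small, hedged illustration of that bargain, written in modern C rather than anything from 1969 (the filename and message are made up): the program asks the operating system for what it needs and never touches the machinery underneath.

```c
#include <stdio.h>

int main(void) {
    /* Ask the superintendent for a file. The program never learns which
       disk blocks get used, how the drive is scheduled, or where in
       physical memory the buffer lives. */
    FILE *f = fopen("notes.txt", "w");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }

    /* Hand over some bytes... */
    fprintf(f, "Dear superintendent: please handle the plumbing.\n");

    /* ...and say when you're done. The operating system does the rest. */
    fclose(f);
    return 0;
}
```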
And then, in 1984 — came the metaphor… I mean, the Macintosh.
This is where it gets personal for a lot of us. Before the Mac, using a computer was the domain of a very particular type of mind. It was daunting to the average Joe. It meant memorizing commands. Typing instructions into a black void. You either remembered the syntax or you sat there staring at a cursor that offered no hints and no sympathy.
The GUI, the graphical user interface, put pictures on top of all that obscurity and formula. Folders we could see. Files we could drag. A little trash can sitting right there on the screen, just waiting for us to throw things away. It was almost absurdly intuitive.
Point at the thing you want. Click on it. Bob's your uncle.
However, beneath this ingenious metaphorical abstraction of icons and mouse clicks, the OS still ran its code, which still spoke to the hardware, which still routed the electrons through the gates. The actual computing still happened down on the metal.
The GUI made the syntax invisible. The command line vanished overnight. The metaphor instantly made computing accessible to millions of people who would never have typed cd /usr/local/bin into a terminal window. Because the critical layers they needed to harness had been abstracted up to where they could see and engage with them in familiar terms. Intuitively.
Ah, but the people who'd spent years mastering those obscure commands? They were not impressed. To many, the Macintosh was a toy. A gimmick. It was dumbing down computing for the masses. Real computer literacy meant knowing the syntax and writing code. This point-and-click nonsense was a cheat for people who weren't smart enough to handle the real thing.
Next, the internet arrived. And just like that, it made our file systems invisible. Then mobile arrived and made our desktops invisible. Then cloud arrived and made our hard drives invisible.
Each time: a new layer on top swallowed the layer below. And a new definition of what it means to "know computers" was established. And the previous generation of power users would shake their heads at how easy the new crowd had it.
Each time, the same accusation: this is making people dumber.
Each time, nobody ever went back. Each time, everybody got more productive.
Everywhere We Look
Let's step back from the digital domain for a moment and recognize that abstraction layers aren't just a computing story. They're fundamental to human progress. They always have been.
We used to trade livestock for grain. Then coins replaced the bartering. Then paper replaced the coins. Then plastic replaced the paper. Now we tap a phone against a terminal, hear a chirp, and walk out with a coffee. Each abstraction buried the mechanism it replaced beneath a new layer of convenience, even though the underlying trade mechanics never changed.
Nobody stands in line at the Starbucks today wishing they could pay in live chickens.
We used to crank-start engines and manually adjust the fuel mixture while driving. Then ignition keys. Then automatic transmission. Now we press a button labeled START and every mechanical process between our intention and the road handles itself. And yet, I bet you have at least one person in your life who still says automatic shifting is "not real driving".
Calculators will destroy math skills. Spell-check will destroy literacy. GPS will kill our sense of direction. The pattern is the same every time: a new layer makes an old skill invisible, someone declares we're all getting stupider, and then quietly, without ceremony, everybody moves up.
Nobody ever moves back.

The Identity Layer
So why does AI feel so different to so many? Why does it scare people more than the arrival of the GUI?
Because every previous abstraction layer on the computing ladder expanded access downward.
Assembly let more people program than circuit-wiring did. High-level languages let more people program than Assembly. The GUI let more people use computers than the command line. Each time, the new layer opened the door wider. And while the people already inside grumbled about the newcomers, they were a small minority by comparison. Plus, their own skills were never really threatened. They could still do everything they did before. They just had greener company doing it now.
AI doesn't just open the door wider. It rearranges the whole room.
For the first time, the abstraction layer isn't only inviting new people onto the ladder. It's abstracting over the skills of many who've been standing on it for decades: the casual but competent computer users. The ones who spent twenty-plus years getting good at navigating files, managing apps, building spreadsheets, formatting documents, editing images, creating graphics, and Googling with surgical precision. They had never thought of themselves as standing on a layer because they had never known the ones before. They thought they were standing on the ground floor. Their computing competence felt like bedrock. It was part of their professional identity.
And now here comes AI, and it can do all of that interface-level work — and more. The clicking, the file management, the menu navigation, and the Google-fu are table stakes to AI. The skills that defined "good with computers" for an entire generation have collapsed into the equivalent of command-line mastery in 1984: specialized, optional, slow, and no longer the price of admission — or the measure of value.
That's why this layer feels different. Not because the technology is ‘alive’. It’s not.
Because this time so many people feel their identity is at stake. The casual user's capability moat — the thing they quietly took pride in, the competence that separated them from their less tech-savvy peers — just got abstracted out from under them.
Their reaction is the same as every previous generation of displaced power users. They call it cheating. They say it's making us dumber because it made what used to be harder, easier. They blame the technology for the fact that the ladder grew another rung and their comfortable floor isn't the top anymore.
And this time there are just so many more of them.
And they all have social media, and they aren’t afraid to use it.
Meanwhile, look at the domain professionals. The developers. The AI builders. The engineers who actually understand what's happening underneath. Are they panicking? Mostly no. Because whether or not they think about it consciously, they recognize the pattern. They've seen layers stack before. They know their value doesn’t go away. It goes up.
The value center rises along with the abstraction. That's where all the new jobs get created.
Consider: for twenty years, a Photoshop professional needed creative principles, good taste, and informed artistic judgment. But they also needed mastery of every panel, plugin, and menu in the software. An AI-equipped creator still needs the principles and the taste to make something useful. But the interface mastery just got abstracted away.
The value has moved straight up to the idea.
What the Studies Aren't Measuring
The ‘AI makes us dumber’ studies getting headlines right now are misleading because they only measure one very specific user profile: the one who treats AI as a crutch. The one who asks a chatbot to write their email, or their report, and just hits send. The one who pastes in a basic question and hands off the answer without reading it critically, or at all. The one who offloads their thinking entirely and just lets the machine fill in the blanks.
Use AI that way, and yes, your cognitive muscle will surely atrophy. The research isn't wrong about that.
But we know that's not the only way people use AI. It's certainly not the most interesting or rewarding way.
There's another user profile that gets almost no attention in the discourse. It's what this newsletter has been about since day one: AI as cognitive amplifier. The person who uses AI to go deeper into a topic than they could alone. Who uses it to challenge and pressure-test their thinking, not to think for them. Who is meticulous in articulating their idea and still asks, "What am I missing?" Who readily explores adjacent domains, follows threads they'd never find through a search engine, and builds understanding that compounds over time.
I really only know a small circle of colleagues operating at that level, and every single one of them says the same thing: working with AI like that — they have never felt smarter in their lives.
I know I have learned more deeply, about more things I've always wondered about, than in any other period of my life. More than college. More than two decades of Google and Wikipedia. Not because AI gave me the answers, but because it gave me a vehicle for traversing the deepest parts of my own curiosity, and a thinking partner that could meet me wherever I was — and then push me further.
Same Pattern. Same Panic. Different Choice.
Every abstraction layer offered the same choice. The GUI made some people passive clickers who would never exploit the full potential of a computer. For others, it unlocked their dreams and built them a life. We all recognize how the internet can make us mindless doom-scrollers at the mercy of the enragement algorithms, or the most informed people in the room.
Because the tools never decide. The user does.
AI is no different. It's just that this time, the abstraction layer has moved so far from the metal, and so close to how we think, that it feels personal. It feels consequential.
But that's not a reason to panic. It’s a reason to be deliberate.
For our entire lives we've all been standing on invisible floors, benefiting from abstractions we didn't build, using skills that only exist because someone, somewhere, made the hard part disappear so the rest of us could do more with less effort.
The new floor presents a question. And it isn't whether it changes how we think. Of course it does. Every floor did.
The question is whether we use it to think less, or to think bigger.
Nobody ever goes back. And those who move up never regret it.
Further Reading

Is Google Making Us Stupid? (The Atlantic, Nicholas Carr, July/August 2008): Carr asked if the internet was rewiring our brains for skimming over depth. He traced the same anxiety back through the printing press, the typewriter, even Socrates warning that writing itself would destroy memory.
As We May Think (The Atlantic / W3C mirror, Vannevar Bush, 1945): Bush argued in 1945 that the point of computing was to extend what the human mind could do, not replace it.
A Brief History of Calculators in the Classroom (Hack Education, Audrey Watters, 2015): NCTM recommended classroom integration in 1980, and teachers were still fighting it in 1995. The policy battles and fears were nearly word for word what we hear about AI today.
How To Become A Centaur (MIT Journal of Design and Science, Nicky Case, 2018): Amateur chess players paired with AI consistently beat grandmasters playing alone, not because the game got easier, but because the value shifted from memorized patterns to knowing when to trust the machine and when to trust yourself.
How GPS Weakens Memory — and What We Can Do About It (Scientific American, Gonzalez-Franco et al., May 2021): Researchers found that habitual passive GPS users show measurable declines in spatial memory, because passive use lets the navigation muscle atrophy while active use doesn't.
Try This
The next time you use AI, notice which version you're reaching for. Are you handing it the work — or handing it the *question*?
There's a difference between asking AI to write the email, and asking AI to help you figure out what the email actually needs to say. One offloads the thinking. The other deepens it.
Try the second one. Once. See what happens to the quality of what comes back — and to the quality of what happens between your ears while you're doing it.
Quote to Steal:
The value has moved straight up to the idea.
Thanks for reading,
-Ep
Miss any past issues? Find them here: CTRL-ALT-ADAPT Archive
Know someone who thinks AI is making us all dumber? Forward this. Help them get on the ladder up.
Did this newsletter find you? If you liked what you read and want to join the conversation, CTRL-ALT-ADAPT is a weekly newsletter for experienced professionals navigating AI without the hype. Subscribe here —>latchkey.ai
