Issue #15: Leg Day

On a Wednesday in the spring of 1994, I'm sitting at my desk in my college apartment. My buddy is waiting for me at the gym. I'm late, but I'm in no hurry. I've just set up a new modem, and I'm anxious to put it through its paces. I plug the phone cord running from the wall jack into the back of it.

Do I fire it up now, or go work out?

The choice is easy. I am too pumped to walk away. I gotta know just how much of the World this new 'Wide Web' actually spans. My buddy will be fine. And Wednesdays are the worst anyway.

The modem crackles to life. We all remember the sound. The long, mournful warble — shrieking like a heartbroken robot-whale giving up on its dreams.

Until... the signal clicks in. The whale goes quiet. And the handshake between your curiosity and the mysteries of the digital abyss is on.

What was out there in the spring of 1994? By today's standards, almost nothing. The Otter Pops, sure. But what else?

The Web was more like a digital terrarium, dotted with the cluttered file boxes of esoteric academic journals and bursts of random human expression. Just finding your way to it was half the cool factor.

I knew it was going to get better. It didn't occur to me that it was going to get dangerous. Or that those could be two sides of the same coin.

I just knew that the handshake gave me access to a new world.

While I was busy being amazed that any of it worked at all, there were already people on the other side of it who had moved past the wonder and awe, and started noticing all the things that didn't work.

At a small internet startup called Netscape, a 23-year-old engineer named Lou Montulli had encountered a problem he couldn't stop thinking about.

The web had no memory.

Click a link, and the page on the other side had no idea you’d ever been there before. This made the experience almost completely passive. The web felt more like a museum. Look but don’t touch. And it didn't even have a gift shop.

There was no Google. There were no login screens. There were no shopping carts — because there was no way for a cart to remember you or what you had put in it from one page to the next.

Montulli set out to fix it. The brute-force solution was the easy and obvious one. He could give every visitor a permanent ID number. The site would assign you a serial code the first time you showed up, and then recognize you forever. Problem solved.

He immediately rejected it.

He would later write that the permanent-ID solution "seemed too problematic for user privacy." A permanent number that followed you everywhere? One that could be traded, he worried.

Instead he devised a new kind of handshake system. A tiny file the site could leave with your browser. A private little note that said, in effect, flash this when you come back, and I'll know it's you.

He called the file a "cookie."
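The handshake Montulli described still works the same way today. Here's a minimal sketch of the two sides of it, using Python's standard library — not Netscape's original code, and the ID value is made up for illustration:

```python
from http.cookies import SimpleCookie

# First visit: the site leaves a tiny note with your browser.
note = SimpleCookie()
note["session_id"] = "abc123"  # hypothetical ID the site picks for you
print(note.output())           # the header the server sends back:
                               # Set-Cookie: session_id=abc123

# Next visit: the browser flashes the note, and the site knows it's you.
returned = SimpleCookie()
returned.load("session_id=abc123")  # from the browser's Cookie: header
print(returned["session_id"].value)
```

That's the entire mechanism: one header going out, one header coming back.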

That was the whole idea. A quick handshake for a better experience. That's it.

Here's the thing about making an experience better, though. The tradeoff is that it often raises the risk level at the same time. Or at the very least, improvements exact a cost we didn't see when we signed for the original package.

Within two years, the advertising industry had reverse-engineered Montulli's cookie handshake into a user-tracking apparatus. They didn't exploit a bug; they used the cookie exactly as designed. Except instead of a site leaving one polite note for one user, ad networks embedded across thousands of sites left the same note on all of them. The cookie wasn't a handshake anymore. It was a passport getting stamped at every border you crossed.

Montulli's privacy-conscious fix had become the cornerstone of modern online profiling. The cookie didn't malfunction. It was just no match for the economics of a wild-west era internet.

There were early signs that these newfangled digital breadcrumbs could come back to bite us. It sounds crazy from today's vantage point, but in 2006 AOL chose to release what it described as an "anonymized" database of twenty million search queries from its users… for research. "Anonymized" meant the names had been redacted, but each user's random ID number was left in place. AOL said researchers could study people's search habits without ever knowing who any specific person was.

Within five days, two reporters at The New York Times had walked a random ID back to a real human being. User #4417749 was Thelma Arnold. She was 62 years old. She lived in Lilburn, Georgia. She had searched for information about her dog, her medical questions, her divorce paperwork, her loneliness, and a few hundred other things you would not tell a stranger on the street. "My goodness," she told the reporters when they showed her the file. "It's my whole personal life."

And that is why we can’t have nice things.

In the twenty years since poor Thelma's experience, the apparatus kept building. Cookies became tracking pixels — became device fingerprints — became real-time bidding — became a several-hundred-billion-dollar economy in which your attention is auctioned off to the highest bidder close to a thousand times a day. Your zip code is in there. Your purchase history is in there. The location your phone reported at 4:37 p.m. last Tuesday is in there. That ointment you googled is in there. And we've all had the experience of wondering, what the…? I was just talking about this yesterday. How does my phone know?

We all live with this reality today, yet we continue to use the technologies. Because our Background Calculator has determined that the convenience and value we get from them is worth the tradeoff. At the same time, we've developed skills and techniques to try to stay safe.

We’re conditioned to consider what we put on the internet. But we still do it, just cautiously.

CUT TO: 2023.

A new technology arrives. It's more powerful than anything you've used before. It's being installed everywhere, very fast. And you learn that in order to get the value out of it, you have to type your thoughts into a box.

Of course you flinch. You're not new.

Your Background Calculator has been filing receipts on this stuff since your first email from a Nigerian prince. That subroutine fires the moment you open ChatGPT and start typing. I have seen this movie before. I know how it ends.

That instinct is correct. But it's also incomplete. There is something here to be careful of. But to assume it's the same as it's always been would be a mistake.

AI does not build a profile on you the way the cookie does. Instead, it absorbs your contribution into the product itself.

When a company "trains" its model on your conversations, they are not storing a copy. They are not stashing it in a folder somewhere marked "Doug's stuff."

What they're doing is converting your words into numbers. They strip out the meaning, but keep the patterns. And then they fold all of it back into the model's own machinery. Your phrasing, your reasoning style, the particular way you describe what you do for a living — all of it gets dissolved into a mathematical brew of billions of other people's patterns.

You didn't hand them your details. You gave them a behavior.

And that behavior cannot be un-given. You can delete the conversation. You cannot un-stir the cream from the coffee. There is no database entry with your name on it. Your contribution doesn't exist as a thing the company has — it's part of what the company becomes.
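A toy sketch can make the "un-stirrable cream" concrete. Everything here is hypothetical — no vendor's real pipeline looks like this — but it shows the one-way nature of the trade: words get reduced to numeric traces that nudge shared parameters, and no copy of the text survives.

```python
import hashlib

# Stand-in for billions of model parameters.
weights = [0.0] * 8

def absorb(text: str) -> None:
    """Dissolve a text's patterns into the shared weights; keep no copy."""
    for i, word in enumerate(text.lower().split()):
        # Hash each word to a number: the word itself is discarded,
        # only its numeric trace remains, mixed with everyone else's.
        trace = int(hashlib.md5(word.encode()).hexdigest(), 16) % 1000
        weights[i % len(weights)] += trace / 1000.0
    # `text` goes out of scope here. Nothing was written to disk.

absorb("the particular way you describe what you do for a living")

# The weights moved, but the sentence can't be reconstructed from them.
print(weights)
```

Deleting your conversation afterward changes nothing above: the nudge to the weights already happened, and there is no record to delete.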

It’s weird, I know.

Unlike the cookie crooks, though, the AI companies aren’t re-selling your attention. There is no advertiser on the other end of this arrangement. The AI company's incentive is simpler: the thing you typed in makes the product better. The better the product, the more you keep using it. That is a meaningfully different transaction than having your behavior auctioned off a thousand times a day to whoever wants to sell you a mattress.

Think of it like this.

With ad-tech, you’re the product. With AI, you're the curriculum.

Is that a more intrusive arrangement than the one we've been living with for twenty years? That depends on what happens next. Because understanding the mechanism is only half the story. The other half is what the companies actually promise you, and who is holding them to it.

On the consumer tiers of both OpenAI and Anthropic — free and paid — the default setting at sign-up is that your conversations are used for training. They offer a way to opt out, but you have to find the toggle yourself. Nobody sends you an email about it. There is no onboarding screen that says, "Hey, just so you know."

Interestingly, Google takes the opposite approach on their paid Gemini tier: training is off unless you turn it on.

Ok, but if I opt out, how can I be sure they mean it?

In the world of information security, there is something called a SOC 2 report. Think of it like a health inspector for software companies. An independent auditor comes in, examines the operation, and verifies that the company's privacy claims match its actual practices. All the major AI companies have invited the health inspector in. And so far, they all pass.

The catch is, they only let them inspect the enterprise wing. The consumer tier — the one you and I are on — is explicitly carved out of the audit scope. You can read the policies. The companies assure you they follow them. You just can't have anyone confirm it.

There is another layer as well. For legal liability reasons, every provider reserves the right to have a human review your conversations when they think it's warranted.

Enterprise customers negotiate contracts that set limits on that.

Consumer customers get more of a handshake agreement. An unverifiable privacy policy. Because our lawyers are not involved.

None of this is a scandal. It's just the architecture. It's just business. And now you can see it.

So now you have to fire up your Background Calculator and ring up a choice.

You can keep using these tools at the consumer tier with the defaults on, and let your words — your behaviors — keep improving them. That seems like a legitimate trade if you walk in with your eyes open. You're getting an extraordinary tool for free, or a relatively small monthly fee, and the cost on top of the cash is a contribution to its improvement over time. Some people will look at that and shrug. Fine. I was going to teach it anyway. I'm in. That's a defensible answer.

Or, you can trust the policy handshake, climb into your settings, find the toggle and switch the training off.

Either way, for now you might just want to treat the prompt box more like a postcard than a sealed letter.

You already know how to do this. You learned the cookie banners. You learned which sites get your real email and which sites get the burner. You learned to cover the webcam, deny the microphone, and close the tab before the popup could finish loading.

None of that came from a privacy policy or a terms of service. Because let's be honest, by the mid-2010s, the combined total of terms of service you clicked "I agree" on but never actually read could have trained a whole AI model to write the most boringly unreadable terms of service the world will ever know.

No, your savvy here came from thirty years of putting in the reps — getting burned, adjusting, and paying attention. Your Background Calculator isn't running on expired data this time. This is what all that conditioning was for.

When you open ChatGPT and start typing, that little voice that says wait, who is going to see this — that’s the same instinct you've been calibrating since the dial-up days first opened a portal through the belly of a crestfallen robot-whale and into the magical world of… online.

That instinct just needed to understand how AI is different. It’s not the same as ad-tech, but there are many ways in which it rhymes.

If you've read this far, I have a bonus for you. I built a visual walkthrough showing exactly where the opt-out toggles live in ChatGPT, Claude, and Gemini.

Quote to Steal:

"With ad-tech, you’re the product. With AI, you're the curriculum."

Thanks for reading,
-Ep

Miss any past issues? Find them here: CTRL-ALT-ADAPT Archive

Know someone still skipping leg day? Forward this. Help them find their toggles.

Did this newsletter find you? If you liked what you read and want to join the conversation, CTRL-ALT-ADAPT is a weekly newsletter for experienced professionals navigating AI without the hype. Subscribe here: latchkey.ai
