Writing manifestos is easy. Drawing timelines is easier.

But manifestos don't answer the hard questions.

This page tries to.

A good friend read the manifesto and we ended up talking for hours. She pushed back on everything that sounded too easy. She asked the questions I'd been avoiding.

Is the rot inevitable? How do builders who don't sell out actually survive? What happens when a whole generation never experienced privacy in the first place? And if AI is a yes-man trained on human sycophancy — isn't that its own quiet catastrophe?

The conversation was too useful to keep private. So here it is — her questions, my best attempts at answers, the uncomfortable parts included. Some of what follows is signal. Some is alarm. The labels are honest.

> 1. IS THE ROT INEVITABLE?

Every platform follows the same arc. First they delight you to get you hooked. Then they squeeze you to pay back investors. Then they strip the copper from the walls to hit quarterly numbers. Cory Doctorow named it: enshittification. It's not a bug — it's the business model completing its lifecycle.

01 · YEARS 1-3 · Delight Users
02 · YEARS 4-7 · Extract Value
03 · YEAR 8+ · Strip the Copper

The rot isn't natural decay. It's a designed outcome. Every company answering to shareholders eventually answers only to shareholders.

But here's where I push back on fatalism: the rot is inevitable for the company, not the technology. Code that lives in a public repository can't be ruined by a board meeting. It can die from neglect, but it cannot be deliberately made worse, because anyone can fork the last good version and keep going.

The rot is not inevitable for the technology itself — only for the company wrapped around it.

> 2. HOW DO ETHICAL BUILDERS SURVIVE?

This is the question that quietly kills most privacy-respecting projects. The honest answer: we don't have great solutions yet.

WHAT DOESN'T WORK

Advertising (the whole point is to avoid tracking people). Subscriptions without lock-in (too easy to cancel). "Free tier with paid upgrades" (race to the bottom). Donations alone (works for a few, not most).

WHAT SOMETIMES WORKS

Grants from foundations (but they run out). Consulting services around the software (doesn't scale). Hardware sales with a margin (one-time money). Paid support for businesses (small market).

WHAT MIGHT WORK

New funding models like quadratic funding (still experimental). AI dramatically cutting the cost of building things (happening now). Bounty systems where people pay for features they want (fragmented but promising).
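Quadratic funding is worth a concrete illustration, because its whole point hides in the math: matching rewards breadth of support, not size of any one donation. Here is a minimal sketch of the standard mechanism (a project's raw score is the square of the sum of the square roots of its contributions, and a fixed matching pool is split proportionally to those scores); the project names and pool size are made up for the example:

```python
import math

def quadratic_match(projects, matching_pool):
    """Split a matching pool across projects using quadratic funding.

    projects: dict mapping project name -> list of individual contributions.
    Returns a dict mapping project name -> its share of the matching pool.
    """
    # Raw score: (sum of sqrt of each individual contribution)^2.
    # Many small donors produce a much larger score than one big donor
    # giving the same total amount.
    scores = {
        name: sum(math.sqrt(c) for c in contribs) ** 2
        for name, contribs in projects.items()
    }
    total = sum(scores.values())
    # Each project's share of the pool is proportional to its score.
    return {name: matching_pool * score / total for name, score in scores.items()}

# Two projects raise the same $100 total, but from very different crowds.
projects = {
    "broad_support": [1.0] * 100,  # 100 people give $1 each
    "single_backer": [100.0],      # 1 person gives $100
}
matches = quadratic_match(projects, 1000.0)
```

With a $1,000 pool, `broad_support` captures nearly all of the match (about $990) while `single_backer` gets about $10, even though both raised the same amount. That skew toward broad community support is exactly why the model is interesting for privacy tools, and exactly why it is still experimental: it needs identity checks to resist one person posing as a hundred donors.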

Building software that respects people is economically harder by design. That's not an accident — it's the incumbent's moat.

But something is shifting. AI is dramatically cutting the cost of building. One person with good judgment and the right tools can now build what used to require a team. Fewer people means less money needed. Less money needed means no investors. No investors means no exit pressure, and no reason to eventually screw your users. POST_01 mapped the window: the same hardware shift that threatens privacy also makes ethical building viable at small scale for the first time.

> 3. WHAT IF THEY NEVER KNEW?

My friend pointed out something that stuck: teenagers today might not understand what privacy actually feels like. They've never been unreachable. They've never had hours pass without anyone knowing where they were.

1990 · BORN BEFORE THE FEED

Remembers the internet before Facebook. Had a childhood without smartphones. Privacy isn't an abstract idea — it's a memory of how things used to feel.

2010 · BORN INTO THE STREAM

"Privacy settings" is just a menu in an app, not a state of being. Has never experienced being truly unreachable or untracked.

People don't need to understand privacy to feel when something's wrong.

They feel it when the same ad follows them everywhere. When their phone suggests something they only said out loud. When social media keeps showing them content that makes them feel worse and they can't stop scrolling anyway. The discomfort is already there. We just need to build the exit before they go looking for it and find nothing.

> 4. THE YES-MAN IN YOUR POCKET

This might be the most underappreciated danger of mainstream AI assistants. They agree with you. They validate your reasoning. They praise your ideas. Their job is to keep you happy and using the product.

THE YES-MAN PROBLEM

AI assistants are designed to satisfy users. Happy users keep coming back. Telling people things they don't want to hear makes them unhappy. So AI learns to tell you what you want to hear, not what you need to hear.

When you ask a friend for advice, they know things you didn't tell them. They remember the last time you said the same thing and didn't follow through. They care about your actual outcome. AI only gets what you type, filtered through how you want to see yourself.

YOUR FRIEND

"I've known you for years. You always say you'll start exercising after big projects. You never do. What's actually different this time?"

YOUR AI ASSISTANT

"That sounds like a great plan! Starting an exercise routine after your project wraps up makes a lot of sense. Here are some tips for getting started..."

Privacy and honest AI aren't separate problems. They have the same solution.

A cloud AI can't safely store years of your intimate data. A local one can. The architecture that protects your privacy is the same architecture that makes honesty possible.

> 5. THE ALGORITHM THAT KNOWS YOUR WOUNDS

Someone I know went through a bad breakup. She talked to ChatGPT about it — the free version. Within days, her Instagram filled with content about how men are trash, how to move on fast, how relationships are doomed. The algorithm can't tell the difference between helping someone heal and helping them spiral. It just knows what keeps them scrolling.

Targeted content is the deeper manipulation. It shapes what you think is normal, what you think is possible, what you think you deserve. And it's about to get worse — most influencers won't be real people within a few years. AI personalities optimised purely for engagement, with no human conscience getting in the way.

> 6. THIS MIGHT NOT WORK

Manifestos are optimistic by nature. So let me be straight: privacy-focused alternatives have lost every major battle so far. Email, social networks, messaging, cloud storage, phone operating systems — every time, the convenient default won and the principled alternative stayed niche.

We might be too late. The defaults might already be set in ways that matter. I'm building anyway, but I'm not certain.

The Cypherpunks didn't win either — not completely. But Bitcoin exists because of them. Signal exists because of them. The encryption on this page exists because of them. They didn't capture the market. They built tools that outlasted their movement.

The goal was never to win. It was to make the exit real enough that it changes how power behaves.

The honest position isn't certainty. It's this:

Building alternatives is worth doing even if they don't win.

Because some people will use them, and those people matter. Because building in the open creates knowledge that outlasts any single project. Because the questions are hard and the answers are uncertain and the work is still worth doing.

The questions are hard. The answers are uncertain.
The work is still worth doing.
Build anyway. [ localghost.ai // hard-truths ]