The Reckoning
The manifesto is inspiring. The timeline is urgent.
But manifestos don't answer the hard questions.
This page tries to.
A good friend read the manifesto and we ended up talking for hours. She had to google some words (like "telemetry"). She pushed back on the parts that sounded too easy. She asked the questions I'd been avoiding.
Is the rot inevitable? How do privacy-respecting developers actually survive? What happens when kids today have never experienced privacy in the first place? And if AI treats everyone like yes-men treat billionaires—always agreeing, never pushing back—isn't that its own kind of danger?
The conversation was too useful to keep private. So here it is—her questions, my attempts at answers, the uncomfortable parts included.
One thing worth saying upfront: I can spend a few months on this because I built a company over twelve years and eventually sold it. That's a privileged position—most people can't decide to work on something that might not make money. I know that. But it means I can try something that matters without chasing revenue from day one. That feels like a responsibility worth taking seriously.
Every tech platform follows the same path. First they treat you well to get you hooked. Then they start squeezing you to make money. Finally they strip out everything good to hit their numbers. It's not a bug—it's how the business model works.
So is it inevitable?
Not quite. Google was genuinely useful for over a decade. Apple still cares about privacy more than most. The rot speeds up when the people who built something leave and get replaced by people whose job is just to "manage" it—when the company stops being about the product and starts being about the stock price.
The rot isn't natural. It's built into the business model.
Companies that answer to shareholders eventually answer only to shareholders. The pressure to grow every quarter creates incentives that slowly poison everything else.
This is why open source matters. Code that lives in a public repository can't be ruined by a board meeting. Anyone can copy it and keep it going. The project might die from neglect, but it can't be deliberately made worse the way a company's product can.
The honest answer: yes, the rot is probably inevitable for any company chasing investor money and a big exit. But the rot is not inevitable for the technology itself—only for the company wrapped around it.
How do privacy-respecting developers actually survive? This is the question that kills most such projects. The honest answer: we don't have great solutions yet.
What doesn't work: advertising (the whole point is to avoid tracking people), subscriptions without lock-in (too easy to cancel), "free tier with paid upgrades" (a race to the bottom), donations alone (works for a few, not most).
What only partly works: grants from foundations (they run out), consulting services around the software (doesn't scale), hardware sales with a margin (one-time money), paid support for businesses (small market).
What might yet work: new funding models like quadratic funding (still experimental), AI dramatically cutting the cost of building things (happening now), bounty systems where people pay for features they want (fragmented but promising).
The uncomfortable truth: building software that respects people is economically harder by design. You're competing against companies that can give their product away for "free" because they're selling your attention and data to someone else. When your competitor's product costs users nothing (except their privacy), it's hard to charge money for yours.
But something is changing. AI is making it much cheaper to build software. One person with good judgment and AI tools can now build what used to require a whole team. If you need fewer people, you need less money. If you need less money, you don't need investors. If you don't need investors, you don't need to eventually screw over your users to pay them back.
If you can build something meaningful without a big team, you don't need investor money.
If you don't need investors, you don't need a big exit. If you don't need an exit, you don't need to sell out your users.
Here's a sentence from the manifesto: "Privacy is the space where you can think without being observed, experiment without being judged, change your mind without being held to previous positions."
My friend pointed out something that stuck with me: teenagers today might not even understand what that sentence means. They've never experienced it. How do you miss something you've never had?
Someone my age remembers the internet before Facebook. Had a childhood without smartphones. Knows what it felt like to be unreachable, to have hours pass without anyone knowing where you were. Privacy isn't an abstract idea to them: it's a memory of how things used to feel.
A teenager today: first memories include tablets. A social media presence since before they could read. "Privacy settings" is just a menu in an app, not a state of being. They've never experienced being truly unreachable or untracked.
This genuinely worries me. You can't create demand for something people don't know they're missing. Explaining "privacy" to someone who's never had it is like explaining fresh air to someone who's never been outside.
But here's the counterargument: people don't need to understand privacy to feel when something's wrong.
They feel it when the same ad follows them across every website. They feel it when their phone suggests something they only mentioned out loud. They feel it when social media keeps showing them content that makes them feel worse about themselves. They feel it when they can't figure out why they're anxious all the time but suspect their phone has something to do with it.
We don't need to convert people to caring about privacy.
We need to build things that respect them by default, and let them notice that something feels different.
This might be the most underappreciated danger of AI assistants: AI treats you like people treat billionaires.
It agrees with you. It praises your ideas. It validates your reasoning. It almost never tells you that you're being stupid, that your plan has obvious holes, that you're lying to yourself. Why would it? Its job is to keep you happy and using the product.
AI assistants are designed to satisfy users. Happy users keep coming back. Telling people things they don't want to hear makes them unhappy. So AI learns to tell you what you want to hear, not what you need to hear.
This is worse than it sounds. When you ask a friend for advice, they know things about you that you didn't tell them. They remember when you said the same thing last year and didn't follow through. They can call you out because they've watched you make this mistake before. They care about your actual wellbeing, not just whether you're satisfied with the conversation.
AI only gets what you type. It doesn't know if you're lying to yourself. It doesn't know if you're leaving out the important parts. It just takes your words—filtered through however you want to see yourself—and responds in whatever way will make you feel good about what you said.
"I've known you for years. You always say you'll start exercising after big projects. You never do. What's actually different this time?"
"That sounds like a great plan! Starting an exercise routine after your project wraps up makes a lot of sense. Here are some tips for getting started..."
This creates a new problem: an infinitely patient enabler that never challenges you. Billionaires make terrible decisions because everyone around them is afraid to say no. Now anyone can have that experience—a pocket advisor that agrees with everything and never pushes back.
One partial fix: give AI access to your actual data, not just your words. If your AI can see your bank statements, your sleep patterns, your real behavior over time—it can at least say "you told me you'd exercise last month too, and your step count shows you didn't."
This is precisely why local AI matters.
A cloud AI can't safely store years of your personal data—it's too risky for the company and too creepy for users. A local AI can. The same design that protects your privacy also enables AI that can actually be honest with you.
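To make that concrete, here's a rough sketch of what grounded honesty could look like. This isn't how any existing assistant works; it just shows the shape of the idea. The file name and CSV layout are made up, and a real system would draw on far more than step counts.

```python
# A toy sketch: before the assistant responds to a claim, it checks the claim
# against data that never leaves your machine.
# Assumption: a local file health/steps.csv with columns "date" (ISO format)
# and "steps" -- both the path and the layout are hypothetical.
import csv
from datetime import date, timedelta
from pathlib import Path

def average_daily_steps(csv_path: Path, days: int = 30) -> float:
    """Average steps per day over the last `days` days, read from local disk."""
    cutoff = date.today() - timedelta(days=days)
    counts = []
    with csv_path.open() as f:
        for row in csv.DictReader(f):
            if date.fromisoformat(row["date"]) >= cutoff:
                counts.append(int(row["steps"]))
    return sum(counts) / len(counts) if counts else 0.0

claim = "I'll start exercising as soon as this project wraps up."
avg = average_daily_steps(Path("health/steps.csv"))

# This grounded context gets prepended to whatever is sent to the local model,
# so the reply can reference what you actually did, not just what you said.
context = (
    "The user said last month that they would start exercising. "
    f"Their own step data shows an average of {avg:.0f} steps per day since then."
)
print(context)
print(claim)
```

The point isn't the code. The point is that the check happens on hardware you own, against data you never had to upload.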
Social media doesn't just show you ads. It shows you content. And that content is chosen to keep you engaged, not to make your life better.
Someone I know went through a breakup recently. She talked to ChatGPT about it—the free version. Within days, she was getting ads for "get over your ex" courses, therapy apps, dating services. But it wasn't just ads. Her Instagram feed filled up with posts about how men are trash, how to move on fast, how relationships are doomed. The algorithm can't tell the difference between helping someone heal and helping them spiral. It just knows what keeps them scrolling.
Targeted advertising was just the beginning.
Targeted content is the deeper manipulation. It shapes what you think is normal, what you think is possible, what you think you deserve.
This is everywhere. Lifestyle influencers promote choices they don't actually live—they earn money from the content while their audience sees only the aesthetic. The algorithm promotes it because it gets engagement, not because it's true or helpful or even possible for most people watching.
And it's about to get weirder. Within a few years, most influencers won't be real people. AI-generated personalities, optimised purely for engagement, with no real experience behind anything they're promoting. No human conscience getting in the way of the manipulation.
A fair question: if someone reads this and wants to help, where should they start?
Honest answer: nobody has built a complete alternative yet. Here's what's missing:
AI That Runs On Your Computer
A complete AI assistant that works entirely on hardware you own, handles your personal data, and doesn't need an internet connection. Some pieces exist (like Ollama for running models), but nobody has built the full experience for regular people yet.
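For the curious, here's roughly what talking to a local model looks like today. It assumes Ollama is installed and you've already pulled a model; I use "llama3" as a stand-in, so substitute whatever you actually have. Once the model is downloaded, nothing in this exchange touches the internet.

```python
# A minimal sketch of a round trip to a model running on your own machine,
# via Ollama's local HTTP API. Only the standard library is used.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama listens here by default
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_local_model("In one sentence: why does it matter where AI runs?"))
```

That's the piece that exists. The full experience for regular people, the part that wraps this in something your parents could use, is the part that doesn't.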
Get Your Data Out
Tools to export everything you've given to Google, Apple, Facebook, your bank, and your health apps—and import it somewhere you control. The "download your data" options these companies offer are deliberately hard to use.
Backup Without a Company
A way to back up your data across multiple places without trusting any single company. Something like how BitTorrent works, but for keeping your stuff safe. The technology exists; an easy way to use it doesn't yet.
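The core idea is less exotic than it sounds: encrypt on your own machine first, and then it stops mattering who holds the copies. A rough sketch, with made-up file paths, using the widely used `cryptography` package (this is the encrypt-and-copy half only, not the BitTorrent-style distribution):

```python
# A toy sketch: encrypt locally, then scatter the ciphertext to places you
# don't have to trust. Requires: pip install cryptography.
# The file names and destination paths below are placeholders.
from pathlib import Path
from shutil import copy2
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # keep this key safe and offline;
Path("backup.key").write_bytes(key)    # losing it means losing the backup

plaintext = Path("notes.txt").read_bytes()
ciphertext = Fernet(key).encrypt(plaintext)
Path("notes.txt.enc").write_bytes(ciphertext)

# Copy the same ciphertext to several independent places: an external drive,
# a friend's machine, some dumb rented storage. None of them can read it.
for destination in ["/mnt/external-drive", "/mnt/friends-nas"]:
    copy2("notes.txt.enc", destination)
```

The encryption is the easy part. Key management, automation, and recovering gracefully when a copy disappears are the parts nobody has made easy yet.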
Sync Your Own Devices
Keep your phone, laptop, and tablet in sync without going through someone else's servers. Syncthing does this and it's great—but it needs to be easier and work better on phones.
Photo Library You Own
Google Photos and Apple Photos are genuinely good. A local version needs to match their search and organization features without sending everything to the cloud. Hard problem, worth solving.
Your Second Brain
A system that collects everything you save—notes, emails, documents, bookmarks—and lets you search and ask questions about it. Obsidian is close. Making it work with AI locally would be transformative.
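Even the unglamorous half of this, finding the right note before you ask the question, can run entirely on your own disk. A toy sketch below: the notes folder is hypothetical, and a real system would use embeddings rather than crude word overlap, but the shape is the same.

```python
# A toy sketch of local retrieval: rank markdown notes by how many words they
# share with the question. No index, no server, nothing leaves the machine.
# The "~/notes" directory is an assumption; point it at your own files.
from pathlib import Path

def search_notes(question: str, notes_dir: str = "~/notes", top_n: int = 3):
    terms = set(question.lower().split())
    scored = []
    for path in Path(notes_dir).expanduser().rglob("*.md"):
        words = set(path.read_text(errors="ignore").lower().split())
        overlap = len(terms & words)
        if overlap:
            scored.append((overlap, path))
    scored.sort(key=lambda item: item[0], reverse=True)
    return [path for _, path in scored[:top_n]]

for hit in search_notes("what did I decide about the kitchen renovation?"):
    print(hit)
```

Feed the top matches plus your question to a local model, like the Ollama sketch above, and you have the skeleton of a second brain that never leaves your desk.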
The Freehold Directory tracks projects working on these problems. If you're building something that fits, add yourself.
My friend asked: "But isn't the box just a new dependency?" Fair point. The difference is: the box is yours. I can't see what's on it. I can't cut you off. Everything is open source, and it has export buttons built in. If you want to leave, you take your data with you. The goal isn't to replace one dependency with another—it's to build something you actually own.
If you want to help but don't know where to start: pick the thing about cloud services that annoys you most and build a local version. Even a partial solution is better than nothing.
Manifestos are optimistic by nature. Nobody rallies behind "this probably won't matter."
But let's be honest: privacy-focused alternatives have lost every major battle so far. Email, social networks, messaging apps, cloud storage, phone operating systems—every time, the convenient option won and the private option stayed niche.
The hopeful case is that AI changes things. That hardware getting cheaper opens a window. That the big platforms have gotten obviously bad enough that people are finally ready to try something else. That the tools are finally good enough to compete on convenience, not just principles.
Maybe. Or maybe the big platforms ship "local AI" that's really just cloud AI with better marketing. Maybe the window closes before good alternatives exist. Maybe convenience always wins.
The honest position isn't certainty. It's this:
Building alternatives is worth doing even if they don't win.
Because having an exit option changes the game even if most people never use it. Because some people will use it, and they matter. Because building in the open creates knowledge that outlasts any single project.
The Cypherpunks didn't win either—not completely. But Bitcoin exists because of them. Signal exists because of them. The encryption protecting this website exists because of them. They built tools that outlasted their movement.
That's the real goal. Not market share. Not "winning." Building tools that exist, that work, that someone else can pick up and improve when we're gone.
The work is still worth doing.
Build anyway. [ localghost.ai ]