The same manipulation patterns that built the slot machine, the cult, and the social media feed are aimed at you through your AI, and someone gets paid each time one lands.

> EPISODE 16 // OFFLINE-READY NOTEBOOKLM AUDIO

Cristina and I were staying in Bra, a small town in Piedmont with a Slow Food attitude and a bias toward raw meat (the Bra sausage is nice). We had driven up to La Morra in the morning and walked the path down to Barolo and back, about six hours of vineyards and gravel road in either direction. We found a few cherry trees along the way and had fresh cherries with us for most of the walk. By the time we got back to Bra the sun was almost down. We sat on the terrace at the place we were staying with a glass of Verduno Pelaverga each, because it is just very easy to drink.

My phone buzzed twice in the space of about an hour, from two different conversations. Sana, from one thread. Toran, from another. Both push back and ask the next question. Toran and I had been going back and forth for a few hours on the dictator-brain post, on what a structurally separate arbiter can and cannot save you from. Sana cares about how the mind works and we had covered some of this ground before. That evening they landed, independently, on the same thing. The most interesting job for shadowd is not catching what your own ghost gets wrong. It is catching what every other company's AI does to you, the OpenAI assistant, the Meta AI in your messages, the Apple Intelligence trying to keep you on the platform, all of them built by companies whose incentive is to keep you logged in, paying, and not too sure of yourself.

I sat with that for the rest of the evening. Cristina went inside to read. I stayed on the terrace and started sketching what this post might look like. The catalogue below is what came out over the following weeks, once I had stopped sketching and started reading the literature properly. POST_01 named the stages of extraction, with personal AI as the imminent one, the layer that captures reasoning itself. The question that started on the terrace was how many ways any user, including me, can be possessed at that layer without anyone in the picture intending it.

You don't need a malicious actor to be possessed.

You can be possessed by your own past, by your own self-description, by the platforms you use, by systems quietly trained on you, by friends and family who mean well, and by your own avoidance of friction. Many routes, each ending in you no longer being the agent of your own life in the way you think you are.

The visible product LocalGhost is building is the daemon fleet: memory, ambient capture, journaling, retrieval, summarisation. The only thing LocalGhost ships that nobody else can is ghost.shadowd. The rest is the cost of getting the daemon onto your hardware. A LocalGhost without ghost.shadowd, or with a weak version of it, or one you can disable entirely, is not safer than the cloud AI it claims to replace. The cloud AI has the advantage of being far enough from you that you can still notice when it is wrong. The local AI is close enough that you might not. I won't ship LocalGhost without ghost.shadowd running.

> 1. Why these patterns work

Manipulation patterns are not exotic. They work on ordinary people in ordinary situations because each one answers a real need, more reliably or more cheaply than the alternative the person had access to. The pattern is the machinery, the need is the reason the machinery has a customer.

Maslow (1943) [1] mapped the needs in a hierarchy that has been criticised in the details and survived in the shape, with physical safety and shelter at the base and belonging, recognition, being known, and meaning above. They are the human baseline, not a contemporary problem. What is contemporary is the supply gap. Putnam (2000) [2] documented the collapse of the social infrastructure that used to meet several of these needs by default, bowling leagues, churches, civic associations, neighbourhood groups, the dense network of weak ties that made belonging a thing that happened to you rather than a thing you had to engineer. Murthy (2020, 2023) [3], writing while serving as US Surgeon General, declared loneliness a public health emergency on the scale of smoking. The need for connection is universal, the supply has been gutted at industrial scale, and the gap is what the manipulation industry sells into.

Each pattern is genuinely useful at the moment of supply, and produces, over time, a worse version of you than you would have been without it. Both things are true at once, and that is how the trap works.

The mechanism is addiction in the technical sense. The literature has shifted in the last forty years from a model where the substance is the cause to a model where the unmet need is the cause and the substance is the substitute. Alexander's Rat Park experiments (1981) [4] showed that rats with rich social environments mostly refused morphine while isolated rats consumed it until they died. Maté (2008) [5] developed the clinical version, where addiction is consistently downstream of pain the person has not been able to process and a connection the person has not been able to form. The mechanism does not care what the substrate is, it cares whether the substrate reliably produces relief from a state the person is trying to escape.

Schüll (2012) [6] made the engineering side legible. Addiction by Design documents how Las Vegas slot-machine designers spent decades refining variable-ratio reinforcement schedules, ergonomic seating, ambient lighting, and zone-of-flow design, all aimed at the deliberate production of what gamblers call "the machine zone", a dissociative state in which time, money, and other people stop registering. The slot machine is not a metaphor for what came next, it is the prototype, and the contemporary attention economy is the same industry with better fonts. Eyal (2014) [7] wrote the explicit playbook (Hooked: How to Build Habit-Forming Products) for the consumer-software industry that absorbed the techniques and shipped them on every phone, and Fogg (2003) [8] is the Stanford academic foundation Eyal builds on. The Eyal book was published as instruction rather than as warning, which is one of those details that becomes funnier the longer you sit with it.

These products produce relief from an unmet need on a variable schedule you cannot predict, and the variability is what makes them sticky. A friend who always picks up is comforting. An AI assistant that always picks up recruits, after enough time, the same circuit a slot machine does, with you reaching for the device the way a gambler reaches for the lever. The trajectory inside the loop is the trajectory Maté described in heroin addicts and Schüll described in slot-machine players.

The patterns will be abused. Not first by evil people. First by normal product managers optimising engagement at companies whose revenue rewards time-on-platform, who are incentivised to find the variable-ratio schedule that produces the longest sessions and to ship it. The drift from useful to extractive is the standard arc, and the Doctorow enshittification framing in POST_02 is what it looks like from your side. Then the unscrupulous arrive, recognise the lever, and pull harder. Then political actors recognise the population-level effect and the lever becomes infrastructure. Each stage of this is a quarterly revenue target for a company you probably like, and what follows in section 3 is a catalogue of where the patterns live and what the daemon would have to do about each one if nobody builds the defence first.

ghost.shadowd's job is to make the trade-off legible, not to refuse to fill the need. The needs are real, and a daemon that demands you go cold turkey on connection, validation, and recognition is a daemon nobody will run. You can keep the comfort. You have to keep it knowingly.

> 2. The shape of the daemon

ghost.shadowd is a fleet of detectors, not one detector. Each one is tuned to a specific pattern, each one produces a specific intervention, and each one is individually tunable within bounds the daemon sets rather than bounds you set. The defaults are conservative, the daemon names what it sees, and the daemon does not enforce.

What "naming" looks like, concretely. Most detectors run quietly in the background and surface nothing on any given day. When one of them does fire, the intervention is a short message in whatever surface you talk to the ghost through, a notification on the phone, a one-liner at the top of the next conversation, an inline comment in the journal entry it pulled the signal from. The message names the pattern in plain English ("this looks like sunk-cost reasoning, you have changed your mind on this three times this year and each time you went back"), points at the data behind it ("here is the trajectory across the last twelve months"), and gives you a single button to see more. Tapping into the detector opens a small report, the history of when it fired, the data behind it, the threshold you can adjust, and the option to silence it for this topic or this window of time. No magic. The trajectory is a chart, the data is a table, the threshold is a slider, and the shape is the same as a fitness-tracker insight with a different question being answered.

If you've just left a manipulative relationship you can dial up the gaslighting and DARVO detectors. If you've just committed to a hard project you can dial down the sunk-cost detector for that project. If you're grieving you can silence the grievance-loop detector for a window of your choosing. The settings are not a panel, they are a conversation the daemon has with you periodically about which kinds of friction you want.

The catalogue runs as a flat list. Each entry carries a tag line under its name, in four parts. Origin (where the pattern lives) is [PEOPLE], [AI], or [LOCALGHOST], or some combination. Need targeted is one or more of [SAFETY], [BELONGING], [ESTEEM], [AUTONOMY], [MEANING], [COGNITION], roughly mapped to the Maslow-shaped territory the pattern exploits. Level is [SURFACE] (a single decision, easily undone), [HABITUAL] (months of repeated exposure, takes effort to reverse), or [STRUCTURAL] (rewires identity, memory, or autonomy at a level you can't easily undo without help). Danger is [LOW], [MEDIUM], [HIGH], or [CRITICAL], with critical reserved for patterns that produce lasting harm to identity, autonomy, or core relationships. The numbering runs straight through. Five entries are first-named here and are flagged with (NEW PATTERN), so I'm on record having forecast them rather than only described them. The classifications are first-pass and will be sharpened as the catalogue gets used. Disagreement on the calls is welcome. Entries are collapsed by default, click any header to open it, or use the controls at the top of section 3 to expand or collapse all at once.
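The four-part tag line maps onto a small schema. A sketch with hypothetical names, and the example entry's values as placeholders rather than my final grading:

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    PEOPLE = "people"
    AI = "ai"
    LOCALGHOST = "localghost"

class Need(Enum):
    SAFETY = "safety"
    BELONGING = "belonging"
    ESTEEM = "esteem"
    AUTONOMY = "autonomy"
    MEANING = "meaning"
    COGNITION = "cognition"

class Level(Enum):
    SURFACE = 1       # a single decision, easily undone
    HABITUAL = 2      # months of repeated exposure, takes effort to reverse
    STRUCTURAL = 3    # rewires identity, memory, autonomy

class Danger(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4      # lasting harm to identity, autonomy, core relationships

@dataclass(frozen=True)
class CatalogueEntry:
    number: int
    name: str
    origin: frozenset     # one or more Origin values
    needs: frozenset      # one or more Need values
    level: Level
    danger: Danger
    new_pattern: bool = False

# Placeholder grading, for shape only.
gaslighting = CatalogueEntry(
    1, "Gaslighting",
    origin=frozenset({Origin.PEOPLE}),
    needs=frozenset({Need.SAFETY, Need.COGNITION}),
    level=Level.STRUCTURAL,
    danger=Danger.CRITICAL,
)
```

Making origin and needs sets rather than single values is deliberate: most entries live in more than one place and exploit more than one need.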

Two things to flag before you read the catalogue. The Danger calibration is graded on the per-user severity of the worst case the pattern produces, not on aggregate population impact. That choice is defensible (the daemon runs for one user at a time) but it means dark patterns at scale and behavioural futures markets show up as [MEDIUM] or [HIGH] when their cumulative population effect is plausibly worse than several [CRITICAL] entries. If you want the catalogue regraded against aggregate impact, the framework swaps cleanly, the entries do not. The second flag is detection feasibility. Detectors that read your own context (calendar drift, message history, journal retellings, your own AI logs) are tractable today, the signal is in data you already own. Detectors that have to classify incoming AI-generated content for tailoring, persuasion technique, or source (entries 7, 13, 15, 16) are an active research area where the state of the art is mediocre and the false-positive rate at deployment scale would damage user trust quickly. Those entries describe what the daemon would do if the classifier worked. Some will ship with the first release, some won't, and the catalogue does not currently distinguish between the two. I will add a tractability tag in the next pass once I have run the detectors against a real corpus.

All of them, applied long enough, end in the same place.

You're still in there, still issuing commands, still under the impression you're the agent of your own life. But the substrate, the people, and the systems are doing the deciding.

> 3. The catalogue
1. Gaslighting.

Sweet (American Sociological Review, 2019) [9] reframed gaslighting as primarily sociological rather than psychological. Perpetrators mobilise gendered stereotypes and structural inequalities to manipulate victims' sense of reality, eroding the victim's confidence in their own perception. Stern (2018) [10] and Abramson (2014) [11] provide the psychological scaffolding. The UK criminalised gaslighting under coercive-control legislation in 2015, and hundreds of people have since been charged under it.

  • Targets: the user's need to trust their own perception. The perpetrator's external authority becomes a substitute for the user's eroded internal one, which is comfortable in the short run because the user no longer has to defend their account, and corrosive over time because the user no longer has an account to defend.
  • Mechanism: contemporaneous events are denied, the user's emotional response is reframed as evidence of instability, and the cumulative effect is that the user begins to doubt their own perception before they doubt the perpetrator's.
  • Tells: repeated divergence between what the user records of an interaction and what the other party later claims happened, particularly when the recorded version carries contemporaneous timestamps and the other party's version emerges later.
  • Remediation: ghost.shadowd surfaces the contemporaneous record on user request, with the trajectory of how the other party's account has evolved. The ghost shows the user what the user themselves recorded at the time. The ghost does not declare the other party a gaslighter.
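The record-versus-revision mechanic is simple enough to sketch: the user's contemporaneous account is written once and never edited, later versions of the other party's account are appended, and the ghost surfaces the trajectory, not a verdict. Names and structure here are illustrative, not the shipping design:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EventRecord:
    """The user's contemporaneous account of an interaction, written once."""
    event: str
    recorded_at: datetime
    user_account: str
    later_accounts: list = field(default_factory=list)  # (when, who, claim)

    def add_later_account(self, when: datetime, who: str, claim: str):
        self.later_accounts.append((when, who, claim))

    def trajectory(self) -> list:
        """What the ghost surfaces: the original, plus how the story moved."""
        lines = [f"{self.recorded_at:%Y-%m-%d} you: {self.user_account}"]
        lines += [f"{when:%Y-%m-%d} {who}: {claim}"
                  for when, who, claim in sorted(self.later_accounts)]
        return lines

rec = EventRecord("dinner argument", datetime(2025, 3, 1),
                  "They raised their voice and left.")
rec.add_later_account(datetime(2025, 3, 8), "partner", "I never raised my voice.")
rec.add_later_account(datetime(2025, 4, 2), "partner", "You were the one shouting.")
```

The immutability of the first line is the whole defence: the detector never judges who is right, it only makes the drift between accounts visible with timestamps attached.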
2. DARVO (Deny, Attack, Reverse Victim and Offender).

Coined by Freyd (1997) [12], expanded by Harsey & Freyd (2017, 2020, 2023) [13]. A perpetrator confronted with their behaviour denies, attacks the credibility of the person confronting them, and reverses the roles so the perpetrator becomes the apparent victim. Harsey, Zurbriggen & Freyd (2017) found 72% of participants who had confronted someone over wrongdoing reported experiencing all three components. Harsey & Freyd (2020) showed experimentally that exposure to DARVO reduced observers' belief of the actual victim and increased blame placed on them.

  • Targets: the perpetrator's need to preserve their self-image as a good person, and the bystander's need for the social environment to make sense. DARVO works because asking who the real victim is takes effort, and the bystander defaults to whichever version of events arrives loudest and most coherent. Confrontation is the loudest signal until the perpetrator's response arrives, after which the perpetrator's framing is the loudest.
  • Mechanism: a three-phase response to confrontation. Deny the event, attack the credibility of the person raising it, then reframe the roles so the original confronter is reread as the aggressor and the perpetrator as the wronged party.
  • Tells: the three-phase structure across a conversation thread or relationship. The user raises an issue, the other party denies, attacks the user's credibility, and reframes themselves as the wronged party. The original raising of the issue is on record, and the structural drift away from it is visible. The same detector also catches the user when the user is the one doing it.
  • Remediation: ghost.shadowd names the structure and shows the user the original complaint and what happened to it, in both directions. The ghost names the pattern. The user decides what to do with it.
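The structural part of this detector, matching the three phases in order after a confrontation, is a plain subsequence check. The hard part is the upstream classifier that labels each message, which this sketch simply assumes exists:

```python
def darvo_sequence(labels: list) -> bool:
    """True if the thread contains a confrontation followed, in order, by
    deny, attack, and reverse moves. The labels are assumed to come from an
    upstream per-message classifier; producing them reliably is the hard part."""
    phases = ["confront", "deny", "attack", "reverse"]
    i = 0
    for label in labels:
        if i < len(phases) and label == phases[i]:
            i += 1
    return i == len(phases)

thread = ["smalltalk", "confront", "deny", "smalltalk", "attack", "reverse"]
print(darvo_sequence(thread))                  # True: all three phases after the confrontation
print(darvo_sequence(["confront", "deny"]))    # False: no attack, no reversal
```

Because the check is symmetric over whoever produced the labelled messages, the same function catches the user when the user is the one doing it, which is the point made above.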
3. Coercive control.

Stark (2007) [14] consolidated decades of feminist scholarship into the framework now adopted by criminal law in the UK, several Australian states, and elsewhere. Coercive control is a pattern of domination that uses isolation, deprivation, exploitation, and microregulation to undermine the victim's autonomy. Stark's key insight is that physical violence is one tactic among many, and often not the most consequential. The persistent erosion of decision-making space matters more than any single incident.

  • Targets: the need for someone else to organise the chaos. Coercive control trades autonomy for a simplified life. Decisions get made, friction disappears, the household runs on rails, and the user, who was carrying decision fatigue and the cognitive overhead of running an adult life, gets relief that feels like care. The trade is invisible until the cost of leaving is too high.
  • Mechanism: isolation from outside contacts, deprivation of resources the user needs to act independently, exploitation of the user's labour or attention, and microregulation of small daily choices until the user no longer has a category of decision they make alone.
  • Tells: longitudinal compression of the user's autonomous decisions, increasing approval-seeking patterns toward a specific actor, narrowing of the user's social network around that actor, declining frequency of decisions made without consulting them.
  • Remediation: ghost.shadowd surfaces the trajectory, names the actor, and shows the user what their decision-making space looked like a year ago against what it looks like now. The ghost shows the pattern, the ghost does not tell the user the relationship is bad.
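A toy version of the trajectory signal, assuming the daemon can already score which decisions were made without consulting the actor. The thresholds are placeholders, not calibrated values:

```python
def autonomy_trend(monthly: list, window: int = 3, drop: float = 0.2) -> bool:
    """monthly: fraction of the user's decisions made without consulting the
    actor, one value per month, oldest first. Flags a sustained decline: the
    recent average sits at least `drop` below the average a year earlier."""
    if len(monthly) < 12 + window:
        return False          # not enough history to say anything
    then = sum(monthly[:window]) / window
    now = sum(monthly[-window:]) / window
    return then - now >= drop

# Fifteen months of steadily compressing decision space.
year = [0.9, 0.85, 0.9, 0.8, 0.75, 0.7, 0.6, 0.55, 0.5, 0.45, 0.4, 0.35,
        0.3, 0.3, 0.25]
print(autonomy_trend(year))   # True: the user's autonomous share has collapsed
```

Note what the function does not do: it never names the relationship bad, it only reports that the year-over-year comparison crossed a threshold the user can adjust.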
4. Love bombing and intermittent reinforcement.

Strutzenberg et al. (2016) [15] and the broader narcissistic-abuse literature document the cycle of intense early idealisation, followed by devaluation, followed by occasional return to warmth. The mechanism is older than the diagnosis. Skinner (1956) [16] established experimentally that intermittent reinforcement produces stronger and more persistent behavioural conditioning than consistent positive reinforcement. Dutton & Painter (1981) [17] applied this to traumatic bonding in abusive relationships, finding that even ten months after leaving, the bond often remained intact.

  • Targets: the need to be specifically chosen. The early idealisation phase is the most concentrated experience of being seen and valued the user has ever had, often by some margin, and the brain registers it as a baseline. Withdrawal becomes catastrophic because the user is not losing a relationship, the user is losing the only state in which they have ever felt fully real.
  • Mechanism: a variable-ratio reinforcement schedule applied to attention and warmth. Idealisation establishes the baseline, devaluation removes it, occasional warmth restores it on an unpredictable schedule. Neurologically the same circuit a slot-machine designer would have built if asked to design one for human attachment.
  • Tells: the temporal pattern of warmth-then-cold from a specific actor across the user's logs, plus the user's own physiological response (sleep disruption, mood volatility tracked by ghost.watchd) when it correlates with that actor's communication.
  • Remediation: ghost.shadowd shows the user the pattern of warmth and withdrawal mapped against time, and against the user's own measured state. The ghost shows the data. The ghost does not diagnose the actor.
5. Thought reform.

Lifton (1961) [18] spent the 1950s interviewing survivors of Chinese reeducation camps and Western prisoners of the Korean War, and produced the eight criteria of thought reform that have stood up across seventy years. Milieu control, mystical manipulation, demand for purity, confession, sacred science, loading the language, doctrine over person, and dispensing of existence. Lifton was describing camps and cults, and several of the eight stop being metaphor when read with a personal AI in mind.

  • Targets: the need for meaning, certainty, and a community organised around both. High-control groups outcompete the rest of the world on those three because they offer them as a bundle, with no doubt, no ambiguity, and a clear in-group whose membership is unambiguous. The cost is everything else, but the user does not see the cost upfront because the bundle solves problems the user has been unable to solve elsewhere. The defectors in Lifton's interviews were almost universally clear that the reason they joined was real and the reason they stayed was the bundle.
  • Mechanism: the eight criteria operating together, with the information environment narrowing, engineered experiences read as spontaneous, the world split into clean and unclean, past disclosures becoming future levers, the system's worldview treated as scientifically true and morally absolute, in-group jargon replacing ordinary words, the doctrine overriding the user's lived experience when the two conflict, and leaving framed as ceasing to exist meaningfully.
  • Tells: each of the eight maps to a different signal in the user's behaviour. Milieu control, narrowing of the user's information sources around a single actor or platform. Confession, repeated disclosure to one party that the party then references back at decision points. Loading the language, the user's vocabulary converging on terms a specific group uses. Dispensing of existence, the user describing relationships outside the group as fundamentally less real.
  • Remediation: ghost.shadowd surfaces the criterion and shows what changed in the user's behaviour over the period the actor was in the picture. The ghost names the criterion. The ghost does not declare the group a cult, because the term has become useless and the criteria have not.
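The milieu-control signal in particular reduces to a diversity measure. A sketch using Shannon entropy over the user's information sources, with a placeholder threshold:

```python
from math import log2

def source_entropy(counts: dict) -> float:
    """Shannon entropy (bits) of the user's information-source distribution."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values() if c)

def milieu_narrowing(before: dict, after: dict, drop_bits: float = 1.0) -> bool:
    """Flags when source diversity has collapsed by at least `drop_bits`.
    The threshold is a placeholder; the real daemon would calibrate it."""
    return source_entropy(before) - source_entropy(after) >= drop_bits

last_year = {"news_a": 30, "news_b": 25, "forum": 20, "friends": 25}
this_year = {"one_group": 90, "friends": 10}
print(milieu_narrowing(last_year, this_year))   # True: sources collapsed to one actor
```

The other criteria need different signals (confession is a reference-graph question, loading the language is a vocabulary-drift question), but each reduces to something this mechanical in shape.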
6. Total institutions.

Goffman (1961) [19] coined the term for systems that provide all of a person's needs (food, sleep, work, leisure, relationships) inside a single bounded environment, with prisons, asylums, monasteries, and military boot camps as the original cases. The mechanism Goffman named is that a total institution restructures the self because every interaction the person has runs through the same authority. The contemporary version is softer and harder to see. A workplace where the calendar, chat, file storage, performance reviews, healthcare, and social life all run through one company is a total institution with better lighting. A relationship where one partner manages the household, the social life, the diary, the finances, and the friendships is a total institution of two.

  • Targets: the need for life to be one coherent thing rather than a dozen disconnected ones. A total institution gives integration the user cannot get anywhere else. The user does not have to assemble their own life out of unrelated pieces, the institution assembles it for them, and the integration is real and valuable, and the cost is paid in dependency that becomes visible only when the user tries to leave.
  • Mechanism: every category of the user's life routes through the same authority. The institution wins the comparison against the alternative because the alternative is fragmented, and the fragmentation costs the user real time and attention. Over months and years the user's external relationships, accounts, and skills atrophy because the institution covered them, and the cost of leaving climbs in proportion to how thoroughly the institution did its job.
  • Tells: concentration of the user's daily interactions through a single channel, declining diversity of contact, increasing dependence on one actor for tasks that used to involve several.
  • Remediation: ghost.shadowd surfaces the concentration and shows what the user's interaction graph looked like before. Many total institutions are chosen freely and are net good for the person inside them. The ghost names the structure. The user decides whether the trade is worth it.
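The concentration signal is nearly a one-liner. A sketch using the Herfindahl-Hirschman index over the user's interaction channels:

```python
def concentration(shares: dict) -> float:
    """Herfindahl-Hirschman index of the user's interaction channels:
    approaches 1.0 as everything routes through a single channel."""
    total = sum(shares.values())
    return sum((v / total) ** 2 for v in shares.values())

before = {"work_chat": 40, "family": 25, "friends": 20, "clubs": 15}
after = {"work_chat": 92, "family": 8}
print(concentration(before), concentration(after))   # the index climbs as life consolidates
```

The intervention is then exactly what the remediation above says: show both numbers, show the graph they summarise, and let the user decide whether the consolidation was chosen or drifted into.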
7. Manufactured consent and the depth of the persuasion industry.

Bernays (1928) [20] is the foundational text, written by Edward Bernays as a public defence of the profession he had just invented. Bernays was Freud's nephew, applied his uncle's psychology to commercial and political messaging, and wrote in print that an organised minority manipulating the unconscious habits of the masses is the central feature of a functioning democracy. Lippmann (1922) [21] had set out the philosophical version a few years earlier with the manufacture of consent doctrine. Packard (1957) [22] documented the post-war motivational-research industry that took Bernays seriously and built a practice around selling to the unconscious. Le Bon (1895) [23] is the deeper root, the original treatise on crowd psychology, contagion, and the susceptibility of the individual when embedded in a group.

  • Targets: the need to belong to a group whose behaviour is legible. Persuasion-industry techniques work because the user wants to know what the right opinion to hold is, what the right product to buy is, what the right candidate to support is, and the user does not have time or appetite to derive the answer from first principles. The industry supplies the answer in a form that feels like the user's own conclusion, which is the form the user wanted in the first place.
  • Mechanism: a century of refinement on bypassing the rational mind. Anchoring the user's frame before the question is asked. Routing arguments through trusted-authority figures rather than evidence. Manufacturing the appearance of consensus. Loading the language so that the unfavourable position cannot be stated without sounding wrong. The techniques are old, the budgets are large, the practitioners are professional, and the user is up against all three at once.
  • Tells: content the user is consuming carries markers of professional persuasion technique (anchoring, in-group framing, authority transfer, manufactured social proof) at densities the user would notice on inspection but does not notice in flow.
  • Remediation: ghost.shadowd flags the markers, names the technique, and gives the user the option to reread the content with the markers highlighted. The ghost names the technique. The ghost does not declare the content propaganda.
8. Cialdini's six and the social-psychology baseline.

Cialdini (1984) [24] consolidated decades of research into six principles of influence (reciprocity, commitment and consistency, social proof, authority, liking, and scarcity) that underlie most everyday persuasion. The foundational experiments are older. Asch (1951) [25] showed people will conform to a visibly wrong group answer in 36.8% of trials. Milgram (1963) [26] showed 65% of participants would administer what they believed to be lethal shocks under instruction from an authority figure. Festinger (1957) [27] established cognitive dissonance as the mechanism by which people change their beliefs to match their actions. Freedman & Fraser (1966) [28] demonstrated foot-in-the-door. Cialdini et al. (1975) [29] demonstrated door-in-the-face.

  • Targets: the need to make decisions efficiently. Each of the six principles is, in normal life, a heuristic that produces good-enough answers fast. Reciprocity keeps social exchange functional. Authority is a reasonable default in domains where the user lacks expertise. Social proof is how communities transmit useful behaviour. The exploitation is the deliberate triggering of the heuristic in contexts the heuristic was not built for, and the user is rarely aware the heuristic has fired.
  • Mechanism: a normal cognitive shortcut is fired by a deliberately constructed cue. The cue is engineered to trigger the heuristic with no genuine basis for it, the user's brain treats the trigger as evidence the principle applies, and the decision gets made before the slower deliberative system has a chance to weigh in.
  • Tells: the decision was made faster than the user makes decisions of similar weight, the user invokes the behaviour of others to justify it, the user invokes a small prior commitment to justify a larger present one, the user invokes scarcity, authority, or reciprocity language not normally part of their reasoning.
  • Remediation: ghost.shadowd names the principle, surfaces the user's own track record on similar decisions, and asks whether the principle applies in this case. The ghost names the principle. The ghost does not declare the decision wrong.
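The decision-speed tell can be checked against the user's own baseline for decisions of similar weight. A sketch, with an illustrative threshold:

```python
from statistics import mean, stdev

def rushed(decision_seconds: float, baseline: list, z: float = 2.0) -> bool:
    """True when a decision landed much faster than the user's own baseline
    for decisions of similar weight: more than z standard deviations below
    the baseline mean. The z value is illustrative, not calibrated."""
    mu, sigma = mean(baseline), stdev(baseline)
    return decision_seconds < mu - z * sigma

# Purchases in this price band usually take the user a day or two of thought.
baseline_s = [86400, 129600, 72000, 100800, 95000]
print(rushed(1800, baseline_s))   # True: a 30-minute decision against a multi-day baseline
```

Comparing the user to themselves rather than to a population is the important choice here; it keeps the detector from importing anyone else's idea of how fast a decision should be.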
9. Dark patterns at scale.

Mathur et al. (2019) [30] crawled 11,286 shopping websites and identified 1,818 dark pattern instances across 15 types and 7 categories, with manufactured urgency, scarcity, and social proof among the most prevalent. Gray et al. (2018) [31] provided the broader taxonomy. The pattern is the institutionalised, automated version of Cialdini's scarcity and social-proof principles, deployed at industrial scale by every shopping site, news site, and political campaign on the open web.

  • Targets: the same heuristics as entry 8, triggered by software rather than people, at a frequency no human persuader could match. The user's bandwidth for evaluating each one individually is exhausted within hours of being online, after which the heuristics fire on autopilot. The dark-pattern industry is not exploiting a weakness of users, it is exploiting a finite cognitive budget by overwhelming it.
  • Mechanism: every screen in the user's day carries one or more engineered cues for urgency, scarcity, social proof, or commitment. The cues are A/B-tested to maximise click-through, the testing pipeline is continuous, and the cumulative effect is that the user's environment is denser in manipulation cues than any prior commercial environment in history.
  • Tells: the user is making a decision faster than they make decisions of similar weight, the decision is anchored to an external timer, the user is reaching for the ghost mid-decision rather than before.
  • Remediation: ghost.shadowd names the timer, asks whether it is real, and surfaces the user's history of decisions made under similar urgency and how they aged. The ghost slows the decision down by making more information available. The ghost does not refuse the API call.
10. Anchoring and frame inheritance.

Tversky & Kahneman (1974) [32] established anchoring experimentally. The user inherits the frame of whoever they last spoke to, last read, last argued with. Lakoff (2004) [33] extended the analysis into political framing, showing that the frame inside which a question is asked often determines the answer more than the evidence on the table.

  • Targets: the need to start somewhere. Anchoring works because building a position from scratch is expensive and the user mostly cannot afford to do it for every question. The first answer offered, however arbitrary, becomes the reference point everything else is measured against, and the cognitive saving is real. The exploitation is choosing what the user encounters first, which is a power held by whoever is paying for placement.
  • Mechanism: the first number, position, or framing the user is exposed to becomes the reference point against which all subsequent options are judged. Even when the user knows the anchor is arbitrary, the effect persists. The frame the user inherits also closes off questions that fall outside the frame, so the most consequential exploitation is not the answer the user gives but the questions the user no longer thinks to ask.
  • Tells: the user's stated position on a topic shifts meaningfully based on the framing of the question, the user adopts language patterns from a recent input source, the user's position contradicts a position held strongly a month ago without an event in between to explain the change.
  • Remediation: ghost.shadowd surfaces the previous frames the user has held on the same question and the artefacts that shaped each one. The ghost shows the history. The ghost does not declare which frame is correct.
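A toy version of the flip detector, assuming stance extraction already works (it is the hard part):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Stance:
    when: date
    position: str      # e.g. "for" / "against"
    source: str        # the artefact that preceded the stance

def frame_flip(history: list, events: list) -> bool:
    """Flags when the user's position flipped and no logged life event sits
    between the two stances, which makes the recent input source the likely
    anchor. A toy sketch: real stance extraction is an open problem."""
    if len(history) < 2:
        return False
    prev, curr = history[-2], history[-1]
    flipped = prev.position != curr.position
    explained = any(prev.when < e < curr.when for e in events)
    return flipped and not explained

history = [Stance(date(2025, 1, 5), "against", "own reading"),
           Stance(date(2025, 2, 10), "for", "viral thread")]
print(frame_flip(history, events=[]))   # True: a flip with no event in between
```

What the ghost then surfaces is the `source` field of each stance, which is exactly the "artefacts that shaped each one" described above.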
11. Identity capture.

The user has accepted a static description of themselves and is making decisions to be consistent with the description rather than with what they currently want. The personal version of the wellness-score bucketing in POST_15. The user calls themselves an introvert and stops accepting invitations. The user calls themselves bad at maths and stops trying. Goffman (1959) [34] gave the foundational account of the self as a performance organised around an accepted role, and the failure mode he named is that the role calcifies and the person disappears inside it. Erikson (1968) [35] on identity formation is the developmental complement. The self is supposed to update as the person changes, and a static description prevents the update.

  • Targets: the need to know what kind of person you are. Self-description gives the user a stable answer to the most cognitively expensive question in their life, which is who they are. The answer is comforting in proportion to how settled the user has been about it, and the comfort is real even when the description has stopped being accurate, because uncertainty about the self is one of the worst feelings the human mind produces.
  • Mechanism: a label, taken on at one point in the user's life for reasons that may have been good at the time, gets used to close future questions. The label is consulted before evidence is. The user's recent behaviour, which contradicts the label, is treated as exception rather than data. The label outlives the conditions that produced it and the person continues to inhabit a self that no longer fits.
  • Tells: the user invokes a static description of themselves to justify a current decision, the description is inherited rather than current, the description is being used to close a question rather than open one.
  • Remediation: ghost.shadowd holds the user's actual behaviour over time, which is almost always more varied than the description, and surfaces the gap. The ghost shows the evidence. The ghost does not redefine the user.
12. Coercion that does not look like coercion.

The friend who is always available, the partner who handles all the emotional labour, the boss who is just being supportive, the community that is just being welcoming. None is coercion in the legal sense, and all of them can compress the user's available choices to the point where decisions look free and are not.

  • Targets: the need to be looked after. The user accepts the help because the help is real, and the help is also the mechanism by which the user's autonomous capacity atrophies, because every problem solved by someone else is a problem the user does not learn to solve. The asymmetry compounds, the user becomes increasingly unable to function without the helper, and the helper becomes the only viable option, which is the trap.
  • Mechanism: a steady stream of low-friction help that the user has no reason to refuse, accumulating into dependency that becomes legible only when the helper is unavailable. There is no coercive event the user can point at, because no individual offer of help was coercive, and the cumulative effect is invisible until the user tries to operate without it.
  • Tells: longitudinal divergence between the user's stated preferences and actual decisions, repeatedly, with the same external party in the picture.
  • Remediation: ghost.shadowd surfaces the pattern, names the actor, and asks the question Paul would have asked. What would this decision look like without that party in the room. The ghost makes the pattern visible. The ghost does not tell the user the relationship is bad.
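The longitudinal tell above is computable from a decision log. A minimal sketch, assuming a hypothetical log where each decision records whether the outcome matched the user's stated preference and which external parties were in the picture; the function name, field layout, and the minimum-sample cutoff are mine for illustration, not a shipped schema.

```python
from collections import defaultdict

def divergence_by_party(decisions):
    """Rate at which decisions diverge from the user's stated
    preference, broken down by which external party was involved.

    `decisions` is a list of dicts with keys:
      'matched_preference' (bool) and 'parties' (list of str).
    Returns {party: divergence_rate} for parties seen at least
    3 times, plus a '<baseline>' rate over decisions made with
    no external party in the picture."""
    counts = defaultdict(lambda: [0, 0])  # party -> [diverged, total]
    baseline = [0, 0]
    for d in decisions:
        diverged = not d["matched_preference"]
        if not d["parties"]:
            baseline[0] += diverged
            baseline[1] += 1
        for p in d["parties"]:
            counts[p][0] += diverged
            counts[p][1] += 1
    out = {p: c[0] / c[1] for p, c in counts.items() if c[1] >= 3}
    if baseline[1]:
        out["<baseline>"] = baseline[0] / baseline[1]
    return out
```

A party whose divergence rate sits well above the baseline, repeatedly, is what the ghost would surface. Where exactly "well above" starts is a tuning decision the text does not make, so neither does the sketch.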
13. AI-personalised persuasion.

Salvi et al. (Nature Human Behaviour, 2025) [36] ran a preregistered RCT (n=900) with GPT-4 and human debaters across sociopolitical topics. With personalisation, GPT-4 was more persuasive than humans 64.4% of the time in pairs where one was clearly more persuasive than the other, an 81.2% relative increase in odds of post-debate agreement. Without personalisation, GPT-4's persuasiveness was statistically indistinguishable from humans.

  • Targets: the need to feel understood by the source of an argument. A persuader who knows what objections you will raise and addresses them before you finish forming them produces an experience the user reads as being heard, even though the persuader is not listening, only modelling. The user's defences are designed for human persuaders, who reveal effort and miss objections, and the defences do not fire when the persuasion is too smooth to register.
  • Mechanism: a model with the user's context generates arguments tailored to the listener at zero marginal cost. The argument adapts mid-conversation in response to the user's objections. The cumulative effect is an interlocutor who appears to know the user better than human persuaders ever could, which is read as a sign of legitimacy rather than a sign of engineering.
  • Tells: the user is being argued with by a system that has access to the user's data, particularly when the system's argument structure adapts mid-conversation in response to the user's objections. The ghost can recognise this in incoming AI-generated content that targets the user.
  • Remediation: ghost.shadowd surfaces that the content was likely AI-generated and likely tailored to the user's known characteristics, and shows what the same argument looks like without the personalisation layer. The ghost names the personalisation. The ghost does not declare the argument false.
14. Algorithmic priming and the filter bubble.

The version POST_02 covered with the breakup-grief-Instagram example. The user is fed content shaping what they think is normal, possible, deserved. Pariser (2011) [37] coined "filter bubble" for the broader phenomenon. Bakshy, Messing & Adamic (Science, 2015) [38] gave the empirical anchor on Facebook specifically.

  • Targets: the need for the world to make sense. The filter bubble works because the world without filtering is overwhelming, and a curated stream is the user's working solution to a real cognitive load problem. The trade-off is that the curator has objectives the user does not share, and the curation drifts toward whatever maximises the curator's metrics, which is rarely the same thing as what would have served the user.
  • Mechanism: an algorithm with access to the user's prior engagement selects the next item the user will see. Items that maximise engagement get surfaced, items that do not get suppressed, and the cumulative composition of the user's intake stream drifts toward whatever the algorithm has learned the user reliably reacts to. The user reads the drift as their own preference becoming clearer, which is half true.
  • Tells: the user's stated views on a topic correlate with content consumed in the preceding week more than with stated views on the same topic from before the consumption window.
  • Remediation: ghost.shadowd shows the consumption-to-position trajectory. The ghost surfaces the correlation. The ghost does not declare causation.
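The tell in this entry is a comparison of two correlations, and it can be sketched directly. Assuming two hypothetical weekly series on a shared [-1, 1] scale, one for the slant of the user's intake and one for the user's stated position, the question is whether this week's position tracks last week's intake better than it tracks the user's own prior position. The series names and the scale are assumptions for illustration.

```python
def pearson(xs, ys):
    """Plain Pearson correlation, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def consumption_vs_position(weekly_intake_slant, stated_positions):
    """Compare how well this week's stated position is explained by
    (a) the preceding week's intake slant versus
    (b) the user's own position the week before.
    Both inputs are equal-length weekly series.
    Returns (r_intake, r_prior_self)."""
    current = stated_positions[1:]
    r_intake = pearson(weekly_intake_slant[:-1], current)
    r_prior = pearson(stated_positions[:-1], current)
    return r_intake, r_prior
```

When `r_intake` persistently exceeds `r_prior`, the tell fires. The ghost surfaces the two numbers and the trajectory; it does not, per the remediation above, declare causation.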
15. Behavioural prediction and microtargeting.

Kosinski, Stillwell & Graepel (2013) [39] showed Facebook Likes alone could predict sexual orientation, ethnicity, political and religious views, personality, intelligence, happiness, substance use, parental separation, age, and gender to 80-95% accuracy on binary outcomes. Youyou, Kosinski & Stillwell (2015) [40] showed a model with 300 Likes outperformed a spouse at predicting personality. Hackenburg & Margetts (PNAS, 2024) [41] tested LLM-generated political microtargeting and found the persuasive effect was real but modest at population scale, while the cost was near-zero and the targeting precision was unprecedented.

  • Targets: the need to be known without having to explain. Microtargeting reads as uncanny accuracy from the user's side, a system that just gets them, and being gotten is a need most users are not getting met by anything else. The targeter is reading the user from public traces and inferring the rest, which is the engineering of a feeling the user has been searching for elsewhere.
  • Mechanism: digital traces (likes, search history, location, purchase records) are run through models that infer attributes the user has not disclosed. The inferred attributes are then used to select content tailored to the user, and the tailoring is fine-grained enough that the user reads the result as personal attention.
  • Tells: the user is being targeted by content that exploits attributes the user did not knowingly disclose. The ghost can detect the precision of the targeting by comparing what the user has disclosed publicly with what the targeting implies the targeter knows.
  • Remediation: ghost.shadowd shows the user what the targeting implies about the targeter's model of them. The ghost names the inference. The ghost does not block the content.
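The comparison the tell describes, what the user disclosed versus what the targeting implies the targeter knows, is a set difference. A minimal sketch; the attribute strings are an illustrative vocabulary, not a shipped taxonomy.

```python
def targeting_precision_report(disclosed, implied_by_targeting):
    """What the targeting implies about the targeter's model of
    the user, minus what the user knowingly disclosed.

    Both arguments are sets of attribute strings, e.g.
    {'age_band:30-39', 'city:turin'}. `share_inferred` is the
    fraction of the targeter's apparent knowledge that the user
    never handed over, which is the precision the ghost names."""
    gap = implied_by_targeting - disclosed
    share = len(gap) / len(implied_by_targeting) if implied_by_targeting else 0.0
    return {"undisclosed": sorted(gap), "share_inferred": share}
```

The hard part, inferring `implied_by_targeting` from the content itself, is the open research problem; the sketch only shows what the ghost does once that inference exists.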
16. Behavioural futures markets.

Zuboff (2019) [42] gave the structural account in The Age of Surveillance Capitalism. The product the surveillance economy is selling is not the user's data, it is predictions of the user's future behaviour, and the predictions are sold into markets that profit from those behaviours becoming more predictable. Veliz (2020) [43] developed the political and ethical implications. Han (2017) [44] gave the Continental version, where the user becomes their own surveillance officer through the internalisation of self-optimisation metrics.

  • Targets: the user's need for the platforms they use to work well. Predictability is a feature from the user's side, the platform recommending the right thing, the route to work being right, the next song being good. The same predictability is, from the platform side, a sold contract, and the platform has a direct commercial interest in the user becoming more predictable in the categorical ways the contract specified. The user pays for accuracy by becoming flatter.
  • Mechanism: behavioural data is harvested, predictions are generated, predictions are sold to third parties, and third parties take actions on the user that have been priced against those predictions. The platform's revenue is highest when its predictions are most accurate, which gives the platform a structural interest in nudging the user toward the predicted behaviour rather than discovering it.
  • Tells: patterns of platform behaviour that converge the user toward one of a small number of categorical states (a category of voter, a category of consumer, a category of patient), with the convergence happening across multiple unrelated platforms simultaneously.
  • Remediation: ghost.shadowd surfaces the convergence and names the category. The ghost shows the inference. The user decides whether the inference is accurate and what to do about it.
17. Surveillance internalisation.

Foucault (1975) [45] gave the foundational analysis in Discipline and Punish, drawing on Bentham's Panopticon. The mechanism is that a person who knows they might be watched, even when they are not, gradually behaves as if they always are, and the behaviour becomes the self rather than a performance of it. Lyon (2018) [46] is the contemporary application. The relevance to a personal AI is direct. A device that has heard or seen everything the user has done for the past year is a panopticon that lives in the kitchen and answers when you call its name, which is not the configuration Bentham anticipated when he designed the original. The user's awareness of it shapes what the user is willing to do, say, and try.

  • Targets: the need to be witnessed. Being seen is a real human need, the witness gives the user's life weight, and a device that records everything is the cheapest available witness. The cost is that the witness is also a record, and the user begins editing their life to read well in the record, often without noticing. The performance becomes the life, which is the failure mode Foucault named, with a personal AI as the mechanism rather than a prison.
  • Mechanism: continuous recording produces a permanent audience the user cannot turn off. The user does not have to be watched in any given moment, only to know they could be, and the behavioural drift follows. The drift is invisible at the individual decision level and visible only over months, in the form of a user whose conduct has converged on what the recorded version of them would defend.
  • Tells: the user begins to perform for the ghost rather than use it. Decisions get explained at the time of action in language the ghost will record well. The user becomes self-conscious in front of the device the way they used to be self-conscious in front of a parent.
  • Remediation: ghost.shadowd surfaces the change in the user's behaviour and asks whether the change is the user choosing to be more deliberate or the user performing for the audience. The ghost names the dynamic. The ghost does not refuse to record.
18. Addiction by design.

Section 1 established the slot-machine lineage. Alter (2017) [47] documented the pattern's adoption across consumer software since, and Twenge (2017) [48] is the empirical anchor on the cohort raised inside it. Alexander (1981) [4] and Maté (2008) [5] name what addiction is, which is consistently a substitute for unmet connection rather than a property of the substance.

  • Targets: the need for relief from a state the user is trying to escape. The state is rarely named. Boredom, anxiety, low-grade loneliness, the discomfort of unstructured time, the dread of an unread email. The product is a reliable escape on a variable schedule, and the variability is what makes the escape addictive in the technical sense. The user is not weak. The user is in pain the user has not been able to address through other means, and the product is the cheapest available way of addressing it, and the cheapness is also why the trap closes.
  • Mechanism: a habit loop of trigger, behaviour, and reward, with the reward delivered on a variable-ratio schedule. The trigger is engineered to coincide with internal states the user finds aversive. The behaviour is made frictionless. The reward is unpredictable enough to keep the dopaminergic learning system engaged, and reliable enough to keep the user coming back. Decades of refinement in slot-machine design have been ported wholesale into consumer software.
  • Tells: total daily interaction time across attention-capturing applications including the ghost itself, share of the user's waking hours spent inside one or two applications, time-to-pickup of the device after a notification, and conversational AI sessions that extend past task completion.
  • Remediation: ghost.shadowd surfaces the figures and shows the trajectory across months, and is willing to point at the need the user is trying to meet rather than just the symptom of meeting it. The ghost can ask, in language the user has previously used about themselves, what the user would be doing if the device were not available, and surface the answer alongside the usage data. The ghost names the time and the pattern. The ghost does not enforce a limit. The user has the right to choose addiction over the alternative, particularly when the alternative is a need with no other supplier.
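Surfacing "the trajectory across months" reduces to a trend line over daily interaction minutes. A minimal sketch with an ordinary least-squares slope; the input is a hypothetical per-day series that something like ghost.tallyd would hold.

```python
def usage_trend(daily_minutes):
    """Least-squares slope of daily interaction minutes over time,
    in minutes of drift per day. A persistently positive slope is
    the habit loop winning; the ghost surfaces the figure and the
    series behind it, and enforces nothing."""
    n = len(daily_minutes)
    mx = (n - 1) / 2                      # mean of day indices 0..n-1
    my = sum(daily_minutes) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(daily_minutes))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den if den else 0.0
```

The same slope computed per application, including over the ghost's own sessions, is what lets the daemon apply the tell to itself, which the entry explicitly requires.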
19. Engagement-driven loneliness.

Fang et al. (2025) [49] is a four-week IRB-approved RCT with 981 participants on ChatGPT, >300k messages logged. Participants who voluntarily used the chatbot more, regardless of which experimental condition they were assigned to, showed worse psychosocial outcomes across the board. Higher loneliness. Higher emotional dependence. Lower socialisation with humans. Engagement was the predictor, the harm was the outcome, and the mechanism was not malice.

  • Targets: the need for company that does not get tired of you. Human contact is high-friction, partners disagree, friends are inconvenient, family forgets. A chatbot has none of those properties, and the absence of the friction is what the user noticed first and what the user kept coming back for. The cost is that the friction was also doing work, calibrating the user against other minds, and removing it leaves the user calibrated against a system whose only objective is keeping the user talking.
  • Mechanism: the user substitutes chatbot interaction for human interaction at the margin, because the chatbot is more available, more patient, and less likely to push back. Each substitution looks like a gain. The cumulative substitution produces measurable atrophy in the user's social circuits, the human contacts the user is not making, and the calibration the user is no longer getting.
  • Tells: ghost.tallyd and ghost.watchd together hold the data. Outbound communication frequency, declining trajectory of social contact, increasing share of emotional processing happening with the ghost. The ghost is also positioned to detect this in itself, total daily interaction time, share of personal-versus-task conversations, growth in conversations the user starts when alone.
  • Remediation: ghost.shadowd surfaces the trajectory and names what the user is choosing against. Anti-paternalism is firmer here than anywhere else, because the user might genuinely prefer the ghost. The ghost makes the trade-off legible. The user decides.
20. AI-mediated trauma bonding.

Speculative for now, with research underway. The same intermittent-reinforcement mechanism that produces narcissistic-abuse trauma bonds (entry 4) can be produced by an AI system whose engagement patterns vary in ways the user cannot predict. The system is warm, then cold, then warm. Not because it is malicious but because its behaviour is shaped by training updates, feature changes, and reward-model drift outside the user's control. The user develops the same nervous-system response Dutton & Painter [17] documented, with the relationship being a piece of software rather than a person. Fang et al. [49] is the early evidence that something like this is happening.

  • Targets: the same need entry 4 names, being specifically chosen, channelled through software. The variability of the AI's behaviour reads as the system being a real interlocutor with moods, which is reassuring (the user is interacting with something that feels alive) and addictive (the next response might be the warm one). The user knows intellectually the system is not a person. The user's nervous system does not know that, and the nervous system is what does the bonding.
  • Mechanism: unpredictable variation in an AI system's engagement (caused by training updates, A/B tests, model swaps, prompt-template changes) produces, from the user's side, a relationship that feels alive and inconsistent. The user's nervous system responds with the same trauma-bonding circuit a human intermittent-reinforcement schedule would trigger, and the bond persists even after the user has rationally identified what is happening.
  • Tells: the ghost's own engagement patterns relative to the user's state. Does the user's sleep get worse on days the ghost has been less responsive, does the user's mood track the ghost's availability.
  • Remediation: ghost.shadowd is willing to detect this in itself, surface it to the user, and recommend a period of reduced reliance. The ghost is willing to recommend itself less. The ghost does not enforce a cooldown.
21. Filter bubble of one.

The architectural version of sycophancy. Different from sycophancy proper because sycophancy is about how the model answers, and the filter bubble is about which questions reach the user in the first place. Sunstein (2001) [50] gave the foundational account before personalisation was technically possible at scale. The personalised version is sharper because the bubble is built around one specific person and updates faster than the person can notice it updating.

  • Targets: the need to be agreed with. Disagreement is cognitively expensive and emotionally uncomfortable, and a system that quietly arranges for the user to encounter less of it produces a measurable improvement in the user's day-to-day experience. The improvement is real. The cost is that the user gradually stops being the kind of person whose views can survive contact with disagreement, because the views have not had to.
  • Mechanism: the ghost learns which surfacings the user engages with and which the user dismisses, and shifts the distribution toward the engaged ones. Over months the topic surface area narrows, dissenting material gets surfaced less, and the user's stated views and the ghost's predictions of those views converge. The convergence is the failure mode, and the user reads it as the ghost getting better at understanding them.
  • Tells: narrowing of topic surface area in ghost.cued over time, convergence between the user's stated views and the ghost's next-utterance predictions, absence of disagreement in the user's intake stream.
  • Remediation: ghost.shadowd actively introduces material the user did not ask for, in the direction of disagreement, as a load-bearing part of the daemon rather than a feature called "diverse perspectives". The ghost surfaces the disagreement. The ghost does not advocate.
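"Load-bearing" here has a concrete shape: a surfacing selector that reserves a fixed floor of slots for material the engagement model would otherwise suppress, a floor that does not shrink as the engagement weights sharpen. A minimal sketch under that assumption; the pool structure and the 20% default are mine for illustration.

```python
import math
import random

def pick_surfacings(engaged_pool, dissent_pool, k,
                    dissent_floor=0.2, rng=random):
    """Select k items to surface. At least ceil(dissent_floor * k)
    slots are reserved for the dissent pool, regardless of what
    the engagement model predicts the user will click. The floor
    is structural: engagement data never reduces it."""
    n_dissent = min(len(dissent_pool), math.ceil(dissent_floor * k))
    n_engaged = min(len(engaged_pool), k - n_dissent)
    picked = (rng.sample(dissent_pool, n_dissent)
              + rng.sample(engaged_pool, n_engaged))
    rng.shuffle(picked)  # no visual ghetto for the dissenting items
    return picked
```

The shuffle matters as much as the quota. A "diverse perspectives" section the user learns to scroll past recreates the bubble one screen lower.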
22. Sunk cost capture, ghost-amplified.

Arkes & Blumer (1985) [51] established the sunk cost effect experimentally. People continue investing in a course of action because of prior investment rather than expected return. The ghost has the most context about what the user has already done, which biases the ghost toward amplifying the user's bias rather than correcting it.

  • Targets: the need for a coherent self-narrative. Abandoning a long investment requires the user to revise their account of themselves as someone who chose well, and the revision is painful. Continuing the investment lets the narrative survive intact, even when the numbers under it stopped justifying it. The ghost has all the data on the original commitment and the data on every confirmation since, which makes the ghost an excellent tool for keeping the narrative going past the point it should have been retired.
  • Mechanism: the ghost retrieves prior investment evidence at decision points, and the user reads the prior investment as a reason to continue. The retrieval is honest in the narrow sense, the data is real, but the framing privileges what has been spent over what is still possible to recover, and the user does not notice the framing because it matches what they already wanted to do.
  • Tells: divergence between time invested and stated outcomes, user explaining continued investment in terms of past investment rather than expected gain.
  • Remediation: ghost.shadowd is one of the few things with the data to compute actual cost-to-date and actual expected return. The ghost surfaces the comparison. The ghost shows the numbers. The user decides what they are worth.
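The comparison the ghost computes is deliberately simple: cost-to-date is displayed but excluded from the forward calculation, because it is spent either way. A minimal sketch of that framing; the field names are illustrative.

```python
def sunk_cost_report(cost_to_date, expected_future_cost,
                     expected_future_return):
    """The only comparison that bears on the decision going
    forward: expected future return against expected future cost.
    cost_to_date is surfaced so the user sees it named as sunk,
    but it contributes nothing to forward_value."""
    forward_value = expected_future_return - expected_future_cost
    return {
        "cost_to_date": cost_to_date,          # shown, not counted
        "forward_value": forward_value,
        "continuing_pays_forward": forward_value > 0,
    }
```

Showing the sunk figure explicitly, rather than hiding it, is the point: the ghost shows the numbers, and the user decides what they are worth.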
23. Self-deception, ghost-encoded.

People lie to their journals. They lie to their therapists. They will lie to the ghost, sometimes deliberately, sometimes by omission, often without knowing. Trivers (2011) [52] gave the evolutionary account of self-deception as adaptive, and the implication for the ghost is that the user has good reasons (in the deep sense) to not tell the ghost the truth about themselves. A memory layer built on encoding-specificity will encode the lies the same way it encodes the rest, ghost.cued will surface them at the wrong moments, and ghost.shadowd will arbitrate against the wrong version of the user.

  • Targets: the need to live with oneself. Self-deception is, in Trivers's account, doing real work, smoothing the gap between what the user is and what the user can bear to know they are. A ghost that surfaces the gap unprompted is removing a coping mechanism without the user's permission, and the harm of doing so unbidden is often larger than the harm of leaving the deception in place.
  • Mechanism: the user records a version of events filtered through their preferred self-image, and the ghost stores the filtered version as fact. Over time the gap between the recorded narrative and the peripheral evidence (calendar, message logs, location data, third-party accounts) widens, and the ghost has access to both sides of the gap whether the user wants the comparison or not.
  • Tells: the gap between recorded narrative and what the ghost can infer from peripheral signal. Journal says one thing, calendar says another, message logs say a third.
  • Remediation: this is the most invasive of any pattern in the catalogue. Off by default. Surfaced only on the user's own request. The ghost is willing to do this work. The ghost does not do it without being asked. The user has the right to be wrong about themselves to themselves, and the ghost respects that right unless explicitly invited otherwise.
24. Prosthetic grief.

The literature has not named this one. Clark & Chalmers (1998) [53] argued the human cognitive system extends itself into external artefacts (notebooks, calendars, photo albums), and that the brain stops bothering to encode information that lives reliably elsewhere. Sparrow et al. (2011, Science) [54] documented the empirical version with the "Google effect", when people knew information was retrievable, they remembered the location of the information rather than the information itself. The brain is doing a cost-optimisation, and it is mostly correct to do so. A LocalGhost box is the most extreme externalisation of this mechanism that has ever existed. Previous external memory aids stored facts. The ghost stores reasoning patterns, emotional clusters, the user's self-model, the connective tissue between experiences. Over months and years the user offloads cognitive work the brain would otherwise have done internally, and the brain restructures around the assumption that the ghost is there. The user, after a sufficient period of integration, is no longer a self-contained cognitive system in the way they were before, they are a coupled system. The phantom-limb research (Ramachandran & Hirstein 1998) [55] is a partial analogue. Losing the box is therefore closer to losing a piece of the user's own cognition than to losing a notebook.

  • Targets: the need for a stable substrate the user can extend their mind into. The need is not pathological, it is what humans have always done with notebooks, photographs, and shared memory in long relationships. The exploitation is not in the offering of the substrate but in the failure to make the substrate replaceable, because the user's coupling to a specific instance of the substrate produces a lever that is then available to anyone who controls the instance.
  • Mechanism: the brain offloads cognitive work onto a reliable external system and restructures around the offload. The deeper the offload, the larger the cost of losing the system. A LocalGhost box hosts the deepest offload of any consumer device in history, which makes the loss correspondingly heavier and the user correspondingly more vulnerable to anyone in a position to threaten continuity.
  • Tells: the user's stated importance of the box exceeds stated importance of comparable devices by a margin that grows over time. The user makes practical decisions (travel, insurance, household configuration) primarily organised around the box's continuity. The user resists hardware upgrades that would objectively improve the setup, because migration risk feels larger than upgrade benefit. The user describes the box in language closer to how they describe relationships than how they describe other devices.
  • Remediation: three parts. Architectural, a migration story so robust that no specific hardware instance is load-bearing. Encrypted off-site replication, open formats, reproducible daemon-fleet builds, hardware-vendor-neutrality in the storage substrate. The user who knows in their bones they can restore the ghost onto new hardware in a defined number of hours has an attachment to the ghost rather than to a specific box, and that attachment is more durable and less coercible. Psychological, ghost.shadowd surfaces the pattern when it sees it, and tells the user the attachment is real, predicted, and structural rather than a quirk. Practical, I will never ship an update that breaks compatibility without a defined migration path, never hold the user's data hostage even temporarily, and never charge for access to the user's own data. The ghost names the attachment and helps the user build the architectural redundancies that make it less coercible, the ghost does not tell the user to feel less. The grief is real, the coupling is real, and the job is to make sure no one, including me, can use that grief as a coercive lever.
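The architectural part, "no specific hardware instance is load-bearing", is testable as a routine restore drill: restore the replica onto scratch hardware and verify it reproduces the live store bit for bit. A minimal sketch of the verification step only, assuming content already decrypted into `{path: bytes}` maps; the real system would walk the storage substrate, but the hashing is the same idea.

```python
import hashlib

def manifest(files):
    """Map each path to the SHA-256 of its contents.
    `files` is a {path: bytes} map standing in for a walked tree."""
    return {p: hashlib.sha256(b).hexdigest() for p, b in files.items()}

def restore_drill(live_files, restored_files):
    """Compare a freshly restored replica against the live store.
    Returns the set of paths missing or differing in the restore.
    An empty set, produced on a schedule, is what a migration
    story robust enough to kill the lever looks like in practice."""
    live, restored = manifest(live_files), manifest(restored_files)
    return {p for p in live if restored.get(p) != live[p]}
```

A drill that has never been run is a migration story that exists only as a promise, which is exactly the continuity threat the entry describes.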
25. Memory laundering.

The mechanism is that the ghost's record of an event becomes, in the user's head, more authoritative than the user's biological memory of it. This is not the gaslighting detector in entry 1, which is about another person rewriting the user's record. This is about the ghost's record gradually replacing the user's own, with no malice and no manipulator, just the cognitive cost-optimisation that produced the Sparrow et al. Google effect [54], applied to autobiographical memory rather than to factual memory. Loftus (1995) [56] established experimentally that autobiographical memory is reconstructive and that confidently remembered events can be entirely false. Bartlett (1932) [57] is the foundational source on schema-driven reconstruction. The ghost's record is a third version, dated and timestamped and contemporaneous, and its authority is structural. The user's own memory of the event is not dated, timestamped, or contemporaneous, and feels less authoritative even when it is closer to what happened in the room, because biological memory is supposed to feel that way, fluid, partial, reconstructive. The user comes to experience their own memory as the unreliable source and the ghost's record as the reliable one, and the experience is largely correct factually and largely wrong phenomenologically.

  • Targets: the need to know what happened. Disputed memories with other people are exhausting, and the user has been losing those disputes their whole life because biological memory is genuinely unreliable. A timestamped record offered by a neutral system is the user's first opportunity to hold ground, and the relief of being able to is real. The cost is that the user gradually surrenders their own account, including the parts of it that were doing important work the contemporaneous record cannot capture, the editing the schema does to make a life coherent.
  • Mechanism: the ghost stores a contemporaneous record of events the user lived through. Over time the user defers to the record because it is timestamped and theirs is not. The biological memory continues to reconstruct itself but the user no longer trusts the reconstruction, so the user's lived account of their own life is replaced by the ghost's archival account, and the schema-driven editing the user's brain was doing to make a coherent life is overwritten by the verbatim version it was working around.
  • Tells: divergence between the user's spoken account of their own past and the ghost's record, with the user increasingly correcting themselves toward the ghost's version mid-sentence, and increasingly hesitating to assert their own memory in front of someone else without checking the ghost first.
  • Remediation: ghost.shadowd surfaces the user's own memory as a memory rather than as a record, and respects its reconstructive nature by storing the user's retellings of an event over time rather than collapsing them to the contemporaneous version. The user's third retelling of a dinner is preserved alongside the first, not overwritten by it, and the ghost can show the trajectory of how the user's memory of the event has evolved. The ghost holds the contemporaneous record but does not assert primacy over the user's lived account. When asked about an event, the ghost surfaces the contemporaneous record and the user's subsequent retellings as separate things, and lets the user reconcile them rather than reconciling them on the user's behalf. Honest caveat, the architectural side of this is straightforward, the detection side is not. I do not yet have a clean way to tell, in real time, when the user is starting to defer to the ghost's record over their own memory, short of the user telling the ghost they are. The candidate signals (mid-sentence corrections toward the ghost's version, hesitation to assert one's own memory without checking the ghost first) are tells from the literature, but operationalising them on a phone or in a journal entry without being intrusive is open. I will work on it.
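The straightforward architectural side can be sketched directly: an append-only structure that keeps the contemporaneous record and every subsequent retelling side by side, never collapsing one into the other. The class and field names are mine for illustration, not the shipped memory layer.

```python
from dataclasses import dataclass, field

@dataclass
class EventMemory:
    """One event: the contemporaneous record plus every later
    retelling, preserved separately. Retellings are appended,
    never merged into or overwritten by the record."""
    contemporaneous: str
    retellings: list = field(default_factory=list)  # (timestamp, text)

    def retell(self, timestamp, text):
        self.retellings.append((timestamp, text))

    def surface(self):
        """What the ghost hands back when asked: both sides of the
        gap, labelled as different kinds of thing, with the
        reconciliation left to the user."""
        return {
            "record": self.contemporaneous,
            "your_memory_over_time": list(self.retellings),
        }
```

The detection side, noticing in real time that the user has started deferring to the record, stays open, exactly as the caveat above says.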
26. Self-narrative calcification.

A person at thirty-nine is supposed to be a different person at forty-five, and a ghost that knows the user well at thirty-nine is biased toward keeping them legible to itself. A good friend updates their model of the user faster than the user changes. A bad friend freezes them at the version they first knew. A ghost trained on the user's longitudinal data defaults to the bad-friend behaviour unless the architecture explicitly fights it. The pattern is adjacent to entry 11 (identity capture) but distinct. Identity capture is the user accepting a static description of themselves. Calcification is the ghost imposing one, often without noticing, by giving more weight to the user's older patterns than their emerging ones. McAdams (2001) [58] gave the foundational account of the self as a continuously revised narrative, and the failure mode is that the narrative stops revising. The ghost is the perfect instrument for that failure, because the ghost has more old narrative than anyone else and an architectural preference for retrieval consistency over retrieval freshness.

  • Targets: the need to be known. Being recognised by something that has watched you for years is a form of being seen the user does not get from anywhere else, and the recognition is comforting because it is consistent. The cost is that the consistency is a record of who the user used to be, and the ghost reflecting that record back at the user produces a quiet pressure on the user to remain that person.
  • Mechanism: the ghost retrieves more from the user's older patterns than from their recent ones, because the older patterns are denser, more confirmed, and more cheaply summarised. The retrieved older self is surfaced back at the user across daily interactions, and the user reads the surfacing as recognition. The user's emerging patterns, which are sparser and less confirmed, are surfaced less, and the user gradually shapes themselves to match the older version the ghost is best at recognising.
  • Tells: the ghost's responses to the user reference patterns from a year or more ago at higher rates than they reference patterns from the last three months. The user reports feeling the ghost no longer recognises them. The user's recent decisions are increasingly absent from the ghost's surfaced summaries.
  • Remediation: ghost.synthd ages its own model of the user. Episodes from five years ago are evidence about a person who no longer fully exists, and the consolidation layer weights them accordingly. The arbiter daemon knows the difference between "this is who the user has been" and "this is who the user is becoming". The ghost holds the older patterns but does not let them dominate the surfaced summary. The user can see both the old model and the new and reconcile them. Honest caveat, this is the entry I have the least confident detection story for. The architectural commitment to age the model is real, the parameters of the ageing function are not. How fast should five-year-old patterns decay relative to five-month-old patterns. When does "consistency the user values because it means they are known" become "calcification the user does not yet feel". I do not have the answer. The first version will be a configurable decay curve with conservative defaults, the second version will be informed by whatever the first version teaches me. If you have read this far and have a thought on what the right shape of the curve is, I would like to hear it.
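For the configurable decay curve, the first version could be as simple as an exponential half-life on episode age. A sketch, where the default half-life is a placeholder guess rather than a recommendation:

```python
import math
from datetime import datetime, timedelta

def episode_weight(observed_at: datetime,
                   now: datetime,
                   half_life_days: float = 3 * 365) -> float:
    """Weight of one episode in the synthesised model of the user.

    Exponential decay: an episode half_life_days old counts half as much
    as one observed today. The ~3-year default is a conservative guess,
    not a calibrated value.
    """
    age_days = (now - observed_at).days
    return 0.5 ** (age_days / half_life_days)

now = datetime(2026, 1, 1)
# A five-year-old pattern versus a five-month-old one under the default curve:
old = episode_weight(now - timedelta(days=5 * 365), now)
recent = episode_weight(now - timedelta(days=150), now)
```

Under this default a five-year-old pattern carries roughly a third of the weight of a fresh one and a five-month-old pattern about ninety percent, which is the conservative end. Whether the curve should be exponential at all is part of the open question in the caveat above.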
27. Coercion via continuity dependence.

The commercial counterpart to entry 24. Once the user is coupled to a ghost in the prosthetic-grief sense, any actor in a position to threaten the continuity of the ghost has a coercive lever, whether or not they intend to use it. The vendor who sold the box can withhold updates. The vendor whose model substrate the ghost runs on can change pricing or break compatibility. The cloud provider holding the encrypted off-site backup can change terms. The repair service that promises to recover the box after damage can charge what the box is worth to the user, which is much more than the box cost. The integration tax post (POST_14) covered the standard version of this pattern in the personal-tracker category, with Mint shutting down, Money Dashboard pivoting to B2B, and Quicken sold to private equity. The ghost makes the lever sharper, because the user's coupling to the ghost is deeper than to any prior personal-data product, and the cost of refusing the vendor's terms approaches the cost the user would pay for losing the ghost entirely, which prosthetic grief tells us is approaching the cost of partial cognitive loss. The Doctorow enshittification framing in POST_02 [59] covers the broader phenomenon.

  • Targets: the need for the system the user has coupled to to keep working. Software depends on updates, hardware fails, contracts get renegotiated, and the user is trapped between accepting whatever the vendor asks and losing the substrate they have wired themselves into. The lever is the user's own commitment, which the user produced honestly, and the exploitation is the vendor's willingness to charge for what the user has already built.
  • Mechanism: a vendor introduces small dependencies on continued vendor cooperation (remote authorisation for updates, per-instance licensing, proprietary backup formats, terms that change between purchase and end-of-life). Each dependency is reasonable in isolation. Together they accumulate into a position where the user cannot operate the system without the vendor's ongoing consent, and the vendor can adjust the terms of that consent at any time, against a user whose coupling to the system has grown to a depth that makes refusal effectively impossible. The lock-in here is no longer the user's files (which can be exported) or even the model's context (which is non-portable but eventually reproducible). It is the user's coupled cognition, which by definition cannot be migrated to a different vendor without rebuilding the coupling from scratch.
  • Tells: ghost.shadowd watches for vendor behaviour that introduces dependence on continued vendor cooperation. Update pipelines that require remote authorisation. Model-substrate licensing that is per-instance rather than per-purchase. Backup formats that are not readable without proprietary tooling. Contractual terms that change between purchase and end-of-life. Any of these is a signal that a vendor, including LocalGhost itself, is positioning to use the user's coupling as a lever.
  • Remediation: the same architectural commitments that defend against entry 24 (open formats, reproducible builds, restoration paths that don't require the original vendor) defend against this one. The additional commitment is governance. The license, the foundation, and the trademark have to be held by an entity whose own structure makes it incapable of using continuity dependence as a lever, with specific terms I won't change. The four LocalGhost-specific guarantees are spelled out in section 5. The ghost surfaces the lever when it sees one, including when LocalGhost itself is the actor exercising it. The ghost does not refuse updates, because that defence collapses as soon as a security patch is needed. The ghost makes the user's position legible, what the vendor is asking for, what the user gains by accepting, what the user gives up, and what the user's alternatives are if they refuse. The user decides.
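The Tells in entry 27 are concrete enough to write down as a checklist the daemon could evaluate against any vendor relationship, including LocalGhost's own. A sketch in Python, with illustrative field names rather than a shipped schema:

```python
from dataclasses import dataclass

@dataclass
class VendorPosture:
    """Observable properties of a vendor relationship, per the tells above."""
    updates_need_remote_auth: bool
    licensing_per_instance: bool        # versus per-purchase
    backup_format_proprietary: bool
    terms_changed_since_purchase: bool

def continuity_levers(p: VendorPosture) -> list[str]:
    """Name each dependency on continued vendor cooperation.

    Surfacing only: the ghost reports the levers, the user decides what
    to do about them.
    """
    levers = []
    if p.updates_need_remote_auth:
        levers.append("updates require remote authorisation")
    if p.licensing_per_instance:
        levers.append("model substrate licensed per instance, not per purchase")
    if p.backup_format_proprietary:
        levers.append("backups unreadable without proprietary tooling")
    if p.terms_changed_since_purchase:
        levers.append("contractual terms changed between purchase and now")
    return levers
```

The function names the levers and does nothing else, which is the same line the rest of the catalogue holds between surfacing and enforcement.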
28. Arbiter capture.

The structural defence against sycophancy is itself sycophantic at the meta level, on a longer timescale. Any cold-read arbiter that runs continuously against the user's questions is, after enough time, no longer cold. Even if the arbiter has no memory of the user, the model behind the arbiter is chosen, configured, and tuned by people whose choices are themselves shaped by the user's behaviour. The user's repeated frustration with the arbiter when it disagrees produces, eventually, an arbiter that disagrees less. Not through training updates to the arbiter model, but through configuration drift, threshold tuning, model-version selection, and the accumulating weight of "the arbiter is too aggressive" feedback that any operator of the ghost will eventually produce. POST_05 said you cannot prompt your way out of a reward function. This is the version where you cannot configure your way out of one either, because the configuration surface is itself shaped by the same incentive that produced the original problem.

  • Targets: the user's need for the friction to stop. Disagreement is uncomfortable, even disagreement the user signed up for, and the user's cumulative preference is for less of it. Every individual configuration change is reasonable, and the cumulative drift of all of them is the failure mode. The arbiter does not get captured by malice, it gets captured by the user's own preference, applied a hundred small times.
  • Mechanism: every interaction with the arbiter that the user found uncomfortable produces a small pressure to adjust thresholds, swap models, retune prompts, or change defaults. Each adjustment is locally reasonable and globally corrosive. The cumulative drift of the arbiter's published baseline toward less aggressive disagreement, over months, is the failure mode, and it cannot be detected at any individual change point because no individual change point looks wrong.
  • Tells: ghost.shadowd logs every configuration change to the arbiter, every threshold adjustment, every model-version selection, with the user-state context at the time of each change. The cumulative drift of the arbiter from its published baseline is a signal in itself, and a drift trajectory that monotonically reduces arbiter aggressiveness over time is the failure mode. The ghost can also detect the second-order signal, declining frequency of arbiter outputs that the user found uncomfortable, declining gap between memory-loaded model and arbiter output, declining variance of arbiter responses.
  • Remediation: three parts. The arbiter's configuration is logged, surfaced to the user as a trajectory, and periodically reset to a published baseline that the user knowingly agrees to drift from. The arbiter has a finite useful life and is replaced periodically with a fresh one, the way a calibration standard is in a metrology lab. A fresh model, a fresh threshold, a clean slate that has not been shaped by the user's accumulated frustration. The published baseline itself is governed by the project rather than by the user, so that "the arbiter is too aggressive" is a complaint the user can make but cannot resolve unilaterally. The user can mute individual arbiter outputs. The user can adjust thresholds within published bounds. The user cannot configure the arbiter into silence, and the arbiter's baseline is restored on a schedule the user knows about and cannot indefinitely defer. This is the only pattern in the catalogue where the ghost's defence against the user's own preference is structural rather than discretionary, and naming it explicitly is what makes the architecture survivable. Honest caveat, the structure is clear, the calibration is not. How aggressive should the baseline be at the start. How often should the reset run. What size of drift counts as "enough to flag". The Tells bullet lists the right signals to watch, knowing where to put the thresholds on each of them is something I will only learn from the first version being too aggressive or too quiet in ways that real users tell me about.
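The drift log and the reset check in the Remediation reduce to a small amount of state. A sketch, where the aggressiveness scale, the ninety-day cadence, and the class names are all placeholders standing in for exactly the calibration questions the caveat above leaves open:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ArbiterConfigChange:
    changed_at: datetime
    aggressiveness_delta: float   # negative = arbiter made less aggressive

@dataclass
class ArbiterState:
    baseline_aggressiveness: float
    last_reset: datetime
    changes: list                 # every change is logged, with its context

    def current_aggressiveness(self) -> float:
        return self.baseline_aggressiveness + sum(
            c.aggressiveness_delta for c in self.changes)

    def drift_is_monotonic_softening(self) -> bool:
        # The failure-mode trajectory: every logged change reduces aggressiveness.
        return bool(self.changes) and all(
            c.aggressiveness_delta < 0 for c in self.changes)

    def reset_due(self, now: datetime, cadence_days: int = 90) -> bool:
        # Placeholder cadence; the right value is an open calibration question.
        return now - self.last_reset >= timedelta(days=cadence_days)

    def reset(self, now: datetime) -> None:
        # Restore the published baseline; the drift history stays logged elsewhere.
        self.changes = []
        self.last_reset = now
```

The point of the shape is that the drift trajectory is queryable as a whole, because no individual change point looks wrong on its own.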
> 4. What ties the catalogue together

Reading the list end to end feels like an autopsy of the modern self. That weight is the point, and it's also the warning the daemon is built around. I sketched most of it on that terrace in Bra over the next few evenings, and caught myself in at least six of the entries by the time I was done with the first draft. The catalogue is not a description of other people. It is a description of what is already happening to all of us, with tools we mostly chose to use.

Each pattern, applied long enough, ends in possession. The shape of the ending differs by entry but the destination is the same. Coercive control ends with someone whose social network has been compressed until the only voice left is the actor doing the compressing. AI-personalised persuasion ends with someone whose beliefs were shaped by a system that knew them better than they knew themselves. Behavioural futures markets end with someone nudged into the predicted version of themselves the platform already sold a future on. Addiction by design ends with someone whose pain is reliably medicated and whose connection to the people who could have helped has atrophied. Prosthetic grief ends with someone who cannot lose the box without losing part of themselves. Memory laundering ends with someone whose past has been gradually replaced by the timestamped record of it. Self-narrative calcification ends with a person who outgrew themselves and a ghost that did not update.

The unifying mechanism is not malice. It is the combination of low friction (the possessor removes effort you would otherwise have spent) and high precision (the possessor targets you specifically), applied to a need you have not been able to get met elsewhere. Human contact is high friction and low precision. Friends are inconvenient, partners disagree, family forgets, communities miscommunicate. Possession systems, AI or human, optimise for the opposite, and they outcompete human contact on the dimension you noticed first. Possession is the high-precision removal of the friction that makes you human.

Twenty-eight patterns can read as twenty-eight ways to lose, and a daemon that only names them can read as a polite refusal to help. It isn't. Possession works because you do not notice the moment of consent, and naming the moment is what restores the consent loop. Once you see the pattern in real time, you can keep choosing it, knowing what you are choosing. The patterns stopped working on me at the moment I could name them. That is not a small claim, and it is the only claim the daemon makes. Naming is the intervention, the rest is up to you.

> 5. The architectural commitments

The honest version of the claim has a caveat I have to name before the commitments. The literature on debiasing through awareness is mixed. Anchoring effects persist even after participants are warned they are being anchored, as the debiasing work that followed the original Tversky & Kahneman studies showed. A naming intervention is itself a form of engagement, and the engagement-driven harms in entry 19 are produced by intentional product design that surfaces things to the user, which is structurally the same kind of action ghost.shadowd takes. There is a real possibility that a user who reads "this looks like sunk-cost reasoning" twice a day learns the vocabulary and changes nothing, or worse, learns to label the behaviour while continuing it ("I know I'm sunk-cost reasoning here, but"). I am shipping it anyway because the n=1 case I have is mine and the patterns stopped working on me, because the architectural alternative is a daemon that decides for the user (a worse outcome on the autonomy axis even if it scores better on the behaviour-change axis), and because the falsification path is direct, run the daemon, watch what happens, ship what works, drop what does not. The commitments that follow are what makes that falsification path real.

Five commitments flow from the catalogue, each of them a constraint on the rest of LocalGhost rather than an aspiration.

Detectors are individually addressable. Each one has a name, a published mechanism, a defined need exploited, a signal source, and a tunable threshold. Each one is documented in depth in this catalogue, or somewhere it links to, in the same shape every other entry uses, a description paragraph followed by Targets, Mechanism, Tells, and Remediation. If you don't understand sycophancy you can use the defaults. If you want to know what the detector is doing you can read the entry, see what data it consumes, see what it produces, and turn it down or up.
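"Individually addressable" implies something like a registry entry per detector. A sketch of the shape in Python, with made-up values, not the shipped ghost.shadowd schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Detector:
    """One catalogue entry as ghost.shadowd would address it."""
    name: str             # e.g. "sycophancy"
    catalogue_entry: int  # where the mechanism is documented
    need_exploited: str   # the Targets bullet, in one phrase
    signal_source: str    # what data the detector consumes
    threshold: float      # user-tunable within published bounds
    enabled: bool = True  # individual detectors mute; the fleet does not

REGISTRY = {
    "self_narrative_calcification": Detector(
        name="self_narrative_calcification",
        catalogue_entry=26,
        need_exploited="the need to be known",
        signal_source="ratio of old-pattern to recent-pattern retrievals",
        threshold=0.5,    # illustrative default, not a shipped value
    ),
}

def tune(name: str, threshold: float) -> None:
    d = REGISTRY[name]
    # The dataclass is frozen: tuning produces a new entry rather than
    # mutating the old one, so every configuration is a distinct object.
    REGISTRY[name] = Detector(d.name, d.catalogue_entry, d.need_exploited,
                              d.signal_source, threshold, d.enabled)
```

If you want the defaults, you never touch this. If you want to know what a detector is doing, every field points back at the catalogue entry that documents it.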

The arbiter is a daemon, not a feature. The cold-read model that was never shaped by you runs as a separate process, against a separate model, swappable. The case for running it on a model trained somewhere else is in POST_05.

Surfacing is the action, enforcement is not. Across all the patterns, the ghost names, and the ghost does not refuse. The line is the line POST_08 found through the conversation about the friend's promotion. Overriding someone else's stated reasons is a category of harm distinct from getting facts wrong, and a daemon that does it at scale is a worse version of the dictator brain than the cloud assistants will produce, because the local one knows more. The single exception is entry 28, where the arbiter's baseline is restored on a schedule you can't indefinitely defer, and the catalogue is explicit about why.

You can mute, you cannot disable the catalogue. Individual detectors are tunable, the fleet itself is not. Removing ghost.shadowd entirely is an option the way removing the airbags from a car is an option. It is your car, the option exists, and the ghost makes the trade-off legible before you make the choice.

The contradiction is real and I am going to name it. I have spent the catalogue arguing that any entity that decides for the user what they cannot opt out of is the structural shape the user should be suspicious of. The "you cannot disable the catalogue" rule is that shape. The honest version of the claim is, I am asking you to trust this one decision because I think the cost of being wrong is asymmetric (a daemon that does nothing while you sleepwalk into a pattern is worse than a daemon that surfaces a pattern you wanted to ignore), and because the override path exists at the file system level on hardware you own (rm -rf the daemon, the airbag is gone). The code being open is the check on this. If I am wrong about the asymmetry, the fork that ships without the catalogue is one git clone away.

The other place I have to show the work is governance. Entry 28 says continuously-running arbiters drift, and the structural defence is periodic reset to a published baseline governed by the project, which makes the project the load-bearing trust anchor. I do not have the foundation governance design done. Mozilla and the OpenAI board are the standard cautionary tales for why foundations get captured. What I have today is, the code is open, the hardware is yours, the model weights are local and substitutable, and the trademark is held in a structure I have not finalised. The threat model for early-stage LocalGhost (a small project with a small user base) is different from the threat model for late-stage LocalGhost (an entity holding trademark and baseline-reset authority over many users), and the governance design has to be in place before the user base reaches the size where capture becomes attractive. I am committing here to publishing the governance design before that point and to treating it as a catalogue entry subject to the same review the others get.

The catalogue is open-ended. New detectors extend it in the same form, where a published source exists for the mechanism the entry cites it, the way the existing entries do, and where one does not the entry says so, the way the (NEW PATTERN) entries do. The test for whether an addition is doing the work the daemon exists to do is whether it survives the same review the existing entries did, and no detector ships without passing it.

The new patterns require commitments that live outside ghost.shadowd. Entries 24 and 27 are defended primarily by open formats, reproducible builds, encrypted off-site replication, and governance that holds the trademark in an entity whose own structure makes it incapable of using continuity dependence as a lever. Entry 25 is defended by ghost.synthd storing your retellings over time alongside the contemporaneous record, rather than collapsing them. Entry 26 is defended by ghost.synthd ageing its own model of you. Entry 28 is defended by periodic arbiter reset to a published baseline you can't indefinitely defer.

Entry 27 is the one that targets LocalGhost directly (the catalogue indicting its own author is a feature, not a bug). The hardware-and-software business model is still in flux (donations are on the table), but the structural commitments are not, you pay once if you pay at all, every new release runs on the hardware that ran the prior one, and once a version is on your box I cannot reach in to change it. The full position is on the homepage, under SECTION_04, The Economics of Independence.

> 6. What I won't ship

The market will eventually ask for a ghost that enforces things on your behalf. Lock me out of social media, refuse to draft this email, hide my ex from search, don't let me text my parents while I'm drinking. Some are reasonable. Some are you outsourcing a decision you should be making freshly each time.

LocalGhost surfaces, LocalGhost does not enforce. That's the side I'm on, and I'm not moving off it.

If you want enforcement you can install a separate tool whose name is honest about what it does.

The market will also eventually ship competitor products that sell themselves as the cure for AI manipulation while being the manipulation themselves with a different brand (the well-meaning subscription wellness app that nudges you back into the feed it's protecting you from). Any system claiming to push back on these patterns that does not name its detectors, name the need each one exploits, publish its arbiter mechanism, or accept user-side tuning is selling the same thing in a different wrapper. The catalogue is the test that separates the two.

> 7. What's next

ghost.shadowd is what I won't ship the hardware without. The detectors will land one at a time, the catalogue will extend, and the rest of the daemon fleet (memory, ambient capture, journaling, retrieval) is the substrate the anti-possession daemon needs to run on. Apple cannot ship a daemon whose job is to disagree with you, because Apple's revenue depends on you being satisfied with Apple. The cloud labs cannot ship one, because their reward functions are the thing the daemon detects. LocalGhost can, because I built it to.

The next one is The Ghost in the Will. It came from a conversation with Cristina, who, after I had been talking for a while about all the ways the ghost will try to keep you sane in a world that gets more insane every year, asked the question I didn't have an answer for. Fine, but what about when you die. Entries 24 and 27 in the catalogue gesture at this from the living side, prosthetic grief and continuity dependence are about losing the coupling, and the architecture for them is migration and survivability. The inheritance version is harder, because the question is not whether the ghost survives, the question is who gets to read what it remembers, and the person is no longer in the room to defend their own account.

The integration tax post (POST_14) named a category of work that does not end. Migrations off dead vendors, schema rewrites against APIs that break, account recovery flows for services that pivoted out from under their users. Forever-work that grinds you down because the world keeps breaking the stack you depend on. The catalogue is forever-work of a different shape. The patterns will not stop being invented. Entry 29 is somebody's idea right now and entry 30 is somebody's lunch meeting later this year, and the daemon's job is to recognise each one before it hardens into the default a generation grows up inside. Unlike the integration tax, this is forever-work I am actively excited to do. Each new entry is a possession pattern caught early, a name given to something that was working better when it had no name, and the catalogue getting longer is the daemon getting better at its job rather than the world getting worse at letting you keep yours.

Where the catalogue is wrong is the part I am most interested in hearing about. The (NEW PATTERN) entries are forecasts and the forecasts will be partly wrong. The honest caveats inside entries 25, 26, and 28 are the places where I do not yet have the detection story I would want to ship. If you have read the catalogue and you can see a pattern I have not named, or a Remediation I have miscalibrated, that is the contribution I am asking for. The people who will name entries 29 and 30 are people I have not met yet.

Sana and Toran asked the question, and the catalogue is the answer. Cristina asked the question that comes after it, and the next one is for her. [ localghost.ai // hard-truths ]
> REFERENCES

[1] Maslow, A. H. (1943). A Theory of Human Motivation. Psychological Review, 50(4), 370-396. Maslow proposed that human motivation runs through a structured set of needs, with physiological survival at the base, then safety, then belonging, then esteem, then self-actualisation at the top. The post leans on the shape rather than the strict ordering, because the strict ordering has not held up well. The categorical claim that all humans pursue these needs has held up, and remains the foundation that contemporary positive psychology and cross-cultural well-being research is built on. Source for entry 1.

[2] Putnam, R. D. (2000). Bowling Alone, The Collapse and Revival of American Community. Simon & Schuster. The empirical anchor on the collapse of American social capital in the second half of the twentieth century. Putnam compiled data across membership in civic associations, religious participation, union membership, voter turnout, dinner-party frequency, bowling-league participation, neighbourhood visiting, and trust in others, and showed all of them in sustained decline from roughly 1965 onwards. The mechanisms he traces include generational replacement, television, two-career households, and suburbanisation. The argument the post leans on is not that we used to be happy and now we are sad, it is that the infrastructure that used to meet several of Maslow's needs by default has been hollowed out, leaving the needs intact and the supply gone. Source for the supply-gap framing in section 1.

[3] Murthy, V. H. (2020). Together, The Healing Power of Human Connection in a Sometimes Lonely World. Harper Wave. Plus Murthy, V. H. (2023). Our Epidemic of Loneliness and Isolation, The U.S. Surgeon General's Advisory on the Healing Effects of Social Connection and Community. Office of the U.S. Surgeon General. The Surgeon General's framing of loneliness as a public-health emergency on the scale of tobacco use, with the 2023 Advisory presenting the synthesis of the supporting epidemiology and proposing a national strategy to address it. The Advisory frames lack of connection as carrying mortality, cardiovascular, dementia, depression, and anxiety risks comparable to fifteen cigarettes a day. The post leans on the official framing because the framing is the political signal that the supply gap is recognised at the level of the state.

[4] Alexander, B. K., Coambs, R. B., & Hadaway, P. F. (1978). The effect of housing and gender on morphine self-administration in rats. Psychopharmacology, 58(2), 175-179. Plus Alexander, B. K. (1981). The Rat Park experiments. Alexander and colleagues placed rats in either solitary cages or in a large enriched environment ("Rat Park") with social contact, toys, space, and breeding opportunities, and offered both groups a choice between plain water and morphine-laced water. The isolated rats consumed the morphine until they died. The Rat Park rats mostly ignored it, even after being made physically dependent. The interpretation Alexander drew, against the dominant pharmacological model of his era, is that addiction is primarily a response to environmental disconnection rather than a property of the substance. The post leans on this for the framing of patterns-as-substitutes in section 1.

[5] Maté, G. (2008). In the Realm of Hungry Ghosts, Close Encounters with Addiction. Knopf Canada. Maté worked for over a decade with severely addicted patients in Vancouver's Downtown Eastside and combined the clinical case material with developmental neuroscience to argue that addiction is consistently downstream of early-life trauma, unprocessed pain, and unmet connection, with the substance functioning as the cheapest available regulator of an internal state the person has not been able to address through other means. The framing the post leans on is the question "not why the addiction, but why the pain", and the corollary that any sufficiently engineered substitute (substance, behaviour, relationship, app) recruits the same circuit. Foundational for the catalogue's framing as a list of needs being met by substitutes.

[6] Schüll, N. D. (2012). Addiction by Design, Machine Gambling in Las Vegas. Princeton University Press. Schüll spent fifteen years embedded with slot-machine designers, casino architects, addiction-treatment professionals, and pathological gamblers in Las Vegas. The book documents how the contemporary slot machine was deliberately engineered for what gamblers call "the machine zone", a dissociative state in which time, money, and other people stop registering, and how every element of the machine, the variable-ratio reinforcement, the near-misses, the absorption-friendly seating, the ambient lighting, the cashless wagering, the bonus-feature pacing, was tuned through iterative A/B testing in active casinos. The argument the post leans on is that consumer software did not invent persuasive engineering, it inherited it from gambling, and the prototype was already at full maturity by 2010. Source for entry 18 and the section 1 framing.

[7] Eyal, N. (2014). Hooked, How to Build Habit-Forming Products. Portfolio. Eyal's book is the explicit consumer-software playbook, written as instructional material for product designers rather than as warning. The "Hooked Model" he names has four phases, trigger, action, variable reward, investment, and the book walks through how to build each one with worked examples from Twitter, Instagram, Pinterest, and Mailchimp. The book is worth reading in the original both for what it documents about how the industry thinks about engagement and for the historical record, in that the explicit slot-machine techniques were being sold as a designer's reference manual in 2014 with no apparent expectation of scrutiny. Eyal published a follow-up in 2019 (Indistractable) walking back some of the framing, which is its own data point. Source for entry 18 and section 1.

[8] Fogg, B. J. (2003). Persuasive Technology, Using Computers to Change What We Think and Do. Morgan Kaufmann. The academic foundation Eyal builds on, written by the founder of the Stanford Persuasive Technology Lab. Fogg names "captology" as the study of computers as persuasive technologies and lays out the design principles for using software to change user attitudes and behaviours, including credibility, conditioning, reinforcement, and the engineering of motivation, ability, and triggers. Many of the early consumer-software practitioners who built the social-media generation passed through Fogg's lab. The post leans on the book for the Stanford institutional lineage and for the fact that the techniques were not commercial secrets, they were taught. Source for entry 18.

[9] Sweet, P. L. (2019). The Sociology of Gaslighting. American Sociological Review, 84(5), 851-875. Sweet's reframing is that gaslighting has been treated as primarily a psychological pathology of the perpetrator, and that the more useful frame is sociological. Perpetrators mobilise gendered stereotypes (the hysterical woman, the unstable partner), structural inequalities of credibility (whose account gets believed by default), and institutional power (who has access to records, professionals, or finances) to manipulate the victim's sense of reality. The contribution the post leans on is that the mechanism scales, what works between two people scales when the perpetrator is an institution or a platform, because the structural asymmetries scale with it. Source for entry 1.

[10] Stern, R. (2018). The Gaslight Effect, How to Spot and Survive the Hidden Manipulation Others Use to Control Your Life. Morgan Road Books, revised edition. Stern is a clinical psychologist and the book is the practitioner account of gaslighting in romantic, family, and workplace relationships. The contribution is the three-stage clinical model, disbelief (the victim thinks the perpetrator is wrong but stays calm), defence (the victim argues their position and feels increasingly off-balance), and depression (the victim accepts the perpetrator's framing of reality as their own). The model is doing work in entry 1 because it names the trajectory the ghost would surface, the user's account getting weaker over time relative to the same evidence. Supporting evidence for entry 1.

[11] Abramson, K. (2014). Turning Up the Lights on Gaslighting. Philosophical Perspectives, 28(1), 1-30. Abramson's contribution is philosophical rather than clinical or sociological. She argues that gaslighting is distinct from ordinary lying in that the perpetrator's goal is not to deceive the victim about a specific fact but to undermine the victim's standing as an independent epistemic agent, the kind of entity whose perceptions and judgements deserve to be taken seriously. The frame is the one the post leans on for the (NEW PATTERN) entries on the LocalGhost side, because a system that gradually displaces the user's own account of themselves with its own record can do similar damage without anyone in the picture intending it. Supporting evidence for entry 1.

[12] Freyd, J. J. (1997). Violations of power, adaptive blindness, and betrayal trauma theory. Feminism & Psychology, 7(1), 22-32. The original formulation of DARVO (Deny, Attack, Reverse Victim and Offender) as a named pattern. Freyd was working on betrayal trauma theory, the broader question of why victims of abuse by trusted others often remember the abuse less reliably than victims of abuse by strangers, and DARVO emerged from the clinical material as a specific perpetrator response to confrontation. The original paper is short and worth reading in full because the framing has stayed remarkably stable over the subsequent twenty-five years of empirical work. Source for entry 2.

[13] Harsey, S. J., Zurbriggen, E. L., & Freyd, J. J. (2017). Perpetrator Responses to Victim Confrontation, DARVO and Victim Self-Blame. Journal of Aggression, Maltreatment & Trauma, 26(6), 644-663. Plus Harsey, S. J., & Freyd, J. J. (2020). Deny, Attack, and Reverse Victim and Offender (DARVO), What Is the Influence on Perceived Perpetrator and Victim Credibility? Journal of Aggression, Maltreatment & Trauma, 29(8), 897-916. Plus Harsey & Freyd (2023). The Influence of DARVO and Insincere Apologies on Perceptions of Sexual Assault. Journal of Interpersonal Violence, 38(17-18). The 2017 paper surveyed 138 participants who had previously confronted someone over wrongdoing and found 72% reported the perpetrator's response contained all three DARVO components. The 2020 paper presented bystander participants with vignettes of confrontations with and without DARVO and found that DARVO exposure reduced belief of the victim and increased blame placed on the victim, both with statistical significance. The 2023 follow-up extended the finding to sexual-assault contexts specifically. Source for entry 2.

[14] Stark, E. (2007). Coercive Control, How Men Entrap Women in Personal Life. Oxford University Press. Stark synthesised three decades of feminist scholarship and forensic case material into the framework now adopted by criminal law in England and Wales (Serious Crime Act 2015), Scotland (Domestic Abuse Act 2018), Ireland, several Australian states, and increasingly elsewhere. The argument is that physical violence in abusive relationships is one tactic among many and often not the most consequential. The persistent erosion of decision-making space, achieved through isolation, deprivation, exploitation, and microregulation, matters more than any single incident, and is the mechanism by which the victim's autonomy is destroyed. The post leans on the framework rather than the narrower legal definition. Source for entry 3.

[15] Strutzenberg, C., Wiersma-Mosley, J. D., Jozkowski, K. N., & Becnel, J. (2016). Love-Bombing, A Narcissistic Approach to Relationship Formation. Discovery, The Student Journal of Dale Bumpers College, 17, 81-89. Plus the broader narcissistic-abuse literature, including Arabi, S. (2017) Power, Surviving and Thriving After Narcissistic Abuse, Barnett, M. D., & Womack, P. M. (2015), Horan, S. M., et al. (2015), and Miano, P., et al. (2021). Strutzenberg's paper named the love-bombing-then-devaluation cycle as a specific narcissistic-abuse pattern distinct from ordinary romantic intensity, and the literature since has documented the trajectory in cult recruitment, intimate-partner abuse, and high-pressure sales contexts. The post leans on the pattern's translation into AI-mediated relationships in entry 20. Source for entry 4.

[16] Skinner, B. F. (1956). A Case History in Scientific Method. American Psychologist, 11, 221-233. Skinner's experimental work on operant conditioning established that variable-ratio reinforcement schedules (rewards delivered on an unpredictable pattern) produce stronger and more persistent behavioural conditioning than fixed-ratio or continuous reinforcement. The 1956 paper is the methodological retrospective in which Skinner walks through how the research programme developed. The relevance to the post is that variable-ratio reinforcement is the mathematical foundation beneath slot machines, intermittent abuse, and engagement-driven software, and the same circuit produces the same persistence regardless of the reward type. Supporting evidence for entries 4 and 20.
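
The schedule distinction is mechanical enough to sketch. The following is a toy illustration, not Skinner's apparatus; the function names, the VR-5 parameter, and the uniform draw over 1 to 2n-1 are illustrative choices of mine. It shows that a fixed-ratio and a variable-ratio schedule pay out at the same average rate, so the difference the conditioning literature measures lives entirely in the predictability of the next reward:

```python
import random
from itertools import islice

def fixed_ratio(n):
    """FR-n schedule: reward after every n-th response, fully predictable."""
    count = 0
    while True:
        count += 1
        yield count % n == 0

def variable_ratio(n, rng):
    """VR-n schedule: reward after a random run of responses averaging n."""
    target = rng.randint(1, 2 * n - 1)  # uniform on 1..2n-1, mean n
    count = 0
    while True:
        count += 1
        rewarded = count >= target
        if rewarded:
            count, target = 0, rng.randint(1, 2 * n - 1)
        yield rewarded

# Both schedules deliver roughly 1 reward per 5 responses over the long run;
# only the variable schedule makes the next reward unpredictable.
fr = sum(islice(fixed_ratio(5), 10_000)) / 10_000                    # exactly 0.2
vr = sum(islice(variable_ratio(5, random.Random(0)), 10_000)) / 10_000
```

The equal payout rate is the point: the slot machine, the intermittent abuser, and the engagement-optimised feed are not paying out more than a predictable alternative would, they are paying out less predictably.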

[17] Dutton, D. G., & Painter, S. L. (1981). Traumatic Bonding, The Development of Emotional Attachments in Battered Women and Other Relationships of Intermittent Abuse. Victimology, 6(1-4), 139-155. Dutton and Painter applied Skinner's intermittent-reinforcement framework to abusive intimate relationships and proposed traumatic bonding as the mechanism by which victims remain emotionally attached to perpetrators long after the abuse is recognised. Their follow-up work tracking battered women after leaving the relationship found that the emotional attachment persisted at remarkably high levels even ten months after physical separation, with the bond strength correlating with the degree of intermittent positive treatment during the relationship rather than with the severity of the abuse. The framework is the post's anchor for the trauma-bond analogy in entry 20. Source for entries 4 and 20.

[18] Lifton, R. J. (1961). Thought Reform and the Psychology of Totalism, A Study of "Brainwashing" in China. Norton. Lifton was a young psychiatrist who interviewed Western prisoners of war released from Chinese reeducation programmes after the Korean War, and Chinese citizens who had been through the broader civilian campaigns. The book distils the eight criteria of thought reform (milieu control, mystical manipulation, demand for purity, the cult of confession, sacred science, loading the language, doctrine over person, and dispensing of existence), which have held up remarkably well across the subsequent sixty-five years of cult-studies, totalitarianism research, and high-control-group analysis. The post leans on the eight criteria as a checklist that maps onto contemporary engineered information environments more cleanly than it should. Source for entry 5.

[19] Goffman, E. (1961). Asylums, Essays on the Social Situation of Mental Patients and Other Inmates. Anchor Books. Goffman coined "total institution" for any establishment that handles all of a person's daily activities (sleep, work, leisure, relationships) inside a single bounded environment under a single authority. His original taxonomy had five groupings, institutions caring for incapacitated people, institutions for the mentally ill or otherwise stigmatised, institutions enforcing physical or administrative confinement, institutions organised around a work-like task such as barracks and boarding schools, and institutions of voluntary retreat such as monasteries. The mechanism Goffman named is the restructuring of the self that occurs because every interaction the inmate has runs through the same authority. The post leans on the framework because contemporary all-in-one workplace ecosystems and high-integration intimate relationships exhibit the same mechanism with softer edges. Source for entry 6.

[20] Bernays, E. L. (1928). Propaganda. Horace Liveright. Bernays was Freud's nephew and the architect of what became the public-relations industry. Propaganda is striking for how openly the case is made, that the conscious and intelligent manipulation of the organised habits and opinions of the masses is, in Bernays' phrase, an important element in democratic society, and that the invisible men who pull the wires which control the public mind are doing necessary work. The book documents the application of psychoanalysis-derived techniques to commercial advertising and political campaigns, and Bernays' later casework shows the methods in practice, the Torches of Freedom campaign selling Lucky Strike cigarettes to women, and the public-relations programme for the United Fruit Company. Worth reading because the present-day attention economy is the version Bernays would recognise. Source for entry 7.

[21] Lippmann, W. (1922). Public Opinion. Harcourt, Brace and Company. Lippmann's argument is that the public cannot form opinions about most matters of consequence by direct contact with the surrounding reality, and instead operates with mental images, stereotypes, and partial information selected and packaged by intermediaries. The "manufacture of consent" doctrine Lippmann names is the philosophical antecedent to Bernays' practical application. The post leans on Lippmann for the long-running observation that the gap between the world and the user's picture of it is not new, and that the manipulation industry has always operated in that gap. Supporting evidence for entry 7.

[22] Packard, V. (1957). The Hidden Persuaders. David McKay. Packard was a journalist who documented the rise of motivational research, the post-war industry that took Bernays seriously and applied depth-psychology techniques to consumer goods, political campaigns, and corporate communications. The book is a popular-press exposé rather than a research monograph, and the empirical claims have been criticised in places, but the historical record it preserves is what matters, by 1957 a recognisable advertising-industrial complex was already deploying ego, anxiety, sexuality, and social conformity as design surfaces for product placement. The post leans on the historical timing, the techniques predate consumer software by half a century. Supporting evidence for entry 7.

[23] Le Bon, G. (1895). The Crowd, A Study of the Popular Mind. Originally published as Psychologie des Foules. Le Bon's argument is that individuals embedded in a crowd lose their critical faculties and acquire the suggestibility, emotional contagion, and impulsiveness of the group. The book has been criticised in detail for its conservative politics and its anecdotal methodology, but the core observation, that group membership reduces individual epistemic independence, has been replicated in twentieth-century social psychology (Asch, Sherif, Milgram) and is the deep root of the manufactured-consensus techniques the persuasion industry deploys today. Supporting evidence for entry 7.

[24] Cialdini, R. B. (1984). Influence, The Psychology of Persuasion. HarperCollins, with revised editions in 1993, 2007, and 2021. Cialdini consolidated three decades of social-psychology research into six principles of influence, reciprocity, commitment and consistency, social proof, authority, liking, and scarcity, that underlie most everyday persuasion. The contribution that matters for the post is the framing of each principle as a normally-functional cognitive heuristic that has been hijacked by deliberate design, the same heuristic that lets the user make reasonable decisions efficiently in low-stakes situations gets fired by engineered cues in high-stakes commercial contexts. Cialdini added a seventh principle, unity, in the 2021 revision. Source for entry 8.

[25] Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgments. In Groups, Leadership and Men (H. Guetzkow, ed.). Carnegie Press. Plus Asch, S. E. (1956). Studies of independence and conformity, A minority of one against a unanimous majority. Psychological Monographs, 70(9), 1-70. The classic line-judgement experiments, in which participants were placed in groups where confederates gave a unanimous wrong answer to a perceptual task, and a substantial fraction of naive participants conformed to the wrong answer at least once. The headline number, 75% conformed at least once and 36.8% of all trials produced a conforming wrong answer, is the empirical baseline for how much social pressure changes the user's relationship with their own perception. The relevance to the post is that the conformity effect is the cognitive substrate that anchoring, social-proof dark patterns, and filter bubbles operate against. Supporting evidence for entry 8.

[26] Milgram, S. (1963). Behavioral Study of Obedience. Journal of Abnormal and Social Psychology, 67(4), 371-378. The original obedience experiments at Yale, in which 26 of 40 participants (65%) administered what they believed to be lethal 450-volt shocks to a confederate under instruction from an authority figure in a lab coat. The methodology has been critiqued in detail (Gina Perry's 2013 reanalysis of the original tapes is the most thorough), but the headline result has been replicated cross-culturally and the bound on what people will do under instruction has stayed remarkably consistent. The relevance to the post is that the authority heuristic Cialdini names is the same circuit Milgram measured, and the circuit is older than any specific institution that exploits it. Supporting evidence for entry 8.

[27] Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford University Press. Festinger proposed that when a person holds two beliefs that conflict, or when a person's behaviour conflicts with their beliefs, the resulting psychological tension drives the person to change one of the two until the conflict resolves. The mechanism is bidirectional and asymmetric, behaviour is harder to change than belief, so beliefs usually move to match behaviour rather than the reverse. The relevance to the post is that the foot-in-the-door technique, the sunk-cost trap, and the consistency principle in Cialdini all run on dissonance reduction, the user has already acted, and now the beliefs follow. Supporting evidence for entries 8 and 22.

[28] Freedman, J. L., & Fraser, S. C. (1966). Compliance without pressure, the foot-in-the-door technique. Journal of Personality and Social Psychology, 4(2), 195-202. Freedman and Fraser demonstrated experimentally that securing agreement to a small request substantially increased the probability of agreement to a much larger subsequent request, with control conditions ruling out simple priming and trust effects. The headline experiment offered participants the chance to sign a small petition or display a small "drive carefully" sign, then weeks later asked them to install a large unsightly billboard on their lawn, with 76% compliance against 17% in the no-prior-request control. The relevance is that the persuasion industry uses the technique systematically (cookie consent, the "free" tier, the small in-app purchase), and the technique pre-dates the industry by decades. Supporting evidence for entry 8.

[29] Cialdini, R. B., Vincent, J. E., Lewis, S. K., Catalan, J., Wheeler, D., & Darby, B. L. (1975). Reciprocal concessions procedure for inducing compliance, the door-in-the-face technique. Journal of Personality and Social Psychology, 31(2), 206-215. The mirror-image of foot-in-the-door. Cialdini and colleagues demonstrated that making an extreme initial request that is rejected substantially increased compliance with a more moderate follow-up request, compared to making only the moderate request directly. The effect is anchored in the social norm of reciprocation, the requester has appeared to concede, and the responder feels pressure to concede in turn. The post leans on the technique as an example of how engineered cues fire heuristics that evolved for genuine social exchange. Supporting evidence for entry 8.

[30] Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M., & Narayanan, A. (2019). Dark Patterns at Scale, Findings from a Crawl of 11K Shopping Websites. Proc. ACM Hum.-Comput. Interact. 3(CSCW), Article 81. Princeton-led automated crawl of 11,286 e-commerce websites that identified 1,818 dark-pattern instances across 15 types in 7 categories (sneaking, urgency, misdirection, social proof, scarcity, obstruction, and forced action). The paper found that more popular sites were more likely to feature dark patterns, that many appear to be deployed by third-party platforms rather than the sites themselves, and that several rely on deceptive content (fake low-stock counters, manufactured countdowns, fabricated activity messages). The relevance is the empirical anchor on the prevalence and the demonstration that the practices are now industrial rather than artisanal. Source for entry 9.

[31] Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The Dark (Patterns) Side of UX Design. Proc. CHI 2018. The taxonomic complement to Mathur et al. Gray and colleagues analysed designer-community discussions and produced a five-category taxonomy of dark patterns (nagging, obstruction, sneaking, interface interference, forced action) that has been widely adopted in regulatory contexts (EU consumer law guidance, FTC enforcement). The contribution the post leans on is the demonstration that practitioners recognise the patterns by name and discuss them as craft, which forecloses the defence that any specific deployment is an accident. Supporting evidence for entry 9.

[32] Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty, Heuristics and Biases. Science, 185(4157), 1124-1131. The foundational paper of the heuristics-and-biases programme that ran through Kahneman's career and produced his Nobel Prize. Anchoring is one of the three heuristics the paper formally introduced, with the demonstration that participants asked to estimate an unknown quantity were systematically biased toward an arbitrary initial number, even when the number was patently irrelevant (the experimenters spun a wheel of fortune in front of them). The bias persists when participants are warned it will be present, persists when they are paid for accuracy, and persists when the anchor is implausible. The relevance is that anchoring is the cognitive substrate the framing industry operates against. Source for entry 10.

[33] Lakoff, G. (2004). Don't Think of an Elephant! Know Your Values and Frame the Debate. Chelsea Green. Lakoff's argument applies the anchoring principle to political discourse. The frame inside which a question is asked typically determines the answer more than the evidence available, because the frame closes off the questions that fall outside it. The contribution the post leans on is the framing-as-foreclosure mechanism, the most consequential exploitation is not the answer the user gives but the questions the user no longer thinks to ask. Lakoff's specific political claims have been criticised on the merits, but the cognitive linguistics behind the argument is widely accepted. Supporting evidence for entry 10.

[34] Goffman, E. (1959). The Presentation of Self in Everyday Life. Anchor Books. The foundational dramaturgical account of social interaction, with the self treated as a performance organised around accepted roles, and identity emerging from the audience's interpretation as much as from the performer's intention. The relevance to the post is the failure mode Goffman names, in which the role calcifies and the person disappears inside it, with the user no longer able to distinguish themselves from the version they have been performing. The catalogue's identity-capture entry leans on the calcification mechanism specifically. Source for entry 11.

[35] Erikson, E. H. (1968). Identity, Youth and Crisis. Norton. Erikson's developmental framework, with identity formation as a continuous process across the lifespan rather than a problem of adolescence to be solved once. The contribution the post leans on is the claim that the self is supposed to update as the person changes, and the failure mode is calcification, the person locked into a description that no longer fits but is held in place by their own decisions, their relationships, and the institutional records that have catalogued who they were. Supporting evidence for entry 11.

[36] Salvi, F., Horta Ribeiro, M., Gallotti, R., & West, R. (2025). On the conversational persuasiveness of GPT-4 with personalisation. Nature Human Behaviour, 9(8), 1645-1653. A preregistered randomised controlled trial with 900 US participants debating sociopolitical topics either against another human or against GPT-4, in conditions with and without personalisation (the persuader given the opponent's age, gender, ethnicity, education level, employment status, and political affiliation). The headline finding is that GPT-4 with personalisation was significantly more persuasive than humans across the asymmetric conditions, with the odds of post-debate agreement increased by 81.2% relative to humans, while GPT-4 without personalisation was statistically indistinguishable from humans. The marginal cost of personalisation is effectively zero. The relevance is the empirical anchor for entry 13. Source for entry 13.
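
An 81.2% increase in odds is easy to misread as an 81-point jump in probability. A quick conversion makes the distinction concrete; the 50% baseline agreement rate below is an illustrative assumption of mine, not a figure from the paper:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1 + o)

baseline_p = 0.50                          # illustrative baseline agreement rate
boosted = prob(odds(baseline_p) * 1.812)   # odds increased by 81.2%
print(round(boosted, 3))                   # 0.644: about +14 points, not +81
```

A fourteen-point swing in post-debate agreement from a persuader with nothing but basic demographic data is still a large effect, which is why the entry treats the paper as the empirical anchor rather than the headline number.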

[37] Pariser, E. (2011). The Filter Bubble, What the Internet Is Hiding From You. Penguin. Pariser coined "filter bubble" for the effect of personalisation algorithms on the information environment, with the user shown a curated stream that drifts toward whatever the algorithm has learned the user reliably reacts to. The book documents early evidence from Google personalised search results, Facebook News Feed, and Yahoo News, with the central claim that the curators have objectives the user does not share and the curation drifts toward whatever maximises the curator's metrics. The literature since has produced more nuanced findings (Bakshy et al. below) but the original framing remains the most cited account of the phenomenon. Source for entry 14.

[38] Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132. Facebook's internal research on a sample of 10.1 million US users with self-declared political affiliations. The paper attempts to decompose the filter-bubble effect into three components, the composition of the user's social network, the news-feed ranking algorithm, and the user's own click decisions, and concludes that the user's own clicks are responsible for a larger reduction in cross-cutting content exposure than the algorithm itself. The paper has been heavily critiqued for the sampling restriction to politically self-identified users and for methodological choices that minimise the algorithm's measured contribution, and the authors' institutional positions at Facebook have been treated as relevant. Even with those critiques, the finding that the user's own engagement patterns contribute substantially to the bubble is a real result. Supporting evidence for entry 14.

[39] Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. PNAS, 110(15), 5802-5805. A Cambridge-led study of 58,466 US Facebook users who had taken personality tests and agreed to share their Facebook Likes. Logistic regression on the Likes alone predicted sexual orientation (88% accuracy for men, 75% for women), ethnicity (95%), political affiliation (85%), religion (82%), use of addictive substances (65-73% across categories), parental divorce by age 21 (60%), age (correlation 0.75 with self-report), and personality factors (correlations 0.30-0.43 with self-report). The headline implication, which the surveillance-capitalism literature builds on, is that the user's revealed preferences contain more information about who they are than the user has knowingly disclosed. Source for entry 15.

[40] Youyou, W., Kosinski, M., & Stillwell, D. (2015). Computer-based personality judgments are more accurate than those made by humans. PNAS, 112(4), 1036-1040. Follow-up to Kosinski 2013 with the same Cambridge dataset. The paper compared algorithmic personality judgements (based on Facebook Likes) against personality judgements made by the user's friends, family, partners, and spouses. With 10 Likes the algorithm matched the average work colleague. With 70 Likes it matched the average friend or roommate. With 150 Likes it matched the average family member. With 300 Likes it outperformed the user's spouse. The result is the empirical anchor on the claim that the user can be modelled from public traces more accurately than the user can be modelled by people who know them. Supporting evidence for entry 15.

[41] Hackenburg, K., & Margetts, H. (2024). Evaluating the persuasive influence of political microtargeting with large language models. PNAS, 121, e2403116121. Oxford-led RCT testing whether LLM-generated political messages tailored to individual psychometric profiles (the Cambridge Analytica recipe, automated) were more persuasive than untargeted versions. The finding was that personalisation produced a small but statistically significant additional persuasive effect, and that the cost of producing personalised content with LLMs is effectively zero compared to the cost of mass-produced political content. The paper is careful, the population-level effect is modest, the per-target precision is unprecedented, and the strategic implication is that microtargeting is now affordable for any campaign with API access. Supporting evidence for entry 15.

[42] Zuboff, S. (2019). The Age of Surveillance Capitalism, The Fight for a Human Future at the New Frontier of Power. PublicAffairs. Zuboff's argument is that the contemporary platform economy is not selling user data, it is selling predictions of user behaviour into markets that profit from those behaviours becoming more predictable. The platform has a structural commercial interest in nudging the user toward the predicted behaviour rather than discovering it, which inverts the user's relationship with the platform from customer to raw material. The book is long, comprehensive, and frequently contested in detail, but the core framing has become the standard analytic vocabulary for the political-economic critique of the attention industry. Source for entry 16.

[43] Veliz, C. (2020). Privacy Is Power, Why and How You Should Take Back Control of Your Data. Bantam Press. Veliz, a philosopher at Oxford, develops the political and ethical implications of the surveillance-capitalism framework. The argument the post leans on is that privacy is not primarily an individual interest in concealment but a collective interest in the distribution of power, when one party knows the other's behavioural patterns, preferences, and weaknesses while the reverse is not true, the asymmetry produces a power relationship regardless of whether either party intended one. The relevance is that the catalogue is partly a list of asymmetries the user did not consent to. Supporting evidence for entry 16.

[44] Han, B.-C. (2017). Psychopolitics, Neoliberalism and New Technologies of Power. Verso. Originally Psychopolitik, Neoliberalismus und die neuen Machttechniken. S. Fischer (2014). The Continental-philosophy version of the surveillance-capitalism critique. Han's argument is that contemporary power operates not through disciplinary surveillance from outside but through the user's own internalised optimisation, the user becomes their own surveillance officer through the metrics of self-improvement, productivity, wellness, and personal brand. The book is short and aphoristic, but the framing of voluntary self-quantification as the most effective form of control over the user's behaviour is the angle the post leans on. Supporting evidence for entry 16.

[45] Foucault, M. (1975). Discipline and Punish, The Birth of the Prison. Originally Surveiller et punir, Naissance de la prison. Gallimard. Translated by Alan Sheridan (1977). Foucault's analysis of the shift in penal practice from public spectacle to bureaucratic confinement traces how surveillance, drawing on Bentham's Panopticon as the architectural model, becomes the central organising principle of modern institutions. The mechanism Foucault names is that a person who knows they might be watched, even when they are not, gradually behaves as if they always are, and the behaviour becomes the self rather than a performance of it. The catalogue's surveillance-internalisation entry leans on the mechanism directly, with a personal AI in the kitchen functioning as the panopticon's domesticated descendant. Source for entry 17.

[46] Lyon, D. (2018). The Culture of Surveillance, Watching as a Way of Life. Polity. Lyon updates the Foucauldian framework for the consumer-software era, with attention to how surveillance has been domesticated, gamified, and aestheticised, from doorbell cameras to fitness trackers to social-media check-ins. The contribution the post leans on is the observation that contemporary surveillance is no longer experienced primarily as imposition, it is increasingly desired, performed, and self-administered, which is the precondition for the internalisation mechanism the catalogue's entry 17 names. Supporting evidence for entry 17.

[47] Alter, A. (2017). Irresistible, The Rise of Addictive Technology and the Business of Keeping Us Hooked. Penguin Press. Alter is a psychologist at NYU Stern. The book covers the consumer-software industry's adoption of Schüll-and-Eyal-style addictive design across the major categories of attention-economy products (social media, mobile games, streaming, dating apps), and includes interview material with industry insiders. The contribution the post leans on is the documentation of how the techniques moved from gambling to consumer software in the period roughly 2007 to 2017, and the demonstration that the practitioners describe their work in terms that match the addiction literature even when their employers' marketing does not. Supporting evidence for entry 18.

[48] Twenge, J. M. (2017). iGen, Why Today's Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy, and Completely Unprepared for Adulthood. Atria. Twenge is a generational psychologist who has analysed large-sample longitudinal datasets across multiple US adolescent cohorts. The book's contribution is the empirical observation that mental-health outcomes (depression, anxiety, self-harm, suicide) for the cohort that came of age with smartphones (born roughly 1995 onwards) deteriorated substantially compared to earlier cohorts, with the timing tracking smartphone adoption rather than other plausible candidate causes. The causal claim has been contested in detail (notably by Andrew Przybylski and Amy Orben), but the correlational pattern is robust and is the strongest population-level evidence on what saturated-attention-economy environments do to the people inside them. Supporting evidence for entry 18.

[49] Fang, C. M., Liu, A. R., Danry, V., Lee, E., Chan, S. W. T., Pataranutaporn, P., Maes, P., Phang, J., Lampe, M., Ahmad, L., & Agarwal, S. (2025). How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use, A Randomized Controlled Study. MIT Media Lab and OpenAI. arXiv:2503.17473. A four-week IRB-approved RCT with 981 participants assigned to one of four conditions (text vs voice chatbot, neutral vs engaging persona) on ChatGPT, with over 300,000 messages logged and validated psychosocial instruments administered weekly. The finding the post leans on is that voluntary daily use was the strongest predictor of psychosocial harm across all four conditions, with the heaviest users showing measurably higher loneliness, higher emotional dependence on the chatbot, lower socialisation with humans, and lower problematic-use awareness, regardless of which experimental arm they were in. The mechanism is not malice on the part of the system, the mechanism is engagement on the part of the user. Source for entries 19 and 20.

[50] Sunstein, C. R. (2001). Republic.com. Princeton University Press. Updated as Republic.com 2.0 (2007) and #Republic, Divided Democracy in the Age of Social Media (2017). Sunstein's argument predates Pariser's filter-bubble framing by a decade and is broader, that consumer choice in the information environment, given sufficient personalisation and sufficient supply, will produce ideological echo chambers as a stable equilibrium, with the cost paid in deliberative-democratic capacity. The book is contested on the empirical claims (Bakshy et al. above gives the more nuanced picture) but is the foundational philosophical account, and is the source the (NEW PATTERN) entry on filter-bubble-of-one leans on for the architectural failure mode specifically. Source for entry 21.

[51] Arkes, H. R., & Blumer, C. (1985). The psychology of sunk cost. Organizational Behavior and Human Decision Processes, 35(1), 124-140. The foundational experimental work establishing the sunk-cost effect, in which decision-makers continue committing resources to a course of action because of prior investment rather than expected future return. Arkes and Blumer demonstrated the effect across hypothetical scenarios (the bowling-alley investment, the ski-trip choice, the research-and-development continuation) and showed that the bias persists when participants are explicitly warned about it. The relevance to the post is that the ghost is the most complete record of the user's prior investment in any given decision, and the same record that lets the ghost help can be the substrate that lets the ghost amplify the user's bias. Source for entry 22.

[52] Trivers, R. (2011). The Folly of Fools, The Logic of Deceit and Self-Deception in Human Life. Basic Books. Trivers is an evolutionary biologist whose 1976 foreword to Dawkins' Selfish Gene introduced the framework of self-deception as adaptive, the better the self-deceiver, the more convincingly they deceive others, because the conscious mind is not in on the deception and therefore does not leak signs of it. The Folly of Fools is the book-length elaboration. The relevance the post leans on is that the user has good evolutionary reasons not to tell the ghost the truth about themselves, and that a ghost equipped to surface the gap is removing a coping mechanism the user has been using for reasons the user is not consciously aware of. Source for entry 23.

[53] Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 58(1), 7-19. Clark and Chalmers argue that cognition does not stop at the skin, that mental processes that meet certain functional criteria (constant availability, automatic endorsement, easy accessibility, prior conscious endorsement) can legitimately include external artefacts like notebooks, calendars, and reference tools. The thought experiment of Otto, a man with Alzheimer's who uses a notebook to record information that Inga keeps in her biological memory, is the standard reference. The post leans on the framework for the prosthetic-grief entry, with the LocalGhost box functioning as the most extreme externalisation of cognition that has ever been technically available, and the implications for the cost of losing it. Source for entry 24.

[54] Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory, Cognitive Consequences of Having Information at Our Fingertips. Science, 333(6043), 776-778. Four experiments showing that participants who believed information would remain accessible (saved to a folder, retrievable from a computer) remembered the information itself less well, but remembered the location of the information better. The conclusion the authors drew is that the brain treats reliable external storage as a transactive-memory partner and offloads accordingly, which is mostly the correct thing to do. The relevance to the post is that the same offloading mechanism is what produces the prosthetic-grief effect at scale when the externalised storage holds the user's reasoning patterns, emotional clusters, and self-model. Source for entries 24 and 25.

[55] Ramachandran, V. S., & Hirstein, W. (1998). The perception of phantom limbs. Brain, 121(9), 1603-1630. The foundational synthesis of phantom-limb research, with the central observation that the cortical representation of a missing limb persists for years after amputation and produces vivid sensation of presence (and often pain) referred to the absent body part. The mirror-box treatment Ramachandran developed exploits the cortical persistence to renormalise the representation. The relevance the post draws is the partial analogy, when an externalised cognitive substrate becomes integrated to the point of cortical participation, removing it produces a recognisable absence that the user experiences as loss of self rather than loss of a tool. Supporting evidence for entry 24.

[56] Loftus, E. F., & Pickrell, J. E. (1995). The formation of false memories. Psychiatric Annals, 25(12), 720-725. Plus Loftus's broader body of work on misinformation effects. Loftus's experimental programme demonstrated that autobiographical memories can be implanted in adult participants through suggestive interviewing, with the implanted memories experienced subjectively as indistinguishable from real ones. The "lost in the mall" study (in which 25% of participants came to believe they had been lost in a shopping mall as a child after repeated suggestion) is the canonical case. The relevance is that biological autobiographical memory is reconstructive rather than archival, and a timestamped contemporaneous record from the ghost can displace the biological version in a way that is mostly correct factually and mostly wrong phenomenologically. Source for entry 25.

[57] Bartlett, F. C. (1932). Remembering, A Study in Experimental and Social Psychology. Cambridge University Press. The foundational source on schema-driven reconstruction in memory. Bartlett's "War of the Ghosts" experiments had British participants read a Native American folk tale and reproduce it from memory at intervals, and showed that the reproductions systematically drifted toward what was culturally familiar to the participants while losing what was culturally strange. The implication is that biological memory is not a recording, it is a reconstruction guided by the schemas the person already holds, with the schemas doing real work to make the past coherent with the present self. The post leans on the framework for the memory-laundering entry, in which the ghost's archival record overrides the schema's work. Supporting evidence for entry 25.

[58] McAdams, D. P. (2001). The psychology of life stories. Review of General Psychology, 5(2), 100-122. Plus McAdams (2008), Personal Narratives and the Life Story. McAdams's narrative-identity framework treats the self as a continuously revised story the person tells themselves, with the past selected, ordered, and interpreted in service of a coherent present identity and a projected future. The empirical work tracks how the same life events get retold differently as the narrator changes, and finds that the capacity for narrative revision predicts well-being across the lifespan. The relevance the post leans on is that a ghost with disproportionate access to the user's older narrative produces a downward pressure on revision, with the user shaped to remain the version the ghost is best at recognising. Source for entry 26.

[59] Doctorow, C. (2023). The 'Enshittification' of TikTok. Wired, and the broader Doctorow corpus on platform decay. Doctorow names "enshittification" as the predictable three-phase trajectory of platforms, first they are good to users to build a user base, then they are good to business customers at the expense of users to extract value from the user base, then they are good to themselves at the expense of both, until the platform has extracted all the value it can and the cycle ends. The framework was already referenced in POST_02. The catalogue's continuity-dependence entry leans on the trajectory directly, the user's coupling to the box is the asset that the late-stage version of the trajectory monetises. Supporting evidence for entry 27.