Psychic Driving — The Cameron Failure

What He Was Trying to Do

Plato described memory as a waxen tablet: what you impress upon it, you remember; what gets wiped away, you forget and don't know. Ewen Cameron read this and thought it sounded like a treatment plan.

Cameron was the founding director of the Allan Memorial Psychiatric Institute at McGill and one of the most prominent academic psychiatrists of the 1950s. He was impatient — with the slow pace of psychotherapy, with patients who wouldn't improve quickly, with the messiness of minds that had years of accumulated problems. His solution was the tabula rasa: destroy the existing patterns first, then write new ones.1

The name he gave to the second step — the writing — was psychic driving. Tape recordings of therapeutic messages, played through pillows and helmets for 10-20 hours per day, for days or weeks, while patients lay in sensory deprivation, sedated or under drug disinhibition. If the conditioned reflex could be taught through repetition, then enough repetition would implant new patterns in the newly cleared substrate.

He was wrong about this in a way that became one of the clearest demonstrations of what behaviorist assumptions cannot do with human minds.


The Protocol

Cameron's full protocol ran in two stages:

Stage 1 — Depatterning. Before psychic driving could begin, existing patterns had to be removed. Cameron's depatterning sequence combined:

  • Intensive electroconvulsive therapy — not the careful, targeted ECT in use today, but Page-Russell protocol bilateral ECT at high voltages, administered multiple times daily for weeks. Cameron had proposed using ECT to "denazify Germany" by administering it to every German above age twelve; his therapeutic application had similar extremity.2
  • Drug-induced regression — combinations of five or six drugs simultaneously: amphetamines, barbiturates, chlorpromazine, PCP, and LSD. Cameron called these his "talking out capsules." The goal was to reduce patients to a regressed, dependent state — incontinent, unable to care for themselves.
  • Prolonged sleep therapy — patients kept deeply or groggily asleep twenty hours a day for roughly three weeks, woken periodically for feeding and toileting. Staff could administer ECT to twenty patients per hour with this arrangement.3

The combination reliably achieved its stated aim. Patients emerged with "vast chunks of their memories obliterated," the amnesia extending anywhere from six months to ten years back before treatment. Sixty percent of Cameron's follow-up patients showed significant memory impairment.4

Stage 2 — Psychic driving. With the substrate cleared, Cameron played tape loops of therapeutic messages for up to 20 hours per day, 10-15 days, in sensory deprivation conditions with patients semi-sedated or asleep. He had crafted the messages himself — distillations of each patient's "core issue." The sequences alternated negative and positive:

"Do you realize that you are a very hostile person? Do you know you are hostile with the nurses? Do you know that you are hostile with the patients? Why do you think you are so hostile? Did you hate your mother? Did you hate your father?"

Then, days later, switching to:

"It's all right to be myself. I am affectionate and warm-hearted. It is good to be affectionate and warm-hearted. . . . I don't need to drive myself. People like me as I am."5

He modulated the tape technically — adjusting treble and bass, varying volume, introducing echo — to trigger what he described as a Pavlovian orientation reflex, keeping patients from habituating to the message.


Why It Failed

Cameron admitted the failure publicly in his 1963 presidential address to the American Psychopathological Association, though he framed it as a setback on the way to success:

"If this thing worked after thirty repetitions, it was only common sense to see what would happen if the repetition was increased tenfold, a hundredfold or even more. . . . We soon found, however, that it did not work out quite as we had planned it. . . . We found it was possible for the individual to be exposed to the repetition . . . a quarter to one-half million times and yet be unable to repeat these few short sentences. . . . But we have made a beginning."6

A quarter of a million repetitions and the patient couldn't parrot back a few sentences. Cameron called this making a beginning.

His CIA site visitors reached a different conclusion. Harold Wolff visited the Allan, reviewed the charts, and looked at Cameron's "successful" patients lying in their beds staring at the ceiling. "Are these typical of your successes?" Wolff asked. Cameron confirmed they were. Wolff's colleague Monroe observed: "We were distinctively living in two worlds. His and the real one." The CIA concluded that the intervention "didn't seem to really live up to the expectations that we had hoped might come from it."7

Donald Hebb, whose sensory deprivation work Cameron had appropriated, gave the verdict that's hardest to argue with: "Cameron was irresponsible — criminally stupid, in that there was no reason to expect that he would get any results from the experiments. Anyone with any appreciation of the complexity of the human mind would not expect that you could erase an adult mind and then add things back with this stupid psychic driving."8


The Mechanism of Failure: What the Tabula Rasa Gets Wrong

The technical failure of psychic driving reveals a deep mistake in how Cameron conceptualized human minds.

The tabula rasa model assumes that memories and behavioral patterns can be destroyed selectively — that you can wipe the problematic contents while leaving the substrate intact and receptive. This is wrong in two directions.

First, the depatterning process that Cameron used (ECT + drugs + sleep deprivation) didn't selectively destroy problematic patterns. It destroyed the neurological substrate itself. Patients weren't left with cleared, receptive minds ready for new content. They were left with degraded neurological function — unable to form new memories, unable to maintain attention, unable to process the repetitive input Cameron's driving was delivering. The wax tablet metaphor fails because the wax and the impressions aren't separable: the neurological hardware that holds patterns is the same hardware that makes new learning possible. Erase sufficiently and you've destroyed the medium, not just the content.

Second, psychic driving assumes that conditioning operates the same way in conscious, socially embedded minds as in Pavlov's dogs in isolated chambers. It doesn't. Human belief change requires more than repetition — it requires the social and epistemic context that gives messages meaning. A message loop saying "It's all right to be myself" means nothing if the patient has no intact sense of self to apply it to, no relational context in which to evaluate it, no narrative memory in which to integrate it. The depatterning had destroyed the very structures that would have made the driving meaningful.


What the Failure Reveals About Behaviorist Coercion Theory

Cameron wasn't an outlier — he was the clearest test case of a model that all the major coercive persuasion research of the 1950s was implicitly using: the mind as mechanism, beliefs as contents that can be extracted and replaced through sufficient technical intervention.

The drugs chapter's truth-serum research tested the same model pharmacologically and found the same limit. The psychic driving research tested it cognitively and found the same limit. Both failed at the same point: the point at which the intervention that was supposed to clear the substrate destroyed the capacity for the intended outcome.

What Cameron's failure demonstrates is not that minds can't be changed — they clearly can, and the DDD framework documents how. It's that coercive belief implantation fails when it destroys the mind's own generative capacity. You can produce compliance through DDD. You can produce genuine belief change through extended social-environmental coercion (the Korean War reeducation results, while contested, showed real change in some cases). What you can't do is destroy the belief-formation apparatus and then operate it.


The CIA's Investment and Its Return

Cameron's work was funded as MKUltra Subproject 68, approved in January 1957 and supported for three years. The CIA's interest was clear: if you could reliably erase memories and implant new behavioral patterns, you'd have the perfect covert operative — someone who remembered their cover story and nothing else.

What the CIA actually got was a catalog of psychically destroyed patients, a lawsuit settled by the US and Canadian governments for damages, and Hebb's verdict that the entire enterprise was criminally stupid. CIA observers characterized the work as "culpably negligent, professionally unethical, bordering on illegal, repugnant, and totally abhorrent."9

The patients were the actual cost. Women admitted for mild postpartum depression, executives with anxiety disorders, alcoholics — all subjected to the full Cameron protocol because Cameron's eligibility criteria were, in his words, a matter of "singular difficulty" to assess. Sixty percent emerged with memory impairments covering years of their lives.10


Tensions

  • Research norms defense: Cameron's defenders noted that research ethics standards were different in the 1950s — informed consent wasn't yet codified as it is today. This doesn't explain why Cameron specifically excluded patients from the record when they failed to respond to treatment, suppressed negative data, and continued at dramatically escalating doses after the failure was evident.
  • The funding question: Cameron had been doing versions of this work before the CIA funded him, and Dimsdale notes he "probably would have continued doing so in any event." The CIA didn't corrupt a careful scientist — it funded a reckless one who had already lost his moorings.
  • Cameron's intent — preserved ambiguity: The evidence doesn't cleanly establish whether Cameron knew his procedures were harmful and proceeded anyway, or whether he was genuinely deluded about what he was producing. The suppressed negative data and dramatically escalating doses after failure was evident point toward awareness. The grandiose reformer self-conception, the era's absence of codified informed consent, and the sincere "But we have made a beginning" framing point toward possible genuine delusion. Dimsdale holds this ambiguity rather than resolving it — and it matters: the villain reading calls for stronger oversight of individual researchers; the delusion reading calls for structural reform that can catch sincere, credentialed scientists whose self-belief has outrun their evidence. These are different institutional interventions.

Author Tensions & Convergences

Dimsdale frames Cameron as a tragic figure — a reformer with genuine care for patients, corrupted by ambition and the era's primitive research ethics. He emphasizes the institutional context: Hebb, Penfield, and Lehmann were all doing important work nearby; Cameron was the outlier who couldn't keep the clinical impulse and the scientific discipline in balance.

Meerloo's framework, in Why Do They Yield: The Psychodynamics of False Confession, offers a different angle: Cameron was attempting to engineer externally what the Soviet interrogators produced through the psychodynamics of transference, regression, and substitute-father dependency. Meerloo would argue that the Soviet results (genuine belief-change in some cases, documented compliance in nearly all) worked precisely because they operated through the patient's own psychological architecture — the infantile regression, the dependency, the substitute-father bond. Cameron tried to bypass that architecture with technology, and the architecture's absence was exactly why his results were empty.

The combined reading is sharp: what Meerloo's framework predicts is that coercive belief change requires the target's own psychological processes to do some of the work. You can drive those processes into a particular channel; you can't replace them with tape loops.


Cross-Domain Handshakes

Psychology → Identity Disruption Under Coercive Pressure: Cameron's depatterning produced what looked like blank slates but were actually identity-disrupted patients — people whose narrative continuity and self-concept had been shattered. The handshake: the psychology page describes what identity disruption does to the target's relationship to their own beliefs and agency; the Cameron failure page shows what happens when a practitioner attempts to exploit that disruption for belief implantation. The insight neither page produces alone: identity disruption doesn't make a person receptive to new implanted content — it destroys the cognitive architecture that makes any content stick. The space that opens up when identity is disrupted is not an empty receptive field; it's a cognitive void that can't organize new material.

Behavioral-mechanics → DDD Framework: Cameron's protocol was an attempt to engineer DDD conditions deliberately and push them to their maximum intensity. He succeeded at DDD — debility (sleep deprivation and drug-induced regression), dependency (total institutional control), dread (the trauma of the protocol itself). But his error was assuming that maximum DDD + psychic driving would produce durable belief change. What DDD produces is compliance — behavioral change during the period of the conditions. Cameron wanted implantation — new beliefs that persist after the conditions end. The failure shows these are different outcomes requiring different mechanisms.


The Live Edge

The Sharpest Implication

Cameron got a quarter million repetitions and couldn't get patients to parrot back a sentence. This isn't just a clinical failure — it's an important theoretical result. It shows that the mechanism of coercive persuasion that actually works — the mechanism documented across the Korean War cases, the Stockholm cases, the cult cases — operates through the target's own psychological architecture, not around it. Every documented case of real belief change involves the target's own attachment system, their need for approval, their infantile dependency structures, their group identification doing some of the generative work. The behaviorist fantasy of the mind as mechanism — blank it, then rewrite it — failed at every point it was seriously tested. The implication: you can't engineer the substrate out of the process. Effective coercion is always collaborative, in the most disturbing sense of that word.

Generative Questions

  • Cameron produced compliance (patients would repeat messages while in the institutional environment) but not persistence (the messages left no lasting cognitive trace). Is this the fundamental limit of purely technical coercive implantation — that it produces context-dependent compliance but not transferable belief? What would distinguish a protocol designed for compliance from one designed for genuine belief change?
  • Hebb and Cameron were in the same building, using some of the same methods, reaching different research conclusions. Hebb stopped when evidence said stop; Cameron continued past the limits of reason. What institutional and personal factors distinguish the researcher who maintains scientific integrity under institutional pressure from the one who doesn't?

Connected Concepts

  • DDD Framework — Cameron's protocol as a deliberate attempt to engineer maximum DDD conditions; why it produced debility but not lasting belief change
  • Isolation Architecture — Cameron's sensory deprivation protocol; the isolation component of psychic driving
  • MKUltra Institutional Architecture — Cameron as MKUltra Subproject 68; the CIA funding and its context

Footnotes

Domain: Behavioral Mechanics · Status: developing · Sources: 1 · Created: May 2, 2026 · Inbound links: 3