Media-Techno Manipulation: Internet-Era Propaganda Techniques
When Platforms Become Manipulation Infrastructure
The internet didn't create manipulation — it changed the scale, speed, and mechanisms. Pre-digital manipulation worked through control of broadcast media. Digital manipulation works through algorithmic amplification, coordinated inauthenticity, and the intersection of technology and human psychology.
Media-Techno manipulation encompasses nine distinct techniques that are either unique to digital contexts or substantially different in digital form from their pre-digital counterparts.
Nine Internet-Era Manipulation Techniques
Hit-and-Run: Posting and Vanishing
Mechanism: Make a damaging claim publicly, then delete it before serious response can form.
How it works: The claim spreads through screenshots and shares before deletion. By the time the original is gone, the rumor is already distributed. Claiming "it was removed" doesn't undo the spread.
Why it's internet-specific: Pre-internet, removing a statement required finding every copy. Digitally, it just requires deletion from your account, but screenshots persist.
Example: A person posts an inflammatory rumor to social media. It spreads. They delete it, claiming it was misunderstood. But the rumor is already replicated and shared across platforms.
Grandstanding: Performative Outrage for Visibility
Mechanism: Make exaggerated outrage claims to gain visibility and followers, without genuine commitment to the issue.
How it works: Social media rewards engagement. Extreme positions get more engagement than moderate ones. Performative outrage generates attention and followers.
Why it's internet-specific: Pre-internet attention was scarce. Digitally, anyone can broadcast to millions. The incentive structure rewards extremity and performativity.
Example: A public figure makes increasingly extreme claims on social media about a social issue, not because they care deeply but because each escalation generates more engagement and attention.
Showboating: Display of Commitment Without Substance
Mechanism: Make visible symbolic actions that appear to address an issue but lack actual impact.
How it works: Digital platforms reward visibility. A photo with the right hashtag feels like action. The visibility of the action becomes more important than the action's actual impact.
Example: A company posts a statement supporting a social cause, with no actual policy change. The statement gains engagement and creates perception of commitment without substance.
Example: An individual shares a post about an issue without engaging in any material action to address it. The sharing creates perception of involvement.
Flooding: Overwhelming Discourse With Volume
Mechanism: Generate massive volume of posts, comments, or content to make authentic discourse impossible.
How it works: Digital platforms surface popular content (by engagement). Coordinated flooding makes algorithmic ranking favor the flood. Authentic discourse becomes buried.
Why it's internet-specific: Pre-internet, flooding a conversation required physical presence. Digitally, one person or small group can generate thousands of posts through coordination or bots.
Example: A political figure wants to bury a negative news story. Coordinated supporters flood social media with alternative content. The algorithm amplifies the flood, pushing the negative story down.
Example: A coordinated group floods a discussion thread with repetitive posts, making authentic conversation impossible.
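The burying effect described above can be sketched as a toy simulation. The ranking function, post counts, and engagement numbers here are all invented for illustration; no real platform ranks this crudely, but the dynamic is the same: a pure engagement sort has no notion of authenticity, so coordinated volume displaces the organic story.

```python
# Toy model of an engagement-ranked feed being flooded.
# Illustrative sketch only -- not any real platform's algorithm.

def rank_feed(posts):
    """Sort posts by raw engagement count, highest first."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

# One authentic story with organic engagement.
feed = [{"text": "negative news story", "engagement": 500}]

# A small coordinated group generates many posts and cross-engages them all,
# pushing each slightly above the organic story.
flood = [{"text": f"alternative content #{i}", "engagement": 600}
         for i in range(50)]
feed.extend(flood)

ranked = rank_feed(feed)
top_ten = [p["text"] for p in ranked[:10]]
print(top_ten)  # the authentic story no longer appears near the top
```

Nothing in the ranking step distinguishes fifty coordinated posts from fifty independent ones — which is the structural gap flooding exploits.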
Stacking the Deck: Algorithmic Curation of False Consensus
Mechanism: Exploit knowledge of how ranking algorithms work to ensure that certain voices are amplified while others are suppressed, creating a false impression of consensus.
How it works: Algorithms amplify popular content. By understanding algorithmic rules, manipulators create synthetic engagement (bot likes, paid promotion) to make content appear popular, which causes the algorithm to amplify it further.
Why it's internet-specific: Pre-internet media gatekeeping was explicit. Digital gatekeeping is algorithmic and opaque. Manipulators can "game" algorithms without explicit censorship.
Example: A political campaign uses bot networks to create synthetic engagement on candidate posts, making them appear algorithmically popular, so the algorithm amplifies them to more users.
Example: A corporate narrative is promoted through paid amplification, making it appear more popular than it actually is, which causes algorithms to treat it as trending.
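The feedback loop at the core of both examples — synthetic engagement triggers amplification, amplification generates real engagement, which triggers further amplification — can be sketched numerically. The rates and the amplification rule below are invented for the sketch; the point is only that a purchased head start compounds.

```python
# Toy feedback loop between engagement and algorithmic reach.
# The parameters are made up for illustration.

def amplification_rounds(seed_engagement, rounds, organic_rate=0.02,
                         reach_per_engagement=10):
    """Each round, the algorithm shows the post to an audience proportional
    to its current engagement; a small fraction of that audience engages."""
    engagement = seed_engagement
    for _ in range(rounds):
        reach = engagement * reach_per_engagement
        engagement += int(reach * organic_rate)
    return engagement

# An honest post starts with 10 real likes; a gamed post buys 1,000 bot likes.
honest = amplification_rounds(seed_engagement=10, rounds=5)
gamed = amplification_rounds(seed_engagement=1000, rounds=5)
print(honest, gamed)  # the synthetic head start compounds every round
```

Because reach is a multiple of engagement, the gap between the two posts grows each round rather than staying fixed — the algorithm converts a one-time purchase of fake engagement into a durable visibility advantage.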
Punditry: Manufactured Expert Authority
Mechanism: Create the appearance of expert analysis through rapid response commentary and confident assertion.
How it works: Digital media rewards speed. The first analysis that hits social media becomes the framing for the discussion. If it's confident and credible-sounding, it shapes how others understand an issue.
Why it's internet-specific: Pre-internet expert commentary required editorial gatekeeping and publication delays. Digitally, anyone can broadcast immediate "expert" analysis.
Example: A breaking news event occurs. Within minutes, accounts with large followings post confident analysis. Most people see this analysis, not the primary information. The manufactured expert framing becomes the story.
Pseudoscience: Convincing-Looking Falsehood
Mechanism: Present false information in the style and format of scientific claims (graphs, jargon, methodology language) to make falsehood look credible.
How it works: Most people can't evaluate scientific claims directly. They use heuristics: does it look like science? Does it use scientific language? Pseudoscience exploits these heuristics.
Why it's internet-specific: Pre-internet pseudoscience had to pass editorial gatekeeping to reach audiences. Digitally, it can be published directly and spread through social sharing.
Example: A health claim presented as research (with graphs and methodology description) gains credibility through format even though it's false. People share it as "scientific evidence."
Google Effect: The Authority of Findability
Mechanism: Create the impression that something is true or important because it's easy to find (ranks high in search results).
How it works: Search engines rank by popularity and engagement. Coordinated campaigns can make low-quality content rank highly. Users assume high-ranking results are credible.
Why it's internet-specific: Pre-internet, finding information required librarian expertise. Digitally, search ranking becomes a proxy for credibility.
Example: A coordinated campaign creates many links to a false claim. Search algorithms treat the link abundance as a credibility signal. The false claim ranks highly. Users assume it's credible because it's easy to find.
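A minimal sketch of the mechanism, using inbound-link count as a simplified stand-in for link-based ranking signals (real search ranking weighs many more factors, including the quality of the linking sites). All the site names are hypothetical.

```python
# Toy search ranker scoring pages by inbound-link count.
# A deliberately simplified stand-in for link-based ranking signals.

from collections import Counter

def rank_by_inlinks(link_graph):
    """link_graph maps each source page to the pages it links to.
    Returns pages sorted by inbound-link count, highest first."""
    inlinks = Counter(target for targets in link_graph.values()
                      for target in targets)
    return [page for page, _ in inlinks.most_common()]

# A careful article earns a few organic links; a coordinated campaign
# creates dozens of sites that all link to a false claim.
graph = {"blog-a": ["careful-article"], "blog-b": ["careful-article"]}
graph.update({f"sock-puppet-{i}": ["false-claim"] for i in range(30)})

results = rank_by_inlinks(graph)
print(results[0])  # the false claim outranks the organically linked article
```

A ranker that counts links without assessing who is linking cannot tell thirty sock puppets from thirty independent endorsements — the same authenticity blind spot the feed example shows, relocated to search.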
Spam and Troll Blocking: Drowning Signal in Noise
Mechanism: Flood a platform or discussion space with bot posts and troll commentary to make authentic communication impossible.
How it works: This overlaps with flooding, but adds a further dimension: the inauthentic posts make the space feel toxic, driving away authentic participants.
Why it's internet-specific: Pre-internet harassment required physical presence or phone/mail campaigns. Digitally, coordinated bots can harass at scale.
Example: A social media account becomes flooded with harassing bot posts. Authentic people stop engaging. The space becomes dominated by bots and trolls.
Why Media-Techno Manipulation Is Distinctive
These techniques work because:
Algorithmic amplification — Social media algorithms don't distinguish authentic from inauthentic engagement. Manipulation can exploit algorithmic rules directly.
Scale and speed — One person can reach millions instantly. Campaigns can be coordinated globally in minutes.
Permanence and replication — Content spreads through screenshots and reposts. Deletion doesn't undo distribution.
Opaque gatekeeping — Users don't understand how algorithms work, so they can't recognize when they're being manipulated algorithmically.
Intersection of format and psychology — Digital formats (short posts, images, videos) activate heuristics and biases more than longer-form content.
Cross-Domain Handshakes
Propaganda-Techniques: Propaganda Techniques and Narrative Control — These are internet adaptations of classical propaganda; they use similar psychological mechanisms but deploy them through digital platforms.
Information-Overload: Information Overload as Cognitive Attack — Media-Techno manipulation often works through information flooding, taking information overload to a new scale.
Cognitive-Biases: Cognitive Biases and Decision Vulnerability — Media-Techno manipulation exploits digital-era biases (algorithm reliance, authority of search ranking).
The Live Edge
The Sharpest Implication
Media-Techno manipulation is designed to exploit the gap between human psychology and algorithmic systems. Humans evolved to detect manipulation between individuals, not to recognize when they're being manipulated by coordinated systems and algorithms. This means individual awareness is an insufficient defense. You cannot individually recognize when you're seeing algorithmically amplified inauthentic content. Defense requires structural change (algorithmic transparency, regulation, platform redesign) rather than individual digital literacy alone.
Generative Questions
Can platforms be redesigned to reduce media-techno manipulation without destroying legitimate uses of those platforms? What would manipulation-resistant social media look like?
Is the problem the platforms themselves or the human psychology they exploit? Could the same platforms be used for good with different design choices?
How do you build immunity to manipulation techniques that are partially invisible (algorithmic)? Can you defend against what you can't see?
Connected Concepts
- Propaganda Techniques — Classical techniques adapted for digital
- Information Overload — Often delivered through media-techno channels
- The Three Levels of Manipulation — Operates at all three levels
Open Questions
- What's the relationship between bot manipulation and human psychology? Do bots merely amplify existing human vulnerabilities or create new ones?
- Are younger people more or less resistant to media-techno manipulation?
- Can AI systems detect media-techno manipulation better than humans?