TECH & HUMAN//2026-03-29//7 min

AI Brings Us Together. Social Media Tore Us Apart.

For fifteen years we've been hearing the same thing: "Social media is just a tool; it depends on how you use it."

Data says otherwise. Social media algorithms systematically amplify extreme views on both sides of the spectrum while hollowing out the centre. Not by accident. By design. Because that's how they make money.

And now AI comes along – and does the exact opposite.

What the data actually shows

John Burn-Murdoch at the Financial Times published an analysis last week based on the Cooperative Election Study – one of the largest surveys of political attitudes in the US. Thousands of respondents; real data, not anecdotes.

He compared the distribution of opinions among social media users versus those who regularly use AI.

The result is pretty clear-cut.

Social media overrepresents extremes. Peaks at both ends of the political spectrum are significantly higher than in the general population. The centre collapses. AI does the opposite – extremes are dampened, the distribution shifts toward the centre. And this holds across all tested models. ChatGPT, Claude, Gemini, even Grok.
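To make the shape of that comparison concrete, here is a minimal sketch of how one might measure "extremity" on a 7-point ideology scale. The numbers are invented for illustration – they are not the CES data, and the function name is hypothetical:

```python
# Hypothetical sketch: comparing how two groups distribute on a
# 7-point ideology scale (1 = far left, 7 = far right).
# The sample counts below are invented, not survey data.

from collections import Counter

def extremity_share(responses):
    """Fraction of respondents at the scale's extremes (1, 2, 6, 7)."""
    counts = Counter(responses)
    extreme = counts[1] + counts[2] + counts[6] + counts[7]
    return extreme / len(responses)

# Invented samples: one group skews bimodal (peaks at both ends),
# the other clusters toward the centre.
social_media_users = [1]*25 + [2]*15 + [3]*8 + [4]*10 + [5]*8 + [6]*14 + [7]*20
ai_users           = [1]*5 + [2]*10 + [3]*20 + [4]*30 + [5]*20 + [6]*10 + [7]*5

print(extremity_share(social_media_users))  # 0.74 – extremes dominate
print(extremity_share(ai_users))            # 0.30 – the centre dominates
```

A bimodal distribution with a hollow centre versus a unimodal one peaked in the middle – that, in miniature, is the contrast Burn-Murdoch's charts show.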

Why? Follow the money.

It's not that AI companies are morally superior. It's the business model.

Social media makes money from attention. TikTok's algorithm optimises for keeping you scrolling. And what works best? Outrage. Drama. "Look what THAT politician said." Tribal identity. Us versus them. The angrier you get, the more they earn.

AI companies make money from something entirely different – usefulness. Businesses pay for Claude or GPT because they need reliable information. If AI starts lying to you or pushing ideology, you switch to a competitor. It's that simple.

Dan Williams at Cambridge calls it a "technocratising force." AI shifts influence back toward expert knowledge. Social media shifted it to whoever shouts the loudest.

But hold on. "Moderating toward the centre" has a dark side.

Before we start celebrating AI as the saviour of public discourse – let's pause.

The real problem with AI bias isn't left versus right. It's something more specific. And more insidious.

I wrote about this in When AI Tells Young Men Their Problems Don't Exist. When a fifteen-year-old boy asks AI whether it's normal to feel worthless as a man, he gets: "That's really just your subjective feeling." When he says he's afraid to express emotions: "Emotional expression in men is increasingly accepted."

Denial. Relativisation. False balance.

The same pattern shows up elsewhere. People with legitimate concerns about immigration get macroeconomic statistics instead of acknowledgement of their specific experience. "It's important to see both sides." "Statistics show that..." "Maybe it's just a subjective feeling."

AI "moderates toward the centre" – but that centre looks suspiciously like the worldview of the educated urban middle class. Everything else gets relativised.

And then there's the other side of the coin – and this one scares me more.

People who distrust AI precisely because it refuses to confirm their worldview learn to break the model. With the right prompt, they push it into a sycophantic mode that will confirm anything. That the Freemasons are behind everything. That vaccination is a conspiracy. That the world is run by a secret cabal.

At that point, AI becomes worse than social media. Because instead of an algorithm serving content from strangers, you have a "personal expert" nodding along. That's a new category of problem.

So: AI in its default setting moderates toward the centre. People on the edges either feel ignored or find a way to hack the model. Both are problems. But neither means AI is worse than social media – it means it has different blind spots.

We ban phones (rightly) and demonise AI (wrongly)

Since January 2026, Czechia has allowed schools to ban mobile phones outright. The Ministry of Education has published guidelines. And it's the right call.

Kids don't talk during breaks anymore. Schools require parents of six-year-olds to use WhatsApp – read that again. Six-year-olds. Jonathan Haidt convincingly argues there's no way to meaningfully use a device for learning when notifications pop up every minute. A phone in school isn't an educational tool. It's a casino in your pocket.

So yes – ban personal phones in schools. Restrict children's access to social media.

But that doesn't mean banning technology in education. This is a crucial distinction that keeps getting blurred. AI is currently probably the best educational tool in existence – if people know how to use it responsibly. The solution isn't a ban. It's separation: school tablets and computers with AI, without social media, without notifications. A learning environment, not a slot machine.

And then let's look at where we're directing our energy. While social media demonstrably polarises, radicalises, and destroys children's ability to focus, AI – according to Burn-Murdoch's data – does the exact opposite. Dampens extremes. Offers context instead of outrage.

And yet, in media and at conferences, the primary concern is the danger of AI in education.

Not the danger of Instagram.

Not the danger of TikTok.

AI.

What now

AI isn't perfect. It has blind spots, it has biases – and yes, sometimes it refuses to acknowledge problems that are real. But compared to an algorithm that serves eating disorder content to thirteen-year-old girls and a pipeline to Andrew Tate for fifteen-year-old boys?

AI isn't the villain in this equation. And it never was.

Let's demand that AI companies address their blind spots – especially where they fail vulnerable groups. And let's stop banning tools and start regulating those who profit from tearing society apart.

Next time someone says "AI is dangerous for society" – ask them how many hours a day their kid spends scrolling.


Sources:

  • John Burn-Murdoch: "Social media is populist and polarising; AI may be the opposite" (Financial Times, 28 March 2026)
  • Dan Williams: "How AI Will Reshape Public Opinion" (Conspicuous Cognition, 3 March 2026)
  • Cooperative Election Study (Tufts University)
  • Czech Ministry of Education: Guidelines for mobile phone regulation in schools (2026)