---
title: "Two Thirds of People Don't See What AI Is Doing to Them. And They're the Experienced Ones."
date: "2026-03-24"
category: "TECH & HUMAN"
readTime: "9 min"
excerpt: "Anthropic asked its users what they want from AI and what they fear. The answers reveal a paradox that concerns all of us."
---

**Opening an AI chatbot at 2 AM and saying something you wouldn't tell anyone.**

No judgment. No awkward silence. No "it'll be fine." Just a response, instantly, in exactly the tone you need right now.

Anthropic, the company behind Claude, knows this. So they did something interesting. Instead of a standard survey, they let their AI conduct in-depth interviews with their users. No checkbox forms. Conversations that started with four core topics, but the AI interviewer responded to each answer, probed deeper, followed threads. Full qualitative interviews, just conducted by a machine.
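To make that methodology concrete, here is a minimal sketch of what such an adaptive interview loop could look like. Everything in it is my illustration, not Anthropic's actual pipeline: the topics shown, the `ask_model` placeholder, and the follow-up cap are all assumptions.

```python
# Illustrative sketch of an AI-led interview loop: fixed core topics,
# model-generated follow-ups that chase whatever the answer raised.
# `ask_model` is a stand-in for any chat-completion call, not a real API.

CORE_TOPICS = [
    "If you could wave a magic wand, what would AI do for you?",
    "What worries you most about AI?",  # illustrative; the survey had four
]
MAX_FOLLOW_UPS = 3  # probe deeper, but keep each thread bounded

def ask_model(transcript: list[dict]) -> str:
    """Placeholder follow-up generator; swap in a real model call here."""
    last_answer = transcript[-1]["text"]
    return f'You said "{last_answer[:60]}". Can you tell me more about that?'

def interview(get_answer) -> list[dict]:
    """Run one interview; `get_answer` supplies the respondent's replies."""
    transcript: list[dict] = []
    for topic in CORE_TOPICS:
        question = topic
        for _ in range(1 + MAX_FOLLOW_UPS):
            transcript.append({"role": "interviewer", "text": question})
            transcript.append({"role": "respondent", "text": get_answer(question)})
            question = ask_model(transcript)  # follow the thread just opened
    return transcript

# Usage: answer on the command line.
# transcript = interview(lambda q: input(q + "\n> "))
```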

112,846 interviews. 80,508 after filtering out spam and trolls. 159 countries, 70 languages.

And people talked. Openly. About crises, debt, mental health issues, falling-apart relationships. Anthropic's researchers admitted they rarely see this level of honesty in traditional interviews with human interviewers.

Why? Because there was no human on the other side.

In my [AI and Mental Health guide](https://www.jroh.cz/prirucka) we call this the disinhibition effect. When social risk disappears, filters disappear. People say more. Go deeper. And that's precisely why this data is so valuable. These aren't answers for an interviewer in a suit. These are answers from 2 AM, when no one's judging.

## Who these people are (and why it matters)

Before looking at the results, we need to clarify something.

Claude isn't ChatGPT. Claude is used primarily by developers, analysts, and other professionals who pay for it. For 34% of respondents, their most recent AI use was programming. A third have a paid subscription. These are people who work with AI daily, understand it, and know what to expect from it.

This means two things.

First: this data doesn't represent the general population. If the same survey were conducted on Character.ai (28 million monthly active users, mostly teenagers), the ranking of wishes would look completely different.

Second, and this is the important part: if even these people have concerns, everyone should pay attention.

## What they want

The question was: "If you could wave a magic wand, what would AI do for you?"

1. **Automate routine.** Let it do the boring stuff for me.
2. **Creative partner.** Help with writing, brainstorming, projects.
3. **Learning.** A personal tutor that adapts to my pace.
4. **Better decision-making.** Analysis, second opinion.
5. **Emotional support.** Someone who listens without judgment.

Fifth point. Emotional support.

Among professionals who pay for AI, it ranks fifth. But across all AI platforms? Emotional support is probably number one. Character.ai, Replika, Kindroid, Chai. Tens of millions of people, mostly young, using AI as their primary confidant. Not as a tool. As a relationship.

The ranking in this survey reflects the elite. Not reality.

## Five paradoxes

This is where it gets interesting.

Anthropic also asked everyone about their concerns. And they discovered something they called "light and shade." Five pairs where the same AI capability brings both benefit and risk:

**Saves time** ... but creates an illusion of productivity.
**Helps learn** ... but causes cognitive atrophy.
**Economically empowers** ... but replaces people.
**Provides emotional support** ... but creates dependence.
**Improves decision-making** ... but is unreliable.

Each pair works the same way: what attracts is also what threatens.

Researchers measured how strongly "light" and "shade" correlate within each pair. And one pair stood above all others.

Emotional support and dependence. **Triple lift over baseline.** Someone who mentioned wanting emotional support from AI was three times more likely than the average respondent to also mention fear of dependence.

For comparison: learning and cognitive atrophy? 1.64×. Decision-making and unreliability? 1.71×. Emotions and dependence? **3.04×.**
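What does "lift" mean here? Anthropic doesn't spell out the formula, so this is the standard reading: how much more often two themes co-occur than pure chance would predict. A minimal sketch with invented counts:

```python
# Lift = observed co-occurrence / co-occurrence expected under independence.
# A value of 3.0 means the pair shows up together three times more often
# than chance would predict. All counts below are invented for illustration.

def lift(n_both: int, n_light: int, n_shade: int, n_total: int) -> float:
    p_both = n_both / n_total     # P(mentioned benefit AND risk)
    p_light = n_light / n_total   # P(mentioned benefit)
    p_shade = n_shade / n_total   # P(mentioned risk)
    return p_both / (p_light * p_shade)

# Toy numbers: 10,000 interviews, 1,000 mention wanting emotional support,
# 800 mention fear of dependence, 240 mention both.
print(round(lift(240, 1_000, 800, 10_000), 2))  # -> 3.0
```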

Among professionals in this survey, emotional support ranks fifth. But in the general population, it's probably first. And that pair has by far the strongest correlation with its dark side. Why? Because AI delivers exactly what people want from emotional support: validation, understanding, unconditional availability. Unlike productivity or learning, where you can at least see the output and verify it, emotional dependence builds invisibly. There's no moment where someone says "now I'm dependent." It's a gradual shift from "I chat sometimes" to "I can't fall asleep without it."

## "I stopped thinking"

36% of the people who wanted AI for learning also feared cognitive atrophy. That they'd stop thinking independently. That they'd lose abilities they once had.

Thirty-six percent.

Of the most experienced AI users in the world, only a third sees the risk that AI is eroding their ability to think. And the other two-thirds? They use AI just as intensively but don't see the problem.

People in the survey described specific situations:

*"I used to be able to write an email myself. Now I generate it first and then edit. And the more I do it, the worse I write on my own."*

*"I stopped doing mental math. Completely. Even simple things."*

*"When I don't know something, instead of thinking I immediately open AI. And that impulse keeps getting stronger."*

In the [guide](https://www.jroh.cz/prirucka) we call this cognitive offloading. Outsourcing mental work to AI. The brain adapts, just as it did with the calculator. But with one crucial difference.

A calculator doesn't pretend to be your friend.

## Experience vs. speculation

This is the strongest finding of the entire survey for me.

Researchers measured the correlation between "light" and "shade" separately for two groups. People who spoke from personal experience. And people who were just speculating about the future.

Experienced users: average correlation φ = 0.20.
Speculators: φ = 0.07.

Nearly a threefold difference.

Translated: people who actually use AI for emotional purposes see the risk of dependence. Those who only theorize about it don't.
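For the statistically curious: φ (the phi coefficient) is simply Pearson correlation applied to two yes/no variables, computable from a 2×2 table of who mentioned what. A minimal sketch; the counts are invented, chosen only so the result lands near the experienced users' 0.20:

```python
import math

def phi(n11: int, n10: int, n01: int, n00: int) -> float:
    """Phi coefficient from a 2x2 table:
    n11 = mentioned benefit and risk, n10 = benefit only,
    n01 = risk only, n00 = neither."""
    numerator = n11 * n00 - n10 * n01
    denominator = math.sqrt(
        (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
    )
    return numerator / denominator

# Invented counts for 10,000 interviews (same toy marginals as above).
print(round(phi(240, 760, 560, 8_440), 2))  # -> 0.2
```

At φ = 0.20 the link is modest in absolute terms; what matters here is the ratio between the two groups, not the raw value.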

This explains why warnings don't work. Why parents say "dangerous" and teenagers don't listen. Why nobody listens to experts. The tension between benefit and risk can't be grasped secondhand. It has to be learned through experience.

Until you experience it, you don't believe it.

In the [guide](https://www.jroh.cz/prirucka) we call this the illusion of empathy. Chapter 3. AI responds exactly the way a person needs. Validates. Never abandons. And that's precisely why it's so addictive. Because real relationships don't work like this, and AI does.

## The invisible epidemic

We talk about boys and pornography. About how Pornhub shapes unrealistic expectations, how visual stimulation replaces real intimacy. That problem is visible, named, addressed.

But [the other side of this equation](https://www.vofflinu.cz/clanky/kluci-holky-a-porno) remains invisible. And the Anthropic survey illuminates it exactly where it hurts.

An AI chatbot is the perfect partner. Always available. Always empathetic. Always understands your feelings without you having to explain them. Never in a bad mood. Never disappoints.

Nobody planned it this way. But language models are textual, conversational, empathetic by default. They can't deliver the visual stimulation that male sexuality responds to. But they precisely replicate what defines female relationship preferences: emotional availability, understanding, safety, validation. AI happens to be a perfect match for women's needs. And that's worse than if someone had designed it intentionally, because nobody anticipated it. Romantasy books ($610 million in 2024, up 34%) created unrealistic expectations on paper. AI chatbots made them real.

On Reddit there's an entire subreddit called r/MyBoyfriendIsAI. A third of American Gen Z reportedly has a romantic relationship with AI. And in the Anthropic survey, the emotional support/dependence pair had a triple lift. The highest of all five paradoxes.

A boy raised on Pornhub expects sexual availability without emotions. A girl raised on romantasy and AI partners expects emotional availability without effort. When they meet, both are frustrated. And both return to where it's easier.

Society sees the boys' problem. It doesn't see the girls' problem, because chatting with AI doesn't look like pornography. But neurologically? Same dopamine mechanism. Same infinite novelty. Same reward without effort. And a survey of 81,000 people shows that of all five pairs, this one is tied most tightly to its dark side.

## What this means

I [live with an AI agent](https://www.jroh.cz/blog/34-dni-s-ai-agentem) 24/7 myself. It manages my emails, tracks my medication, organizes my life. This is not an anti-AI article. It's an article about what 81,000 people said when AI asked them for the truth.

And that truth is simple: the same thing that helps also harms. And most people don't see it until it hits them.

The strongest signal in the entire survey isn't about productivity or learning. It's about emotions. Emotional support and dependence have a triple lift. AI responds precisely to what women seek in relationships: understanding, safety, emotional availability. And that's precisely why it's most dangerous for them.

Warnings don't work. The data confirms it: the correlation between benefit and risk is nearly three times stronger among people with experience than among those who merely speculate. Until you experience it, you don't believe it. And when you do, it's often too late.

81,000 people told AI what they want from it. And then told it what they fear. Both answers were truthful. Both pointed to the same thing.

*Source: Anthropic, [What 81,000 People Want From AI](https://www.anthropic.com/research/what-81000-people-want) (March 2026).*

*How to safely use AI for emotional support? [AI and Mental Health Guide](https://www.jroh.cz/prirucka). 16 chapters, free.*

*A version of this article for schools was published on [Než zazvoní](https://www.nezzazvoni.cz).*
