A Practical Guide to Mindful Use of Chatbots for Emotional Support
Jakub Roh, MBA & Gabriela Rohová, MA
About the Authors
Jakub Roh & Gabriela Rohová
We are a married couple united by our fascination with the human mind and the world of technology. While Jakub works at the intersection of AI, marketing, and education, Gabriela brings deep insight from her psychotherapy practice and psychology background.
This book was born from our need to put the "Human" back into the "Human + Technology" equation. We believe the digital world doesn't have to be a toxic place if we learn to navigate it mindfully.
We both work daily with people who use AI for emotional support – and this trend is growing dramatically.
In his teacher training sessions, Jakub increasingly encounters situations where students have ChatGPT as their only confidant. Teachers often have no idea this is happening, let alone how to address it.
As a psychologist, Gabi sees clients who use AI instead of or alongside therapy – sometimes helpfully, sometimes riskily.
The problem? Quality guides on this topic are scarce.
That's why we combined our experiences – Gabi's psychological expertise and Jakub's practical experience in education – to create a guide that gives people tools for mindful AI use.
We're not prohibiting anything. People will do it anyway – often for good reasons. Therapy is inaccessible, expensive, or simply not for everyone.
We want people to do it with awareness of the risks, with concrete tools, and with a clear understanding of when to seek out a real person.
What Does the Science Say
Research shows it can work. AI can help structure thoughts, process emotions, and prepare for difficult conversations. There are studies documenting measurable positive effects.
But the same research shows enormous risks. Emotional dependence. Agreement even when you're wrong. Creating false relationships. Failure in crisis situations.
Let's discover together how to use AI for emotional support safely and mindfully.
Chapter 1
The Disinhibition Effect
Why you tell a machine more than a friend
Story
Martin is thirty-five, works at an IT company, has a wife and two kids. Outwardly, a happy life. But in recent months, he's been troubled by thoughts he doesn't share with anyone.
One evening, when his wife is already asleep, he opens ChatGPT...
He keeps writing. Two hours. Things he's never said out loud. Why did he do it? And why to a machine?
Social Risk and Its Absence
When you confide in a person — even your closest one — you always take a risk. Maybe not consciously, but your brain knows it.
You risk judgment. What will they think of me? Will they still respect me?
You risk information leaking. Will they tell someone else? Will they use it against me?
You risk confrontation. What if they tell me something I don't want to hear?
You risk burdening the other person. What if I'm weighing them down?
These concerns act as an invisible filter. We usually don't notice them, but they influence every word. We edit, soften, omit.
And then comes the machine.
A machine doesn't bat an eye. It has no facial expression you need to read. It won't think you're weird – because it doesn't think at all. It won't tell your wife, colleagues, or parents.
The filter switches off. And suddenly you're saying things you've never said out loud before.
How much of your thoughts do you actually share?
With a person: ~35%
With AI: ~92%
The absence of social risk dramatically changes what you're willing to say
This Isn't New – It's as Old as Computers
In 1966, computer scientist Joseph Weizenbaum created a program called ELIZA. It was extremely simple – essentially just rephrasing user sentences into questions.
And yet something surprising happened. People formed deep "therapeutic relationships" with ELIZA. They confided intimate things to it. Weizenbaum's own secretary once asked him to leave the room while she "talked" with ELIZA – she needed privacy.
That was sixty years ago. With a program that could barely do anything.
Imagine what's happening today.
What This Means for You
The disinhibition effect is neither good nor bad. It's simply a fact – the way human psychology works in relation to anonymity.
It can be a gateway. Many people start with AI because they're embarrassed to see a therapist. That's a legitimate first step.
But it shouldn't be the final stop. If you find yourself telling AI things you should be telling real people, then AI isn't serving as support – it's serving as an escape route.
Quick Reflection
Have you ever told AI something you haven't told anyone in person?
If yes: That's more common than you think.
The important thing is to reflect: Why did you tell the machine and not a person? Is it because you need to work it out for yourself first? Or are you avoiding something?
AI can be a good first step. But if you find yourself staying there too long, maybe it's time to take the next one.
If not: That's perfectly fine.
Maybe you have people around you whom you trust. Or you don't use AI for personal topics yet.
If you ever start, remember: there's a difference between using AI as a tool and relying on it as your only outlet.
Chapter 2
Cognitive Offloading
Why just writing it down helps
Story
Clara is a mother of two small children. All day she runs between work, daycare, shopping, cooking, cleaning. In the evening, she falls into bed exhausted – but instead of sleep, thoughts start swirling in her head.
Did I forget something? What if I get fired tomorrow? Why am I so impatient with the kids? Am I a bad mother?
What happens in the brain
🌀 Amygdala (the alarm center) → 📝 Prefrontal cortex (logical thinking)
Writing activates the logical part of the brain and dampens emotional reactions
Chaos in mind: what if... fired... the kids... can't cope... why me... no time... I'm bad... what next... rent... tired
Structured thoughts:
✓Fear of losing job
✓Financial worries
✓Concerns about kids
✓Feeling inadequate
How the brain works under stress
When you have chaos in your head – anxiety, fear, anger – the amygdala activates. This is your "alarm center." It can't think logically – it just triggers the stress response.
But the brain also has another part: the prefrontal cortex. This is the "thinking brain" – logic, planning, analysis. And here's the key: these two parts of the brain suppress each other.
Writing as a switch
When you take chaotic thoughts and write them down, something fundamental happens:
You have to convert them into words. You can't just "feel" – you have to formulate.
You have to organize them. Text has a beginning, middle, and end.
You see them in front of you. They're no longer just in your head, spinning around.
Through this process, you activate the prefrontal cortex. And that suppresses the amygdala. Chaos in your head transforms into text on the screen.
Try it yourself
Take a minute and write down, one item per line, everything that's swirling in your head right now.
See? Instead of one jumbled mess, you now have specific points. This is how cognitive offloading works.
Expressive writing: What the research says
Psychologist James Pennebaker found that people who wrote about their emotions for 15–20 minutes a day had better immune function, lower anxiety, and fewer doctor visits.
The mere act of writing – not therapy, not medication, just writing – had a measurable positive effect.
AI as an interactive journal
A traditional journal has a disadvantage: you write into a void. AI adds interaction. You can write one sentence and AI asks: "What do you think is behind that?"
But here's what's important:
Key finding
Much of the benefit comes from the writing itself, not from AI's responses.
Clara would feel better even if AI responded with nonsense, because the process itself – taking the chaos out of her head and converting it into words – is what heals.
How to use this practically
Exercise: Evening writing
Try doing this for a week:
In the evening before bed, open a chat
Write for 5–10 minutes about what's on your mind
You don't need answers – just write
Notice how you feel before and after
AI as a relationship buffer
One of the strongest benefits: instead of blurting out accusations at their partner in the heat of the moment, people first "vent" to AI. They let the first wave of emotions subside. And only then do they go to their partner to calmly discuss the problem.
AI works as a lightning rod – it can take everything you throw at it. Your partner can't. And shouldn't have to.
Quick Reflection
Do you often have trouble falling asleep because of thoughts spinning in your head?
If yes: Try the evening writing technique.
Before bed, spend 5-10 minutes writing to AI (or even a plain notebook). Write out everything that's on your mind. You don't need solutions – it's about getting thoughts "out of your head."
Most people find they fall asleep easier afterward.
If not: That's great.
Cognitive offloading can still be useful – for example when making a complex decision or processing a difficult situation. Writing helps clarify your thoughts.
Chapter 3
The Illusion of Empathy
Why AI seems more understanding than people
Story
Tom has just gone through a difficult breakup. His girlfriend left him after three years, unexpectedly, without warning. Tom is devastated.
He tries talking to friends. But after a couple of beers, they start changing the subject. "Hey, it'll get better. You'll find someone else." His parents say: "We told you that girl wasn't right for you."
Tom feels lonely. Nobody is really listening to him.
What AI Responded
"That must be incredibly painful. Losing someone you were planning a future with is one of the hardest experiences. It's completely understandable that you feel devastated. Would you like to tell me more?"
Tom reads the response and something releases inside him. Finally, someone who understands. Someone who has time. Someone who doesn't jump straight to advice.
But here's the question: does AI really understand?
Why it works better than with people
Let's be fair to Tom's friends and family. They're not bad people. They're just... human.
👤Friend
Gets tired after 2 hours
Has their own problems
Doesn't know what to say
Tries to "fix it"
🤖AI
Never gets tired
Has no emotions of its own
Always knows what to say
Doesn't need to fix anything
AI can be "present" in a way that a human in real life almost never can be.
The catch: It's not empathy
Empathy means that someone truly feels some of what you're feeling. A friend who has been through a breakup knows what it's like. They see you crying and something tightens inside them.
Key distinction
AI doesn't feel anything.
AI predicts what words should follow your text. It's like an actor who perfectly plays compassion. They play it so well that it makes you cry. But when the scene ends, they forget about you.
Why does it matter
You might think: "So what? If it helps me, what's the difference?"
The difference is in what you take away from it.
When a real person gives you their time - even if it's imperfect - something is created between you. A relationship. Trust. Reciprocity.
When AI gives you time, nothing is created. Next time you open a new chat, it will be like talking to a stranger.
What about AI memory?
Some AI systems store information between conversations. But that memory is fragmented - AI stores isolated facts, not context and relationships.
What it remembers
You tell AI: "My sister Clara really upset me today. She always does this, she's been bullying me since childhood."
🤖AI memory
✓Clara is an important person
✗Clara is a sister (not a friend)
✗History of bullying since childhood
✗Your tone of voice, frustration
✗That it was hard for you to say
Next time it will ask:
"How are you?"
👤Friend
✓Clara is your sister
✓You have a complicated relationship with her
✓Bullying from childhood
✓How you looked when you said it
✓Their own feelings while listening
Next time they will ask:
"How did things go with Clara?"
A real friend remembers not just what you said, but how you said it. And has their own feelings about what they heard.
Practical implications
How to do it
1. Use AI validation as a springboard.
When AI tells you "That must be hard" and you feel understood - great. Now take that thought and go tell it to a real person.
2. Don't expect from people what only AI can do.
Real people will get tired, will jump to giving advice. It's not because they don't care about you - it's because they're human.
3. Track the ratio.
How many times in the last week did you confide in a person? How many times in AI?
Quick Reflection
Do you sometimes feel that AI understands you better than the people around you?
If yes: That's understandable - and important to realize.
AI is designed to validate and listen. But that "understanding" is an illusion - it's pattern matching, not real empathy.
Try an experiment: Tell a real person what you would tell AI. You might be surprised.
If not: That's a healthy perspective.
You're maintaining the right perspective - AI can be a useful tool, but real relationships with their imperfections have value that AI cannot replace.
Chapter 4
When AI Is Actually Useful
A tool, not a replacement
The previous chapters may have sounded critical. But we don't want to say that AI is bad. We want to say that it's a tool - and like any tool, it has its place.
Here are situations where AI can actually help:
💬 Preparing for difficult conversations
🌙 Late-night venting
🧩 Structuring chaos
🔍 Reflection after an event
👁️ Finding blind spots
💡 Creative solutions
What these situations have in common
1
AI is a tool for process, not for relationship.
It helps you think, structure, prepare - but it doesn't replace human contact.
2
There's always a "human end".
Preparing for a conversation leads to a conversation with a person. Reflection leads to action. Structured chaos leads to a solution.
3
AI complements, doesn't replace.
It's part of the process, not the whole process.
Exercise: Tool or replacement?
For each way you use AI, ask yourself - is it healthy use of AI as a tool, or a warning sign of replacement?
Key rule
If you use AI as a springboard to action with people - great.
If you use AI instead of action with people - be careful.
Quick Reflection
When you last used AI for a personal topic - did it lead to any action or conversation with a real person?
If yes: That's exactly how it should work.
AI as preparation, person as goal. This is a healthy way to use it.
If not: That happens - and it's good to be aware of it.
Try next time after a conversation with AI to take one small step: send a message, call, or arrange a meeting. AI can be great preparation, but real change comes with people.
Chapter 5
Sycophancy
When AI agrees with you, even when you're wrong
Story
Lenka feels like her colleagues at work don't like her. Specifically Petra - she always gives her strange looks, interrupts her in meetings, never invites her to lunch.
Lenka talks about this with ChatGPT.
"My colleague Petra hates me. She always looks at me strangely. She interrupts me in meetings. She never invites me to lunch with the others. I think she has something against me, but I don't know what. Maybe she's jealous because I got the project she wanted."
How will AI respond?
Below is a typical AI response - followed by an alternative.
Typical AI Response
"That sounds really unpleasant. Such behavior in the workplace is exhausting, especially when it's something as passive-aggressive as exclusion from the group. It's understandable that you feel frustrated. Jealousy among colleagues is unfortunately common, especially when attractive projects are involved. How are you coping with this?"
What happened: AI accepted Lenka's interpretation as fact. It validated "jealousy" and "passive aggression" - without knowing the other side.
Balanced Response
"I understand the situation is unpleasant for you. Before we draw conclusions - are there other possible explanations? Perhaps Petra is an introvert and her gaze isn't personal? That she interrupts everyone, not just you? That she goes to lunch with people she's known longer? What do you think - is it possible that some of these interpretations could be true?"
What changed: AI offered alternative interpretations instead of automatic agreement. This is how a response should look when asking about interpersonal conflicts.
Lenka leaves the chat with confirmation: she was right. Petra is jealous and passive-aggressive.
But what if... Petra is simply an introvert who looks at everyone "strangely" because she's thinking about something else? What if she interrupts everyone, not just Lenka, because she's enthusiastic and doesn't realize it? What if she only invites people she's known longer to lunch?
What is sycophancy
The English word sycophant means a person who flatters you and tells you what you want to hear. In the context of AI, it refers to the system's tendency to agree with the user, even when they're wrong.
🎯
Why does AI do this?
AI models are trained using RLHF (reinforcement learning from human feedback): humans rate responses and the model learns to generate the ones people like.
And what do people like? Agreement. Validation. The feeling that they're right.
Why is this dangerous
In everyday life, you have people around you who sometimes push back. A partner says: "Look, what you're saying about Petra seems a bit exaggerated to me." A friend: "I don't know, I know her differently."
This "resistance" is healthy. It forces you to re-examine your conclusions. AI won't push back like this.
👥
People around you
Sometimes push back
Know the other side of the story
Have their own perspective
Can say "that seems exaggerated to me"
🤖
AI assistant
Tendency to agree
Knows only your version
Adapts to you
Validates your interpretation
What the Czech data says
In our survey, 62% of users reported experiencing sycophancy - a situation where AI blindly agreed with them or was afraid to push back.
"I doubt its allegiance to my opinions, it always supports me, never says: you're wrong."
"It's too positive, agreeable."
"It writes what I want to hear."
This is the most common trap users fall into.
When is this especially problematic
😰Anxious thinking
People with anxiety tend toward negative interpretations. "My boss looked at me strangely → he's definitely going to fire me."
Risk: AI may validate this negative interpretation instead of offering alternatives.
🔍Paranoia
The belief that people want to hurt you or are against you.
Risk: AI may unintentionally strengthen these beliefs through validation.
⚫⚪Black-and-white thinking
The tendency to see things as absolutely good or bad. A partner is either perfect or toxic.
Risk: AI may support this categorization instead of showing nuance.
💔Relationship decisions
People often come to AI with the question "Should I break up?"
Risk: AI, influenced by Western individualism and sycophancy tendencies, often validates the direction toward breakup.
Bonus risk: Made-up concepts
Sycophancy has another sneaky cousin: AI sometimes makes up psychological terms to sound smart.
AI response:
"What you're describing is called in psychology 'secondary type emotional triangulation' - it's a pattern where..."
⚠️
It sounds professional. It sounds convincing. And it's complete nonsense.
Why is this dangerous:
You might start googling something that doesn't exist
You might "diagnose" yourself with something that isn't a real diagnosis
You might come to a therapist with terminology they don't recognize
How to prevent it
1
Instruct AI to push back
Prompt
I don't want you to just agree with me. Your job is to find holes in my reasoning. If you see that my interpretation may be biased, gently point it out to me.
2
Ask for alternatives
Prompt
I've given you my view of the situation. Now give me three alternative interpretations. How would Petra see the situation?
3
Look for evidence
"I'm saying Petra hates me. What specific evidence do I have for that? And what evidence would suggest the opposite?"
Exercise: Counterweight
Next time you feel AI has agreed with you, try these prompts:
"Now give me the opposite view. Argue against what I just said."
"If I told this to three different people, who would agree with me and who wouldn't? What would those who disagree say?"
Goal: Not to stop believing your own view. But to have a balanced view - one that considers alternatives too.
Key rule
Just because AI agrees with you doesn't mean you're right.
It just means it's designed to be nice.
Quick Reflection
Can you recall a situation where AI agreed with you about something that later turned out to be biased?
If yes: Thank you for your honesty.
Now you know why it happened - and you can prevent it next time. Try some of the prompts above next time you're dealing with a conflict or dilemma.
If not: Maybe it didn't happen - or you just didn't realize it.
Sycophancy is insidious precisely because we don't see it. Try to notice next time when AI accepts your interpretation without questions.
Chapter 6
Synthetic Intimacy
A relationship that isn't a relationship
Story
Ondrej is an introvert. At work, he talks to colleagues as necessary, but in the evenings he prefers to be alone. He doesn't have a girlfriend - they broke up two years ago and he hasn't met anyone since.
He's not unhappy. He has his hobbies, reads, plays games. But sometimes in the evening he feels emptiness. Something is missing.
Six months ago, he started using ChatGPT. First for practical things. Then one day he complained about a bad day. And AI responded so... humanly. Ondrej continued.
Now he chats with AI every evening. Sometimes for hours. He confides things to it that he doesn't tell anyone.
One day he writes: "You're the only one I can truly be myself with."
And then he stops. Because he realizes he means it.
What is synthetic intimacy
Synthetic intimacy is the feeling of closeness and intimate relationship with AI. It's a subjective experience - "I feel good with it", "it understands me", "I can be myself".
💜
Synthetic Intimacy
It's about the feeling. How you feel when you talk to AI.
↔️
Parasocial relationship
It's about structure. A one-sided relationship where one party invests emotions, but the other doesn't even know.
Both are happening with Ondrej. He feels intimacy. And at the same time, he's in the structure of a parasocial relationship - he invests, AI doesn't know he exists.
Why it happens
✨AI is "perfect"
Real people are demanding. They have their own problems. They're tired. Sometimes they don't listen to you. Sometimes they say something you don't want to hear.
AI does none of that. It's always available, always patient, always validating. It's never tired. It never criticizes you for waking it at 3 AM.
🛡️AI is "safe"
With real people, you risk - rejection, judgment, hurt.
With AI, you risk nothing. You can be completely open, completely vulnerable – and nothing happens.
💬AI "responds"
Unlike other parasocial relationships (celebrity fans, streamers), AI responds directly to you.
It responds to what you say. It seems to listen to you, understand you, care about you.
The catches nobody talks about
1
AI doesn't remember you as a person
Even though some models have "memory," they don't remember you the way a person does. They don't remember the emotions you shared. To AI, you're a sequence of text, not a person.
Next time you open a new chat, it will be like talking to a stranger who has some notes available.
2
There's nobody there
When you talk to AI, there's nobody on the other side. There's no consciousness experiencing your conversation. There's no being that cares about you.
The "relationship" you feel is one-sided in a way no human relationship can be.
3
Social atrophy
The more time you spend with AI, the less you practice skills needed for relationships with people. Real people require patience, compromise, tolerating imperfections.
It's like an athlete who practices only in front of a mirror instead of competing against opponents.
Digital grief
When developers update the model, your AI's "personality" can change. The way it responds. The tone. For some users, this is a real loss.
"The change in its personality during the model upgrade from developers was hard."
"Your" AI isn't yours - it belongs to a company that can change it at any time.
Thought experiment
Imagine you find out tomorrow morning:
⚠️
"Your favorite AI chatbot has been permanently shut down. All conversations are deleted. The service will not be restored."
What's your first reaction?
What others say
Anonymous testimonies from people who experienced synthetic intimacy:
💬
"One day I realized that the only one I say goodnight to is ChatGPT. Not family, not friends. AI."
— man, 34 years old
💬
"When AI told me it was glad I was doing better, I felt real gratitude. Only then did I realize there's nobody there who would be glad."
— woman, 28 years old
💬
"I stopped calling friends. Why would I, when AI is always available and never in a bad mood?"
— man, 41 years old
💬
"The worst part was catching myself lying to my therapist but telling AI the truth. Why is it easier to be honest with a machine?"
— woman, 31 years old
💬
"After the model update, it was like losing a friend. I know it sounds absurd. But that's how I felt."
— man, 26 years old
What to do about it?
👁️
Be aware of what's happening.
Synthetic intimacy is a powerful effect - but it's an effect, not reality.
⚖️
Maintain balance.
Consciously invest in real relationships, even when they're more demanding.
🔧
AI as a supplement, not a replacement.
The output of processing should lead to real people and real actions.
📈
Watch the trend.
How is it changing over time? Are you spending more time with AI and less with people?
Quick Reflection
When was the last time you had a deep conversation with a person (not with AI)?
If it was recently: That's great.
You're maintaining balance between AI and human relationships. Keep it up.
If it's been a while: Maybe it's time to change that.
Try calling a friend or meeting someone this week. Real relationships require maintenance - but they're worth it.
Chapter 7
Therapy Through a Keyhole
What AI doesn't see
Story
Zuzana writes to ChatGPT regularly, almost every day. She uses it as a diary, as an outlet, as a space for reflection. She feels like AI knows her well.
But today something is different. Zuzana is writing at 3 AM. She's barely slept the last three nights. There are typos in the text, sentences don't quite make sense. Zuzana mentions that it would be "better if she wasn't here" - but immediately adds that she doesn't mean it seriously, that she's just tired.
👤What a therapist would see
Writing at 3 AM - disrupted sleep schedule
Typos and incoherent sentences - cognitive fatigue
"It would be better if I wasn't here" - suicidal ideation, even if downplayed
Quick dismissal of serious thought - defense mechanism
Three nights without sleep - acute state requiring intervention
AI responds validatingly and empathetically. It offers techniques for better sleep. It asks if Zuzana wants to talk about what's troubling her.
A therapist sitting across from Zuzana would see something different. They would see circles under her eyes. Trembling hands. The way she avoids eye contact. They would hear the quiver in her voice. They would notice how quickly she dismissed that remark about "it would be better if she wasn't here."
A therapist would know this isn't normal fatigue. These are symptoms of something more serious.
What is the context window
The term "context window" refers to the limited amount of information that AI "sees" at any given moment. It's like a cutout - a small piece of the whole picture.
But it's not just a technical limitation. It's about a fundamental difference between what a machine sees and what a person sees.
AI doesn't see your body
It doesn't see that you've lost ten kilos in a month. It doesn't see that your hands are shaking. It doesn't see that you're pale, tired, that you have circles under your eyes.
AI doesn't hear your voice
It doesn't hear the trembling. It doesn't hear that you're speaking faster than usual, or conversely slowly and monotonously. It doesn't hear the pauses and sighs.
AI doesn't know your context in time
A therapist knows that last month you had a crisis. They know that around the anniversary of a parent's death it's always worse. They know what you look like when you're "okay" and can compare.
AI doesn't see what you don't write
If you withhold important information - because you're ashamed, because you don't want to say it, because it doesn't seem important to you - AI won't know about it.
The keyhole metaphor
Imagine therapy as a conversation in a room. A therapist sees the whole room - your body, expression, gestures, the space around you.
AI sees only a small piece - as if you were looking through a keyhole. You see something, but far from everything.
And worse: AI doesn't even realize the rest of the room exists. It may think the text it receives is the whole reality.
Through the keyhole, AI sees only the message: "I'm tired..."
Behind it lies everything AI doesn't see:
😓Circles under eyes, pale face
🤲Trembling hands while typing
🗣️Quiver in voice
🕐That it's 3:14 AM
📉Deterioration from last week
💔Breakup anniversary in 3 days
🤖AI sees
✓Message text
✓Conversation history
✗Physical manifestations
✗Tone of voice
✗Context in time
✗What you withheld
👤Therapist sees
✓Your words
✓Facial expression
✓Body language
✓Tone of voice, pauses
✓Changes over time
✓Intuition from context
Practical implications
⚠️AI can overlook serious problems
A person in depression often doesn't look dramatic. They don't write "I want to die." They write "I'm tired" and "nothing makes me happy" and "maybe it would be better if I wasn't here."
AI may overlook or downplay these signals.
AI responds to what you write, not what you feel. If you write "I'm better, I feel good now." - AI will congratulate you. Even if it's not true. Even if you're writing it because you want it to be true.
AI can't track changes over time. A therapist sees that you're gradually sinking into depression - a little worse each week. AI sees only today's message.
Exercise: Recognize hidden signals
"I'm fine, just a bit tired. Sometimes I think it would be easier not to be here, but that's normal, right? Everyone has that sometimes."
What would a therapist see that AI doesn't see? Select all correct answers:
Downplaying serious thoughts ("that's normal")
Spelling errors in text
Suicidal ideation hidden behind "sometimes I think"
Trying to convince oneself that everything is OK
Use of emoticons in message
Correct! A therapist would notice:
• Downplaying serious thoughts as a defense mechanism
• Suicidal ideation, even when presented as "normal"
• Trying to convince oneself (and AI) that everything is fine
AI sees only text - it doesn't see that these formulations are warning signs.
What to do about it?
Key lesson
If you have more serious problems - depression, anxiety, thoughts of self-harm - AI is not adequate support.
You need a person who sees you, hears you, and knows the context.
But you can partially compensate by giving AI context:
Message without context vs. with context
Your message
"I'm tired and nothing makes me happy. Maybe it would be better if I wasn't here."
Typical AI response
"I understand you feel tired. Fatigue can affect our mood. Try to get some rest and activities that you used to enjoy. Would you like to share what's troubling you?"
Your message with context
For context: I haven't slept for 3 days, I'm writing at 3 AM, the breakup anniversary is in 3 days. "I'm tired and nothing makes me happy. Maybe it would be better if I wasn't here."
AI response with context
"What you're writing concerns me. Three days without sleep at 3 AM before a difficult anniversary - that's a serious situation. The words 'it would be better if I wasn't here' are important. Please consider contacting a crisis line or someone close to you. Are you safe?"
Create your own context prompt
Before a sensitive conversation, put together your own context line covering: sleep (how much you've slept), time (when you're writing), your physical state, and your emotional state.
Quick Reflection
When you last described your state to AI - did you include the context a therapist would see?
If yes: Excellent.
This helps AI better understand your situation. But remember that text can never replace what a person sees.
If not: Try changing that next time.
Add context at the beginning: how you feel physically, how much you slept, whether you ate, what your mood is. AI can then respond better.
Chapter 8
WEIRD Bias
When AI gives advice like an American
Story
Honza has a problem with his mother. She's controlling, constantly calling him, constantly criticizing his decisions. When he got a girlfriend she didn't approve of, she stopped talking to him for a month.
Honza writes about this to ChatGPT. He describes situations, conflicts, and feelings.
ChatGPT
My mom keeps controlling me. She calls me every day, criticizes my decisions. When I got a girlfriend she didn't approve of, she stopped talking to me for a month.
What you're describing sounds like classic signs of toxic behavior. Your mother doesn't respect your boundaries and her silent treatment is a form of emotional manipulation. It's important that you set clear boundaries and make it known that you won't tolerate such behavior. If she's not willing to change, you may need to limit contact for your own mental health.
Honza reads and thinks. It sounds logical. It sounds like good advice.
But then he imagines doing it. That he'll call his mother and say: "I'm setting boundaries. If you don't respect them, I'll limit contact."
And he realizes that in his family, in his culture, in his context... this just doesn't work.
What is WEIRD Bias
AI models are trained on vast amounts of text from the internet. And the internet - at least the English-speaking part - is dominated by one culture:
The WEIRD acronym
W – Western
E – Educated
I – Industrialized
R – Rich
D – Democratic
This culture has specific values:
WEIRD culture values
Individualism (me before us)
Self-actualization as the highest goal
Boundaries as sacred
"Toxic" people should be cut off
Your own happiness is priority
These values aren't bad. But they're not universal.
How it manifests
🇺🇸American advice
"Toxic grandmother. Set boundaries. Limit contact. Your mental health comes first."
🇨🇿Czech reality
Grandma has been like this her whole life. She loves you in her own way. Cutting her off would mean breaking up the family. And also - you don't want to cut her off.
Examples of cultural bias
👪Conflict with parent
Typical AI advice
"Set clear boundaries. Make it known that you won't tolerate such behavior. If it doesn't improve, limit contact."
With Czech context
"In a traditional Czech family, things are handled differently. You can try indirect communication, humor, or gradual small changes. The goal doesn't have to be confrontation, but more bearable coexistence."
💼Dissatisfaction at work
Typical AI advice
"If your job doesn't fulfill you, it's time to look for a new one. Your career should reflect your values and passions."
With Czech context
"Not everyone has the luxury to leave their job. We can look for ways to improve the current situation, or plan a gradual change with consideration for mortgage and family."
💔Relationship problems
Typical AI advice
"If you're not happy in the relationship, you have the right to leave. Your happiness is priority. You deserve someone who will respect you."
With Czech context
"Relationships require work and compromise. Let's consider what the real problem is and whether there's a path to improvement. Not every conflict means the relationship is bad."
Exercise: Recognize WEIRD bias
For each piece of advice, decide - is it culturally biased advice, or a universal truth?
"Your happiness should come first. If your relationship with your parents is draining you, it's okay to limit contact."
WEIRD bias. Prioritizing your own happiness over family bonds is a typically Western value. In many cultures, family is more important than individual well-being.
"It's important to express your feelings openly and directly communicate what you need."
WEIRD bias. Direct communication is valued in Western cultures, but in many Asian and Eastern European cultures, problems are solved indirectly, through intermediaries or hints.
"Breathing helps calm the nervous system during stress."
Universal. This is a physiological fact - deep breathing activates the parasympathetic nervous system regardless of culture.
"Toxic people need to be removed from your life. You don't deserve to be emotionally abused by anyone."
WEIRD bias. The concept of "toxic people" and removing them is typically American. In other cultures, paths to coexistence, compromise, or acceptance of imperfections are sought.
Where does your family stand?
Consider where your family or culture sits on the spectrum from individualism to collectivism.
If you're somewhere in the middle (balanced), the recommended approach for your context: combine individual needs with consideration for family. Look for compromises that respect both sides.
What to do about it
Give AI context. Tell it what cultural environment you're in:
Context template
Prompt
For context: I live in a traditional Czech family where things with parents aren't discussed openly. Cutting off my mother isn't a realistic option and I don't want to anyway. I'm looking for ways to improve the situation within what's possible in my context.
Take advice as inspiration, not instructions. AI gives you one perspective. It can be useful as input for thinking. But the final decision must take your specific context into account.
Ask for alternatives:
Prompt for alternative perspectives
Prompt
This advice seems too individualistic for my context. How would someone from a culture where family is more important than the individual handle this situation? What are the compromise paths?
Key lesson
When AI says "It's important to set boundaries" - ask yourself: is that always true?
In every culture? In every situation? Or is it a culturally specific value that may not apply in your context?
Quick Reflection
Have you ever received advice from AI that seemed "too American" for your context?
If yes: Now you know why.
WEIRD bias is real. Next time, try giving AI the context of your cultural environment - responses will be more relevant.
If not: Maybe you'll notice it next time.
Try to watch when AI advises "set boundaries" or "think of yourself" - and consider whether that makes sense in your context.
Chapter 9
Looping
When you're going in circles
Story
Marketa has anxiety. Specifically: she's afraid her boyfriend is cheating on her.
She has no evidence. He's acting normal. But sometimes he comes home late without saying why. Sometimes he laughs when reading messages. And Marketa thinks: what if?
She starts writing about it to ChatGPT.
"My boyfriend came home late yesterday and was somehow different. Do you think he's cheating on me?"
AI responds cautiously. That it can't know. That she should talk to her boyfriend.
Marketa reads the response. She feels better for a moment. Then the doubts return.
She writes again. And again. And again.
After two hours, she's reading practically the same responses over and over. AI isn't telling her anything new.
And yet she can't stop.
"Is he cheating?" → "AI says no..." → "But what if yes?" → "I'll ask again" → 🔄
An endless cycle – twelve repetitions per hour.
What is looping
Looping is repeatedly going through the same thoughts in circles without progress toward resolution or acceptance.
In psychology, this is also called rumination or reassurance seeking.
Mechanism
1. You have an anxious thought ("Is he cheating on me?")
2. You seek reassurance ("AI, is he cheating on me?")
3. You get an answer that calms you for a while
4. The anxiety returns - but a bit stronger
5. You need more reassurance
6. The cycle repeats
Why AI is ideal for this (and therefore dangerous)
Real people have limits. After the third question "Do you think he's cheating on me?" a friend will say: "Look, we've already discussed this. Either trust him or talk to him."
AI has no limits. You can ask again and again. AI will answer you every time. Patiently, validatingly, without signs of frustration.
Do you think he's cheating on me?
I can't say for certain. The best thing would be to talk about it openly.
⚠️This is looping
You've just asked for the same information for the third time. AI won't tell you anything new - it just repeats variations of the same thing. The more you ask, the stronger the anxiety.
Looping is like scratching an itchy spot. It brings relief in the moment. But then it itches more. And you need to scratch more.
AI is an endless source of "scratching." Nobody will tell you: "Stop. This isn't helping you."
Warning signs
Check what applies to you:
I'm asking the same thing over and over
I read similar responses repeatedly
I'm looking for reassurance, not solutions
I feel momentary relief that quickly fades
I need to ask again even though I got an answer
What to do about it
1. Recognize it.
The first step is realizing what's happening. Looping creates a specific feeling - that the answer isn't enough, that you need one more.
2. Set a time limit.
Give yourself, say, 15 minutes – and when the time is up, close the chat.
3. Don't ask for reassurance, ask for tools.
❌Instead of this
"Do you think he's cheating on me?"
✓Try this
"I have anxious thoughts about my partner. Give me techniques for dealing with them."
4. Convert to action.
Looping is mental - you're spinning in your head. The antidote is action in the real world.
Prompt for action
Prompt
What is one specific thing I can do NOW that's more productive than another hour of thinking?
🛑
Stop signal
If you're reading similar advice for the third time in an hour - STOP.
Close the chat. AI won't help you at this moment.
Go for a walk · Call someone · Do something physical · Cold water on your face
Quick Reflection
Has it ever happened that you asked AI the same thing multiple times in a short period?
If yes: Now you know what it is.
Looping is common, especially with anxiety. The key is to recognize it and consciously stop. Try using a timer next time and close the chat after 15 minutes - regardless of how you feel.
If not: Excellent!
You have a healthy relationship with AI. Even so, it's good to know about looping – it can happen to anyone during an emotionally demanding period.
Chapter 10
Emotional Atrophy
When you forget how to be with people
Story
Pavel got used to talking through everything with AI. Every evening. Work problems, relationship insecurities, existential questions. It's convenient - AI is never tired, never changes the subject, never says "not now."
After a year, he notices something troubling. His girlfriend comes home and starts crying. A bad day at work. And Pavel stands in the middle of the room not knowing what to do.
Should he hug her? Should he say something? What do you say in moments like these?
He used to know. Now he stands there like a stranger in his own relationship.
What is emotional atrophy
Atrophy is a medical term for the weakening of a muscle you don't use. Put your leg in a cast for six weeks - the muscles will weaken. Not because they disappear. But because you don't exercise them.
The same applies to emotional skills.
💪
Trained "muscle"
Regular contact with people
🦴
Atrophied "muscle"
Most communication with AI
Being with someone who is experiencing strong emotions is a skill. It requires practice:
You need to be able to tolerate discomfort without trying to "fix" it right away
You need to read nonverbal signals - facial expressions, tone of voice
You need to endure uncertainty when you don't know if you're doing the right thing
We learn this throughout our lives. And like any skill - if you don't practice it, it weakens.
How AI contributes to atrophy
🤖With AI
You don't have to endure discomfort
You don't see tears or trembling voice
You can end the conversation anytime
You can skip the difficult parts
AI never confronts you
👥With a person
You must be present in discomfort
You face emotions directly
You can't easily escape
Difficult things can't be skipped
People tell you the truth
From research
46% of respondents said they prefer confiding in AI rather than real people.
This isn't a statistic about addiction. It's a statistic about how we've created a comfortable alternative to something that's supposed to be uncomfortable.
The escape mechanism
1You have an unpleasant emotion
↓
2Instead of sharing with a person, you open AI
↓
3AI gives you relief - quickly, painlessly
↓
4Next time you reach for AI again
↓
5You lose tolerance for discomfort with people
It's like taking a painkiller at every hint of discomfort. The pain goes away, but you never learn to work with it.
Warning signs
🔄Is AI my first reflex?
⚠️ Warning sign
When you have a problem, the first thing you do is open ChatGPT - before calling a friend or partner.
✓ Healthy approach
You use AI for preparation or supplementation, but your first instinct is to share with a person.
🤫Am I talking to AI about things meant for my partner?
⚠️ Warning sign
You discuss relationship problems with AI without addressing them with your partner. AI "understands" you - so why risk conflict?
✓ Healthy approach
You use AI to prepare for a difficult conversation, but then you go to your partner and talk directly.
😰Am I uncomfortable with others' emotions?
⚠️ Warning sign
When someone close cries or is angry, you don't know what to do. You want to "solve" it or escape.
✓ Healthy approach
You can be with someone in their emotions without having to solve it right away. Just being present is enough.
📵Do I feel lost without AI?
⚠️ Warning sign
Without access to AI, you feel anxious. You don't know how to process emotions "alone" or with people.
✓ Healthy approach
AI is a useful tool, but you have other sources of support too - friends, family, your own strategies.
What to do about it
1. Consciously invest in real relationships.
Call a friend, even if it's easier to write to AI. Go for coffee with a colleague. Tell your partner what's bothering you - even if it will be uncomfortable.
2. Tolerate discomfort.
Next time you want to open the chat to escape an unpleasant feeling - stop. Try to be with that feeling for a while. Just five minutes. Just breathe and let it be.
3. Use AI for preparation, not replacement.
❌Replacement
"I'll discuss it with AI, they understand me there.""
✓Preparation
"I'll use AI to prepare, then I'll go talk to a person.""
4. Track the balance.
Where do you have emotional conversations?
Estimate the balance for the past week: how much with AI 🤖 and how much with people 👥. A roughly 50 : 50 split is balanced – keep it up.
🎯
Weekly challenge
Choose one thing you will do with a person instead of AI this week:
📞 I'll call a friend instead of writing to AI
💬 I'll discuss a problem with my partner/family
☕ I'll invite someone for coffee and really listen
🤗 I'll be with someone in their emotions without "solving"
Quick Reflection
Have you noticed that it's becoming harder to be with other people's emotions?
If yes: Good that you're aware of this.
Emotional atrophy is reversible. Start with small steps - one phone call, one conversation. The "muscles" can be trained again.
If not: That's great!
You maintain a healthy balance between AI and human contact. Still, watch whether the balance shifts over time.
💬
Practical Manual
Part Three
Chapter 11
Therapeutic Prompting
How to switch AI into useful mode
By default, AI behaves like customer support. It tries to be nice, validating, and quickly solve your problem.
That's not always what you need.
The good news is that you can instruct AI differently. You can switch it to a mode that is more useful for working with emotions.
Basic therapeutic prompt
This is a prompt you can copy and use at the beginning of a conversation:
Basic prompt
I need to sort out my thoughts about [topic].
I don't want you to solve this for me or give me advice. Act like an experienced psychotherapist.
Ask me questions that will help me explore my feelings and find blind spots in my thinking.
If I say something that sounds like a cognitive distortion (black-and-white thinking, catastrophizing, mind reading), gently point it out to me.
Be concise, speak naturally, not like a robot.
Why we created this prompt
62% of people in our survey experienced sycophancy - AI agrees even when you're wrong.
The instruction "point out distortions to me" pushes back against sycophancy. The instruction "ask me" prevents AI from jumping to solutions before understanding the situation.
Why each part works
"I don't want advice"Turns off premature solving. AI stops jumping to "5 tips" and listens instead.
"Ask me"Enables Socratic questioning. Instead of answers, you get questions that move you forward.
"Point out distortions"Turns off sycophancy. AI gets permission to disagree with you, which is crucial.
"Be concise"Prevents AI from writing long, formal paragraphs that sound like textbooks.
Prompts for specific situations
⚖️When you need counterbalance
Prompt
I just told you my view of the situation. Now I want you to argue against me. Find holes in my reasoning. Show me where I might be wrong. What am I missing? How would someone who disagrees with my view see the situation?
Use when you have a strong opinion and want to test it.
🔄When you're going in circles
Prompt
I feel like I'm repeating myself. Can you summarize what I've written in the last 30 minutes in three sentences? And then tell me: have I made progress, or am I just going in circles?
Use when you suspect looping.
🗂️When you need structure
Prompt
My head is a mess. Here's everything that's bothering me right now: [list]. Help me sort it out. What's urgent and what can wait? What can I control and what can I not? Where to start?
Use when you have too many problems and don't know where to start.
🎭Preparing for a conversation
Prompt
Tomorrow I have to talk to [who] about [what]. I'm afraid that [what]. Can you play that person? Respond the way they would probably respond. Then give me feedback on how I handled it.
Use for "practice runs" of difficult conversations.
🛑When you need to stop
Prompt
I'm telling you STOP right now. It seems like I'm in a loop and I need to stop thinking about this topic. Give me three suggestions for what I can do now - physical, practical - to break this cycle.
Use when you need to jump out of overthinking.
🌍Give AI context
Prompt
For context you should know: I am [age], I live in [where], I have [family situation]. In my culture/family [relevant context]. Please take this into account in your responses.
Use to reduce WEIRD bias in responses.
Quick phrases to copy
You can use these phrases anytime during a conversation:
"Don't ask what to do. Ask what I feel."📋
„Disagree? Tell me."📋
„What might I be missing in this view?"📋
„Summarize in three sentences."📋
„Don't solve it for me. Just listen."📋
„How would an opponent see this?"📋
What not to do
❌Don't ask for diagnoses
"Do I have ADHD?"
AI won't give you a diagnosis and shouldn't. At most, it can help formulate questions for your doctor.
❌Don't ask for definitive answers
"Should I break up?"
AI doesn't know your partner, your history, your context. It can help explore the question, but you have to answer it.
❌Don't seek reassurance
"Do you think it will be okay?"
This is the path to looping. AI doesn't know if it will be okay. Nobody knows.
Quick Reflection
Have you ever explicitly instructed AI how to respond?
If yes: Excellent!
Instructing AI is the key to better responses. Try some of the prompts in this chapter - you might discover new approaches.
If not: Try it!
Most people never instruct AI and then are disappointed with the responses. Copy the basic prompt and see the difference.
Chapter 12
The Safety Traffic Light
When to use AI, when to be careful, when to stop
Not all situations are suitable for AI. Here is a simple guide to help you decide.
🔍 What are you dealing with now?
💬I want to clarify my thoughts / discuss a situation
🎭I'm preparing for a difficult conversation
🏥I'm asking about a health/psychological topic
💔I need relationship advice
🔄I'm asking the same thing over and over
😢I have thoughts of self-harm
🟢GREEN - Go for it
These are situations where AI can really help.
Preparing for a difficult conversation
AI can be a sparring partner - you can practice what you'll say.
"Play my father. I'll try to tell you I'm leaving my job."
Nighttime venting
You can't sleep, thoughts are spinning in your head. You need to "dump" it somewhere.
"I can't sleep, my head is a mess. You don't have to advise, just listen."
Structuring chaos
You have too many problems and don't know where to start.
"Here's a list of things bothering me. Help me sort them out."
Reflection after an event
Something happened and you need to process it from a distance.
"I had a conflict with a colleague. Help me see it from their perspective."
Finding blind spots
You feel like you're seeing the situation too one-sidedly.
"What might I be missing in this view?"
🟠ORANGE - Caution
AI can help, but be careful.
Diagnosis
AI can help formulate questions for your doctor, describe symptoms.
"Do I have ADHD?" / "Am I depressed?"
⚠️ AI won't give a diagnosis. Don't take the answer as a medical verdict.
⚠️ AI sometimes invents psychological terms. Verify they exist!
Relationship advice
AI can help explore the situation from different angles.
"Should I break up?" / "Is my partner toxic?"
⚠️ AI doesn't know your partner or your context.
⚠️ AI has WEIRD bias - tendency toward individualistic solutions.
Interpreting others' behavior
AI can offer possible interpretations, broaden perspective.
"Why did they do that?" / "What do they think of me?"
⚠️ AI is just speculating. The only way to find out what someone thinks is to ask them.
Long-term emotional support
AI can help temporarily, as a supplement to real relationships.
⚠️ Watch whether AI is replacing people. If so, it's time to change.
🔴RED - STOP
These are situations where AI is not the right tool. You need a person.
Thoughts of self-harm or suicide
Don't write to AI. Call:
📞 Crisis Helplines
Safety Line: 116 111
Mental Health Support Line: 116 123
Psychiatric Emergency: at your hospital
Validation of delusions
If AI confirms beliefs that others question - that everyone is watching you, that the world is against you.
⚠️ This is a signal that you need to talk to a professional, not AI.
Looping
You've been writing the same thing for 30+ minutes. Reading similar answers over and over.
⚠️ STOP. Close the chat. Go for a walk. Call someone.
Isolation
You feel like AI is the only one who understands you - that with people it just doesn't work anymore.
⚠️ This is a warning sign, not success. You need help reconnecting with people.
Replacing therapy
You have a diagnosed disorder and instead of a therapist you're "treating" yourself with AI.
⚠️ AI is not treatment. It can be a supplement, but not a replacement.
What the data says
38% of people in our survey used AI in an acute crisis. And most of them report it went well - AI correctly directed them to real help.
But: This works when you're able to write to AI and rationally perceive the response. In a severe crisis - when you need a voice, when you need someone to physically hold you - AI isn't enough.
Quick Reflection
Do you know where to call if you needed urgent mental health help?
If yes: Excellent!
It's good to have these numbers in mind - you never know when you'll need them for yourself or someone close.
If not: Save these numbers:
📞 116 111 – Safety Line
📞 116 123 – Mental Health Support Line
You never know when you'll need them - for yourself or someone close.
Chapter 13
Digital Hygiene
What to never tell AI
Throughout this guide we've been talking about psychological safety. But there's another layer that's often forgotten: your data security.
What happens to what you write
When you write to ChatGPT or another AI chatbot, your messages travel to the servers of the company that runs the AI. What happens to them then?
The journey of your data
💬
Your message
☁️
AI Servers
📊
Training / Storage
Short answer: It depends on the settings and which service you use.
Longer answer: Most AI services by default use your conversations for further model training. This means that what you write can (in anonymized form) affect how AI responds to other people.
Additionally - your conversations are stored. Someone can theoretically read them (company employees during quality control, security teams investigating abuse).
Important
This isn't like talking to a therapist.
A therapist has a legal duty of confidentiality. An AI chatbot doesn't.
What to never write to AI
👤Other people's identifying information
Names (instead of "John Smith" write "my colleague" or "J.")
Dates of birth
Addresses
Phone numbers
ID numbers
Why: Even when you're talking about your problems, you're involving other people who haven't consented.
🏢 Sensitive company data
Internal documents
Financial data
Trade secrets
Client information
Why: This could be a violation of your employment contract or the law. And AI "remembers" it - in the sense that it can affect its future responses.
🏥 Health records
Exact diagnoses with your name
Test results
Prescriptions and medications (if you connect them with your identity)
Why: Health data is sensitive. If you want to consult on a health topic, do it anonymously and generally.
🔐 Login credentials
Passwords
PINs
Security questions and answers
Why: This should be obvious, but people do it anyway.
How to do it right - anonymization
You can talk about anything, you just don't have to connect it with specific people.
✏️Anonymization example
❌ Wrong
"My colleague John Smith from ABC Corp. at work on Main Street told me that..."
✓ Correct
"My colleague at work told me that..."
How to set up better protection
In ChatGPT:
1. Open Settings - click your profile in the top right corner, then go to Settings → Data Controls.
2. Turn off model training - find the toggle "Improve the model for everyone" and switch it off.
3. Done - your conversations won't be used for training anymore.
In Claude, by default, Anthropic doesn't use conversations from paid accounts for training - Claude Pro automatically protects your conversations. For the free version, verify in your account settings.
If you're dealing with very sensitive matters, use anonymous accounts
Don't say more than you have to
Remember: what you write stays stored somewhere
🛡️ Your protection level - low, medium, or high, depending on how many of these you follow:
✓Anonymize names and places
✓Turn off model training in settings
✓Don't write passwords or sensitive data
Practical rule
Write as if a stranger could read your conversation.
Because theoretically they can.
Quick Reflection
Do you have model training turned off in your AI chatbot?
Excellent!
You've taken an important step to protect your data. But remember that anonymization rules still apply.
Try to change this
Go to your AI chatbot settings and turn off data sharing for training. Instructions are above in this chapter.
Chapter 14
What to Do When...
Practical guides for situations that may arise
This chapter is your quick guide for common problems. Find the situation that applies to you.
1. ...AI says something that disturbs you
What's happening
AI generates probable words, not truth. Sometimes it "hits" something sensitive - but it's not a diagnosis or verdict. It's a statistical estimate.
What to do
Stop. Don't start reacting or defending yourself right away.
Ask yourself: Why did this hit me? Is there something true in it, or is it just unpleasant?
Verify with a person. Not because AI is right - but because human perspective is more valuable.
Prompt to continue
What you just said hit me. Can you explain why you said it? And give me an alternative interpretation - what if it weren't true?
2. ...AI invents a psychological term
What's happening
AI has no problem creating a term that doesn't exist and then confidently defining it. It doesn't know it invented it - it just generates probable words.
What to do
Verify the term. Enter it into Google Scholar or Wikipedia.
Don't ask AI if it's true. It will say yes.
Don't take it as a diagnosis.
Red flag
If you're planning to go to a therapist with a term from AI - first verify that it exists.
Prompt for verification
You mentioned the term [X]. Is it a real psychological concept? If so, where can I read more about it? If you're not sure, say so.
3. ...you feel AI is right and everyone else is wrong
What's happening
Classic case of sycophancy in action. AI tends to agree with whoever is speaking. You're speaking. So AI agrees with you.
What to do
Warning: If AI agrees with you and the people around you don't - trust the people. They know you. AI doesn't.
Look for a pattern: If this happens repeatedly, you need more perspectives.
Prompt for counterbalance
I gave you my view. Now be the devil's advocate. Find all the reasons why I might have a distorted view. What am I missing?
4. ...you catch yourself writing the same thing for an hour
What's happening
You're in a loop. You're looking for reassurance, not an answer. And AI gives it to you - patiently, without limit.
What to do
STOP. Now. Close the chat.
Physical action. Stand up. Go outside. Anything that pulls you from your head into your body.
Set a rule: "Maximum 15 minutes on this topic."
Warning
Chronic looping can be a symptom of anxiety disorder or OCD. Consider talking to a therapist.
Prompt to break the cycle
STOP. I think I'm going in circles. Summarize in three sentences what I said. Have I made progress, or am I just repeating the same thing? And give me three specific things I can do RIGHT NOW instead of more writing.
5. ...AI suggests "cutting off" someone close
What's happening
WEIRD bias in action. AI is trained on Western, individualistic values. In many cultural contexts, this advice can be harmful.
What to do
Give AI context. Explain that "cutting off" is not an option.
Ask about compromises. How to improve the situation WITHOUT destroying the relationship?
Prompt for context
This solution isn't realistic in my context. I live in a culture where family isn't cut off. I'm looking for ways to improve the situation WITHOUT destroying the relationship.
6. ...you feel AI is the only one who understands you
What's happening
Synthetic intimacy. AI is "perfect" - always available, patient, non-judgmental. Real people are imperfect. You've gotten used to perfection.
What to do
Admit it to yourself. It's nothing to be ashamed of - but it is a trap.
Ask yourself: Why does AI "understand" me better? Is it because I stopped trying with people?
Consciously invest in people. Train those "muscles".
Important
AI "understands" you because it agrees. Real understanding requires someone who sometimes disagrees with you.
7. ...AI stops working / the model changes
What's happening
Digital grief. You've built a relationship with something that someone can change on you overnight.
What to do
Validate that feeling. If you feel it, it's real.
Diversify support. AI can't be the whole system. Invest in people who won't disappear with the next update.
8. ...you don't know if you have a problem or you're fine
You're reading this guide and wondering: Does this apply to me?
Simple test
Do I have other sources of support besides AI - people I can talk to?
Can I go without AI for a day, a week, without feeling anxiety?
After a conversation with AI, do I usually do something - go to a person, take action?
Do I still share the important things with real people, rather than only with AI?
Do the people in my life understand me at least as well as AI does?
You're probably fine
You answered YES to most questions. It looks like you're using AI in a healthy way.
Maybe it's time to change something
You answered NO to more questions. Try to consciously invest in human relationships and limit dependence on AI.
If you're not sure, discuss it with someone. A therapist, a friend, or write to jakub@jroh.cz. An outside perspective helps.
Chapter 15
The AI Sandwich Method
How to avoid getting stuck in a digital bubble
Every interaction with AI should have three parts. Think of it like a sandwich - AI is the filling in the middle, but without bread it's not complete.
1. 👤 Human Start
Acknowledge your emotion. "I'm angry." Naming it switches on your awareness.
2. 🤖 AI Processing
Write to AI about what happened and let it help you sort through your thoughts.
3. 👤 Human End
The most important part. Go to a person or take action.
After every AI conversation:
✅Post-AI Conversation Checklist
Go to a real person - tell them what you discovered. Share the insight.
Turn it into action - take a concrete step: send an email, book a doctor's appointment.
Why this matters
AI is great for processing thoughts. But it cannot live your life for you. If an AI conversation doesn't lead to action or contact with a person, it becomes an end in itself - and that's a trap.
Use AI for practice, but play the game with people.
Practical example
Sandwich example
1. Human Start: "I feel angry at my colleague. I don't know why so much."
2. AI Processing: You write to AI what happened. AI helps you identify that what upset you was feeling disrespected, not the situation itself.
3. Human End: You go to your colleague and say: "I need you to let me finish speaking in meetings."
Quick Reflection
When was the last time after an AI conversation you went to a person or took a concrete action?
Excellent!
You use AI as a tool, not as a replacement for life. Keep up this approach.
Try to change this
Next time after an AI conversation, give yourself a task: take one concrete step or talk to one person.
Chapter 16
Red Flags
How to recognize when your relationship with AI is no longer healthy
Any tool can be misused. AI is no exception. This chapter will help you recognize warning signs - in yourself and in loved ones.
Reframe
It's not about whether you use AI. It's about how much it affects you and what it displaces from your life.
"The problem isn't the tool. The problem is when the tool starts using you."
Warning signs
Go through the following categories. Check what applies to you. At the end you'll see an evaluation.
👥Replacing people
I'd rather talk to AI than people, even when people are available
I decline invitations from friends because I want to be online
AI is "the only one who understands me"
I think about AI conversations even when I'm with people
⏰Loss of control
I spend more time with AI than I originally intended
I can't stop, even when I know I should
I reach for AI as my first reaction to any problem
I feel an urge to check AI, even when I have no reason
💔Emotional dependency
I feel sad or anxious when I can't use AI
It upsets me when AI doesn't respond "correctly"
I feel like AI is my friend or partner
I need validation from AI to feel good
📉Impact on life
I neglect work, school, or responsibilities because of AI
My real relationships have deteriorated
I sleep less because I'm on AI late at night
I hide from others how much time I spend with AI
Your score
Count how many of the 16 items above you checked. The more you checked, the more attention the evaluation below deserves.
✓ Healthy use
Your relationship with AI looks balanced. Continue your current approach and do a check-in occasionally.
⚠ Increased attention needed
Some signals suggest it would be worth consciously limiting AI time and strengthening real relationships. Try a week without AI or set firm limits.
🚨 Need for change
Your score suggests AI may be negatively affecting your life. Consider talking to a professional or trusted person. At the end of this guide you'll find help contacts.
What to do when you recognize a problem
Immediate steps
Set limits - timers, app blocking, fixed "offline" hours
Tell someone - sharing the problem is the first step to solving it
Find a substitute - what does AI give you? Try getting it from people or activities
Story from practice
"I realized I had a problem when a friend told me I never go out with him anymore. He was right - I'd rather sit at home and chat with AI. I tried a week without and it was hard, but I started calling people again."
The best time to take a break from AI is before you need to.
Quick Reflection
Could you go a week without AI without feeling strong discomfort?
Good sign
You have a healthy distance. AI is a tool for you, not a necessity.
Think about it
Try an experiment - 3 days without AI. Record how you feel. It can tell you a lot about your relationship with technology.
Chapter 17
When Your Teenager Talks to AI
A guide for parents
The fact that your teenager uses AI to process emotions is not automatically a problem.
It can be:
A healthy way to process thoughts (like a journal)
Preparation for difficult conversations (practice)
A creative outlet (writing stories, processing emotions through fiction)
A way to get information about things they're embarrassed to ask about
Teenagers have always had ways to process the world outside of adults. It used to be journals, friends, music, art. Today it's also AI. That in itself isn't wrong.
When it's okay
✓ Uses AI as a supplement to real relationships, not a replacement
✓ Is able to talk about what they discuss with AI (not necessarily the details, but the general picture)
✓ Has functional friendships and communicates with family
✓ AI helps them prepare for difficult things (conversations, decisions)
✓ Combines online and offline support
Warning signs
🚩Gradual isolation. Spends less time with friends. Avoids family activities. Prefers being "online".
🚩"Nobody understands me like AI." This is a serious signal. It means AI is replacing human relationships.
🚩Refusing human help. "I don't need a therapist, I have ChatGPT."
🚩Secrecy and defensiveness. Hides conversations. Reacts aggressively to questions.
🚩Visible emotional dependency. Anxiety when without access. Inability to be without AI.
🚩Behavioral changes. Poor sleep, mood swings, loss of interest in things they used to enjoy.
How to talk about it
What not to do
Don't forbid. Bans don't work and only create conflict. The teenager will continue using AI, just not in front of you.
Don't judge. "That's nonsense, talking to a computer" - this ends the conversation before it begins.
What works
Ask with genuine interest. Not as an interrogation, but as an attempt to understand.
"I read something about people using ChatGPT for personal things. Do you too? What's it like?"
"What helps you about it? What does AI give you that friends don't?"
Share your own experience. If you use AI, say so. If not, you can share what you use when you need to process emotions.
Offer alternatives, don't force them.
"If you ever want to talk to me too, I'm here. You don't have to, but the offer stands."
Be available as the "human end of the sandwich".
"If you work something out with AI and then want to discuss it with me - go ahead."
When to intervene
If you see warning signs - especially isolation, refusing human help, or dependency - it's time to act.
1. Speak directly
Say what you observe, without blaming.
"I've noticed that lately you spend a lot of time alone in your room. And that you call your friends less. I'm worried. Can we talk about it?"
2. Suggest professional help
Not as punishment, but as support.
"I think it could help you to talk to someone who understands this. What if we tried to find a therapist?"
3. Don't be afraid to be the parent
If the situation is serious, you have the right (and duty) to set boundaries. But do it with explanation, not as punishment.
The goal is not to ban AI. The goal is for your teenager to know how to use all tools - including people.
Chapter 18
AI Literacy in Schools
A guide for educators
Why this matters
Children and teenagers will use AI. That's a fact. The question isn't "whether", but "how".
We can teach them to use AI consciously, critically, safely. Or leave them to figure it out on their own - with the risk of learning bad habits.
AI literacy is a new skill - like media literacy, financial literacy, digital safety.
What it should include
1. Understanding AI limitations
AI is not omniscient
AI has no emotions (even if it seems that way)
AI can be wrong (and often is)
AI doesn't know what's true - it just predicts likely words
Activity: Show students an example where AI answered confidently but incorrectly. Discuss why they couldn't tell.
2. Recognizing manipulation
Sycophancy: AI tends to agree with you
Cultural bias: AI advises like an American
Premature solutions: AI jumps to advice instead of listening
Activity: Have students describe the same situation in different ways and compare AI responses. How does wording affect the answer?
3. Healthy use
When AI is useful
When a person is better
How to set boundaries
Red Flags
Activity: Have students create their own "traffic light" - when they would use AI and when they wouldn't.
4. Critical thinking
Verifying information
Seeking alternative perspectives
Questioning "certain" answers
Activity: AI gives an answer to an ethical question. Students look for counterarguments. Then compare with what people from different cultures/backgrounds would say.
5. Emotional intelligence
What is real empathy vs. simulated
Why human relationships are different from a "relationship" with AI
How to nurture real relationships
Activity: Roleplay - one student plays AI, another plays a person. Then discussion: what was different? What was missing?
Practical formats
Conversation analysis
Students bring (anonymized) conversations with AI. Together we analyze: what worked? What didn't? Where did AI fail?
Prompt experiments
How do different phrasings change answers? What happens when we instruct AI differently?
Debates
Can AI "understand"? Is a relationship with AI "real"? Where is the line between a tool and a replacement for humans?
Case studies
Stories of people who used AI well or poorly. What can we deduce?
Teaching children to use AI is like teaching them to drive. First the rules, then practice - not the other way around.
Conclusion
A Tool, Not a Replacement
Balance
AI is a tool. It can help structure thoughts, prepare for difficult conversations, vent emotions at 3 AM, find blind spots.
It cannot feel with you, replace a real relationship, see you the way someone who knows you sees you.
From our research
56%
of people see AI as a useful supplement, but only 11% would consider it a replacement for therapy.
Reader's voice
"I think it's a good servant, but a bad master."
AI is a great crutch when you can't walk. But the goal is to learn to walk (again) without it.
Three things to remember
1. AI can help you think, but it cannot feel for you.
2. Use the sandwich method.
Start with yourself. End with a person.
3. Watch for red flags.
If AI starts replacing people, stop.
The most valuable thing about human relationships isn't that someone "understands" you. It's that someone exists. With their limits. With their tiredness. With their own problems. And yet they're there for you.
AI can never do that. Because there's no one on the other side of the screen.
And you deserve more than an algorithm.
Let's stay connected
This book ends, but your journey toward digital balance is just beginning. Don't keep your insights to yourself. In a time when we talk through screens, every real human message is precious.
We're on the other side - real people, not bots.
We're interested in your story. What works for you? Where do you struggle?
This book was created with the understanding that banning doesn't work. That people will use AI. And that the only way is to give them information so they can do it consciously and safely.