ChatGPT as Therapist? We're Creating the First Czech Guide
It's 2 AM and you can't sleep. Your mind is racing. Who do you text?
A friend? (Asleep.) Parents? (Awkward.) Therapist? (You don't have one, or they're expensive.)
Or you open ChatGPT, because at least it won't judge you.
I see this more and more often: in trainings, where teachers tell me their students have AI as their only confidant, and directly from the young people I work with. ChatGPT as the first stop for life's questions. Anonymous, available 24/7, "judgment-free".
Maybe you do it too.
Why We're Addressing This
My wife Gabi, a therapist (terapierohova.cz), and I see the same trend from opposite sides.
I work with young people who use AI for self-reflection, processing emotions, finding direction. Gabi sees clients in practice who use AI instead of or alongside therapy – sometimes usefully, sometimes riskily.
And we both discovered the same thing: quality guides in Czech don't exist.
There are foreign studies. There's data from the US, the UK, and Australia. But a Czech perspective? Nothing.
What We Know from Research
It can work. AI can help structure thoughts, process emotions, prepare for difficult conversations. There are studies with measurable positive effects.
But there are huge risks:
- Emotional dependence on "someone" who always responds
- Agreement even when you're wrong (AI won't tell you "hey, this is nonsense")
- False sense of relationship (synthetic intimacy)
- Failure in crisis situations (AI doesn't recognize when life is at stake)
And then there's something I've been researching for a while: AI has bias against young men. It systematically underestimates their problems. Offers less empathy. Relativizes legitimate concerns. I wrote about it here – and it's not theory, it's data.
What We Want to Create
We're preparing a comprehensive series of articles about how to use AI for emotional support consciously and safely.
No bans. No moralizing. Harm reduction.
You'll do it anyway – and often for good reasons. Therapy is inaccessible, expensive, or simply not for everyone. Instead of saying "don't do it", we want to say "here's how to do it more safely".
Planned structure:
- Why it works – mechanisms, data, when AI is actually useful
- Hidden risks – what can go wrong, what to watch out for (including that AI bias)
- Practical manual – specific prompts, safety rules, red flags
- Bonus for parents and teachers – how to talk about it with young people
Everything will be free. No course for 3000 CZK. No marketing. Just education.
We Need Your Experiences
Czech data doesn't exist, and we don't want to write based only on Western research.
That's why we've prepared an anonymous survey. It takes 10–15 minutes.
We're interested in:
- Do you use AI for emotional support? How? What works, what doesn't?
- Have you had a good experience with it? Or a bad one?
- What would you need to know to do it more safely?
What you'll get:
- Practical guides with specific prompts
- Clear boundaries – when yes, when no
- Czech experiences – how others use it
- Email with results and links to finished articles (if you want)
Why I'm Doing This
This is exactly the intersection that interests me. Technology with the potential to help – but also to harm. People who use it without guidance. A society that either panics or ignores the issue.
TECH MEETS HUMAN. Literally.
I understand why you do it. At 2 AM, when your head won't stop, AI is sometimes the only option. The question isn't whether to stop it. The question is how to do it more safely – and when it's better to go to a real person.
Help us create the first Czech guide.
Share further – the more perspectives, the better the result.
We'll start publishing articles in January 2026.
P.S. If you're going through a crisis right now, reach out to professionals:
- Crisis Helpline (Czech): 116 111 (24/7, free)
- First Psychological Aid Line: 116 123