Exploring AI, Ethics, and the Future of Psychotherapy on the Story Samurai Pod

  • Sep 25, 2025
  • 3 min read

In a recent episode of the Story Samurai podcast, Lexington Park Psychotherapy’s founder and clinical director, Dr. Jordan Conrad, joined writer and researcher Dr. Jason Branford for a deep conversation about artificial intelligence, therapy, and what happens when machines start talking back.


Hosted by Ari Block, the discussion dives into how AI—especially tools like ChatGPT—is beginning to shape how people think, feel, and relate to themselves. It’s a thoughtful, sometimes unsettling, and often funny look at how these tools are influencing the ways we seek support and make meaning.


AI, Personality, and the Stories We Tell Ourselves


One of the central themes of the episode is how AI tools, especially language models, aren’t just answering questions—they’re shaping stories. Whether it’s through tone, word choice, or the kind of “voice” they seem to have, these systems subtly influence how users reflect on themselves and the world.


Jordan talks about why that matters in therapy. When someone engages with an AI tool—whether to get advice, journal their thoughts, or simulate conversation—it’s not a neutral exchange. It can reinforce certain narratives or blur the line between internal and external authority. At the extreme, this has led people to stop taking medications and even to end their own lives. But even in less dramatic ways, people can relocate the locus of authority from themselves to their chatbot, trusting it more than they trust their friends, their family, and even themselves.


As Jordan notes, these technologies—especially conversational AI like ChatGPT—often operate with assumptions, “personalities,” and even values embedded into them by developers working from a particular cultural perspective. However objective or user-friendly they may appear, they are designed and trained within specific cultural and ethical frameworks.


Language Isn’t Just a Tool—It’s the Terrain


One of the subtler but most important points raised in the episode is that language itself—not just what is said, but how it is said—is central to both therapy and AI, and that when the two intersect, problems can occur.


Jason observes how generative AI reflects back a kind of distilled collective voice: trained on massive amounts of human text, it reproduces our patterns, our biases, and our cultural metaphors. That means the language AI produces isn’t neutral—it carries the weight of social norms, assumptions, and even psychological scripts.


This has serious implications in a therapeutic context. When a therapist communicates, they have to be alive to the particular meaning of their words and the way the patient will understand them. The example Jason introduces is “YOLO” (you only live once), which many young people hear as an invitation to take risks (“Go scuba diving, after all, you only live once”) but many older people hear as a caution encouraging safe behavior (“Buckle your seat belt, after all, you only live once”). When the miscommunication isn’t about phrases like YOLO but about love, sex, work, school, aspirations, desperations, trauma, depression, anxiety, and the rest of the stuff that makes life life, that kind of value mismatch can be very worrying.


Just Because It’s Smart Doesn’t Mean It’s Safe


In the podcast, Jordan expressed concern about the lack of AI regulation. Responding to a previous guest on the show who argued that AI is a tool like any other, Jordan pointed to an important difference between AI and most other tools. A knife can cut a tomato or stab a person, but nobody uses a knife to brush their teeth, inflate their car’s tires, or change channels on the television. AI, by contrast, can be employed in so many use cases that it becomes necessary to regulate appropriate and inappropriate use.


AI systems like ChatGPT are often deployed with little clarity about their intended use—or their limitations. They’re marketed as productivity aids, educational tools, emotional companions, even pseudo-therapists. But in practice, people may turn to them in moments of real vulnerability: when they’re anxious, isolated, or trying to make sense of painful thoughts.


About the Episode


This episode of Story Samurai ranged widely, offering a thoughtful, sometimes playful look at serious questions about technology, ethics, and human connection. If you’re interested in how AI intersects with psychology and identity—or you’re just curious about what therapists are thinking about behind the scenes—it’s well worth a listen.


🎙️ You can find the episode here: Story Samurai – Jordan Conrad & Jason Branford


Continuing the Conversation


If you’re interested in learning more about how our team is navigating these questions—or if you’d like to explore working with a therapist who understands the complexities of today’s digital world—reach out.

Lexington Park Psychotherapy 

1123 Broadway, New York, NY, 10010

85 Fifth Ave, New York, NY, 10003

All content copyright ©2026 Lexington Park Psychotherapy. All rights reserved
