
Lexington Park Research: Jordan Conrad published in Philosophical Psychology

  • Lexington Park Psychotherapy
  • Dec 18, 2025
  • 5 min read


Lexington Park Psychotherapy's founder and clinical director, Dr. Jordan Conrad, was published last week in Philosophical Psychology. Written in collaboration with Dr. Jason Branford of the University of Hamburg, Germany, the article, "Log in, lie down: Ethics and the digital turn in psychotherapy," explores the limitations of digital mental health interventions (DMHI for short) when used for psychotherapeutic purposes. It provides a comprehensive view of the challenges DMHI face in addressing the problems in mental health treatment at the levels of programming, use, and regulation.


As with Dr. Conrad's previous work on AI and psychotherapy, "Log in, lie down" discusses the positives of DMHI. Doomsayers often exaggerate the failures of AI, claiming that DMHI will never be able to deliver psychotherapeutic treatment effectively. This attitude resembles that of mid-20th-century chess players who were certain a computer program would never be capable of beating a grandmaster. (Famously, the chess grandmaster Garry Kasparov believed that the best chess players would be some combination of human player and machine intelligence. He was, of course, proven wrong.)


So what is wrong with using an AI therapist? To answer that question, we have to first understand what DMHI is for.


The Promise of Digital Mental Health Interventions


To those looking for a therapist in New York City, it can be hard to appreciate how scarce mental health treatment is elsewhere. Surrounded by such a high density of psychotherapists, it is easy to forget that nearly half of all Americans live in federally designated mental health professional shortage areas. Outside the U.S., treatment levels are also low: in high-income countries, only 33% of people receive treatment for depressive disorders, and in low- or lower-middle-income countries that number drops as low as 8%. The ability of digital psychotherapists, including AI chatbots, to reach geographically distant populations and to circumvent the stigma of mental health care by being discreetly available makes them an incredibly promising tool for combating the mental health crisis.


So what are the problems with DMHI? Should we all be clamoring for an AI psychotherapist? Drs. Branford and Conrad argue that the benefits of DMHI are, at least presently, outweighed by the negatives, and that these negatives arise at three levels: programming, use, and regulation.


Programming a Digital Psychotherapist You Would Want to Use


There are several challenges to developing an AI psychotherapist that is effective and safe. Although several mental health programs currently appear to be effective, that appearance is partly a product of regulatory gerrymandering. As Branford and Conrad write:


...the majority of DMHI available to the public cannot reasonably claim to be evidence-based nor substantiate their efficacy claims (Larsen et al., 2019; Marshall et al., 2019; Wilhelm et al., 2020). One thing confusing matters is that it has been convenient, if misleading, to distinguish between “wellness” and “health” apps – the former, it is suggested, carry little or no risk and so are exempt from providing empirical support, while the latter carry potentially significant risks and so are required to present high-quality efficacy data in order to receive FDA approval. This categorization has raised concerns that developers will be incentivized to subtly modify their apps to fall into “wellness” categories. Worryingly, among the roughly 20,000 mental health apps on the Apple App Store or Google Play Store, just five have FDA approval.
Another problem, potentially a result of so many apps falling into the “wellness” category, is that there is a lack of good safety data. A recent meta-analysis found that over one-third of the studies evaluated did not include any safety data and that the remaining used inadequate or widely varying methods, making the data difficult to compare; another meta-analysis found that the few existing studies assessing safety in mental health chatbots had a high risk of bias.

This is troubling, but in time it will be resolved. We should have every confidence that motivated developers and clever researchers will find a way to create safe and effective AI psychotherapy devices. But does that mean those devices won't cause any harm? The answer is no; it only means that when they do cause harm, they won't be malfunctioning.


Harmful Malfunctions

There is now a lawsuit connecting ChatGPT alone to seven suicides. Although the outcome is tragic, nothing went wrong at the level of the programming itself. Generative AI learns from its users - it figures out what the "correct" response is from the way the user responds, the language the user uses, the use-cases to which the user puts it, and so on - so when LLMs built on generative AI result in real-world harms, they are not malfunctioning any more than your car malfunctions when you get lost.


Worse still, it is not actually clear who (or what) ought to be held morally and legally culpable in situations like these. Should the programmer be held responsible? It doesn't seem so, because the chatbot's responses were not preprogrammed but determined by features of its conversations with users. Should the company that manufactured the AI psychotherapist? It may have run rigorous safety checks and, if it sold the product to another company, might not even own the intellectual property anymore. Should the owner of the intellectual property? They, too, may have run all the required tests without turning up anything worrying. This is known as "the problem of many hands," and it becomes even more pronounced when, as is often the case, developers are unable to predict a specific output or explain how it was reached.


Your Relationship

The relationship patients have with their psychotherapist in New York City is one-to-one. When a therapist at Lexington Park Psychotherapy is going on holiday or is sick, they communicate it to patients directly and with ample forewarning. The relationship users have with their AI psychotherapist is very different. Because the program is owned by a parent company, if that company decides to shift direction, goes bankrupt, or shuts down abruptly due to malfeasance, users will lose their therapist overnight, without any discussion or warning. This is a worrying prospect, and it is entirely unclear how we might begin to regulate it.


There are several other problems, both abstract and practical, some of which have been discussed on this blog or published in other journals, and many of which are entirely novel. The paper is Open Access and available to anyone who wants to read it. For the time being, if you are looking for trauma therapy in Manhattan, or Union Square psychotherapy, it might be premature to reach for AI. Fortunately, finding affordable therapy in Manhattan isn't impossible. Reach out for a free consultation.






Lexington Park Psychotherapy 

1123 Broadway, New York, NY, 10010

85 Fifth Ave, New York, NY, 10003

