Psychotherapy and AI Part 3: Post-Deployment and Institutional Challenges
- Jordan Conrad

- Oct 23

In previous posts, I have discussed several challenges that must be addressed before AI psychotherapists can be effectively or ethically deployed. To reiterate, there are considerable theoretical advantages to developing digital mental health interventions (DMHI), ranging from increasing access to mental health treatment in geographically remote areas to reducing costs and circumventing the stigma around mental health care.
The concerns I have about DMHI, however, may undermine these sizeable advantages. In my first post on the issue, I discussed limitations of artificially intelligent psychotherapists. Some of these limitations – specifically, the biases programmed into the software and the overreliance on operationalized treatments rather than effective ones – are temporary and will be resolved in due course. One problem, however – the problem of foreknowledge – is, I have argued, unresolvable: if digital programs are incapable of having genuinely qualitative or intentional states, many people will find using them unhelpful. For a more sustained discussion of that issue, see here.
In the second post, I discussed so-called “interaction problems” – problems that arise when people actually use DMHI. There I discussed how digital psychotherapists could harm users without malfunctioning. That is, the harm they would cause their “patients” would not result from a malfunction or from something “going wrong” – everything could work exactly as designed, and patients could still suffer serious consequences. I also discussed how the indeterminacy of language can produce harmful consequences when a program is used across cultures (as it is supposed to be), as well as the very real concern that DMHI could widen already troubling inequalities in health and healthcare.
In this post, I will discuss the problems that may arise at the institutional and governmental levels. Without adequate oversight, AI could cause very real harm to a definitionally vulnerable user base.
Clinician Guidance, Efficacy, and Safety
Enthusiasm for AI has caused the pace of program development to outstrip social and institutional efforts to regulate its use. Although several nations – including Australia, Canada, New Zealand, the U.K., and the U.S. – provide some information concerning the efficacy and risks of DMHI, none offers clinician-focused guidance on how to incorporate digital programs into practice, nor patient-focused guidance on how to determine which program is right for them.
This is a problem. The majority of AI therapy programs available to the public cannot legitimately claim to be evidence-based, nor can they substantiate their efficacy claims. Several companies have taken advantage of the misleading categories “wellness app” and “health app”: a program labeled a “wellness app” is exempt from providing empirical support for its claims and need not supply safety data.
And the safety of these apps is itself unclear: one recent meta-analysis found that over one-third of studies of DMHI included no safety data at all, while the remaining studies used methods that were inadequate or varied too much to be compared; another found that the few existing studies assessing safety in mental health chatbots had a high risk of bias.
Company Responsibility
When you are in therapy with a human psychotherapist, your relationship is with that psychotherapist – they are responsible for their interactions with you and for informing you of any upcoming changes in session frequency or treatment. When you are in therapy with a digital program, by contrast, your relationship is with the company that owns the intellectual property of the AI therapist. That means your therapist might change, or cease to exist altogether, overnight and without your knowledge, based on the company’s interests and financial responsibilities: you might wake up to find that the therapist you have been working with for years no longer exists.
Patient Data
Within the AI landscape, data-use violations are troublingly common. In 2025, the Italian Supervisory Authority had to fine Replika for violating data privacy regulations. In 2023, BetterHelp was fined by the FTC for sharing “consumers’ sensitive data with third parties [advertisers] after promising to keep it private”. In 2022, the FCC had to probe a suicide hotline that shared users’ texts with a for-profit AI company. These are serious violations, not only legally but ethically, and we do not yet have the appropriate institutions to regulate this kind of company malfeasance.


