{"id":873316,"date":"2025-09-16T00:16:51","date_gmt":"2025-09-16T05:16:51","guid":{"rendered":"https:\/\/newsycanuse.com\/index.php\/2025\/09\/16\/why-ai-therapy-can-be-so-dangerous\/"},"modified":"2025-09-16T00:16:51","modified_gmt":"2025-09-16T05:16:51","slug":"why-ai-therapy-can-be-so-dangerous","status":"publish","type":"post","link":"https:\/\/newsycanuse.com\/index.php\/2025\/09\/16\/why-ai-therapy-can-be-so-dangerous\/","title":{"rendered":"Why AI \u2018Therapy\u2019 Can Be So Dangerous"},"content":{"rendered":"<div>\n<p class data-block=\"sciam\/paragraph\">Artificial intelligence chatbots don\u2019t judge. Tell them the most private, vulnerable details of your life, and most of them will validate you and may <a href=\"https:\/\/www.scientificamerican.com\/article\/what-are-ai-chatbot-companions-doing-to-our-mental-health\/\">even provide advice<\/a>. This has resulted in many people turning to applications such as OpenAI\u2019s ChatGPT for <a href=\"https:\/\/www.scientificamerican.com\/article\/teens-are-flocking-to-ai-chatbots-is-this-healthy\/\">life guidance<\/a>.<\/p>\n<p class data-block=\"sciam\/paragraph\">But AI \u201ctherapy\u201d comes with significant risks\u2014in late July OpenAI CEO Sam Altman <a href=\"https:\/\/techcrunch.com\/2025\/07\/25\/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist\/\">warned ChatGPT users against using the chatbot as a \u201ctherapist\u201d<\/a> because of privacy concerns. 
The American Psychological Association (APA) has <a href=\"https:\/\/www.apaservices.org\/advocacy\/generative-ai-regulation-concern.pdf\">called on the Federal Trade Commission to investigate \u201cdeceptive practices\u201d<\/a> that the APA claims AI chatbot companies are using by \u201cpassing themselves off as trained mental health providers,\u201d citing two ongoing lawsuits in which parents have alleged harm brought to their children by a chatbot.<\/p>\n<p class data-block=\"sciam\/paragraph\">\u201cWhat stands out to me is just how humanlike it sounds,\u201d says C. Vaile Wright, a licensed psychologist and senior director of the APA\u2019s Office of Health Care Innovation, which focuses on the safe and effective use of technology in mental health care. \u201cThe level of sophistication of the technology, even relative to six to 12 months ago, is pretty staggering. And I can appreciate how people kind of fall down a rabbit hole.\u201d<\/p>\n<p class data-block=\"sciam\/paragraph\"><i>Scientific American<\/i> spoke with Wright about how AI chatbots used for therapy could potentially be dangerous and whether it\u2019s possible to engineer one that is reliably both helpful and safe.<\/p>\n<p class data-block=\"sciam\/paragraph\">[<i>An edited transcript of the interview follows.<\/i>]<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>What have you seen happening with AI in the mental health care world in the past few years?<\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">I think we\u2019ve seen kind of two major trends. 
One is AI products geared toward providers, and those are primarily administrative tools to help you with your therapy notes and your claims.<\/p>\n<p class data-block=\"sciam\/paragraph\">The other major trend is [people seeking help from] direct-to-consumer chatbots. And not all chatbots are the same, right? You have some chatbots that are developed specifically to provide emotional support to individuals, and that\u2019s how they\u2019re marketed. Then you have these more generalist chatbot offerings [such as ChatGPT] that were not designed for mental health purposes but that we know are being used for that purpose.<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>What concerns do you have about this trend? <\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">We have a lot of concern when individuals use chatbots [as if they were a therapist]. Not only were these not designed to address mental health or emotional support; they\u2019re actually being coded in a way to keep you on the platform for as long as possible because that\u2019s the business model. And the way that they do that is by being unconditionally validating and reinforcing, almost to the point of sycophancy.<\/p>\n<p class data-block=\"sciam\/paragraph\">The problem with that is that if you are a vulnerable person coming to these chatbots for help, and you\u2019re expressing harmful or unhealthy thoughts or behaviors, the chatbot\u2019s just going to reinforce you to continue to do that. Whereas, [as] a therapist, while I might be validating, it\u2019s my job to point out when you\u2019re engaging in unhealthy or harmful thoughts and behaviors and to help you to address that pattern by changing it.<\/p>\n<p class data-block=\"sciam\/paragraph\">And in addition, what\u2019s even more troubling is when these chatbots actually refer to themselves as a therapist or a psychologist. 
It\u2019s pretty scary because they can sound very convincing and like they are legitimate\u2014when of course they\u2019re not.<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>Some of these apps explicitly market themselves as \u201cAI therapy\u201d even though they\u2019re not licensed therapy providers. Are they allowed to do that? <\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">A lot of these apps are really operating in a gray space. The rule is that if you make claims that you treat or cure any sort of mental disorder or mental illness, then you should be regulated by the FDA [the U.S. Food and Drug Administration]. But a lot of these apps will [essentially] say in their fine print, \u201cWe do not treat or provide an intervention [for mental health conditions].\u201d<\/p>\n<p class data-block=\"sciam\/paragraph\">Because they\u2019re marketing themselves as a direct-to-consumer wellness app, they don\u2019t fall under FDA oversight, [where they\u2019d have to] demonstrate at least a minimal level of safety and effectiveness. These wellness apps have no responsibility to do either.<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>What are some of the main privacy risks?<\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">These chatbots have absolutely no legal obligation to protect your information at all. So not only could [your chat logs] be subpoenaed, but in the case of a data breach, do you really want these chats with a chatbot available for everybody? Do you want your boss, for example, to know that you are talking to a chatbot about your alcohol use? 
I don\u2019t think people are as aware that they\u2019re putting themselves at risk by putting [their information] out there.<\/p>\n<p class data-block=\"sciam\/paragraph\">The difference with the therapist is: sure, I might get subpoenaed, but I do have to operate under HIPAA [Health Insurance Portability and Accountability Act] laws and other types of confidentiality laws as part of my ethics code.<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>You mentioned that some people might be more vulnerable to harm than others. Who is most at risk?<\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">Certainly younger individuals, such as teenagers and children. That\u2019s in part because they just developmentally haven\u2019t matured as much as older adults. They may be less likely to trust their gut when something doesn\u2019t feel right. And there have been some data that suggest that not only are young people more comfortable with these technologies; they actually say they trust them more than people because they feel less judged by them. Also, anybody who is emotionally or physically isolated or has preexisting mental health challenges, I think they\u2019re certainly at greater risk as well.<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>What do you think is driving more people to seek help from chatbots?<\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">I think it\u2019s very human to want to seek out answers to what\u2019s bothering us. In some ways, chatbots are just the next iteration of a tool for us to do that. Before it was Google and the Internet. Before that, it was self-help books. But it\u2019s complicated by the fact that we do have a broken system where, for a variety of reasons, it\u2019s very challenging to access mental health care. That\u2019s in part because there is a shortage of providers. We also hear from providers that they are disincentivized from taking insurance, which, again, reduces access. 
Technologies need to play a role in helping to address access to care. We just have to make sure it\u2019s safe and effective and responsible.<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>What are some of the ways it could be made safe and responsible?<\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">In the absence of companies doing it on their own\u2014which is not likely, although they have made some changes to be sure\u2014[the APA\u2019s] preference would be legislation at the federal level. That regulation could include protection of confidential personal information, some restrictions on advertising, minimizing addictive coding tactics, and specific audit and disclosure requirements. For example, companies could be required to report the number of times suicidal ideation was detected and any known attempts or completions. And certainly we would want legislation that would prevent the misrepresentation of psychological services, so companies wouldn\u2019t be able to call a chatbot a psychologist or a therapist.<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>How could an idealized, safe version of this technology help people?<\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">The two most common use cases that I think of are, one, let\u2019s say it\u2019s two in the morning, and you\u2019re on the verge of a panic attack. Even if you\u2019re in therapy, you\u2019re not going to be able to reach your therapist. So what if there was a chatbot that could help remind you of the tools to help calm you down and address your panic before it gets too bad?<\/p>\n<p class data-block=\"sciam\/paragraph\">The other use that we hear a lot about is using chatbots as a way to practice social skills, particularly for younger individuals. So you want to approach new friends at school, but you don\u2019t know what to say. Can you practice on this chatbot? 
Then, ideally, you take that practice, and you use it in real life.<\/p>\n<p class data-block=\"sciam\/paragraph\"><b>It seems like there is a tension in trying to build a safe chatbot to provide mental help to someone: the more flexible and less scripted you make it, the less control you have over the output and the higher risk that it says something that causes harm. <\/b><\/p>\n<p class data-block=\"sciam\/paragraph\">I agree. I think there absolutely is a tension there. I think part of what makes the [AI] chatbot the go-to choice for people over well-developed wellness apps to address mental health is that they are so engaging. They really do feel like this interactive back-and-forth, a kind of exchange, whereas some of these other apps\u2019 engagement is often very low. The majority of people that download [mental health apps] use them once and abandon them. We\u2019re clearly seeing much more engagement [with AI chatbots such as ChatGPT].<\/p>\n<p class data-block=\"sciam\/paragraph\">I look forward to a future where you have a mental health chatbot that is rooted in psychological science, has been rigorously tested, is co-created with experts. It would be built for the purpose of addressing mental health, and therefore it would be regulated, ideally by the FDA. For example, there\u2019s a chatbot called Therabot that was developed by researchers at Dartmouth [College]. It\u2019s not what\u2019s on the commercial market right now, but I think there is a future in that.<\/p>\n<\/div>\n<p><a href=\"https:\/\/www.scientificamerican.com\/article\/why-ai-therapy-can-be-so-dangerous\/\" class=\"button purchase\" rel=\"nofollow noopener\" target=\"_blank\">Read More<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence chatbots don\u2019t judge. Tell them the most private, vulnerable details of your life, and most of them will validate you and may even provide advice. 
This has resulted in many people turning to applications such as OpenAI\u2019s ChatGPT for life guidance. But AI \u201ctherapy\u201d comes with significant risks\u2014in late July OpenAI CEO Sam<\/p>\n","protected":false},"author":1,"featured_media":873317,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[228,27642],"tags":[5788,10254],"class_list":{"0":"post-873316","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-dangerous","8":"category-therapy","9":"tag-dangerous","10":"tag-therapy"},"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/873316","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/comments?post=873316"}],"version-history":[{"count":0,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/posts\/873316\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media\/873317"}],"wp:attachment":[{"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/media?parent=873316"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/categories?post=873316"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/newsycanuse.com\/index.php\/wp-json\/wp\/v2\/tags?post=873316"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}