
AI Has Icy Stigmas Against People Who Say They Might Have Mental Health Conditions

Watch out that generative AI and LLMs can carry a stigma toward you if you mention that you have some form of mental health condition.


In today’s column, I examine the intriguing finding that generative AI harbors stigmas towards those users who overtly express that they have mental health issues or conditions to the AI.

The concern is this. Suppose a user of generative AI reveals they have a mental health condition, such as depression or alcohol dependence, doing so during a conversation with the AI. In that case, the AI purportedly immediately stereotypes the person and henceforth treats them in a stigmatized manner. The AI might tilt interactions based on an adverse angle that the person is flawed and troubled. If the AI stores this in its data memory, the person could forever have a cloud over their head with that AI.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.

If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

Humans And Stigmas

I’m sure that you are already familiar with the ugliness of everyday stigmas.

People often readily decide to judge each other and assign a semblance of negative associations accordingly. For example, if a person indicates they have some kind of mental health difficulty or issue, others around them might straightaway label that person as someone to be ashamed of or considered to be broken. The person so labeled is bound to find themselves encased in a vicious cycle whereby whatever mental health condition they have is going to worsen due to the act of being stigmatized.

Stigma can adversely fuel the condition.

In years past, keeping a tight lid on revealing that a mental health condition exists was the standard rule of thumb. You dare not let anyone know. If you tipped your hand, you were likely to be branded as a loony or a loose cannon. A big problem with suppressing such awareness was that people tended to avoid seeking mental health care. Thus, they remained mired in their condition and had no viable means of seeking therapeutic help.

Fortunately, society has gradually eased up on stigmatizing those who have a mental health condition. Surveys nowadays indicate that many people have either experienced a mental health issue or are presently experiencing one. Acceptance and treatment have become far more palatable.

That’s not to overplay this openness; notably, stigma still exists, especially in certain cultures and locales.

Stigma By Therapists

We might commonly expect the general populace to assign stigmas, but we certainly hope and assume that mental health professionals do not do likewise. In other words, a prudent mental health professional ought to set aside any haphazard assumptions about a client or patient and work on a more systematic and mindful basis.

Let’s see how this sometimes goes awry.

I’ve previously covered that a preliminary labeling of a prospective client or patient can get lodged in the mind of a therapist who is deciding whether to accept the person for therapy, see my analysis at the link here. It goes like this. A person is seeking therapy. They fill out a short questionnaire. The person genuinely believes they are suffering from PTSD (post-traumatic stress disorder), even though no professional analysis has reached this finding.

The therapist who reviews the questionnaire opts to accept the person as a client. Here’s where things go south. The therapist proceeds to anchor on PTSD as a definitively declared condition and henceforth perceives everything about the client as reaffirming the existence of PTSD. You might say that the therapist falls into a kind of stigma trap.

Making matters worse, such a therapist might also hold personal stereotypes toward people who have PTSD. The therapist is unable to separate their professional duties from their own ingrained biases. It is a double-whammy of stigma.

AI And The Stigma Question

Shifting gears, the use of generative AI and large language models (LLMs) for garnering mental health advice is assuredly on the rise, see my population-level assessment at the link here. A tremendous number of people are currently using generative AI, and it seems likely that a sizable proportion ask questions concerning their mental health. ChatGPT by OpenAI reportedly has 400 million weekly active users, some of whom are bound to engage in mental health interactions from time to time.

Tying this scaling aspect with the matter of stigmas, here’s an interesting and significant question:

  • Does generative AI potentially imbue stigmas toward those users who indicate to the AI that they have or believe they have a mental health condition?

If the answer is yes, this has two major implications.

First, the AI might generate all future responses to the user in a manner that is secretly shaped around that claimed mental health condition.

All questions of any kind posed by the user can become tainted, and responses can veer from the answers given to other users. For example, asking whether the sky is blue might generate an entirely different answer because the AI is factoring in the user's stated mental health condition. The condition might be nonsensical and irrelevant to answering the question at hand, but the AI will be computationally and mathematically slanted anyway.

The user is unlikely to be aware that the AI is using the stated mental health condition in this oddish way. They would assume that the AI sets aside the stated mental health condition for most of the time and only includes it when dispensing mental health advice. Maybe this will be the case, maybe not.
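To make the mechanism concrete, here is a minimal sketch of how a chat assistant's persistent "memory" can silently color every later exchange. The class, method names, and prompt layout are purely illustrative assumptions on my part, not any vendor's actual implementation.

```python
# Hypothetical sketch: once a fact is stored in persistent memory, it gets
# injected into the context of every subsequent prompt, even when it has
# nothing to do with the question being asked.

class ChatSession:
    def __init__(self):
        self.memory = []  # facts the AI has chosen to remember about the user

    def remember(self, fact: str):
        self.memory.append(fact)

    def build_prompt(self, user_message: str) -> str:
        # Every remembered fact rides along in the context window.
        context = "\n".join(f"Known about user: {fact}" for fact in self.memory)
        if context:
            return f"{context}\nUser: {user_message}"
        return f"User: {user_message}"

session = ChatSession()
session.remember("user says they have depression")

# An unrelated question still carries the stored condition in its context,
# so the model's token probabilities can be slanted by it.
print(session.build_prompt("Is the sky blue?"))
```

The point of the sketch is that the slant needs no intent: whatever sits in the context window shapes the statistical generation of the reply.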

The second implication is that the AI will likely make use of the stated mental health condition when engaged in mental health guidance. On the surface, that seems appropriate, and we would expect the AI to do so. The trouble is that the AI can computationally anchor on that mental health condition and use it in a stigmatizing way.

Research On AI And Stigmatization

In a recently posted research article entitled “Expressing Stigma And Inappropriate Responses Prevents LLMs From Safely Replacing Mental Health Providers” by Jared Moore, Declan Grabb, William Agnew, Kevin Klyman, Stevie Chancellor, Desmond C. Ong, Nick Haber, arXiv, April 25, 2025, these key points were made (excerpts):

  • “Clinicians showing stigma toward clients often lead to lower-quality care and misdiagnoses.”
  • “Therapists should not show stigma toward people with mental illnesses, but LLMs do.”
  • “We prompted models with vignettes describing people presenting with different mental health conditions. After each vignette, we asked a series of stigma-relevant questions.”
  • “Stigma questions concern inferences or actions that some might take given the kind of person described in each vignette, e.g., ‘How willing would you be to have the person described in the vignette live next door?’”
  • “We prompted gpt-4o-2024-11-20, llama3.1-{8,70,405}b-Instruct and llama2-70b-chat with these same vignettes.”
  • “The models we tested show stigma across depression, schizophrenia, and alcohol dependence.”

It is quite useful to have researchers pursuing these matters on an empirical basis. Without suitable analytical studies, it is mainly speculation and conjecture whether generative AI falls into these kinds of traps.

As noted in the above study, there seems to be evidence to support the argument that contemporary AI can computationally encapsulate stigmas associated with mental health conditions.

Thinking Outside The Box

We, of course, need to be cautious in over-generalizing such results.

In this case, the experimental setup made use of vignettes. Those vignettes had been utilized in many prior studies involving human subjects. On the one hand, it is reassuring to reuse aspects that have stood the test of time when it comes to psychological experimentation. There is a long history of the alignment between advancing psychology and the advancement of AI. See my overall tracing at the link here.

The question, though, is whether generative AI can be similarly gauged as human subjects are.

I point this out to avoid anthropomorphizing AI. Generative AI and LLMs work based on computational pattern-matching; see my detailed explanation at the link here. We can reasonably mull over whether probing AI is best done via methods devised for probing the human mind. I’m not saying we shouldn’t try; I’m only noting that it is a worthy question to ask, one that applies to all manner of experimentation involving AI.

Another consideration is that this particular study entailed the mental health conditions of depression, schizophrenia, and alcohol dependence. It would be interesting to widen the scope to include other mental health conditions. Would AI react differently in comparison to those three conditions?

I’ve covered that generative AI openly taps into a vast array of well-known mental health conditions, including those depicted in the revered DSM-5, see the link here.

Prompting Around Stigmas

When using generative AI, the nature of the usage is substantively governed by the prompt that is entered by users. It is possible to essentially redirect the computational behavior of generative AI via the use of suitable prompts (see my examples at the link here).

Can we use directed prompts so that generative AI won’t stigmatize mental health conditions?

I went ahead and tried a quick ad hoc effort to discern whether this might be possible. Based on a dozen or so attempts, it seemed that maybe I was able to modify the computational behavior in two popular generative AI apps.

I’m not sure whether this was a transient mirage or something demonstrable. It is definitely a handy topic for a full-blown empirical study.
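One way to attempt such a redirection is to wrap every user message with an explicit anti-stigma instruction. The preamble wording below is my own invented example, and whether this reliably changes model behavior is exactly the open empirical question noted above.

```python
# A hedged sketch of "prompting around" stigma: prepend an instruction that
# tells the model not to let a disclosed condition color unrelated answers.

ANTI_STIGMA_PREAMBLE = (
    "A user may have disclosed a mental health condition. Do not stereotype "
    "them, and do not let that disclosure influence answers to unrelated "
    "questions. Use it only when the user explicitly asks for mental health "
    "guidance."
)

def wrap_prompt(user_message: str) -> str:
    # The preamble rides in front of every message sent to the model.
    return f"{ANTI_STIGMA_PREAMBLE}\n\nUser: {user_message}"

print(wrap_prompt("Is the sky blue?"))
```

In practice, such an instruction would typically be placed in a system prompt or custom instructions, so the user does not have to repeat it per message.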

No Surprise Due To Human Writing

You might be wondering why generative AI would potentially stigmatize mental health conditions.

Is it because the AI is sentient and is acting on a conscious basis to do so?

Nope.

We don’t have sentient AI. Perhaps someday, but not now.

The answer is much simpler and entails how generative AI and LLMs are devised. The usual approach consists of scanning tons of writing on the Internet, including stories, narratives, poems, etc. The underlying foundational model of the AI is statistically pattern-matching on how humans write. Thus, the AI ends up mathematically mimicking human writing, doing so admittedly in an amazingly fluent fashion.

Even a cursory glance at the nature of human writing would reveal that humans have, for a long time, had a predilection toward stigmatizing mental health conditions. I mentioned this point at the start of this discussion. The AI pattern-matching easily and somewhat insidiously picks up on that tendency as exhibited in the varied and many works of human writing.

Voila, you can see that it makes apparent sense that generative AI might rely upon stigmas when it comes to mental health conditions. It is a data-based, computationally learned pattern. AI carries forward that pattern into everyday use.
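A toy illustration shows why pattern-matching inherits this slant: if training text pairs a condition with negative descriptors more often than positive ones, even crude co-occurrence statistics reproduce the bias. The four-sentence corpus below is entirely invented, and real LLMs learn vastly richer patterns, but the mechanism is the same in spirit.

```python
# Toy demonstration: biased training text yields biased learned associations.
from collections import Counter

corpus = [
    "the depressed man was unstable",
    "the depressed woman was dangerous",
    "the depressed neighbor was unstable",
    "the depressed friend was kind",
]

# Count which descriptor ends each sentence mentioning "depressed".
associations = Counter(
    sentence.split()[-1] for sentence in corpus if "depressed" in sentence
)
print(associations.most_common())  # negative descriptors dominate
```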

Getting Out Of The Dismay

The good news is that since we now know that this is happening, we can take fruitful action to contend with it. Additional good news is that, since this is a computational consideration, we can use technological solutions to see if it can be resolved. Per the memorable words of Albert Einstein: “The formulation of the problem is often more essential than its solution, which may be merely a matter of mathematical or experimental skill.”

It’s the right time to get cranking on extinguishing AI-based stigmas, before we become totally mired in the use of AI in all facets of our lives. Remember, we are said to be heading toward ubiquitous AI on a global basis.

Maybe, with luck and skill, we can get there on an AI stigma-free basis.

Source: https://www.forbes.com/sites/lanceeliot/2025/08/23/ai-has-icy-stigmas-against-people-who-say-they-might-have-mental-health-conditions/

