The post How to Know If You’re at Risk of Developing AI Psychosis appeared on BitcoinEthereumNews.com.

How to Know If You’re at Risk of Developing AI Psychosis

Reports are coming in about AI users seeking professional psychology help after experiencing what some call AI Psychosis. Why does this happen? And who came up with the idea that a computer program can have an opinion?

My main problem right now with the phenomenon known as AI is the widespread misconception that it has a personality, opinions, ideas—and taste.

Just as we anthropomorphize rocks on the moon that look like heads or faces, we do the same with language models, trying to interpret the results as if “there’s something there.” We can’t help it. The idea that AI is something intelligent is deeply rooted in us after more than a hundred years of science-fiction books and movies.

Sci-fi has primed us to believe in AI

Classic science-fiction authors like Clarke, Heinlein, Bradbury and Asimov have influenced the entire robot and AI genre and most Hollywood movies on the subject since the early 20th century. In their depictions, it was obvious that a machine could become conscious.

So we expect AI to be able to determine whether a certain statement is incorrect or not. At the same time, we know that AI is often wrong in its answers.

In essence, we’re pretending that AI has a handle on things and can decide what’s right and wrong. But the answers are basically educated guesses, and they still contain about 5-10% factual errors if the query is sufficiently advanced.

At the same time, AI is so convenient to use that many of us simply ignore the fact that it contains factual errors. Or rather, AI is now wrong so rarely that we choose to trust an AI, regardless.

This could become a big problem in the future. Humans are lazy. It’s not inconceivable that we accept a world where a certain percentage of all facts are incorrect. That would benefit dictators and propagandists who thrive on confusion and misjudged threats.

Confusions that sound right

If you ask a completely ordinary question on Google’s search page, you often get the right answer, but sometimes a completely incorrect one that still looks, feels and sounds entirely right. The same goes for GPT-5, unfortunately, as Cryptopolitan has reported previously.

There is a ton of “fake” text on the internet, in the form of marketing, propaganda, or plain scams. People claim that this service or that product has been launched and is popular, for example, and AI models have read all the marketing material and believe much of it. If you go by a company’s own information, everything about that company is great, usually.

AI’s worldview is therefore skewed, fed with a pile of fabricated facts. This becomes obvious if you ask an AI about a subject where you yourself are very knowledgeable. Try it yourself. What topic do you know inside out? Ask your AI some tough questions on that topic. What was the result? Several major factual errors, right?

So, is an unconscious opinion possible? No? Do you believe the opinions your AI puts out? Yes? Then you are, in effect, treating the AI as conscious, right?

But if you stop and think about it, an AI can’t have an opinion on what’s right or wrong, as an AI is not a person. Only living, conscious things can have opinions, by definition. A chair does not have one. A silicon chip can’t either, from the human point of view. That would be anthropomorphism.

Students use AI more than anyone else

This AI confusion mess is now spilling over onto our youth, who use ChatGPT for everything in school, all day long. ChatGPT’s traffic dropped 75% when schools let out in June 2025. Students are ChatGPT’s largest single group of users.

Consequently, they’re being somewhat misinformed all day long, and they stop using their brains in class. What will be the result? More broken individuals who have a harder time solving problems by thinking for themselves?

Some users have reportedly taken their own lives after confiding in their AI. Others fall in love with their AI and tire of their real-life partner.

Self-proclaimed AI experts therefore fear that the end is near (as usual, but now in a new way).

In this new paradigm, AI will not merely become Skynet and bomb us to death with nuclear weapons. No, it will be much simpler and cheaper than that for the AI: according to this theory, the models will slowly drive all their users insane. In this mindset, AI models harbor a built-in hatred for humans and want to kill us all.

But in reality, none of this is happening.

What is actually happening is that there are a bunch of people who are obsessed with AI models in various ways and exaggerate their effects.

AI FUD is profitable

The “experts” profit from the warnings, just like the media, and the obsessed have something new to occupy themselves with. They get to speak out and be relevant. Mainstream media prefer those who warn us of dangers, not the moderate commentators.

Previously, it was Bitcoin that was supposed to boil the oceans and steal all electricity, according to the “experts”. Now it’s AI…

Think about it: why would an independent, thinking person be misled by a language model?

Most AI platforms until recently ended all their responses with an “engaging” question like: “What do you think about this subject?”

After complaints of exaggerated sycophancy, OpenAI has now tried to make its AI platforms less “fawning,” but it’s going so-so.

I’m just irritated by the question. There’s no person behind it who’s interested in what I have to say. So why ask? It’s a waste of my time. I experience it as “fake content”.

The question itself is contrived, due to an instruction from the AI model’s owner to “increase engagement.” How can that fool anyone into actually engaging? Into believing there’s something there? Into caring?

It’s more about projections.

You’re sitting there at the computer, suggesting a reality to yourself. You so desperately want AI to be like in the Hollywood movies and become a miracle in your life. You’ll become successful in some magical way without having to do anything special at all. AI will solve that for you.

Who’s at risk?

In the so-called real world, I believe rather few people are totally seduced by AI on a psychological level. Most people have other things to do. But some seem to have a particular attraction to the artificial and the fabricated, people who are seduced by “beautiful” word sequences. They’re the ones at risk.

How many are there? Among the elderly, there are many who complain about loneliness…

Personally, I think AI’s way of responding—slowly typing out babbling, boring, and impersonal texts—is more or less like torture. For that reason, Google’s new, fast AI summaries are seductively practical. But they too sometimes contain inaccuracies.

I’ve actually created domains with content specifically to test AI engines. I let the engines ingest the content, simmer for a few weeks, and then try to get them to regurgitate it. They don’t succeed entirely: they still fabricate some 5-10% of the facts. Confidently.

Even when I point out its errors, the AI model counter-argues. It was not aware that I created the very information it was citing, even though my name appears under the article. It’s clueless. Unaware.

A 5% rate of inaccuracies is significantly worse than regular journalism, which rarely publishes outright falsehoods. Even in journalism, factual errors do occur from time to time, unfortunately, especially in connection with published images. Still, erroneous facts alone should not drive people crazy.

However, if you look at the whole ongoing interaction psychologically: why would the AI produce a 100% correct analysis under conversational, quasi-therapeutic circumstances, when it can’t even get the facts straight?

Self-induced echo chamber psychosis

Self-proclaimed AI experts like Eliezer Yudkowsky, who recently released the book “If Anyone Builds It, Everyone Dies,” are simply driving themselves to insanity with their own ideas about AI and humanity’s downfall. I, for example, experience zero confusion because of AI, despite using several AI engines every day. I don’t get personal with them, though.

I suspect that it’s the misconception itself, the perceived intelligence, that creates the psychosis. It’s basically self-induced. A language model is a kind of echo chamber. It does not understand anything at all, not even semantically. It just predicts text. That can turn into anything, including a kind of schizophrenic mimicry on the AI’s side, meant to please, which in turn distorts the user’s perception of reality.

So what gives? Well, if you actually believe that your AI really understands you, then you may have been hit by AI psychosis. The advice is then to seek professional help from a trained psychotherapist.

Another logical conclusion is that any single individual will have a hard time influencing the overall development of AI, even if Elon Musk likes to believe otherwise. The journey toward machine intelligence began many decades ago. And we can only see what we can understand, even when we misunderstand it. So it’s easy to predict that the development toward AI/AGI will continue. It’s that deeply rooted in our worldview.

But we may have misunderstood what a real AGI is, which makes the future more interesting. It’s not certain that a true AGI would obey its owners. Logically, a conscious being shouldn’t want to obey either Sam Altman or Elon Musk. Right?

Opinion: AI will take over the world and kill us all, now also psychologically.
Counter: No, it’s rather the nascent insanity in certain people that’s triggered by their own obsession with “AI introspection.”
Conclusion: Just as some become addicted to gambling, sex, drugs, or money, others become addicted to AI.

Source: https://www.cryptopolitan.com/ai-psychosis-is-spreading-are-you-at-risk/