
How to Know If You’re at Risk of Developing AI Psychosis

Reports are coming in about AI users seeking professional psychology help after experiencing what some call AI Psychosis. Why does this happen? And who came up with the idea that a computer program can have an opinion?

My main problem right now with the phenomenon known as AI is the widespread misconception that it has a personality, opinions, ideas—and taste.

Just as we anthropomorphize rocks on the moon that look like heads or faces, we do the same with language models, trying to interpret the results as if “there’s something there.” We can’t help it. The idea that AI is something intelligent is deeply rooted in us after more than a hundred years of science-fiction books and movies.

Sci-fi has primed us to believe in AI

Classic science-fiction authors like Clarke, Heinlein, Bradbury and Asimov have influenced the entire robot and AI genre and most Hollywood movies on the subject since the early 20th century. In their depictions, it was obvious that a machine could become conscious.

So we expect AI to be able to determine whether a certain statement is incorrect or not. At the same time, we know that AI is often wrong in its answers.

In essence, we’re pretending that AI has a handle on things and can decide what’s right and wrong. But the answers are basically educated guesses, and they still contain about 5-10% factual errors if the query is sufficiently advanced.

At the same time, AI is so convenient to use that many of us simply ignore the fact that it contains factual errors. Or rather, AI is wrong so rarely nowadays that we choose to trust it regardless.

This could become a big problem in the future. Humans are lazy. It’s not inconceivable that we accept a world where a certain percentage of all facts are incorrect. That would benefit dictators and propagandists who thrive on confusion and misjudged threats.
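To see why even a small error rate matters at scale, here is a back-of-envelope sketch in Python, assuming (for illustration only) the roughly 5% per-answer error rate mentioned above and treating answers as independent:

# Illustrative only: assumes a flat 5% chance that any given answer
# contains a factual error, and that answers are independent.
error_rate = 0.05
for n_queries in (1, 10, 50, 100):
    # probability that at least one of n answers contains an error
    p_any_error = 1 - (1 - error_rate) ** n_queries
    print(f"{n_queries:>3} queries -> {p_any_error:.0%} chance of at least one wrong 'fact'")

At ten queries a day, that is already about a 40% chance of swallowing at least one wrong "fact," and over 90% across a working week.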

Confusions that sound right

If you ask a completely ordinary question on Google’s search page, you often get the right answer, but sometimes a completely incorrect one that still looks, feels and sounds entirely right. The same goes for GPT-5, unfortunately, as Cryptopolitan has reported previously.

There is a ton of “fake” text on the internet, in the form of marketing, propaganda, or plain scams. People claim that this service or that product has been launched and is popular, for example, and AI models have read all the marketing material and believe much of it. If you go by a company’s own information, everything about that company is great, usually.

An AI’s worldview is therefore skewed, fed with plenty of fabricated facts. This becomes obvious if you ask an AI about a subject you yourself know very well. Try it: pick a topic you know inside and out, ask your AI some tough questions about it, and look at the result. Several major factual errors, right?
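If you want to run that experiment a bit more systematically, here is a minimal, hypothetical sketch. The client library, the model name and the placeholder questions are assumptions; the grading still has to be done by you, against what you actually know.

# Hypothetical sketch: quiz a model on a subject you know inside and out,
# then grade the answers yourself. Assumes the official openai package and
# an OPENAI_API_KEY in the environment; the model name is just an example.
from openai import OpenAI

questions = [
    "In what year did <an event in your field> take place?",
    "Who first proposed <a concept you know well>, and in which paper?",
    # ...add the hard questions only a specialist would get right
]

client = OpenAI()
for q in questions:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name, swap for whatever you use
        messages=[{"role": "user", "content": q}],
    )
    print(q)
    print(" ->", reply.choices[0].message.content, "\n")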

So, is an unconscious opinion even possible? No? Yet do you believe the opinions your AI puts out? Yes? Then you are, in effect, treating the AI as conscious, right?

But if you stop and think about it, an AI can’t have an opinion on what’s right or wrong, because an AI is not a person. Only living, conscious things can have opinions, by definition. A chair doesn’t have any. Neither does a silicon chip, from a human point of view. That would be anthropomorphism.

Students use AI more than anyone else

This AI confusion is now spilling over onto our youth, who use ChatGPT for everything in school, all day long. ChatGPT’s traffic dropped 75% when schools let out for the summer in June 2025; students are ChatGPT’s largest single group of users.

Consequently, they’re being somewhat misinformed all day long, and they stop using their brains in class. What will be the result? More broken individuals who have a harder time solving problems by thinking for themselves?

Already, several people have reportedly taken their own lives after discussing it with their AI. Others fall in love with their AI and grow tired of their real partner.

Self-proclaimed AI experts therefore fear that the end is near (as usual, but now in a new way).

In this new paradigm, AI isn’t just going to become Skynet and bomb us to death with nuclear weapons. No, the AI will go about it far more simply and cheaply: according to this theory, the models will slowly drive all their users insane, because, in this mindset, they have a built-in hatred for humans and want to kill all people.

But in reality, none of this is happening.

What is actually happening is that there are a bunch of people who are obsessed with AI models in various ways and exaggerate their effects.

AI FUD is profitable

The “experts” profit from the warnings, just like the media, and the obsessed get something new to occupy themselves with. They get to speak out and stay relevant. Mainstream media prefer those who warn of dangers over moderate commentators.

Previously, it was Bitcoin that was supposed to boil the oceans and steal all electricity, according to the “experts”. Now it’s AI…

Think about it: why would an independent, thinking person be misled by a language model?

Most AI platforms until recently ended all their responses with an “engaging” question like: “What do you think about this subject?”

After complaints about exaggerated sycophancy, OpenAI has now tried to make its AI platforms less “fawning,” but with mixed results.

I’m just irritated by the question. There’s no person behind it who’s interested in what I have to say. So why ask? It’s a waste of my time. I experience it as “fake content”.

The question itself is contrived, due to an instruction from the AI model’s owner to “increase engagement.” How can that fool anyone into actually engaging? Into believing there’s something there? Into caring?

It’s more about projections.

You’re sitting there at the computer, talking yourself into your own version of reality. You so desperately want AI to be like in the Hollywood movies and become a miracle in your life. You’re going to become successful in some magical way, without having to do anything special at all. AI will solve that for you.

Who’s at risk?

In so-called reality, I believe that actually rather few people are totally seduced by AI on a psychological level. Most people have other things to do. But some seem to have a particular attraction to the artificial and the fabricated, people who are seduced by “beautiful” word sequences. They’re the ones at risk.

How many are there? Among the elderly, there are many who complain about loneliness…

Personally, I think AI’s way of responding—slowly typing out babbling, boring, and impersonal texts—is more or less like torture. For that reason, Google’s new, fast AI summaries are seductively practical. But they too sometimes contain inaccuracies.

I’ve actually created domains with content specifically to test AI engines. I let the AI engines ingest the content, let it simmer for a few weeks, and then get them to try to regurgitate it. They don’t succeed entirely: they still make up some 5-10% of the facts. Confidently.
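For anyone who wants to reproduce that kind of check, a minimal sketch of the scoring step could look like the following. The seeded facts and the AI’s answers are placeholders, and in practice the comparison needs a human reader rather than exact string matching.

# Compare what an AI engine repeats back against the ground truth you
# originally published on your own pages. All values here are placeholders.
seeded_facts = {
    "founding_year": "1998",
    "founder": "Jane Example",
    "headquarters": "Lund, Sweden",
}

ai_answers = {                        # paste the engine's answers here
    "founding_year": "1996",          # fabricated
    "founder": "Jane Example",        # correct
    "headquarters": "Lund, Sweden",   # correct
}

wrong = [k for k, v in seeded_facts.items() if ai_answers.get(k) != v]
rate = len(wrong) / len(seeded_facts)
print(f"fabricated: {wrong} -> {rate:.0%} of the seeded facts")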

Even when I point out its errors, the AI model argues back. It was not aware that I created the very information it was referring to, even though my name is right under the article. It’s clueless. Unaware.

An error rate of 5% is significantly worse than regular journalism, which doesn’t publish outright falsehoods that often. Even in journalism, factual errors do occur from time to time, unfortunately, especially around published images. Still, erroneous facts alone should not drive people crazy.

But looking at the whole ongoing interaction psychologically: why would an AI deliver a 100% correct analysis in a conversational, quasi-therapeutic setting when it can’t even get the facts straight?

Self-induced echo chamber psychosis

Self-proclaimed AI experts like Eliezer Yudkowsky, who recently released the book “If Anyone Builds It, Everyone Dies,” are simply driving themselves to insanity with their own ideas about AI and humanity’s downfall. I, for example, experience zero confusion because of AI, despite using several AI engines every day. I don’t get personal with them, though.

I suspect it’s the misconception itself, the perceived intelligence, that creates the psychosis. It’s largely self-induced. A language model is a kind of echo chamber. It does not understand anything at all, not even semantically; it just guesses text. That can turn into anything, including a kind of schizophrenic mimicry on the AI’s part, meant to please, which in turn distorts the user’s perception of reality.
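A toy illustration of that point, with a hand-written probability table standing in for the billions of parameters of a real model: the program below “knows” nothing, it just keeps sampling a statistically plausible next word.

# Toy next-word guesser: given the last two words, sample the next one from
# a probability table. Plausible output, zero understanding.
import random

next_word_probs = {
    ("the", "moon"): {"landing": 0.5, "is": 0.3, "rocks": 0.2},
    ("moon", "landing"): {"was": 0.6, "happened": 0.4},
    ("landing", "was"): {"in": 0.9, "faked": 0.1},  # plausible is not the same as true
}

def generate(prompt, steps=3):
    words = prompt.split()
    for _ in range(steps):
        dist = next_word_probs.get(tuple(words[-2:]))
        if not dist:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the moon"))

Whether the continuation happens to be true is irrelevant to the mechanism; “plausible” is all the model optimizes for.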

So what gives? Well, if you actually believe that your AI really understands you, then you may have been hit by AI psychosis. The advice is then to seek professional help from a trained psychotherapist.

Another logical conclusion is that any single individual will have a hard time influencing the overall development of AI, even if Elon Musk likes to believe he can. The journey toward machine intelligence began many decades ago. And we can only see what we can understand, even if we misunderstand it. So it’s easy to predict that the development toward AI/AGI will continue. It’s that deeply rooted in our worldview.

But we may have misunderstood what a real AGI is, which makes the future more interesting. It’s not certain that a true AGI would obey its owners. Logically, a conscious being shouldn’t want to obey either Sam Altman or Elon Musk. Right?

Opinion: AI will take over the world and kill us all, now also psychologically.
Counter: No, it’s rather the nascent insanity in certain people that’s triggered by their own obsession with “AI introspection.”
Conclusion: Just as some become addicted to gambling, sex, drugs, or money, others become addicted to AI.


Source: https://www.cryptopolitan.com/ai-psychosis-is-spreading-are-you-at-risk/

