
Looking back at the 80-year development of AI, these 5 historical lessons are worth learning

2025/07/16 15:38

By Gil Press

Compiled by: Felix, PANews

On July 9, 2025, Nvidia became the first public company to reach a market value of $4 trillion. Where will Nvidia and the volatile AI field go next?

Although prediction is difficult, there is a wealth of data that can at least show us why past predictions did not come true, in what ways, to what extent, and for what reasons. That is what history offers.

What lessons can be learned from the 80-year history of artificial intelligence (AI), a history that has seen highs and lows in funding, widely varying approaches to research and development, and public curiosity, anxiety, and excitement?

The history of AI began in December 1943, when neurophysiologist Warren S. McCulloch and logician Walter Pitts published a paper on mathematical logic. In "A Logical Calculus of the Ideas Immanent in Nervous Activity," they speculated about idealized, simplified networks of neurons and how those networks could perform simple logical operations by passing, or not passing, impulses.
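To make the idea concrete, here is a minimal sketch (in Python, purely illustrative and not from the original paper) of a McCulloch-Pitts style neuron: binary inputs are summed, and the unit "fires" only if the sum reaches a threshold, which is enough to realize simple logical operations such as AND and OR.

```python
# A minimal sketch of a McCulloch-Pitts style neuron: binary inputs,
# equal weights, and a threshold. The neuron "fires" (returns 1) only
# when enough inputs are active -- sufficient for simple logic gates.

def mcculloch_pitts_neuron(inputs, threshold):
    """Fire (1) if the number of active inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, threshold 2 behaves like logical AND,
# and threshold 1 behaves like logical OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b,
              "AND:", mcculloch_pitts_neuron([a, b], threshold=2),
              "OR:", mcculloch_pitts_neuron([a, b], threshold=1))
```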

Ralph Lillie, who was then pioneering the field of tissue chemistry, described McCulloch and Pitts's work as giving "logical and mathematical models a 'reality'" in the absence of "experimental facts." Later, when the paper's hypotheses failed empirical tests, Jerome Lettvin of MIT noted that while the fields of neurology and neurobiology ignored the paper, it had inspired "a community of enthusiasts in what was destined to become the new field now known as AI."

In fact, McCulloch and Pitts' paper inspired "connectionism," the specific variant of AI that dominates today, later known as "deep learning" and more recently rebranded simply as "AI." The statistical analysis methods that underpin this variant, "artificial neural networks," are often described by AI practitioners and commentators as "mimicking the brain," despite the approach having nothing to do with how the brain actually works. In 2017, the prominent AI researcher Demis Hassabis declared that McCulloch and Pitts' speculative description of how the brain works and similar research "continue to lay the foundation for contemporary deep learning research."

Lesson 1: Beware of confusing engineering with science, science with speculation, and science with papers full of mathematical symbols and formulas. Most importantly, resist the delusion that humans are no different from machines and that we can create machines that are just like humans.

This stubborn and pervasive hubris has been the catalyst for tech bubbles and periodic AI manias over the past 80 years.

This brings to mind artificial general intelligence (AGI), the idea that machines will soon attain human-level or even superhuman intelligence.

In 1957, AI pioneer Herbert Simon declared: "We now have machines that think, learn, and create." He also predicted that within a decade a computer would become chess champion. In 1970, another AI pioneer, Marvin Minsky, confidently stated: "In three to eight years, we will have a machine with the intelligence of an average human being... Once the computers get control, we may never get it back. We will live at their mercy. If we are lucky, they may decide to keep us as pets."

Anticipation of the advent of AGI was so strong that it even shaped government spending and policy. In 1981, Japan allocated $850 million to its Fifth Generation Computer project, which aimed to develop machines that think like humans. In response, the U.S. Defense Advanced Research Projects Agency (DARPA), after a long "AI winter," planned in 1983 to re-fund AI research to develop machines that could "see, hear, speak, and think like humans."

It took about a decade and billions of dollars for enlightened governments around the world to come to terms with the limitations not only of AGI but of traditional AI as well. By 2012, however, connectionism had finally triumphed over the other AI schools, and a new wave of predictions about the imminent arrival of AGI swept the world. OpenAI declared in 2023 that superintelligent AI, "the most impactful invention ever made by mankind," could arrive within this decade and "could lead to the disempowerment of humanity or even human extinction."

Lesson 2: Be wary of shiny new things. Examine them carefully, cautiously, and wisely; they may not be much different from earlier speculation about when machines will attain human-like intelligence.

Yann LeCun, one of the "godfathers" of deep learning, once said: "To make machines learn as efficiently as humans and animals, we are still missing something key, but we don't know what it is yet."

For years, AGI has been said to be "just around the corner," thanks to the "first-step fallacy." Machine translation pioneer Yehoshua Bar-Hillel, one of the first to discuss the limits of machine intelligence, observed that many people assume that once a computer is shown to do something no one thought it could do, even if it does it badly, only further technological development is needed for it to do the task perfectly: just wait, and it will eventually happen. But Bar-Hillel warned as early as the mid-1950s that this is not so, and reality has proven him right time and again.

Lesson 3: The distance from not being able to do something to doing it poorly is usually much shorter than the distance from doing it poorly to doing it well.

In the 1950s and 1960s, many people fell for the "first-step fallacy" as the processing speed of the semiconductors powering computers kept increasing. As hardware improved along the reliably rising annual trajectory known as "Moore's Law," it was widely assumed that machine intelligence would keep pace with the hardware.
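As a back-of-the-envelope illustration of that assumption (the figures below are illustrative, not historical data), Moore's Law is commonly stated as transistor density doubling roughly every two years, which compounds into exponential growth:

```python
# Moore's Law as commonly stated: transistor density doubles roughly
# every two years. The numbers below are illustrative, not historical.

def relative_capability(years, doubling_period=2.0):
    """Relative transistor count after `years`, given periodic doubling."""
    return 2 ** (years / doubling_period)

for years in (2, 10, 20, 40):
    print(f"After {years:2d} years: ~{relative_capability(years):,.0f}x")

# The hardware curve was real; the first-step fallacy was assuming
# machine intelligence would climb the same exponential.
```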

Alongside continuing improvements in hardware performance, however, AI development entered a new stage that introduced two new elements: software and data collection. Starting in the mid-1960s, expert systems, programs that encode the knowledge of human experts, placed a new focus on acquiring and programming real-world knowledge, especially the knowledge of experts in specific fields and their rules of thumb (heuristics). Expert systems grew increasingly popular, and by the 1980s an estimated two-thirds of Fortune 500 companies were applying the technology in their daily business activities.
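For a sense of how such systems worked, here is a toy sketch of the if-then rule pattern behind them (the rules below are hypothetical placeholders, not real domain knowledge); a production system encoded thousands of such hand-written rules, which is exactly the acquisition bottleneck discussed just below.

```python
# Toy sketch of the if-then rule pattern behind 1980s expert systems.
# These rules are hypothetical placeholders, not real expertise; a
# production system held thousands of hand-encoded rules like them.

RULES = [
    # (facts that must all be present, conclusion to add)
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
    ({"rash"}, "possible_allergy"),
]

def forward_chain(facts):
    """Repeatedly apply rules until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "fatigue"}))
# -> includes 'possible_flu' and 'recommend_rest'
```

The same toy hints at the brittleness described below: any input that matches no hand-written rule simply produces nothing.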

By the early 1990s, however, the AI craze had collapsed completely. Many AI startups went bankrupt, and major companies froze or canceled their AI projects. As early as 1983, expert systems pioneer Ed Feigenbaum had identified the "key bottleneck" that would lead to their demise: scaling up the knowledge acquisition process, "a very cumbersome, time-consuming and expensive process."

Expert systems also faced the problem of knowledge accumulation: the need to constantly add and update rules made them difficult and costly to maintain. They also exposed the shortcomings of these thinking machines relative to human intelligence. They were "brittle," making ridiculous mistakes when faced with unusual inputs; they could not transfer their expertise to new domains; and they lacked understanding of the world around them. At the most fundamental level, they could not learn from examples, experience, or environment the way humans do.

Lesson 4: Initial success—widespread adoption by businesses and government agencies and massive public and private investment—may not necessarily lead to a lasting “new industry” even after ten or fifteen years. Bubbles tend to burst.

Amid the ups and downs, the hype and the setbacks, two very different approaches to AI development have competed for the attention of academia, public and private investors, and the media. For more than four decades, symbolic, rule-based AI dominated. But connectionism, the other major approach, example-based and statistics-driven, enjoyed brief periods of popularity in the late 1950s and the 1980s.

Prior to the connectionist renaissance of 2012, AI research and development was driven primarily by academia, characterized by dogma (so-called "normal science") and an entrenched either/or choice between symbolic AI and connectionism. In 2019, Geoffrey Hinton devoted much of his Turing Award lecture to the hardships he and a handful of deep learning devotees had endured at the hands of mainstream AI and machine learning academics. Hinton also went out of his way to disparage reinforcement learning and the work of his colleagues at DeepMind.

Just a few years later, in 2023, DeepMind took over Google's AI efforts (and Hinton left Google as well), largely in response to the success of OpenAI, which had also made reinforcement learning an integral part of its AI development. In 2025, two pioneers of reinforcement learning, Andrew Barto and Richard Sutton, received the Turing Award.

However, there is no sign that DeepMind, OpenAI, or any of the many "unicorn" companies working on AGI is looking beyond the prevailing paradigm of large language models. Since 2012, the center of gravity of AI development has shifted from academia to the private sector; yet the entire field remains fixated on a single research direction.

Lesson 5: Don’t put all your AI eggs in one basket.

Nvidia CEO Jensen Huang is undoubtedly an outstanding executive, and Nvidia is an outstanding company. When the AI opportunity suddenly emerged more than a decade ago, Nvidia was quick to seize it, because the parallel processing power of its chips, originally designed for efficiently rendering video games, was well suited to deep learning computations. Huang has always stayed vigilant, telling employees: "We are only 30 days away from bankruptcy."

Beyond staying vigilant (remember Intel?), lessons learned from 80 years of AI development may also help Nvidia weather the ups and downs of the next 30 days or 30 years.

Related reading: A look at the 10 AI companies and models that are defining the current AI revolution
