
Looking back at the 80-year development of AI, these 5 historical lessons are worth learning

2025/07/16 15:38

By Gil Press

Compiled by: Felix, PANews

On July 9, 2025, Nvidia became the first public company to reach a market value of $4 trillion. Where will Nvidia and the volatile AI field go next?

Although prediction is difficult, there is a wealth of data that can at least show us which past predictions failed to come true, in what ways, and why. That is history.

What lessons can be drawn from the 80-year history of artificial intelligence (AI), a history of highs and lows in funding, widely varying approaches to research and development, and alternating public curiosity, anxiety, and excitement?

The history of AI began in December 1943, when neurophysiologist Warren S. McCulloch and logician Walter Pitts published a paper on mathematical logic. In "A Logical Calculus of the Ideas Immanent in Nervous Activity," they speculated about idealized, simplified networks of neurons and how they could perform simple logical operations by transmitting or failing to transmit impulses.
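To make the idea concrete, here is a minimal sketch (in Python, not the authors' original notation) of such a threshold unit: binary inputs are summed, and the unit "fires" only if the sum reaches a fixed threshold, which is already enough to realize simple logical operations such as AND and OR.

```python
# A minimal sketch of a McCulloch-Pitts-style threshold unit, for illustration
# only: binary inputs are summed and the unit outputs 1 ("fires") only if the
# sum reaches a fixed threshold.

def mcculloch_pitts_unit(inputs, threshold):
    """Return 1 if enough binary inputs are active, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 behaves like logical AND,
# and a threshold of 1 behaves like logical OR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b,
              "AND:", mcculloch_pitts_unit([a, b], 2),
              "OR:", mcculloch_pitts_unit([a, b], 1))
```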

Ralph Lillie, who was then pioneering the field of tissue chemistry, described McCulloch and Pitts's work as giving "logical and mathematical models a 'reality'" in the absence of "experimental facts." Later, when the paper's hypotheses failed empirical tests, Jerome Lettvin of MIT noted that while the fields of neurology and neurobiology ignored the paper, it had inspired "a community of enthusiasts in what was destined to become the new field now known as AI."

In fact, McCulloch and Pitts' paper inspired "connectionism," the specific variant of AI that dominates today, now known as "deep learning" and more recently rebranded simply as "AI." The statistical analysis methods underpinning this variant, "artificial neural networks," are often described by AI practitioners and commentators as "mimicking the brain," despite the approach having nothing to do with how the brain actually works. In 2017, the pundit and top AI practitioner Demis Hassabis declared that McCulloch and Pitts' fictional description of how the brain works, and similar research, "continue to lay the foundation for contemporary deep learning research."

Lesson 1: Beware of confusing engineering with science, speculation with science, and papers full of mathematical symbols and formulas with science. Most importantly, resist the temptation to fall into the delusion that humans are no different from machines and that we can create machines that are like humans.

This stubborn and pervasive hubris has been the catalyst for tech bubbles and periodic AI manias over the past 80 years.

This brings us to artificial general intelligence (AGI), the notion that machines will soon attain human-level or even superhuman intelligence.

In 1957, AI pioneer Herbert Simon declared: "We now have machines that think, learn, and create." He also predicted that within a decade, computers would become chess champions. In 1970, another AI pioneer, Marvin Minsky, confidently stated: "In three to eight years, we will have a machine with the intelligence of an ordinary person... Once the computers take control, we may never get them back. We will live at their mercy. If we are lucky, they may decide to keep us as pets."

The anticipation of the advent of AGI was so strong that it even affected government spending and policy. In 1981, Japan allocated $850 million for its Fifth Generation Computer project, which aimed to develop machines that think like humans. In response, the U.S. Defense Advanced Research Projects Agency, after a long "AI winter," planned in 1983 to again fund AI research aimed at developing machines that could "see, hear, speak, and think like humans."

It took about a decade and billions of dollars for governments around the world to come to terms with the limitations not only of AGI but of traditional AI as well. By 2012, however, connectionism had finally triumphed over the rival AI schools, and a new wave of predictions about the imminent arrival of AGI swept the world. In 2023, OpenAI declared that superintelligent AI, "the most impactful invention ever made by mankind," could arrive within this decade and "could lead to the disempowerment of humanity or even human extinction."

Lesson 2: Be wary of shiny new things and look at them carefully, cautiously, and wisely. They may not be much different from previous speculations about when machines will have human-like intelligence.

Yann LeCun, one of the "godfathers" of deep learning, once said: "To make machines learn as efficiently as humans and animals, we are still missing something key, but we don't know what it is yet."

For years, AGI has been said to be "just around the corner," all because of the "first step fallacy." Machine translation pioneer Yehoshua Bar-Hillel, one of the first to discuss the limits of machine intelligence, observed that many people assume that once a computer has been shown to do something no one thought possible until recently, even if it does it badly, only further technological development is needed for it to do the task perfectly: just wait, and it will eventually happen. Bar-Hillel warned as early as the mid-1950s that this was not the case, and reality has proven him right time and again.

Lesson 3: The distance from not being able to do something to doing it poorly is usually much shorter than the distance from doing it poorly to doing it well.

In the 1950s and 1960s, many people fell for the "first step fallacy" as the processing speed of the semiconductors powering computers kept increasing. As hardware improved year after year along the reliable trajectory known as "Moore's Law," it was widely assumed that machine intelligence would advance in lockstep with the hardware.

Beyond steadily improving hardware, however, AI development entered a new stage that introduced two new elements: software and data collection. Starting in the mid-1960s, expert systems (programs that capture and apply the knowledge of human specialists) placed a new focus on acquiring and encoding real-world knowledge, especially the knowledge of experts in specific fields and their rules of thumb (heuristics). Expert systems became increasingly popular, and by the 1980s it was estimated that two-thirds of Fortune 500 companies applied the technology in their daily business activities.
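To illustrate the basic idea (the rules and facts below are hypothetical, not drawn from any historical system), a minimal expert system is little more than a set of hand-written if-then rules applied repeatedly to a store of known facts:

```python
# A toy forward-chaining rule engine: domain knowledge lives in hand-written
# if-then rules, and the loop keeps applying them until no new facts emerge.
# The rules and facts are invented purely for illustration.

facts = {"fever", "cough"}

# Each rule: (set of required facts, fact to conclude)
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "shortness_of_breath"}, "refer_to_doctor"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # "possible_flu" is derived; the second rule never fires
```

Every new piece of expertise means another hand-written rule, which is exactly the knowledge-acquisition bottleneck described below.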

However, by the early 1990s the AI craze had collapsed completely. Many AI startups went bankrupt, and major companies froze or canceled their AI projects. As early as 1983, expert-system pioneer Ed Feigenbaum had identified the "key bottleneck" that would lead to their demise: scaling up the knowledge acquisition process, "a very cumbersome, time-consuming and expensive process."

Expert systems also faced the problem of accumulating knowledge: the need to constantly add and update rules made them difficult and costly to maintain. They also exposed the shortcomings of these thinking machines compared with human intelligence. They were "brittle," making ridiculous mistakes when faced with unusual inputs, unable to transfer their expertise to new domains, and lacking any understanding of the world around them. At the most fundamental level, they could not learn from examples, experience, or the environment the way humans do.

Lesson 4: Initial success (widespread adoption by businesses and government agencies, and massive public and private investment) does not necessarily lead to a lasting "new industry," even after ten or fifteen years. Bubbles tend to burst.

Through these ups and downs, hype and setbacks, two very different approaches to AI vied for the attention of academia, public and private investors, and the media. For more than four decades, symbolic, rule-based AI dominated, while connectionism, the example-based, statistics-driven alternative, enjoyed brief periods of popularity in the late 1950s and the 1980s as the other major AI approach.

Prior to the connectionist renaissance of 2012, AI research and development was driven primarily by academia, characterized by dogma (so-called "normal science") and an insistence on choosing between symbolic AI and connectionism. In 2019, Geoffrey Hinton spent much of his Turing Award lecture describing the hardships he and a handful of deep learning devotees had endured at the hands of mainstream AI and machine learning academics. Hinton also went out of his way to disparage reinforcement learning and the work of his colleagues at DeepMind.

Just a few years later, in 2023, DeepMind took over Google's AI efforts (and Hinton left Google), largely in response to the success of OpenAI, which had made reinforcement learning an integral part of its own AI development. Two pioneers of reinforcement learning, Andrew Barto and Richard Sutton, received the Turing Award in 2025.

There is no sign, however, that DeepMind, OpenAI, or any of the many "unicorn" startups pursuing AGI are looking beyond the prevailing paradigm of large language models. Since 2012, the center of gravity of AI development has shifted from academia to the private sector, yet the entire field remains fixated on a single research direction.

Lesson 5: Don’t put all your AI eggs in one basket.

Jensen Huang is undoubtedly an outstanding CEO, and Nvidia is an outstanding company. When the AI opportunity suddenly emerged more than a decade ago, Nvidia was quick to seize it, because the parallel processing power of its chips (originally designed to render video games efficiently) was well suited to deep learning computations. Huang has always remained vigilant, telling employees: "We are only 30 days away from bankruptcy."

Beyond staying vigilant (remember Intel?), lessons learned from 80 years of AI development may also help Nvidia weather the ups and downs of the next 30 days or 30 years.

Related reading: A look at the 10 AI companies and models that are defining the current AI revolution
