Looking back at 80 years of AI history, these five lessons are worth learning.


The lessons learned from the 80-year development history of artificial intelligence may help AI companies navigate the ups and downs of the next 30 days or 30 years.

Written by: Gil Press

Translated by: Felix, PANews

On July 9, 2025, Nvidia became the first publicly traded company to reach a market value of $4 trillion. What lies ahead for Nvidia and the volatile AI field?

While predictions are difficult, there is a wealth of data available. At the very least, that data can help clarify why past predictions did not materialize: in what ways, for what reasons, and to what extent they failed. That data is history.

What lessons can be drawn from the 80-year development history of artificial intelligence (AI)? Throughout this journey, funding has fluctuated, research and development methods have varied widely, and public sentiment has oscillated between curiosity, anxiety, and excitement.

The history of AI began in December 1943, when neurophysiologist Warren S. McCulloch and logician Walter Pitts published a paper applying mathematical logic to the nervous system. In "A Logical Calculus of the Ideas Immanent in Nervous Activity," they speculated about idealized, simplified networks of neurons and how, by transmitting or not transmitting impulses, such networks could perform simple logical operations.
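To make the idea concrete, here is a minimal sketch, in Python, of the kind of threshold unit McCulloch and Pitts described: a unit that "fires" only when the weighted sum of its binary inputs reaches a threshold. The weights and thresholds below are illustrative assumptions for this sketch, not values from the 1943 paper, which used a logical calculus rather than code.

    # McCulloch-Pitts-style threshold unit (illustrative sketch only).
    # The unit outputs 1 ("fires") when the weighted sum of its binary inputs
    # reaches the threshold, and 0 otherwise.
    def mp_neuron(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Simple logical operations expressed as single threshold units:
    AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
    OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a: mp_neuron([a], [-1], threshold=0)

    print(AND(1, 1), AND(1, 0))  # 1 0
    print(OR(0, 1), OR(0, 0))    # 1 0
    print(NOT(0), NOT(1))        # 1 0

Chained together, units of this kind can compute more complex logical functions, which is part of what made the paper so suggestive to later researchers.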

Ralph Lillie, who was pioneering the field of organized chemistry at the time, described McCulloch and Pitts's work as giving "reality" to "logical and mathematical models" in the absence of "experimental facts." Later, when the hypotheses of that paper failed empirical testing, Jerome Lettvin at MIT pointed out that while the fields of neurology and neurobiology ignored the paper, it inspired "a group destined to become enthusiasts of a new field (now known as AI)."

In fact, McCulloch and Pitts's paper inspired "connectionism," the specific variant of AI that dominates today under the name "deep learning" and has recently been rebranded simply as "AI." Although this approach bears no relation to how the brain actually works, the statistical analysis method underpinning it, "artificial neural networks," is routinely described by AI practitioners and commentators as "mimicking the brain." Authorities and top practitioners such as Demis Hassabis claimed in 2017 that McCulloch and Pitts's fictional description of how the brain works and the research that followed it "continue to lay the foundation for contemporary deep learning research."

Lesson One: Be wary of conflating engineering with science, speculation with science, and scientific papers full of mathematical symbols and formulas with science. Above all, resist the tempting illusion that "we are like gods": the belief that humans are no different from machines and that humans can create machines just like humans.

This stubborn and pervasive arrogance has been a catalyst for technological bubbles and the cyclical fervor of AI over the past 80 years.

This brings to mind artificial general intelligence (AGI): machines that will soon possess human-like intelligence, or even superintelligence.

In 1957, AI pioneer Herbert Simon declared, "There are now machines that can think, learn, and create." He also predicted that within a decade a computer would become world chess champion. In 1970, another AI pioneer, Marvin Minsky, confidently stated, "Within three to eight years, we will have a machine with the intelligence of an ordinary person… Once computers take over, we may never regain control. We will depend on their benevolence to survive. If we are lucky, they may decide to keep us as pets."

Expectations of AGI's imminent arrival were strong enough to shape government spending and policy. In 1981, Japan allocated $850 million to its Fifth Generation Computer project, aiming to develop machines that think like humans. In response, in 1983, after a long "AI winter," the U.S. Defense Advanced Research Projects Agency (DARPA) planned to renew funding for AI research with the goal of developing machines that could "see, hear, speak, and think like humans."

It took enlightened governments around the world about a decade and billions of dollars to sober up, not only about AGI but also about the limitations of traditional AI. But in 2012 connectionism finally triumphed over the other AI schools of thought, and a new wave of predictions about the imminent arrival of AGI swept the globe. In 2023, OpenAI claimed that superintelligent AI, "the most impactful technology humanity has ever invented," could arrive within this decade and "could lead to the disempowerment of humanity or even human extinction."

Lesson Two: Be cautious of shiny new things. Examine them carefully, prudently, and wisely; they may not be so different from earlier rounds of speculation about when machines will attain human-like intelligence.

One of the "fathers" of deep learning, Yann LeCun, stated, "To enable machines to learn as efficiently as humans and animals, we still lack some key elements, though we do not yet know what they are."

For years, AGI has been said to be "just around the corner," all because of the "first-step fallacy." Yehoshua Bar-Hillel, a pioneer of machine translation and one of the first to discuss the limits of machine intelligence, pointed out the widespread assumption that once a computer has been shown to do a task that until recently was thought beyond machines, even if it does the task poorly, only further technological development is needed to perfect it; with patience, the goal will eventually be reached. But Bar-Hillel warned as early as the mid-1950s that this was not the case, and reality has repeatedly proven him right.

Lesson Three: The distance from being unable to do something to doing it poorly is often much shorter than the distance from doing it poorly to doing it well.

In the 1950s and 1960s, many fell into the "first-step fallacy" because of the ever-increasing processing speed of the semiconductors powering computers. As hardware advanced each year along the reliable upward trajectory of Moore's Law, it was widely believed that machine intelligence would develop in lockstep with the hardware.

However, beyond the steady improvement of hardware, AI development entered a new phase that introduced two new elements: software and the collection of data. Starting in the mid-1960s, expert systems (programs that capture and apply specialist knowledge) shifted the focus to acquiring and programming knowledge about the real world, in particular the knowledge and heuristics of domain experts. Expert systems became increasingly popular, and by the 1980s an estimated two-thirds of Fortune 500 companies were using the technology in their daily operations.
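As a rough illustration of what "programming the knowledge of domain experts" meant in practice, here is a toy forward-chaining rule engine in Python. The rules are hypothetical examples invented for this sketch, not rules from any actual commercial expert system; the point is that every new piece of expert knowledge had to be hand-written and maintained as another IF-THEN rule.

    # Toy forward-chaining rule engine in the spirit of 1980s expert systems.
    # Each rule says: IF all of these facts hold, THEN conclude a new fact.
    # (The rules below are invented for illustration only.)
    RULES = [
        ({"fever", "cough"}, "suspect_flu"),
        ({"suspect_flu", "shortness_of_breath"}, "refer_to_specialist"),
        ({"fever", "rash"}, "suspect_measles"),
    ]

    def forward_chain(facts, rules):
        # Keep applying rules until no new conclusions can be drawn.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "shortness_of_breath"}, RULES))
    # -> includes 'suspect_flu' and 'refer_to_specialist'

Even this toy version hints at the maintenance problem: covering a realistic domain means writing, testing, and updating thousands of such rules by hand, which is exactly the knowledge-acquisition bottleneck described below.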

However, by the early 1990s this AI boom had collapsed completely. Numerous AI startups went bankrupt, and major companies froze or canceled their AI projects. As early as 1983, expert systems pioneer Ed Feigenbaum had identified the "key bottleneck" that would lead to their demise: scaling up the knowledge acquisition process, "a very tedious, time-consuming, and expensive process."

Expert systems also struggled with accumulating knowledge: the constant need to add and update rules made them difficult and costly to maintain. They also exposed the shortcomings of thinking machines relative to human intelligence. They were "brittle," making absurd mistakes when faced with unusual inputs, unable to transfer their expertise to new domains, and lacking any understanding of the world around them. At the most fundamental level, they could not learn from examples, from experience, or from their environment the way humans do.

Lesson Four: Initial success, meaning widespread adoption by businesses and government agencies and substantial public and private investment, does not necessarily produce a lasting "new industry," even after ten or fifteen years. Bubbles often burst.

Amid the ups and downs, the hype and the setbacks, two distinctly different approaches to AI development have competed for the attention of academia, public and private investors, and the media. For over forty years, rule-based, symbolic AI dominated. Yet the other major approach, example-based, statistics-driven connectionism, briefly enjoyed favor in the late 1950s and again in the late 1980s.

Before connectionism's revival in 2012, AI research and development was driven primarily by academia. Academia is marked by dogma (the so-called "normal science"), and the choice was binary: symbolic AI or connectionism. In his 2019 Turing Award lecture, Geoffrey Hinton spent much of his time recounting the hardships that he and a handful of deep learning devotees endured at the hands of mainstream AI and machine learning academics. Hinton also pointedly downplayed reinforcement learning and the work of his colleagues at DeepMind.

Just a few years later, in 2023, DeepMind took over Google's AI efforts (and Hinton left Google), largely in response to the success of OpenAI, which had also made reinforcement learning a component of its AI development. In 2025, the two pioneers of reinforcement learning, Andrew Barto and Richard Sutton, received the Turing Award.

However, there is currently no sign that DeepMind, OpenAI, or the many "unicorns" dedicated to AGI are looking at anything beyond the currently dominant paradigm of large language models. Since 2012, the center of AI development has shifted from academia to the private sector, yet the entire field remains fixated on a single research direction.

Lesson Five: Do not put all your AI "eggs" in one "basket."

There is no doubt that Jensen Huang is an outstanding CEO and Nvidia is an exceptional company. More than a decade ago, when the opportunity for AI suddenly emerged, Nvidia quickly seized it, as its chips (originally designed for efficient video game rendering) were well-suited for deep learning computations. Huang remains vigilant, telling employees, "Our company is only 30 days away from bankruptcy."

In addition to maintaining vigilance (remember Intel?), the lessons learned from the 80-year development history of AI may also help Nvidia navigate the ups and downs of the next 30 days or 30 years.
