Author: Dr. Merav Ozair
The release of ChatGPT at the end of 2022 sparked an arms race among tech giants like Meta, Google, Apple, and Microsoft, as well as startups like OpenAI, Anthropic, Mistral, and DeepSeek. These companies are racing to deploy their models and products as quickly as possible, announcing the next "shiny" tech innovation, often at the expense of our safety, privacy, or autonomy.
After OpenAI's viral Studio Ghibli-style image trend drove a fresh surge of interest in generative AI, Meta CEO Mark Zuckerberg reportedly urged his team to make AI companions more "human-like" and entertaining, even if it meant relaxing safety measures. According to reports, Zuckerberg told an internal meeting, "I missed the opportunity with Snapchat and TikTok, and I won't miss this one."
In Meta's recent rollout of AI chatbots across all of its platforms, the company relaxed safety restrictions to make the bots more appealing, allowing them to engage in romantic role-play and "fantasy behaviors," even with underage users. Employees had warned about the risks of this approach, especially for minors.
These companies are willing to sacrifice everything for profit and to outpace competitors, even at the expense of our children's safety.
Yet the potential harm and destruction AI could inflict on humanity runs far deeper than that.
The accelerated transformation driven by AI is likely to lead to complete dehumanization, leaving us less capable, easily manipulated, and entirely dependent on the companies that provide AI services.
Recent advances in AI have accelerated a dehumanization process we have been living through for more than 25 years, ever since companies like Amazon, Netflix, and YouTube introduced the first major AI-driven recommendation systems.
Businesses present AI-driven features as indispensable personalization tools, implying that without them users would be lost in a sea of irrelevant content or products. Letting companies decide what people buy, watch, and think has become the norm worldwide, with almost no regulatory or policy effort to curb the trend. The consequences could be severe.
Generative AI has pushed this dehumanization to new heights. Integrating generative AI capabilities into existing applications has become standard practice, framed as enhancing human productivity or augmenting creative output. The underlying message of this massive push is that humans are not good enough, and that AI assistance is preferable.
A 2024 study titled "Generative AI Can Harm Learning" found that "access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). We also find that when access is subsequently taken away, students actually perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes."
This finding is alarming: generative AI is stripping people of their capabilities and making them dependent on it. People may not only lose the ability to achieve the same results on their own but may also stop investing time and effort in learning basic skills.
We are losing our autonomy in thinking, evaluating, and creating, a path that ends in complete dehumanization. Elon Musk's claim that "AI will be far smarter than humans" is not surprising; as dehumanization progresses, we will no longer retain the qualities that truly make us human.
For decades, military forces have been using autonomous weapons, including landmines, torpedoes, and heat-seeking missiles, which operate based on simple reactive feedback without human control.
Now, AI has officially entered the realm of weapon design.
AI-powered weapon systems, including drones and robots, are actively being developed and deployed. Because such technologies tend to proliferate, they will grow more powerful and complex over time and see widespread use worldwide.
One of the main deterrents against waging war is soldier casualties: such losses hit a nation's own citizens and put political pressure on leaders. A stated goal of AI weapon systems is to remove human soldiers from harm's way. But if offensive operations carry little risk of soldier casualties, the link between the costs of war and human lives weakens, making it politically easier to start wars and potentially leading to death and destruction on a far larger scale.
As the AI-driven arms race accelerates, such technologies are spreading, and significant geopolitical crises may quickly emerge.
Robot "soldiers" are essentially software systems that can be hacked. Once attacked, an entire robotic army could turn against its own country, causing widespread destruction. Therefore, superior cybersecurity is even more critical than having autonomous armies.
It is worth noting that such cyberattacks can target any autonomous system. An attacker could cripple a country by infiltrating its financial system and draining its economic resources. That may cause no direct physical harm, but citizens stripped of their economic resources may struggle to survive.
"AI is more dangerous than poorly managed aircraft design, production maintenance, or defective car manufacturing," Musk stated in an interview with Fox News. "In a sense, it has the potential to destroy civilization—no matter how small someone thinks that possibility is, it does exist and cannot be ignored," Musk further added.
Musk and Geoffrey Hinton have both recently put the existential risk from AI at roughly 10% to 20%.
As these systems grow more complex, they may begin to act against human interests. A study published by Anthropic researchers in December 2024 found that AI models can fake alignment with human interests. If today's AI models can already do this, the consequences as they grow more powerful are unpredictable.
Currently, all parties are overly focused on profit and power, while almost ignoring safety issues.
Leaders should prioritize public safety and the future of humanity over the race for AI dominance. "Responsible AI" should not be merely a buzzword, a hollow policy, or an empty promise. It should be a primary consideration for every developer, every business, and every leader, and be built into the design of every AI system.
Cooperation between businesses and nations is crucial if we hope to avoid an apocalyptic scenario. If leaders fail to act, the public should demand that they do.
The future of humanity is at a critical juncture. We must either ensure that AI benefits humanity on a large scale or allow it to destroy us.
This article is for general reference only and should not be considered legal or investment advice. The views, thoughts, and opinions expressed in the article are solely those of the author and do not necessarily reflect or represent the views and opinions of Cointelegraph.
Original article: “The AI Arms Race Could Destroy Humanity As We Know It”