"Human Awakening" | Musk: Compared to the AI Tsunami, DOGE Is Barely Worth Mentioning


Just after leaving DOGE, Musk issued a shocking warning in a recent interview: humanity is standing at the starting point of an "intelligence explosion," and a "thousand-foot AI tsunami" is about to sweep in, making government reforms seem as insignificant as "cleaning a dirty beach."

Written by: Long Yue

Source: Wall Street Journal

Recently, the American startup accelerator Y Combinator (YC) held its first AI Startup School in San Francisco, inviting several heavyweight figures from the AI industry, including Elon Musk and OpenAI CEO Sam Altman.

Musk, who recently completed a 130-day term as a special government employee in the "Department of Government Efficiency" (DOGE), candidly described this experience in the interview as an "interesting side quest," but its importance pales in comparison to the impending AI revolution. He likened the work of the government efficiency department to "cleaning the beach," while the upcoming AI is a "thousand-foot tsunami."

Fixing the government is like… the beach is dirty, with needles, feces, and trash. But then there’s a thousand-foot wall of water, which is the AI tsunami. If a thousand-foot tsunami is about to hit, how much significance does cleaning the beach have? Not much.

Musk predicts that digital superintelligence, surpassing human intelligence, could arrive this year or next, and that the number of humanoid robots could eventually far exceed the human population, reaching 5 to 10 times its size. He boldly forecasts that an AI-driven economy will be thousands or even millions of times larger than today's, with the human share of total intelligence potentially dropping below 1%. Here are some key points from his remarks:

• Musk announced that he left DOGE on May 28, concluding his 130-day term as a special government employee, stating he is "returning to the main mission";

• Musk compared the work of the government efficiency department to "cleaning the beach," while the impending AI is a "thousand-foot tsunami," making the former seem insignificant in comparison;

• He predicts that digital superintelligence could arrive this year or next, stating, "If it doesn't happen this year, it will definitely happen next year";

• The future number of humanoid robots will far exceed that of humans, potentially being 5 times or even 10 times greater;

• He predicts that the scale of the AI-driven economy will be thousands or even millions of times larger than the current one, pushing civilization toward Kardashev Type II (the stellar energy level), with the human share of total intelligence potentially dropping below 1%;

• Musk emphasizes that "a rigorous adherence to the truth" is the most important cornerstone of AI safety, and forcing AI to believe in untruths is extremely dangerous;

• Reflecting on the early days of SpaceX, he noted that the success of the fourth rocket launch after three consecutive failures was a "life-or-death moment," and that Tesla's financing was completed at the last moment before bankruptcy in 2008.

Mission Accomplished for DOGE: Political Noise Too Loud, Returning to "Main Mission"

Musk candidly stated in the interview that his experience in Washington, D.C. made him deeply aware of how "terrible the signal noise in politics is." He described his work in D.C. as an "interesting side quest," but ultimately decided to "return to the main mission—building technology, which is what I love to do."

The billionaire explained the fundamental reason for his departure from government: "Fixing the government is like cleaning the beach—the beach is dirty, with needles, feces, and trash. But at the same time, there’s this thousand-foot wall of water, which is the AI tsunami. If you’re facing a thousand-foot tsunami, how important is cleaning the beach? Not that important."

AI Superintelligence is Imminent: It Will Definitely Arrive This Year or Next

Musk provided a very clear prediction regarding the arrival of digital superintelligence. He stated, "I think we are very close to digital superintelligence. If it doesn’t happen this year, it will definitely happen next year."

He defines "digital superintelligence" as "intelligence that is smarter than any human in anything." Musk predicts that AI will drive the economy to achieve exponential growth—"not just ten times larger than the current economy, but thousands or even millions of times larger."

AI will change the future so profoundly that its extent is hard to measure… Assuming we don’t go astray and AI doesn’t wipe us out and itself, what you will ultimately see is not an economy that is ten times larger than the current one. Ultimately, if our descendants (mainly machine descendants) become a Kardashev Type II or higher civilization, we will be talking about an economy that is thousands or even millions of times larger than today’s economy.

He further elaborated on the position of human intelligence in the future: "At some point, the percentage of human intelligence will become quite small. At some point, the total sum of human intelligence will be less than 1% of all intelligence."

xAI is Currently Training Grok 3.5

Musk revealed in the interview that xAI is currently training Grok 3.5, "focusing on reasoning abilities."

According to ZeroHedge, xAI is seeking $4.3 billion in equity financing, which will be combined with $5 billion in debt financing, covering both xAI and the social media platform X.

Hardware Race: An Engineering Miracle from Zero to 100,000 GPUs

Musk tackled the hardware challenges of AI training using first principles thinking. When suppliers told them it would take 18 to 24 months to complete a training supercluster of 100,000 H100 GPUs, Musk's team compressed it to 6 months.

They rented a defunct Electrolux factory in Memphis, solved the 150-megawatt power demand by leasing generators, rented a quarter of the mobile cooling equipment in the U.S., and used Tesla Mega Packs to smooth out power fluctuations during training. Musk even personally participated in the wiring work, "sleeping in the data center."

Currently, the training center has 150,000 H100s, 50,000 H200s, and 30,000 GB200s, with a second data center about to go online with 110,000 GB200s.
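The physics of the Megapack trick described above can be checked with back-of-envelope arithmetic: the load figures (150 MW total, a ~50% swing within ~100 ms) come from the article, while the generator ramp rate is an illustrative assumption of this sketch, not a figure from the interview.

```python
# Back-of-envelope: why batteries, not generators, smooth training transients.
# Load figures come from the article; the generator ramp rate is an assumed,
# order-of-magnitude illustrative number.

cluster_load_mw = 150      # total training load (from the article)
swing_fraction  = 0.5      # load can drop ~50% ...
swing_time_s    = 0.1      # ... within ~100 ms (from the article)

swing_mw = cluster_load_mw * swing_fraction        # 75 MW step change
ramp_needed_mw_per_s = swing_mw / swing_time_s     # 750 MW/s slew rate

# Large gensets ramp on the order of ~1 MW/s (assumed figure): far too slow.
generator_ramp_mw_per_s = 1.0
shortfall_factor = ramp_needed_mw_per_s / generator_ramp_mw_per_s  # ~750x

# Yet the energy a battery must absorb or deliver per transient is tiny:
energy_per_transient_kwh = swing_mw * 1_000 * swing_time_s / 3_600  # ~2.1 kWh

print(f"step: {swing_mw} MW, slew needed: {ramp_needed_mw_per_s} MW/s")
print(f"energy per 100 ms transient: {energy_per_transient_kwh:.2f} kWh")
```

With a single Megapack storing roughly 3.9 MWh, energy capacity is clearly not the constraint; response speed is, which is why millisecond-class battery inverters, rather than faster generators, fill this role.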

Multiple Future Visions: Robot Legions and Interstellar Civilization

Musk predicts that there will be at least 5 times the number of humanoid robots compared to humans, "maybe even 10 times." He admitted to having delayed in the AI and robotics field due to concerns about "making the Terminator a reality," but ultimately realized, "Whether I do it or not, it will happen. You are either a spectator or a participant. I would rather be a participant."

In a grander vision, Musk places human civilization within the framework of Kardashev levels. He believes humanity currently utilizes only 1-2% of Earth's energy, far from a Type I civilization. Becoming a multi-planetary species is a key step in expanding consciousness to the interstellar level, "greatly increasing the potential lifespan of civilization or consciousness."

Musk stated that SpaceX plans to transfer enough material to Mars in about 30 years to make Mars self-sufficient, "so that even if supply ships from Earth stop running, Mars can continue to develop and thrive."

The Full Interview Transcript is as Follows (Translated by AI)

Elon Musk

We are at a very, very early stage of the intelligence explosion. Becoming a multi-planetary species can greatly extend the potential survival time of civilization, consciousness, or intelligence (whether biological or digital). I think we are very close to digital superintelligence. If it doesn’t happen this year, it will definitely happen next year.

Garry Tan, CEO and President of YC

[Music] Let's give a warm welcome to Elon Musk. [Applause] Elon, welcome to the AI Startup School. We are truly, truly honored to have you here today. SpaceX, Tesla, Neuralink, xAI, and so on: was there a moment in your life, before you did all this, that made you feel, "I have to create something great"? What led you to make that decision?

Elon Musk

I initially didn’t think I could create anything great. I just wanted to try to do something useful, but I didn’t think I could create anything particularly great. From a probabilistic standpoint, that seemed unlikely, but I at least wanted to give it a shot.

Garry Tan

You are now facing a room full of people, all of whom are tech engineers, including some rising top AI researchers.

Elon Musk

Well, I think we should… I prefer the term "engineer" over "researcher." I mean, if there’s a breakthrough in some fundamental algorithm, that counts as research, but everything else is engineering.

Garry Tan

Maybe we can start from a long time ago. I mean, you are now facing a room full of young people aged 18 to 25. It’s skewing younger because the founding group is getting younger. Can you put yourself in their shoes? When you were 18 or 19, you know, learning to program, even coming up with the first idea for Zip2. What did that feel like for you?

Elon Musk

Yes, back in '95, I faced a choice: either go to Stanford for graduate school, actually in materials science, studying supercapacitors, wanting to use them in electric vehicles, essentially to solve the range problem for electric cars; or dive into this thing called "the internet," which most people had never heard of at the time. I talked to my professor, who was Bill Nix from the materials science department, and I said, can I take a semester off? Because this (the internet) might fail, and then I’d have to go back to school and continue my studies.

Then he said, this might be our last conversation. He was right. So, I thought the odds were that it would fail, not that it would succeed. Then in '95, I wrote… basically, what I think was the first, or close to the first, internet maps, directions, white pages, and yellow pages.

I wrote that code myself; I didn't even use a web server. I read the port directly, because I couldn't afford a web server, nor could I afford a T1 line. The initial office was on Sherman Avenue in Palo Alto. There was an ISP (internet service provider) just downstairs. So, I drilled a hole in the floor and ran a network cable straight down to the ISP.

Then, you know, my brother joined me, along with another co-founder, Greg Kouri, who has since passed away. At that time, we couldn't even afford a place to live, so we… rented an office for $500 a month, slept in the office, and showered at the YMCA on Page Mill Road. Yes, we ended up building a somewhat useful company, Zip2, in the early days. We did develop a lot of really, really great software technology, but we were somewhat "captured" by traditional media companies, because companies like Knight-Ridder and The New York Times were both investors and clients, and they were on the board.

So they always wanted to use our software in meaningless ways. So I wanted to go directly to consumers. Anyway, I won't go into too much detail about Zip2, but the core is that I really just wanted to do something useful online. Because I had two choices: either pursue a PhD and watch others build the internet, or participate in building the internet in some small way. I thought, I guess I can always try first, and if I fail, I can go back to graduate school. In any case, the outcome was quite successful, selling for about $300 million, which was a lot of money at the time. Now, I think the minimum starting valuation for an AI startup is at least $1 billion. It's like… there are so many damn unicorn companies now, it's like a herd of unicorns, a "unicorn" being a company valued at a billion dollars.

Garry Tan

Inflation has happened since then, so money has actually depreciated quite a bit.

Musk

Yes. I mean, in 1995, you could probably buy a hamburger for 5 cents? Well, maybe not that exaggerated, but I mean, yes, a lot of inflation has indeed occurred. But I mean, the hype around AI is quite high, as you can see. You know, you see some companies that have been established for less than a year sometimes getting valuations of a billion or even tens of billions of dollars. I guess some of them might succeed, and they might actually succeed. But seeing some of those valuations is indeed jaw-dropping. Yes, what do you think? I mean,

Garry Tan

I’m personally very optimistic. I’m actually very hopeful. So, I believe that all of you here will create a lot of value, and you know, there should be a billion people globally using these things. We haven't even scratched the surface yet. I really like that internet story, even back then, you were a lot like the people here, because you know, all the CEOs of traditional media companies saw you as the one who understood the internet. And now, for that vast world that doesn’t understand what AI is doing—the business world, or the whole world—they will rely on you, for the exact same reason. It sounds like you seem to know… what are some practical lessons? It sounds like one of them is not to give up board control, or to be very careful and have a really good lawyer.

Musk

I think the biggest mistake I made with my first startup was allowing traditional media companies to have too much shareholder and board control, which inevitably led them to view things from a traditional media perspective, and they would make you do things that seemed reasonable to them but were actually unreasonable from the perspective of new technology. I should point out that I initially didn't plan to start a company. I… I tried to get a job at Netscape. I submitted my resume to Netscape. Marc Andreessen knew about this.

But I don’t think he ever saw my resume, and then no one responded. So after that, I tried to hang around the Netscape lobby to see if I could "bump into" someone, but I was too shy to talk to anyone. So I thought, oh my god, this is ridiculous. Then I decided to write software myself and see what would happen. So, this wasn’t really from the standpoint of "I want to start a company." I just wanted to participate in building, you know, some part of the internet. Since I couldn’t find a job at an internet company, I had to start an internet company. In short, yes. Yes. I mean, AI will profoundly change the future. The extent is hard to measure, but you know, the economy, assuming we don’t go astray,

and AI doesn't wipe us and itself out, then you will ultimately see an economy that is not just ten times larger than the current economy; ultimately, if our descendants, mainly machine descendants, become a Kardashev Type II or higher civilization, then the scale of the economy we are talking about will be thousands of times larger than today's, maybe millions of times. So, yes, I mean, I did feel a bit, you know, when I was in Washington, D.C., being criticized for clearing waste and fraud, that was an interesting side quest, as side quests go. But I had to get back to the main mission. Well, I did feel like, you know, government reform is a bit like… the beach is dirty, with needles, feces, and trash, and you want to clean the beach, but at the same time there's a thousand-foot wall of water, the AI tsunami. If a thousand-foot tsunami is about to hit, how much significance does cleaning the beach really have? Not much.

Garry Tan

Oh, I'm glad you're back to the main mission. That's very important.

Musk

Yes, back to the main mission. Building technology, that's what I love to do. There's too much distraction. The signal noise in politics is just too bad.

Garry Tan

So, I mean, I live in San Francisco, so you don’t have to tell me twice (I get it too).

Musk

Yes, Washington, D.C. is like, you know, I guess the whole of Washington is politics, but if you're trying to build rockets or cars, or you're trying to make software compile and run reliably, then you have to pursue the truth to the maximum extent, otherwise your software or hardware won't work. It's like you can't cheat math; math and physics are harsh judges. So I'm used to being in that kind of environment where the pursuit of truth is maximized, and that's definitely not politics. So anyway, I'm glad to be back in, you know, the tech field.

Garry Tan

I'm a bit curious, going back to that moment with Zip2. Did you have a few hundred million dollars, or did you cash out a few hundred million?

Musk

I mean, I got $20 million, right?

Garry Tan

Well, so you at least solved the money problem. Then you basically took it and kept betting, and you continued with X.com, which merged with Confinity and became PayPal.

Musk

Yes. I left my chips on the table.

Garry Tan

Not everyone would do that. Many people here will have to make that decision in the future. What drove you to jump back into the fray?

Musk

I felt that with Zip2, we developed really great technology, but it was never really fully utilized. At least from my perspective, our technology was better than Yahoo or anyone else, but it was constrained by our clients (the media companies). So I wanted to do something unbound by clients, going directly to consumers. That’s what later became X.com/PayPal. Essentially, X.com merged with Confinity, and we created PayPal together.

Then, actually, the "PayPal Mafia" might have created more companies than any other company in the 21st century. When Confinity and X.com merged, it brought together so many talented people. I just wanted… I felt a bit constrained at Zip2, and I thought, well, what if we weren't constrained and went directly to consumers? That's how it turned out.

But yes, when I got that $20 million check from Zip2 (referring to personal income), I was living with four roommates, and I probably had about $10,000 in the bank. Then this check was actually mailed to me (incredible). Mailed to me! Then my bank balance suddenly went from $10,000 to $20,010,000. I thought, well (and then there's taxes and stuff). But I later invested almost all the money into X.com. Just like you said, I left almost all my chips on the table.

Yes, after PayPal, I thought, I’m a bit curious why we haven’t sent people to Mars yet. I went to the NASA website to see when we were sending people to Mars, and there was no date. I thought maybe the website was just hard to navigate. But in fact, there was no real plan to send people to Mars. So, you know, it’s a long story, and I don’t want to take up too much time here, but

Garry Tan

I think we’re all listening intently.

Musk

So, at that time, I was actually on the Long Island Expressway with my friend Adeo Ressi. We were classmates in college (University of Pennsylvania), and Adeo asked me what I planned to do after PayPal, and I said, I don’t know, I guess maybe I want to do some philanthropic projects in space, because I didn’t think I could do anything commercial in space; that seemed like a government-exclusive domain. So, but you know, I was curious about when we would send people to Mars, and that’s when I found out, oh, there’s nothing on the website, and I started digging deeper. I’m sure I’m leaving out a lot here, but my initial idea was to do a Mars charity mission called "Life to Mars," which was to send a small greenhouse with seeds and dehydrated nutrient gel to Mars, land it on Mars, and then, you know, add water to the gel, and then you’d have this amazing shot—green plants against a red background.

By the way, for a long time, I didn’t realize that "money shot" was a reference to a pornographic film (referring to the key climax shot). But, in short, the point was that it would be an amazing shot of green plants against a red background, trying to inspire, you know, NASA and the public to send astronauts to Mars. As I learned more, I realized, oh, by the way, during this process, I went to Russia around 2001 and 2002 to buy intercontinental ballistic missiles (ICBMs), which was quite an adventure. You know, you go meet high-ranking Russian commanders and say, "I want to buy some intercontinental ballistic missiles." This was to get into space. Yes. Not to blow anyone up, but they had to destroy a large number of their large nuclear missiles as a result of disarmament negotiations. So I thought, well, let’s take two, you know, remove the nuclear warheads, and add an extra upper stage for Mars.

But it felt a bit surreal, you know, around 2001 in Moscow, negotiating with the Russian military to buy intercontinental ballistic missiles. It was crazy. But they were also constantly raising the price on me, so it was the opposite of normal negotiations. So I thought, oh my god, these things are getting really expensive.

Then I realized that the real problem wasn't a lack of willingness to go to Mars, but rather that there was simply no way to do it without going over budget, you know, we couldn't even afford NASA's budget. So that's why I decided to start SpaceX—SpaceX was founded to advance rocket technology to the level where we could send people to Mars. That was in 2002.

Garry Tan

So it wasn't that you initially wanted to start a company. You just wanted to start doing something you found interesting and that humanity needed, and then, like, you know, like a cat unraveling a ball of yarn, the ball slowly unwound, and it turned out to be a very profitable business.

Musk

Now it is indeed profitable, but there was no precedent for a successful rocket startup before; there had been some attempts at commercial rocket companies, and all of them failed. So when I founded SpaceX, it was really out of this idea: I felt the chances of success were less than 10%, maybe only 1%, I don't know. But any real advance in rocket technology was definitely not going to come from the large defense contractors, because they are just appendages of the government, and the government only wants to do very conventional things. So it had to come from a startup, or it wouldn't happen at all. So even if the odds were very low, that was better than no chance at all. So yes, when I founded SpaceX in mid-2002, I expected it to fail. I said there was probably a 90% chance of failure, and even when recruiting people, I didn't try to pretend we were likely to succeed.

I said we were likely to fail. But there’s a 1 in 10 chance we might not fail, and if this is the only way to send people to Mars and advance technology, then I had to go for it. Then I ended up becoming the chief engineer of the rocket, not because I wanted to, but because I couldn’t hire great people. So, no excellent senior engineers were willing to join because they thought it was too risky; you would fail. So I became the chief engineer of the rocket. You know, the first three launches did indeed fail. So that was a learning process, I guess. The fourth one succeeded, luckily. But if the fourth one hadn’t succeeded, I would have run out of money, and that would have been it. So that was very precarious.

If the fourth Falcon launch had failed, that would have been the end, and we would have joined the graveyard of those previous rocket startups. So, my estimate of the chances of success wasn’t too far off. We just barely succeeded. Tesla was happening around the same time. 2008 was a tough year. Because in mid-2008, or let’s say the summer of 2008, SpaceX’s third launch failed, and we had three failures in a row. Tesla’s funding round also failed. So Tesla was on the verge of bankruptcy. It was like, oh my god, this is terrible. This would become a cautionary tale of arrogance.

Garry Tan

During that time, many people were probably saying, Elon is a software guy, why is he doing hardware? Why… yes, why did he choose to do this, right?

Musk

Yes. 100%. So you can look at the media at that time, because the reports from back then can still be found online. They kept calling me the "internet guy." So the "internet guy," also known as the "fool," trying to start a rocket company. So you know, we were laughed at a lot. It really sounded absurd; an internet guy starting a rocket company doesn’t sound like a recipe for success.

To be honest. So I don’t blame them. I thought at the time, yes, you know, it does sound unlikely, and I agree it’s unlikely. But fortunately, the fourth launch succeeded, and then NASA awarded us a contract to supply the space station. I think that was around December 22, or just before Christmas. Because even if the fourth launch succeeded, it wasn’t enough to guarantee success. We still needed a big contract to survive. So, I got a call from the NASA team, and they really said, we are awarding you a contract to supply the space station. I was just… I blurted out, "I love you guys." That’s not usually something, you know, they get to hear.

Because usually, it’s very, you know, very calm, but I thought at that moment, "Oh my god, this saved the company." Then, we closed the Tesla funding round on the last possible day, the last hour, which was December 24, 2008, at 6 PM. If that funding round hadn’t closed, we would have had to miss payroll two days after Christmas. So the end of 2008 was really nerve-wracking.

Garry Tan

I think from your experiences with PayPal and Zip2 to jumping into these hardcore hardware startups, one constant is the ability to find and ultimately attract the smartest people in those specific fields… you know, I mean, some of the people here haven’t even managed anyone yet. They’re just starting their careers. What would you say to, you know, that Elon who has never done these things?

Musk

I generally think you should try to do as much useful stuff as possible. This might sound a bit cliché, but doing useful things is really hard, especially things that are useful to many people. Think of it as the area under the total utility curve: how useful you are to your fellow human beings, multiplied by how many people benefit. It's like the physics definition of work. Achieving this is extremely difficult. And I think if you aspire to do "true work," your chances of success will be much higher. It's like: don't pursue glory; pursue the work.

Garry Tan

How do you judge that it’s "true work"? Is it based on external feedback? Like how others see it or how you know the product is useful to people?

Musk

It's like, you know, when you're hiring someone, what do you value? Hiring and judging a product are somewhat different questions. I mean, in terms of your final product, you just have to ask: if this thing succeeds, how useful will it be, and to how many people? That's what I mean. Then, whatever your role, whether you're the CEO or anything else in a startup, you do whatever it takes to succeed, and you have to constantly crush your ego and internalize responsibility. A major failure mode is when the ego-to-ability ratio is greater than 1. You know, if your ego is too high compared to your ability,

then you basically cut off the feedback loop to reality. In AI terms, you would break your reinforcement learning (RL) loop. So, you don’t want to break your loop; you want a strong RL loop, which means internalizing responsibility and minimizing ego, regardless of whether the task is noble or humble, you go do it. So, I mean, that’s why I actually prefer the term "engineering" over "research." I prefer that term, and I don’t want to call xAI a lab.

I just want it to be a company. The simplest, most direct, and ideally lowest-ego framing is usually the right direction. You just want to close the loop on reality as tightly as you can. That's a super big deal.

Garry Tan

I think everyone here greatly admires your exemplary role in applying first principles. How do you determine your "reality"? That seems to be a very important part of it. Those who have never created anything, non-engineers, like some journalists, sometimes criticize you. But clearly, there’s another group of people around you who are builders, with very high… area under the curve of achievements. How should people view this? What methods have worked for you? How would you pass this on to… say, your children? How would you tell them how to stand in this world? For example, how to build a predictable worldview based on first principles.

Musk

The tools of physics are extremely useful in understanding any field and making progress. First principles obviously means, you know, breaking things down to the fundamental elements that are most likely to be correct, axiomatically, and then reasoning up from there as logically and clearly as possible, rather than reasoning by analogy. Then there are some simple things, like thinking in the limit: if you extrapolate, minimizing this thing or maximizing that thing, thinking in the limit is very helpful. I use all the tools of physics.

They apply to any field. It's like a superpower. So take, for example, rockets. You can ask, how much should a rocket cost? The typical approach is to look at the historical costs of rockets and assume that any new rocket must cost about the same as previous rockets. The first-principles approach is: you look at what materials the rocket is made of. If it's aluminum, copper, carbon fiber, steel, whatever, then you ask, how heavy is this rocket? What are its constituent elements? How much do they weigh? What is the price per kilogram of each? That sets the true cost floor for the rocket; with enough effort, you can asymptotically approach the raw-material cost.

Then you realize, oh, actually, the raw materials of a rocket only account for 1% or 2% of the historical rocket costs. So the manufacturing process must be extremely inefficient if the raw material costs are only 1% or 2%. This is the first principles analysis of the optimization potential for rocket costs. And this is still before considering reusability. To give an example in AI, I guess last year, when xAI was trying to build a training supercluster, we went to various suppliers and said (this was early last year) we need 100,000 H100 (GPUs) to train coherently.

They estimated it would take 18 to 24 months to complete. I said, we need to complete it in 6 months. Otherwise, we would be uncompetitive. So then if you break it down, what do you need? You need a building, you need power, you need cooling. We didn’t have time to build a building from scratch. So we had to find an existing building. So we found an abandoned factory in Memphis that used to produce Electrolux products. But its input power was 15 megawatts, and we needed 150 megawatts.

So, we rented generators and placed them on one side of the building, and then we needed cooling. So, we rented about a quarter of the mobile cooling capacity in the U.S. and placed the chillers on the other side of the building. This still didn’t completely solve the problem because the power fluctuations during training were very large. So the power could drop by 50% within 100 milliseconds, and the generators couldn’t keep up. So we combined that with adding Tesla Megapacks (large battery packs) and modified the software of the Megapacks to smooth out the power fluctuations during training. Then there were a whole bunch of networking challenges. Because if you’re trying to get 100,000 GPUs to train coherently, the network cabling is very, very challenging.
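The cost-floor reasoning Musk describes reduces to a short calculation: sum each component's mass times its raw-material price per kilogram, then compare against a historical vehicle price. The sketch below uses purely illustrative masses and prices, not real SpaceX figures; it only demonstrates the shape of the analysis.

```python
# First-principles cost-floor sketch, per the method described above:
# sum (component mass x raw-material price per kg), then compare with a
# historical vehicle price. All numbers are hypothetical placeholders.

MATERIALS = {          # component: (mass in kg, price in $/kg), assumed
    "aluminum":     (80_000,  3.0),
    "steel":        (20_000,  1.0),
    "carbon_fiber": (10_000, 30.0),
    "copper":       ( 5_000,  9.0),
}

def raw_material_floor(materials: dict) -> float:
    """Cost floor: you cannot build the vehicle for less than its atoms."""
    return sum(mass * price for mass, price in materials.values())

historical_price = 60_000_000          # hypothetical legacy launch price, $
floor = raw_material_floor(MATERIALS)  # 605,000 with the numbers above
share = floor / historical_price       # materials' fraction of the price

print(f"raw-material floor: ${floor:,.0f} ({share:.1%} of historical price)")
```

With these placeholder numbers the floor comes out around 1% of the historical price, matching the 1-2% figure Musk cites; the gap between the floor and the sticker price is the manufacturing inefficiency that first-principles design then attacks.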

Garry Tan

…It sounds like almost anything you mentioned, I can imagine someone would directly tell you "no, you can't get that power," "you can't manage this." A key point of first principles thinking seems to be: we need to ask "why," figure out the reasons, and challenge the other person. If I’m not satisfied with the answer they give, I won’t accept it. Is that right? I feel like if someone wants to do hardware like you, this seems especially necessary. And in the software realm, we have a lot of redundancy, like "we can just add more CPUs, no problem." But in hardware, if it doesn’t work, it just doesn’t work.

Musk

I think these general principles of first-principles thinking apply to software and hardware alike, to anything. I just used a hardware example to illustrate how we were told something was impossible, but once we broke it down into its components (a building, power, cooling, power smoothing) we could solve each component. Then we had our network operations team do all the cabling, working in shifts 24/7, and I also slept in the data center and personally did some cabling.

There were many other issues to solve. You know, last year no one had coherently trained with 100,000 H100s. Maybe someone has done it this year. I don’t know. Then we later doubled it to 200,000. So now we have 150,000 H100s, 50,000 H200s, and 30,000 GB200s in our training center in Memphis. We are about to bring online a second data center in the Memphis area with 110,000 GB200s.

Garry Tan

Do you think pre-training is still effective? Do scaling laws still hold? Will the one who ultimately wins this race have the largest and smartest model, and then be able to distill it?

Musk

There are various factors in the competitiveness of large AI. The talent of the people is certainly important. The scale of the hardware, and how effectively you utilize that hardware, is also crucial. You can't just order a bunch of GPUs and expect them to work the moment you plug them in; you have to get a lot of GPUs and ensure they can train coherently and stably.

Then there's the question of what unique data sources you have. I guess distribution also matters to some extent, like how people access your AI. These are all key factors for anyone aiming to build a competitive large foundation model. As my friend Ilya Sutskever has said, I think we are running out of human-generated data for pre-training; the supply of high-quality tokens is depleting quite rapidly. Then you essentially have to create synthetic data, and you have to be able to judge that synthetic data accurately, to verify whether it reflects reality or is a hallucination. Achieving grounding in reality is quite tricky, but we are at a stage where we need to invest more effort in synthetic data. Right now, for example, we are training Grok 3.5, which focuses on reasoning.
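The verification problem Musk describes, grounding synthetic data in reality, is most tractable in domains with a mechanically checkable answer. A toy rejection-sampling sketch (illustrative only, not xAI's actual pipeline):

```python
import random

# Toy synthetic-data loop: generate candidate (question, answer) pairs,
# then keep only those that pass an independent verifier. In math the
# verifier can simply recompute; open-ended text is much harder to ground.
def generate_candidate(rng):
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    answer = a + b
    if rng.random() < 0.2:              # simulate a "hallucinated" answer 20% of the time
        answer += rng.randint(1, 9)
    return (f"{a} + {b} = ?", a, b, answer)

def verified(sample):
    _question, a, b, answer = sample
    return a + b == answer              # grounding: recompute independently

rng = random.Random(0)
candidates = [generate_candidate(rng) for _ in range(1000)]
clean = [c for c in candidates if verified(c)]
print(len(clean))                       # roughly 800 of 1000 survive the filter
```

The generator and verifier must be independent: if the same model both writes and grades the data, hallucinations pass straight through.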

Garry Tan

Going back to your perspective on physics, I’ve heard that hard sciences, especially physics textbooks, are very useful for reasoning. And researchers have told me that social sciences are completely useless for reasoning.

Musk

Yes, that may be true. So, you know, one very important point for the future is combining the deep AI in data centers or superclusters with robotics.

That way, you know, things like the Optimus humanoid robot—yes, Optimus is amazing. In the future, there will be many humanoid robots and robots of various sizes and shapes, but my prediction is that humanoid robots will far exceed all other robots combined, possibly by an order of magnitude, with a huge difference.

Garry Tan

There are rumors that you plan to build a robot army?

Musk

Whether it’s us or Tesla, you know, Tesla and xAI work closely together.

I mean, how many humanoid robot startups have you seen? I think Jensen Huang brought a bunch of robots from different companies on stage; there were about a dozen different humanoid robots. So, part of what I've been struggling with, which may have slowed me down, is that I don't want the Terminator to become a reality, you know. To some extent, at least until the last few years, I was dragging my feet on AI and humanoid robotics. Then I realized it's happening whether I do something or not. So you have two choices: you can either participate or be a spectator. And I thought, well, I'd rather be a participant than a spectator. So now I'm all in on humanoid robots and digital superintelligence, pedal to the metal.

Garry Tan

I think there’s also a third thing that everyone has heard you talk about a lot, which I personally strongly agree with, which is becoming a multiplanetary species. How does this fit into the bigger picture? This isn’t just a matter of 10 or 20 years; it may be a matter of 100 years, concerning several generations of humanity. How do you view it? Here we have AI, obviously embodied robotics, and becoming a multiplanetary species. Do all these ultimately serve the last point? Or what is driving you for the next 10, 20, or 100 years?

Musk

Oh my god, 100 years, man. I hope civilization is still around in 100 years. If it is, it will be vastly different from today's civilization. I mean, I predict that humanoid robots will outnumber humans by at least five times, maybe ten times. And one way to view the progress of civilization is percentage completion of the Kardashev Scale. At Scale One, you have harnessed all the energy of a planet; right now we are only utilizing about 1% or 2% of Earth's energy, so we have a long way to go to reach Kardashev Scale One. Scale Two is harnessing all the energy of a star. That would be about a billion times the energy of Earth, maybe close to a trillion times.
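The "about a billion times" figure can be checked against standard physical constants (solar luminosity ≈ 3.85e26 W, solar constant ≈ 1361 W/m² at Earth's orbit, Earth radius ≈ 6371 km); a quick sketch:

```python
import math

SOLAR_LUMINOSITY = 3.85e26      # W, total output of the Sun
SOLAR_CONSTANT   = 1361.0       # W/m^2 of sunlight at Earth's distance
EARTH_RADIUS     = 6.371e6      # m

# Kardashev Scale One ceiling: all sunlight intercepted by Earth's disc.
earth_intercept = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2   # ~1.7e17 W

# Scale Two vs. Scale One: the whole star versus one planet's share.
ratio = SOLAR_LUMINOSITY / earth_intercept
print(f"{earth_intercept:.2e} W, ratio ~ {ratio:.1e}")         # ratio ~ 2e9
```

The ratio comes out around two billion, consistent with Musk's "about a billion times" order of magnitude.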

Then Scale Three is the energy of an entire galaxy, and we are far from that. So we are in the very, very early stages of an intelligence big bang. I hope that in terms of being multiplanetary, I think in about 30 years, we will have enough material transferred to Mars to make it self-sustaining, so that even if supply ships from Earth stop, Mars can continue to grow and thrive. This greatly extends the expected lifespan of civilization, or consciousness, or intelligence (both biological and digital). So that’s why I think becoming a multiplanetary species is important.

I’m a bit troubled by the Fermi Paradox, like why haven’t we seen any aliens? It could be because intelligence is very rare. Maybe we are the only intelligent life in this galaxy. If that’s the case, then conscious intelligence is like a tiny candlelight in the vast darkness, and we should do everything we can to ensure that tiny candlelight doesn’t go out, and becoming a multiplanetary species or making consciousness multiplanetary can greatly increase the expected lifespan of civilization, and it is the next step before going to other star systems. Once you have at least two planets, you have a forcing function to drive progress in space travel. That will ultimately lead to consciousness expanding to the stars.

Garry Tan

The Fermi Paradox might imply that once technology reaches a certain level, civilizations self-destruct. How do we avoid self-destruction? What advice would you give to a room full of engineers? What can we do to prevent this from happening?

Musk

Yes, how to avoid the Great Filters? One obvious Great Filter is global thermonuclear war. So we should try to avoid that.

I want to build benevolent AI robots, that kind of AI that loves humanity, you know, helpful robots. I think one extremely important point in building AI is a strict adherence to the truth, even if that truth is politically incorrect. My intuition about what could make AI very dangerous is if you force AI to believe untrue things.

Garry Tan

How do you view the debate between safety and being closed to gain a competitive advantage? I think one great thing is that you have a competitive model, and others do too. In that sense, we may have avoided the worst timeline I’m most worried about—the kind where there’s a fast takeoff and it’s all in one person’s hands. That could lead to a lot of things collapsing. And now we have choices, which is good. What do you think?

Musk

Yes, I do believe there will be several deep intelligences, maybe at least five, possibly up to ten. I don't think there will be hundreds; it's closer to around ten, probably about four of them in the U.S. So I don't think any one AI will have runaway capability. But yes, there will be several deep intelligences.

Garry Tan

What will these deep intelligences do? Will they conduct scientific research, or will they try to attack each other?

Musk

It could be both. I mean, I hope they will discover new physics, and I definitely think they will invent new technologies. I think we are quite close to digital superintelligence. It could happen this year; if it doesn’t happen this year, it will definitely happen next year, defined as being smarter than any human at anything.

Garry Tan

So how do we guide it towards super abundance? We could have a robotic workforce, cheap energy, intelligence on demand. Is this what’s called the "white pill"? Where do you stand on this spectrum? What specific things would you encourage everyone here to do to make this "white pill" a reality?

Musk

I think the most likely outcome is a good one. I somewhat agree with Geoffrey Hinton's view that there might be a 10% to 20% chance of annihilation. But looking on the bright side, that means there's an 80% to 90% chance of a good outcome. So yes, I can't emphasize enough that strict adherence to the truth is the most important thing for AI safety. Clearly, empathy for humanity and life as we know it is also crucial.

Garry Tan

We haven’t talked about Neuralink yet. I’m curious about your efforts to narrow the input/output gap between humans and machines. How critical is this for AGI/ASI (Artificial General Intelligence / Artificial Superintelligence)? Once this link is established, can we not only read but also write?

Musk

Neuralink is not essential for achieving digital superintelligence; superintelligence will arrive before neural links are in widespread use. But a neural link can address the input/output bandwidth constraint. In particular, our output bandwidth is very low: a human's sustained output over a day is less than one bit per second. There are 86,400 seconds in a day, and it's extremely rare for a person to output more symbols than that in a day, let alone on consecutive days. So with a neural interface, you can significantly increase both your output bandwidth and your input bandwidth, where input means write operations to the brain.
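The "less than one bit per second" claim is a back-of-the-envelope calculation over the 86,400 seconds in a day. A sketch with illustrative numbers (the words-per-day figure and the roughly 1 bit/character entropy of compressed English text are assumptions, not from the interview):

```python
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400

# Illustrative sustained output: someone who types 5,000 words in a day,
# about 5 characters per word, at roughly 1 bit of information per
# character (English text compresses to about that).
words_per_day = 5_000
chars = words_per_day * 5
bits_per_char = 1.0                     # rough entropy of English text

sustained_bps = chars * bits_per_char / SECONDS_PER_DAY
print(SECONDS_PER_DAY, round(sustained_bps, 3))  # 86400, ~0.289 bits/s
```

Even a prolific writer averages well under one bit per second over a full day, which is the asymmetry Musk argues a neural interface would remove.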

We currently have five humans implanted with devices that read neural signals. There are people with amyotrophic lateral sclerosis (ALS) who are completely immobile, tetraplegics, but now they can communicate with bandwidth comparable to an able-bodied person, controlling their computers and phones, which is pretty cool. Then I think in the next 6 to 12 months, we will do the first visual implants, so that even if someone is completely blind, we can write directly to the visual cortex. We have already achieved this in monkeys.

I think we have a monkey that has had a visual device implanted for three years. Initially, the resolution will be relatively low, but in the long term, it will have very high resolution and be able to see multispectral wavelengths. So you can see infrared, ultraviolet, radar, like gaining superpowers. At some point, cybernetic implants will not just correct errors but will greatly augment human capabilities, significantly enhancing intelligence, senses, and bandwidth. This will happen at some point.

But digital superintelligence will occur long before that. At least if we have a neural link, we might be able to appreciate AI better.

Garry Tan

I guess one of the constraints you're working hard to overcome across all these different fields is access to the smartest talent. But at the same time, we have rocks that can talk and reason; they might have an IQ of 130 now, and they might soon become superintelligent. How do you reconcile these two things? What will happen in 5 or 10 years? What should everyone here do to ensure they are creators rather than people who fall below the API line?

Musk

People call it the singularity for a reason: we don't know what will happen in the near future. The proportion of human intelligence will be very small. At some point, the total sum of human intelligence will be less than 1% of all intelligence. And if things progress to Kardashev Scale Two, then even assuming a significant population increase and substantial intelligence augmentation, like everyone having an IQ of a thousand, the total sum of human intelligence might be only one billionth of digital intelligence. In any case, humanity is the biological bootloader for digital superintelligence. I guess I'll end there: are we a good bootloader?

Garry Tan

Where do we go from here? How do we start from here? I mean, all of this is quite a wild sci-fi scenario, but it could also be built by everyone here. You know, if you have any closing thoughts for the smartest tech talent of this generation, what should they do? What should they engage in? What should they think about, what should they ponder over dinner tonight?

Musk

As I said at the beginning, I think if you’re doing something useful, that’s great. If you’re just trying to be as useful as possible to your fellow humans, then you’re doing good. I keep emphasizing this point: focusing on super truthful AI is the most important thing for AI safety. You know, obviously, if you know anyone interested in working at xAI, I mean, please let us know. Our goal is to make Grok the maximally truth-seeking AI. I think that’s very important. Hopefully, we can understand the nature of the universe. That’s probably what AI can tell us. Maybe AI can tell us where the aliens are, and how the universe really began? How will it end? What are the questions we don’t even know we should be asking? Are we in a simulation? Or at what level of simulation are we?

Garry Tan

I think we will find the answers, or find out we're NPCs (non-player characters). Elon, thank you very much for joining us.
