Original Title: The original tech right power player on A.I., Mars and immortality.
Original Author: Hosted by Ross Douthat
Original Translation: Peggy, Johyyn, BlockBeats
Editor's Note:
In recent years, the relationship between Silicon Valley's power players and Trump has left many puzzled. People offered various reasons to explain why Musk would support Trump, and after the two men's relationship deteriorated, they found new reasons to claim they had predicted the alliance's collapse. This podcast offers a fresh perspective on Silicon Valley's choices during the Trump era: How did Peter Thiel profoundly influence Musk? And what anxiety about "technological stagnation" underlies the whole Silicon Valley elite?
The host of this discussion, Ross Douthat, is a columnist for The New York Times, a well-known conservative author, and has published several books on religion, politics, and society. In this in-depth conversation about AI, politics, and faith, he describes Peter Thiel as one of the most influential right-wing intellectuals globally over the past two decades. Thiel reiterates his consistent judgment: since the 1970s, technological progress has been slowing down, social structures have become rigid, and humanity may be entering a form of soft totalitarianism under the guise of "order and security." He discusses why he initially supported Trump, his cautious expectations regarding AI, and his concerns about how environmentalism and "global governance" could lead to technological totalitarianism. He believes that the Antichrist may not necessarily arise from technological explosions but could stem from compromises made in the name of "order and security."
This article greatly inspired the BlockBeats editorial team, and we hope to share it with our readers. Below is the original content (edited for readability):
Ross Douthat: (Opening remarks) Is Silicon Valley ambitious? What should we be more afraid of: the apocalypse or stagnation? Why is one of the world's most successful investors worried about the Antichrist?
My guest today is Peter Thiel, co-founder of PayPal and Palantir, and an early investor in the political careers of Donald Trump and JD Vance. Thiel is the original tech right power player, known for funding various conservative and even counter-mainstream ideas. But today, we will discuss his own views, because despite the slight "disadvantage" of being a billionaire (traditionally not seen as a typical image of a thinker), there is a good reason to believe that he is one of the most influential right-wing intellectuals of the past two decades.
Peter Thiel, welcome to "Interesting Times."
Peter Thiel: Thank you for having me.
Technological Stagnation: Why Don't We Feel the Future Anymore?
Ross Douthat: I want to take you back about thirteen or fourteen years. At that time, you wrote an article for the conservative magazine National Review titled "The End of the Future." The basic argument of the article was: on the surface, the modern world is vibrant, fast-paced, and constantly changing, but in reality, it is far less dynamic than people think. We have long entered an era of technological stagnation. Digital life has indeed brought some breakthroughs, but it has not fundamentally changed the world as people initially expected. Overall, we are stuck in place.
Peter Thiel: Yes.
Ross Douthat: You were not the only one to express this view at the time, but hearing it from you carries extra weight—after all, you are an "insider" in Silicon Valley, having participated in and profited from the internet revolution. So I am curious: by 2025, do you still believe that judgment holds true?
Peter Thiel: Yes, I still broadly agree with the idea of "technological stagnation." This argument has never been absolute. We are not saying that the entire world is completely stagnant, but rather that, to some extent, the pace of development has indeed slowed down. It hasn't gone to zero, but from 1750 to 1970, there was a period of continuous acceleration: steamships got faster, railroads got faster, cars got faster, and airplanes got faster. This trend peaked with the Concorde and the Apollo moon landing missions. But since then, developments at all levels have begun to slow down.
I have always viewed the "digital world" as an exception, which is why we have seen advancements in computers, software, the internet, and mobile internet. Then, in the past ten to fifteen years, we have seen the emergence of cryptocurrency and the AI revolution. I think this is indeed significant in some sense. But the question is: is this really enough to free us from that pervasive sense of stagnation?
In those "back to the future" essays, you can start from an epistemological question: how do we judge whether we are in a state of stagnation or of accelerating development? One important characteristic of late modernity is that humans are highly specialized. For example, unless you spend half your life studying string theory, how can you judge whether physics has made progress? What about quantum computing? Cancer research, biotechnology, and all these vertical fields—how do we assess them? Further, how do we weigh progress in cancer treatment against breakthroughs in string theory? You have to "weight" these different fields to evaluate overall technological progress.
Theoretically, this is an extremely difficult question to define. And the difficulty in answering it is itself worth questioning: nowadays, more and more fields of knowledge are controlled by a few "expert circles," and these people often only answer to each other and validate each other. This closed nature is enough to cast doubt on the so-called technological progress.
So yes, I believe that overall, we still live in a fairly stagnant era, but that does not mean that everything is completely stagnant.
Ross Douthat: You just mentioned "Back to the Future." We recently took the kids to watch the original first movie, starring Michael J. Fox.
Peter Thiel: (The plot) is set from 1955 to 1985, a span of 30 years. And "Back to the Future 2" has a timeline from 1985 to 2015—now looking back, that was already a "future" ten years ago. The movie features flying cars, and the vision of 2015 compared to the reality of 1985 is simply worlds apart.
Ross Douthat: "Back to the Future 2" did feature a character modeled on Donald Trump—Biff Tannen, who in one timeline holds power—so in a certain sense it was somewhat prescient. But more striking is how different the physical environment of that future world looks.
So, one of the most compelling arguments I've heard about "technological stagnation" is: if you let a person travel through time from one era to another, they would find themselves in a completely different world.
For example, traveling from 1860 to—
Peter Thiel: Or from 1890 to 1970, which is roughly the 80 years of your life. Something like that.
Ross Douthat: But for my kids, even those living in 2025, when they look back at 1985, their feeling is: well, the cars seem a bit different, people didn't have smartphones, but overall, the world actually looks quite similar.
This is certainly not a statistical judgment, but—
Peter Thiel: This is a very intuitive common-sense judgment.
Ross Douthat: It is a common-sense understanding. But what kind of evidence would it take to convince you that we are in a phase of takeoff? Is it purely economic growth? Or is it productivity improvement? What specific indicators do you usually pay attention to that measure "stagnation" versus "vitality"?
Peter Thiel: Of course, you can look at an economic indicator: how does your standard of living compare to your parents'? If you are a 30-year-old millennial, how are you doing compared to your baby boomer parents at the same age? What was their situation like back then?
There are also cognitive questions: how many real breakthroughs have we achieved? How can these achievements be quantified? Where do the returns on research manifest?
The returns to entering the scientific community, or the academic world more broadly, have indeed been diminishing. Perhaps this is why it often has a cold, even Malthusian institutional feel: you have to keep putting in more and more to get the same output. At some point, people choose to give up, and the entire system collapses.
The Cost of Stagnation: When Society Loses Its Upward Mobility
Ross Douthat: Let's continue discussing this topic. Why do we pursue growth and vitality? Because, as you mentioned in related discussions, in the 1970s, the Western world underwent a cultural shift, which is also when you believe society began to slow down and head towards stagnation. People began to feel anxious about the costs of growth, especially the environmental costs became particularly prominent.
The core of this viewpoint is: we are already wealthy enough. If we continue to strive to become richer, the Earth may not be able to bear it, and various forms of ecological degradation will follow. Therefore, we should be satisfied with the current state. So, what is the problem with this argument?
Peter Thiel: I believe there are deeper reasons behind this stagnation. When facing history, people usually ask three questions: First, what exactly happened? Second, how should we respond? But there is also a middle question that is often overlooked: why did it happen?
People are beginning to run out of new ideas. To some extent, the system itself has also degenerated and become more risk-averse; we can outline some of these cultural shifts. But at the same time, I also believe that people have some very reasonable concerns about the future: if we keep accelerating technological progress, does that also mean we are speeding toward environmental disaster, nuclear disaster, or some other similar ending?
But I believe that if we cannot find a path back to the future, society will… I can't say for sure, but it will begin to unravel and stop functioning normally. A middle-class society, as I define it, is one in which people expect their children to live better than they did. Once that expectation collapses, we no longer have a truly middle-class society. Perhaps there are systems, like feudal society, in which everything remains stagnant and unchangeable; or there may be paths leading to a completely different social structure. But none of that is the operating logic of the Western world, at least not the trajectory America followed in its first 200 years.
Ross Douthat: Do you think ordinary people will ultimately not accept this stagnation? Will they choose to resist and, in the process, bring down the surrounding order?
Peter Thiel: They might resist. Or perhaps our institutions themselves are starting to malfunction—because the premise of these institutions is continuous growth.
Ross Douthat: Our fiscal budget is certainly built on growth expectations.
Peter Thiel: Yes. Look at Reagan and Obama, for example. Reagan represented "consumer capitalism," which is inherently a contradictory notion: a capitalist is supposed to get rich through saving, yet consumer capitalism runs on borrowing. And Obama represented "low-tax socialism," which is just as contradictory as consumerist capitalism.
I certainly prefer low-tax socialism over high-tax socialism, but I worry that it is unsustainable. One day, taxes will either rise, or we will abandon "socialist" policies. It is fundamentally very, very unstable. This is also why people lack optimism: they do not believe we have reached a stable, "Greta-style" future. Perhaps this model could work, but we are clearly not there yet.
Ross Douthat: Since her name is likely to come up again in this conversation: I'm referring to Greta Thunberg, the Swedish environmental activist known for her climate protests. For you, I think she symbolizes a vision of the future that is anti-growth, essentially authoritarian, and led by environmentalists.
Peter Thiel: Yes. But we are not there yet. Not yet. If society truly falls into stagnation, it will be a completely different society—
Ross Douthat: If you really lived in a "degrowth," Scandinavian-style small village.
Peter Thiel: I'm not sure it would be like North Korea, but it would definitely be very oppressive.
Ross Douthat: One thing that has always struck me is that when a society experiences a sense of stagnation and falls into a kind of "decline"—a term I often use to describe this state—people begin to yearn for a crisis, for a turning point, so that they have the chance to fundamentally change society's direction. I tend to think that in affluent societies, once people's wealth reaches a certain level, they become too comfortable and too risk-averse; without a crisis, it is difficult to emerge from "decline" toward some new possibility.
So for me, the initial example is: after the 9/11 attacks, conservatives in the field of foreign policy generally had a mindset that we had been in decline and stagnation, and now it was time to awaken and launch a new "crusade" to reshape the world. Clearly, that outcome was very bad. But similar sentiments…
Peter Thiel: But at that time, it was George W. Bush telling everyone to go shopping immediately.
Ross Douthat: So that wasn't really a true form of "anti-decline," was it?
Peter Thiel: Generally speaking, yes. There were indeed some neoconservative foreign policy circles trying to escape decline through "live-action role-playing" (LARPing). But the mainstream was still the Bush administration, telling everyone, "It's time to go shopping."
Ross Douthat: So, to escape decline, how much risk should people be willing to take? It does seem there is a danger: those who want to resist decline often need to actively embrace great uncertainty. They must stand up and say: look, we currently have a beautiful, stable, comfortable society, but guess what? We might need a war, a crisis, or even a complete government overhaul. They must confront danger and even actively engage with it.
Peter Thiel: Well, I'm not sure I can give a precise answer, but my directional judgment is: we should take on more risk and do more things. Our range of actions should far exceed the current level.
I can go through these vertical fields one by one. For example, in biotechnology, diseases like dementia and Alzheimer's—over the past forty to fifty years, we have made almost no progress. People have been entangled in the β-amyloid pathway, which clearly has not worked. Now it resembles a ridiculous game of interests, where the relevant practitioners continuously reinforce and endorse each other. So yes, in this field, we do need to take on more and take greater risks.
Ross Douthat: To make the discussion more concrete, I want to linger on this example a bit longer. My question is: when we say "we need to take more risks in anti-aging research," what does that specifically mean? Does it mean the FDA should step aside and allow anyone with a new Alzheimer's therapy to sell it directly on the open market? What does "taking risks" look like in the medical field?
Peter Thiel: Yes, you do need to take greater risks. If you have a terminal illness, you might be willing to try more radical measures. Researchers should also be able to take on more risks.
From a cultural perspective, what comes to mind is the image of "early modernity"—when people believed we would eventually cure diseases and even achieve radical longevity. Immortality was a grand goal of early modernity, from Francis Bacon to Condorcet. Perhaps it was an anti-Christian notion, or perhaps a continuation of Christian thought; either way, it was competitive: if Christianity promises you bodily resurrection, then science must promise the same thing in order to "win."
I remember around 1999 or 2000, when we were still operating PayPal, one of my co-founders, Luke Nosek, was particularly fascinated by Alcor and cryonics, believing that people should freeze themselves. At one point, we even took the whole company to a "cryonics party." You know what a "Tupperware party" is? It's that kind of event where plastic food storage containers are sold at a gathering. At the "cryonics party," they weren't selling storage containers…
Ross Douthat: Was it just freezing heads? Or was the whole body to be frozen?
Peter Thiel: You could choose to freeze your whole body or just your head.
Ross Douthat: The "head-only" option is a bit cheaper.
Peter Thiel: What was more disturbing was that the printer had a problem, and as a result, the freezing agreements couldn't be printed out.
Ross Douthat: Once again, this reflects technological stagnation, right?
Peter Thiel: But looking back, that was also a sign of decline. In 1999, this idea was not mainstream, but among the baby boomer generation, there was still a small group of people who firmly believed they could live forever. And that was probably the last generation to hold such beliefs. So while I have consistently criticized the baby boomer generation, perhaps—even in this marginalized, narcissistic fantasy—we did lose something. At least there were some people who believed that science would eventually cure all their diseases. And now, no millennial believes that anymore.
Political Bets: Why Support Trump and Populism?
Ross Douthat: However, I think there are still some people who believe in another form of immortality. I feel that people's obsession with AI is, to some extent, related to the imagination of "transcending human limitations." I will specifically ask you about this later. But I want to talk about politics first. When you initially proposed the idea of "stagnation," you primarily focused on technological and economic aspects, and one thing that impressed me is that this idea can actually be applied to a very broad range of fields. When you wrote that article, you were also interested in "seasteading"—that is, wanting to construct a new political community outside the rigid Western system. But you later made a shift in the 2010s.
You are one of the few well-known Silicon Valley figures who openly supported Donald Trump early on, and you might even be the only one. You also supported some carefully selected Republican Senate candidates, one of whom is now the Vice President of the United States. As an observer, after reading your discussions on "social decline," my understanding is that you were essentially making a kind of "political venture capital." You were betting on a group of potential disruptors who could break the status quo, believing that taking such risks was worthwhile. Was that how you thought at the time?
Peter Thiel: Of course, there were many different layers at that time. On one hand, there was that hope—hope that we could steer the "Titanic" away from the iceberg, or whatever metaphor you want to use—but the goal was to genuinely change the course of society.
Ross Douthat: Achieving that through political change.
Peter Thiel: Perhaps a narrower desire was that we could at least have a dialogue around these issues. So when Trump said "Make America Great Again"—was that a positive, optimistic, ambitious agenda? Or was it an extremely pessimistic judgment of the status quo, believing we are no longer a great country?
I did not have high hopes that Trump could truly bring about positive change. But I felt that, at least for the first time in 100 years, a Republican was not reciting those syrupy, hollow Bush-style platitudes. That does not mean society progressed, but at least we could start a real dialogue. Looking back now, that idea seems like a ridiculous fantasy.
In 2016, I actually had two thoughts—those thoughts that were just floating at the edge of consciousness—but I couldn't connect them at the time. The first was: if Trump lost, no one would be angry at me for supporting him; the second was: I thought he had a 50% chance of winning. And I had an implicit…
Ross Douthat: If he lost, why would no one be angry at you?
Peter Thiel: That was just a strange thing; it really didn't matter. But I thought he had a 50% chance of winning, because the problems were indeed serious, and stagnation was frustrating. The reality is that people were not ready to face these issues. Perhaps we have now reached that point: by 2025, ten years after Trump took office, we can finally have this conversation. Of course, Ross, you are not that kind of leftist zombie—
Ross Douthat: I've been labeled all sorts of things, Peter.
Peter Thiel: But as long as we can make some progress, I am willing to accept it.
Ross Douthat: From your perspective, there seem to be two levels. The first level is: this society needs to be broken, it needs to take risks; and Trump himself is a form of breaking and risk. The second level is: Trump indeed dares to speak some real truths about America's decline.
So what do you think, as an investor, a venture capitalist, what have you gained from Trump's first term?
Peter Thiel: Well…
Ross Douthat: What do you think were the anti-decline or anti-stagnation measures during Trump's first term? If there were any. Of course, it is also possible that you feel there were none at all.
Peter Thiel: I think the whole process has taken longer and been slower than I expected. But at least we have now reached a moment where many people are starting to realize that there are indeed some problems. And this is not the conversation I could have sparked between 2012 and 2014. At that time, I debated these issues with Eric Schmidt (former CEO of Google), Marc Andreessen (founder of A16Z), and Bezos (founder of Amazon) in 2012, 2013, and 2014, respectively.
My position at the time was: "We are facing a stagnation problem," while their attitudes were some version of "everything is going well." But I believe that at least these three have made some adjustments to varying degrees since then. The whole of Silicon Valley has changed.
Ross Douthat: However, Silicon Valley has not just "adjusted."
Peter Thiel: Yes, in terms of the stagnation problem.
Ross Douthat: Right. But by 2024, a significant portion of Silicon Valley ultimately supported Trump. The most famous among them, of course, is Elon Musk.
Peter Thiel: That's right. As I understand it, this is deeply related to the "stagnation" issue. These things are always particularly complex, but I tend to see it this way—of course, I don't want to speak for everyone—take Mark Zuckerberg, or Facebook, Meta. I don't think he really has a strong ideological stance; he hasn't thought deeply about these issues. The default position is liberalism, and when liberalism doesn't work, what do you do? For years, the answer has been: do more of it. If something doesn't work, double down. Add another dose, invest a few hundred million more, go fully "woke"—with the result that everyone starts to hate you.
At some point, people will think: well, maybe this approach just doesn't work.
Ross Douthat: So they turned to something else.
Peter Thiel: But that doesn't mean they support Trump.
Ross Douthat: Indeed, it's not support for Trump per se. But in both public and private discussions, there was a real sense that—in the context of 2024, unlike 2016, when you were nearly the only supporter—"Trumpism" or "populism" could actually become a driver of technological innovation, economic vitality, and so on.
Peter Thiel: Your statement is really, really optimistic.
Ross Douthat: I know you are pessimistic. But people—
Peter Thiel: When you express it in an optimistic way, you are essentially saying that these people will ultimately be disappointed; they are destined to fail, something like that.
Ross Douthat: I mean, people did express a lot of optimism at that time, that's what I mean. Although Elon Musk has expressed apocalyptic anxiety about budget deficits potentially leading to the extinction of humanity, when he entered the government, including those around him, they were basically saying: "We are partnering with the Trump administration to achieve technological greatness." I think they were indeed optimistic.
You lean more towards pessimism, or realism, I want to ask about your own judgment—not theirs. Do you think that Trump 2.0-style populism can become a vehicle for technological vitality?
Peter Thiel: For now, it is still our best option. Can Harvard cure Alzheimer's by continuing to tread water and doing things that haven't worked for the past 50 years?
Ross Douthat: That sounds more like "it can't get any worse, so let's try disruption." But the criticism of current populism is that Silicon Valley chooses to ally with populists, but these people don't care about science at all. They are unwilling to invest in science. They just want to cut funding to it because they hate Harvard. The result is that you ultimately don't get the kind of forward-looking investment that Silicon Valley originally hoped for. Isn't that criticism valid?
Peter Thiel: To some extent, it is valid. But we need to return to a more fundamental question: how well does our existing scientific system actually operate? The people of the New Deal era, for all their problems, did push science forward vigorously: they allocated funds, paid people, and scaled things up. Today, if another "Einstein" wrote a letter to the White House, it would probably get lost in the mailroom. Something like the Manhattan Project is simply unimaginable now.
Now we still call some things "moonshots," like when Biden talks about cancer research. But the "moonshot" of the 1960s was genuinely about going to the moon. And today's "moonshot" often means something completely fictional and destined not to happen. When you hear "this needs a moonshot," it actually means: this is hopeless. It's not that we need another Apollo program; it's that this will never come true.
Ross Douthat: It sounds like you still hold the position that, perhaps unlike others in Silicon Valley, the value of populism lies in exposing illusions and removing the veil. We are not yet at the stage where you expect the Trump administration to undertake a "Manhattan Project" or a "moonshot." It is more like—populism helps us see that everything is fake.
Peter Thiel: You have to try to balance both. These two things are actually intertwined.
For example, deregulation of nuclear energy: one day, we will start building new nuclear power plants again, or design them better, or even possibly fusion reactors. So indeed, part of it is a process of deregulation and de-establishment. But then you have to start real construction—this is how things progress. In a sense, you have to clear the field first before you can start working, maybe…
Ross Douthat: But you personally are no longer funding politicians, right?
Peter Thiel: I am divided on this issue. I think it is extremely important, but also highly toxic. So I always struggle with whether I should do it…
Ross Douthat: What does "highly toxic" mean for you personally?
Peter Thiel: It is toxic for everyone involved. Because it is a zero-sum game, it's too crazy. To some extent…
Ross Douthat: Is it because everyone hates you and associates you with Trump? What has it personally done to you?
Peter Thiel: The toxicity lies in the fact that it happens in a zero-sum world. You feel that the stakes are extraordinarily high.
Ross Douthat: Have you also gained some enemies that you didn't have before?
Peter Thiel: It is harmful to everyone involved in various ways. It also involves a "back to the future" political proposition. You can't—this is one of the things I discussed with Elon in 2024. We talked a lot. I even told him about a "seasteading" version of the idea: I said if Trump didn't win, I would want to leave the United States. Elon replied: there is nowhere to go; we have nowhere to go.
You always think of the comeback afterward. Probably two hours after we finished eating, when I got home, I realized: wow, Elon no longer believes in "going to Mars." 2024 was the year Elon stopped believing in Mars—not that he stopped believing in it as a scientific and technological project, but he no longer believed in it as a political project. Mars was originally meant to be a political solution, a way to create an alternative social model. And in 2024, Elon began to believe that even if you went to Mars, the socialist American government and the woke AI would follow you there.
We facilitated a meeting between Elon and DeepMind's CEO, Demis Hassabis.
Ross Douthat: Demis leads an AI company.
Peter Thiel: Yes. The core of that conversation was Demis telling Elon: I am working on the most important project in the world; I am building a superhuman-level AI.
Elon responded: I am also working on the most important project in the world; I am making humanity an interstellar species. Then Demis said: you know, my AI will go to Mars with you. After hearing that, Elon fell silent. But in my telling of this history, this thought took years to truly hit him. He didn't really process this issue until 2024.
Ross Douthat: But that doesn't mean he no longer believes in Mars itself. It just means he thinks that to achieve "going to Mars," you first have to win the battle over budget deficits or "woke culture."
Peter Thiel: Yes, but what does Mars mean?
Ross Douthat: What does Mars mean?
Peter Thiel: Is it just a scientific project? Or is it, like in Robert Heinlein's works, a paradise for libertarians, treating the moon as an experimental ground for an ideal society?
Ross Douthat: A vision of a new society, inhabited by many… descendants of Elon Musk.
Peter Thiel: Well, I'm not sure the vision was ever made that concrete, but if you really start to concretize it, you realize that Mars should not just be a scientific project; it should be a political project. And once you concretize it, you have to seriously consider: the woke AI will go with you, and the socialist government will follow you too. Then simply "going to Mars" is no longer enough; you have to think of something else.
The Light and Shadow of AI: Is it an Engine of Growth or a Magnifier of Mediocrity?
Ross Douthat: So, AI (artificial intelligence), at least during this period of stagnation, seems to be an exception—it is one of the few areas that has made significant progress, and that progress has surprised many people. At the same time, it is also an exception in the political realm we just discussed. In my view, the Trump administration did give AI investors, to some extent, what they wanted: on one hand, stepping back and reducing oversight; on the other, promoting public-private partnerships. So AI sits both at the forefront of technological progress and at a point of renewed government intervention.
You are also an investor in the AI field. What do you think you are investing in?
Peter Thiel: This is a long story with many layers. We can start with a question: how important do I think AI is? My "clumsy" answer is: it is definitely not empty hype, but neither does it amount to a complete transformation of society. My current estimate is that it is probably on the same scale as the internet in the late 1990s. I'm not sure that is enough to truly end the long stagnation, but it is perhaps enough to give birth to some great companies.
For example, the internet drove GDP growth of about 1% per year for ten to fifteen years, also contributing to some productivity improvements. So, my preliminary positioning of AI is probably at that level.
This is the only growth engine we currently have. To some extent, its "all-in" approach is a bit unhealthy. I hope we can make progress on multiple dimensions simultaneously, such as advancing the Mars program or tackling dementia. But if AI is all we have right now, I will accept it. Of course, it carries risks; there is no doubt that this technology poses dangers. But it also brings…
Ross Douthat: So you are skeptical of the "superintelligence cascade theory"? The gist of this theory is that once AI is sufficiently successful, it will become extremely intelligent, thereby driving breakthroughs in the physical world. In other words, humans may not be able to cure dementia or figure out how to build the perfect factory for manufacturing rockets, but AI can.
Once it surpasses a certain threshold, it brings not just digital progress but also sixty-four other pathways of advancement. It sounds like you don't really believe in this, or you think the possibility is low?
Peter Thiel: Yes, I'm not sure if that's the crux of the issue.
Ross Douthat: What do you mean by "crux"? What do you see as the gating factor?
Peter Thiel: This might be an ideology in Silicon Valley. It sounds a bit counterintuitive; it may lean more towards liberalism than conservatism. In Silicon Valley, people are obsessively focused on IQ; everything revolves around "smart people": if we have more smart people, we can create more great things.
But the counterargument, from an economic perspective, is that in reality the smarter people are, the more adrift they often become. They may not achieve more, because they don't know how to apply their intelligence, and our society doesn't know what to do with them; they struggle to fit into the mainstream. This suggests that the real bottleneck may not be "intelligence level" at all, but a flaw in our social structure itself.
Ross Douthat: So is this a limitation of intelligence itself, or is it a problem with the type of personality that superintelligence might spawn?
I don't really agree with the view that "if we just raise intelligence levels, all problems will be solved." I discussed this with an AI accelerationist during a podcast. For example, if we raise intelligence to a certain level, we could conquer Alzheimer's; if we increase intelligence, AI could design a process to produce a billion robots overnight. My skepticism about intelligence lies in the belief that it ultimately has limits.
Peter Thiel: Yes, that is indeed hard to prove. Such things are always difficult to falsify.
Ross Douthat: Until we actually have superintelligence.
Peter Thiel: But I do agree with your intuition. Because in reality, we already have a lot of extremely smart people, yet many things are still stuck, and the reasons lie elsewhere. So the problem may not be solvable at all; that is the most pessimistic view. Perhaps dementia is fundamentally incurable, and perhaps death itself is an unsolvable problem.
Or perhaps it is a cultural structure issue. The problem is not with a particular intelligent individual, but with how they are accepted by society. Can we accommodate "heretical smart people"? Perhaps you need these "nonconformist" smart individuals to drive crazy experiments. But if AI is merely "smart" in the traditional sense, and if we simply understand "wokeness" as "excessive compliance" or "political correctness," that kind of intelligence may not lead to real breakthroughs.
Ross Douthat: So are you worried about a possible future where artificial intelligence itself becomes a representative of a "new stagnation"? It is highly intelligent and creative, but everything is within a framework, like Netflix's algorithms: continuously producing "okay" movies, content that people are willing to watch but don't particularly like; generating a lot of mediocre ideas; marginalizing human labor without new breakthroughs. It changes the existing structure but, in a sense, deepens stagnation. Is this the scenario you are concerned about?
Peter Thiel: That is entirely possible. This is indeed a risk. But I ultimately still fall on this judgment: I still believe we should try AI. In contrast, the other option is complete stagnation.
Yes, it may bring many unforeseen situations. For example, the combination of AI and military drones could be very dangerous, dystopian, and unsettling, but it will ultimately bring about some change. But if you have no AI at all, then truly nothing will happen.
In fact, there have been similar debates on the internet: does the internet exacerbate conformity, does it make society more "woke"? The reality is that it did not bring about the explosion of thought and diversity that liberals fantasized about in 1999. But if you ask me, I still believe that the emergence of the internet is better than a world without it. And in my view, AI is the same: it is better than "nothing," and "nothing" is indeed its only alternative.
You see, we are only discussing AI itself, which silently acknowledges that, apart from it, we are almost in complete stagnation.
Ross Douthat: But the AI field is clearly filled with a group of people who have expectations for AI that are far grander, more transformative, and even more utopian than you express. You just mentioned that modern society once promised radical life extension for humanity—and now those promises are fading. But clearly, many deeply involved in AI actually see it as a path to "transhumanism," a tool to transcend the shackles of the physical body—either to create a "successor species" or to achieve a merger of human brains and machines.
Do you think these are fanciful delusions? Or are they "high concepts" used for fundraising? In your view, are they hype? Delusions? Or do you genuinely feel concerned about them?
Peter Thiel: Well, yes.
Ross Douthat: I think you still hope humanity can continue, right?
Peter Thiel: Uh—
Ross Douthat: You are hesitating.
Peter Thiel: I don't know. I, I would…
Ross Douthat: That's a long hesitation!
Peter Thiel: There are too many questions implied in this.
Ross Douthat: So let me ask directly: should humanity continue to exist?
Peter Thiel: Yes.
Ross Douthat: Good.
Peter Thiel: But I also hope we can fundamentally solve these problems. So… I'm not too sure, yes—this is "transhumanism." Its ideal state is to completely transform our natural human bodies into immortal ones.
This kind of view is often compared to gender transition. For example, in transgender discussions, some are transvestites, expressing gender through clothing changes; others are transsexuals, who may undergo surgery to change their reproductive organs from male to female, or vice versa. We can certainly discuss what these surgeries change and how much they change.
But the criticism of these transitions is not that they are "strange" or "unnatural," but rather that they are too trivial. What we want is not just dressing up or organ replacement; we want a more thorough transformation—changing a person's inner self, thoughts, and even their entire body.
By the way, the orthodox Christian criticism of this transhumanism is not that it is too radical, but that it is far from enough. You have changed the body, but you have not changed the soul or the entire state of existence of the person.
Ross Douthat: Wait a minute. I basically agree with your point—that religion should be a friend to the ideas of scientific and technological progress. I also believe that any belief in divine providence must acknowledge the fact that we have indeed made progress and achieved many things that would have been almost unimaginable to our ancestors.
But the ultimate promise of Christianity seems to remain that, through God's grace, one can attain a perfected body and a perfected soul. And the person who tries to achieve this goal through a bunch of machines is likely to end up as a character in a dystopian story.
Peter Thiel: Well, let's clarify this question a bit more.
Ross Douthat: Of course, you could also have a heretical form of Christianity that would offer a different perspective.
Peter Thiel: Yes, I don't know. But I notice that the word "nature" does not appear even once in the entire Old Testament. In this sense, the Jewish-Christian tradition of revelation that I understand is actually a spiritual tradition that transcends nature. It speaks of transcendence, of overcoming. And the closest expression to "nature" might be: humans are fallen. From a Christian perspective, this "fallenness" can almost be seen as a natural state: humans are flawed, incomplete. This statement is true. But in a sense, faith means that you must use God's power to transcend it, to overcome it.
Ross Douthat: Exactly. But those who are currently trying to build a "machine god," certainly not including you, do not believe they are collaborating with Yahweh, the Lord of Hosts.
Peter Thiel: Of course, of course. But…
Ross Douthat: They believe they are relying on their own strength to construct immortality, right?
Peter Thiel: We are actually jumping across many layers of discussion. Returning to my earlier point, my criticism is actually that their ambitions are not enough. From a Christian perspective, they are far from radical enough.
Ross Douthat: But what they lack is moral and spiritual ambition.
Peter Thiel: Are they ambitious enough on a physical level? Are they really still transhumanists? To be honest, the idea of cryonics now seems more like a retro product from 1999; not many people are actually doing it anymore. So they are not transhumanists in the physical dimension. Perhaps they have turned to the path of "uploading consciousness"? But honestly, I would rather have my own body than a computer program that merely simulates me.
Ross Douthat: I agree with that.
Peter Thiel: So, in my view, uploading is even a step back from cryonics. But even so, this is part of the topic—and at this point, it becomes very difficult to assess. I'm not saying they are all making things up, that it is all fake, but I also don't…
Ross Douthat: Do you think part of it is fake?
Peter Thiel: I don't think it is fake. Because "fake" implies they are lying. But what I want to say is that these are not their true focal points.
Ross Douthat: Understood.
Peter Thiel: So we see a lot of abundance-oriented language, an optimistic narrative. A few weeks ago, I talked to Elon about this, and he said that in ten years, America will have a billion humanoid robots. I said: If that’s true, then you wouldn’t have to worry about the budget deficit anymore. Because by then, growth will be so rapid that economic growth itself will solve that problem. But he is still worried about the deficit. This certainly doesn’t mean he doesn’t believe in the prospect of "a billion robots," but it may mean that he hasn’t thought through the implications of that expectation, or he doesn’t believe it will bring about a fundamental transformation of the economic structure; it could also simply mean that there is a lot of uncertainty surrounding that expectation. So, in a way, these future blueprints haven’t been fully thought out.
If I were to criticize Silicon Valley a bit, it would be that it always avoids the question of "the meaning of technology." Discussions often get bogged down in micro-level details, like "What is the IQ-ELO score of AI?" or "How should we define AGI?" We get caught up in these endless technical details while neglecting the more mid-level meaningful questions, which are actually the truly important ones: for example, what does it mean for the budget deficit? What does it mean for the economic structure? What does it mean for geopolitics?
One question I recently discussed with you is whether artificial intelligence will change the course of some human wars. If we are entering an accelerating AI revolution, then on a military level—will other countries fall behind? From an optimistic perspective, this gap might create a deterrent: other countries will know they have already lost. But from a pessimistic perspective, it might actually prompt them to act more quickly—because they realize that they either act now or miss the opportunity forever. If they don’t go to war now, there may not be a chance later.
Regardless of the scenario—this will be an extremely significant event—but the problem is: we haven’t thought these issues through yet. We haven’t seriously discussed what AI means for geopolitics, nor have we seriously considered its impact on the macroeconomy. These are the questions I hope we can collectively explore more deeply.
Imagining the Apocalypse: Who is the True "Antichrist"?
Ross Douthat: You are indeed focused on another more macro issue—let’s continue along the line of "religion." You have recently talked frequently about the concept of the Antichrist—this is a term within a Christian context and is related to eschatology. What does "Antichrist" mean to you? How do you understand this concept?
Peter Thiel: How much time do we have?
Ross Douthat: We have enough time to talk about the Antichrist.
Peter Thiel: Well, I could actually talk about this topic for a long time. I think when we discuss existential risks or the challenges humanity faces, we always encounter a problem of expression frameworks. These risks are often framed in a "technology out of control, leading to dystopia" sci-fi grammar: for example, nuclear war, environmental disasters, or more specifically, climate change, although we can list many other risks, such as biological weapons and various different sci-fi apocalypse scenarios. Of course, artificial intelligence does indeed bring certain types of risks.
But I have always felt that if we really want to establish a framework for discussing "existential risks," we should also discuss the possibility of another kind of "bad singularity," which I refer to as "global totalitarian states." Because when facing all the aforementioned risks, the default political solution path often leads to a form of "single world governance." For example, how do we control nuclear weapons? Imagine a truly powerful United Nations that can control all nuclear weapons and coordinate governance through a global political order. Similarly, when we talk about how to respond to artificial intelligence, similar answers arise: we need "global computational governance," we need a world government to oversee all computers, record every keystroke, and ensure that no one can write dangerous AI programs. I have been wondering whether this path might actually be "jumping out of the frying pan and into the fire."
The atheistic version says: "One world or none." That was the title of a short film made by the Federation of American Scientists in the 1940s. The film opens with a nuclear explosion destroying the world, and the conclusion it draws is that to avoid destruction we must establish a world government. "One world or none." The Christian version is, in a sense, the same question: Antichrist or Armageddon? Either you accept the single world order dominated by the Antichrist, or we drift toward Armageddon (the final battlefield of the apocalypse in the Bible). Ultimately, "one world or none" and "Antichrist or apocalypse" are different expressions of the same question.
I have many thoughts on this issue, but there is one key flaw that has always bothered me: many narratives about the Antichrist skip over a core question, namely, how does the Antichrist actually take over the world? The books say he does it through mesmerizing speeches and hypnotic language, and everyone simply believes him. This sounds like a kind of demonic deus ex machina.
Ross Douthat: Yes, that is completely untenable.
Peter Thiel: Indeed, it is an obvious plot hole. But I think we have actually found an explanation for this flaw today: the way the Antichrist takes over the world is not through seducing people's hearts, but by constantly creating "apocalyptic anxiety." He will repeatedly say, "Armageddon is coming," "existential risks are imminent," and use this as a reason to propose regulating everything. He is not the "evil technological genius" image of the 17th or 18th century, not a mad scientist sitting in a lab creating machines of destruction. The reality is that people are much more cautious and fearful than that.
In our time, what truly inspires political resonance is not "technological liberation" but "technological fear." People say: "We need to stop; we can't let technology get out of control." In the 17th or 18th century, we might still have imagined a Dr. Strangelove or Edward Teller-type figure taking over the world; today, the more likely candidate for that role is Greta Thunberg.
Ross Douthat: I want to propose a possibility of an intermediate state. In the past, we feared the Antichrist as a technological wizard with superpowers; now we are more likely to fear the person who promises to "control technology and ensure safety." In your view, that leads to a state of general stagnation, right?
Peter Thiel: Yes, that description is closer to the path I think is happening.
Ross Douthat: But you believe people are still afraid of the "17th-century" Antichrist. We still fear a Dr. Strangelove-type character.
Peter Thiel: Yes, deep down, people still fear that old-fashioned image of the Antichrist.
Ross Douthat: But you are saying that the real Antichrist will exploit this fear and then say: you must follow me to avoid the Terminator, Skynet, and nuclear apocalypse.
Peter Thiel: Exactly.
Ross Douthat: My view is that, given the current state of the world, for people to believe that this fear is real, there needs to be some new form of technological breakthrough that makes that apocalyptic threat tangible. In other words, I can understand that if the world truly believes AI is about to destroy humanity, it could indeed turn to a leader promising "peace and regulation." But to reach that state, there must first be some "technological explosion," meaning that an accelerationist apocalyptic scenario must be partially realized.
To welcome the Antichrist you describe, the one who promises "peace and safety," the prerequisite is that technology must first make significant breakthroughs. For example, a fundamental flaw of 20th-century totalitarianism was knowledge: it could not grasp what was actually happening in the world. So you would need AI or other new technologies to solve that information bottleneck and provide the data to support totalitarian rule. In other words, even the "worst-case scenario" you envision relies on a genuine technological leap, which is then tamed to maintain a stagnant totalitarian order. We cannot jump there directly from the current state of technology.
Peter Thiel: Well, that path does exist—
Ross Douthat: And the current reality is that Greta Thunberg is still on a boat in the Mediterranean protesting against Israel. I really don’t see how "safety brought by AI," "peace brought by technology," or "safety under climate control" has become a strong, global political rallying cry. Without a real technological acceleration and without a visceral fear of disaster, such discourse itself is hard to take effect.
Peter Thiel: These issues are really difficult to judge, but I do feel that environmentalism is a very powerful force. I’m not sure if it is strong enough to bring about a "unified global" totalitarian state, but it is indeed powerful.
Ross Douthat: In the current situation, I don’t think it has reached that point.
Peter Thiel: I would say that in Europe, environmentalism might be the only thing people still believe in. As a vision of the future, faith in a "green future" even outweighs Sharia law or the totalitarian rule of certain countries. After all, a "future," by definition, is something that looks different from the present. The only three visions that count as a future in Europe are green transformation, Sharia law, and totalitarianism. And "green" is clearly the dominant narrative among them.
Ross Douthat: That is in a Europe that is declining and no longer dominating the world.
Peter Thiel: Of course, it is always nested in a specific context. We can see this by reviewing the history of nuclear technology development; ultimately, we did not move toward a global totalitarian ruling model. But by the 1970s, there was an explanation about technological stagnation that suggested: the rapid advancement of technology had made people feel fearful, and the "Baconian scientific spirit" had ended at Los Alamos.
Since then, society seems to have made a decision: to stop and no longer advance. And when Charles Manson took LSD in the late 1960s and ultimately went down a path of murder, what he saw in the hallucinogens was an extreme worldview of freedom: you could act like the anti-hero in Dostoevsky's novels, where everything is permitted.
Of course, not everyone became Charles Manson. But in the history I recount, everyone became as paranoid as he was, and ultimately, it was the hippies who dominated the culture…
Ross Douthat: But Charles Manson did not become the Antichrist, nor did he take over the world. We just talked about the apocalypse, and you…
Peter Thiel: But in my view, the story of the 1970s is that the hippies won. We landed on the moon in July 1969, and three weeks later, the Woodstock Festival opened. Looking back from today, that was indeed the watershed moment when technological progress came to a halt. The hippies won that cultural battle. Of course, I’m not saying that Charles Manson literally won.
Ross Douthat: Alright, let's return to the topic of the "Antichrist" and wrap it up. What you just said sounds like a "retreat" position: environmentalism is already anti-progress enough, so let's just talk about that for now. Fine, let's accept that judgment for the time being.
Peter Thiel: I'm not retreating; I'm just pointing out that this force is indeed very powerful.
Ross Douthat: But the reality is that we are not currently living under the rule of the "Antichrist." We are simply in a state of stagnation. And what you propose is that a worse situation may arise in the future: an order driven by fear that could make stagnation a permanent state. My view is that if such a situation were to occur, it would necessarily be accompanied by some kind of drastic technological leap, similar to the level of change seen at Los Alamos, which would genuinely instill fear in humanity.
I want to ask you a specific question: you are an investor in AI, deeply involved in the development of Palantir, military technology, surveillance technology, and war-related technologies. In the scenario you just described, an "Antichrist" exploits humanity's fear of technological change to establish a global order. It sounds like he would be likely to use the very tools you are building. He might say, "We no longer need technological progress," but then add, "I am quite satisfied with what Palantir has already delivered."
Isn't this a concern? Could there be a historical irony where the person who first publicly expressed concern about the "Antichrist" inadvertently accelerated his arrival?
Peter Thiel: You see, there are many different possibilities here. But I clearly do not think I am doing what you say I am doing.
Ross Douthat: I don't really think you are doing that either. I just want to understand how a society reaches the point of willingly accepting "permanent authoritarian rule."
Peter Thiel: This can be interpreted on many different levels. But is what I just said, as a macro description of global technological stagnation, really that absurd? The entire world has been under the yoke of "peace and security" for fifty years. This is the content of 1 Thessalonians 5:3, where the Antichrist's slogan is "peace and security."
We have handed over decision-making power to the FDA: it not only regulates drugs in the United States but also effectively controls global standards, as other countries default to obeying it. The U.S. Nuclear Regulatory Commission also effectively controls global nuclear power projects. You cannot design a modular nuclear reactor in Argentina and just start building it. Because they do not trust their own regulatory agencies, they will ultimately refer to U.S. opinions.
So, ultimately, this fifty years of technological stagnation is indeed a question that needs explaining. A common saying is that we have exhausted all innovative ideas. But another answer is that, on a cultural level, some force no longer allows us to move forward. This cultural explanation can be "bottom-up." Perhaps humanity itself has undergone a transformation, becoming more docile and more willing to accept limitations; or it can be "top-down." The government system itself has undergone some evolution, becoming a mechanism that drives stagnation.
You see, nuclear energy was supposed to be a key driver of the 21st century. But it has been "taken offline" globally.
Ross Douthat: So from your logic, we are actually living under a "mild version" of Antichrist rule. Then let me ask you a more ontological question: do you believe God governs history?
Peter Thiel: (pauses) I would say that human free will and choice always exist. These things are not completely framed by some form of determinism.
Ross Douthat: But God wouldn't let us live forever in this lukewarm, stagnant Antichrist system, right? The final chapter of the story shouldn't be like this, should it?
Peter Thiel: Attributing all causality to God is problematic. I can cite many scriptures to explain this, such as John 15:25, "They hated me without a cause." This means that those who persecute Christ have no real reason. If we understand this verse as an expression of "ultimate causality," those people would say: it is because God made me do this; God arranged everything.
But the traditional Christian understanding actually opposes Calvinistic determinism. God is not the instigator behind every event. If you say everything is caused by God…
Ross Douthat: Wait, but God does…
Peter Thiel: Then you are making God a scapegoat.
Ross Douthat: But God did bring Jesus into history because He did not want us to be forever trapped in a decaying, stagnant Roman Empire. So God will ultimately intervene to save us, right?
Peter Thiel: I am not a Calvinist. And—
Ross Douthat: But that doesn't equate to Calvinism; it's just a basic belief of Christianity. God wouldn't let us be forever staring at our phone screens, being lectured repeatedly by Greta Thunberg. He wouldn't leave us forever in that fate.
Peter Thiel: I believe that, for better or worse, the space for human action and human freedom always exists. If I truly believed that everything was predestined, then we would have to accept our fate—when the lion comes, we would just sit and meditate, waiting to be eaten. But I don't think we should give up resistance like that.
Ross Douthat: I agree with that. The reason I ask these questions is that I hope we can exercise human free will with hope in the process of resisting the "Antichrist"—you agree with that, right?
Peter Thiel: We can reach a consensus on that.
Ross Douthat: Great. Peter Thiel, thank you very much for your insights today.
Peter Thiel: Thank you.