1.6 Billion Bet on Human Brain AGI: The New Gamble of Crypto Veterans

智者解密
4 hours ago

On March 27, 2026, Jed McCaleb once again stepped to the forefront of a technological narrative, but this time not as the cryptocurrency veteran building exchanges and public chains: he has committed $1.6 billion to a bet on artificial general intelligence (AGI) modeled on principles of the human brain. The money, drawn from his roughly $3.9 billion cryptocurrency fortune, is split into $1 billion for developing a brain-inspired AGI architecture and $600 million for a ten-year neuroscience research program, a direct challenge to the mainstream path dominated by Transformer-based large models. McCaleb has stated plainly that today's AI "only solves prediction problems and lacks capabilities in planning and decision-making," and that he intends to use the human brain as a blueprint to rewrite the technical path to intelligence. The question is whether this extravagant bet from the crypto world will prove a costly idealistic experiment, or whether, in ten years, it will be the foundation that shifts the balance of power in AI.

From Mt. Gox to AGI: Jed's Cross-Boundary Bet and Long-Term Plan

In the cryptocurrency world, Jed McCaleb is almost synonymous with its early history. He co-created Mt. Gox, once the largest Bitcoin exchange, driving the first global flows of cryptocurrency assets in an era when centralized infrastructure was extremely rudimentary; he then turned to Ripple, attempting to reshape cross-border payments with distributed ledger technology, and later to Stellar, aiming to connect real-world assets and payment scenarios through a more open network. From exchanges to protocols to public chains, he has consistently focused on building new underlying rules rather than on speculative capital plays, blending technical idealism with engineering pragmatism in a way that defines his distinctive background.

It is this path of continuous entrepreneurship and early positioning that allowed McCaleb to accumulate roughly $3.9 billion in cryptocurrency wealth by the mid-to-late 2020s (per a single public source), providing the basis for his later $1.6 billion commitment to AGI. For most entrepreneurs, a sum of this scale would be a bet-the-family-fortune level of risk; for McCaleb, it reads as a continuation of his long-standing wager on "transforming underlying systems," this time shifting from financial infrastructure to the infrastructure of intelligence itself.

Of the $1.6 billion, $1 billion will go directly to developing a brain-inspired AGI architecture, aiming to build a system distinct from Transformers; the remaining $600 million is earmarked for neuroscience, supporting systematic research on human cognition and neural activity patterns over a ten-year cycle (all figures disclosed by a single source). In the current landscape of AI philanthropy and basic-research funding, a single donation of this size focused on one topic is extremely rare globally, approaching the scale of major R&D programs inside leading tech companies.

More critically, McCaleb does not treat this as a simple financial investment. It is orchestrated through his nonprofit Astera Institute, which locks the technical route, research agenda, and even the openness strategy onto a path he deems acceptable. He is not betting on whether a startup can go public; he is attempting to steer, with private capital, the discourse on future AGI: what type of intelligence counts as the "right" form of intelligence.

Human Brain vs. Transformer: A Direct Conflict of Two Views on Intelligence

From the outset, Astera Institute has not concealed its rejection of the mainstream Transformer route. They argue that today's large models remain, at bottom, pattern recognizers and probabilistic forecasters: by stacking massive parameters, computing power, and data, they produce highly realistic "next-step predictions" across text, images, and speech, but structurally lack capabilities for genuine goal planning, long-term reasoning, and decision-making. This criticism amounts to a wholesale rejection of the implicit consensus that intelligence equals an ever-larger prediction machine.

In the narrative clash between the human brain and algorithms, one side is the path of today's mainstream AI companies: continuing to scale data, model size, and computing power in pursuit of stronger generative capability and higher benchmark scores; the other is the path Astera is attempting to forge: directly mimicking the neural activity patterns and cognitive structures of the human brain and translating discoveries from neuroscience into an entirely new AI architecture. The former relies on engineering stacks and industrial-chain collaboration; the latter bets on breakthroughs in basic science, akin to two civilizations debating what kind of brain to build.

In public statements, McCaleb has repeatedly emphasized that "current AI only solves prediction problems and lacks planning and decision-making abilities," which is precisely the capability gap Astera aims to address. What they want is not merely a system that generates coherent text and images but an "agent" capable of understanding goals, breaking down tasks, and autonomously adjusting strategies in uncertain environments. This yearning for a "decision-making machine" also invites controversy: once a machine possesses higher-level autonomy, will it become easier or harder for humans to control?

The core controversy thus emerges: on the questions of human controllability and general intelligence forms, does a route closer to human brain structure indeed outperform the “pure algorithmic evolution” of large models? Or do we merely prefer to believe that “building what we are familiar with” is safer, simply because we know our own brains better? This is not only a technical choice but also a philosophical judgment about future power structures of intelligence.

$1.6 Billion Nonprofit Experiment: Open Research Colliding with the Giants' Closed Paradigms

Unlike most AI R&D projects driven by tech giants, Astera Institute chooses to advance this AGI and neuroscience experiment as a nonprofit organization. The official stance is clear: all research results related to this $1.6 billion will be “published as openly as possible,” rather than submerged in internal company codebases or patent pools. This indicates that from its inception, Astera places itself closer to the academic and open-source ecology rather than attempting to become the next closed large model platform.

This open orientation stands in stark contrast to the current mainstream landscape of the AI industry: large tech companies control data and computing power, dominate the training and iteration of proprietary large models, and encapsulate an increasing number of key capabilities behind APIs and commercial agreements, often leaving external researchers no option but to reverse-engineer internal mechanisms from black-box outputs. Astera's approach essentially uses massive private funding to recreate a discourse space closer to that of public research institutions, pushing back against the trend toward closure.

The ten-year neuroscience research plan also dictates that this experiment’s rhythm will not be a fast-paced narrative of “one demo per quarter,” but more akin to traditional notions of “slow and steady craftsmanship”: needing to achieve continuous progress in foundational cognition, neural coding, and brain region functions before gradually materializing into an engineerable AI architecture. This long-termism might result in the project lacking eye-catching product displays in the short term but reserves ample exploratory space for genuine paradigm leaps.

Once Astera fulfills its promise of “open publication,” the spillover effects merit attention:

● For the academic community, it may become a new source of knowledge outside the constraints of commercial giants, enabling researchers to engage with novel brain-inspired algorithms and experimental results without relying on companies' internal data and models.

● For the open-source community, theoretical insights and model ideas from Astera may be quickly absorbed, replicated, and extended, spawning a technical branch independent of the mainstream large-model empires, differentiated in architecture and concept even without computing power on the same scale.

Ten years from now, when people look back on this $1.6 billion, they may care more about whether it opened new channels of knowledge circulation than about whether it created a new AI company.

The Illusion of Controllable AGI? The Promise and Paradox of the Human Brain Paradigm

One of the core ideas supporting the narrative of Astera is that AI based on human brain principles is more likely to be controllable by humans. This notion is attractive because it resonates with the public’s intuitive fear of “uncontrollable AI”: if we can understand the internal operational logic of a system, and even directly compare its structure with our own on a biological level, then theoretically, it would be easier to predict its behavioral boundaries and design constraint mechanisms, thereby reducing the risk of technological backlash. This premise of “similarity brings controllability” forms the emotional support for the human brain paradigm.

From the perspectives of safety and interpretability, brain-inspired AGI does enjoy a narrative advantage. On one hand, brain science inherently focuses on causal chains: how particular neural activities relate to perception, memory, and decision-making, an approach naturally closer to how humans understand the world. On the other hand, if an AI architecture structurally resembles our brains, researchers might apply tools from neuroscience and cognitive science to analyze its "thinking pathways" directly. On alignment, systems close to human decision-making patterns are also believed more likely to accept the integration of human values and social norms.

On a technical level, however, all of these advantages remain possibilities rather than facts. No public information is available on concrete experimental data, model performance, or staged milestones, and Astera itself has yet to present quantifiable technical results or external evaluations. We do not yet know whether brain-inspired architectures can surpass or complement existing large models in planning, long-horizon reasoning, and decision-making on real tasks, nor whether their engineering complexity and costs are manageable. Sufficient caution toward the feasibility and risks of this route is therefore warranted, rather than preemptive endorsement of the technology.

Interestingly, if we pull back the lens to the cryptocurrency world, McCaleb’s current obsession with “human control” has both consistency and contradiction in relation to his historical stance against centralized power. The consistency lies in his attempts to weaken the dominance of a single institution over systems, from decentralized payment networks to the de-mediated value transfer infrastructure; in contrast, for AGI, he hopes to enable “the entirety of humanity,” rather than a handful of technology companies, to possess firmer control. The contradiction arises from the fact that this entire endeavor is still initiated and led by a few ultra-large capital holders—when private wealth shapes the AI safety agenda across boundaries, are we truly distancing ourselves from centralization, or building a new technological oligopoly? This tension will accompany the entire decade of Astera.

The Next Steps of Crypto Whales Stirring the AGI Game

Looking back today, McCaleb’s $1.6 billion, ten-year cycle long-term funding has already leveraged the existing AI landscape along three dimensions: in terms of technical routes, it publicly supports human brain-inspired AGI, countering the orthodoxy of Transformer-driven large models; in terms of open models, it advocates a nonprofit and open publication stance, offsetting the increasingly closed R&D ecosystems of large tech companies; and in terms of power structures, it signifies that cryptocurrency capital is directly involving itself in setting foundational rules for AGI at such a high level for the first time, rather than merely chasing AI trends at the application layer.

This also poses a longer-term question: now that the first cryptocurrency billionaire has bet billions on fundamental research and AGI safety, will more holders of on-chain wealth shift their resources from trading, mining, and speculation to labs and research institutes? In recent years the AI narrative has been consumed repeatedly in the cryptocurrency market, and McCaleb's path offers an alternative: not issuing tokens around the AI concept, but reshaping the underlying technology and governance discussion of AI with his accumulated wealth. If this model is replicated, "cryptocurrency capital" may emerge as a new source in the funding landscape for basic science.

Ten years from now, we may face two possible world scenarios. One is that the human brain paradigm of AGI repeatedly encounters failures in experiments and engineering, while the existing large model route continues to advance in scale and optimization, leaving Astera with a rich but marginalized library of neuroscience literature; the other is that, at a critical juncture, the human brain-inspired architecture achieves breakthroughs, becoming a new lever to counterbalance or even partially replace the large model empires, redirecting the evolution of intelligent technologies back to the question of “how to understand the brain.” Which scenario is closer to reality remains to be determined by sufficient evidence.

For readers, the crucial thing may be not to align immediately with one technical route, but to separate narrative from actual progress: absent public experimental data and clear milestones, stay calm about grand promises, and keep tracking Astera's subsequent papers, open-source releases, and peer review rather than being swayed by the "crypto whale + AGI" storyline. What can truly alter the power dynamics of AI is not one individual's gamble, but the public knowledge and institutional arrangements that eventually emerge from it.


Disclaimer: This article represents the author's personal views only and does not reflect the position or views of this platform. It is provided for information sharing only and does not constitute investment advice of any kind to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please send the relevant proof of rights and identity to support@aicoin.com, and platform staff will investigate.

