Written by: Nancy, PANews
Social media today appears lively, yet the "human touch" is gradually disappearing. As AI-generated junk floods mainstream platforms, the prevalence of fake and clickbait content has sapped many real users' desire to share, and some are beginning to leave.
Against this flood of AI junk, simple algorithmic moderation has proven inadequate. Recently, the prominent venture capital firm a16z proposed the concept of Staked Media, which uses real money to filter out AI noise, and the idea has drawn market attention.
As AI begins to self-replicate, the internet is being inundated with "pre-fabricated content"
"AI has started to mimic AI."
Recently, moderators of Reddit, which Chinese netizens call "America's Tieba," have been overwhelmed in their struggle against a massive influx of AI-generated content. In the r/AmItheAsshole community, which has 24 million members, moderators complain that over half of the content is AI-generated.
In just the first half of 2025, Reddit deleted over 40 million pieces of junk and false content. This phenomenon has also spread like a virus to platforms such as Facebook, Instagram, X, YouTube, Xiaohongshu, and TikTok.
In an era where information seems to explode while genuine voices grow ever scarcer, AI-generated junk permeates the entire internet, quietly eroding how people think. With the proliferation of generative tools like ChatGPT and Gemini, handcrafted content creation is being displaced by AI and turned into an assembly line.
According to the latest research from SEO company Graphite, since ChatGPT was made public at the end of 2022, the proportion of AI-generated articles has surged from about 10% that year to over 40% by 2024. As of May this year, this proportion has risen to 52%.
However, most of this AI-generated content resembles a "pre-packaged meal": fixed recipe, standardized production, no soul, and dull to read. Moreover, today's AI is no longer clumsy; it can mimic human tones and even replicate emotions. From travel guides to relationship disputes, and even deliberately stoking social division for clicks, AI handles it all effortlessly.
More critically, when AI hallucinates, it can spout nonsense with a straight face, creating not only information junk but also a crisis of trust.
In the era of AI proliferation, building media trust with real money
Faced with rampant AI junk on the internet, even major platforms that have updated their moderation mechanisms and brought in AI assistance have seen limited results. In a16z crypto's annual report, Robert Hackett introduced the concept of Staked Media. (Related reading: a16z: 17 Exciting New Directions in Crypto for 2026)
The report points out that traditional media models tout objectivity, but their drawbacks have long been evident. The internet has given everyone a voice, and more and more practitioners, creators, and builders convey their views directly to the public, views that reflect their own stakes in the world. Ironically, audiences respect them not despite their vested interests but because of them.
This new trend is not merely the rise of social media but the emergence of crypto tools that let people make publicly verifiable commitments. As AI drives the cost of mass content generation toward zero (content arguing for any perspective, under any identity, can be produced on demand), speech alone, whether human or bot, is no longer convincing. Tokenized assets, programmable lock-ups, prediction markets, and on-chain records provide a more solid foundation for trust: a commentator can prove consistency by backing views with funds; a podcaster can lock tokens to show they will not opportunistically flip positions or manipulate markets; an analyst can bind predictions to publicly settled markets, creating an auditable record.
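To make the "programmable lock-up" idea concrete, here is a minimal Python sketch, not taken from the a16z report: the TokenLockup class and its parameters are hypothetical, and a real system would enforce the lock in a smart contract rather than in application code.

```python
import time

class TokenLockup:
    """Hypothetical time-locked commitment: a creator locks tokens
    until unlock_time to signal they won't flip positions early."""

    def __init__(self, owner: str, amount: float, unlock_time: float):
        self.owner = owner
        self.amount = amount
        self.unlock_time = unlock_time  # Unix timestamp

    def withdraw(self, now: float | None = None) -> float:
        """Release the tokens only after the lock expires."""
        now = time.time() if now is None else now
        if now < self.unlock_time:
            raise PermissionError("Tokens still locked; the commitment stands.")
        released, self.amount = self.amount, 0.0
        return released

# A podcaster locks 1,000 tokens for 90 days as a public commitment.
lock = TokenLockup("podcaster.eth", 1_000.0, time.time() + 90 * 86_400)
```

Because the lock's terms are public and cannot be quietly undone, the lock itself, rather than the creator's word, becomes the trust signal.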
This is the early form of what is called "Staked Media": such media not only embraces the idea of vested interests but can also provide media forms that offer tangible proof. In this model, credibility does not come from pretending to be neutral or from baseless claims, but from publicly transparent and verifiable commitments to interests. Staked Media will not replace other forms of media but will complement the existing media ecosystem. It sends a new signal: no longer "trust me, I'm neutral," but "this is the risk I'm willing to take, and this is how you can verify that what I say is true."
Robert Hackett predicts that this field will keep growing, just as 20th-century mass media adapted to the technology and incentives of its time (mass audiences and advertisers) while superficially pursuing "objectivity" and "neutrality." Today, AI makes it easy to create or fabricate any content; what is truly scarce is evidence. Creators who can make verifiable commitments and genuinely back their claims will have the advantage.
Using staking to raise the cost of fraud, with a proposed dual verification mechanism for content
The idea has also won recognition from crypto practitioners, who have offered concrete suggestions.
Crypto analyst Chen Jian noted that from major media outlets to self-media, fake news is rampant, and a single event can be reported with endless twists. The root cause is that fraud is cheap and its returns are high. If each information disseminator is viewed as a node, why not apply blockchain's PoS (Proof of Stake) economic game to the problem? He suggested requiring each node to stake funds before expressing opinions: the more they stake, the higher their trustworthiness. Others can gather evidence to challenge them, and if a challenge succeeds, the system confiscates the staked funds and rewards the challenger. This process does raise privacy and efficiency issues; current solutions such as Swarm Network combine ZK and AI, protecting participant privacy while using multi-model data analysis to assist verification, similar to Grok's fact-checking function on X (Twitter).
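Below is a minimal Python sketch of the stake-challenge-slash game Chen Jian describes. The names, the reward share, and the settlement step are assumptions for illustration; a real system would decide the verdict through voting or arbitration, not a boolean parameter.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    author: str
    text: str
    stake: float                   # funds the author locked behind the claim
    challenger: str | None = None
    challenger_stake: float = 0.0

class StakedFeed:
    """Toy PoS-style economic game: authors stake to publish,
    challengers stake to dispute, and the loser's stake is slashed."""

    REWARD_SHARE = 0.5  # fraction of the slashed stake paid to the winner

    def __init__(self):
        self.claims: list[Claim] = []

    def publish(self, author: str, text: str, stake: float) -> Claim:
        if stake <= 0:
            raise ValueError("A claim must be backed by a positive stake.")
        claim = Claim(author, text, stake)
        self.claims.append(claim)
        return claim

    def challenge(self, claim: Claim, challenger: str, stake: float) -> None:
        claim.challenger = challenger
        claim.challenger_stake = stake

    def resolve(self, claim: Claim, claim_is_false: bool) -> dict[str, float]:
        """Settle a challenge and return each party's payout."""
        if claim_is_false:
            # The author is slashed; the challenger recovers their stake
            # plus a share of the slashed funds.
            reward = claim.stake * self.REWARD_SHARE
            return {claim.challenger: claim.challenger_stake + reward,
                    claim.author: 0.0}
        # The challenge failed: the challenger is slashed instead.
        reward = claim.challenger_stake * self.REWARD_SHARE
        return {claim.author: claim.stake + reward,
                claim.challenger: 0.0}
```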
Crypto KOL Lan Hu likewise believes that cryptographic tools such as zero-knowledge proofs (ZK) can let media outlets or individuals prove their credibility online, akin to "signing a written agreement" that cannot be tampered with once on-chain. But the agreement alone is not enough; a certain amount of assets, such as ETH, USDC, or other crypto tokens, must also be staked as collateral.
The logic of the staking mechanism is straightforward: if published content is proven to be fake news, the staked assets are confiscated; if it proves true and reliable, the stake is returned after a set period, possibly with additional rewards (such as tokens issued by the Staked Media platform or a share of funds confiscated from fraudsters). This creates an environment that rewards truth-telling. For media, staking does add a financial cost, but what it buys is genuine audience trust, which matters most in an era of rampant fake news.
For example, a YouTuber publishing a product-recommendation video would "sign a written agreement" on Ethereum and stake ETH or USDC; if the video turns out to be false, the stake is confiscated, so viewers can trust its authenticity. A blogger recommending a phone might stake $100 worth of ETH and declare: "If this phone's beauty-filter function falls short of expectations, I will compensate you." Seeing the staked funds, viewers naturally find the claim more credible, and if the content is AI-fabricated, the blogger loses the stake.
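Continuing the toy StakedFeed sketch above, the blogger example plays out like this (the amounts and account names are illustrative):

```python
feed = StakedFeed()

# The blogger stakes $100 worth of ETH behind a product claim.
claim = feed.publish("blogger.eth",
                     "This phone's beauty filter works as advertised.",
                     stake=100.0)

# A viewer gathers evidence and disputes the claim with a $50 stake.
feed.challenge(claim, "viewer.eth", stake=50.0)

# The claim is judged false: the blogger's $100 is slashed, and the
# challenger gets their $50 back plus half of the slashed stake.
payouts = feed.resolve(claim, claim_is_false=True)
print(payouts)  # {'viewer.eth': 100.0, 'blogger.eth': 0.0}
```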
For determining the authenticity of content, Lan Hu suggests a "community + algorithm" dual verification mechanism. On the community side, users with voting rights (who must stake crypto assets to obtain them) vote on-chain, and if a set proportion (e.g., 60%) deems the content fake, it is ruled false. On the algorithm side, data analysis assists in verifying the voting results. For arbitration, a content creator who disagrees with the verdict can appeal to an expert committee, and if malicious manipulation of voters is discovered, the manipulating voters' assets are confiscated. Both voters and the expert committee receive rewards, funded by confiscated stakes and media tokens. In addition, content creators can use zero-knowledge proofs to generate authenticity proofs at the source, for example proving the true origin of a generated video.
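A minimal sketch of the community side, stake-weighted on-chain voting against a 60% threshold; the weighting rule and the threshold default are assumptions, since the proposal leaves them open:

```python
from dataclasses import dataclass

@dataclass
class Vote:
    voter: str
    stake: float     # voting weight = staked assets
    says_fake: bool

def tally(votes: list[Vote], threshold: float = 0.60) -> str:
    """Stake-weighted verdict: content is ruled fake if the staked
    weight voting 'fake' reaches the threshold (60% by default)."""
    total = sum(v.stake for v in votes)
    if total == 0:
        return "no quorum"
    fake_weight = sum(v.stake for v in votes if v.says_fake)
    return "fake" if fake_weight / total >= threshold else "authentic"

votes = [Vote("a", 100, True), Vote("b", 40, True), Vote("c", 60, False)]
print(tally(votes))  # 140/200 = 70% >= 60% -> 'fake'
```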
For well-funded actors who might try to exploit the staking mechanism to commit fraud, Lan Hu suggests raising the long-term cost of cheating, not only in funds but also in time, historical records, reputation, and legal liability. For instance, penalized accounts are flagged and must stake more for subsequent content; an account penalized repeatedly sees the credibility of its content drop sharply; and in severe cases, legal accountability can be pursued.
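One way to encode "penalized accounts must stake more" is a multiplier that grows with each offense. The doubling factor and the credibility decay rate below are assumed parameters for illustration, not part of Lan Hu's proposal:

```python
def required_stake(base_stake: float, penalties: int) -> float:
    """Each past penalty doubles the stake required to publish again."""
    return base_stake * (2 ** penalties)

def credibility(penalties: int, decay: float = 0.5) -> float:
    """The displayed credibility score halves with each penalty."""
    return decay ** penalties

for p in range(4):
    print(p, required_stake(100.0, p), credibility(p))
# 0 100.0 1.0
# 1 200.0 0.5
# 2 400.0 0.25
# 3 800.0 0.125
```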