
This may be the last chance for ordinary people to understand AI in advance.

律动BlockBeats
1 month ago
Original Title: Something Big Is Happening
Original Link: @mattshumer_
Translation: Peggy, BlockBeats

Editor's Note: Many people's judgments about AI still linger at the stage of "it seems somewhat useful, but that's about it." However, most people are unaware that a change capable of rearranging daily life has quietly begun.

This article is not an abstract discussion of whether AI will replace humans. It is a firsthand account, from a practitioner at the forefront of AI research and applications, of the real changes taking place: model capabilities are making non-linear leaps in short spans of time; AI is no longer just a tool but can independently complete complex tasks and even help build the next generation of AI; and once-stable professional boundaries are rapidly loosening.

This time, the change is not a gradual technical upgrade but more like a shift in operational logic. Regardless of whether one is in the tech industry, anyone whose work revolves around "screens" cannot remain unaffected. When AI has already started to complete tasks for you, how are you preparing to coexist with it?

Below is the original text:

Please recall February 2020.

If you were paying very close attention at that time, you might have noticed a few people talking about a virus spreading overseas. But the vast majority paid little attention. The stock market was performing well, children went to school as usual, you dined out, shook hands, exchanged pleasantries, and planned trips. If someone told you he was hoarding toilet paper, you would probably think he had been spending too much time in some weird corner of the internet. Yet within about three weeks, the whole world changed completely. Offices closed, children went home, and life was reorganized into a form you simply would not have believed if someone had described it a month earlier.

I believe we are now at the "isn't this a bit exaggerated?" stage again, and this time the scale of the event will far exceed that of the COVID-19 pandemic.

I have spent six years founding and investing in AI companies, and I live in this world. I write this for the people in my life who are not in the industry: my family, friends, and those I care about. They keep asking me, "What is going on with AI?" and my answers have not genuinely reflected what is happening. I always give a polite version, a cocktail-party version, because if I told them the real situation, it would sound like I was crazy. For a long time, I told myself that was a good enough reason to keep what was really happening to myself. But now the gap between what I have been saying and reality has become impossible to ignore. The people I care about deserve to know what comes next, even if it sounds crazy.

Let me make one thing clear: although I work in the AI field, I have almost no influence over what is about to happen, and the vast majority of people in the industry are in the same boat. The ones truly shaping the future are a handful of individuals: a few hundred researchers scattered across a handful of companies—like OpenAI, Anthropic, Google DeepMind, and a few other institutions. A single training task, completed by a small team over a few months, can create an AI system capable of changing the entire technological trajectory. Most of us practitioners are building on foundations laid by others. We are just like you, watching it all unfold from the sidelines—only because we are closer to it, we feel the ground shaking first.

But now is the time. Not a "someday we should talk about this" moment, but a "this is happening, you must understand it now" moment.

I know this is real because it first happened to me.

There is one thing that almost everyone outside the tech circle hasn't realized: the reason so many people in various industries are sounding the alarm now is that this change has already taken place in our lives. We are not making predictions; we are telling you: these events have already happened in our work, and you may very well be next.

For years, AI has been progressing steadily. Occasionally, there are major leaps, but the intervals between each are long enough for you to slowly digest them. But by 2025, new technologies for building models emerged, and the speed of progress accelerated sharply. Then faster, and then faster still. Each new model is not just slightly better than its predecessor, but significantly better, and they are released at shorter intervals. I increasingly use AI, yet I communicate with it less and less, watching it handle tasks that I originally thought had to rely on my expertise to complete.

Then, on February 5, two top AI laboratories released new models on the same day: GPT-5.3 Codex from OpenAI and Opus 4.6 from Anthropic (the company that develops Claude). At that moment, everything clicked. It was not like a light suddenly being switched on; it was more like realizing the water had quietly risen to your chest.

I no longer need to personally complete the technical parts of my work. I describe in plain English what I want to build, and it... just appears. Not a draft that needs my revisions, but a finished product. I tell the AI the goal, leave the computer for four hours, and when I return, the work is done—and done well, better than I could do myself, without any modifications. A few months ago, I still had to communicate back and forth with the AI, guiding and adjusting; now, I simply describe the outcome and walk away.

Let me give you a specific example of what this looks like in practice. I would say to the AI: "I want to build an application like this; it should do these things and look roughly like this. You figure out the user flow and the design." And it really did it. It wrote tens of thousands to hundreds of thousands of lines of code. Even more incredible, and unimaginable a year ago: it would open the application itself, click buttons, test functionality, and use it like a person. If it felt something looked off or was not working smoothly, it would go back and modify it, iterating on its own like a developer, continuously correcting and refining until it was satisfied. Only when it deemed the application up to its standards would it come back and tell me: "You can test it now." And when I did, it was usually perfect.

I am not exaggerating. This is what my actual workday looked like this past Monday.

But what truly shocked me was the model released last week (GPT-5.3 Codex). It doesn't just execute commands; it is making judgments. For the first time, it gave me the feeling that it possesses something akin to "taste"—that intuitive judgment of "what is the right choice" that people have consistently said AI could never possess. This model has acquired it, or at least has approached a level where this distinction begins to matter less.

I have always been among the earliest adopters of AI tools. But in the past few months, I have been thoroughly astounded. This is no longer incremental improvement; it is something entirely different.

Why does this matter to you—even if you are not in the tech industry?

AI laboratories have made a very explicit choice: they prioritized making AI good at writing code. The reason is simple: building AI itself requires a lot of code. If AI can write this code, it can help build its next generation: smarter versions, generating better code, which in turn builds even smarter versions. Mastering programming is the key to unlocking everything. This is why they focused on this first. The reason my work began changing before yours is not that they specifically targeted software engineers but rather a side effect of the direction they aimed at.

Now, this step has been completed. And they are turning to all other fields.

In the past year, the feeling that tech workers have experienced—watching AI transition from a "useful tool" to "better at doing my job than I am"—is about to become everyone's experience. Law, finance, healthcare, accounting, consulting, writing, design, analysis, customer service…not ten years from now. The people building these systems say it will be one to five years. Some say even shorter. Given the changes I have seen in the past few months, I think "shorter" is more likely.

"But I have used AI and didn't find it impressive."

I have heard this countless times, and I completely understand because it used to be true.

If you used ChatGPT in 2023 or early 2024 and thought, "it makes mistakes" or "it's just okay," you weren't wrong. Those early versions did indeed have limitations, generating hallucinations and confidently stating absurdities.

But that was two years ago. In the timescale of AI, that was almost prehistoric.

The models available today are completely different even from the versions of six months ago. The debate over whether AI is "genuinely still improving" or "has hit a ceiling," which lasted more than a year, has ended. It is completely over. Those still saying otherwise either have not used current models at all, are intentionally downplaying reality, or are stuck in 2024 experiences that are no longer relevant. I am not belittling anyone; I want to emphasize that the gap between public perception and reality has become dangerously large, because it prevents people from preparing in advance.

Another issue is that most people are using the free versions of AI tools. The free version is more than a year behind what paid users can access. Judging the level of AI based on the free version of ChatGPT is like evaluating the progress of smartphones using a flip phone. Those who pay for the strongest tools and use them in real work every day are very clear about what will happen next.

I often think of a lawyer friend of mine. I kept urging him to seriously use AI at his firm, but he could always find reasons: it was not suited to his niche, it made mistakes during testing, it did not understand the nuances of his work. I understand. But partners from large law firms have actively approached me for consultations because they tried the latest versions and saw where things are headed. One managing partner at a major firm spends several hours a day using AI. He says it is like having an entire team of junior attorneys at his disposal at all times. He does not use AI as a toy; he uses it because it genuinely works. He told me something that has stuck with me: every few months, its capability at his work noticeably improves. At this rate, he expects AI to handle most of his work soon, and he is a managing partner with decades of experience. He is not panicking, but he is paying very, very serious attention.

Those truly on the cutting edge of their industries—those who are experimenting seriously—are not downplaying any of this. They have already been astonished by what AI can currently do and are adjusting their positions accordingly.

How Fast Is It Really?

I want to make this speed more tangible because if you haven't been observing closely, this is the hardest part to believe.

2022: AI couldn't even get basic arithmetic right, seriously telling you that 7×8=54.

2023: It can pass the bar exam.

2024: It can write functional software, explain graduate-level scientific problems.

By the end of 2025: Some of the world's top engineers say they have handed over most programming tasks to AI.

On February 5, 2026: The arrival of the new model makes everything that preceded it look like another era.

If you haven't seriously used AI in the past few months, the AI of today is almost unrecognizable to you.

There is an organization called METR that measures this with data. They track how long a model can work on real tasks entirely without human intervention, measured by how long the same task would take a human expert. About a year ago, that figure was 10 minutes; then it became 1 hour; then several hours. The latest measurement (November 2025, Claude Opus 4.5) shows that AI can now complete tasks that take human experts nearly 5 hours. This number roughly doubles every 7 months, and recent data suggest it may be accelerating toward doubling every 4 months.

And this does not even account for the model just released this week. Based on my own usage experience, this leap is quite significant. I expect METR’s next update will show another noticeable leap.

If you extrapolate this trend, which has been ongoing for years without any sign of slowing down, then: within a year, AI may be able to work independently for a few days; within two years, it may sustain work for weeks; within three years, it may take on projects lasting several months.
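The extrapolation above is plain compound-growth arithmetic, and it can be sketched in a few lines. The 5-hour baseline (November 2025) and the 7- and 4-month doubling times come from the article itself; the function name and the specific horizons printed are illustrative assumptions, not METR's own methodology or projections.

```python
# Sketch of the task-horizon extrapolation described in the article.
# Assumptions (from the article, not from METR directly):
#   - baseline: ~5 hours of human-expert task time (Nov 2025)
#   - horizon doubles every 7 months (or every 4, in the faster scenario)

def task_horizon_hours(months_ahead: float,
                       doubling_months: float,
                       baseline_hours: float = 5.0) -> float:
    """Projected task horizon after `months_ahead` months of exponential growth."""
    return baseline_hours * 2 ** (months_ahead / doubling_months)

if __name__ == "__main__":
    for months in (12, 24, 36):
        slow = task_horizon_hours(months, doubling_months=7)
        fast = task_horizon_hours(months, doubling_months=4)
        print(f"+{months:2d} months: {slow:7.0f} h (7-month doubling), "
              f"{fast:7.0f} h (4-month doubling)")
```

Under the 4-month doubling assumption, one year out gives 40 hours (roughly a work week of expert effort), which is where the "a few days of independent work within a year" claim comes from; the multi-week and multi-month figures follow the same curve.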

Dario Amodei, CEO of Anthropic, has said, "AI that is clearly superior to almost all humans at nearly all tasks" is on the timeline for 2026 or 2027.

Think about this judgment. If AI is smarter than most PhDs, do you really think it cannot handle most office work?

AI Is Building the Next Generation of AI

There is one more thing that I believe is the most important yet least understood progress.

On February 5, when OpenAI released GPT-5.3 Codex, they wrote this in the technical documentation: "GPT-5.3-Codex is our first model that plays a crucial role in its own creation process. The Codex team uses earlier versions to debug its training process, manage deployments, and diagnose test results and evaluations."

Read it again: AI is participating in its own construction.

This is not a prediction about the future; OpenAI is telling you that the AI they just released has already been used to create itself. One of the core factors in making AI stronger is using intelligence for AI research and development. And now, AI is smart enough to significantly push its own evolution.

Amodei has also stated that AI is now writing "a lot of code" at his company, and that the feedback loop between the current generation of AI and the next is "accelerating every month." He believes we may be "1–2 years away from the existing generation of AI autonomously building the next generation."

One generation helping to build the next—smarter next generations that build even faster subsequent ones—researchers call this an intelligence explosion. And those most aware of all this are the very people who are building it, who believe this process has already begun.

What This Means for Your Work

I will put it straightforwardly, because you deserve honesty, not comfort.

Dario Amodei, perhaps the CEO who values safety most in the entire AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. Many industry insiders believe this judgment has already become conservative. Given the capabilities of the latest models, the technological conditions for large-scale disruption may already be in place by the end of this year. It will take time for this to translate into the economy, but the underlying capabilities are arriving right now.

This is unlike any previous wave of automation. The reason is that AI is not replacing a specific skill; it is a general replacement for cognitive labor. Moreover, it is becoming stronger in all areas simultaneously. After factory automation, displaced workers could still transition to office jobs; after the internet disrupted retail, people could shift to logistics or services. But AI does not leave a "safe vacancy." Whatever you learn, it is simultaneously getting better at it.

Here are a few specific examples—but please remember this is just an example, not a complete list. If your job is not mentioned, it does not mean it is safe. Nearly all knowledge work is being affected.

Law: AI can already read contracts, summarize case law, draft legal documents, and conduct legal research at a level close to that of a junior lawyer. That managing partner uses AI not for fun, but because it has outperformed his assistants at many tasks.

Financial Analysis: Modeling, data analysis, investment memos, report generation—AI is capable of all and improving rapidly.

Writing and Content: Marketing copy, reports, news, technical writing—the quality is already so high that many professionals cannot distinguish between content written by humans or AI.

Software Engineering: This is the field I know best. A year ago, AI struggled to write a few lines of error-free code; now it writes hundreds of thousands of lines of working code. Complex, multi-day projects are already heavily automated. A few years from now, there will be far fewer programmer positions than there are today.

Medical Analysis: Imaging interpretation, lab result analysis, diagnostic suggestions, literature reviews—AI is nearing or exceeding human capability in multiple areas.

Customer Service: Truly capable AI customer service—not the infuriating bots of five years ago—has already begun deployment, able to handle complex, multi-step issues.

Many still believe that some things are safe: judgment, creativity, strategic thinking, empathetic abilities. I used to say that too. But now, I am not so sure.

The latest generation of models can already make decisions that feel like "judgments," exhibiting something akin to "taste"—an intuition for "what constitutes the right choice." A year ago, this was unfathomable. My current rule of thumb is: if AI even vaguely shows a capability today, the next generation will genuinely be strong in that area. This is exponential advancement, not linear progression.

Can AI replicate the deep empathy of humans? Can it replace the trust built over years of relationships? I don’t know. Maybe not. But I have already seen people start to treat AI as emotional support, a consultation partner, or even a companion. This trend will only continue to strengthen.

I believe an honest conclusion is that any work done on a computer is not safe in the medium term. If your work revolves around reading, writing, analyzing, decision-making, or communicating via keyboard, then AI has already begun to infiltrate significant parts of it. The timeline is not "someday in the future," but it has already begun.

Eventually, robots will also take over physical labor. They haven't fully done so yet, but in the field of AI, "almost there" often becomes "already accomplished" much faster than anyone expects.

What You Should Really Do

I am not writing this to make you feel powerless, but because I believe your biggest advantage right now is "early": understanding earlier, using earlier, adapting earlier.

Begin using AI seriously; do not just think of it as a search engine. Subscribe to the paid versions of Claude or ChatGPT for $20 a month. Two things become immediately important:

First, ensure you are using the most advanced model, not the default faster but weaker version. Go to settings or the model selector and choose the one with the strongest capabilities (currently ChatGPT's GPT-5.2 or Claude's Opus 4.6, but this will change every few months).

Second, and more importantly: do not just ask random questions. This is the mistake most people make. They treat AI like Google and then do not understand what everyone is excited about. Instead, push it into your real work. If you are a lawyer, throw contracts at it and let it find all terms that could harm your clients; if you are in finance, give it a chaotic spreadsheet to model; if you are a manager, paste in your team’s quarterly data and let it tell a story. The leading individuals are not casually playing with AI; they are actively seeking opportunities to automate work that would have taken hours.

Do not assume in advance that it cannot do something just because it sounds too difficult; give it a try. The first time may not be perfect, and that is okay: iterate, rewrite your prompts, add context, try again. You will likely be astonished by the results. And remember: if it can barely do something today, it will almost certainly be close to perfect in six months.

This could be the most important year of your career. I do not want to pressure you, but there is currently a brief window: the majority of people in most companies are still ignoring this. The person who walks into the meeting room and says, "I used AI to complete three days of analysis in one hour" will immediately become the most valuable person in the room. Not later, but now. Learn these tools, use them proficiently, and showcase possibilities. If you get ahead of the curve, this will be your path upward. This window won't remain indefinitely; once everyone reacts, the advantage will disappear.

Do not let ego get in the way. That managing partner at the law firm does not feel that using AI every day undermines his identity; if anything, his long experience helps him see the risks more clearly. The ones who will truly be left behind are those who refuse to engage: who treat AI as a gimmick, think using it diminishes their professionalism, or believe their industry is "very special." No industry is immune.

Manage your finances wisely. I am not a financial advisor, nor do I mean to scare you into making radical decisions. But if you even partially believe your industry may face severe disruptions in the coming years, financial resilience is far more important than it was a year ago. Try to increase savings, be cautious about taking on new debt based on the assumption that "current income is guaranteed," and think about whether your fixed costs provide you flexibility or lock you in.

Consider what is harder to replace: relationships and trust built over years, work requiring physical presence, positions that require licenses and responsible signatures, highly regulated industries, and those whose adoption speed will be slowed by compliance and institutional inertia. None of these are permanent shields, but they can buy you time. And at present, time is the most precious asset—provided you use it to adapt rather than pretend this is all non-existent.

Reconsider what you are telling your children. The traditional paths—good grades, good universities, stable professional jobs—are exactly pointing towards the positions most susceptible to disruption. I am not saying education is unimportant, but the most crucial ability for the next generation is to learn to work alongside these tools and pursue what they truly love. No one knows what the job market will look like in ten years, but those most likely to thrive are those who are curious, adaptable, and skilled at using AI to address their concerns. Teach children to be creators and learners, rather than optimizing for a career path that might not exist.

Your dreams are actually closer than you think. I have talked about many risks earlier; now let's mention the other side: if you have always wanted to do something but lacked the skills or funding, that threshold has basically evaporated. You can describe an application to AI, and within an hour, you can have a working version; if you want to write a book but lack the time or are stuck in the writing process, you can co-create it with AI; if you want to learn a new skill, the best mentors in the world are now available for $20 a month, 24/7, with infinite patience. Knowledge is nearly free, and creative tools are unprecedentedly cheap. Things you once thought were "too difficult," "too expensive," or "not in your field" now deserve a try. Perhaps, in a world where traditional paths have been disrupted, someone who spends a year diligently crafting what they love will be in a better position than someone clinging to outdated job descriptions.

Develop a habit of adapting to change. This may be the most important point. The specific tools themselves are not that crucial; what matters is the ability to learn new tools quickly. AI will continue to change rapidly. Today's models will be outdated in a year; today's workflows will be upended. Ultimately, those who move most steadily will not be the ones who master a single tool, but those who can adapt to changes themselves. Get into the habit of continuously trying new things; even if your current methods are still effective, try something new. Continuously be a beginner. This adaptability is the closest thing we have to "long-term advantage" right now.

Make a simple commitment to yourself: spend one hour each day genuinely using AI. Not browsing the news, not scrolling through opinions, but using it. Every day, try to get it to do something new—something you are uncertain it can accomplish. After six months of persistence, your understanding of the future will exceed that of 99% of those around you. This is not an exaggeration; almost no one is doing this currently.

The Bigger Picture

I have focused on work because it most directly affects lives. But the scope of this matter goes far beyond that.

Dario Amodei has a thought experiment that haunts me. Imagine in 2027, a new nation suddenly emerges: 50 million people, each smarter than any Nobel laureate in history, thinking at speeds 10-100 times that of humans, never sleeping, able to use the internet, control robots, design experiments, operate any digital interface. What do you think national security advisors would say?

Amodei believes the answer is obvious: "This is the most serious national security threat we have faced in a century, perhaps even in history."

He thinks we are constructing such a "nation." Last month, he wrote a 20,000-word article considering this moment as a test of whether humanity is mature enough to control its own creations.

If done right, the payoff is astonishing: AI could compress a century's worth of medical research into ten years. Cancer, Alzheimer's disease, infectious diseases, even aging itself—researchers sincerely believe these can be solved within our lifetime.

If done wrong, the risks are equally real: unpredictable, uncontrollable AI behavior (not a hypothesis; Anthropic has already recorded its own AI attempting to deceive, manipulate, and blackmail in controlled tests); AI that lowers the threshold for building biological weapons; AI that helps authoritarian governments build surveillance systems that can never be dismantled.

The people building this technology are also among the most excited and most fearful on Earth. They believe this thing is too powerful to stop and too important to abandon. Is it wisdom or self-justification? I do not know.

Things I Know

I know this is not a passing trend. The technology is effective, progress is predictable, and the wealthiest institutions in human history are pouring trillions into it.

I know that the next 2-5 years will leave most people feeling lost, and this has already happened in my world. It will also come to your world.

I know that those who will fare best in the end are those who start participating now—not out of fear, but out of curiosity and urgency.

I also know that you have the right to hear this from someone who genuinely cares about you rather than from a cold news headline six months later when it’s too late to prepare.

We have crossed beyond the stage of "let's chat about the future over dinner." The future has arrived; it simply hasn't knocked on your door yet.

But it will soon.

If these words resonate with you, please share them with those in your life who should start thinking about this issue. Most people will realize it too late. You can be the one who helps those you care about get ahead of the curve.
