Written by: Boris Cherny
Within Anthropic, Boris Cherny is jokingly referred to by colleagues as the "Father of Claude Code." He personally led the team that built this deeply integrated agentic coding assistant and has lived through the shift from "code autocompletion" to "agents writing 100% of the code."
In this talk aimed at entrepreneurs and engineers, he recounted the story of Claude Code's birth, explained why he claims "coding is solved," and discussed how the software industry and team structures might change if that premise holds.
From "Accidental Project" to Phenomenal Product
Boris joined Anthropic at the end of 2024, when the company had an incubator-like team called Anthropic Labs. This small team went on to produce several core products, including Claude Code, MCP, and the desktop apps, disbanded after completing its mission, and was later recalled for a "second round."
In 2024, the industry's mainstream imagination of "AI writing code" was still stuck on suggestions and completions in IDEs: hit Tab and let the model suggest a line of code. Boris's intuition was that the model's capabilities had already far outgrown this form and that the product form was lagging badly behind them, a gap they referred to internally as a "product overhang."
Therefore, Claude Code's initial goal was very ambitious: not just to make a smarter autocomplete, but to let the agent take on the work of "writing all the code," while humans were more responsible for review and decision-making.
Reality, of course, was not so smooth. For the first six months, very few people actually loved using Claude Code: it could write only about 10% of the code, the experience was rough, and even within Anthropic it was considered an experimental tool. It wasn't until May 2025, with the release of the Opus 4 model, that the usage curve saw truly exponential growth, and each subsequent model upgrade (4.5, 4.6, 4.7) brought another turning point where the product "got much better again."
In retrospect, the product's most distinctive trait is that from day one it was designed not for the current model but for the next-generation model six months out. The team knew full well there would be no product-market fit (PMF) for some time, yet they insisted on establishing the right interaction form first and then patiently waited for the model to catch up.
Why say "coding is solved"?
At the event, Boris asked the programmers in the audience directly: Who still writes 100% of their code by hand? Who has an agent like Claude Code write 100% of it? The vast majority were somewhere in between, and he joked, "Then that's 50% solved."
But for him personally, the answer is quite extreme: he now generates 100% of his code using Claude Code.
- The codebase of Claude Code itself is entirely written by the model, on a very conventional TypeScript + React stack with no exotic technology.
- One reason for choosing this stack is that when early model capabilities were weaker, using a mainstream stack well represented in the model's training distribution significantly improved generation quality.
- As the model iterates, it can now almost seamlessly learn new languages and frameworks, and the choice of tech stack is no longer a bottleneck.
In his personal workflow, Boris completes dozens of PRs a day; on one day he pushed out 150 PRs just to see how far he could take his throughput. Behind all of them, the actual code is written entirely by Claude. His role is product, architecture, and review.
Of course, he also admits that this "100% solved" currently only holds true in certain scenarios:
- Small, clear, and mainstream tech stack codebases can now be completely handed over to the model for writing.
- Very large or historically complex codebases, niche languages, and highly specific engineering environments still expose significant limitations in the models.
But his judgment is straightforward: most of these limitations are just "waiting for the next version of the model."
A phone + thousands of agents: his personal workflow
Boris once shared his development environment on social media, initially not expecting it to spark so much discussion because, in his view, it was just his "naturally evolved" way of working.
Now most of his work has even moved to his phone: he opens the Claude app, switches to the Code tab, and sees multiple parallel sessions. He typically keeps 5 to 10 sessions running at once, each with numerous sub-agents, easily totaling hundreds; by evening there can be over a thousand agents running longer tasks in the background.
The key concept supporting this system is a seemingly simple command: /loop.
The essence of /loop is to have Claude schedule a "task that will automatically repeat in the future," similar to cron: it can be set to execute at frequencies like every minute, every 5 minutes, daily, etc.
With this loop, he has built a complete "automated maintenance system":
- There are loops specifically to "watch over PRs": fixing CI issues, automatically rebasing, keeping the PR list clean.
- There are loops responsible for "maintaining the CI health of the entire project": automatically locating and fixing issues like flaky tests.
- Every 30 minutes, a loop grabs user feedback from Twitter, automatically clusters and organizes it, forming a feedback summary that can be directly actionable.
In his description, the loop has become a future-oriented programming primitive: the simplest workable form, yet very powerful. Coupled with the recently launched routines (long-running workflows that execute on the server and continue even when your computer is off), the model can keep pushing projects forward in the background.
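As a rough illustration of why such a cron-like primitive is so simple yet powerful, here is a minimal sketch in TypeScript. All names here (`parseSchedule`, `startLoop`, the `"every 5m"` spec format) are hypothetical, not the actual Claude Code implementation:

```typescript
// Map shorthand units to milliseconds.
const UNIT_MS: Record<string, number> = {
  s: 1_000,
  m: 60_000,
  h: 3_600_000,
  d: 86_400_000,
};

// Parse a simple schedule spec like "every 5m" or "daily" into a
// repeat interval in milliseconds.
function parseSchedule(spec: string): number {
  if (spec === "daily") return UNIT_MS["d"];
  const match = /^every (\d+)([smhd])$/.exec(spec);
  if (!match) throw new Error(`unrecognized schedule: ${spec}`);
  return Number(match[1]) * UNIT_MS[match[2]];
}

// Run `task` on the parsed interval until the returned stop() is called.
// Failures are logged rather than thrown, so one bad iteration does not
// kill a long-running maintenance loop.
function startLoop(task: () => Promise<void> | void, spec: string): () => void {
  const timer = setInterval(() => {
    Promise.resolve(task()).catch((err) => console.error("loop failed:", err));
  }, parseSchedule(spec));
  return () => clearInterval(timer);
}

// Example: a loop that would pull and cluster user feedback twice an hour.
const stop = startLoop(async () => {
  console.log("fetching feedback…");
}, "every 30m");
stop(); // stop immediately in this demo
```

The point of the sketch is the shape, not the details: a loop is just a task plus a repeat interval, and everything interesting (fixing CI, rebasing PRs, clustering feedback) lives inside the task the model executes.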
Team structure: everyone is a "cross-disciplinary talent"
When a person can use AI to write 100% of the code and development efficiency increases by 10 to 100 times, the way teams are organized will naturally change.
Boris has a core judgment about future teams: "cross-disciplinary talents" will be far more common than today.
Today, "generalist" usually means a generalist within engineering, for example someone who can handle iOS, web, and server work; but the new trend he observes is:
- Generalists will cross more functional boundaries, such as: engineering + design, engineering + product + data science, engineering + finance/operations, etc.
- On the Claude Code team, engineering managers, product managers, designers, data scientists, finance, user research, and other roles already all write code and use Claude Code extensively to advance their own work.
In other words, everyone still has their own professional depth, but writing code is no longer the privilege of a select few; it is a baseline capability everyone has, much like Office and PowerPoint skills today.
This also directly points to a more macro judgment: the threshold for software productivity will be thoroughly lowered, and the most knowledgeable individuals in their fields will become the most advantageous "developers."
For instance, when developing accounting software, it may not necessarily be the top engineers who should dictate the product shape and logic but a highly knowledgeable accountant who can skillfully leverage AI to write code, because "coding" has become the relatively easier part, while "a profound understanding of the field" is the scarce resource.
From a "programmer class" to everyone programming: a comparison to the printing press
To illustrate the depth of this transformation, Boris offers one of his favorite historical comparisons: the impact of AI on software production is likely to resemble the impact of the printing press on text production in 15th-century Europe.
Before movable-type printing, about 10% of people in Europe could read and write, and they were employed by the power structures of the time (kings, nobles, the Church) to read and write on others' behalf. Literacy was a highly specialized skill that most people never touched in their lives.
In the 50 years after the invention of the printing press, the volume of text published in Europe exceeded the total of the previous thousand years, and the cost of a single book fell by roughly 100 times. Over the following centuries, as education systems and social structures adjusted, global literacy rose to around 70%: reading and writing went from a professional skill of the few to a basic ability of the vast majority.
Boris's viewpoint is: software and programming are undergoing the same curve, and the pace will be even faster.
- In the past, writing software was a "highly specialized, extremely high-threshold" profession.
- Next, writing software will become a universal skill, akin to "typing" or "texting."
- There will still be professional engineers and top system architects, but social division of labor will be completely reshaped: a large number of domain experts, entrepreneurs, and ordinary professionals will be able to directly "collaborate with models to write software."
Will SaaS face a "Great Extinction"?
When AI lowers the cost of writing software by 10 or even 100 times, what will happen to existing SaaS products? Will there be a "Great SaaS Extinction"? This is one of the questions Boris is most often asked.
His answer is more nuanced than a simple yes or no; he analyzed it using the "Seven Powers" framework (Hamilton Helmer's, frequently discussed on the Acquired podcast).
In his view, AI will quickly devalue certain business moats:
- Switching Costs: When you can quickly transfer data and rebuild workflows using models, the previous lock-in effects built on complex integrations and configurations will be significantly weakened.
- Process Power: Many companies rely on process design and complex workflows as their edge, but large models are becoming increasingly adept at understanding and improving processes; models like 4.7, which can automatically hill-climb (iterate until the goal is reached), are especially good at squeezing inefficiency out of processes.
At the same time, some more fundamental moats will not disappear because of AI; on the contrary, they may become more important:
- Network effects
- Economies of scale
- Scarce resources (such as unique data, channels, special qualifications) and so on.
Another important trend is: in the next decade, the number of startups that can "create products comparable to large companies with minimal manpower" will significantly increase, potentially tenfold from the past decade.
The reasons are:
- Large companies will face immense inertia and internal resistance when restructuring their processes and retraining all employees to use AI.
- New teams can be "AI-native" from day one, achieving extremely high value density with very few people and overwhelming traditional vendors in many niche fields.
In his view, this era is very friendly to entrepreneurs and developers—"it may be one of the best times to create products and startups."
How does Anthropic "eat its own dog food"?
Many assume that a model company like Anthropic uses stronger secret versions internally, staying ahead of the outside world for long stretches. Boris says the opposite is true:
- At the model level, the versions used internally are the same ones everyone else has (for example, heavy use of Opus 4.7), with a small amount of experimentation on the research model Mythos, but there is no long-term reliance on a private version the outside world can't get.
- The true leading advantage, in his view, lies not in the models but in the depth of the organization's integration of AI.
Specifically:
- The company no longer practices "pure hand-written code," even SQL queries are generated by the model.
- Different teams' Claude instances "chat and collaborate" on Slack, helping human engineers fill gaps and communicate information across teams.
- Many processes have been restructured around loops, sub-agents, routines, etc., allowing the model to continuously push work in the background.
For this reason, he believes that the largest "gap" currently lies not in technical accessibility but in organizational and process design. This is a tremendous opportunity for startups: instead of gradually transforming old processes, it is better to design the organization from day one in an "AI-native" way.
Product opportunities for the next 6 to 12 months
Returning to questions about products and startups: if he saw a "programming product overhang" a few years ago, where does he see the next overhang today?
He mentioned several directions:
- Claude Design: A direction that can currently be used but will become even more impressive with model iterations. It represents the embryonic form of "deep AI transformation of design workflows."
- Loop/Batch/Large-scale parallel agents: enabling hundreds or thousands of tasks to be simultaneously advanced on different agents, becoming a standard capability rather than the black magic of specialist players.
- Computer Use (the model directly operating computers): using visual + control capabilities to make the model operate local software like a human; for older systems with no API/MCP, this is a universal solution.
The common feature of these directions is: they are already "barely usable" today, but the true explosive point may be after one or two more generations of models.
Just as with the early Claude Code, ambitious teams can start designing product forms for future models now, getting into position before the models catch up.
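To make the Computer Use direction concrete: at its core it is a loop of screenshot → model proposes an action → action is executed. The skeleton below is purely illustrative (the model call and OS hooks are stubs; `proposeAction`, `screenshot`, and the `Action` shape are assumptions, not any real API):

```typescript
// An action the model can propose against the screen.
type Action =
  | { kind: "click"; x: number; y: number }
  | { kind: "type"; text: string }
  | { kind: "done" };

// Stub: capture the screen. Real code would return actual pixels.
function screenshot(): Uint8Array {
  return new Uint8Array(0);
}

// Stub: ask a vision-capable model what to do next, given the
// current screen and the goal. Here it just finishes immediately.
function proposeAction(screen: Uint8Array, goal: string): Action {
  return { kind: "done" };
}

// Render an action as a human-readable trace line.
function describeAction(action: Action): string {
  switch (action.kind) {
    case "click":
      return `click (${action.x}, ${action.y})`;
    case "type":
      return `type ${JSON.stringify(action.text)}`;
    case "done":
      return "done";
  }
}

// The core loop: observe, decide, act, until the model says "done"
// or a step budget is exhausted.
function runAgent(goal: string, maxSteps = 50): string[] {
  const trace: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const action = proposeAction(screenshot(), goal);
    trace.push(describeAction(action));
    if (action.kind === "done") break;
    // An executeAction(action) call would go here in a real system,
    // driving the mouse and keyboard via an input-automation library.
  }
  return trace;
}
```

Because the loop needs nothing from the target software but pixels and input events, it works as a universal fallback for legacy systems with no API or MCP surface, which is exactly why Boris flags it as an overhang.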