Why are the tech bros in Silicon Valley suddenly talking so much about "taste"? It's not redemption; it's a trap.

Techub News
7 hours ago

Written by: Uncle Rust who does not understand the classics

In the past two years, nearly every industry has been shadowed by the same mixed sentiment: marveling at how quickly AI's capabilities advance, while silently calculating whether one might suddenly become "no longer needed" one day.

Programmers worry about their code being generated, designers fear their creativity being mass-produced, editors are concerned about their expressions being templated, teachers dread knowledge being retrieved in real-time, and white-collar workers fear that the years of experience they accumulated might simply be compressed into a set of replicable processes.

It seems that AI is the source of anxiety; however, if we think deeper, we will find that what truly makes people lose their sense of security is not the AI technology itself.

What truly causes unease is our increasing awareness that we have long been living in a system that treats "people" as costs, "efficiency" as a belief, and "replacement" as progress.

AI merely illuminates this matter. It is like a mirror suddenly held up to our faces, allowing many to see clearly for the first time: the fear is not because technology is becoming stronger, but because the system has long defaulted to viewing people as a reducible, comparable, and optimizable line of numbers.

Sam Altman thanked programmers for "writing code word by word" to bring us to today, a remark that triggered public outrage.

Recently, I came across an article by Eric Markowitz titled "It Was Never About AI." As the title says, this has never been only about AI; it is about how we view tools, how we see people, and what we are willing to entrust to machines while insisting on keeping certain things for ourselves.

He depicted a scene that everyone is familiar with today: a young analyst sitting in an office, looking at a spreadsheet, discovering that a certain company's headcount exceeds that of its competitors by 14%, and thus writes a judgment of "should be optimized"; the report goes out, stock prices drop, the board panics, the CEO intervenes, and weeks later, thousands of people receive layoff notices from HR. This is not some sporadic tragedy but rather "the system is operating as designed."

This phenomenon reveals the profound coldness of contemporary workplaces: people lose their sense of security not because machines suddenly possess some will, but because many organizations have long defaulted that a person's value must first be translated through financial forms to be recognized.

As long as a task can be completed faster, it must be done faster; as long as a person can be replaced, they should be replaced. The modern business world, represented by Silicon Valley and Wall Street, has built "optimization" into a religion.

Herein lies the problem.

Our fear of AI today superficially appears to be a fear of technology, but fundamentally it is a fear of "replaceability." What people really fear is not that machines will write, draw, analyze, or summarize, but that businesses and organizations will use these capabilities to define "people" themselves as a redundant item with greater justification.

In other words, AI did not invent this logic; it simply advanced it to an unavoidable extent.


Silicon Valley tech bros, discussing taste

Recent trends from Silicon Valley illustrate this point well.

On one hand, things that originally belonged to "people" are being repackaged as new competitive metrics. Recently, The New Yorker published an article discussing an interesting phenomenon: in the age of AI, the Silicon Valley tech circle suddenly began frequently discussing "taste."

This word has overnight become a new buzzword, as if following "disruption," "growth," and "moat," it has become the next keyword that entrepreneurs and investors frequently mention. Silicon Valley venture capital godfather Paul Graham stated that in the age of AI, taste will become more important; some entrepreneurs said that personal taste is the new moat.

It sounds like a renaming of the value of "people": look, no matter how strong the technology is, it cannot replace human taste, judgment, and aesthetics. But the problem is that the "taste" Silicon Valley talks about is not true sensitivity, not what Voltaire meant when he said you must feel beauty and be moved by it.

Silicon Valley's "taste" is a decision-making ability that can yield profit, a quantifiable competitive advantage, a "moat." It transforms a type of human perceptual capacity that cannot be easily quantified into another efficiency metric.

It is a new sorting mechanism: in an era where everyone can produce content, create products, and write code with the help of AI, the most important remaining capabilities are to judge what is worth doing, what can sell, what is easier to spread, and what is more likely to yield returns. The New Yorker termed it "taste-washing": an attempt to cloak anti-humanistic technology in a veneer of liberal humanism.

In the eyes of Silicon Valley tech bros, "taste" has also been rewritten into the language of efficiency. It no longer primarily means whether you can truly be moved by beauty, distinguish subtle differences, or perceive what cannot be calculated in a work; it now resembles a high-level decision-making ability, a market sensitivity, a discernment that can be monetized. To put it more directly, even "taste" has begun to be quantified into KPIs.

What is truly intriguing behind this is that the technical system is not satisfied with automating standardized, repetitive labor; it also wants to absorb the aspects of human experience that are the hardest to quantify, most humane, and most resistant to instrumental reasoning into the logic of efficiency.

Previously, we believed that at least aesthetics belonged to people, that at least judgment belonged to people, that at least questions like "why do I like this," "why am I moved," and "why do I choose this instead of that" could not be entirely taken over by platforms and models. But now, they are also packaged as productivity, translated into advantages, and incorporated into growth narratives.


Inside Andreessen's head

On the other hand, even the most intimate and internal capability of "self-reflection" has begun to be publicly demeaned by some tech tycoons.

Recently, another Silicon Valley venture capitalist, Marc Andreessen, expressed similar views repeatedly on podcasts and X: he opposes self-reflection, believing that people should not always look inward, nor should they be held back by the past; he even described the human mind as lacking depth, as if the so-called soul, reflection, and inner complexity are merely exaggerated illusions.

Andreessen's reasoning is: those who indulge in the past will be trapped by it. "Move forward. Just walk." He even cited Stoicism as support, claiming that Marcus Aurelius would stand on his side.

This is a brilliant misreading. The very act of Marcus Aurelius writing "Meditations" is a sustained practice of self-examination spanning decades. Socrates said, "The unexamined life is not worth living," which is not a motivational quote but the starting point of the Western rational tradition.

Andreessen's timing in presenting this viewpoint is quite subtle. Superficially, he advocates action and opposes internal consumption; in reality, however, it reveals a very dangerous tendency: it reduces people to a flat information processor, as if the meaning of a person lies solely in input, output, iteration, and progression, while those slow doubts, deep reflections, and self-interrogations can all be regarded as inefficient, useless, or obstacles to action.

Ultimately, this turns people into large language models.

Models do not self-reflect; they only generate from existing patterns. Models have no inner lives; they merely predict the next most likely word. They do not truly revisit their own lives, do not change their moral judgments because of memory, and do not reinterpret responsibility through suffering.

Yet humans are profoundly different.

Humans are human not just because they can act, execute, and create results, but also because they can hesitate, look back, correct themselves, and question "why am I doing this." True self-reflection is not indulgence, not internal consumption, and not endless self-commiseration; true self-reflection is an attempt by a person to understand their desires, limitations, biases, and responsibilities so that they are not entirely led by external systems.

If even this ability is regarded as meaningless, it signifies that what certain technological elites truly espouse is not a freer person but a more efficient person; not a more complete person, but a smoother person; not a person with inner measurements, but one that is more machine-like and better suited to work with machines.


Considering these phenomena alongside AI anxiety, many issues become clearer.

What we truly face is not just whether "machines will replace labor," but rather a bigger trend of the era: technological capital not only aims to take over human labor but also seeks to redefine human value. It not only wishes to replace your execution but also aims to shape your judgment; it not only seeks to enhance your speed but also intends to rewrite your answer to the question "what is worth pursuing."

Thus, a dangerous paradox emerges: the qualities that inherently belong to humans and cannot be easily quantified, such as taste, self-reflection, experience, prudence, empathy, and responsibility, are proclaimed the scarcest commodities of the AI era even as they are quickly translated into new performance languages, new competitive lexicons, and new commercial packaging.

They are preserved not because they are inherently valuable, but because they temporarily cannot be cheaply replicated. Once they can be replicated, they will be immediately incorporated into the same replacement logic.

This is the true reason why many people today have lost their sense of security. We have never worried only about jobs disappearing; we worry that even the qualities that make us human are being forced to submit to efficiency judgments.

You should have taste, but preferably that taste should translate into product judgments; you should be reflective, but preferably that reflection should enhance organizational decision-making; you should be creative, but preferably that creativity should be scalable; you should have emotions, but preferably that emotion should enhance user stickiness.

In the end, even the most private, slowest, and least quantifiable things are dragged into the production system.

Markowitz used a very good analogy in his article: the fastest-growing trees often fall first in a storm. What nature truly teaches us is not how to infinitely accelerate but how to take root. What survives fires, harsh winters, and droughts has never been the fastest-expanding species but rather those with deep roots, interdependence, and slow growth ecosystems.

The business world, however, has long been the opposite: it worships speed, believing in scale, views patience as weakness, sees people as friction, and considers any part that cannot be immediately converted into growth as superfluous. Thus, many businesses shout "empowerment," "the future," and "innovation," but are doing the same thing: reducing people as much as possible, compressing judgment as much as possible, folding complex human experiences into the shortest paths, lowest costs, and highest efficiencies.

Yet, precisely those "slow" things determine whether a person or organization has resilience. Human experience is slow; trust is slow; nurturing talent is slow; forming communities is slow; true judgment is also slow.

A company can make reports prettier, processes smoother, and reporting more efficient with AI, but if it continually drains these "slow variables," what remains may just be an empty shell: lightweight in the short run but more fragile in the long run; seemingly advanced yet less able to withstand storms.

There is a passage in Markowitz's article:

We have long financialized everything. We have reduced people to rows of cost items, treating them like inventory. None of this is caused by AI. AI merely holds up a mirror. And we do not like the reflection of ourselves in that mirror.

Therefore, the real question should never be "Will AI replace me?" Rather, we must return to the most fundamental question: what tools can do and what tools should do are not the same thing.


This is also why the phrase "we are not our tools" is so important.

We should certainly learn new tools, understand new technologies, and enhance our capacity to collaborate with technology. But if we assume that whatever can be automated must be automated, that whoever can be replaced should be replaced, and that a capability deserves to exist only if it can be converted into a KPI, then what we lose is not just some job positions, but humanity's right to define its own value.

What is truly worth preserving is not a specific profession but a more fundamental stance: people cannot be measured solely by efficiency, work is not merely an exchange of costs and outputs, and organizations cannot be mere tools for capital to extract profit. The stronger the technology, the more firmly humans must hold onto judgment. Whether something can be done is a technical question; whether it should be done is always a human question.

In this sense, what makes you lose your sense of security is not AI technology itself. What truly makes people uneasy is that as technology becomes stronger, whether our society, enterprises, and culture still have the ability to adhere to the simplest matter: treating people as people, rather than understanding people as tools waiting to be optimized, prompted, and replaced.

If we cannot even uphold this principle, then what will be replaced is not just work; what will be emptied will be our entire understanding of work, dignity, meaning, and even what it means to be "human."

What is most important now is perhaps not to continue asking what AI can do, but to ask again: what kind of person do we truly want to become, and what kind of world do we want to build that treats people well?

