Good Taste Is the Only Real Moat Left

原始链接: https://rajnandan.com/posts/taste-in-the-age-of-ai-and-llms/

## The Rise of Taste in the Age of AI

AI and large language models (LLMs) have dramatically lowered the cost of *competent* output: landing pages, memos, and pitch decks can now be generated in minutes. This shift has elevated **taste**, the ability to distinguish the mediocre from the genuine and to recognize what is worth pursuing, into a critical skill in tech. Taste is more than selection, however; relying solely on AI-generated content risks turning you into a curator rather than a builder.

The real opportunity lies in combining AI's speed with human judgment, context, and the willingness to create something genuinely distinctive. Taste shows up in noticing details, rejecting mediocrity, and *explaining* what is wrong: moving past "vibes" to precise diagnosis.

LLMs excel at pattern matching and typically produce results that are statistically plausible but ultimately mediocre. The bottleneck is no longer creation but **judgment**: rejecting options that are merely "fine" and demanding specificity. AI also reveals how clear your own judgment actually is; generating multiple options forces you to evaluate them critically.

Ultimately, AI can accelerate the creative process, but it cannot replace ownership, consequences, or directional decisions. Real value comes from applying an understanding of real-world constraints, user needs, and strategic trade-offs to build what models *cannot* create on their own. Taste is not a standalone skill but a by-product of serious, focused work.

## Why "Taste" Matters in the Age of AI

A recent Hacker News discussion centered on the idea that "good taste" is the last competitive advantage (moat) in a world increasingly shaped by AI. The core argument: as AI gets very good at generating statistically plausible results, "taste", the ability to define what *should* be built, becomes critical.

Specifically, the discussion emphasized that prompting AI effectively to produce high-quality code requires a clear vision of what "perfect" code looks like, a vision that people without mature technical judgment lack. Teams should focus on defining the ideal outcome rather than reverse-engineering existing codebases.

Many commenters pushed back, however, arguing that execution speed, distribution, data ownership, and infrastructure are the more important moats. Some saw the focus on "taste" as self-consolation, noting that the market rewards prioritizing speed over perfection. Others observed that the current tech landscape is in flux and traditional career paths are being redefined, prompting a search for new sources of value. Ultimately, the debate highlights the evolving role of human judgment in a world of increasingly capable AI.

Original Article

AI and LLMs have changed one thing very quickly: competent output is now cheap.

A landing page can be generated in minutes. A product memo can appear in a single prompt. A pitch deck can look polished before anyone has done the hard work of deciding what the company actually believes.

That is why taste has become a serious topic in tech. When everyone can produce something that looks decent, the advantage shifts to judgment. The people who stand out are no longer just the ones who can produce. They are the ones who can tell what is generic, what is true, and what is worth pushing further.

But there is a second point that matters just as much: taste is not the final answer. If humans reduce themselves to selecting from AI outputs, they risk becoming reviewers of a machine-led process instead of builders with real stakes in the outcome.

The real opportunity in the age of AI and LLMs is not to become a better selector. It is to combine taste with context, constraints, and the willingness to build something that could not have emerged from the average alone.

What taste actually means

In this context, taste is not about luxury, status, or personal aesthetic branding. It is about distinction under uncertainty.

Most meaningful work does not come with perfect data. You do not get a spreadsheet that tells you which sentence will make a customer care, which feature is worth a month of engineering time, or which design crosses the line from polished to forgettable. You still have to decide.

Taste shows up in three places:

  • What you notice
  • What you reject
  • How precisely you can explain what feels wrong

That last part matters more than it first appears. Many people can say, "this feels off." Far fewer can say, "this fails because it sounds like every other SaaS product," or "this explanation collapses a regulatory constraint into marketing language and will confuse the customer."

Taste becomes useful when it moves from vibe to diagnosis.

Why AI and LLMs flatten the middle

LLMs are extraordinary pattern-compression engines. They absorb huge volumes of language, design patterns, and interfaces, then recombine them at speed. That is their strength. It is also their default bias.

By design, these systems are much better at producing statistically plausible output than at originating something deeply specific to your exact context. Left alone, they tend toward the safe center of the distribution.

That is why so much AI-generated work feels familiar:

  • Landing pages with different logos but the same structure
  • Product copy that could describe almost any app
  • Essays with clean headings and little lived judgment
  • Visual design that looks modern, but not memorable

This is not a failure in the catastrophic sense. It is a success at average. The problem is that average used to be hard enough that it still created some separation. Now it is abundant.

The result is a crowded 7 out of 10 world. The middle is full.

The new bottleneck is judgment

Before AI, mediocre work often reflected a lack of time, resources, or execution skill. Today, mediocre work often means something else: the person stopped at the first acceptable draft.

That is the economic shift AI introduces. It compresses the cost of first drafts, which means the value moves downstream.

The scarce part is now the ability to say:

  • This looks fine, but it is too generic
  • This sounds impressive, but it hides the real trade-off
  • This interface is polished, but it does not fit how the user actually thinks
  • This plan is ambitious, but the operating constraints make it unrealistic

In other words, the scarce skill is not generation. It is refusal.

AI as a mirror for your own taste

One of the most useful things about AI is also one of the most humbling: it reveals how clear your own judgment actually is.

Ask an LLM to produce ten versions of a homepage hero, onboarding flow, support email, or product pitch. You will usually see a pattern:

  • A few clearly weak versions
  • A large cluster of acceptable versions
  • One or two that seem closer to what you want

The interesting question is not, "Which one should I pick?" It is, "Why are most of these still wrong?"

Your answer to that question is the quality of your taste.

If your critique stays vague, your taste is still underdeveloped. If your critique becomes precise, your judgment is stronger than the model output. You can then use the model well instead of being led by it.

A practical way to think about it is this:

| Layer | AI and LLMs do well | Humans still need to do |
| --- | --- | --- |
| Generation | Produce many plausible variations quickly | Decide which direction matters |
| Pattern matching | Recombine common structures and phrasing | Spot what is too generic for this situation |
| Optimization | Improve toward a stated target | Decide whether the target itself is right |
| Scaling | Turn one idea into many assets | Carry the real context, stakes, and consequences |

The system can generate options. It cannot supply ownership.

A practical loop for training taste

Taste improves through repeated exposure, critique, and shipping. AI can accelerate that loop if you use it correctly.

A simple method looks like this:

  1. Pick one high-leverage artifact from your week. A paragraph, a pricing explanation, a dashboard label, a customer email, or a key slide.
  2. Generate 10 to 20 versions with an AI model.
  3. For each version, write one sentence that starts with "fails because..."
  4. Rewrite the strongest version with a hard constraint such as:
    • No buzzwords
    • One idea per sentence
    • Must acknowledge a real trade-off
    • Must make sense to a first-time user
  5. Ship the final version somewhere real and observe what happens.
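As a minimal sketch, the loop above can be scripted. Everything here is illustrative: `generate_versions` is a hypothetical stand-in for whatever model client you actually use, and `judge` is your own "fails because..." critique, which no library supplies for you.

```python
# Sketch of the taste-training loop from the steps above.
# `generate_versions` is a hypothetical placeholder, not a real API:
# swap in your actual LLM client there.

CONSTRAINTS = [
    "No buzzwords",
    "One idea per sentence",
    "Must acknowledge a real trade-off",
    "Must make sense to a first-time user",
]

def generate_versions(artifact: str, n: int = 10) -> list[str]:
    # Placeholder drafts; in practice, call your model n times here.
    return [f"{artifact} [draft {i + 1}]" for i in range(n)]

def train_taste(artifact: str, judge) -> dict:
    """Run one iteration: generate, reject with reasons, rewrite under constraints.

    `judge` maps a draft to a critique sentence starting with "fails because...",
    or to None for the draft you consider strongest.
    """
    drafts = generate_versions(artifact)
    critiques = {d: judge(d) for d in drafts}
    # The rejection vocabulary is the point: every non-keeper needs a named reason.
    rejected = {d: c for d, c in critiques.items() if c is not None}
    keepers = [d for d, c in critiques.items() if c is None]
    strongest = keepers[0] if keepers else drafts[0]
    # Step 4: the strongest draft still gets rewritten under hard constraints.
    return {
        "strongest": strongest,
        "rejections": rejected,
        "rewrite_under": CONSTRAINTS,
    }

# Usage: keep one draft, reject the rest with an explicit reason.
result = train_taste(
    "Our platform empowers teams to move faster",
    judge=lambda d: None if d.endswith("[draft 1]")
    else "fails because it sounds like every other SaaS product",
)
```

The shipping step stays outside the code on purpose: publishing the final version somewhere real, and watching what happens, is the part of the loop no script can close for you.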

The goal is not to let AI choose for you. The goal is to build a sharper rejection vocabulary.

Over time, this changes how you work. You stop admiring polish for its own sake. You get faster at spotting empty specificity, borrowed tone, and fake confidence.

Why taste alone is not enough

This is where the conversation gets more interesting.

There is a strong version of the "taste matters" argument that quietly pushes humans into a narrow role. In that version, AI generates many outputs and the human stands at the end of the pipeline selecting the best one.

That is a useful role, but it is also too small.

Historically, important work did not emerge from detached selection alone. It emerged from co-creation under constraint. Builders argued with reality, with collaborators, with budgets, with materials, with timelines, and with the consequences of getting things wrong.

That friction matters. It is where depth comes from.

Once you see that, the risk becomes clearer: if human value is reduced to curation, the human becomes a discriminator in a mostly machine-driven loop.

The analogy to machine learning is imperfect but useful. In generative adversarial setups, the discriminator exists to help the generator improve. Once the generator is good enough, the discriminator is not the part that ships.

The warning is not that taste has no value. It does. The warning is that taste without authorship, stake, or construction can become a narrow and eventually fragile role.

What humans still do that models cannot own

AI can generate. It can recombine. It can optimize against prompts. What it cannot own in the human sense are the parts of work that carry real consequence.

Three examples matter:

1. Holding the stake

Real products operate under consequences that do not fit neatly inside a prompt. Trust, regulatory exposure, outage risk, team capacity, customer confusion, brand damage, and on-call pain all live here.

A model can suggest copy for a payments feature. It cannot hold responsibility when that copy obscures a regulatory limitation and support tickets spike.

2. Working with the truly new

Genuinely new ideas often look wrong at first because they do not resemble the training set. They feel awkward, incomplete, or suspiciously non-standard.

Humans can sit with that discomfort. They can protect something early and fragile long enough for it to become legible.

3. Choosing direction

The biggest decisions are not formatting decisions. They are directional decisions.

What problem is worth solving? What trade-off is acceptable? What kind of company, product, or writing do you want to be responsible for? What do you refuse to optimize for?

These are not post-processing tasks. They are authorship.

Why this matters for builders

This conversation matters beyond any single market because the temptation is now universal: settle for competent surface area and mistake that for meaningful work.

The tools are widely accessible. Small teams and solo builders can now ship what previously required much larger organizations.

That is the good news.

The risk is that teams everywhere start using AI to produce products that are globally polished but contextually shallow. A fintech interface can sound sophisticated while still failing to explain timing, settlement behavior, or support expectations clearly. A B2B SaaS site can look world-class while saying almost nothing a real buyer would recognize as grounded. A devtool can have excellent marketing language and still ignore the practical pains of understaffed teams dealing with on-call load, compliance pressure, and cost constraints.

AI makes it easier to sound sophisticated. It does not make it easier to be specific.

That specificity is where the advantage is.

For builders, taste should mean moving closer to real context, not farther away from it. That includes:

  • Writing for how people actually understand the problem, not how generic SaaS templates talk about it
  • Bringing domain and operating constraints into the product, not hiding them under abstract language
  • Designing for non-ideal, low-attention, real-world environments instead of polished demo conditions
  • Using AI to map the canon quickly, then deliberately departing from it where the context demands

What the market does not need is more competent clones. It needs builders who can use AI speed without surrendering the specifics that make a product trustworthy and useful.

A better way to use AI

If the bad use of AI is passive selection, the better use is active shaping.

That looks like:

  • Use AI to explore the design space faster
  • Use AI to study the best existing work and understand the canon
  • Use AI to generate alternatives you would not have considered immediately
  • Use your own judgment to reject what is generic, dishonest, or context-blind
  • Add constraints the model does not naturally know, then build from there

A useful question to ask whenever AI output feels polished but hollow is:

What am I adding here that the model could not have added on its own?

Good answers include:

  • A real operating constraint
  • A user truth learned the hard way
  • A regulatory nuance
  • A cultural detail
  • A strategic trade-off
  • A point of view you are willing to stand behind

If you cannot name that addition, you may still be in consumption mode.

Taste as a side-effect of serious work

The most useful conclusion is also the least glamorous. Taste is not a separate identity. It is a side-effect of paying close attention to reality.

It grows when you:

  • Study strong work carefully
  • Generate many options without falling in love with the first one
  • Learn to diagnose why something fails
  • Ship into the real world where feedback has consequences
  • Stay close to the domain instead of floating above it

AI and LLMs make the first draft cheap. They do not make judgment automatic. They do not remove the need for ownership. They do not replace the work of choosing what should exist in the first place.

That is why taste matters more now.

It is also why taste, by itself, is not enough.

The real edge in the age of AI is not having better vibes than the model. It is using the model to strip away average output faster, then applying human judgment where it matters most: direction, specificity, consequence, and the courage to build something that could not have emerged from the statistical middle alone.
