The only winning move is not to play

Original link: https://gregg.io/the-only-winning-move

## The Erosion of User Research's Value

The rise of generative AI tools threatens the core premise of human-centered research. While AI excels at pattern recognition and data analysis, relying on it to *conduct* research (generating plans, running "interviews," synthesizing insights, even creating personas) produces mediocre, undifferentiated results. This is not innovation; it is a cost-cutting race to the bottom that sacrifices the nuanced understanding only skilled researchers can provide.

The push toward AI-driven research is not a natural evolution. It is driven by tool vendors seeking greater profits and by executives who prioritize speed over quality, and it creates a dangerous illusion of efficiency that masks indifference to genuine user understanding. Accepting this trend risks making user researchers redundant, reducing their role to quality control over AI output.

The author argues against passively adopting these tools, urging researchers to defend their expertise and prioritize the uniquely human parts of the work: building rapport, contextualizing findings, and delivering insights that matter. Ultimately, prioritizing human connection and rigorous method, and drawing an explicit "red line" against automating core research functions, is essential to preserving the profession's value.


Original article

The premise and value of human-centered research is that subject matter experts apply their skills to the design of studies that uncover relevant information from appropriate research participants in service of organizational goals. Every concession we make in the name of efficiency or innovation that removes humanity from this process is debasement and a step toward irrelevance.

What is the role of the user researcher if we offload both users and research to generative AI platforms and tools? Once you use prompts or mash buttons to generate an end-to-end research plan; or automate synthetic or AI interviews, surveys, or prototype tests; or generate personas, jobs-to-be-done, insights, product recommendations, or a marketing plan, then what the fuck is your unique value? What are you when you offload your craft and expertise to a simulation in the name of innovation or efficiency? What makes you anything less than redundant?

AI is fantastic for pattern recognition, with realized benefits in medical imaging analysis. It’s great at statistical modeling and multivariate analysis! But the very best possible outcome from outsourcing research expertise to LLMs is an average result (non-gated overview of that academic article here). While it sounds pragmatic or “better than nothing” for researchers and the orgs that employ them to lean on AI for research, it also leads everyone to the exact same average quality results and removes the differentiation that leads to innovation or unique experiences. 

If organizations want to stand out in crowded marketplaces, asking for run-of-the-mill research advice from a bot over trusting subject-matter experts sure does sound like self-sabotage. And the sabotage is doubly so for the researchers who embrace the tools that will serve as their replacements.

But Gregg, this is how the profession is evolving

Says who? The executives who never cared about research in the first place? The PMs who never had the patience to wait for quality research results? The investors and tech companies that need (literal) buy-in? The tools and platforms with a vested interest in selling AI research as something that literally anyone in an org can do? Actually…

But this is how research is done today

Bullshit: this is how user research is marketed today. For the past 15 years, platforms like Usertesting and UserZoom (before they were both acquired by the same private equity company and merged to form a near-monopoly in the enterprise research space) have positioned themselves as tools for design and product teams to "become customer-centric" and "listen to the voice of the user." The value proposition was that orgs could use these platforms either as an add-on to existing research and design practices or before they had an in-house research expert.

Today tooling platforms see an opportunity to sell AI-assisted research tools to organizations as an alternative to hiring research experts. When 80% of the sponsors of a large user research conference are selling tools that replace user researchers with AI in the name of democratization, we’re not the customers; we’re marks. If your business model relies on seat licenses, it’s much more profitable to sell a tool that makes everyone a researcher rather than a tool that supports a dwindling number of researchers.

But marketing isn’t reality. Just because a handful of user research thought leaders who should know better were paid to run and promote studies using AI research tools without disclosing the sponsorship in their breathless LinkedIn posts doesn’t necessarily mean these are the tools your organization should adopt. In fact, an undisclosed sponsorship is a good way to create the illusion that a product is gaining widespread adoption by experts, which is why the Federal Trade Commission regulates against it.

If I use a tool and tell you about it, that's a recommendation. But if I am paid a fee to use a product and then tell you about it, that's different—that's a sponsorship. Then I'm no longer a researcher recommending a tool—I'm an influencer peddling sponsored content. If a product resorts to shady advertising practices that require a pliant thought leader's complicity in constructing a Potemkin industry, maybe the whole enterprise is rotten.

This is also why ethics statements are important. Let’s uphold some professional standards lest we become grifters.

But regular (i.e., rigorous) research takes too long

For who? What product decision is so important that planning and spending time with users is not viable? Better yet, what product decision wouldn’t benefit from time with flesh and blood humans to gain context, mitigate risks, and zero in on the right thing?

Every research planning decision is a tradeoff between time and confidence—a good researcher can always learn something within a given time period and budget. But frequently the problem is that neither time period nor budget factor into the arbitrary milestones and deadlines a group of people place on a calendar. 

If that group of people repeatedly fails to include enough time for research, I’d argue that they might not value research in the first place. Shoehorning a half-assed generative AI research effort into an unreasonable project window isn’t going to make you look like a team player nor make people see the value of research; it’s only going to validate that research should never require time (nor researchers).

Going further, for the founders and executives who never believed in user research, AI research is a way to skip doing research entirely while presenting the veneer of "listening" to their users. When user researchers adopt AI research tools, they not only debase their own contributions to understanding users; they also reinforce the notion that you don't really need to do user research to seem human-centric.

But AI lets us 10x our research efficiency

Are you listening to yourself? You sound like every bad AI-generated post on LinkedIn now. I said earlier that the work of research can be optimized to fit time and organizational constraints, but that’s not the “efficiency” I see being adopted now:

  • I fed Claude some survey results and asked it to create unique one-pagers for my executive, product, and design partners. An expert might be able to get away with this one time because they can evaluate the validity and quality of the one-pager (though why you'd rather proofread the work of an LLM than create something original is beyond me). But once you cross this chasm, you've demonstrated that this is how research can be summarized and shared… by anyone with access to Claude. You've made yourself—and those with your job title—dispensable.
  • We created a gem for designers to get started with their own research without having to work with a researcher. Right—because the problem was never that asking designers to take on an entirely different job, on top of design and with no additional time, was too much to ask. The problem was having to collaborate with a living, breathing research expert.

But there’s still a human in the loop!

Research is already a human-to-human loop, with meaning conveyed by participants and contextualized by researchers. Adding a human back to what was already a perfectly functional loop doesn’t enrich anything and only adds inefficiencies—even the people who review LLM answer quality warn against using LLMs for accurate answers.

Personally, I transitioned from design and design education to user research because I was—and still am—blown away that I could learn from other humans as my job. A more religious person might say I’ve been blessed to earn a living by talking to writers, readers, editors, small business owners, designers, agencies, and more in support of organizations who build products for these groups. 

But it’s not just that I enjoy practicing research—I’m good at it. User researchers are experts at it. Why would I reduce myself to quality control on a slop assembly line and then, with my whole chest, tell people I am the human in the loop? Why should we debase ourselves by implying that our expertise is replaceable?

Maybe you just hate or don’t get AI

Au contraire! AI can be magical (especially in medical imaging and programming). I used Gemini to update a SQL query recently at the encouragement of a data science peer. I use a product called Granola (not a paid mention, fwiw) for call transcription, notes organization, and pulling up quotes. I work with designers who spin up prototypes with Figma Make that I then test with humans. I work with engineers who use AI for spam mitigation and trust and safety tasks. Jess Holbrook smartly advocated for using AI to take a dissent pass on research artifacts and to challenge yourself and your findings.
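As a concrete illustration of why that Gemini example is low-stakes: a query update is a bounded task whose output a human can verify against known data. Here is a minimal, hypothetical sketch of that kind of mechanical rewrite; the table and column names are invented for illustration and do not come from the post.

```sql
-- Hypothetical "before": a correlated subquery, recomputed per row.
-- (participants/sessions and their columns are invented examples.)
SELECT p.participant_id,
       (SELECT COUNT(*)
          FROM sessions s
         WHERE s.participant_id = p.participant_id) AS session_count
  FROM participants p;

-- Hypothetical "after": the same result via a join and GROUP BY,
-- the sort of rewrite that is cheap to verify by comparing outputs.
SELECT p.participant_id,
       COUNT(s.session_id) AS session_count
  FROM participants p
  LEFT JOIN sessions s
         ON s.participant_id = p.participant_id
 GROUP BY p.participant_id;
```

The point isn't the rewrite itself; it's that the output is cheap to check, which is exactly what interview synthesis and meaning-making are not.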

What I don’t do is use generative AI or LLMs to spit out an entire research plan, synthesize hours of interviews, or conduct my interviews for me (?!). One reason why I don’t do any of these is that generative AI can’t replace the meaning-making that human researchers do. Why would we even want to use AI to replace the tasks that humans are uniquely good at, or the tasks that humans enjoy, or the tasks that connect us to other humans? To me the personal connection is the best part of being a user researcher or user-centered designer!

This is what gets my goat: AI has many useful applications. This moment in time really is akin to the start of the internet era, in that AI has broken containment and entered mainstream conversation (in no small part due to marketing hype centered on illogical use cases). However, the hype has created acolytes with an ill-fitting solution to the non-existent problem of how to study humans better.

You sound like a Luddite

The Luddites were not anti-progress; they were pro-worker. Automation increased production but eliminated jobs, lowered wages, and reduced quality. Sound familiar? 

Researchers already document findings faster than orgs can act on them. It strains credulity that tech leaders are clamoring for even more findings of even lower quality.

The folks extolling the virtues of offloading critical research tasks to faulty tech are eroding not just the value of an entire professional class but of human curiosity and knowledge. Listen to Billy Bragg, support unions, and always stand on the side of workers… especially when replacing them with unreliable facsimiles helps no one but the people who stand to profit from such a move.

So what do we do?

This is a scary time! In the last couple of years, thousands upon thousands of tech workers—including researchers—who kept their heads down, did quality work, and earned wonderful performance reviews were laid off in the name of progress. Going along just to get along didn't earn anyone a reprieve. So it's not like we have anything to lose by advocating for ourselves.

The people who stand to gain the most from the adoption of generative AI research platforms and practices are those who claim it makes research better and those whose jobs depend on that belief. These claims are self-promoting narratives, sponsored content, or both.

My move is not to play the game of debasing ourselves in the name of progress. Just because a bunch of smart people say that “this is the future” doesn’t mean they’re right, as we just saw with web3, crypto, and NFTs. No one can predict the future (despite what NPS proponents might say).

I didn’t enter this field and take this type of job only to not do the job. My red line is conceding the things I am—we are—uniquely good at to a product, platform, or bot. My red line is trading in the parts of the job I am both an expert in and enjoy for tasks that make the job something else entirely. 

What is your red line?

End hits

  • No part of this blog post used AI. I like writing—it helps me think; I like thinking—it helps me write.
  • However, my human friends and fellow researchers Meghan Cetera, Joey Jakob, and Gabe Trionfi generously provided feedback and further reading recommendations, for which I am grateful.
  • For more on humans in the loop, read Pavel Samsonov’s ‘Human in the loop’ is a thought-terminating cliche.
  • The title of this post comes from the movie WarGames, in which a supercomputer learns about futility and no-win scenarios.