Don't blindly trust what AI tells you, says Google's Sundar Pichai

原始链接: https://www.bbc.com/news/articles/c8drzv37z4jo

Google CEO Sundar Pichai has warned against "blindly trusting" AI tools, acknowledging that they are prone to errors even when accompanied by disclaimers. In a BBC interview, he stressed the importance of a diverse information ecosystem, for example using Google Search alongside AI, and of understanding AI's strengths and weaknesses. While AI is helpful for creative tasks, Pichai emphasised that users must critically evaluate its output. Google's AI Overviews have previously drawn criticism for inaccurate summaries, highlighting broader concerns about "hallucinations" (AI generating false information), especially on sensitive topics such as health and news. Expert Gina Neff argues that Google should prioritise *improving* AI reliability rather than relying on users to fact-check. Pichai defended the balance between rapid development and safety measures, noting that Google has increased its investment in AI safety and open-sourced detection tools. He also addressed concerns about an AI monopoly, saying that no single company should control such powerful technology. Google's recent launch of Gemini 3.0 aims to win back market share from competitors such as ChatGPT, marking a "new phase" of AI-powered search.

A recent BBC article quoting Google's Sundar Pichai, warning users not to blindly trust AI output, sparked discussion on Hacker News. The core issue is AI's inherently probabilistic nature: it is not deterministic and can give inconsistent, even *wrong*, answers depending on phrasing or timing. Commenters raised liability concerns around deploying unreliable technology and compared AI's trustworthiness to that of a young child. Many argued that Google should prioritise *improving* AI reliability rather than relying on users to verify its output. Some users shared personal experiences of AI inaccuracies, such as incorrect event information. A major complaint is Google's resistance to letting users easily disable "AI Overviews" despite their unreliability. Ultimately, the consensus leaned toward skepticism, with some summing it up succinctly: "Don't trust AI." There was also a cynical undercurrent questioning the credibility of both AI *and* the sources reporting on it.

Original Article

Faisal Islam, economics editor,

Rachel Clun, business reporter, and

Liv McMahon, technology reporter

People should not "blindly trust" everything AI tools tell them, the boss of Google's parent company Alphabet has told the BBC.

In an exclusive interview, chief executive Sundar Pichai said that AI models are "prone to errors" and urged people to use them alongside other tools.

Mr Pichai said it highlighted the importance of having a rich information ecosystem, rather than solely relying on AI technology.

"This is why people also use Google search, and we have other products that are more grounded in providing accurate information."

However, some experts say big tech firms such as Google should not be inviting users to fact-check their tools' output, but should focus instead on making their systems more reliable.

While AI tools were helpful "if you want to creatively write something", Mr Pichai said people "have to learn to use these tools for what they're good at, and not blindly trust everything they say".

He told the BBC: "We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors."

The company displays disclaimers on its AI tools to let users know they can make mistakes.

But this has not shielded it from criticism and concerns over errors made by its own products.

Google's rollout of AI Overviews summarising its search results was marred by criticism and mockery over some erratic, inaccurate responses.

The tendency for generative AI products, such as chatbots, to relay misleading or false information, is a cause of concern among experts.

"We know these systems make up answers, and they make up answers to please us - and that's a problem," Gina Neff, professor of responsible AI at Queen Mary University of London, told BBC Radio 4's Today programme.

"It's okay if I'm asking 'what movie should I see next', it's quite different if I'm asking really sensitive questions about my health, mental wellbeing, about science, about news," she said.

She also urged Google to take more responsibility over its AI products and their accuracy, rather than passing that on to consumers.

"The company now is asking to mark their own exam paper while they're burning down the school," she said.

'A new phase'

The tech world has been awaiting the latest launch of Google's consumer AI model, Gemini 3.0, which is starting to win back market share from ChatGPT.

From May this year, Google began introducing a new "AI Mode" into its search, integrating its Gemini chatbot which is aimed at giving users the experience of talking to an expert.

At the time, Mr Pichai said the integration of Gemini with search signalled a "new phase of the AI platform shift".

The move is also part of the tech giant's bid to remain competitive against AI services such as ChatGPT, which have threatened Google's online search dominance.

His comments back up BBC research from earlier this year, which found that AI chatbots inaccurately summarised news stories.

OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini and Perplexity AI were all given content from the BBC website and asked questions about it, and the research found the AI answers contained "significant inaccuracies".

In his interview with the BBC, Mr Pichai said there was some tension between how fast technology was being developed and how mitigations are built in to prevent potential harmful effects.

For Alphabet, Mr Pichai said managing that tension means being "bold and responsible at the same time".

"So we are moving fast through this moment. I think our consumers are demanding it," he said.

The tech giant has also increased its investment in AI security in proportion with its investment in AI, Mr Pichai added.

"For example, we are open-sourcing technology which will allow you to detect whether an image is generated by AI," he said.

Asked about recently uncovered years-old comments from tech billionaire Elon Musk to OpenAI's founders around fears the now Google-owned DeepMind could create an AI "dictatorship", Mr Pichai said "no one company should own a technology as powerful as AI".

But he added there were many companies in the AI ecosystem today.

"If there was only one company which was building AI technology and everyone else had to use it, I would be concerned about that too, but we are so far from that scenario right now," he said.
