(Comments)

Original link: https://news.ycombinator.com/item?id=43412179

A Hacker News thread discusses Cloudflare's early research on cryptographic watermarks for AI-generated content. The first commenter is skeptical, arguing that the scheme's reliance on generators voluntarily applying the watermark is a fundamental flaw, since malicious actors simply won't adopt it; they suggest regulation as a more effective, if longer-term, remedy. PeterStuer questions what problem the watermarks are actually meant to solve, and worries the effort could lead to restrictions on open-source models. xyzal jokes that the moment a watermarking algorithm appears on GitHub, they will "watermark" all of their human-generated output. ForHackernews proposes the opposite approach: embedding a secure element in cameras to digitally sign authentic content, making forged authenticity difficult and expensive. The thread highlights the challenge of verifying content in an era of increasingly sophisticated AI-generated media.

Related Articles
  • An early look at cryptographic watermarks for AI-generated content 2025-03-19
  • (Comments) 2024-04-21
  • (Comments) 2024-08-07
  • (Comments) 2025-03-14
  • (Comments) 2025-03-18

  • Original post
    An early look at cryptographic watermarks for AI-generated content (cloudflare.com)
    13 points by jgrahamc 1 hour ago | hide | past | favorite | 4 comments

    Without going into the technical efficacy of such schemes (I am a skeptic), the proposed solution requires the entity generating the media to use it. Isn't that a flaw? Why would an attacker use it willingly? If they did not want to pass off AI-generated content as real, they would have willingly made that distinction themselves.

    The point is, there is no good solution here unless there is regulation, but these attempts at solutions are useful in the long run.



    I'm not sure a good case is made here regarding the "problems" this is intended to solve.

    OTOH, could this be another step towards prohibiting Open Source models?



    Well, the moment there is any watermarking algorithm on GitHub, I'll gladly 'watermark' all my human output.


    Isn't it better/easier to go the other way? What if cameras included some kind of secured element that signed real content?

    Maybe it would technically be possible to defeat, but we're already pretty good at making it difficult/expensive to extract a private key from hardware.
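    The signing idea above can be sketched in a few lines. This is a hypothetical illustration, not any camera vendor's actual scheme: it uses a symmetric HMAC tag for simplicity, where a real design (such as C2PA-style content provenance) would use an asymmetric key pair so the private key never leaves the secure element.

    ```python
    import hmac
    import hashlib
    import os

    # Stand-in for a key sealed inside the camera's secure hardware.
    # In a real deployment this would be a per-device private key that
    # cannot be extracted, paired with a public key for verification.
    DEVICE_KEY = os.urandom(32)

    def sign_capture(image_bytes: bytes) -> bytes:
        """Produce an authentication tag for a captured image."""
        return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

    def verify_capture(image_bytes: bytes, tag: bytes) -> bool:
        """Check the image against its tag using a constant-time compare."""
        return hmac.compare_digest(sign_capture(image_bytes), tag)

    photo = b"...raw sensor data..."
    tag = sign_capture(photo)
    assert verify_capture(photo, tag)            # untouched image verifies
    assert not verify_capture(photo + b"x", tag) # any edit breaks the tag
    ```

    The security argument then shifts from "can you detect AI output?" to "can you extract the key from the hardware?", which, as the comment notes, is a problem we are already reasonably good at making expensive.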





