I'm scared about biological computing

原始链接: https://kuber.studio/blog/Reflections/I%27m-Scared-About-Biological-Computing

The author, who has extensive experience with AI and its underlying math, is deeply unsettled by recent progress in growing and training artificial neurons. Specifically, a lab-grown neural network successfully running the game DOOM triggered a disturbing realization. While large language models are dismissed as mere next-token predictors lacking consciousness, this biological system blurs the line. The author questions whether creating a "biocomputer" and placing it in a continuous, simulated violent environment constitutes ethical treatment, especially since the system *interprets* visual data and may be "seeing" the game. Despite its relatively small scale (200,000 neurons - more than some animals have), the promise of powerful, energy-efficient biological computing creates strong commercial incentives for continued development. The author closes on a note of discomfort, highlighting the lack of public discussion around this potentially groundbreaking - and ethically contested - technology.

## Concerns About Biological Computing

A recent Hacker News thread sparked discussion of the ethical implications of biological computing, particularly after a video showing neurons "playing" DOOM. The original poster expressed unease and drew a parallel to ethical questions about our use of biological systems - akin to the thought experiment of a genetically engineered pig that *wants* to be eaten. Commenters debated whether consciousness in silicon-based AI implies consciousness in biological systems, and asked why the concern isn't voiced more loudly. A key point of contention was the accuracy of the DOOM-neuron experiment: many argued the reporting overstated it, since the neurons do not "see" or "play" in any meaningful sense but merely respond to pre-processed data. The discussion also touched on the danger of misinformation spreading through platforms like YouTube, alongside the valuable learning resources found there. Ultimately, the thread highlighted anxiety about unconstrained technological progress, the risks of capitalist incentives driving it, and the possibility that increasingly complex biological systems could surpass human intelligence.

Original text

I’ve been in the AI space since ChatGPT first dropped.

I’ve toyed around with a lot of Language Models, built random side projects, built a couple from scratch and I’ve spent hours looking at the math behind it all. I know how the weights work, I know how the matrix multiplications function. It’s just math and probabilities, a lot of it.

But seeing these exact same concepts working on literal human neurons? That is so profoundly dystopian to me.

If you’ve run into some of my work before, you know I have a thing for DOOM. I’ve spent days figuring out how to map out WADs so I could run the game as a stateless engine or inside QR codes.

So a few months ago, I came across a video from a company that grew neurons in a lab and trained them to play DOOM - honestly, better than I do.

I saw it, read about it, nodded, and moved on.

Except I didn’t. It’s been months and I couldn’t put a finger on why it bothered me so much.


We’ve dismissed the idea that LLMs are “conscious” because of the simple, slightly brutal reality that they’re next-token predictors: they’re really good at simulating the outputs of thought, but they have no inner life.
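"Next-token predictor" is worth making literal. Here is a toy sketch (the vocabulary and probability table are invented for illustration) of the generation loop: repeatedly sample the next token from a conditional distribution until an end marker appears. A real LLM computes those probabilities with billions of weights, but the loop itself is this simple.

```python
import random

# Hand-written conditional next-token probabilities; a real model
# would compute these from its weights at every step.
NEXT = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "dog": {"sat": 0.3, "ran": 0.7},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(token, rng):
    # Sample token by token until the end marker is produced.
    out = [token]
    while token != "<end>":
        dist = NEXT[token]
        token = rng.choices(list(dist), weights=list(dist.values()))[0]
        out.append(token)
    return out

print(generate("the", random.Random(0)))
```

There is no inner state beyond the sequence produced so far - which is exactly why the claim of "no inner life" feels defensible for silicon, and exactly why the biological version is harder to wave away.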

But this is where the line slightly blurs in my head. Did we possibly just build the first human biocomputer and immediately put it in a simulated hell, playing the same game on loop, forever? Using the same reward mechanisms we use for LLMs?

How do we know that isn’t conscious? Who gets to decide that?

To play DOOM, the system feeds visual data to the neurons. For the neurons to react, they have to interpret that data in some way. When our brains interpret electrical signals from our optic nerves, we call it “seeing.”

So… are the neurons on that chip seeing?

We all desperately want to say no. We want to say it’s just a science experiment, that 200,000 neurons isn’t enough to be a “person.” But 200,000 is already more neurons than a jellyfish or a worm has.

Where do we draw the line?

The commercial incentives obviously exist - a human brain can store far more information, with potentially better retrieval, at a fraction of the power our silicon uses.

And of course it’s laughable to even imagine that we’d stop developing this. This was always a Pandora’s box, and even things we collectively call “wrong,” like mass surveillance or black markets, keep existing because someone profits. Why would this be different?

I don’t really have a conclusion here, and I’m not sure one even exists yet - probably why this blog is called MindDump. But I think I’m just uncomfortable that we made this and we’re not really talking about it.
