Why we don’t use AI

Original link: https://yarnspinner.dev/blog/why-we-dont-use-ai/

The Yarn Spinner game development tool will not integrate or use "AI" technology, citing ethical concerns about the companies developing it. The creators come from a machine learning background and initially saw promise in the field, but became disillusioned as its focus shifted toward tools designed to cut labour costs: essentially firing people, or demanding more work without hiring anyone new. They argue that current AI development puts profit ahead of responsible innovation, ignores problems such as bias and explainability, and actively suppresses dissent. While acknowledging the pressure to adopt AI ("you'll be left behind"), Yarn Spinner prioritises building tools that genuinely *help* developers make *better* games, rather than simply chasing the latest technology. They won't ban users who use AI elsewhere, but they strongly discourage it, stressing that even seemingly harmless use financially supports companies engaged in harmful practices. They remain open to exploring machine learning in the future, provided the ethical problems are resolved and the focus returns to beneficial applications.

## Yarn Spinner and the AI Debate: Summary

A recent Hacker News thread sparked discussion of Yarn Spinner's decision to avoid AI in its game development tooling. While the post framed AI use as potentially harmful, commenters largely pushed back on that "good versus evil" framing. Many argued that AI, like earlier technological advances, can be a powerful *tool* for creative workers (including indie developers), boosting productivity without necessarily replacing jobs. Some worried that treating AI purely as a job-eliminator overlooks its potential as a "force multiplier". The conversation also touched on the social impact of automation, drawing comparisons to historical resistance movements such as the Luddites, and stressed the need to address the *social* consequences of job displacement rather than the technology itself. Some commenters pointed to worrying engineering practices, such as unreviewed code being pushed to production, and suggested that AI may amplify existing problems in poorly managed teams. Ultimately, the discussion reflected a nuanced view: while acknowledging legitimate concerns about large AI companies, many felt that rejecting AI outright ignores its potential benefits and its practical value for individuals and small teams.

Original article

We get asked about AI a lot. Whether we’re going to add it to Yarn Spinner, whether we use it ourselves, what we think about it. Fair questions. Time to write it all down.

Yarn Spinner doesn’t use the technology that’s currently being called AI. We don’t have generative AI features in the product, we don’t use code generation tools to build it, and we don’t accept contributions we know contain generated material. Let’s talk about why.

TL;DR: AI companies make tools for hurting people and we don’t want to support that.

The Past

A little history first. We come from a background that did a decent amount of work with AI and ML (terms we shouldn’t but will use interchangeably because everyone else does).

We gave talks about it for game developers and non-programmers. We wrote little ML bots for games. We did research and academic work. We wrote books about using ML in games, mostly for procedural animation. It was a fun series of techniques to explore, and explore we did.

O’Reilly books on AI and ML that we wrote

When we started at university, neural networks and deep learning (the main underlying techniques most AI products use today) were just too slow and hard to work with. By the time we finished our doctorates, that had changed. Tools like TensorFlow made this stuff easier and fun, and the increase in GPU access made training and inference possible for people without Big Tech budgets. For quite a while, we were genuinely excited about the potential.

Then things started to change.

It’s hard to say exactly when. Maybe it was always like this and we just didn’t see it. But by the end of 2020 (a year famous for absolutely nothing world-changing whatsoever happening /s) it was clear that the AI we liked was not what the tech companies were interested in. They were increasingly about generative imagery, chatbots writing your material for you, and summaries of art instead of exposure to it. Efforts to mitigate known problems (reinforcing cultural biases, being difficult to make deterministic or explainable) were disparaged and diminished. Researchers and developers who raised concerns were being fired.

Things have only gotten worse since.

If you look at what AI companies promote now, it’s not what we wanted. When you boil down everything they say and strip it right back, what they make are tools to either fire people or demand more work without hiring anyone new to help. That’s the problem AI companies want to solve.

Anything else they achieve is a happy accident on the road to firing as many of your friends and colleagues as possible.

AI is now a tool for firing people, in a time when getting re-employed is especially difficult and being unemployed can be life-threatening. We don’t want to be part of that. Until this is fixed we won’t use AI in our work, nor integrate it into Yarn Spinner for others to use.

We don’t want to support the companies making these tools or normalise their behaviour. So we don’t.

The Future

There’s a comment we see every so often, always phrased as a fait accompli: “you’ll be left behind if you don’t adopt AI”, or its cousin, “everyone is using it”. We disagree.

This isn’t the right approach regardless of our opinions on AI. It’s tool-driven development. The goal should never be “we use this tool”. It should be “how do we help you make better games?”.

Great games are made when people are passionate about an idea and push it into existence. Often this means reduction, not addition. Changing ideas. Keeping yourself and colleagues healthy. Being willing to adapt and take feedback. Good tools need to do the same.

We’re constantly asking “how would this help make better games?” and following where that leads. The exploration matters, and most of the time we find an idea doesn’t survive even a little scrutiny. We’d rather have fewer polished features that solve real problems than a load of garbage that exists for the sake of marketing copy.

We’re proud of Yarn Spinner. We don’t think it’s a coincidence it’s used in so many games. Our process works, and we’re always adding new features. We also change and remove features if they don’t meet the needs of devs. We’re always chatting, internally and with other game devs and even non-devs, about potential ideas and approaches. We’re going to keep asking “how would this help make better games?” and ship what survives that gauntlet.

Who knows. Maybe the world will change and we can take another look at ML.

Likely to be Frequently Asked Questions

Why do you only care about people getting fired? I read that AI is also bad for SOME OTHER REASON!

There are so many issues with AI (and in particular with the companies making it). Some are potential or even hypothetical concerns that might eventuate. Some are very real and happening right now in front of our eyes. Some are much worse than people being fired. Some of these worse issues appeared between us starting this blog post and publishing it. If the labour concerns around AI suddenly vanished, there are still many issues we’d need to see fixed before we’d be comfortable using it. But it’s stronger to argue a single point at a time. The labour concerns can be fixed and should be pushed back on. Once that’s sorted we can look at the next issue.

Why don’t you do ML correctly so no one gets hurt?

Given our background and experience, we probably could make our own AI tooling in a way we feel is helpful, ethical, and doesn’t fund companies we disagree with. Two problems with this. First, these things take a lot of time to make, and like we said, most ideas don’t survive initial exploration. It would be very difficult to balance exploring an idea while also building new models to test it. Second, while we could make our own tools, most people can’t. If they saw us using a technique and wanted to try it, they’d end up supporting the very companies we object to. We don’t want to normalise it, so we have to not use it.

My boss wants me to use AI for work, am I part of the problem?

Getting and keeping a job is a necessity, and this has only gotten worse recently. If you can push back on this, do. But no one will fault you for wanting to keep your job.

Are you going to ban people who use AI from using Yarn Spinner?

No. While we wish you didn’t use it, we get this is our line in the sand, not yours. We’ll still advocate against these tools and we’re still concerned about the harm they do. You need to realise that if you use them, you’re both financially and socially supporting dodgy companies doing dodgy things. They will use your support to push their agenda. If these tools are working for you, we’re genuinely pleased. But please also stop using them.

I kinda like using AI and no one at my work is getting fired?

This comment pops up a few times, often from programmers. Unfortunately, because of how messy the term AI now is, the same concerns still apply. Your adoption helps promote the companies making these tools. People see you using it and force it onto others at the studio, or at other workplaces entirely. From what we’ve seen, this is followed by people getting fired and overworked. If it isn’t happening to you and your colleagues, great. But you’re still helping it happen elsewhere. And as we said, even if you fixed the labour concerns tomorrow, there are still many other issues. There’s more than just being fired to worry about.

Are you zealots or luddites who just hate AI?

Nah. Just upset at the people making these things. There’s great potential in AI and machine learning, and it’s being squandered to make already dodgy rich people richer and more dodgy. We still keep up with developments because we hope one day we can explore it again. But for now, the people pushing these tools aren’t people we want to give money or support to.

Header image: WGA Strike, June 21, 2023 via Wikimedia Commons
