Jellyfin LLM/"AI" Development Policy

Original link: https://jellyfin.org/docs/general/contributing/llm-policies/

## Jellyfin's Policy on LLM Contributions

The Jellyfin project prioritizes high-quality code (readability, simplicity, and conciseness) and has drawn up guidelines for contributions that make use of large language models (LLMs) such as ChatGPT and Claude. While acknowledging that LLMs can be useful tools, Jellyfin aims to prevent a decline in code standards.

**Text generated directly by an LLM is prohibited in all community communication (issues, requests, comments, forum posts).** Contributions must be written in the author's own words to demonstrate understanding.

**LLM-assisted code contributions are permitted, but subject to strict review.** "Vibe coding", i.e. submitting code one does not understand, will be rejected. Contributions must be concise, focused, well formatted, and thoroughly tested. Developers must explain *why* a change was made, not merely *what the LLM did*, and be prepared to address review feedback. Large changes must be split into multiple small commits.

**For unofficial projects shared within the community:** projects that are primarily LLM-generated must be clearly marked as such. Existing code must be properly licensed and attributed, with zero tolerance for plagiarism. Jellyfin moderators will not actively police LLM use in third-party projects, focusing instead on enforcing licensing and attribution rules. The core principle is responsible development: an LLM may *assist*, but the developer remains responsible for the quality of, and their understanding of, their contributions.

Jellyfin's new policy on the use of LLMs ("AI") in contributions sparked discussion on Hacker News. At its core, the policy bans LLM-generated text in direct communication: no AI-written pull request descriptions or comments. LLMs *are* permitted, however, for translation and grammar assistance, acknowledging their value as supporting tools.

The discussion highlighted frustration with a recent flood of low-quality, LLM-generated pull requests to the Jellyfin issue tracker, often bundling several unrelated changes together. Contributors worried this would overwhelm maintainers and stall progress.

Many agreed the policy is reasonable, amounting to "don't submit bad code, regardless of how it was created". Some suggested stricter enforcement, such as outright bans for violations, while others proposed an "agent policy" for LLM contributions, analogous to existing coding standards. A key concern is the sheer *volume* of LLM-generated contributions, which exceeds what maintainers can reasonably review.

Original Text

The rise of LLMs as a useful development tool over the last year or so has been significant. The power and flexibility of tools like Claude Code and ChatGPT have given a great deal of capability to experienced and new developers alike. But there are trade-offs.

The Jellyfin project has, from day one, had a major focus on code quality - readability, simplicity, conciseness. This is a mostly manual effort driven by a dedicated team of individuals, motivated by a desire to fix the code Jellyfin is based on, which, without beating a dead horse too much, was extremely fragile, spaghettified, and prone to over-engineered complexity.

We are seeing a precipitous rise in contributors using AI within the Jellyfin ecosystem, both in the server and for clients, as well as a rise in criticism and concern about LLMs generally. At this time we are writing this policy to address exactly what we expect and desire with respect to contributions and interactions within our community that may use LLMs. These rules apply to all of our official projects and community spaces.

General Guidelines

  1. LLM output is expressly prohibited for any direct communication, including the following:

    • issues or comments
    • feature requests or comments
    • pull request bodies or comments
    • forum/chat/etc. posts or comments

    In short, if you are posting any of those things, the output must be your own words, explanation, description, etc., not a verbatim dump of an LLM's output. We expect you to understand what you're posting. Violating this rule will result in closure/deletion of the offending item(s).

    An exception will be made for LLM-assisted translations if you are having trouble accurately conveying your intent in English. Please explicitly note this ("I have translated this from MyLanguage with an LLM") and, if possible, post in your original language as well.

  2. LLM code contributions are covered in more detail below, but the general principle is that "pure 'vibe coding' will be rejected" and "you are responsible for what you commit". We will review in that vein. If the code looks terrible, it will be rejected as such.

LLM Code Contributions to Official Projects

The use of LLMs for code is controversial and open to much interpretation. These guidelines are our best-effort attempt to ensure that knowledgeable developers who seek to use these tools as a legitimate aid are not overly hindered, while also preventing an ongoing flood of slop contributions that violate our core ethos above. These apply to all official Jellyfin projects.

  1. Contributions should be concise and focused. If the PR claims to target X, and is also touching unrelated Y and Z, it will be rejected. This includes incidental changes to unrelated functionality, a hallmark of poorly-worded or too-general prompts. Similarly, a large PR must be broken into multiple small, manageable commits for review and history purposes.
  2. Formatting and quality standards must be upheld. Excessive unhelpful comments, spaghetti code, stray whitespace on empty lines, etc. will be interpreted as pure LLM output and rejected; you must clean up the mess before submitting. Also, do not commit LLM metafiles (e.g. .claude configs) or any other editor-created non-code files.
  3. You must review the output and be able to explain in the PR body - without LLM output as noted above - what is being changed and why. Your PR body (and, if applicable, commit bodies) should provide context to other developers about why a change was made, and if your name is on it, we want your words and explanations, not an LLM's. If you can't explain what the LLM did, we are not interested in the change.
  4. The changes must be tested. The code should build and run correctly, or it will be rejected. You should also explicitly test the functionality being modified.
  5. You must be able and willing to handle review feedback and implement the suggested change(s) as required. What this means in practice is, if you do not know what has been changed or why (see #3), and thus can't implement suggested changes or discuss them yourself, then we are not interested in the change. Just dumping reviewer feedback into an LLM and expecting whatever comes out to be "good enough" is not acceptable.
  6. Features or refactors require an in-depth level of understanding of what is being changed and why. It is obvious to our reviewers when a developer submits changes without understanding what is happening. These will be rejected. And as noted in #1, the PR must contain multiple discrete commits. We will squash commits as deemed appropriate after review. Large changes must also follow our other development policies (discussion, review, implementation, testing process).
  7. The final discretion always lies with the reviewers. If your PR is not capable of being reasonably reviewed, for any reason (over-complexity, size, squashed commits, etc.) it will be rejected, and this goes just as much for non-LLM-assisted PRs as it does for LLM-assisted PRs. You will be asked to split such a PR up into multiple PRs that each present a focused, concise set of changes instead.
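The workflow that guidelines #1, #2, and #6 describe can be sketched with plain git. This is a hypothetical illustration, not a mandated procedure; the filenames, ignore patterns, and commit messages are invented for the example.

```shell
# Hypothetical sketch: keep LLM metafiles out of the repo (#2) and split
# work into small, discrete, reviewable commits (#1, #6).
git init -q demo
printf '.claude/\nCLAUDE.md\n' > demo/.gitignore   # keep LLM metafiles uncommitted
git -C demo add .gitignore
git -C demo -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "Ignore LLM metafiles"
echo 'focused change' > demo/fix.cs
git -C demo add fix.cs
git -C demo -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "Fix one focused issue"
echo 'regression test' > demo/fix_test.cs
git -C demo add fix_test.cs
git -C demo -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "Add regression test for the fix"
git -C demo log --oneline    # three small commits, each reviewable on its own
```

Staging interactively with `git add -p` is another common way to keep each commit focused on a single concern when a working tree has accumulated unrelated edits.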

The golden rule is this: do not just let an LLM loose on the codebase with a vague vibe prompt and then commit the results as-is. This is lazy development, will always result in a poor-quality contribution from our perspective, and we are not at all interested in such slop. Make an effort or please do not bother. And again, you are free to use LLMs to assist you, but not as the sole source of code changes.

You are of course free to do whatever you wish for your own non-official projects. However, we will be enforcing the following rules for any sharing of such projects within our communities.

  1. Any primarily-LLM-developed projects should be clearly marked as such. It is up to users to decide if this is acceptable to them or not. If you used an LLM for secondary assistance (e.g. docs, formatting, etc.) in an obvious way, we would err towards disclosure as well.
  2. You must respect and follow licenses. If you are basing your project off of existing code, following its license is not optional. You must credit existing contributors in full for all contributions. Do not mangle the Git history, and do not commit pending 3rd party changes as your own (i.e. by copying the code and then committing it). Doing so will result not just in rejection, but in a ban from our organization and community. We have a zero-tolerance policy for code theft and bad-faith attribution attempts.
  3. For members of the community, do not report LLM-generated tools, clients, etc. on that basis alone, and do not engage in anti-LLM "witch hunts". As mentioned above, this is permitted and it is your choice whether to "support" said tool/client/etc. or not.
  4. We, the moderators, are not going to play "LLM police" about 3rd party projects by nitpicking to try to "find LLM contributions" that otherwise follow our rules here; this is tedious and a waste of our time and effort. What this means in practice is that rule #1 is up to the author, and rule #3 must be interpreted in that vein. If you only suspect a tool is LLM-generated and violates rule #1, then downvote/ignore it and move on. Only if we see blatant breaking of rule #1 will we enforce it, but again we will not be going through code line by line playing the "was this LLM generated?" game. Rule #2 will always be enforced regardless of LLM involvement.
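The attribution requirement in rule #2 is easiest to satisfy by cloning rather than copying. A hypothetical local sketch (the repository and author names are invented for illustration):

```shell
# Hypothetical sketch: base a project on existing code by cloning it, so Git
# history and contributor attribution are preserved, instead of copying files
# into a fresh repo and committing them as your own.
git init -q upstream
git -C upstream -c user.name=alice -c user.email=alice@example.com \
    commit -q --allow-empty -m "Upstream work by Alice"
git clone -q upstream myproject           # Alice's authorship comes along
git -C myproject log --format='%an: %s'   # original author is still credited
```

Copying the files and re-committing them would instead record you as the author of every line, which is exactly the bad-faith attribution the rule bans.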