Calibre adds AI "discussion" feature

Original link: https://lwn.net/Articles/1049886/

## Calibre adds an AI feature, drawing user backlash

Calibre, the popular ebook-management software, recently introduced a "Discuss with AI" feature (in version 8.16.0) that lets users query AI models about their books and ask for reading recommendations. Developer Kovid Goyal approved the feature quickly and envisions further AI integration, such as cover generation and text-to-speech. The feature has drawn a strong backlash, however, from users who worry about AI intruding on the reading experience and who have ethical concerns about AI training data. Many users voiced moral objections, particularly authors whose works may have been used to train AI models. Although Goyal initially dismissed these concerns, he has since accepted a patch that *hides* the feature from the interface, though the underlying AI functionality remains. Users can currently connect calibre to a variety of AI providers (including local models), but the feature requires configuration and is not especially reliable. Several forks of calibre aimed at removing the AI functionality entirely have appeared. While alternatives to calibre exist for *reading* ebooks, none matches its comprehensive management and conversion features, leaving users little choice beyond staying on older versions or accepting the AI integration. The episode highlights growing tension within the open-source community over the increasing prevalence of AI.

## Calibre adds an AI feature: mixed reactions

Calibre, the popular ebook-management software, has added an AI "discussion" feature, sparking debate among its users. Some see value in an AI companion that can discuss books, while others are concerned about intrusive AI integration and data privacy. A major point of contention is that AI is being pushed into the application even where it is unnecessary, reminiscent of earlier fads such as every website demanding its own app. Users question the need for AI inside calibre when alternatives such as ChatGPT are readily available. Concerns also include potential tracking, sharing data with large corporations, and the ethical implications of relying on possibly biased language models. Calibre does, however, offer the option of using local AI models, which addresses some of the privacy concerns. Supporters point to potential benefits such as automatic note-taking and answering questions about complex narratives. Ultimately, the feature's success will depend on user choice and on whether it can prove genuinely useful beyond simple tasks. The developer has responded to the initial pushback by providing a UI option to hide the feature, but plans to continue developing the AI integration.
Original article

By Joe Brockmeier
December 15, 2025

Version 8.16.0 of the calibre ebook-management software, released on December 4, includes a "Discuss with AI" feature that can be used to query various AI/LLM services or local models about books, and ask for recommendations on what to read next. The feature has sparked discussion among human users of calibre as well, and more than a few are upset about the intrusion of AI into the software. After much pushback, it looks as though users will get the ability to hide the feature from calibre's user interface, but LLM-driven features are here to stay and more will likely be added over time.

Amir Tehrani proposed adding an LLM query feature directly to calibre in August 2025:

I have developed and tested a new feature that integrates Google's Gemini API (which can be abstracted to any compatible LLM) directly into the Calibre E-book Viewer. My aim is to empower users with in-context AI tools, removing the need to leave the reading environment. The results: capability of instant text summarization, clarification of complex topics, grammar correction, translation, and more, enhancing the reading and research experience.

Kovid Goyal, creator and maintainer of calibre, quickly voiced approval. He dismissed the idea that it might bother some calibre users and suggested that Tehrani submit a pull request for the feature. On August 10, Tehrani submitted the patches, and Goyal later merged them into mainline after refactoring the code. He provided a description of the additional LLM features he had in mind as well:

There are likely going to be new APIs added to all backends to support things like generating covers, finding what to read next, TTS [text-to-speech], grammar and style fixing in the editor and possibly metadata download.

Goyal did promise that calibre would "never ever use any third party service without explicit opt-in".

Discuss removing the feature

It did not take long after the Discuss feature was released for users to start asking for its removal. User "msr" on the Mobileread forum started a thread to ask if there was a way to block or hide all AI features:

I generally find the AI-push to be morally repugnant (among other things, I am an author whose work has been stolen for training) and I hate to see these features creep into software I use. I have zero interest in ever using so-called AI for anything.

Goyal replied that the features do nothing unless they are enabled. "The worst you get is a few menu entries. Simply ignore them."

Other users echoed the anti-AI sentiment. "Quoth" said they would not update calibre until the feature was scrapped. "It's a thin end of a wedge and encouraging people to use these over-hyped LLMs, even though off by default." Goyal replied that it is in calibre to stay:

It's not going to be scrapped, so good bye, I guess. You are more than welcome to not use AI if you don't want to. calibre very nicely makes that easy for you by having it off by default to the extent that the AI code is not even loaded unless you enable it. What you DO NOT get to do is try to make that choice for other people.

What's added so far

The feature is displayed in the calibre user interface by default; it shows up in the View menu as "Discuss selected books with AI". The naming is unfortunate on its own. Calling the process of sending queries to an LLM provider a discussion encourages people to anthropomorphize the tools and furthers the misconception that these tools "think" in the way that people do. Whatever value the responses may have, they do not reflect actual thought.

As Goyal pointed out, though, the Discuss feature does not work until an LLM provider is configured. If a user attempts to use it without doing so, calibre displays a dialog that directs the user to configure a provider first. Each provider is supplied as a separate plugin. Currently, calibre users have a choice of commercial providers, or running models locally using LM Studio or Ollama.
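
For the curious, there is nothing exotic under the hood of a local provider: LM Studio and Ollama both expose an HTTP API on the local machine, and a "discussion" amounts to a chat-completion request against it. The following is a minimal sketch, not calibre's code, of what a query to a locally running Ollama instance looks like; the model name and prompt are placeholders:

```python
# Minimal sketch of what "using a local provider" boils down to: a plain HTTP
# request to Ollama's chat endpoint on localhost. This is not calibre's code;
# the model name ("llama3.2") and the prompt are arbitrary examples.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local port

payload = {
    "model": "llama3.2",  # any model already pulled into Ollama
    "messages": [
        {
            "role": "user",
            "content": "I just finished 'Pride and Prejudice'. "
                       "What should I read next?",
        }
    ],
    "stream": False,  # ask for a single JSON response instead of a stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The non-streaming response carries the model's reply under "message".
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["message"]["content"])
```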

The Discuss feature shows up as a plugin as well. It is located in the calibre preferences in the "User interface action" category. However, it is a plugin that cannot be disabled or removed; nor can any of the other alleged plugins in that category. It seems fair to question whether something is actually a "plugin" if it cannot be unplugged. The separate provider plugins, in the "AI provider" category, can be disabled or removed, though. The provider plugins are enabled by default, but they do nothing until a user supplies credentials of some kind.
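
For those who prefer the command line to the preferences dialog, scripts can be run inside calibre's environment with calibre-debug. The sketch below is a guess at how one might disable the provider plugins programmatically; it assumes that the long-standing helpers in calibre.customize.ui treat the new "AI provider" plugins like any other plugin category, and the category string it matches on is taken from the preferences label described above rather than from calibre's source:

```python
# Hedged sketch, not documented calibre usage: assumes the long-standing
# helpers in calibre.customize.ui apply to the new "AI provider" plugins the
# same way they apply to other plugin categories. Run inside calibre's
# environment, e.g.:  calibre-debug disable_ai_providers.py
from calibre.customize.ui import disable_plugin, initialized_plugins, is_disabled

for plugin in initialized_plugins():
    # "AI provider" is the category label shown in the preferences dialog;
    # the exact type string used internally may differ.
    if "ai provider" not in plugin.type.lower():
        continue
    if is_disabled(plugin):
        print(f"already disabled: {plugin.name} ({plugin.type})")
    else:
        disable_plugin(plugin)
        print(f"disabled: {plugin.name} ({plugin.type})")
```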

Users do not need to worry about accidentally enabling a feature that sends data off to a provider, because it is impossible to accidentally configure the plugins. For example, the GitHub AI provider requires an access token before it will work, and Google's AI provider needs an API key to function. Using a local provider requires the user to actually have LM Studio or Ollama set up, and then jump through a couple of hoops to enable them.
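
To illustrate why accidental configuration is implausible: a hosted provider is, mechanically, an authenticated HTTP endpoint, and without a token or key there is simply nothing to send. The snippet below uses the widely copied OpenAI-style request shape with placeholder names for the endpoint, model, and environment variable; it does not reflect the wire format of any particular calibre provider plugin:

```python
# Rough sketch only: a hosted "AI provider" is an authenticated HTTP endpoint,
# so nothing happens until the user goes out of their way to supply a key.
# The base URL and environment variable below are placeholders, not any
# specific provider's; the OpenAI-style request shape is for illustration.
import json
import os
import urllib.request

API_KEY = os.environ["EXAMPLE_AI_API_KEY"]      # hypothetical variable name
BASE_URL = "https://api.example-provider.test"  # placeholder endpoint

payload = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "Recommend a book like Dune."}],
}

req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Without this header -- that is, without a key -- the request is refused.
        "Authorization": f"Bearer {API_KEY}",
    },
)

with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```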

Even if a user wants to query an LLM about a book, they may encounter problems. I tried setting calibre up to use GitHub AI, but even after appearing to have successfully configured it as a provider with the token, I had no luck. I could send queries, but received no reply. I was able to get calibre working with Ollama, though the experience was not particularly compelling.

[Calibre preferences dialog and chat window]

Responses from GitHub AI or Ollama about books are of little interest to me; a model may have ingested a million or more books as it was trained, but it hasn't read a single one, nor had any life experience that could spark an insight or reaction. Thoughtful discussions of books with well-read people with real perspectives, on the other hand, would be delightful—but beyond calibre's capabilities to provide.

Hide AI

Despite dismissing complaints about the addition of AI, Goyal has grudgingly accepted a pull request to hide AI features. He said that anyone offended by a few menu entries is not worth worrying about but, "I don't particularly mind having a tweak just to hide the menu entries, but that is all it should do". He added that someone would need to supply patches to hide additional AI functionality in the future. "That someone isn't going to be me as I don't have the patience to waste my time catering to insanity."

A "remove slop" pull request from "Ember-ruby" that would have stripped out AI features from calibre was rejected without comment. The calibre forked repository with those patches may be of interest, however, to those interested in forking calibre.

At least two forks have been announced so far; one seems to have gotten no further than its name, clbre, "because the AI is stripped out". To date, the only work that has shown up in that repository is an update to the README. Xandra Granade announced rereading on December 9; that project is currently working on a fork called arcalibre, but its goals are limited to a snapshot of calibre "with all AI antifeatures removed" that can be used for future forks of calibre. No new features are planned for arcalibre.

The rereading draft charter suggests that the project will develop additional applications based on arcalibre. It is, of course, far too early to say whether the project will produce anything interesting in the long term. Any future forkers should note that the name "Excalibre" is right there for the taking.

Resistance seems futile

No doubt part of calibre's audience is pleased to see the feature; but it has proven to be an unwelcome addition for some of calibre's users. It is not surprising that those users have asked for it to be removed or changed in such a way that it can be hidden.

It has been a disappointing year overall for Linux and open-source enthusiasts who object to the seemingly relentless AI-ification of everything. It is fairly commonplace at this point to see companies shoving AI features into proprietary software whether the features actually make sense or not. However, an open-source project like calibre has no shareholders to please by ticking off the "AI inside" box, so few people would have had "adds AI" on their calibre bingo card for 2025.

An AI feature landing in calibre seems a fitting coda to the recurrent theme of AI and open source in 2025; whether users want to engage with AI or not, it is seemingly inescapable. One might wonder: if AI has come to calibre, a project with no commercial incentive to add it, is there any refuge to be had from it at all?

Bitwarden, which makes an open-source password manager and server, is now accepting AI-generated contributions, as is the KeePassXC password-manager project. Even projects like Fedora and the Linux kernel are accepting or leaning toward accepting LLM-assisted contributions; Mozilla is all-in on AI and pushing it into Firefox as well. This is not an exhaustive list of AI-friendly projects, of course; it would be exhausting to try to compile one at this point.

In most cases, though, users still have options without LLM features. When it comes to calibre, there is no alternative to turn to. Then again, there was no real alternative to calibre before it adopted "Discuss with AI", either. There are many open-source programs that handle reading ebooks; that is well-covered territory. Some, like Foliate, are arguably better than calibre at that task.

But there is no other ebook-management software (open source or otherwise) that has all of calibre's conversion features and support for exporting to such a wide variety of ebook readers. Evan Buss attempted a calibre alternative, called 22, in 2019. Buss threw in the towel after learning "ebook managers are much more difficult to get right than I had previously imagined", and maintaining compatibility with calibre "proved near impossible". Phil Denhoff started the Citadel project in late 2023. It looked like a promising calibre-compatible ebook-library manager, but its last release was in October 2024. Denhoff continues to make commits to the repository, though, so one might still hold out hope for the project.

While the lack of alternatives is frustrating for some, it is not Goyal's fault. The fact that the open-source community, to date, has not produced anything else that can fill in for calibre is not his problem. It is not his responsibility to take the program in any particular direction, nor is he obliged to entertain user complaints. Whether users love or loathe seeing calibre add LLM features, it's up to its maintainer to decide what gets in and what doesn't.

For now, the AI-objectors on Linux have a few options. One is to live with the lurking LLM features; another is to stick with calibre versions before 8.16.0. Goyal has made it easy to revert to an older version; the download.calibre-ebook.com site seems to have all prior releases of calibre going back to its pre-1.0 days. The Download for Linux page has instructions on reverting to previous versions, too. Those who get calibre from their Linux distribution may be LLM-free for some time without taking any action. Debian 13 ("trixie") users, for example, should be on the 8.5.0 branch for the remainder of the release's lifetime. Fedora 42 is still on the 8.0 branch, and Fedora 43 is on 8.14. Rawhide has 8.16.2, though, so users are likely to get the Discuss feature in Fedora 44.

The strong reaction against calibre's Discuss feature may seem more emotional than logical. It is also understandable. Books are a human endeavor, even those that are in electronic format. AI models have often been trained by plundering a corpus of books, without respect for the authors' wishes or copyright. Suggesting that readers now turn to the very technologies that seek to replace humans in order to supplement their reading experience is, for some at least, deeply offensive. It is a little puzzling that Goyal, who has catered to a large audience of book lovers for nearly 20 years, seems not to understand that.


