Original article
Original link: https://news.ycombinator.com/item?id=43753890
This Hacker News thread focuses on the challenges and potential of local large language model (LLM) inference. The original article highlights how capable local LLMs have become on laptops, but notes that finding, deploying, and downloading task-specific models is difficult because the models are so large, which hurts the user experience.

Commenters propose strategies for overcoming these obstacles. One suggests initially serving requests through an LLM vendor's API while downloading a local model in the background, to reduce long-term inference costs. Another suggests focusing on small models trained on relevant data for specific tasks, which can beat a pure API approach in constrained applications. Games are seen as promising early adopters, since per-call API costs are high and gaming devices already have GPUs.

The comments mention concrete tools such as Llamafile (for portability) and optillm (for improving inference performance) as potential solutions. The general consensus seems to be that while local LLM inference is not yet production-ready for every use case, ongoing development and optimization are rapidly improving its viability, especially for targeted applications.
- You're running a personal hosted instance. Good for experimentation and personal use, though there's a tradeoff against renting a cloud server.
- You want to run LLM inference on client machines (i.e., you aren't directly supervising it while it is running).
I'd say that the article is mostly talking about the second one. Doing the first one will get you familiar enough with the ecosystem to handle some of the issues he ran into when attempting the second (e.g., exactly which model to use). But the second has a bunch of unique constraints: you want things to just work for your users, after all.
I've done in-browser neural network stuff in the past (back when using TensorFlow.js was a reasonable default choice) and based on the way LLM trends are going I'd guess that edge device LLM will be relatively reasonable soon; I'm not quite sure that I'd deploy it in production this month but ask me again in a few.
Relatively tightly constrained applications are going to benefit more than general-purpose chatbots; pick a small model that's relatively good at your task, train it on enough of your data, and you can get a 1B or 3B model with acceptable performance, let alone the 7B ones being discussed here. It absolutely won't replace ChatGPT (though we're getting closer to replacing ChatGPT 3.5 with small models). But if you've got a specific use case that will hold still enough to deploy a model, it can definitely give you an edge over relying on the APIs.
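To see why 1B-7B models are the plausible range for client machines, a back-of-envelope memory estimate helps. This sketch uses the common approximation of parameter count times bits per weight, plus a flat 20% overhead for KV cache and runtime buffers; the overhead fraction is an illustrative assumption, not a measured number.

```python
# Rough RAM/VRAM needed to hold a model's weights locally.
# Assumed formula: params * bits_per_weight / 8, plus a fixed
# 20% overhead for KV cache and runtime buffers (illustrative).

def est_memory_gb(params_billion: float, bits_per_weight: int,
                  overhead: float = 0.20) -> float:
    """Estimated memory footprint in GB for a quantized model."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

for size in (1, 3, 7):
    for bits in (16, 8, 4):
        print(f"{size}B @ {bits}-bit: ~{est_memory_gb(size, bits):.1f} GB")
```

By this estimate a 4-bit 7B model needs roughly 4 GB, while a 4-bit 1B model fits in well under 1 GB, which is why the smaller sizes are attractive for unsupervised client-side deployment.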
I expect games to be one of the first to try this: per-player-action API costs murder per-user revenue, most of the gaming devices have some form of GPU already, and most games are shipped as apps so bundling a few more GB in there is, if not reasonable, at least not unprecedented.
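The "API costs murder per-user revenue" point can be made concrete with some arithmetic. Every number below (tokens per action, per-token prices, play time) is an assumption chosen for illustration, not real pricing or telemetry.

```python
# Illustrative per-player API cost for an LLM-driven game feature.
# All constants are assumptions for the sake of the arithmetic.

TOKENS_IN_PER_ACTION = 500    # prompt tokens per player action (assumed)
TOKENS_OUT_PER_ACTION = 200   # completion tokens per action (assumed)
PRICE_IN_PER_M = 0.15         # $ per 1M input tokens (assumed)
PRICE_OUT_PER_M = 0.60        # $ per 1M output tokens (assumed)
ACTIONS_PER_HOUR = 60         # LLM-backed actions per hour of play
HOURS_PLAYED = 20             # lifetime hours for an average player

def cost_per_action() -> float:
    return (TOKENS_IN_PER_ACTION * PRICE_IN_PER_M
            + TOKENS_OUT_PER_ACTION * PRICE_OUT_PER_M) / 1_000_000

def cost_per_player() -> float:
    return cost_per_action() * ACTIONS_PER_HOUR * HOURS_PLAYED

print(f"per action: ${cost_per_action():.6f}")
print(f"per player: ${cost_per_player():.2f}")
```

Even at these modest assumed rates the API bill scales linearly with play time forever, whereas a bundled local model is a one-time download cost, which is the economic argument for shipping the model with the game.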