Lemonade by AMD: a fast and open source local LLM server using GPU and NPU

Original link: https://lemonade-server.ai

## Lemonade: Local AI for Everyone

Lemonade is a free, open-source AI platform designed to run powerful models, such as gpt-oss-120b, directly on your PC with as little as 128 GB of RAM. It prioritizes privacy, speed, and ease of use, offering a one-minute install and a lightweight 2 MB service.

Key features include compatibility with popular AI apps via the OpenAI API, automatic configuration for your hardware (GPU and NPU), and support for multiple inference engines (llama.cpp and others). Lemonade handles a range of AI tasks through a unified API: chat, image generation, vision, transcription, and speech synthesis.

Built by the local AI community, Lemonade provides a built-in GUI for model management and cross-platform support (Windows, Linux, macOS). It continues to evolve with frequent updates and improvements.

Hacker News discussion (13 points, submitted by AbuAssar): commenter nijave asked whether anyone has compared Lemonade with ollama, noting good results a few days earlier on a 9070 XT with the latest ollama and ROCm 7.4.

Original text

Open source. Private. Ready in minutes on any PC.

Chat

What can I do with 128 GB of unified RAM?

Load up models like gpt-oss-120b or Qwen-Coder-Next for advanced tool use.

What should I tune first?

You can use --no-mmap to speed up load times and increase the context size to 64K tokens or more.

Image Generation

A pitcher of lemonade in the style of a renaissance painting

Speech

Hello, I am your AI assistant. What can I do for you today?

Built by the local AI community for every PC.

Lemonade exists because local AI should be free, open, fast, and private.

Built on the best inference engines

Works with great apps.

Lemonade is integrated in many apps and works out-of-box with hundreds more thanks to the OpenAI API standard.
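In practice, that out-of-box compatibility usually comes down to pointing an app's OpenAI client at the local server instead of api.openai.com. A minimal sketch, assuming Lemonade is listening on localhost:8000 with an /api/v1 prefix (check your install for the actual host, port, and path):

```python
import os

# Many OpenAI-compatible apps and SDKs read these environment variables.
# The URL below is an assumption about Lemonade's local endpoint; adjust
# host, port, and path prefix to match your installation.
os.environ["OPENAI_BASE_URL"] = "http://localhost:8000/api/v1"
os.environ["OPENAI_API_KEY"] = "lemonade"  # local servers generally ignore the key

print(os.environ["OPENAI_BASE_URL"])
```

Apps that follow the OpenAI SDK conventions will pick these up without code changes; others expose the same base-URL setting in their own configuration.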

Built for practical local AI workflows.

Everything from install to runtime is optimized for fast setup, broad compatibility, and local-first execution.

Native C++ Backend

Lightweight service that is only 2MB.

One Minute Install

Simple installer that sets up the stack automatically.

OpenAI API Compatible

Works with hundreds of apps out-of-box and integrates in minutes.

Auto-configures for your hardware

Configures dependencies for your GPU and NPU.

Multi-engine compatibility

Works with llama.cpp, Ryzen AI SW, FastFlowLM, and more.

Multiple Models at Once

Run more than one model at the same time.

Cross-platform

A consistent experience across Windows, Linux, and macOS (beta).

Built-in app

A GUI that lets you download, try, and switch models quickly.

One local service for every modality.

Point your app at Lemonade and get chat, vision, image gen, transcription, speech gen, and more with standard APIs.

POST /api/v1/chat/completions
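The endpoint above can be exercised with any HTTP client. A hedged sketch using only the Python standard library; the host, port, and model name are assumptions, and the request only goes out if a Lemonade server is actually running:

```python
import json
import urllib.request

# Assumed local endpoint; verify against your Lemonade install.
BASE_URL = "http://localhost:8000/api/v1"

def build_chat_request(model, messages):
    """Build an OpenAI-style chat completion payload."""
    return {"model": model, "messages": messages, "stream": False}

def chat(payload):
    """POST the payload to /chat/completions (requires a running server)."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request(
    "gpt-oss-120b",  # model named on this page; availability depends on your hardware
    [{"role": "user", "content": "Hello!"}],
)
# chat(payload)  # uncomment with Lemonade running locally
```

Because the request shape is the standard OpenAI one, the same payload works unchanged against any other OpenAI-compatible backend.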

Always improving.

Track the newest improvements and highlights from the Lemonade release stream.
