# Installing Ollama and Gemma 3B on Linux

Original link: https://byandrev.dev/en/blog/ollama-in-linux

## Ollama: Run LLMs Locally with Ease

Ollama simplifies running large language models (LLMs) such as Gemma directly on your computer. It removes the complexity of a traditional LLM setup: no more wrestling with dependencies or configuration. Ollama lets you **download pre-trained models from a searchable library** (available at [https://ollama.com/search](https://ollama.com/search)) and **run them with a single command**. For example, `ollama run gemma3:1b` downloads and runs the Gemma 3 model.

The 1B variant of Gemma stands out for its low RAM usage (1.5-2 GB) and fast response times, making it a good fit for quick tasks. Just type your prompt in the Ollama terminal to receive the generated text.

Installation is a single script: `curl -fsSL https://ollama.com/install.sh | sh`. See [https://ollama.com/download](https://ollama.com/download) for more details and installation instructions for other operating systems.

Hacker News: Installing Ollama and Gemma 3B on Linux (byandrev.dev), 9 points by byandrev, 1 hour ago.

Original article

Ollama is a tool that makes working with Large Language Models (LLMs), like Gemma 3B, much easier. Instead of having to install a mountain of dependencies and configure complex environments, Ollama simplifies the entire process.

Think of it as a personal assistant for AI that allows you to:

  • Download models: Ollama lets you quickly find and download pre-trained models.
  • Hassle-free testing: It eliminates the need to set up complicated development environments.

1. Install Ollama

Run the following command in your terminal: `curl -fsSL https://ollama.com/install.sh | sh`

Visit the official Ollama website for more information or to install it on a different operating system: https://ollama.com/download
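Once the script finishes, it is worth sanity-checking the result from the shell. A minimal sketch (the fallback message is just illustrative; `ollama --version` is the standard CLI flag):

```shell
# Verify that the ollama binary landed on your PATH after installation
if command -v ollama >/dev/null 2>&1; then
  status="installed"
  ollama --version
else
  status="not installed"
  echo "ollama not found on PATH; re-run the install script"
fi
echo "ollama is $status"
```

On most systemd-based distributions the install script also registers an `ollama` background service, so the server starts automatically.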

2. Install a model in Ollama

Ollama has a library where you can browse available models at https://ollama.com/search. In this example, I will install Gemma 3, a model capable of running even on a single CPU.

[Image: Gemma 3 model on Ollama]

Execute the following command in your terminal: `ollama run gemma3:1b`

The text after the colon (":") specifies the exact version, as models can have different variations based on size, context window, supported inputs, etc.
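The `name:tag` convention can be illustrated with plain shell parameter expansion (the splitting below is just a sketch for clarity, not part of Ollama itself):

```shell
# Split a model reference of the form NAME:TAG
model="gemma3:1b"
name="${model%%:*}"   # everything before the first colon
tag="${model#*:}"     # everything after the colon
echo "model name: $name, variant tag: $tag"
```

Other tags of the same model (for example `gemma3:4b`) select larger variants with correspondingly higher RAM requirements.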

Why use the 1B version?

Mainly for two reasons:

  1. Minimal RAM usage: It only requires about 1.5 GB to 2 GB of RAM.
  2. Instant speed: It is ideal for tasks where the response needs to be immediate.
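That RAM figure can be turned into a quick pre-flight check before downloading the model. A sketch assuming a Linux system with `/proc/meminfo` (the 2 GB threshold comes from the estimate above):

```shell
# Compare available memory against the ~2 GB that gemma3:1b needs
required_kb=$((2 * 1024 * 1024))
avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
if [ "$avail_kb" -ge "$required_kb" ]; then
  echo "enough free RAM for gemma3:1b ($((avail_kb / 1024)) MB available)"
else
  echo "RAM may be tight: only $((avail_kb / 1024)) MB available"
fi
```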

3. Enter your prompt

Type your prompt (the question or instruction you give the model), and the Ollama terminal will display the generated text.
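Besides the interactive session, the model can also be prompted non-interactively by passing the prompt as an argument to `ollama run`. A sketch, assuming Ollama is installed and the model has already been pulled (the guard keeps the script safe on machines without it):

```shell
# One-shot prompt: prints the generated text and exits
prompt="Summarize what Ollama does in one sentence."
if command -v ollama >/dev/null 2>&1; then
  ollama run gemma3:1b "$prompt"
else
  echo "ollama not installed; would run: ollama run gemma3:1b \"$prompt\""
fi
```

Inside the interactive session, type `/bye` to exit.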

[Image: Example of running a model in Ollama]
