Unsloth Studio

原始链接: https://unsloth.ai/docs/new/studio

## Unsloth Studio: Local AI Model Training and Inference

Unsloth Studio (Beta) is a new, open-source, no-code web interface designed to simplify working with AI models locally. It lets users run, train, and export open models on Mac, Windows, and Linux; even basic chat inference requires no powerful GPU.

Key features include 2x faster training, 70% less VRAM usage, support for 500+ models (text, vision, audio, embedding), and easy fine-tuning of LLMs such as Qwen3.5. Users can upload data as PDF, CSV, or JSON and build custom datasets with "Data Recipes". Real-time observability tools track training progress, and models can be exported in formats compatible with popular platforms such as llama.cpp and Ollama.

Unsloth prioritizes privacy, offering fully offline operation and secure authentication. The product is still in Beta; future updates will add support for Apple MLX, AMD, and Intel, and improve multi-GPU functionality. Google Colab notebooks are provided for quick exploration.

Hacker News: Unsloth Studio (unsloth.ai) | 42 points by brainless, 5 hours ago | 2 comments

meowokIknewit (17 minutes ago): A GUI for fine-tuning looks interesting. Hopefully this brings a lot of new custom models.

jawerty (1 hour ago): Glad to get to use this; I've been using Unsloth models for a few years now.
Related articles

Original article

Run and train AI models locally with Unsloth Studio.

Today, we’re launching Unsloth Studio (Beta): an open-source, no-code web UI for training, running and exporting open models in one unified local interface.


  • Run GGUF and safetensor models locally on Mac, Windows, Linux.

  • Train 500+ models 2x faster with 70% less VRAM (no accuracy loss)

  • Run and train text, vision, TTS audio, embedding models

  • MacOS and CPU work for Chat GGUF inference. MLX training coming soon.
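The "70% less VRAM" claim above rests largely on adapter-style training (LoRA), where only small low-rank matrices are trainable. A back-of-envelope sketch, with illustrative numbers (a 4096x4096 projection typical of a ~7B model, not Unsloth's published figures):

```python
# Why LoRA-style adapters shrink trainable state: for a weight W of shape
# (d_in, d_out), LoRA trains only A (d_in x r) and B (r x d_out).

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for one LoRA adapter pair."""
    return d_in * rank + rank * d_out

# One 4096x4096 attention projection (illustrative layer shape):
full = 4096 * 4096                      # full fine-tune: every weight trains
lora = lora_params(4096, 4096, rank=16)  # rank-16 adapter

print(f"full: {full:,} trainable params")       # 16,777,216
print(f"lora (r=16): {lora:,} trainable params")  # 131,072
print(f"ratio: {full // lora}x fewer")            # 128x
```

Optimizer state (e.g. Adam moments) scales with trainable parameters, so this reduction compounds into the memory savings the article describes.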

No-code training

Upload PDF, CSV, or JSON docs, or YAML configs, and start training instantly on NVIDIA GPUs. Unsloth's kernels optimize LoRA, FP8, full fine-tuning (FFT), and pretraining (PT) across 500+ text, vision, TTS/audio, and embedding models.

Fine-tune the latest LLMs like Qwen3.5 and NVIDIA Nemotron 3. Multi-GPU works automatically, with a new version coming.
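The article mentions starting a run from a YAML config. The actual schema is not documented here, so the fragment below is purely illustrative: every field name and value is a hypothetical sketch of what such a config might contain, not Unsloth Studio's real format.

```yaml
# Hypothetical training config -- field names are illustrative,
# not Unsloth Studio's documented schema.
model: unsloth/Qwen3.5-7B        # hypothetical model identifier
method: lora                     # lora | fp8 | fft | pt
lora_rank: 16
dataset: ./data/my_dataset.jsonl
epochs: 3
learning_rate: 2e-4
```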

Data Recipes

Data Recipes transforms your documents into usable or synthetic datasets via a graph-node workflow. Upload unstructured or structured files such as PDFs, CSVs, and JSON. Unsloth Data Recipes, powered by NVIDIA DataDesigner, automatically turns documents into your desired formats.
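A common instance of this docs-to-dataset step is converting tabular rows into chat-format training records. The sketch below shows the idea with the standard library only; the column names and message schema are assumptions (an OpenAI-style chat format), not the tool's internals.

```python
import csv, io, json

def csv_to_chat_jsonl(csv_text: str) -> list[str]:
    """Turn a CSV with 'question'/'answer' columns into chat-format
    JSONL lines, one training record per row."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        record = {"messages": [
            {"role": "user", "content": row["question"]},
            {"role": "assistant", "content": row["answer"]},
        ]}
        lines.append(json.dumps(record, ensure_ascii=False))
    return lines

raw = "question,answer\nWhat is GGUF?,A binary format for llama.cpp models.\n"
for line in csv_to_chat_jsonl(raw):
    print(line)
```

The same shape generalizes to JSON or extracted PDF text: map each source unit to a `messages` list, then write one JSON object per line.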

Observability

Gain complete visibility into and control over your training runs. Track training loss, gradient norms, and GPU utilization in real time, and customize to your liking.

You can even view the training progress on other devices like your phone.
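Under any such dashboard sits simple per-step bookkeeping: the raw metric plus a smoothed view so the curve is readable. A minimal sketch of that idea (the class and numbers are illustrative, not Unsloth's implementation):

```python
class MetricTracker:
    """Running view of a training metric: last value, exponential
    moving average (EMA), and step count."""

    def __init__(self, smoothing: float = 0.9):
        self.smoothing = smoothing
        self.ema = None
        self.last = None
        self.steps = 0

    def update(self, value: float) -> float:
        """Record one step's raw value; return the smoothed value."""
        self.last = value
        self.steps += 1
        self.ema = value if self.ema is None else (
            self.smoothing * self.ema + (1 - self.smoothing) * value)
        return self.ema

loss = MetricTracker()
for step, raw in enumerate([2.31, 2.10, 1.95, 1.88], start=1):
    print(f"step {step}: loss={raw:.2f} ema={loss.update(raw):.3f}")
```

The same tracker works for gradient norms or GPU utilization; a UI just polls these values and redraws.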

Export / Save models

Export any model, including your fine-tuned models, to safetensors or GGUF for use with llama.cpp, vLLM, Ollama, LM Studio, and more.

Unsloth Studio stores your training history, so you can revisit runs, export again, and experiment.
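Exported GGUF files start with a small fixed header, per the public GGUF specification: the magic bytes `GGUF`, then a little-endian uint32 version and uint64 tensor/metadata counts. A stdlib sketch of a quick sanity check on an export (the sample values are made up):

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed GGUF header: magic, version, and counts."""
    if data[:4] != b"GGUF":
        raise ValueError(f"not a GGUF file (magic={data[:4]!r})")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Simulated header: GGUF v3, 291 tensors, 24 metadata key/value pairs.
fake = b"GGUF" + struct.pack("<IQQ", 3, 291, 24)
print(read_gguf_header(fake))  # {'version': 3, 'tensors': 291, 'metadata_kv': 24}
```

In practice you would read the first 24 bytes of the exported file before handing it to llama.cpp or Ollama.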

Model Arena

Chat with and compare two different models, such as a base model and a fine-tuned one, to see how their outputs differ.

Just load your first GGUF/model, then the second, and voilà! Inference loads for the first model, then the second.
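Reading two responses side by side is easier when the divergences are highlighted. A small stdlib sketch of that comparison using `difflib` (the sample "model outputs" below are made up strings, not real generations):

```python
import difflib

def compare_outputs(name_a: str, out_a: str, name_b: str, out_b: str) -> str:
    """Word-level unified diff of two model responses, for a quick A/B read."""
    diff = difflib.unified_diff(
        out_a.split(), out_b.split(),
        fromfile=name_a, tofile=name_b, lineterm="")
    return "\n".join(diff)

base = "GGUF is a file format used by llama.cpp"
tuned = "GGUF is a binary file format used by llama.cpp and Ollama"
print(compare_outputs("base", base, "fine-tuned", tuned))
```

Lines prefixed `-` appear only in the first model's output, `+` only in the second, making fine-tuning's effect on phrasing easy to scan.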

Privacy first + Secure

Unsloth Studio can be used 100% offline and locally on your computer.

Its token-based authentication, including password and JWT access/refresh flows, keeps your data secure and under your control.


Unsloth Studio works on Windows, Linux, WSL and MacOS (chat only currently).

  • CPU: Unsloth still works without a GPU, but only for Chat inference.

  • Training: Works on NVIDIA GPUs: RTX 30, 40, 50, Blackwell, DGX Spark/Station etc.

  • Mac: Like CPU, chat only for now. MLX training coming very soon.

  • Coming soon: Support for Apple MLX, AMD, and Intel.

  • Multi-GPU: Works already, with a major upgrade on the way.

Windows, MacOS, Linux, WSL: