Llama 3.1 8B
Meta · 8B
Meta's versatile 8B — great quality/speed ratio
Original link: https://www.canirun.ai/
## CanIRun.ai: AI Model Compatibility Checker

CanIRun.ai helps users determine which AI models their computer can run locally. The platform provides a comprehensive, sortable list of models, ranging from tiny 0.8B-parameter models to massive 671B+-parameter options, with details such as VRAM requirements, context window sizes, and release dates.

The site categorizes models by provider (e.g., Meta, Alibaba, Google) and by task suitability (chat, code, reasoning, vision). It also lists different quantization options (Q2_K through F16), which trade off performance and quality. Each model is graded for hardware compatibility (S to F), indicating whether it will run smoothly, run only barely, or is too demanding.

Data is sourced from llama.cpp, Ollama, and LM Studio, giving a broad overview of the AI landscape and letting users explore what is possible with local AI.
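The core of such a compatibility check is mapping a model's parameter count and quantization level to a VRAM footprint. Below is a minimal sketch of that idea; the bits-per-weight figures are approximate values for common llama.cpp quantization formats, and the 20% overhead factor is an assumption for KV cache and buffers, not CanIRun.ai's actual formula.

```python
# Rough VRAM estimate for running a quantized model locally.
# Bits-per-weight figures are approximate llama.cpp values
# (an assumption; the site's real formula is not documented here).
QUANT_BITS = {
    "Q2_K": 2.6,    # heaviest compression, lowest quality
    "Q4_K_M": 4.8,  # common quality/size sweet spot
    "Q8_0": 8.5,    # near-lossless
    "F16": 16.0,    # no quantization
}

def estimate_vram_gb(params_billions: float, quant: str,
                     overhead: float = 1.2) -> float:
    """Weights footprint plus ~20% headroom for KV cache and buffers."""
    bits = QUANT_BITS[quant]
    return params_billions * bits / 8 * overhead

# Example: an 8B model at Q4_K_M lands in the 5-6 GB range,
# while the same model at F16 needs several times more.
print(f"{estimate_vram_gb(8, 'Q4_K_M'):.1f} GB")
print(f"{estimate_vram_gb(8, 'F16'):.1f} GB")
```

Comparing the estimate against available VRAM would then yield a rating like the site's S-to-F scale: comfortably below capacity rates high, near capacity rates low, and above capacity means the model cannot run.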
Find out which AI models your machine can actually run.
Estimates based on browser APIs. Actual specs may vary.
Alibaba · 9B
Multimodal Qwen 3.5 mid-size
Microsoft · 14B
Microsoft's reasoning-focused model
OpenAI · 21B
OpenAI's open-weight MoE with configurable reasoning
Mistral AI · 24B
Multimodal Mistral with vision support
Google · 27B
Google's flagship Gemma 3 model
Alibaba · 32B
Best open-source coding model at release
Alibaba · 32B
Qwen 3 flagship dense model
DeepSeek · 32B
R1 reasoning distilled into Qwen 32B — sweet spot
Meta · 70B
Best open model at 70B class
Meta · 109B
MoE with 16 experts, 17B active params
OpenAI · 117B
OpenAI's flagship open-weight MoE — 52.6% SWE-bench
Mistral AI · 123B
Dense 123B coding model — 72.2% SWE-bench Verified
DeepSeek · 671B
Massive MoE reasoning model — 37B active
DeepSeek · 685B
State-of-the-art MoE — 37B active params
Moonshot AI · 1T
1T-param MoE with 384 experts — 32B active, strong agentic coding
Alibaba · 0.8B
Ultra-tiny model for embedded and edge
Meta · 1B
Meta's smallest Llama for edge devices
Google · 1B
Google's tiny Gemma for on-device
Community · 1.1B
Ultralight model for constrained devices
Alibaba · 1.5B
Ultra-lightweight coding model
DeepSeek · 1.5B
Tiny reasoning model distilled from R1
Alibaba · 1.7B
Compact multilingual Qwen 3
Alibaba · 2B
Small multimodal Qwen 3.5
Google · 2B
Google's compact open model
Meta · 3B
Lightweight Llama for mobile and edge
HuggingFace · 3B
Lightweight multilingual reasoning
Microsoft · 3.8B
Microsoft's efficient small model with long context
Microsoft · 3.8B
Lightweight reasoning model
Alibaba · 4B
Compact Qwen 3 for general tasks
Google · 4B
Multimodal Gemma with 128K context
Alibaba · 4B
Small multimodal Qwen 3.5
Mistral AI · 7B
High-quality 7B with sliding window attention
Alibaba · 7B
Strong multilingual and coding capabilities
Alibaba · 7B
Dedicated coding model
DeepSeek · 7B
R1 reasoning distilled into Qwen 7B
Alibaba · 8B
Qwen 3 with thinking mode support
Mistral AI · 8B
Mistral's efficient 8B model
Google · 9B
Google's best mid-size open model
Zhipu AI · 9B
Multilingual model supporting 26 languages with 128K context
NVIDIA · 9B
Hybrid Mamba2 architecture for reasoning
Meta · 11B
Multimodal vision and text model
Google · 12B
Multimodal Gemma with 128K context
Mistral AI · 12B
Multilingual 12B with 128K context
Alibaba · 14B
Excellent quality for its size class
Alibaba · 14B
Strong all-rounder with thinking mode
DeepSeek · 14B
R1 reasoning distilled into Qwen 14B
Liquid AI · 24B
Hybrid MoE with convolution+attention layers — 2.3B active
Mistral AI · 24B
Coding-focused model with 256K context — 68% SWE-bench
Google · 27B
Google's largest Gemma 2 model
Alibaba · 27.8B
Flagship native multimodal Qwen 3.5
Alibaba · 30B
MoE with only 3.3B active — extremely efficient
NVIDIA · 30B
MoE with 1M context and 3B active
Alibaba · 32B
High-quality reasoning and multilingual
LG AI · 32B
Hybrid reasoning, multilingual
Allen AI · 32B
Fully open research model by Allen AI
Cohere · 35B
Optimized for retrieval-augmented generation
Alibaba · 35B
Efficient multimodal MoE with 3B active
Mistral AI · 47B
MoE with 12.9B active params
Alibaba · 72B
Alibaba's flagship open model
Alibaba · 122B
Large multimodal MoE with 10B active
Mistral AI · 141B
Large MoE with 39B active params
Alibaba · 235B
Massive MoE with 22B active — frontier quality
Alibaba · 397B
Largest multimodal Qwen 3.5 MoE
Meta · 400B
Multimodal MoE with 128 experts — 17B active, 1M context
Meta · 405B
Largest open-weight dense model by Meta
Alibaba · 480B
Largest open coding MoE — 35B active
DeepSeek · 671B
Improved V3 with hybrid thinking and tool use