Claude 3 model family

Original link: https://www.anthropic.com/news/claude-3-family

Anthropic has introduced its latest models, the Claude 3 family, which bring impressive new capabilities. The family includes three variants, Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, each designed to serve different requirements and price points in terms of cognitive ability, speed, and cost. Claude 3 Opus is the most intelligent model in the family, with outstanding performance on complex tasks and near-human levels of comprehension and fluency. Priced at $15 per million input tokens and $75 per million output tokens, it balances capability and cost for enterprise applications, task automation, and advanced scientific analysis. Its notable strengths include a longer context window, a lower refusal rate, improved accuracy, and reduced bias relative to earlier versions. Claude 3 Sonnet offers a strong middle ground between intelligence and speed, designed for enterprise-scale deployments such as data processing and strategic analysis; at $3 per million input tokens (versus $15 for Opus), it delivers comparable intelligence at a much lower cost than peer models. Finally, Claude 3 Haiku is built to handle simple queries and requests quickly and efficiently, processing large volumes of requests almost instantly. At $0.25 per million input tokens and $1.25 per million output tokens, it is an excellent choice for applications focused on customer interactions, cost savings, and optimization.

These models draw on Constitutional AI principles and offer many potential benefits for businesses looking to use their capabilities. All Claude 3 models provide stronger vision capabilities, including interpretation of complex charts, graphs, and images. They also refuse harmless prompts far less often thanks to a more nuanced understanding of requests, and they incorporate safety measures intended to monitor and reduce bias, supporting responsible scaling practices. Overall, the new models mark a significant advance in AI and machine learning, opening up many opportunities across key industries worldwide. According to Anthropic, the company plans to keep rolling out new features and updates to the family.

Judging from the discussion, LLMs (large language models) have grown steadily more accurate and capable in recent years. GPT-4 from OpenAI and Claude 3 have driven progress in LLM systems and applications. One notable improvement is the higher accuracy of generated responses, especially in complex programming scenarios, which leads to more practical applications. LLMs have enabled a range of useful developer tools, such as Copilot, Codeium, and Double, giving developers more flexibility and capability. Limitations remain, however, particularly when applying AI to financial calculations or forecasting, where extra safeguards are needed to reduce potential errors. Despite the progress, limited training data is still a challenge, and models occasionally struggle with specific concepts or ideas. Even so, the ongoing effort to improve these models points to promising developments ahead. Some commenters also prefer editor extensions, for example in Emacs, over built-in AI features. Overall, the evolution of LLMs shows great promise for transforming many industries through advanced AI techniques.

Original article
Claude 3

Today, we're announcing the Claude 3 model family, which sets new industry benchmarks across a wide range of cognitive tasks. The family includes three state-of-the-art models in ascending order of capability: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. Each successive model offers increasingly powerful performance, allowing users to select the optimal balance of intelligence, speed, and cost for their specific application.

Opus and Sonnet are now available to use in claude.ai and the Claude API which is now generally available in 159 countries. Haiku will be available soon.

[Figure: Claude 3 model family]

A new standard for intelligence

Opus, our most intelligent model, outperforms its peers on most of the common evaluation benchmarks for AI systems, including undergraduate level expert knowledge (MMLU), graduate level expert reasoning (GPQA), basic mathematics (GSM8K), and more. It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence.

All Claude 3 models show increased capabilities in analysis and forecasting, nuanced content creation, code generation, and conversing in non-English languages like Spanish, Japanese, and French.

Below is a comparison of the Claude 3 models to those of our peers on multiple benchmarks [1] of capability:

Near-instant results

The Claude 3 models can power live customer chats, auto-completions, and data extraction tasks where responses must be immediate and in real-time.

Haiku is the fastest and most cost-effective model on the market for its intelligence category. It can read an information- and data-dense research paper on arXiv (~10k tokens) with charts and graphs in less than three seconds. Following launch, we expect to improve performance even further.

For the vast majority of workloads, Sonnet is 2x faster than Claude 2 and Claude 2.1 with higher levels of intelligence. It excels at tasks demanding rapid responses, like knowledge retrieval or sales automation. Opus delivers similar speeds to Claude 2 and 2.1, but with much higher levels of intelligence.

Strong vision capabilities

The Claude 3 models have sophisticated vision capabilities on par with other leading models. They can process a wide range of visual formats, including photos, charts, graphs and technical diagrams. We’re particularly excited to provide this new modality to our enterprise customers, some of whom have up to 50% of their knowledge bases encoded in various formats such as PDFs, flowcharts, or presentation slides.
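As an illustration for developers, an image can be passed to the model together with a text question through the Messages API. The sketch below assumes the anthropic Python SDK, a local file named quarterly_chart.png, and an ANTHROPIC_API_KEY set in the environment; it is a minimal example, not official sample code.

    import base64
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Encode a local chart image so it can be sent as a base64 content block.
    with open("quarterly_chart.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
                {"type": "text", "text": "Summarize the trend shown in this chart."},
            ],
        }],
    )
    print(message.content[0].text)  # the model's description of the chart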

Fewer refusals

Previous Claude models often made unnecessary refusals that suggested a lack of contextual understanding. We’ve made meaningful progress in this area: Opus, Sonnet, and Haiku are significantly less likely to refuse to answer prompts that border on the system’s guardrails than previous generations of models. As shown below, the Claude 3 models show a more nuanced understanding of requests, recognize real harm, and refuse to answer harmless prompts much less often.

Improved accuracy

Businesses of all sizes rely on our models to serve their customers, making it imperative for our model outputs to maintain high accuracy at scale. To assess this, we use a large set of complex, factual questions that target known weaknesses in current models. We categorize the responses into correct answers, incorrect answers (or hallucinations), and admissions of uncertainty, where the model says it doesn’t know the answer instead of providing incorrect information. Compared to Claude 2.1, Opus demonstrates a twofold improvement in accuracy (or correct answers) on these challenging open-ended questions while also exhibiting reduced levels of incorrect answers.
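To make the three-way scoring concrete, here is a minimal sketch of how responses might be bucketed into correct answers, incorrect answers, and admissions of uncertainty. The grade() helper and its keyword matching are hypothetical simplifications, not Anthropic's actual evaluation harness.

    from collections import Counter

    def grade(response: str, gold_answer: str) -> str:
        """Bucket a model response as 'correct', 'incorrect', or 'uncertain'.

        Hypothetical grader: a real evaluation would use more careful matching.
        """
        text = response.lower()
        if any(p in text for p in ("i don't know", "i am not sure", "i'm not sure")):
            return "uncertain"
        return "correct" if gold_answer.lower() in text else "incorrect"

    def tally(responses, gold_answers):
        return Counter(grade(r, g) for r, g in zip(responses, gold_answers))

    # Example: one factual answer and one hedged admission of uncertainty.
    print(tally(
        ["The capital of Australia is Canberra.", "I don't know the answer to that."],
        ["Canberra", "1969"],
    ))  # -> 1 correct, 1 uncertain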

In addition to producing more trustworthy responses, we will soon enable citations in our Claude 3 models so they can point to precise sentences in reference material to verify their answers.

Long context and near-perfect recall

The Claude 3 family of models will initially offer a 200K context window upon launch. However, all three models are capable of accepting inputs exceeding 1 million tokens and we may make this available to select customers who need enhanced processing power.

To process long context prompts effectively, models require robust recall capabilities. The 'Needle In A Haystack' (NIAH) evaluation measures a model's ability to accurately recall information from a vast corpus of data. We enhanced the robustness of this benchmark by using one of 30 random needle/question pairs per prompt and testing on a diverse crowdsourced corpus of documents. Claude 3 Opus not only achieved near-perfect recall, surpassing 99% accuracy, but in some cases, it even identified the limitations of the evaluation itself by recognizing that the "needle" sentence appeared to be artificially inserted into the original text by a human.
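For readers unfamiliar with the benchmark, the sketch below shows one way a needle-in-a-haystack prompt can be constructed: a "needle" sentence is inserted at a random position in a long corpus, and the model is asked a question only that sentence can answer. The needle/question pair and helper names here are hypothetical; the published evaluation samples from 30 such pairs over a crowdsourced corpus.

    import random

    def build_niah_prompt(haystack_docs, needle, question, seed=0):
        """Insert the needle sentence at a random position in the corpus and
        append a question that can only be answered from that sentence."""
        rng = random.Random(seed)
        sentences = [s for doc in haystack_docs for s in doc.split(". ") if s]
        sentences.insert(rng.randrange(len(sentences) + 1), needle)
        context = ". ".join(sentences)
        return f"{context}\n\nUsing only the document above, answer: {question}"

    # Hypothetical needle/question pair; the real evaluation uses one of 30 pairs per prompt.
    needle = "The rare blue orchid blooms only on the third Tuesday of March."
    question = "When does the rare blue orchid bloom?"
    prompt = build_niah_prompt(["Long unrelated document text", "More filler prose"], needle, question)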

Responsible design

We’ve developed the Claude 3 family of models to be as trustworthy as they are capable. We have several dedicated teams that track and mitigate a broad spectrum of risks, ranging from misinformation and CSAM to biological misuse, election interference, and autonomous replication skills. We continue to develop methods such as Constitutional AI that improve the safety and transparency of our models, and have tuned our models to mitigate against privacy issues that could be raised by new modalities.

Addressing biases in increasingly sophisticated models is an ongoing effort and we’ve made strides with this new release. As shown in the model card, Claude 3 shows less bias than our previous models according to the Bias Benchmark for Question Answering (BBQ). We remain committed to advancing techniques that reduce biases and promote greater neutrality in our models, ensuring they are not skewed towards any particular partisan stance.

While the Claude 3 model family has advanced on key measures of biological knowledge, cyber-related knowledge, and autonomy compared to previous models, it remains at AI Safety Level 2 (ASL-2) per our Responsible Scaling Policy. Our red teaming evaluations (performed in line with our White House commitments and the 2023 US Executive Order) have concluded that the models present negligible potential for catastrophic risk at this time. We will continue to carefully monitor future models to assess their proximity to the ASL-3 threshold. Further safety details are available in the Claude 3 model card.

Easier to use

The Claude 3 models are better at following complex, multi-step instructions. They are particularly adept at adhering to brand voice and response guidelines, and developing customer-facing experiences our users can trust. In addition, the Claude 3 models are better at producing popular structured output in formats like JSON—making it simpler to instruct Claude for use cases like natural language classification and sentiment analysis.
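As a sketch of the structured-output pattern described above, the example below asks Sonnet for a JSON-only sentiment classification and parses the reply. The prompt wording is an assumption, and production code would need to handle replies that are not valid JSON.

    import json
    import anthropic

    client = anthropic.Anthropic()

    prompt = (
        "Classify the sentiment of the following review as positive, negative, or neutral. "
        'Respond with JSON only, in the form {"sentiment": "...", "confidence": <0-1>}.\n\n'
        "Review: The battery life is great, but the screen scratches far too easily."
    )

    message = client.messages.create(
        model="claude-3-sonnet-20240229",
        max_tokens=200,
        messages=[{"role": "user", "content": prompt}],
    )

    result = json.loads(message.content[0].text)  # assumes the model complied with "JSON only"
    print(result["sentiment"])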

Model details

Claude 3 Opus is our most intelligent model, with best-in-market performance on highly complex tasks. It can navigate open-ended prompts and sight-unseen scenarios with remarkable fluency and human-like understanding. Opus shows us the outer limits of what’s possible with generative AI.

Cost (per million tokens): $15 input | $75 output
Context window: 200K*
Potential uses:
  • Task automation: plan and execute complex actions across APIs and databases, interactive coding
  • R&D: research review, brainstorming and hypothesis generation, drug discovery
  • Strategy: advanced analysis of charts & graphs, financials and market trends, forecasting
Differentiator: Higher intelligence than any other model available.

*1M tokens available for specific use cases, please inquire.

Claude 3 Sonnet strikes the ideal balance between intelligence and speed—particularly for enterprise workloads. It delivers strong performance at a lower cost compared to its peers, and is engineered for high endurance in large-scale AI deployments.

Cost (per million tokens): $3 input | $15 output
Context window: 200K
Potential uses:
  • Data processing: RAG or search & retrieval over vast amounts of knowledge
  • Sales: product recommendations, forecasting, targeted marketing
  • Time-saving tasks: code generation, quality control, parse text from images
Differentiator: More affordable than other models with similar intelligence; better for scale.

Claude 3 Haiku is our fastest, most compact model for near-instant responsiveness. It answers simple queries and requests with unmatched speed. Users will be able to build seamless AI experiences that mimic human interactions.

Cost (per million tokens): $0.25 input | $1.25 output
Context window: 200K
Potential uses:
  • Customer interactions: quick and accurate support in live interactions, translations
  • Content moderation: catch risky behavior or customer requests
  • Cost-saving tasks: optimized logistics, inventory management, extract knowledge from unstructured data
Differentiator: Smarter, faster, and more affordable than other models in its intelligence category.
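For a rough sense of how the three price points compare on a single workload, the back-of-the-envelope calculation below uses the per-million-token prices listed above; the dictionary keys are informal labels, not API model identifiers.

    # Per-million-token prices taken from the tables above: (input $, output $).
    PRICES = {
        "Claude 3 Opus":   (15.00, 75.00),
        "Claude 3 Sonnet": (3.00,  15.00),
        "Claude 3 Haiku":  (0.25,   1.25),
    }

    def estimate_cost(model, input_tokens, output_tokens):
        """Estimate the dollar cost of one request for the given model."""
        input_price, output_price = PRICES[model]
        return input_tokens / 1e6 * input_price + output_tokens / 1e6 * output_price

    # Example workload: 50k input tokens and 2k output tokens per request.
    for model in PRICES:
        print(f"{model}: ${estimate_cost(model, 50_000, 2_000):.4f} per request")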

Model availability

Opus and Sonnet are available to use today in our API, which is now generally available, enabling developers to sign up and start using these models immediately. Haiku will be available soon. Sonnet is powering the free experience on claude.ai, with Opus available for Claude Pro subscribers.
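Getting started with the API is a short script. The sketch below assumes the anthropic Python SDK and the model identifiers used at launch; check the current documentation for exact names.

    import anthropic

    client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

    message = client.messages.create(
        model="claude-3-opus-20240229",  # or "claude-3-sonnet-20240229"
        max_tokens=1024,
        messages=[{"role": "user",
                   "content": "Summarize the Claude 3 announcement in two sentences."}],
    )
    print(message.content[0].text)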

Sonnet is also available today through Amazon Bedrock and in private preview on Google Cloud’s Vertex AI Model Garden—with Opus and Haiku coming soon to both.

Smarter, faster, safer

We do not believe that model intelligence is anywhere near its limits, and we plan to release frequent updates to the Claude 3 model family over the next few months. We're also excited to release a series of features to enhance our models' capabilities, particularly for enterprise use cases and large-scale deployments. These new features will include Tool Use (aka function calling), interactive coding (aka REPL), and more advanced agentic capabilities.
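Tool Use had not shipped when this post was published, so the sketch below is only an illustration of what a tool definition might look like, following the JSON-schema style common to function-calling interfaces; the field names are assumptions, not a confirmed Claude API.

    # Illustrative only: field names follow common function-calling conventions,
    # not a confirmed Claude API shape.
    get_weather_tool = {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
            },
            "required": ["city"],
        },
    }

    # In a function-calling flow, the client passes such definitions with the
    # conversation, the model emits a structured call such as
    # {"name": "get_weather", "input": {"city": "Paris"}}, the client runs the
    # function locally, and the result is returned for the model to use.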

As we push the boundaries of AI capabilities, we’re equally committed to ensuring that our safety guardrails keep apace with these leaps in performance. Our hypothesis is that being at the frontier of AI development is the most effective way to steer its trajectory towards positive societal outcomes.

We’re excited to see what you create with Claude 3 and hope you will give us feedback to make Claude an even more useful assistant and creative companion. To start building with Claude, visit anthropic.com/claude.
