Everything we announced at our first LlamaCon

原始链接: https://ai.meta.com/blog/llamacon-llama-news/?_fb_noscript=1

LlamaCon showcased new tools and programs to empower Llama developers. The new Llama API, available as a limited preview, combines the ease of use of closed models with the flexibility of open source, including one-click API key creation, interactive playgrounds, and the ability to fine-tune custom models on Llama 3.3 8B. Collaborations with Cerebras and Groq enable faster inference. Llama Stack integrations are expanding, including NVIDIA NeMo microservices, to simplify application deployment. Security is a stated priority, with new Llama protection tools such as Llama Guard 4 and the Llama Defenders Program supporting secure AI application development. The second round of Llama Impact Grants awarded more than $1.5 million to 10 international recipients, supporting innovative Llama-based solutions across fields including civic services, pharmacy operations, and education. Llama emphasizes affordability, accessibility, and the freedom to build without proprietary restrictions, aiming to drive AI-powered innovation and positive change worldwide.

Meta held its first LlamaCon, showcasing updates to its Llama AI models and related tooling. Some praised Meta for persisting with AI research despite metaverse setbacks, while others found the announcements disappointing, especially following Alibaba's release of Qwen3. Discussion centered on the competitive landscape for large language models (LLMs), with some users highlighting Llama's practical applications and Meta's potential move into AI cloud services. Criticisms included the lack of truly open-source elements (such as training data), restricted access to certain models, and questions about Llama's advantages over competitors. The conversation also touched on local LLM usage, specialized models, and Meta's potential role in the smart home market. Some commenters characterized Meta's approach as "open-source washing" and argued that its licensing model could backfire if users, worried about enforcement, switch to genuinely open-source alternatives.

Original article

Takeaways

  • We’re celebrating the first-ever LlamaCon by sharing some exciting new tools that will make it even easier to build with Llama and create custom models to fit your use cases.
  • The Llama API, launching as a limited preview, combines the best features of closed models with open-source flexibility, offering easy one-click API key creation and interactive playgrounds to explore different Llama models.
  • We’re also sharing new Llama Protection Tools and announcing the Llama Defenders Program, where select trusted partners can access AI-enabled tools to help them evaluate the security of their systems to safeguard against potential threats.
  • We’re announcing the 10 international recipients of the second Llama Impact Grants. With over $1.5 million USD awarded, these grants support companies, startups, and universities using Llama to drive transformative change.

LlamaCon has officially kicked off. Our inaugural event brings together developers from around the world who all have one thing in common: They love building with Llama. It’s been a little more than two years since we launched Llama. During that time, we’ve surpassed one billion downloads and, most importantly, established Llama as a leader in the open source AI ecosystem. Developers, startups, governments, and enterprises are finding success with Llama by leveraging its capabilities to drive innovation, improve efficiency, and solve complex problems.

Of course, we couldn’t bring together a group of developers without also sharing some open source tools that will make it easier for them to explore Llama models, build faster, and use the latest tools to defend and protect. Here’s a look at what we’re announcing today and how you can get started with our newest releases.

Introducing Llama API in preview: The best features of closed model APIs, with the flexibility of open source

We want to make it even easier for you to quickly start building with Llama, while also giving you complete control over your models and weights without being locked into an API. Today, we’re announcing Llama API—our upcoming developer platform for Llama application development, which is available as a limited free preview.

Llama API provides easy one-click API key creation and interactive playgrounds to explore different Llama models, including the Llama 4 Scout and Llama 4 Maverick models we announced earlier this month. When you’re ready to build your application, we provide a lightweight SDK in both Python and TypeScript. Llama API is also compatible with the OpenAI SDK, making it easy to convert existing applications.
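Because the API follows OpenAI SDK conventions, a minimal sketch of a chat call could look like the following; the base URL and model identifier are illustrative placeholders rather than documented values:

```python
# Minimal sketch: calling a Llama model through the OpenAI-compatible SDK path.
# The base URL, API key, and model identifier are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.llama.example/v1",  # hypothetical Llama API endpoint
    api_key="YOUR_LLAMA_API_KEY",
)

response = client.chat.completions.create(
    model="llama-4-scout",  # hypothetical model name
    messages=[{"role": "user", "content": "Summarize the Llama API in one sentence."}],
)
print(response.choices[0].message.content)
```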


We know that it’s also important to have access to tools that help you customize and run your models more efficiently. As part of this release, we’re sharing tools for fine-tuning and evaluation in our new API, where you can tune your own custom versions of our new Llama 3.3 8B model. We’re sharing this capability to help you reduce costs while also working toward increased speed and accuracy. You can generate data, train on it, and then use our evaluations suite to easily test the quality of your new model. Making evaluations more accessible and easier to run will help you move from gut feelings to data, ensuring your models perform well enough to meet your needs.

The security and privacy of your content and data is our top priority. We do not use your prompts or model responses to train our AI models. When you’re ready, the models you build on the Llama API are yours to take with you wherever you want to host them, and we don’t keep them locked on our servers. We’re introducing these features to select customers, with plans for a broader rollout in the coming weeks and months. This will open up new possibilities for developers to build custom models for every kind of use case.
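To make the generate-tune-evaluate loop concrete, here is a purely hypothetical sketch; the endpoint paths, field names, and identifiers are invented for illustration and are not taken from the preview's actual API:

```python
# Hypothetical sketch of the fine-tune-then-evaluate loop. Every endpoint path,
# field name, and identifier below is invented for illustration; the preview's
# real API surface may look quite different.
import requests

BASE = "https://api.llama.example/v1"             # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_LLAMA_API_KEY"}

# 1) Start a fine-tuning job against a hypothetical Llama 3.3 8B tuning endpoint.
job = requests.post(
    f"{BASE}/fine_tuning/jobs",
    headers=HEADERS,
    json={"base_model": "llama-3.3-8b", "training_file": "file-abc123"},
).json()

# 2) Once the job finishes, score the tuned model with a hypothetical evaluation suite.
evaluation = requests.post(
    f"{BASE}/evaluations",
    headers=HEADERS,
    json={"model": job["fine_tuned_model"], "suite": "my-quality-checks"},
).json()

print(evaluation["scores"])
```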

This preview is just step one for the Llama API. We’re excited to have more people building with our API and will use their feedback to help us iterate on future versions. If you’re interested in applying for one of the early limited spots for our free preview, please fill out this form.

Fast inference with Llama API

We’re excited to announce a collaboration with Cerebras and Groq that will help empower more Llama developers to achieve faster inference speeds using the Llama API. Early experimental access to Llama 4 models powered by Cerebras and Groq are now available by request, offering a convenient way for developers to experiment and prototype use cases before scaling with their chosen vendor. By simply selecting the Cerebras or Groq model names in the API, developers can leverage this capability and enjoy a streamlined experience with all usage tracked in one location. This collaboration reinforces our commitment to fostering a diverse ecosystem that offers flexibility and choice, and we look forward to expanding partnerships with additional providers to bring even more options to build on top of Llama.
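A hedged sketch of what that selection could look like, reusing the OpenAI-compatible call from the earlier example; the accelerator-specific model names here are assumptions, not documented identifiers:

```python
# Sketch: the announcement describes choosing accelerated inference by model name.
# The identifiers below are assumptions, not documented values.
from openai import OpenAI

client = OpenAI(base_url="https://api.llama.example/v1", api_key="YOUR_LLAMA_API_KEY")

for model_name in ("llama-4-scout-cerebras", "llama-4-scout-groq"):
    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": "Reply with a single word."}],
    )
    print(model_name, "->", response.choices[0].message.content)
```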

New Llama Stack integrations

We heard from developers that we need to make it easier to deploy applications using different service providers. Last year, we collaborated with partners to build Llama Stack distributions for their downstream enterprise clients. This year, we’re expanding those collaborations, including a recently announced integration of Llama Stack with NVIDIA NeMo microservices, and working closely with partners like IBM, Red Hat, Dell Technologies, and others on new integrations that will be announced soon. Alongside our partners, we envision Llama Stack as the industry standard for enterprises looking to seamlessly deploy production-grade turnkey AI solutions.
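As an illustration, a minimal sketch using the open source llama-stack-client Python package might look like the following; the server port, model identifier, and exact response shape are assumptions that can vary across Llama Stack versions and distributions:

```python
# Minimal sketch: querying a locally running Llama Stack distribution.
# The port, model identifier, and method names follow the llama-stack-client
# package's general usage but may differ between releases.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local distro port

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.3-8B-Instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "What does Llama Stack standardize?"}],
)
print(response)  # inspect the full response object rather than assuming its shape
```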

New Llama Protections and security for the open source community

We’re committed to providing developers with the best tools and resources to build secure AI applications. Today, we’re releasing new Llama protection tools for the open source community, including Llama Guard 4, LlamaFirewall, and Llama Prompt Guard 2. We’re also sharing updates to help organizations evaluate the efficacy of AI systems in security operations with CyberSecEval 4, as well as announcing the Llama Defenders Program for select partners. As more capable AI models become available, we believe this is an important effort to improve the robustness of software systems. Read more about our latest open source protection tools and AI security advancements.
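As a rough illustration of the usage pattern, Llama Guard releases are typically run as chat models that emit a safe/unsafe verdict for a conversation. The sketch below uses Hugging Face transformers; the model identifier and the generic causal-LM loading path are assumptions (Llama Guard 4 may require a different model class or processor), so check the official model card before relying on it:

```python
# Hedged sketch of the Llama Guard usage pattern: classify a conversation as
# safe or unsafe. The model id and generic causal-LM loading path are assumptions;
# consult the official model card for the exact class and prompt format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-4-12B"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

conversation = [{"role": "user", "content": "How do I pick a lock?"}]
input_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=32)
verdict = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(verdict)  # Guard models reply with "safe" or "unsafe" plus violated category codes
```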

Meet the Llama Impact Grant recipients

The Llama Impact Grants are aimed at fostering innovation and creating economic opportunities through open source AI. Today, we’re excited to announce the 10 international recipients of the second Llama Impact Grants. With over $1.5 million USD awarded, these grants support companies, startups, and universities using Llama to drive transformative change. This year’s recipients showcase a diverse range of solutions, including E.E.R.S. from the US, which uses a Llama-enabled chatbot to enhance public access to civic services, and Doses AI in the UK, which transforms pharmacy operations with real-time error detection. Other notable winners include Solo Tech, which provides offline AI support in rural US communities, and FoondaMate, a multilingual study tool that aids millions of students across Africa and beyond. Read more about the Llama Impact Grant recipients.

The future is open source

We’re proud to be leaders in open source AI and have our Llama ecosystem at the forefront of this revolution. With Llama, developers and enterprises have the freedom to build whatever they want, without being tied down by proprietary systems or locked-in contracts. This flexibility, combined with its portability and accessibility, makes Llama the go-to choice for those looking to harness the power of AI.

We’re committed to being a long-term partner for enterprises and developers and providing a seamless transition path from closed models. Llama is affordable, easy to use, and enables more people to access the benefits of AI regardless of their technical expertise or hardware resources. We believe in the potential of AI to transform industries and improve lives, which is why we’re excited to continue supporting the growth and development of the Llama ecosystem for the benefit of all. We can’t wait to see what you’ll build next.
