Meta's Omnilingual MT for 1,600 Languages

Original link: https://ai.meta.com/research/publications/omnilingual-mt-machine-translation-for-1600-languages/?_fb_noscript=1

## Omnilingual Machine Translation: Covering 1,600 Languages

Current machine translation (MT) systems, though improved by large language models (LLMs), still struggle with language coverage, leaving most of the world's 7,000 languages behind, especially those that are endangered or marginalized. The researchers introduce **Omnilingual Machine Translation (OMT)**, a system supporting more than 1,600 languages, a major step forward.

OMT achieves this scale through a comprehensive data strategy that combines existing resources with newly created datasets, such as manually curated bitext and synthetic data. Two model architectures are explored: **OMT-LLaMA** (decoder-only) and **OMT-NLLB** (encoder-decoder). Notably, OMT models with 1B to 8B parameters *outperform* a 70B LLM baseline, demonstrating the advantage of specialization.

Importantly, OMT substantially improves *generation* for previously undersupported languages, not just understanding. Rigorous evaluation uses new datasets (BOUQuET, Met-BOUQuET) and metrics, including reference-free quality estimation and toxicity analysis. The project's datasets and leaderboard are publicly available, fostering further progress toward truly omnilingual translation.
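The synthetic-data side of the strategy above can be illustrated with a minimal backtranslation sketch: monolingual target-language text is translated back into the source language, and the synthetic source is paired with the authentic target as training bitext. The `reverse_translate` function here is a toy stand-in for a real target-to-source MT model, not part of OMT itself.

```python
# Minimal sketch of synthetic backtranslation, assuming a toy
# target->source translator; a real pipeline would call an MT model.

def reverse_translate(sentence: str) -> str:
    # Toy stand-in: word-by-word lookup instead of a real MT model.
    lexicon = {"bonjour": "hello", "monde": "world"}
    return " ".join(lexicon.get(tok, tok) for tok in sentence.split())

def backtranslate(monolingual_target: list[str]) -> list[tuple[str, str]]:
    """Build (synthetic_source, authentic_target) training pairs."""
    return [(reverse_translate(t), t) for t in monolingual_target]

pairs = backtranslate(["bonjour monde"])
print(pairs)  # [('hello world', 'bonjour monde')]
```

The key point of the technique is that the target side remains fluent, human-written text; only the source side is synthetic, which is why backtranslation is effective for improving generation into low-resource languages.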


Original Abstract

Advances made through No Language Left Behind (NLLB) have demonstrated that high-quality machine translation (MT) can scale to 200 languages. More recently, Large Language Models (LLMs) have been adopted for MT, improving quality but not necessarily extending language coverage. Current systems remain constrained by limited coverage and a persistent generation bottleneck: while crosslingual transfer enables models to understand many undersupported languages to some degree, they often cannot generate them reliably, leaving most of the world's 7,000 languages, especially endangered and marginalized ones, outside the reach of modern MT. Early explorations in extreme scaling offered promising proofs of concept but did not yield sustained solutions. We present Omnilingual Machine Translation (OMT), the first MT system supporting more than 1,600 languages. This scale is enabled by a comprehensive data strategy that integrates large public multilingual corpora with newly created datasets, including manually curated MeDLEY bitext, synthetic backtranslation, and mined bitext, substantially expanding coverage across long-tail languages, domains, and registers. To ensure both reliable and expansive evaluation, we combine standard metrics with a suite of evaluation artifacts: the BLASER 3 reference-free quality estimation model, the OmniTOX toxicity classifier, the BOUQuET dataset (a newly created, largest-to-date multilingual evaluation collection, built from scratch and manually extended across a wide range of linguistic families), and the Met-BOUQuET dataset (faithful multilingual quality estimation at scale). We explore two ways of specializing an LLM for machine translation: as a decoder-only model (OMT-LLaMA) or as a module in an encoder–decoder architecture (OMT-NLLB). The former is built on LLaMA3, with multilingual continual pretraining and retrieval-augmented translation for inference-time adaptation.
The latter is built on top of a multilingual aligned embedding space (OmniSONAR, itself also based on LLaMA3) and introduces a training methodology that can exploit non-parallel data, allowing the decoder-only continual-pretraining data to be incorporated into the training of an encoder–decoder architecture. Notably, all of our 1B to 8B parameter models match or exceed the MT performance of a 70B LLM baseline, revealing a clear specialization advantage and enabling strong translation quality in low-compute settings. Moreover, our evaluation of English-to-1,600 translations further shows that while baseline models can interpret undersupported languages, they frequently fail to generate them with meaningful fidelity; OMT-LLaMA models substantially expand the set of languages for which coherent generation is feasible. Additionally, OMT models improve in cross-lingual transfer, coming close to solving the "understanding" part of the MT puzzle for the 1,600 languages evaluated. Beyond strong out-of-the-box performance, we find that finetuning and retrieval-augmented generation offer additional pathways to improve quality for a given subset of languages when targeted data or domain knowledge is available. Our leaderboard and main human-created evaluation datasets (BOUQuET and Met-BOUQuET) are freely available and evolving dynamically toward omnilinguality.
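The retrieval-augmented translation mentioned above can be sketched as follows: at inference time, the most similar bitext pairs are retrieved from a datastore and prepended to the prompt as few-shot examples. The token-overlap similarity and the prompt format here are illustrative assumptions, not the paper's actual retrieval method.

```python
# Minimal sketch of retrieval-augmented translation prompting,
# assuming a toy Jaccard similarity over tokens; a real system would
# use learned multilingual sentence embeddings for retrieval.

def similarity(a: str, b: str) -> float:
    """Token-level Jaccard similarity (illustrative retriever)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)

def build_prompt(source: str, datastore: list[tuple[str, str]], k: int = 2) -> str:
    """Retrieve the k nearest bitext pairs and format a few-shot prompt."""
    nearest = sorted(datastore, key=lambda p: similarity(source, p[0]), reverse=True)[:k]
    shots = "\n".join(f"Source: {s}\nTarget: {t}" for s, t in nearest)
    return f"{shots}\nSource: {source}\nTarget:"

store = [("good morning", "bonjou"), ("thank you", "mèsi"), ("see you", "n a wè")]
prompt = build_prompt("good morning friend", store, k=1)
```

Because the retrieved examples are supplied at inference time, this kind of adaptation needs no parameter updates, which is what makes it attractive for quickly adding domain or language knowledge.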

Download the Paper
