Original link: https://github.com/unslothai/unsloth
The provided README describes Unsloth, an AI research organization specializing in fast fine-tuning of deep learning models on large-scale text datasets. Their main focus is local QLoRA fine-tuning using PyTorch, OpenAI's Triton, xFormers, and other related libraries. They provide installation instructions for Linux distributions and for Python users who want to adopt the technology through the pip package channel. Reported performance figures include a 50% reduction in memory usage, 4x to 8x speedups across different parameter counts, a 6.5x to 18.1x reduction in peak memory consumption on two Tesla T4 GPUs, and support for more than 7 languages including Japanese and Simplified Chinese. They also provide documentation resources such as tutorials, examples, and starred repositories for interested parties. Additionally, they have published several academic papers based on the local QLoRA weight methodology used to train and fine-tune state-of-the-art pre-trained models, leading to significant efficiency improvements, especially when working with larger-scale datasets. There are current limitations that still need to be addressed, particularly concerning the integration of non-LLM models into the framework. Nevertheless, these accomplishments position them favorably among comparable organizations in the field, such as Hugging Face, while providing affordable alternatives for small businesses and individuals seeking similar results.
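To make the QLoRA workflow mentioned above concrete, here is a minimal, non-authoritative sketch of a local 4-bit fine-tuning setup of the kind the README describes. It assumes the `FastLanguageModel` entry point documented in the repository's quick-start; the checkpoint name and exact parameter names are illustrative assumptions and may differ between versions.

```python
# Install (package name assumed): pip install unsloth
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model. QLoRA keeps the base weights frozen
# in 4-bit precision and trains only small LoRA adapters on top, which is
# where the large memory savings come from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # example checkpoint (assumption)
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices receive gradients.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                     # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# From here, training proceeds with a standard Hugging Face-style trainer
# (e.g. trl's SFTTrainer) over a tokenized instruction dataset.
```

The design point is that the heavy base model never leaves 4-bit storage; only the adapter weights are held and updated in higher precision, which is what allows fine-tuning on a single consumer GPU or a pair of Tesla T4s.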