Simple Denoising Diffusion

Original link: https://github.com/utkuozbulak/pytorch-simple-diffusion

This repository provides a simplified PyTorch implementation of Denoising Diffusion Probabilistic Models (DDPMs), based on resources such as The Annotated Diffusion and Phil Wang's diffusion repository. The code is refactored for clarity, separating the diffusion functions (funct_diffusion.py), dataset handling (cls_dataset.py), and the U-Net model (cls_model.py) into distinct modules. By offering a streamlined codebase, the repository makes it easier to learn DDPM concepts. Training is managed in main_train_diffusion.py, while main_generate_images.py generates images with the trained model. The example uses a goldfish dataset with augmentations such as rotation, which is why some of the generated images appear upside down. Although the generated images are not as crisp as those in the original dataset, the repository provides a solid foundation for understanding and building more complex diffusion implementations. The author encourages exploring Phil Wang's repository to learn more advanced techniques.


Original Text

This repository contains a bare-bones implementation of denoising diffusion [1,2] in PyTorch, with the majority of its code taken from The Annotated Diffusion and Phil Wang's diffusion repository. Both resources are great for getting started with diffusion models, but they were still a bit convoluted for me when I first started learning about them, so as a learning exercise I refactored most of The Annotated Diffusion's implementation into a bare-bones version, with functions and classes logically separated into different files. My goal was to understand the building blocks of diffusion models in order to use them in some upcoming projects. I'm sharing this repo in the hope that my exercise will help you understand more complex implementations.

The code is organized under the src folder as follows:

  • funct_diffusion.py - Contains all the functions needed for the forward and reverse diffusion processes, including the noise scheduler.

  • cls_dataset.py - Contains data-related functions and classes. I used a single class (n01443537 - Carassius auratus - Goldfish) with some augmentations (e.g., rotations and flips), which is why several of the generated images contain upside-down fish.

  • cls_model.py - Contains the model. The model in this repo is essentially a copy-paste of The Annotated Diffusion's implementation, except for dim_mults=(1, 2, 4, 8) and channels=3 (RGB).

  • main_train_diffusion.py - Trains the diffusion model. I separated training and generation into two different files to make it clear which parameters each stage needs.

  • main_generate_images.py - Generates images using the trained model.
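To make the separation above concrete, here is a minimal sketch of the forward (noising) process and the training objective that a file like funct_diffusion.py implements. The function names and the linear beta schedule with 300 timesteps follow The Annotated Diffusion's conventions; they are illustrative and not necessarily the repo's exact API.

```python
import torch
import torch.nn.functional as F

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    # Linear noise schedule from Ho et al. [2]; common default values.
    return torch.linspace(beta_start, beta_end, timesteps)

T = 300
betas = linear_beta_schedule(T)
alphas = 1.0 - betas
alphas_cumprod = torch.cumprod(alphas, dim=0)
sqrt_alphas_cumprod = torch.sqrt(alphas_cumprod)
sqrt_one_minus_alphas_cumprod = torch.sqrt(1.0 - alphas_cumprod)

def q_sample(x0, t, noise):
    # Forward process in closed form:
    #   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
    s1 = sqrt_alphas_cumprod[t].view(-1, 1, 1, 1)
    s2 = sqrt_one_minus_alphas_cumprod[t].view(-1, 1, 1, 1)
    return s1 * x0 + s2 * noise

def training_loss(model, x0):
    # Pick a random timestep per image, noise the image,
    # and train the U-Net to predict the added noise.
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = q_sample(x0, t, noise)
    return F.mse_loss(model(x_t, t), noise)
```

A training loop then just calls training_loss on each batch and backpropagates through the U-Net.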

Examples from the dataset:

Examples generated by the diffusion model (the rotations are due to the data augmentation; it's hilarious, though):

Example diffusion process:
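The diffusion (sampling) process shown above can be sketched as follows. This is the standard DDPM reverse sampler from Ho et al. [2], starting from pure noise and denoising step by step; it is a simplified sketch, not necessarily the exact code in funct_diffusion.py or main_generate_images.py.

```python
import torch

@torch.no_grad()
def p_sample(model, x_t, t, betas, alphas, alphas_cumprod):
    # One reverse step: predict the noise, compute the posterior mean,
    # and add fresh noise unless this is the final step (t == 0).
    beta_t = betas[t]
    t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
    eps = model(x_t, t_batch)
    mean = (1.0 / alphas[t].sqrt()) * (
        x_t - beta_t / (1.0 - alphas_cumprod[t]).sqrt() * eps
    )
    if t == 0:
        return mean
    noise = torch.randn_like(x_t)
    return mean + beta_t.sqrt() * noise  # sigma_t^2 = beta_t variant

@torch.no_grad()
def sample(model, shape, betas):
    alphas = 1.0 - betas
    alphas_cumprod = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        x = p_sample(model, x, t, betas, alphas, alphas_cumprod)
    return x
```

Running sample over all timesteps produces the progression from noise to an image seen in the figure.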

As you can see, the generated images are not as crisp as those from the dataset. Many improvements can be incorporated to raise image quality, each adding more complexity. Phil Wang's diffusion repository is a great place to discover some of those methods.

Requirements:

torch
torchvision
datasets
PIL
numpy

[1] Song and Ermon, Generative Modeling by Estimating Gradients of the Data Distribution

[2] Ho et al., Denoising Diffusion Probabilistic Models
