CS 180/280A Project 5: Fun with Diffusion Models

Gina Wu (Fall 2024)

This project explores diffusion models: in Part A I experiment with a pre-trained text-to-image model, and in Part B I build and train diffusion models from scratch.


Part A: The Power of Diffusion Models

In the first part, I experiment with pre-trained diffusion models, implement diffusion sampling loops, and use them for tasks including inpainting and creating optical illusions.

Note: seed=42 for all results shown.

0: Setup

I load the two-stage DeepFloyd IF model, a text-to-image diffusion model, from Hugging Face. Below are some example outputs for given prompts. Notice that the quality of the generated images improves with more inference steps, especially in the first set of realistic images.
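For reference, loading the two stages through Hugging Face diffusers looks roughly like this (a minimal sketch; the checkpoint names are the public DeepFloyd ones, and other arguments may differ from my notebook):

    import torch
    from diffusers import DiffusionPipeline

    # Stage 1 generates 64x64 images; stage 2 super-resolves them to 256x256.
    stage_1 = DiffusionPipeline.from_pretrained(
        "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
    stage_2 = DiffusionPipeline.from_pretrained(
        "DeepFloyd/IF-II-L-v1.0", variant="fp16", torch_dtype=torch.float16)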

[Figures: 'a man wearing a hat' at 20, 50, and 100 inference steps; 'a rocket ship' at 20, 50, and 100 steps; 'an oil painting of a snowy mountain village' at 20, 50, and 100 steps]

1: Sampling Loops

1.1: Implementing the Forward Process

The two main stages of diffusion are the forward and the reverse processes. Here I implement the forward process, which takes in a clean image and adds randomly sampled Gaussian noise scaled by values from a noise schedule (given by the pre-trained model).
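In equations, the forward process is x_t = sqrt(ᾱ_t) x_0 + sqrt(1 - ᾱ_t) ε with ε ~ N(0, I). A minimal sketch in PyTorch (assuming alphas_cumprod holds the ᾱ schedule from the pre-trained model):

    import torch

    def forward(x0, t, alphas_cumprod):
        """Jump straight from a clean image x0 to the noisy x_t."""
        abar = alphas_cumprod[t]
        eps = torch.randn_like(x0)  # eps ~ N(0, I)
        return abar.sqrt() * x0 + (1 - abar).sqrt() * eps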

[Figures: Campanile; Noisy Campanile at t=250, t=500, t=750]

1.2: Classical Denoising

Naively denoising images with a Gaussian blur filter does not work very well. I tried hard to recover the images with different kernel sizes, but the results are still very noisy.
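The blur itself is just a fixed low-pass filter, e.g. with torchvision (kernel sizes as in the captions below):

    from torchvision.transforms.functional import gaussian_blur

    # A low-pass filter removes some noise, but also all high-frequency detail.
    denoised = gaussian_blur(noisy_image, kernel_size=11)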

[Figures: Noisy Campanile at t=250, 500, 750; Gaussian blur denoising at t=250 (kernel_size=11), t=500 (kernel_size=15), t=750 (kernel_size=21)]

1.3: One-Step Denoising

I can use the pre-trained diffusion model to denoise in a single step. For each noisy image, I estimate the noise by passing it through the model along with the text embedding for the prompt "a high quality photo". Then I remove the noise by reversing the forward process in one step.
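Rearranging the forward-process equation gives the one-step estimate of the clean image (a sketch; eps_hat is the model's noise prediction):

    def one_step_denoise(xt, eps_hat, t, alphas_cumprod):
        """Solve x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps for x0."""
        abar = alphas_cumprod[t]
        return (xt - (1 - abar).sqrt() * eps_hat) / abar.sqrt()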

The results are much better than simply applying a blurring filter, but still not very sharp.

[Figures: Noisy Campanile at t=250, 500, 750; One-Step Denoised Campanile at t=250, 500, 750]

1.4: Iterative Denoising

With iterative denoising, I start from the noisy image at the last time step and step backward in time in strides; the image is recovered gradually as the time step decreases.
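Each stride applies a DDPM-style update between the current time step t and the next, smaller t'. A sketch of one step, reusing the one-step x0 estimate from 1.3 and omitting the small added-noise term for brevity:

    def iterative_step(xt, x0_hat, t, t_prime, alphas_cumprod):
        """One strided denoising step from time t to an earlier time t_prime."""
        abar_t, abar_tp = alphas_cumprod[t], alphas_cumprod[t_prime]
        alpha = abar_t / abar_tp            # effective alpha over the stride
        beta = 1 - alpha
        # Posterior mean: blend the clean-image estimate with the current x_t.
        return (abar_tp.sqrt() * beta / (1 - abar_t)) * x0_hat \
             + (alpha.sqrt() * (1 - abar_tp) / (1 - abar_t)) * xt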

[Figures: Noisy Campanile at t=90, 240, 390, 540, 690; Original; Iteratively Denoised, One-Step Denoised, and Gaussian Blurred Campanile]

1.5: Diffusion Model Sampling

We can also generate images from scratch by iteratively denoising pure noise.

[Figures: Samples 1-5]

1.6: Classifier-Free Guidance (CFG)

Here sampling is improved with classifier-free guidance, which involves calculating both a conditional and an unconditional noise estimate and extrapolating between them. The results are of noticeably higher quality (at some cost to diversity).
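Concretely, the guided estimate pushes past the conditional prediction with a guidance scale γ > 1 (the default value here is illustrative):

    def cfg_noise(eps_uncond, eps_cond, gamma=7.0):
        """Classifier-free guidance: extrapolate past the conditional estimate."""
        return eps_uncond + gamma * (eps_cond - eps_uncond)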

[Figures: Samples 1-5 with CFG]

1.7: Image-to-image Translation

Here we look at the task of editing. Following the SDEdit algorithm, I run the forward process to get a noisy image and then run the iterative denoising loop from that point. As i_start increases, the edits stay progressively closer to the original image.
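The whole edit is "noise, then denoise": i_start picks how far along the strided timestep list denoising begins (a sketch; iterative_denoise is a hypothetical wrapper around the loop from 1.4, and forward is the function from 1.1):

    def sdedit(x_orig, i_start, strided_timesteps, alphas_cumprod):
        """Project an image toward the model manifold: noise it, then denoise."""
        t = strided_timesteps[i_start]   # larger i_start => start from less noise
        xt = forward(x_orig, t, alphas_cumprod)
        return iterative_denoise(xt, i_start)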

[Figures: SDEdit at i_start = 1, 3, 5, 7, 10, 20, next to the original, for the Campanile, a donut, and a cat]

1.7.1: Editing Hand-Drawn and Web Images

We can do the same with hand-drawn and web images, on which the edits tend to look better than on realistic photos. I particularly like the flower drawing here and how its shape and colors show up in each edit.

[Figures: SDEdit at i_start = 1, 3, 5, 7, 10, 20, next to the original, for a web image of a cat, a hand-drawn flower, and a hand-drawn person]

1.7.2: Inpainting

For inpainting, I mask out part of an image and force the model to preserve the rest of the photo: after each denoising step, the pixels outside the mask are replaced with the original image noised to the current time step.
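Concretely, one such pasting step looks like this (mask is 1 where new content is generated; forward is the function from 1.1):

    def inpaint_step(xt, x_orig, mask, t, alphas_cumprod):
        """Keep the model's output inside the mask; restore the original elsewhere."""
        return mask * xt + (1 - mask) * forward(x_orig, t, alphas_cumprod)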

[Figures: original, mask, region to replace, and inpainted result for the Campanile, Berkeley, and a cat]

1.7.3: Text-Conditional Image-to-image Translation

We can also guide SDEdit with a text prompt. The first photo input is the campanile with the text prompt "rocket ship", the second is a cat with the prompt "photo of a dog", and the third is a hand-drawn flower with the prompt "oil painting of a campfire". I like how you can see the model recovering the shape of the cat in the second series of photos, and the colors in the third.

[Figures: Rocket Ship, Dog, and Campfire edits at i_start = 1, 3, 5, 7, 10, 20, each next to the original]

1.8: Visual Anagrams

Visual anagrams are images that show one thing upright and another when flipped upside down. This is simple to implement: we run the model on the image and on its flipped copy with two different prompts, obtaining two noise estimates, and combine them. I noticed that simply taking the average is sometimes not enough, probably due to some model bias, so for some results I took an empirical weighted average of the noises. I think all the photos here turned out really well, and I really like the hipster barista holding a coffee cup!
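Per denoising step, the combined estimate looks roughly like this (a sketch; model(x, t, emb) is shorthand for a UNet noise prediction, the flipped branch is un-flipped before combining, and w=0.5 is the plain average that I nudged for the stubborn pairs):

    import torch

    def anagram_noise(model, xt, t, emb1, emb2, w=0.5):
        """Noise estimate satisfying prompt 1 upright and prompt 2 upside down."""
        eps1 = model(xt, t, emb1)  # upright estimate
        # Vertical flip, estimate with the second prompt, then flip back.
        eps2 = torch.flip(model(torch.flip(xt, dims=[-2]), t, emb2), dims=[-2])
        return w * eps1 + (1 - w) * eps2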

[Figures: visual anagrams pairing 'an oil painting of people around a campfire' with 'an oil painting of an old man'; 'a photo of a hipster barista' with 'a lithograph of a skull'; 'a photo of a dog' with 'a man wearing a hat']

1.9: Hybrid Images

Hybrid images are created by taking noise estimates from two different text prompts and combining the low-frequency features of one with the high-frequency features of the other. I loved the way the first three hybrid images turned out. I think having the prompt styles match (e.g. watercolor in image 3) really helps, but I also had to experiment a lot with various text prompts. The watercolor sunset and beach is a failure case: the two prompts blended into each other rather than showing up at different viewing distances.
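Per step, this amounts to low-passing one noise estimate and high-passing the other. A sketch using a Gaussian blur as the low-pass filter (the kernel settings here are illustrative defaults, not tuned constants):

    from torchvision.transforms.functional import gaussian_blur

    def hybrid_noise(eps_low, eps_high, kernel_size=33, sigma=2.0):
        """Low frequencies from one prompt, high frequencies from the other."""
        low = gaussian_blur(eps_low, kernel_size, [sigma, sigma])
        high = eps_high - gaussian_blur(eps_high, kernel_size, [sigma, sigma])
        return low + high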

[Figures: hybrid images of a skull and a waterfall; a rocket and a snowy village; a watercolor cat and bird; a watercolor sunset and beach]

Part B: Diffusion Models from Scratch

In the second part, I implement and train my own diffusion models from scratch on the MNIST dataset.

1: Training a Single-Step Denoising UNet

The first objective is to directly predict a clean image from a noisy image. We can formulate our objective as follows:

L = E_{z,x} ||D_θ(z) - x||², where z = x + σε and ε ~ N(0, I)

For this task, I created a noisy MNIST dataset by manually adding Gaussian noise to the torch MNIST dataset.
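The noising itself is one line per batch (σ is fixed per experiment, e.g. 0.5 during training):

    import torch

    def add_noise(x, sigma):
        """z = x + sigma * eps, with eps ~ N(0, I)."""
        return x + sigma * torch.randn_like(x)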

[Figure: MNIST digits with increasing amounts of added Gaussian noise]

1.1: Implementing the UNet

The UNet architecture consists of a series of downsampling layers, followed by a bottleneck, and upsampling layers back to the input size, with skip connections between layers at matching resolutions.

[Figure: unconditional UNet architecture]
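A heavily simplified skeleton of that shape (channel counts and block contents are illustrative, not the exact composed ops I used):

    import torch
    import torch.nn as nn

    class TinyUNet(nn.Module):
        """Downsample -> bottleneck -> upsample, with one skip connection."""
        def __init__(self, ch=64):
            super().__init__()
            self.down = nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.GELU(),
                nn.Conv2d(ch, ch, 3, stride=2, padding=1))
            self.mid = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.GELU())
            self.up = nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1)
            self.out = nn.Conv2d(ch, 1, 3, padding=1)

        def forward(self, x):
            d = self.down(x)                       # 28x28 -> 14x14
            m = self.mid(d)                        # bottleneck
            u = self.up(torch.cat([m, d], dim=1))  # skip connection, back to 28x28
            return self.out(u)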

1.2: Using the UNet to Train a Denoiser

Training is straightforward and the loss curve is quite smooth. Below I show denoising results after the first and fifth epochs of training.

1.2.1: Training

[Figure: training loss curve]

Epoch 1:

[Figures: denoising results after 1 epoch]

Epoch 5:

[Figures: denoising results after 5 epochs]

1.2.2: Out-of-Distribution Testing

We trained on a noise level of 0.5, but the model performs reasonably well on other noise levels.

[Figures: denoising results across a range of noise levels]

2: Training a Diffusion Model

Now we change the objective to predicting the noise rather than the clean image. This is a more difficult problem, but a more powerful setup, as it allows us to sample starting from pure noise. This is the new objective function:

L = E_{x,ε,t} ||ε_θ(x_t, t) - ε||²

2.1: Adding Time-Conditioning to UNet

We add time conditioning by injecting the time step into the decoding blocks through small fully connected blocks.
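A sketch of one such block (the scalar time step, normalized to [0, 1], is lifted to a per-channel offset and broadcast over the feature map; the exact injection scheme follows the project's conditioning idea, not necessarily my exact module):

    import torch.nn as nn

    class FCBlock(nn.Module):
        """Map a scalar condition to per-channel offsets for a feature map."""
        def __init__(self, ch):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(1, ch), nn.GELU(), nn.Linear(ch, ch))

        def forward(self, feats, t):
            # t: (B, 1) normalized time step, broadcast over spatial dims
            return feats + self.mlp(t)[:, :, None, None]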

[Figure: time-conditioned UNet architecture]

2.1.1: Training

I follow the DDPM training algorithm: pick a random clean image, add random noise at a random time step, and train the UNet to predict that noise.

[Figure: DDPM training algorithm]
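In code, one training step looks roughly like this (a sketch; T is the number of diffusion time steps, and the UNet takes the normalized time step as its conditioning input):

    import torch
    import torch.nn.functional as F

    def train_step(model, x0, alphas_cumprod, T):
        """One DDPM step: noise a clean batch at a random t, predict the noise."""
        t = torch.randint(1, T + 1, (x0.shape[0],), device=x0.device)
        eps = torch.randn_like(x0)
        abar = alphas_cumprod[t][:, None, None, None]
        xt = abar.sqrt() * x0 + (1 - abar).sqrt() * eps
        eps_hat = model(xt, t[:, None].float() / T)  # normalized time conditioning
        return F.mse_loss(eps_hat, eps)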

Training is noisier than before since this is a much harder task, but the loss still trends down nicely.

[Figure: training loss curve]

2.1.2: Sampling

Now we can sample from pure noise! The resulting digits have clean backgrounds and good definition.

[Figures: samples after epochs 1, 5, 10, 15, 20]

2.2: Adding Class-Conditioning to UNet

Sometimes the results from before don't look like anything. We can improve on this by adding class conditioning, which lets us control which digit is generated. I did this by adding two additional fully connected blocks that take the class label as a one-hot vector. Dropout is also implemented so that the class conditioning is masked out 10% of the time.
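The conditioning vector is just a one-hot label, zeroed out for a random 10% of each batch so the model also learns the unconditional distribution (a sketch):

    import torch
    import torch.nn.functional as F

    def class_condition(labels, num_classes=10, p_uncond=0.1):
        """One-hot class vectors, with ~10% dropped to all-zeros."""
        c = F.one_hot(labels, num_classes).float()
        keep = (torch.rand(len(labels), 1, device=labels.device) > p_uncond).float()
        return c * keep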

2.2.1: Training

Training looks much like the time-conditioned case, except the class label is passed in alongside the time step (and dropped out 10% of the time).

[Figures: training algorithm and loss curve]

2.2.2: Sampling

During sampling, I use classifier-free guidance: I compute two noise estimates, one unconditional and one class-conditional, and combine them as in Part A. The results look very good.

[Figures: class-conditioned samples after epochs 1, 5, 10, 15, 20]

This project was rewarding, especially the second part, where I debugged a model built from scratch. There are many components to modeling, from the building blocks to training and sampling, and it was helpful to go through the motions of debugging and pinpointing exactly where to look for mistakes, despite it being painful at times. I also appreciated being able to work with pre-trained models in Part A!