Dreambooth is a way to put anything — your loved one, your dog, your favorite toy — into a Stable Diffusion model. We will introduce what Dreambooth is, how it works, and how to perform the training.
This tutorial is aimed at people who have used Stable Diffusion but have not used Dreambooth before.
You will follow the step-by-step guide to prepare your training images and use our easy 1-click Colab notebook for dreambooth training. No coding is required!
You can put real-life objects or persons into a Stable Diffusion model and generate images in different styles and settings.
Did you know that many custom models are trained using Dreambooth? After completing this tutorial, you will know how to make your own.
You will first learn about what Dreambooth is and how it works. But you can skip to the step-by-step guide if you are only interested in the training.
Software
To follow this tutorial and perform a training, you will need to
- Be a member of the site, OR
- Purchase the training notebook
Either option grants you access to the training notebook and example images.
Note:
- This notebook can only train a Stable Diffusion v1.5 checkpoint model. If you are interested in the SDXL model, train an SDXL LoRA model instead.
- This notebook can be run with a free Colab account. A paid account allows you to use a faster V100 GPU, which speeds up the training.
What is Dreambooth?
Published in 2022 by the Google research team, Dreambooth is a technique to fine-tune diffusion models (like Stable Diffusion) by injecting a custom subject into the model.
Why is it called Dreambooth? According to the Google research team,
It’s like a photo booth, but once the subject is captured, it can be synthesized wherever your dreams take you.
Sounds great! But how well does it work? Below is an example from the research article. Using just 3 images of a particular dog (Let’s call her Devora) as input, the dreamboothed model can generate images of Devora in different contexts.
How does Dreambooth work?
You may ask why you can’t simply fine-tune the model with additional training steps on those images. The issue is that doing so is known to cause catastrophic failure, due to overfitting (since the dataset is quite small) and language drift.
Dreambooth resolves these problems by
- Using a rare word for the new subject (Notice I used a rare name, Devora, for the dog) so that it does not have a lot of meaning in the model in the first place.
- Prior preservation on class: In order to preserve the meaning of the class (dog in the above case), the model is fine-tuned in a way that the subject (Devora) is injected while the image generation of the class (dog) is preserved.
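The combined objective can be sketched in code. Below is a minimal, illustrative PyTorch sketch of the idea, not the notebook’s actual implementation; the function name and the `prior_weight` parameter are my own:

```python
import torch
import torch.nn.functional as F

def dreambooth_loss(instance_pred, instance_target,
                    class_pred, class_target, prior_weight=1.0):
    # Reconstruction loss on the custom-subject images
    # (e.g. "a photo of Devora dog")
    instance_loss = F.mse_loss(instance_pred, instance_target)
    # Prior-preservation loss on images the *original* model generated
    # for the class prompt (e.g. "a photo of a dog"), which keeps the
    # meaning of the class "dog" from drifting during fine-tuning
    prior_loss = F.mse_loss(class_pred, class_target)
    return instance_loss + prior_weight * prior_loss
```

The second term is what distinguishes Dreambooth from naive fine-tuning: the model is penalized whenever its notion of the plain class starts to deviate.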
There’s another similar technique called textual inversion. The difference is that Dreambooth fine-tunes the whole model, while textual inversion injects a new word, instead of reusing a rare one, and fine-tunes only the text embedding part of the model.
What you need to train Dreambooth
You will need three things
- A few custom images
- A unique identifier
- A class name
In the above example, the unique identifier is Devora and the class name is dog.
Then you will need to construct your instance prompt:
a photo of [unique identifier] [class name]
And a class prompt:
a photo of [class name]
In the above example, the instance prompt is
a photo of Devora dog
Since Devora is a dog, the class prompt is
a photo of a dog
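The two templates are mechanical enough to express as a tiny helper. This is purely illustrative; in practice the notebook simply asks you to type the prompts in:

```python
def build_prompts(unique_identifier: str, class_name: str) -> tuple[str, str]:
    """Build the Dreambooth instance and class prompts from the templates."""
    instance_prompt = f"a photo of {unique_identifier} {class_name}"
    class_prompt = f"a photo of {class_name}"
    return instance_prompt, class_prompt

instance_prompt, class_prompt = build_prompts("Devora", "dog")
# instance_prompt == "a photo of Devora dog"
```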
Now that you understand what you need, let’s dive into the training!
Step-by-step guide
Step 1: Prepare training images
As in any machine learning task, high-quality training data is the most important factor to your success.
Take 3-10 pictures of your custom subject. The pictures should be taken from different angles.
The subject should also be in a variety of backgrounds so that the model can differentiate the subject from the background.
I will use this toy in the tutorial.
Step 2: Resize your images to 512×512
In order to use the images in training, you will first need to resize them to 512×512 pixels for training with v1 models.
BIRME is a convenient site for resizing images.
- Drop your images to the BIRME page.
- Adjust the canvas of each image so that it shows the subject adequately.
- Make sure the width and height are both 512 px.
- Press SAVE FILES to save the resized images to your computer.
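If you prefer a script to BIRME, a short Pillow sketch can do the same job. This is my own example, not part of the notebook; unlike BIRME, it always crops around the center of the image, so check that your subject survives the crop:

```python
from pathlib import Path
from PIL import Image

def resize_for_v1(src_dir: str, dst_dir: str, size: int = 512) -> None:
    """Center-crop each image to a square, then resize to size x size."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).iterdir()):
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        img = Image.open(path)
        side = min(img.size)  # side length of the largest centered square
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img = img.crop((left, top, left + side, top + side))
        img.resize((size, size), Image.LANCZOS).save(out / path.name)
```

For example, `resize_for_v1("originals", "resized")` writes 512×512 copies of every image in `originals` into a `resized` folder.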
Alternatively, you can download my resized images if you want to go through the tutorial.
To download the training images:
- Site Members: Visit the members’ resources page.
- If you have purchased the notebook, you can download the training images on the product page.
Step 3: Training
I recommend using Google Colab for training because it saves you the trouble of setting up. The following notebook is modified from Shivam Shrirao’s repository but is more user-friendly. Follow the repository’s instructions if you prefer other setups.
The whole training takes about 30 minutes. If you don’t use Google Colab much, you can probably complete the training on a free account without getting disconnected. If disconnections are a problem, purchase some compute credits to avoid the frustration.
The notebook will save the model to your Google Drive. Make sure you have at least 2GB of free space if you choose fp16 (recommended) and 4GB if you don’t.
1. Open the Colab notebook.
- Site Members: Visit the members’ resources page.
- If you have purchased the notebook, you can access the notebook on the product page.
2. Enter the MODEL_NAME. You can use the Stable Diffusion v1.5 model (HuggingFace page). You can find more models on HuggingFace here. The model name should be in the format user/model, for example:
runwayml/stable-diffusion-v1-5
3. Enter the BRANCH name. See the screenshot below for the model and branch names, for example:
fp16
4. Put in the instance prompt and class prompt. For my images, I name my toy rabbit zwx so my instance prompt is:
photo of zwx toy
My class prompt is:
photo of a toy
5. Click the Play button ( ▶️ ) on the left of the cell to start processing.
6. Grant permission to access Google Drive. Currently, there’s no easy way to download the model file except by saving it to Google Drive.
7. Press Choose Files to upload the resized images.
8. It should take 10-30 minutes to complete the training, depending on which runtime machine you use. When it is done, you should see a few sample images generated from the new model.
9. Your custom model will be saved in your Google Drive, under the folder Dreambooth_model. Download the model checkpoint file and install it in your favorite GUI.
That’s it!
Step 4: Testing the model (optional)
You can also use the second cell of the notebook to test the model.
Prompt:
oil painting of zwx in style of van gogh
Using this prompt with my newly trained model, I am happy with what I got:
Note that you have to run this cell right after the training is complete. Otherwise your notebook may be disconnected.
Using the model
You can use the model checkpoint file in AUTOMATIC1111 GUI. It is a free and full-featured GUI. You can run it on Windows, Mac, and Google Colab.
Using the model with the Stable Diffusion Colab notebook is easy. Your new model is saved in the folder AI_PICS/models in your Google Drive. It is available to load without any moving around.
If you use AUTOMATIC1111 locally, download your dreambooth model to your local storage and put it in the folder stable-diffusion-webui > models > Stable-diffusion.
How to train from a different model
Stable Diffusion v1.5 may not be the best model to start with if you already have a genre of images you want to generate. For example, you should use the Realistic Vision model (see below) if you ONLY want to generate realistic images with your model.
You will need to change the MODEL_NAME and BRANCH.
Currently, the notebook only supports training half-precision v1 and v2 models. You can tell by looking at the model size. It should be about 2GB for v1 models.
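If you want to script the size check, a rough heuristic is easy to write (purely illustrative; the 2 GB figure is approximate and the function name is my own):

```python
from pathlib import Path

def looks_like_fp16_v1(checkpoint_path: str, tolerance_gb: float = 0.5) -> bool:
    """Rough heuristic: a half-precision v1 checkpoint is about 2 GB on disk."""
    size_gb = Path(checkpoint_path).stat().st_size / 1024**3
    return abs(size_gb - 2.0) <= tolerance_gb
```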
You can find the model name and the branch name below on a Huggingface page. The page shown below is here.
Example: a realistic person
Realistic Vision v2 is a good model for training a new model with a realistic person. Use the following settings for a woman.
MODEL_NAME:
SG161222/Realistic_Vision_V2.0
BRANCH:
main
Instance prompt:
photo of zwx woman
Class prompt:
photo of woman
To download the training images:
- Site Members: Visit the members’ resources page.
- If you have purchased the notebook, you can download the training images on the product page.
Below are some samples of the training images.
Here are a few images from the new model. You can find the training images in the Dreambooth guide.
Tips for successful training
Each training dataset is different. You may need to adjust the settings.
Training images
The quality of the training images is arguably the most important factor in a successful Dreambooth training.
If you are training a face, the dataset should be made up of high-quality images that clearly show the face. Avoid full-body images where the face is too small.
The images should ideally have different backgrounds. Otherwise, the background may show up in the AI images.
You don’t need too many images. 7-10 images are enough. Quality is more important than quantity.
Training steps
It is possible to over-train the model so that the AI images all look too much like the training images. The goal is to train just enough so that the model can generalize your subject to all scenes.
Reduce the steps if the model is over-trained.
Typically, you need 100 to 500 steps to train.
Class prompt
Adding more qualifiers to the class prompt helps the training.
For example, if the subject is a middle-aged woman, instead of using
Photo of a woman
You can use:
Photo of a 50 year old woman
You can also add ethnicity, which helps when training a subject from a minority ethnic group.
The dreambooth token
Although the traditional wisdom is to use a rare token like zwx or sks, it is not always the best choice.
This is especially true when training the face of a realistic person.
You may be better off using a generic name like Jane, Emma, or Jennifer. Prompt the model with the name alone to see what you get, and pick a name that already looks like your subject.
Learning rate
A large learning rate trains the model faster, so you need fewer steps. But if it is too large, the training won’t work and you will get bad results.
If you don’t get good results, you can experiment with reducing the learning rate. But at the same time, you should increase the training steps. Roughly, if you reduce the learning rate by half, you should double your training steps.
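That rule of thumb, keeping the product of learning rate and step count roughly constant, can be written down explicitly. This is my own illustrative helper, not a setting in the notebook:

```python
def rescale_training(steps: int, learning_rate: float, lr_scale: float):
    """Scale the learning rate by lr_scale and adjust the step count in the
    opposite direction, keeping total learning roughly constant."""
    return int(steps / lr_scale), learning_rate * lr_scale

# Halving the learning rate roughly doubles the required steps.
new_steps, new_lr = rescale_training(300, 2e-6, 0.5)  # -> 600 steps, 1e-6
```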
Further readings
I recommend the following articles if you want to dive deeper into Dreambooth.