You can change clothes in an image with Stable Diffusion AI for free. You can specify the new clothes with:
- A text description of the new clothes
- A pattern image
- An image of the new clothes
Here are some samples of AI-generated clothes.
This tutorial will show you how.
Software
We will use AUTOMATIC1111, a popular and free Stable Diffusion software. You can use this GUI on Windows, Mac, or Google Colab.
New to Stable Diffusion? Check out the Quick Start Guide to start using Stable Diffusion. Become a Scholar Member to access the structured courses.
Inpaint Anything extension
To install Inpaint Anything extension in AUTOMATIC1111 Stable Diffusion WebUI:
1. Start AUTOMATIC1111 Web-UI normally.
2. Navigate to the Extensions page.
3. Click the Install from URL tab.
4. Enter the following URL in the URL for extension’s git repository field.
https://github.com/geekyutao/Inpaint-Anything
5. Click the Install button.
6. Wait for the confirmation message that the installation is complete.
7. Restart AUTOMATIC1111.
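Alternatively, the Install from URL button essentially performs a git clone into the webui's extensions folder, so you can install the extension from the command line. A minimal sketch in Python; the install path is an assumption, so adjust it to your setup:

```python
# Manual alternative to "Install from URL": clone the extension repo
# into the webui's extensions folder (the path below is an assumption).
import subprocess
from pathlib import Path

extensions_dir = Path("stable-diffusion-webui") / "extensions"
subprocess.run(
    ["git", "clone", "https://github.com/geekyutao/Inpaint-Anything"],
    cwd=extensions_dir,
    check=True,
)
```

Restart AUTOMATIC1111 afterward, just like with the button-based install.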
Inpaint with Inpaint Anything
Let’s change the dress in the following image while keeping everything else untouched. This is a common use case for showing variations of a fashion product.
You will need to create an inpaint mask over her dress. However, it is difficult to create the mask manually with precision. This is where the Inpaint Anything extension can help.
Step 1: Upload the image
Before using an image in Inpaint Anything, you may need to resize it to a suitable size for Stable Diffusion. Let’s resize the width to 1024 pixels.
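If you prefer to script the resize, here is a minimal sketch with Pillow; the file names are placeholders:

```python
# Resize the image to a width of 1024 px while keeping the aspect ratio.
from PIL import Image

img = Image.open("woman.png")  # placeholder file name
new_width = 1024
new_height = round(img.height * new_width / img.width)
img.resize((new_width, new_height), Image.LANCZOS).save("woman_1024.png")
```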
You should see the Inpaint Anything page after installing the extension. Go to the Inpaint Anything page.
Upload the image to the Input Image canvas.
Step 2: Run the segmentation model
Click the Run Segment Anything button. It runs the Segment Anything model (SAM), which creates masks of all objects in the image.
You should see a segmentation map like this. Different colors represent different objects identified.
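If you are curious what the button does under the hood, the standalone segment-anything library exposes the same model. A rough sketch, assuming you have downloaded a SAM checkpoint:

```python
# Roughly what "Run Segment Anything" does: generate a mask for every
# object in the image with SAM's automatic mask generator.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

image = cv2.cvtColor(cv2.imread("woman_1024.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)
# Each entry holds a boolean HxW array under the "segmentation" key.
```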
Step 3: Create a mask
Use the paintbrush tool to paint over the object you want to regenerate. Since we want to change the dress, you will need to paint over the dress.
You don’t need to paint the whole dress. A dot in each segment of the dress will do.
Click Create Mask to create a mask over the selected area.
If the mask does not cover all the areas you want, go back to the segmentation map and paint over more areas.
If the image is too small to see the segments clearly, hover the mouse over the image and press the S key to enter full screen. Press the R key to reset.
Here, I put an extra dot on the segmentation mask to close the gap in her dress.
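Under the hood, Create Mask effectively unions every SAM segment that contains one of your dots. A sketch with NumPy, continuing from the masks above; the dot coordinates are made-up values:

```python
import numpy as np

# (x, y) pixel coordinates of the dots you painted (hypothetical values).
dots = [(310, 520), (360, 700), (340, 850)]

# Union every segment that contains at least one dot.
mask = np.zeros(image.shape[:2], dtype=bool)
for m in masks:
    seg = m["segmentation"]  # boolean HxW array
    if any(seg[y, x] for x, y in dots):
        mask |= seg

binary_mask = mask.astype(np.uint8) * 255  # black-and-white inpaint mask
cv2.imwrite("dress_mask.png", binary_mask)
```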
You can further use the following buttons to modify the mask:
- Expand mask region: Expand the mask slightly in all directions.
You can also add or subtract an area manually. First, use the paintbrush tool to sketch over an area of the masked image.
Use the following buttons:
- Trim mask by sketch: Subtract the painted new area from the mask.
- Add mask by sketch: Add the painted new area to the mask.
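For reference, these buttons map to simple mask arithmetic. A sketch with OpenCV, treating both the mask and your painted sketch as 0/255 arrays (the kernel size is an assumption):

```python
import cv2
import numpy as np

binary_mask = cv2.imread("dress_mask.png", cv2.IMREAD_GRAYSCALE)
sketch = np.zeros_like(binary_mask)  # stand-in for the area you painted
kernel = np.ones((5, 5), np.uint8)   # kernel size is an assumption

# "Expand mask region": grow the mask slightly in all directions.
expanded = cv2.dilate(binary_mask, kernel, iterations=1)

# "Trim mask by sketch": remove the painted area from the mask.
trimmed = cv2.subtract(binary_mask, sketch)

# "Add mask by sketch": merge the painted area into the mask.
added = cv2.bitwise_or(binary_mask, sketch)
```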
Since our mask looks pretty good, we don’t need to use any of these functions to refine the mask.
Step 4: Send mask to inpainting
You can do inpainting directly in the Inpaint Anything extension. However, the functionality is limited. I prefer to send the mask to the img2img page for inpainting.
On the Inpaint Anything extension page, switch to the Mask Only tab.
Click Get Mask. A black-and-white mask should appear under the button.
Click Send to img2img inpaint.
The image and the inpaint mask should appear in the Inpaint upload tab on the img2img page.
Click the Auto Detect Size button (the triangular ruler icon) to detect the size of the image.
Use the following settings for inpainting.
- Checkpoint Model: sd_xl_base_1.0.safetensors (Stable Diffusion XL base model)
- Prompt:
a woman in a dress with floral pattern
- Mask mode: Inpaint masked
- Inpaint area: Whole picture
- Sampling method: Euler a
- Sampling steps: 25
- Denoising strength: 0.8 (Higher values change more. But the image may be incoherent if it is too high.)
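If you start AUTOMATIC1111 with the --api flag, you can run the same inpainting job against its /sdapi/v1/img2img endpoint. A minimal sketch; the file names and the image height are assumptions:

```python
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("woman_1024.png")],  # placeholder file names
    "mask": b64("dress_mask.png"),
    "prompt": "a woman in a dress with floral pattern",
    "sampler_name": "Euler a",
    "steps": 25,
    "denoising_strength": 0.8,
    "inpainting_mask_invert": 0,  # 0 = Inpaint masked
    "inpaint_full_res": False,    # Whole picture
    "width": 1024,
    "height": 1536,               # match your image; this value is an assumption
    "override_settings": {"sd_model_checkpoint": "sd_xl_base_1.0.safetensors"},
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
images = r.json()["images"]       # base64-encoded results
```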
Now, she has new dresses!
Advanced inpainting techniques
If you don’t quite get what you want, you can try the following techniques.
Use an inpainting model
If the inpainted area is inconsistent with the rest of the image, you can use an inpainting model. They are special models for inpainting.
Let’s use the Realistic Vision Inpainting model because we want to generate a photo-realistic style.
- Checkpoint model: Realistic Vision Inpainting
- Denoising strength: 0.8 – 1.0
With an inpainting model, you can push the denoising strength all the way to 1 without losing consistency.
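In API terms, that swap is just two fields on the payload from the previous step; the checkpoint name must match what your webui lists, so the one below is an assumption:

```python
# Swap in an inpainting checkpoint and raise the denoising strength.
payload["override_settings"] = {"sd_model_checkpoint": "realisticVisionInpainting"}  # name is an assumption
payload["denoising_strength"] = 1.0
```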
Use ControlNet inpainting
There are not many inpainting models available, however. The good news is that ControlNet inpainting works with any Stable Diffusion v1.5 model!
- Checkpoint model: cyberrealistic_v33
- Prompt:
a woman in a dress with floral pattern
- Negative prompt:
disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w, 2d, illustration, sketch, nsfw, nude
- Denoising strength: 1
In the ControlNet section:
- Enable: Yes
- Control Type: Inpaint
- Preprocessor: inpaint_global_harmonious
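The ControlNet extension exposes the same settings through the API via alwayson_scripts. A sketch of the single-unit setup, extending the earlier payload; the exact model name varies by installation:

```python
payload["override_settings"] = {"sd_model_checkpoint": "cyberrealistic_v33"}  # name is an assumption
payload["denoising_strength"] = 1.0
payload["alwayson_scripts"] = {
    "ControlNet": {
        "args": [
            {
                "enabled": True,
                "module": "inpaint_global_harmonious",
                "model": "control_v11p_sd15_inpaint",  # exact name may differ per install
                "weight": 1.0,
            }
        ]
    }
}
```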
Here's what we get.
We get some new patterns by using a different model!
ControlNet Canny
You can also experiment with other ControlNets, such as Canny, to let the inpainting better follow the original content.
- Enable: Yes
- Control Type: Canny
- Preprocessor: Canny
- Model: control_v11p_sd15_canny (For a v1.5 model)
You can keep the denoising strength as 1.
You must set the width to a size compatible with Stable Diffusion v1.5, i.e. 512 px. I did that by setting the resize scale factor to 0.5. You can also set Resize to 512×768 directly.
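If you are resizing programmatically, round to a multiple of 8, which the model architecture requires. A small sketch; the original height is an assumption:

```python
# Scale a 1024-px-wide image down to 512 px, rounding the height to a multiple of 8.
width, height = 1024, 1536                       # current size; height is an assumption
scale = 512 / width
new_size = (512, round(height * scale / 8) * 8)  # e.g. (512, 768)
```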
Here’s what I got.
Like ControlNet inpainting, the Canny method allows a very high denoising strength while honoring the lines and shapes of the original dress. Notice that the hands stay behind the dress, just as in the original image.
ControlNet IP-adapter
What if you have a specific pattern you want to put on, like the one below?
You will need two ControlNets. Keep the Canny ControlNet and add an IP-adapter ControlNet.
ControlNet Unit 0 settings:
- Enable: Yes
- Control Type: Canny
- Preprocessor: Canny
- Model: control_v11p_sd15_canny (For a v1.5 model)
- Control Weight: 0.6
ControlNet Unit 1 settings:
- Enable: Yes
- Upload independent control image: Yes. Upload the pattern to the canvas above.
- Control Type: IP-adapter
- Preprocessor: ip-adapter_clip_sd15
- Model: ip-adapter_sd15_plus (For a v1.5 model)
- Control Weight: 0.9
- Ending control step: 0.8
You can keep the denoising strength as 1.
You can also put an exact image of the dress you want in the IP-Adapter unit.
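Scripted through the API, the two units go into the same alwayson_scripts list, in order. A sketch extending the earlier payload, with hedged model names and a placeholder pattern image:

```python
payload["alwayson_scripts"] = {
    "ControlNet": {
        "args": [
            {   # Unit 0: Canny keeps the lines and shapes of the original dress.
                "enabled": True,
                "module": "canny",
                "model": "control_v11p_sd15_canny",  # exact name may differ per install
                "weight": 0.6,
            },
            {   # Unit 1: IP-adapter transfers the look of the pattern image.
                "enabled": True,
                "image": b64("pattern.png"),         # placeholder file name
                "module": "ip-adapter_clip_sd15",
                "model": "ip-adapter_sd15_plus",     # exact name may differ per install
                "weight": 0.9,
                "guidance_end": 0.8,                 # Ending control step
            },
        ]
    }
}
```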
Notes
If you specify the dress's color in the prompt but don't quite get it, try expanding the mask in Inpaint Anything. Some pixels surrounding the mask may retain the original color.
In ControlNet, increase the weight to increase the effect. Decrease the Ending Control Step to decrease the effect.