Do you ever need to create consistent AI characters from different viewing angles? The method in this article generates a grid of the same character seen from multiple angles, like the one below. You can use the results for downstream artwork that requires the same character in multiple images.
Here’s the video version of this tutorial for AUTOMATIC1111.
Video tutorial for ComfyUI.
Software
I will provide instructions on how to create this in both AUTOMATIC1111 and ComfyUI.
AUTOMATIC1111
We will use AUTOMATIC1111, a popular and free Stable Diffusion software. Check out the installation guides for Windows, Mac, or Google Colab.
If you are new to Stable Diffusion, check out the Quick Start Guide.
Take the Stable Diffusion course if you want to build solid skills and understanding.
Check out the AUTOMATIC1111 Guide if you are new to AUTOMATIC1111.
ComfyUI
We will use ComfyUI, an alternative to AUTOMATIC1111.
Read the ComfyUI installation guide and ComfyUI beginner’s guide if you are new to ComfyUI.
Take the ComfyUI course to learn ComfyUI step-by-step.
How this workflow works
Checkpoint model
This workflow only works with some SDXL models. It works reliably with the model suggested below; switching to other checkpoint models requires experimentation.
The reason appears to be the training data: it only works well with models that respond well to the keyword “character sheet” in the prompt.
Controlling the grid of viewing angles
With the right model, this technique uses the Canny SDXL ControlNet to copy the outline of a character sheet, like the one below.
The control image is what ControlNet actually uses. When using a new reference image, always inspect the preprocessed control image to ensure the details you want are there.
Copying the face
We will use IP-adapter Face ID Plus v2 to copy the face from another reference image. This IP-adapter model copies only the face. Because it uses InsightFace to extract facial features from the reference image, it can accurately transfer the face to different viewing angles.
Face correction
We must use an image size compatible with SDXL, for example 1024×1024 pixels, to ensure global consistency. The challenge is that the faces are then too small to be rendered correctly by the model.
We will use automatic inpainting at higher resolution to fix them.
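Conceptually, this fix is a crop-upscale-inpaint-paste loop: detect the small face, blow it up to a resolution the model renders well, re-render it, and paste it back. Here is a minimal Pillow sketch of that logic (the face box coordinates are hypothetical, and the re-rendering step that ADetailer/FaceDetailer performs with the diffusion model is left as a placeholder):

```python
from PIL import Image

def fix_face(image, box, work_size=1024):
    """Crop a small face region, upscale it to the model's native
    resolution so it can be re-rendered, then paste it back.
    The actual re-rendering (img2img inpainting) is done by the
    diffusion model; here it is a no-op placeholder."""
    face = image.crop(box)                    # small, badly rendered face
    hi = face.resize((work_size, work_size))  # upscale to SDXL-native size
    # ... run inpainting on `hi` with the diffusion model here ...
    fixed = hi.resize(face.size)              # scale the result back down
    out = image.copy()
    out.paste(fixed, box[:2])                 # paste into the original
    return out

# Demo on a blank 1024x1024 canvas with a hypothetical 128x128 face box.
canvas = Image.new("RGB", (1024, 1024), "white")
result = fix_face(canvas, (448, 192, 576, 320))
print(result.size)  # (1024, 1024)
```

This is exactly what ADetailer (AUTOMATIC1111) and FaceDetailer (ComfyUI) automate: detection of the box, the upscaled inpainting pass, and the paste-back.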
AUTOMATIC1111
This section covers using this workflow in AUTOMATIC1111. A video version is also available.
Software setup
Checkpoint model
We will use the ProtoVision XL model. Download it and put it in the folder stable-diffusion-webui > models > Stable-Diffusion.
Extensions
You will need the ControlNet and ADetailer extensions.
The installation URLs are:
https://github.com/Mikubill/sd-webui-controlnet
https://github.com/Bing-su/adetailer
In AUTOMATIC1111, go to Extensions > Install from URL. Enter one of the URLs above in URL for extension’s git repository. Click the Install button.
Restart AUTOMATIC1111 completely.
Scroll down to the ControlNet section on the txt2img page.
You should see 3 ControlNet Units available (Unit 0, 1, and 2). If not, go to Settings > ControlNet. Set Multi-ControlNet: ControlNet unit number to 3. Restart.
IP-adapter and controlnet models
You will need the following two models: the Canny SDXL ControlNet model (diffusers_xl_canny_mid) and the IP-adapter Face ID Plus v2 SDXL model (ip-adapter-faceid-plusv2_sdxl).
Download them and put them in the folder stable-diffusion-webui > models > ControlNet.
Step 1: Enter txt2img setting
Go to the txt2img page, enter the following settings.
- Checkpoint model: ProtoVision XL
- Prompt:
character sheet, color photo of woman, white background, blonde long hair, beautiful eyes, black shirt
- Negative prompt:
disfigured, deformed, ugly, text, logo
- Sampling method: DPM++ 2M Karras
- Sampling Steps: 20
- CFG scale: 7
- Seed: -1
- Size: 1024×1024
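If you prefer to script this step, the same settings can be expressed as a payload for AUTOMATIC1111's txt2img API (a sketch; it assumes the webui was launched with the --api flag so the /sdapi/v1/txt2img endpoint is available):

```python
import json

# Assumes AUTOMATIC1111 is running locally with the --api flag.
payload = {
    "prompt": ("character sheet, color photo of woman, white background, "
               "blonde long hair, beautiful eyes, black shirt"),
    "negative_prompt": "disfigured, deformed, ugly, text, logo",
    "sampler_name": "DPM++ 2M Karras",
    "steps": 20,
    "cfg_scale": 7,
    "seed": -1,
    "width": 1024,
    "height": 1024,
    # Select the checkpoint per-request instead of in the UI dropdown.
    "override_settings": {"sd_model_checkpoint": "ProtoVision XL"},
}

# POST it, e.g. with requests:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
print(payload["width"], payload["height"])  # 1024 1024
```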
Step 2: Enter ControlNet setting
Scroll down to the ControlNet section on the txt2img page.
ControlNet Unit 0
We will use Canny in ControlNet Unit 0.
Save the following image to your local storage. Upload it to the image canvas under Single Image.
Here are the rest of the settings.
- Enable: Yes
- Pixel Perfect: Yes
- Control Type: Canny
- Preprocessor: canny
- Model: diffusers_xl_canny_mid
- Control Weight: 0.4
- Starting Control Step: 0
- Ending Control Step: 0.5
ControlNet Unit 1
We will use ControlNet Unit 1 for copying a face using IP-adapter.
Save the following image to your local storage and upload it to the image canvas of ControlNet Unit 1. You can use any image with a face you want to copy.
Below are the rest of the settings.
- Enable: Yes
- Pixel Perfect: No
- Control Type: IP-Adapter
- Preprocessor: ip-adapter_face_id_plus (or ip-adapter-auto)
- Model: ip-adapter-faceid-plusv2_sdxl
- Control Weight: 0.7
- Starting Control Step: 0
- Ending Control Step: 1
It should look like this:
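For reference, the same two units can be written as sd-webui-controlnet API arguments. This is a hedged sketch: the field names follow the extension's API conventions, and the image values are placeholders for base64-encoded reference images:

```python
# The two ControlNet units from Step 2, as sd-webui-controlnet API args.
controlnet_units = [
    {   # Unit 0: Canny copies the character-sheet outline
        "enabled": True,
        "pixel_perfect": True,
        "module": "canny",
        "model": "diffusers_xl_canny_mid",
        "weight": 0.4,
        "guidance_start": 0.0,   # Starting Control Step
        "guidance_end": 0.5,     # Ending Control Step
        "image": "<base64 of the character-sheet reference>",
    },
    {   # Unit 1: IP-adapter Face ID Plus v2 copies the face
        "enabled": True,
        "pixel_perfect": False,
        "module": "ip-adapter_face_id_plus",
        "model": "ip-adapter-faceid-plusv2_sdxl",
        "weight": 0.7,
        "guidance_start": 0.0,
        "guidance_end": 1.0,
        "image": "<base64 of the face reference>",
    },
]

# This nests under the txt2img payload's "alwayson_scripts" key.
alwayson = {"controlnet": {"args": controlnet_units}}
print(len(controlnet_units))  # 2
```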
Step 3: Enable ADetailer
We will use ADetailer to fix the face automatically.
Go to the ADetailer section.
Enable ADetailer: Yes.
We will use the default settings.
Step 4: Generate image
Press Generate.
You should get an image like this.
ComfyUI
Software setup
Workflow
Load the following workflow in ComfyUI.
Every time you try to run a new workflow, you may need to do some or all of the following steps.
- Install ComfyUI Manager
- Install missing nodes
- Update everything
Install ComfyUI Manager
Install ComfyUI Manager if you haven’t done so already. It provides an easy way to update ComfyUI and install missing nodes.
To install this custom node, navigate to the custom_nodes folder using PowerShell (Windows) or the Terminal app (Mac):
cd ComfyUI/custom_nodes
Install ComfyUI Manager by cloning the repository into the custom_nodes folder.
git clone https://github.com/ltdrdata/ComfyUI-Manager
Restart ComfyUI completely. You should see a new Manager button appearing on the menu.
If you don’t see the Manager button, check the terminal for error messages. One common issue is that git is not installed. Installing it and repeating the steps should resolve the issue.
Install missing custom nodes
To install the custom nodes that the workflow uses but you don’t have:
- Click Manager in the Menu.
- Click Install Missing Custom Nodes.
- Restart ComfyUI completely.
Update everything
You can use ComfyUI Manager to update custom nodes and ComfyUI itself.
- Click Manager in the Menu.
- Click Update All. It may take a while to finish.
- Restart ComfyUI and refresh the ComfyUI page.
Checkpoint Model
We will use the ProtoVision XL model. Download it and put it in the folder comfyui > models > checkpoints.
ControlNet model
Download this ControlNet model: diffusers_xl_canny_mid.safetensors
Put it in the folder comfyui > models > controlnet.
IP-adapter models
Download the Face ID Plus v2 model: ip-adapter-faceid-plusv2_sdxl.bin. Put it in the folder comfyui > models > ipadapter. (Create the folder if you don’t see it)
Download the Face ID Plus v2 LoRA model: ip-adapter-faceid-plusv2_sdxl_lora.safetensors. Put it in the folder comfyui > models > loras.
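To summarize the folder layout these downloads expect, here is a small sketch that creates the four model folders (paths relative to the ComfyUI install directory, matching the names above):

```python
import os

# Create the model folders this workflow expects, if missing:
#   checkpoints -> ProtoVision XL
#   controlnet  -> diffusers_xl_canny_mid.safetensors
#   ipadapter   -> ip-adapter-faceid-plusv2_sdxl.bin
#   loras       -> ip-adapter-faceid-plusv2_sdxl_lora.safetensors
for sub in ("checkpoints", "controlnet", "ipadapter", "loras"):
    os.makedirs(os.path.join("comfyui", "models", sub), exist_ok=True)

print(sorted(os.listdir(os.path.join("comfyui", "models"))))
# ['checkpoints', 'controlnet', 'ipadapter', 'loras']
```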
Step 1: Select checkpoint model
In the Load Checkpoint node, select the ProtoVision XL model.
Step 2: Upload reference image for Controlnet
Download the following image.
Upload it to the ControlNet Canny preprocessor.
Step 3: Upload the IP-adapter image
Download the following image.
Upload it to the IP-adapter’s Load Image node.
Step 4: Generate image
Press Queue Prompt.
You should get two output images with consistent faces. The face-fixed image is on the right.
Tips
When you work on the prompt, mute (Ctrl-M) the FaceDetailer node to speed up the process. Once you are happy with the prompt, unmute it with Ctrl-M.
Customization
You can customize the image by changing the prompt, for example:
character sheet, color photo of woman, white background, long hair, beautiful eyes, black blouse
Troubleshooting
If the face doesn’t look like the image:
- Increase the control weight of the IP adapter.
- Lower the control weight and ending control step of the Canny ControlNet.
Make sure the sum of the control weights of the two ControlNet units is not much higher than 1. Otherwise, you may see artifacts.
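As a quick arithmetic check, the weights used in this tutorial sum to just over 1, which is why they work without artifacts:

```python
# Combined ControlNet weights used in this tutorial:
canny_weight = 0.4        # Canny unit
ip_adapter_weight = 0.7   # IP-adapter unit
total = canny_weight + ip_adapter_weight
print(round(total, 2))    # 1.1 -- only slightly above 1, usually safe
```

If you raise the IP-adapter weight to fix the face, lower the Canny weight by a similar amount to keep the total near 1.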