Artificial intelligence has come a long way in recent years, and one area where it has made significant progress is in the generation of images. AI-generated images are becoming increasingly realistic, and they are being used in a wide range of applications, from video games to movies to advertising. However, one of the challenges with AI-generated images is that they can sometimes produce unexpected or undesirable results. One such example is the issue of faces being distorted or "messed up" in images generated using the stable diffusion model.
Stable Diffusion is a latent diffusion model trained on a large dataset of images, and it can generate high-quality pictures from text prompts. One of its limitations, however, is that it sometimes produces distorted or "messed up" faces: the network struggles to capture the subtle details and proportions of human faces, especially when the face occupies only a small part of the image, leading to unnatural results.
Fortunately, there are ways to restore faces that have been distorted by Stable Diffusion. One of the most effective is the AUTOMATIC1111 stable-diffusion-webui, an open-source interface for generating images with Stable Diffusion that offers a variety of features and options for getting the best possible results.
The stable-diffusion-webui offers many features beyond the ones covered here; you can find the full list in the project's README on GitHub.
If you’re creating new images, you can simply select the “Restore Faces” option before generating.
If you already have an image with distorted eyes or faces, go to the “Extras” tab and upload the image you’ve already generated.
Don’t forget to set the strength (from 0 to 1) of GFPGAN or CodeFormer. Depending on the image, you might need different settings.
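If you launch the webui with the `--api` flag, this same “Extras” restoration can be scripted over its HTTP API. The sketch below is a minimal example, assuming the default local endpoint and the parameter names from the webui's API schema (`gfpgan_visibility`, `codeformer_visibility`, `codeformer_weight`); double-check them against your installation's `/docs` page.

```python
import base64
import json
from urllib import request

def build_extras_payload(image_b64, codeformer_weight=0.5,
                         gfpgan_visibility=0.0, codeformer_visibility=1.0):
    # Field names assumed from the webui's /sdapi/v1/extra-single-image schema.
    # The visibility values blend the restored face with the original
    # (0 = restoration off, 1 = fully restored).
    return {
        "image": image_b64,
        "gfpgan_visibility": gfpgan_visibility,
        "codeformer_visibility": codeformer_visibility,
        "codeformer_weight": codeformer_weight,
    }

def restore_faces(path, url="http://127.0.0.1:7860"):
    # Send an already-generated image through the Extras face restoration.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    payload = build_extras_payload(image_b64, codeformer_weight=0.7)
    req = request.Request(
        url + "/sdapi/v1/extra-single-image",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["image"]  # base64 of the restored image
```

This is handy when you want to batch-restore a whole folder of generations instead of uploading them one by one in the UI.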
Inpainting is another powerful feature of the AUTOMATIC1111 stable-diffusion-webui that lets you regenerate missing or distorted parts of an image, which makes it particularly useful for fixing faces. You select the area you want to redraw, and the tool generates new content that blends in with the rest of the picture.
For this, you can go to the img2img tab and choose “Inpaint”.
You can then paint over the eyes, and Stable Diffusion will regenerate the masked area.
A configuration that worked well for me can be found in the next screenshot.
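The same inpainting workflow can be reproduced outside the UI. Here is a minimal sketch, assuming the `diffusers` library and its `StableDiffusionInpaintPipeline`: you build a white-on-black mask over the face region (slightly padded so the redrawn pixels blend in), then let the model regenerate only that area. The padding helper is plain Python; the pipeline call sits under the `__main__` guard because it downloads model weights.

```python
def padded_mask_box(box, image_size, pad=0.15):
    """Expand a (left, top, right, bottom) face box by `pad` of its size,
    clamped to the image bounds, so the inpainted patch blends smoothly."""
    left, top, right, bottom = box
    dx = int((right - left) * pad)
    dy = int((bottom - top) * pad)
    width, height = image_size
    return (max(0, left - dx), max(0, top - dy),
            min(width, right + dx), min(height, bottom + dy))

if __name__ == "__main__":
    # Heavy imports kept here: they pull in PyTorch and download weights.
    from PIL import Image, ImageDraw
    from diffusers import StableDiffusionInpaintPipeline

    image = Image.open("portrait.png").convert("RGB").resize((512, 512))
    mask = Image.new("L", image.size, 0)            # black = keep as-is
    box = padded_mask_box((200, 180, 310, 260), image.size)  # example eye region
    ImageDraw.Draw(mask).rectangle(box, fill=255)   # white = redraw

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting")
    result = pipe(prompt="detailed, natural eyes, photorealistic face",
                  image=image, mask_image=mask).images[0]
    result.save("portrait_fixed.png")
```

The file names, box coordinates, and prompt are placeholders; the key idea is that a slightly padded mask usually gives better blending than a mask drawn tightly around the eyes.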
If you don't want to use the AUTOMATIC1111 stable-diffusion-webui, you can install the same tools it uses in your own implementation. The two main ones are CodeFormer and GFPGAN.
There is a Hugging Face Space by sczhou that performs face restoration with CodeFormer, and EdXD has written a good tutorial on running GFPGAN yourself.
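As a concrete example, GFPGAN ships a command-line inference script in its repository. The helper below assembles the invocation described in the project README; the script name and flags are taken from that repo, so check them against the version you clone.

```python
import subprocess

def gfpgan_command(input_dir, output_dir="results", version="1.4", upscale=2):
    # Flags per the GFPGAN README: -i input folder, -o output folder,
    # -v model version, -s final upsampling scale.
    return ["python", "inference_gfpgan.py",
            "-i", input_dir, "-o", output_dir,
            "-v", version, "-s", str(upscale)]

def run_gfpgan(input_dir, **kwargs):
    # Run from inside a cloned GFPGAN checkout with its requirements installed.
    subprocess.run(gfpgan_command(input_dir, **kwargs), check=True)
```

For example, `run_gfpgan("inputs/whole_imgs", upscale=4)` would restore every face in that folder and write the results alongside the originals.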
More and more artists are joining Flaneer to create art with next-gen tools, using Workstations with GPUs offering up to 24 GB of VRAM.
Generating images and videos, restoring faces, inpainting, upscaling… all of this is becoming easier than ever with the cloud infrastructure Flaneer provides. If you want to try it out, don’t hesitate to contact us at support+stablediffusion@flaneer.com and we’ll help you set everything up!