#aivideo #aianimation #stablediffusion #a1111 #controlnet #automatic1111 #video2video #img2img
This video is for beginners who are interested in seeing how AI videos can be generated from a reference video in Stable Diffusion and Automatic1111.
00:00:00 introduction and showcases
see how reference videos are used to generate new videos using AI in Stable Diffusion and A1111
00:01:14 video selection process
shows example free online sources for videos to use
00:02:16 PNG Sequence generation process
see how the PNG sequence is created using DaVinci Resolve, including cropping the video
00:04:01 A1111 settings: setting noise multiplier = 0
00:04:58 start testing in img2img with ControlNets
using the Ghibli style LoRA and ControlNets
00:06:21 setting up batch image processing and checking the results
00:07:01 removing image backgrounds in A1111
00:08:03 composing the PNG sequence back into a video in DaVinci Resolve
00:11:08 using the Deflicker effect in DaVinci Resolve to reduce flickering
Deflicker and Dirt Removal in DaVinci Resolve
00:14:18 more tests: face inpainting with After Detailer and the DW Openpose ControlNet
00:19:13 more tests with uniform faces
00:20:12 see how to remove the background in DaVinci Resolve using the Keyer
00:23:22 using img2img with After Detailer to face swap using a LoRA
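The DaVinci Resolve steps at 02:16 and 08:03 (video → PNG sequence → video) can also be done with the free ffmpeg tool. A minimal Python sketch that builds the ffmpeg commands (the file names and the 25 fps frame rate are assumptions; match your source video):

```python
import subprocess

def extract_frames(video: str, out_pattern: str = "frames/frame_%05d.png",
                   fps: int = 25) -> list[str]:
    """Build the ffmpeg command that splits a video into a PNG sequence."""
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}", out_pattern]

def assemble_video(in_pattern: str = "processed/frame_%05d.png",
                   out: str = "output.mp4", fps: int = 25) -> list[str]:
    """Build the ffmpeg command that joins processed PNGs back into a video."""
    return ["ffmpeg", "-framerate", str(fps), "-i", in_pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out]

# Run them with, e.g.: subprocess.run(extract_frames("input.mp4"), check=True)
```

Feed the extracted frames to img2img batch processing, then reassemble the processed folder with the second command.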
Computer Specs:
Laptop: Legion 5 Pro
Processor: AMD Ryzen 7 5800H, 3201 MHz
System RAM: 16.0 GB
GPU: NVIDIA GeForce RTX 3070 Laptop GPU, 8 GB
We will also cover how the PNG sequence is converted back to video, how the background can be removed, which videos work best, img2img settings, ControlNet usage, the DaVinci Resolve Deflicker effect, and several other tips and tools.
We will be using the simplest and most effective method, img2img, rather than relying on extensions such as Deforum, Temporal Kit, or mov2mov, because it is effective, easy to use, and gives us more control.
No matter what you do in Stable Diffusion, you cannot completely eliminate flickering or inconsistency, but you can reduce it by understanding what to expect from Stable Diffusion and what helps produce better results.
We will see that no single setting works for every case; settings must be chosen based on your video and your understanding of the process.
If you want to create impressive real-world videos, the best results are achieved by training your own checkpoint or LoRAs on a set of characters/styles/clothes and using that to generate the images. Since that is a time-consuming process, we will not be doing it here and will instead use existing checkpoints and LoRAs.
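For reference, the batch img2img step can also be scripted against the A1111 web API (start the UI with the --api flag). This is only a sketch: the local URL, prompt, and parameter values below are assumptions for illustration.

```python
import base64
import json
from pathlib import Path
from urllib import request

# A1111's default local API endpoint for img2img (requires launching with --api).
API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"

def build_payload(frame_png: bytes, denoising_strength: float = 0.35) -> dict:
    """Build an img2img request for one frame of the PNG sequence."""
    return {
        "init_images": [base64.b64encode(frame_png).decode("ascii")],
        "prompt": "ghibli style",  # example prompt to pair with a style LoRA
        "denoising_strength": denoising_strength,  # lower = smoother video
        "steps": 20,
        "cfg_scale": 7,
    }

def process_frames(frames_dir: str, out_dir: str) -> None:
    """Send each frame through img2img and save the returned image."""
    Path(out_dir).mkdir(exist_ok=True)
    for frame in sorted(Path(frames_dir).glob("*.png")):
        payload = json.dumps(build_payload(frame.read_bytes())).encode()
        req = request.Request(API_URL, data=payload,
                              headers={"Content-Type": "application/json"})
        with request.urlopen(req) as resp:
            result = json.load(resp)
        # The API returns base64-encoded images.
        img = base64.b64decode(result["images"][0])
        (Path(out_dir) / frame.name).write_bytes(img)
```

The UI's built-in img2img batch tab does the same job without any code; this is just for those who prefer automating it.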
Conclusion:
Img2img can produce quality, smooth videos.
Img2img is simple and easy to use.
ControlNet can help improve results, but not always.
Which ControlNet to use is experimental: Normal maps, OpenPose, Reference, Tile, Temporal, etc.
Low denoising strength can produce smoother videos.
DaVinci Resolve can generate intermediate frames and reduce flickering too.
All videos in this tutorial are based on the following free videos from Pexels and Freepik; I thank the authors for creating them and making them available for free:
Video by Tima Miroshnichenko found at [ Link ]
MART PRODUCTION video found at [ Link ]
Greenscreen dancing video designed by Freepik [ Link ]
Monstera Production on Pexels [ Link ]
Headphone girl at Freepik [ Link ]
Thanks to the creators of the Photon checkpoint
[ Link ]
MajicMix v4
[ Link ]
and Aniverse v1.3
[ Link ]
as well as the Ghibli style LoRA [ Link ]