AI Case Study: Leo and the Scary Noise

AI has infiltrated just about every aspect of the animator’s workflow, from script writing to final composite. And very soon the public will have access to text-to-video AI tools as well. But what can we do with it besides create the next Magic: The Gathering card?

In this ridiculously fast-paced environment, I challenged myself to create a short animation using AI tools at every step of the pipeline, in hopes of learning new tools and, more importantly, learning how to use existing tools in creative ways.

Below is my case study for Leo and the Scary Noise.


You can find the final animation here: Leo and the Scary Noise 

I knew that I wanted to create a simple scene but still tell a story, so I used Stable Diffusion to ideate concepts of children sitting around a campfire, then refined the idea to just one child at a campfire. This was by far the hardest and most time-consuming step, as the prompts don’t always yield what you expect.
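
If you would rather script this ideation step than keep rolling prompts in a web UI, here is a minimal sketch using the open-source diffusers library. The checkpoint, prompt wording, and settings are illustrative assumptions, not the exact setup used for this piece:

```python
# Minimal sketch: batch-generate campfire concept images with Stable Diffusion.
# The checkpoint, prompt, and settings are assumptions, not this project's exact setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a young boy sitting alone by a campfire at night, storybook illustration"
negative = "blurry, deformed hands, extra limbs, text"

# Generate a small batch so there are several concepts to compare per prompt.
images = pipe(
    prompt,
    negative_prompt=negative,
    num_images_per_prompt=4,
    num_inference_steps=30,
    guidance_scale=7.5,
).images

for i, img in enumerate(images):
    img.save(f"campfire_concept_{i:02d}.png")
```

Fixing the random seed with a torch generator makes it easier to keep one composition while tweaking only the prompt, which helps when you are narrowing down from a group scene to a single character.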

Once I was happy with my results, I used Rytr AI to create a short script based on the prompt “a young boy named Leo patiently sits alone by a campfire and hears a scary sound.” After rolling the dice several times, I settled on a script. I decided to remove the last sentence, which referenced Leo running home, because who wants to animate a run cycle?

Then I plugged the script into Replica Studios AI to create the voices for the narrator and the scary moaning. Strangely, the AI could not pronounce “wind” correctly in context; it didn’t understand that the script was referring to a breeze, not winding something up. Also, the script called for a scary moaning noise, so I just typed “…ohmmmmmohmmonnnn” and other strange collections of letters for the AI to interpret. The results were kind of silly, but they worked fine for the short animation.

Now that I had multiple elements, I started assembling everything in After Effects. But the fire looked dead, so I went back to Stable Diffusion to create similar but different fire elements that I later blended between. I also created a foreground bush using Stable Diffusion’s inpainting feature to add more depth to the camera move.
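
For anyone who wants to do that inpainting step outside a web UI, here is a hedged sketch with the diffusers inpainting pipeline. The file names, checkpoint, and prompt below are placeholders, not the actual project assets:

```python
# Minimal sketch: inpaint a foreground bush into an existing frame.
# File names, checkpoint, and prompt are placeholders, not this project's assets.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("campfire_plate.png").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to repaint (the new bush); black is kept.
mask_image = Image.open("bush_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="a dark leafy bush in the foreground, lit by firelight, matching art style",
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=30,
).images[0]
result.save("foreground_bush.png")
```

Because the bush is generated on its own plate, it can sit on a separate layer in After Effects and be offset slightly from the background to sell the parallax.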

I used a collection of compositing and animation tricks in After Effects. For the most part, this was all done the old-fashioned way.

In Photoshop, I used the Content-Aware Fill and Select Subject AI tools to create masks and fill in sections. Then I used the impressive depth-of-field neural filter to create a depth matte that I would later use as a displacement map and to drive the Camera Lens Blur effect in After Effects.
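
If Photoshop isn’t handy, a comparable grayscale depth matte can be generated with an open-source monocular depth model. This is a sketch using MiDaS via torch.hub as a substitute for the neural filter, not what was actually used here, and the model choice and file names are assumptions:

```python
# Minimal sketch: build a grayscale depth matte with MiDaS as a stand-in for
# Photoshop's depth neural filter (model choice and file names are assumptions).
import numpy as np
import torch
from PIL import Image

model = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
model.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

img = np.array(Image.open("campfire_plate.png").convert("RGB"))
batch = transform(img)

with torch.no_grad():
    prediction = model(batch)
    # Resize the depth prediction back to the source resolution.
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

depth = prediction.cpu().numpy()
# Normalize to 0-255 so the matte can drive a displacement map or lens blur in After Effects.
depth = (255 * (depth - depth.min()) / (depth.max() - depth.min())).astype(np.uint8)
Image.fromarray(depth).save("depth_matte.png")
```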

Captions were generated in Premiere, which can analyze the audio, create captions, and place them in the video with the correct timing.

Over the course of this case study, which took about five hours including research time, I got to use Rytr, Melobytes, Replica Studios, and Murf (which didn’t make it into the final) for the first time. I had no idea there was an AI that could make sound effects based on an image, and I had no idea the AI voices were so good.

This was a great challenge, and I would encourage others to try it, as there are many free AI tools available. Not everything we make will be our next magnum opus, but if we use these tools to refine our skills and inform our eye, we will be one step closer. Create something great!