
10 Incredible Examples of AI-Generated Videos

We cover a multitude of generative AI advancements, but this wrap-up of just how far AI video generation has come in the past few years left us genuinely impressed. The ability to create high-quality video effects is being unlocked for millions.

An AI-powered Hollywood studio

It’s been a year of discovery as the world en masse gets to experience the power of generative AI.


But while most people’s exposure may come through tools such as ChatGPT, the acceleration in AI video generation over the last few years is nothing short of incredible.

Nowhere has this been demonstrated more clearly than in a stunning thread by Rowan Cheung, founder of The Rundown AI, who collated some of the best examples of AI video generation in action on Twitter earlier this week.

Below, we’re going to look at the thread and examples in more detail and examine how AI developers and creatives are using text-to-image, text-to-video, and image-to-video solutions to bring their ideas to life.


Play the examples as you go through, and consider for yourself how much power is being unlocked and democratized from little more than an idea and some prompt engineering.

10 Incredible Examples of AI-Generated Videos

1. Turning AI Images into Videos with Midjourney and Motion Brush

The first example in the thread, originally posted by Rory Flynn, showed how an image generated with Midjourney (in this case, a bird, a car, and a leaf) could be converted into an animated video using Runway’s Motion Brush.

Essentially, Midjourney uses a diffusion process, combining machine learning and generative AI, to translate a text prompt into an image. Runway’s Motion Brush then lets the user paint over the image to add highly realistic motion.
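
Neither Midjourney nor Motion Brush exposes a public API, so the sketch below only illustrates the first half of that workflow, using the open-source diffusers library with Stable Diffusion standing in for Midjourney; the Motion Brush step itself happens interactively in Runway’s web editor. The prompt and file name are illustrative.

# A minimal text-to-image sketch: Stable Diffusion (via diffusers) stands in
# for Midjourney, which has no public API. Same idea: a text prompt is
# iteratively denoised into an image.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a red bird perched on a vintage car, a single falling leaf").images[0]
image.save("bird_car_leaf.png")  # this still would then be animated in Runway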

2. Hiding Words in Images with Pika Labs

This next example, originally shared by fofrAI, demonstrates how AI can be used to create a clip with a hidden word embedded in it. In this post, the creator used Pika Labs to hide the term “FOMO” within the models’ shirts.

The clip illustrates an approach known as controllism, where a tool like ControlNet (or, in this case, Pika Labs) weaves words, shapes, or symbols into the overall design of a generated image.
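
Pika Labs’ internals aren’t public, but the ControlNet version of the trick can be sketched with diffusers. The idea is to pass a black-and-white mask of the word as the conditioning image; the community checkpoint named below is one model often used for this effect, and the mask file name is illustrative.

# A hedged sketch of the hidden-word technique with a ControlNet checkpoint.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

word_mask = load_image("fomo_mask.png")  # white FOMO lettering on black

image = pipe(
    "fashion models in patterned shirts, studio lighting",
    image=word_mask,
    controlnet_conditioning_scale=1.3,  # higher values make the word more legible
).images[0]
image.save("hidden_word.png")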

3. Introducing Stability AI’s New Stable Video Diffusion Model

Our next example comes from Javi Lopez, the founder of Magnific AI, who showcased how the newly announced video generation model Stable Video Diffusion could be used to apply motion to static images. 

In the comments, Lopez explains that the clips in the post were created by taking a Midjourney image as input, upscaling it to 1080p, and interpolating the result to 24 frames per second.
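
Because Stable Video Diffusion is openly released, this image-to-video step can be sketched directly with the diffusers library. The snippet assumes a local still image and uses the model’s native 1024x576 resolution; the upscaling and 24 fps interpolation Lopez describes would be separate steps with other tools.

# A minimal image-to-video sketch with Stable Video Diffusion.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
).to("cuda")

# SVD generates best at its native 1024x576 resolution.
still = load_image("midjourney_still.png").resize((1024, 576))

frames = pipe(still, decode_chunk_size=8).frames[0]
export_to_video(frames, "animated_still.mp4", fps=7)  # interpolate afterwards for higher fps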

4. Bringing Memes to Life

As part of a more lighthearted exercise, fofrAI has used Stable Video Diffusion to add motion to the well-known Distracted Boyfriend meme. 

In this instance, the woman in red can be seen walking toward the camera as the couple turns. There is also some basic facial motion, although it looks distinctly unnatural, particularly on the couple.

5. Merging Multiple Tools Together

In another example, Cheung highlighted a clip from Steve Mills in which images created with Midjourney were converted into animated sequences.

To achieve this effect, Mills says he used Magnific to upscale the Midjourney-generated image and add extra detail, animated it with Stable Video Diffusion, and then used Topaz to interpolate the footage. This last step improved the overall smoothness of the animation.
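
Magnific and Topaz are proprietary tools without public APIs, but the final interpolation step can be approximated with FFmpeg’s motion-compensated minterpolate filter, as in this rough stand-in (file names are illustrative).

# Approximating the Topaz interpolation step with FFmpeg's minterpolate
# filter, which synthesizes intermediate frames to smooth a choppy AI clip.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-y", "-i", "svd_clip.mp4",
        "-vf", "minterpolate=fps=60:mi_mode=mci",  # motion-compensated interpolation
        "smoothed_clip.mp4",
    ],
    check=True,
)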

6. The AI Music Video

Cheung’s next example, originally shared by Nicolas Neubert, a product designer at VW Group, displays a cinematic music video created with Runway’s multimodal Gen-2 model.

In this instance, Neubert used text-to-image and image-to-video generation to create the assets used in the final video, demonstrating that creatives no longer need to go out and film to develop this type of content.

7. Adding Animation to More Memes

Building on the earlier example, this next post, originally shared by Pietro Schirano, founder of EverArt, revealed how the famous This Is Fine meme could be animated with Stable Video Diffusion.

This example is notable because the creator managed to animate the fire and the dog’s head without stepping outside of the comic panel view. 

8. Fantastical Extraterrestrials

In one of the more surreal examples in the thread, filmmaker Dave Villalva shared how Runway’s text-to-image, text-to-video, and image-to-video generation capabilities could be used to develop clips of fantastical, Avatar-style alien creatures in lush, sci-fi-inspired natural landscapes.

The results were achieved by entering a prompt that detailed the camera angle (aerial or close-up), the subject of the shot, and descriptive tags like “surreal sunrise, colorful, green, purple, masterpiece,” which helped guide the overall style of the content.
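
As a small illustration of that prompt structure, the helper below (entirely hypothetical, not part of any Runway API) assembles the three parts into a single prompt string.

# A hypothetical helper mirroring the angle / subject / style-tags structure.
def build_prompt(angle: str, subject: str, tags: list[str]) -> str:
    return f"{angle} shot of {subject}, {', '.join(tags)}"

print(build_prompt(
    "aerial",
    "bioluminescent alien creatures grazing in a misty valley",
    ["surreal sunrise", "colorful", "green", "purple", "masterpiece"],
))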

9. AI-generated Movie Trailers

Perhaps the most striking example in the thread is an AI-generated trailer called THE OUTWORLD, created by a user known under the X handle maxescu.

This trailer features detailed, high-definition images produced by Leonardo AI, video content made in Runway, and music taken from Envato. In the comment section for the YouTube version of the trailer, Max claims that the entire video took just a single afternoon to produce. 

10. Bringing X Profile Pictures to Life

Finally, this example, shared by a user known by the X handle Satya Nutella, shows how generative AI can be used to add animation to social profile pictures.

While this example isn’t as developed as others on this list, it highlights that motion can be applied to all sorts of digital assets to create exciting content. 

The Bottom Line

A captivating video always starts with two things: imagination and a good story. But the tools displayed here were simply unavailable a few years ago.

Thousands of storytellers have likely seen their good ideas wither and die on the vine because it takes a studio and an army to make a good show or movie.

Is it too much of a stretch to think a future animated or special effects masterpiece will not come from a Hollywood VFX studio but through the careful and clever use of video AI tools?

