RunwayML Gen-2 is the biggest update yet to what was already the preeminent text-to-video generative AI. This upgrade brings a host of improvements, not least to visual fidelity. So what's new?
What’s new in RunwayML Gen-2?
Runway is a multimodal AI system that can generate novel videos from text, images, or video clips. Try “What is Generative AI? All You Need to Know” or “Midjourney Video rumours – Everything we know so far” for more on this topic.
The latest Runway research into video generation gives creatives a toolkit of 30+ “Magic Tools”, including the following modes:
Mode 01: Text to Video
Exclusive to Gen-2, now you can synthesize new videos in any style using nothing but a text prompt.
Mode 02: Text + Image to Video
Generate a video in the style of an image you provide alongside a text prompt.
Mode 03: Image to Video
Content-guided video synthesis using just an input image (Variations Mode).
Mode 04: Stylization
Transfer the style of any image or prompt to every frame of your video.
Mode 05: Storyboard
Turn mockups into fully stylized and animated renders.
Mode 06: Mask
Isolate subjects in your video and modify them with simple text prompts.
Mode 07: Render
Turn untextured renders into realistic outputs by applying an input image or prompt.
Mode 08: Customization
Unleash the full power of Gen-2 by customizing the model for even higher fidelity results.
With the mission to usher in a “new era of human creativity”, it’s clear that this New York-based artificial intelligence company has a challenging but rewarding path ahead. Since Gen-1, the collective work of researchers Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, and Anastasis Germanidis has helped improve the fidelity of the multimodal AI system to the point that video professionals are starting to use Runway in their workflows.
How to use RunwayML Gen-2
Step 1
Prompt
Input a text prompt. This is limited to 320 characters but can include a reference image.
Step 2
Customize
Customize the advanced settings. This includes features like upscaling and rendering without a watermark, though these will require a paid subscription.
Step 3
Choose
Choose your fighter. As seems to be standard practice for every popular visual generative AI right now (I’m lookin’ at you, Midjourney), you’ll be presented with multiple preview options for your final render. Click “Generate this” on the one you like and Runway will render it.
Step 4
Rate
Optionally, you can rate your result. This helps Runway improve the model with feedback training and ultimately benefits every end user, so I’d recommend it.
Step 5
Download
Hover over your generation and you’ll see a download icon. Click that, and it will be saved to your “Runway assets” on the site.
FAQ
How much does Gen-2 cost?
Gen-2 costs 5 credits per second of video generation. 1 credit = $0.01.
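If you want to sanity-check the maths, here’s a minimal Python sketch of the credit pricing described above. The `gen2_cost` helper is purely illustrative and not part of any Runway API:

```python
# Rough cost estimate based on the pricing above:
# 5 credits per second of generated video, 1 credit = $0.01.
CREDITS_PER_SECOND = 5
USD_PER_CREDIT = 0.01

def gen2_cost(seconds: float) -> float:
    """Estimated price in USD for a Gen-2 clip of the given length."""
    return seconds * CREDITS_PER_SECOND * USD_PER_CREDIT

# A maximum-length 4-second generation:
print(f"${gen2_cost(4):.2f}")  # $0.20
```

In other words, a maximum-length 4-second clip works out to 20 credits, or roughly $0.20.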
Is Runway Gen-2 available to the public?
Yes! It’s available here now.
How long can I make a video with Runway?
Individual generations are currently limited to 4 seconds, but we expect that to improve along with the efficiency of the underlying AI tech.
What’s the maximum resolution of Gen-2?
The default resolution is 768×448, but paid users can upscale up to 1536×896.
Is Gen-2 available for mobile?
Yes, Gen-2 is available on iOS! Download the app here.
Gen-2 is not available for Android yet.