
Runway Gen-2 — What’s new?

Runway Gen-2 just got a huge upgrade
Last Updated on January 30, 2024
RunwayML Gen-2 is now available: the latest update to Runway's text-to-video AI.

RunwayML Gen-2 is the biggest update yet to what was already the preeminent text-to-video generative AI. Filmmakers are calling the November 2023 update “game-changing” and a “pivotal moment in generative AI”. This upgrade brings a host of improvements, not least in visual fidelity. So, what’s new?

What’s new in Runway Gen-2?

It’s been said that “every abundance creates a new scarcity”. If you tune into RunwayTV, you might hear the phrase during a live broadcast of what’s possible today with the world’s most exciting AI video generator, Runway Gen-2.

Runway (also stylized Runway ML) is a multimodal AI system that can generate novel videos from text, images, or video clips. The software platform enables real-time collaboration, with creative tools designed for storytelling and professional composition.

On November 2nd, 2023, Runway announced via X (formerly Twitter) that Gen-2 had been updated. While not billed as a new major version (that will be Runway Gen-3), it brings significant quality upgrades to both the text-to-video and image-to-video algorithms.

We have released an update for both text to video and image to video generation with Gen-2, bringing major improvements to both the fidelity and consistency of video results.

Runway, via X (formerly Twitter)

This Gen-2 upgrade is now live across both browser-based and mobile application versions of Runway.


On the Runway website, just underneath the Home button, you’ll find a live broadcast demonstrating the possibilities of Runway tools. My favorite so far is the cinematic short directed by Jordan Rosenbloom.

Text-to-speech dialogue

TTS dialogue is now possible with Runway. The generative AI platform offers several audio-based tools, including:

  • Clean Audio, for removing noise from a recording.
  • Remove Silence, for truncating a recording without unnecessary periods of silence.
  • Transcript, for transcribing the words spoken in an audio recording.
  • Subtitles, for time-stamped transcriptions to align with video.
  • Text-to-speech, for generating an audio file with your text spoken out loud.
Runway provides generative AI tools for audio, including text-to-speech (TTS).

Voices can be searched with descriptive terms such as masculine, feminine, American, British, calm, audiobook, or narration. With 20 voices to choose from, you’ll have to hear it to believe it! Try it yourself with free credits on the official Runway website.

Runway Gen-2 motion brush

Motion brush works with image prompts. This means that it will turn an image into a video — an animated version of that image.



Open Runway

Open Runway Gen-2. You can test this tutorial using the free trial or Basic Plan, which is also free and allows limited access to Gen-2. You may need to upgrade to the $12/month Standard Plan for usable footage, however.



Prompt Gen-2 with an image

Prompt Runway Gen-2 with an image, using any of the following three input methods:

  • Upload an image from your computer. Simply drag and drop an image file into the Gen-2 UI.
  • Use the text-to-video panel (within Gen-2) to generate a new image from a text prompt.
  • Head to the standalone text-to-image tool in Runway, then drag and drop your generated image into Gen-2.



Open Motion Brush

Click “Motion Brush (BETA)” to start. This will be at the bottom of the prompt panel, underneath the “Text”, “Image”, and “Image + Description” buttons.



Create your mask (selection area)

Brush over the area you want to control. Here, you’re selecting which parts of the image will be animated; this selection is known as the “mask”. Anything outside the mask will remain static, like the input image.

There’s a slider near the top of the interface to change how thick your selection brush is. You’ll also find an eraser button, allowing you to brush with the negative effect, removing your selection (this does not erase the image itself).

This is an existing concept in both video editing and image editing called masking. If you’ve ever heard someone refer to a layer mask in Photoshop, it’s the same process.
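The masking concept can be sketched in a few lines of code. This is a conceptual illustration only, not Runway’s actual implementation: a boolean mask the same shape as the image marks which pixels receive motion, while everything outside it stays identical to the input frame.

```python
import numpy as np

# Conceptual sketch only (not Runway's implementation): a motion mask is
# a boolean map the same shape as the image. Brushed pixels are True and
# receive motion; unbrushed pixels stay identical to the input frame.
image = np.zeros((4, 4))            # toy 4x4 grayscale "frame"
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True               # "brush" over the centre region

def next_frame(frame, mask, delta=1.0):
    """Apply motion (here, a stand-in brightness shift) to masked pixels only."""
    out = frame.copy()
    out[mask] += delta
    return out

animated = next_frame(image, mask)
# Pixels outside the mask are untouched, like the static regions in Runway.
```

The eraser tool then corresponds to setting pixels in the mask back to `False`, which is why erasing your selection never alters the underlying image.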



Customize your motion controls

Head to the motion controls along the bottom of the user interface. You’ll see three options:

  • Horizontal (X-Axis)
  • Vertical (Y-Axis)
  • Proximity (Z-Axis)

Tweak these until you’re happy with the directionality of your animation. If at any point you want to scrap it all and try again, simply click “Clear” near the bottom right. This will reset all settings, and remove any mask (selection area) from your image.



Generate your video

When you’re happy with your settings, click “Save”.

This will return you to the Gen-2 UI, where you can hit “Generate” to watch your image transform into an animated video based on your settings!

The latest Runway research into video generation gives creatives a technological toolkit of 30+ “Magic Tools” with the following modes:

Mode 01: Text to Video

Exclusive to Gen-2, now you can synthesize new videos in any style using nothing but a text prompt.

Mode 02: Text + Image to Video

Generate a video in the style of an image you provide alongside a text prompt.

Mode 03: Image to Video

Content-guided video synthesis using just an input image (Variations Mode).

Mode 04: Stylization

Transfer the style of any image or prompt to every frame of your video.

Mode 05: Storyboard

Turn mockups into fully stylized and animated renders.

Mode 06: Mask

Isolate subjects in your video and modify them with simple text prompts.

Mode 07: Render

Turn untextured renders into realistic outputs by applying an input image or prompt.

Mode 08: Customization

Unleash the full power of Gen-2 by customizing the model for even higher fidelity results.

With the mission to usher in a “new era of human creativity”, it’s clear that this New York-based artificial intelligence company has a challenging but rewarding path ahead. Since Gen-1, the collective work of researchers Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, and Anastasis Germanidis has helped improve the fidelity of the multimodal AI system to the point that video professionals are starting to use Runway in their workflows.


How to use RunwayML Gen-2
Input a text prompt. This is limited to 320 characters but can include a reference image.
Customize the advanced settings. This includes features like upscaling and rendering without a watermark, though these will require a paid subscription.
Choose your fighter. As seems to be the adopted practice for every popular visual generative AI right now (I’m lookin’ at you, Midjourney), you will be presented with multiple options to choose from for your final render. Click “Generate this” and it will do so.
Optionally, you can rate your result. This helps Runway improve the model with feedback training and ultimately benefits every end user, so I’d recommend it.
Hover over your generation and you’ll see a download icon. Click that, and it will be saved to your “Runway assets” on the site.

How much does Gen-2 cost?

Gen-2 costs 5 credits per second of video generation. 1 credit = $0.01, so it costs $0.05 per second of generated video.
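That pricing is simple to work out for any clip length. Here is a quick sketch using the rates quoted above (this is illustrative arithmetic, not an official Runway calculator):

```python
CREDITS_PER_SECOND = 5       # Gen-2 rate quoted at the time of writing
USD_PER_CREDIT = 0.01        # 1 credit = $0.01

def generation_cost(seconds):
    """Cost in USD of a Gen-2 generation of the given length."""
    return round(seconds * CREDITS_PER_SECOND * USD_PER_CREDIT, 2)

# A maximum-length 4-second clip costs 4 * 5 * $0.01 = $0.20.
print(generation_cost(4))    # 0.2
```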

Is Runway Gen-2 available to the public?

Yes! It’s available now on the Runway website.

How long can I make a video with Runway?

Individual generations are currently limited to 4 seconds, but we expect that to improve along with the efficiency of the underlying AI tech.

What’s the maximum resolution of Gen-2?

The default resolution is 768×448, but paid users can upscale up to 1536×896.

Is Gen-2 available for mobile?

Yes, Gen-2 is available on iOS; you can download the app from the App Store. Gen-2 is not available for Android yet.

Is Runway better than Stable Diffusion?

Based on user studies conducted by Runway, “results from Gen-1 are preferred over existing methods for Image to Image and Video to Video translation”. The actual result saw 73.53% of respondents preferring Runway.

Steve is an AI Content Writer for PC Guide, writing about all things artificial intelligence. He currently leads the AI reviews on the website.