Meta introduces new AI model 'Make-A-Video'

Paawan Sunam


Make-A-Video is a new AI system that builds on Meta AI’s recent progress in generative technology research and lets users turn text prompts into brief, high-quality video clips.

Designed for creators and artists, with potential applications beyond them, Make-A-Video learns what the world looks like from paired text-image data and how the world moves from video footage with no associated text. As part of its commitment to open science, Meta has also shared the details in a research paper and plans to release a demo experience.

With just a few words or lines of text, Make-A-Video can bring imagination to life, creating videos full of vivid colors, characters, and landscapes. The system can also generate videos from still images or produce new videos that are similar to existing ones.
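Meta has not released a public API or package for Make-A-Video, so purely as an illustration of the three capabilities described above (text-to-video, image-to-video, and video variation), the sketch below imagines what such an interface might look like. The package name make_a_video, the class MakeAVideo, and the methods generate, animate, and vary are hypothetical assumptions, not part of any real library.

```python
# Hypothetical sketch only: Make-A-Video has no public package or API.
# The module, class, and method names below are illustrative assumptions
# used to picture the capabilities described in the article.
from make_a_video import MakeAVideo  # hypothetical package

model = MakeAVideo.from_pretrained("make-a-video-base")  # hypothetical checkpoint

# Text-to-video: a few words of text become a brief, high-quality clip.
clip = model.generate(
    prompt="a teddy bear painting a portrait",
    num_frames=16,
    fps=8,
)
clip.save("teddy_bear.mp4")

# Image-to-video: animate a single still image.
clip = model.animate(image="landscape.png", num_frames=16)

# Video variation: create a new video similar to an existing one.
clip = model.vary(video="input_clip.mp4")
```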


Make-A-Video follows the announcement earlier this year of Make-A-Scene, a multimodal generative AI method that gives users control over the AI-generated content they create. Make-A-Scene demonstrated how people can create photorealistic illustrations and storybook-quality art using words, lines of text, and freeform sketches.

Make-A-Video was trained on publicly available datasets, which adds a level of transparency to the research. Meta is sharing this generative AI research and its results with the community for feedback, and will continue to use its AI framework to refine and evolve its approach to this emerging technology.
