Make-A-Scene, Meta's new AI research tool, is designed to let users transform text and simple sketches into digital imagery.
Meta's tool supports creative expression, for instance letting someone create a digital painting without picking up a paintbrush, or instantly generate storybook illustrations to accompany the words.
Meta has showcased the exploratory artificial intelligence (AI) research concept to illustrate how it could allow people to bring their visions to life.
Make-A-Scene enables people to create images using text prompts and freeform sketches. Prior image-generating AI systems typically used text descriptions as input, but the results could be difficult to predict.
For example, the text input “a painting of a zebra riding a bike” might not reflect exactly what you imagined; the bicycle might be facing sideways, or the zebra could be too large or small.
With Make-A-Scene, this is no longer the case. It demonstrates how people can use both text and simple drawings to convey their visions with greater specificity using a variety of elements.
The tool captures the scene layout so that sketches can serve as input, though it can also generate its own layout from a text-only prompt if that's what the creator chooses. The model focuses on learning the aspects of the imagery most likely to matter to the creator, such as objects or animals.
Crespo, a generative artist focusing on the intersection of nature and technology, used Make-A-Scene to create new hybrid creatures.
While creators are at the centre of this, Make-A-Scene is not designed just for artists; users with minimal or no artistic skill can also use the tool to realise their imagination.