© 2022 CyberNews - Latest tech news, product reviews, and analyses.


Meta's ‘Make-A-Scene’ AI will bring imagination to life in the metaverse

Meta has introduced Make-A-Scene, a multimodal generative AI method that will allow people to “describe and illustrate their vision” both in the physical world and in the metaverse.

Many state-of-the-art AI systems generate images from text prompts, yet their compositions are difficult to predict and control effectively. As a result, your creative vision may never be fully realized: the generated image might face the wrong direction, or come out too small or too large.

Make-A-Scene, in turn, aims to solve this problem. By combining a user-drawn sketch with a text prompt describing the desired result, it allows images to be created with greater specificity across a variety of elements, forms, arrangements, depths, compositions, and structures. In tests with human evaluators, images generated from both text and sketch were almost always (99.54% of the time) rated as better aligned with the original sketch, and often (66.3%) as better aligned with the text prompt.
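Conceptually, conditioning on both inputs can be pictured as feeding the model one combined token sequence: the text prompt and the sketch (a quantized segmentation map) are each tokenized, and the model then generates image tokens after that joint prefix. The toy sketch below is only an illustration of that idea, not Meta's code; all names and token values are hypothetical.

```python
# Toy illustration (not Meta's implementation): Make-A-Scene-style
# conditioning concatenates text tokens and "scene" (sketch) tokens
# into a single prefix; an autoregressive model would then predict
# image tokens that follow this prefix.

def build_conditioning_sequence(text_tokens, scene_tokens):
    """Concatenate text and scene tokens into one conditioning prefix."""
    return text_tokens + scene_tokens

text = [101, 205, 330]    # hypothetical tokens for "a cat on a beach"
scene = [7, 7, 3, 3, 9]   # hypothetical quantized segmentation-map tokens
prefix = build_conditioning_sequence(text, scene)
print(prefix)  # [101, 205, 330, 7, 7, 3, 3, 9]
```

Because the sketch tokens sit directly in the model's context, the generated image can follow the sketched layout while the text still controls content and style.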

“It’s not enough for an AI system to just generate content, though. To realize AI’s potential to push creative expression forward, people should be able to shape and control the content a system generates,” Meta argued in a press release.

This concept will allow users to create digital worlds in the metaverse and produce high-quality art regardless of their personal artistic skills.

“Through scientific research and exploratory projects like Make-A-Scene, we believe we can expand the boundaries of creative expression — regardless of artistic ability,” Meta explained.

AI artists Sofia Crespo, Scott Eaton, Alexander Reben, and Refik Anadol have already experimented with Make-A-Scene, testing how it expands the boundaries of creative expression both for professional artists and regular users.

“It’s going to help move creativity a lot faster and help artists work with interfaces that are more intuitive,” Crespo said.

“It made quite a difference to be able to sketch things in, especially to tell the system where you wanted things to give it suggestions of where things should go, but still be surprised at the end,” Reben added.
