We speak with Mathieu Nouzareth, the US CEO of The Sandbox, about how generative artificial intelligence (AI), a type of AI technology that can produce various kinds of content such as text, images and audio, can help shape experiences and environments inside the metaverse, as well as some current generative AI applications.
What is the role of generative AI in the metaverse?
Generative AI plays a crucial role in building a stickier metaverse. The metaverse relies on fresh narratives and novel interactions to keep people coming back, and delivering that is an enormous challenge for every metaverse platform and creator. People simply won’t return if the experience doesn’t dramatically improve over the coming years.
By generating realistic and diverse virtual environments, objects, words and audio, these tools have a tremendous impact on the ability to deliver virtual worlds at scale. In fact, building diverse, dynamic worlds is one of the toughest parts about building a new metaverse – and generative AI promises to accelerate that work by providing some of the more tedious-but-necessary building blocks of virtual worlds.
Ultimately, players will benefit because they will enjoy more high-quality content, since creators can be more creative and produce content more efficiently in terms of both time and resources. That’s a win-win for everyone.
In what ways will generative AI shape the types of experiences and environments inside the metaverse?
First and foremost, generative AI will provide accelerated personalization at scale. That means that the tools will be able to take a user’s profile and past behavior and then craft a customized experience in real-time: unique visuals, personalized audio tracks, user-specific interactions with non-player characters (NPCs) and even custom quests are all possibilities here.
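As a rough illustration of what personalization at scale could look like under the hood, here is a minimal Python sketch that ranks candidate quests against a user’s profile and past behavior. The profile fields, quest tags and scoring rule are all assumptions made for this example, not a description of The Sandbox’s actual systems.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Hypothetical profile: tags the player has engaged with, weighted by frequency.
    interests: dict[str, float] = field(default_factory=dict)

@dataclass
class Quest:
    name: str
    tags: list[str]

def rank_quests(profile: UserProfile, quests: list[Quest]) -> list[Quest]:
    """Order candidate quests by overlap with the player's past behavior."""
    def score(quest: Quest) -> float:
        return sum(profile.interests.get(tag, 0.0) for tag in quest.tags)
    return sorted(quests, key=score, reverse=True)

# Example: a player who mostly explores and collects sees those quests first.
player = UserProfile(interests={"exploration": 0.8, "collecting": 0.6, "combat": 0.1})
quests = [
    Quest("Clear the arena", ["combat"]),
    Quest("Map the northern isles", ["exploration"]),
    Quest("Gather rare seeds", ["collecting", "exploration"]),
]
for quest in rank_quests(player, quests):
    print(quest.name)
```

In a full deployment, a generative model would then produce the quest text, visuals and NPC dialogue on top of a ranking step like this one.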
Most of these benefits come from generative AI’s powerful capability for procedural content generation.
This is when generative AI creates virtual environments, objects and characters within the metaverse. This can include everything from terrain and vegetation to buildings, furniture and in-game items. At scale, procedural content generation creates more diverse and realistic virtual worlds while also reducing the workload for human designers and programmers. That’s a massive shift in how players will experience the built environment in the metaverse, as it should open up new possibilities for innovation in narrative, gameplay and level design.
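For readers who want a concrete sense of what procedural generation means in code, here is a minimal, self-contained Python sketch that lays out a small tile map deterministically from a seed. Real engines use far richer noise functions, asset pipelines and learned models; the tile names and probabilities below are invented for illustration and do not come from The Sandbox.

```python
import random

def generate_terrain(width: int, height: int, seed: int) -> list[str]:
    """Produce a tiny tile map deterministically from a seed.
    The same seed always yields the same world, so worlds can be
    shared or regenerated without storing every tile."""
    rng = random.Random(seed)
    tiles = {"water": "~", "grass": ".", "forest": "T", "mountain": "^"}
    rows = []
    for _ in range(height):
        row = []
        for _ in range(width):
            roll = rng.random()
            if roll < 0.2:
                row.append(tiles["water"])
            elif roll < 0.7:
                row.append(tiles["grass"])
            elif roll < 0.9:
                row.append(tiles["forest"])
            else:
                row.append(tiles["mountain"])
        rows.append("".join(row))
    return rows

# Two different seeds give two different (but reproducible) worlds.
for line in generate_terrain(width=16, height=6, seed=42):
    print(line)
```

Generative AI extends this idea by learning what plausible worlds look like, rather than relying on hand-tuned probabilities.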
What are some current generative AI applications in the metaverse?
There are three main components of the metaverse: the visuals, the narrative and the audio. Within each of these, there are numerous applications for generative AI to shape how users interact, engage, explore and play in the metaverse.
The visuals are the most obvious application of generative AI in the metaverse. In addition to the procedural content mentioned earlier, there’s also the ability to generate in-game items that reflect what a user owns in their wallet.
Beyond using avatars to access a custom set of in-game items, generative AI can also be used to create custom avatars for users within the metaverse. By analyzing data about a user’s preferences and physical characteristics, generative AI models can create personalized avatars that closely match their real-world appearance and personality.
Another fascinating application is to use generative AI to drive storytelling, as the narrative drives users through the virtual environment. Typically, narrative designers build a series of story journeys that form pathways through the experience. However, that’s extremely difficult to do at a scale comparable to an open world environment. Generative AI tools can craft individual narratives that dynamically create user-specific storylines. These narratives can evolve over time and adjust based on each user’s past behavior, creating a different experience for everyone. That’s not only interesting for users – it also makes for games that aren’t as easy to beat, since no one can search online for cheats!
Finally, music is a compelling use case for generative AI. For instance, users spent over 2 million minutes playing in The Sandbox in 2022 – that’s an incredible amount of time, with each minute requiring an audio track. Custom-programming each minute of gameplay isn’t feasible, either financially or logistically. Generative AI can build a better auditory experience for individual players based on their current avatar, preferences, past behavior and ownership of digital assets.
And we’ve only begun to scratch the surface here! Generative AI can also enhance gameplay and be used to create music within the metaverse that responds in real-time to user actions and environmental factors. This can create a more immersive and dynamic experience for users and can also help to reduce the workload for human composers.
What are some challenges standing in the way of wider application?
We’re obviously in the super early stages of both the metaverse and generative AI. Funding is certainly rushing into the AI space, which is going to make it increasingly difficult to separate the hype from the substance. In fact, that’s very similar to the recent metaverse cycle. All of this is much easier said than done.
Generative AI will face many challenges in preserving authenticity and creativity, especially as AI tends to revert to the mean when creating. We risk a world where games and virtual experiences are increasingly similar, due to their shared generative AI DNA. As we deploy generative AI in the metaverse, it needs to be based on data that’s both diverse and representative of the kinds of objects and environments that users are likely to encounter. There’s simply not enough data available yet to train effective generative AI models for the metaverse.
There are also potential intellectual property issues that could arise with the use of generative AI in the metaverse. For instance, who owns the rights to content created by a generative AI model, and how will those rights be enforced? When we’re talking about dozens of virtual worlds, each with theoretically unlimited space, that’s a massive amount of ground to cover. We don’t have a legal framework for any of this yet.
Ensuring the quality and consistency of AI-generated content within the metaverse is another major challenge. If generative AI is deployed in real time for individual user interactions, there are theoretically infinite combinations of words, visuals and audio it could deliver. We’ll need shared industry standards, such as via the Open Metaverse Alliance, to validate and curate content so that it meets certain standards and is appropriate for the context in which it is being used.
What should investors know about generative AI and the metaverse?
The intersection of generative AI and the metaverse represents a particularly interesting opportunity for investors. Generative AI makes it possible to create new objects, environments and experiences within the metaverse without the need for human designers or programmers. This could lead to a new wave of innovation and creativity within the metaverse, as well as new business opportunities for companies that can leverage this technology.
There’s also a massive boon ahead for individual creators and game studios, who can not only be more efficient in building out massive playable worlds but also more creative. Consider this: rather than having to remake individual items for every game, they can use generative AI to build the base layer and then deploy their creativity to enhance gameplay, as well as the visual and auditory elements. That unlocks new possibilities for everyone – and also makes it easier for smaller studios and individual creators to compete with more established, well-funded studios and teams.
How will generative AI impact skins and avatars, which are such a significant source of revenue for metaverse platforms?
Players are passionate about their avatars. We recently conducted a research report on the role of avatars in the metaverse; avatars are basically the building blocks of digital identity. We found that these digital representations are central to how players express themselves in the metaverse. In fact, 20% of Roblox users change their avatars daily! And that doesn’t account for the millions of digital fashion items purchased to augment avatars with new looks, features and even utilities (like accessing a special area of the metaverse).
Generative AI has the potential to make launching customizable avatars much more seamless and feasible for more brands. For instance, brands could leverage generative AI to allow consumers to upload photos that are then transformed into an avatar that’s built by the generative AI based on a few pre-defined parameters. In this way, a brand could scale an avatar collection to millions — all without having to create complex randomizers or prohibitively expensive artwork.
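To make that pipeline concrete, here is a hedged Python sketch of how a photo-to-avatar flow might be structured: the brand pre-defines a small set of allowed parameters, a stubbed-out model call maps the photo onto those parameters, and every prediction is clamped to the brand’s options. The function names and parameter set are hypothetical and are not The Sandbox’s API or any brand’s actual workflow.

```python
from dataclasses import dataclass

# Parameters the brand pre-defines; the model may only choose from these options.
ALLOWED = {
    "hair_style": ["short", "long", "braided"],
    "outfit": ["streetwear", "formal", "sport"],
    "accessory": ["none", "cap", "glasses"],
}

@dataclass
class AvatarSpec:
    hair_style: str
    outfit: str
    accessory: str

def infer_avatar_params(photo_bytes: bytes) -> dict[str, str]:
    """Placeholder for a generative/vision model call.
    A real system would send the photo to a model and get back predicted
    attributes; here we return fixed values purely for illustration."""
    return {"hair_style": "short", "outfit": "streetwear", "accessory": "glasses"}

def build_avatar(photo_bytes: bytes) -> AvatarSpec:
    predicted = infer_avatar_params(photo_bytes)
    # Clamp every prediction to the brand's allowed options.
    safe = {
        key: value if value in ALLOWED[key] else ALLOWED[key][0]
        for key, value in predicted.items()
    }
    return AvatarSpec(**safe)

print(build_avatar(b"fake-photo-bytes"))
```

The key design point is that the generative model does the creative mapping while the brand’s pre-defined parameters keep every generated avatar on-brand and renderable at scale.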
For investors and brands, this is an incredible opportunity that will only grow exponentially as we spend more and more time on digital platforms. Our avatars represent us across different communities and platforms, which unlocks massive revenue streams, as well as natural touchpoints to deepen connection with a brand’s biggest fans. For instance, one of our most successful avatar collections is Snoop Dogg’s Doggies, which gives his fans closer proximity to the metaverse world that Snoop has been carefully curating over the past two years. The opportunities really are tremendous – and generative AI streamlines the work required to deliver avatars at scale.
Mathieu Nouzareth is the US CEO of The Sandbox, the gaming metaverse. He has several decades of experience in the traditional gaming sector and holds an MBA from Pace University in New York.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.