
Meet Fugatto -- an impressive new AI sound model from Nvidia

By Nigel Powell


Graphics and AI giant Nvidia has announced a new AI model called Fugatto (short for Foundational Generative Audio Transformer Opus 1). Developed by an international team of researchers, it is being billed as "the world's most flexible sound machine", taking on both ElevenLabs and AI music maker Suno in one hit.

With this model, we're about to witness a completely new paradigm in how sound and audio are manipulated and transformed by AI. It goes way beyond converting text to speech or producing music from text prompts and delivers some genuinely innovative features we haven't seen before.

Fugatto isn't currently available to try, as it exists only as a research paper. It is likely, though, that the model will be made available to one or more Nvidia partners in the future, and at that point we will start to see some significant changes in how sound is developed.

Key to Fugatto is its ability to exhibit emergent capabilities, which the team is calling ComposableART. This means it can do things it was not trained to do by combining different capabilities in new ways.

The authors of the launch research paper describe how the model can produce a cello that shouts with anger, or a saxophone that barks. It may sound silly, but some of the demonstrations on the project's homepage are very impressive.

Examples include the ability to instantly convert speech into different accents and emotional intensities, or to seamlessly add and delete instruments in an existing music performance.

We have seen some of this from other models, such as OpenAI's Advanced Voice, ElevenLabs' SFX model or Google's MusicFX experiment, but never all of it in one model.

One of the most striking examples the team puts forward is the on-the-fly generation of complex sound effects, some of which are completely new or wacky.

Video game developers and those in the movie industry will either be salivating or sweating at the news that almost any kind of soundscape will soon be AI-generated at the touch of a button.

The power of all this technology is delivered via a model with 2.5 billion parameters, trained on a large bank of Nvidia's own GPUs, as you might expect.

As with many of these early research demonstrations, it's likely to be a while before we see a fully fledged product released onto the market. Creating a four-second audio clip of a thunderstorm or a mechanical monster is one thing; making it usable in the real world is another.

However, there's no question that the technology behind this new model shows an important bridge has been crossed in the machine's ability to master another art form. It may be the first time we've seen AI generative power of this type, but it's certainly not going to be the last.

