OpenAI’s AI Music Model Could Make Human Composers Obsolete—Or Not

After launching text and image generation models, OpenAI has set its sights on the music industry. The company is reportedly developing a model that can create music from text or audio prompts. It is said to be training the system to produce original compositions that respond to user instructions.

The upcoming model is expected to let users describe the kind of music they want, such as a mellow piano tune for a short film, or upload an audio input like a vocal track for accompaniment. 

Reports suggest that OpenAI is collaborating with students from The Juilliard School to annotate musical scores for training the system. This move is intended to refine the model's understanding of rhythm, harmony, and structure, which are key components of meaningful musical output.

If OpenAI enters the music generation space, it will compete with existing players like Suno, Udio, Beatoven.ai and Mubert, which already offer tools that generate songs or instrumentals from text prompts. Google’s MusicLM and Meta’s MusicGen are among the other major research models in this space, capable of producing short compositions based on written descriptions.

According to a report, the global AI music generator market, estimated at around $1.54 billion in 2025, is anticipated to rise to $1.98 billion by 2026 and could reach nearly $14.04 billion by 2034. This represents a robust compound annual growth rate (CAGR) of approximately 28.5% throughout the 2026–2034 forecast period.

“OpenAI’s entry into AI music would be a significant strategic move, given that they need to add more engaging features for their 800M+ ChatGPT users so they can spend more time on the app and OpenAI can improve retention,” Beatoven.ai co-founder and CEO Mansoor Rahimat Khan told AIM.

Khan believes OpenAI’s entry could put pressure on smaller players operating in the consumer segment. “It poses significant risks for companies like Suno, Udio, and Beatoven, who are competing at the consumer level, given OpenAI’s distribution advantage over relatively smaller startups,” he said. 

However, Beatoven.ai’s sound effects model and personalised B2B offerings are likely to remain unaffected, as these are built on fine-tuned models trained on customer-specific data for various use cases, Khan added.

Meanwhile, OpenAI has not confirmed whether the model will be released as a standalone product or integrated into existing platforms such as ChatGPT or Sora. No launch timeline has been disclosed yet.

Will Music Producers Use AI Tools?

“I don’t think AI is ever going to take over human-made music,” said Charlie Puth in a recent Instagram post. He explained that human-made music is special because of its imperfections, while AI-generated music feels boring and bland. “AI will take out mistakes,” he said, adding that these mistakes add to the vibe.

Puth said that AI music has no heart yet, and while AI can assist musicians, it still lacks the emotional nuances and imperfect timing that make human compositions feel authentic. "AI is not going to wipe us off the planet creatively, like any other piece of technology that comes around every decade. We humans need to learn how to work with it to make music that no one has heard before," said Puth.

On similar lines, Samewanbud Syiemlieh, a music producer, told AIM that AI-generated music still lacks true emotional depth. “AI-generated music can only imitate human emotions,” he said, adding that from an artistic point of view, it may never fully replicate the creative process because “music is art, and creating it often requires a spontaneous and subconscious approach.”

He explained that many musicians, especially from the late 1990s and early 2000s, relied heavily on their subconscious minds while writing songs. Sharing his own experience, he said, “As a music producer, I usually create songs from scratch, but when it comes to pre-made tracks or covers, I sometimes use AI. That’s because such music has already been polished and produced by other musicians.”

Rahul Raghavan, a music composer, told AIM at Cypher 2025 that music is deeply human and emotional. He believes artists must decide how much of their performance should involve AI to keep it ethical and transparent. "You can't generate everything with AI and just stand there pushing a few knobs; the audience will know," he said.

Sharing his own experience, Rahul added that during one of his sessions, he used AI to generate about 30% of the sound effects. “That’s where I needed AI’s help,” he said. While he played the instruments himself, he relied on AI to create complex sounds such as an asteroid striking the Earth, which would have been difficult to produce manually.

Meanwhile, Indian music composer and singer A.R. Rahman is also experimenting with his AI music band, Secret Mountain. It is a digital metahuman band that combines AI-powered avatars with music and storytelling to create a global digital music experience.

Who Owns AI-Generated Music?

OpenAI’s music generation tool might raise questions about copyright and licensing. Previous generative music systems have faced scrutiny over whether their training data included copyrighted works without permission. 

"There are possibilities of artists suing OpenAI if they use their content without permission, as music is the toughest space when it comes to copyrights. However, OpenAI will likely strike a deal with labels and publishers to mitigate these risks," said Khan. The AI startup will likely have to address how it sources and licenses musical data to avoid legal complications once the model goes public.

If launched, the tool could help creators like filmmakers, podcasters, and musicians experiment with new ideas. It would also build on OpenAI’s earlier work in voice and speech generation.

The post OpenAI’s AI Music Model Could Make Human Composers Obsolete—Or Not appeared first on Analytics India Magazine.
