Startup Anthropic is now among the AI vendors accused of infringing on copyrighted data.
A group of music publishers, including entertainment giant Universal Music Group, on Oct. 18 sued Anthropic -- which has a $4 billion deal to provide foundation models to Amazon -- for allegedly distributing copyrighted lyrics within its generative AI model Claude 2.
The publishers claim that despite Anthropic purporting to be an "AI safety and research" company, its model "generates identical or nearly identical copies" of lyrics.
For example, when users ask Claude to provide lyrics to popular songs such as "Sweet Home Alabama" by rock band Lynyrd Skynyrd, "Halo" by Beyoncé and "Uptown Funk" by Bruno Mars, the chatbot provides either all or most of the lyrics, according to the lawsuit.
The publishers also accuse Anthropic of depriving them and their songwriters of "control over their copyrighted works and the hard-earned benefits of their creative endeavors" while unfairly competing against platforms that license content and respect copyright law.
The Anthropic suit joins a host of lawsuits against other AI companies. For example, Getty Images sued generative AI image model developer Stability AI in January for allegedly infringing on intellectual property rights, including copyright.
OpenAI, perhaps the top generative AI developer with an $11 billion partnership with Microsoft, also faces a lawsuit for breaching copyright laws after noted authors claimed the AI vendor generated summaries of their work.
Meanwhile, on Oct. 17, Meta, Microsoft and other tech companies were sued by former Arkansas governor Mike Huckabee, the author of several nonfiction and fiction books, and other authors for allegedly using their books to train AI.
The publishers' suit and fair use
While Anthropic has yet to respond to the allegations, some vendors involved in copyright legal cases and proceedings, including OpenAI, have responded with a fair use argument. Some legal observers say Anthropic could do the same.
What distinguishes the music publishers' suit from some of the others that have popped up since the generative AI wave exploded in late 2022 is that there is no question that the content Anthropic allegedly infringed on is copyrightable, said Katie Gardner, an intellectual property lawyer with the firm Gunderson Dettmer.
In other cases, plaintiffs allege only that models must have scanned the internet, which led them to summarize their works.
Some other AI vendors may be able to convince judges that their use of copyrighted materials falls under fair use -- the legal doctrine that permits some use of copyrighted work under freedom of expression -- because the actual use differs from what the content owners intended. But Anthropic's fair use argument here would need to be stronger, Gardner said.
In this case, the publishers identified specific registered works, and they identified particular reproductions of these works.
Moreover, the publishers noted in their legal action that a market for licensing copyrighted works already exists. Streaming music platforms such as Apple Music and Spotify operate by legally licensing artists' work.
"That argument itself is going to make the hurdle for a fair use argument a little bit more difficult," Gardner said. "When you have companies that are paying to use copyrighted works and then you have other companies that are using it without payment, it's much harder for them to say that it's fair use."
Moreover, even though Claude users have asked the large language model for the style of specific artists, the prompts do not matter as much as the output, said Michael Bennett, director of education curriculum and business lead for responsible AI at Northeastern University.
"When you are asking a large language model to show you lyrics of the style of Drake and it's coming back with, in some instances, actual Drake lyrics," Bennett said, referring to the popular Canadian rapper and singer, "that does not seem like that would be protected by fair use. That seems like simply copying and regurgitating."
Although some artists, including Drake, have asked consumers to use AI to create similar styles of their work, that doesn't excuse the AI vendors, Gardner said.
"The problem is the artists don't own the rights," she said.
Music involves distinct rights: rights to the sound recording and rights to the underlying composition, and different parties can own each. In Anthropic's case, the publishers own all the rights to the lyrics.
A mistake, implications and the publishers' motive
It's possible in Anthropic's case that it was unaware that Claude could do this, Bennett said.
"There are many instances, most of them still undiscovered, in which copyright protected material has been introduced to the training data of generative AI," he said. "Sometimes, unbeknownst to those that own those systems, that copyrighted work is showing up in responses to prompts. It's pretty shocking."
Whether Anthropic knew or not, lawsuits such as these highlight the need for more protective guardrails, Gardner said.
For example, OpenAI recently updated its image-generating model DALL-E so users can't query the model for images in the style of living artists. These kinds of protective measures are necessary if creators of generative AI systems want to argue for fair use, Gardner said.
For the publishers, part of their motivation may be to follow Getty's lead.
Although Getty sued Stability in January, it later partnered with Nvidia to create its own AI image-generating tool.
Universal Music, the lead publisher in the lawsuit, has reportedly offered commercial partnerships to companies combining generative AI and music. Universal has also revealed that it will develop AI tools with YouTube that compensate music rights holders.
Taking legal action against AI vendors for using content without permission while also employing AI technology has proven strategic for Getty and could be a viable strategy for other companies, Bennett said.
Anthropic did not immediately respond to a request for comment.
Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.