AI creators must be equal to artists in eyes of the law, says researcher


A think tank has published a rebuttal of artists’ claims that AI is infringing copyright by using their work without express permission for training sessions – and it might just be the beginning of a protracted debate around ‘machine rights.’

The Center for Data Innovation (CFDI) describes itself as nonprofit and nonpartisan, but that claim will probably be disputed by disgruntled creatives, who are likely to reject its assertion that the rights of both human artists and those who program and use AI models for creative work are already clearly established in law.

Towards the end of a year that saw AI generator programs like ChatGPT and DALL-E 2 rise rapidly in popularity, reports emerged of artists banding together to reject what they perceive – perhaps with some justification – as an existential threat.

Illustrator Nicholas Cole and fashion designer Imogene Chayes were among the human creatives posting a “No AI” tag on the social media platform Twitter, in protest at another platform, ArtStation, for including digitally created works in its roster of film, game, and other media portfolios.

Picking up the thread, Wired magazine ran an article in January with the title: “ChatGPT stole your work. So what are you going to do?” The thrust of the piece was that tech companies are essentially stealing data – often in the form of works created and shared on the internet by artists – to feed into AI systems, which then use it to power profitable enterprises that make the tech companies behind them grow richer. Meanwhile, the perennial ‘starving artist’ remains a living proverb.

“This exploitative dynamic is particularly damaging when it comes to the new wave of generative AI programs like DALL-E and ChatGPT,” said Wired. “Without your content, ChatGPT and all of its ilk simply would not exist. Many AI researchers think that your content is actually more important than what computer scientists are doing. Yet these intelligent technologies that exploit your labor are the very same technologies that are threatening to put you out of a job. It’s as if the AI system were going into your factory and stealing your machine.”

Not so, claims CFDI director, author, and researcher Daniel Castro. The assertion that training AI generator systems on human-created works of art amounts to theft and that such artists should therefore be paid compensation by those who benefit from such programs is a fallacy that “falls apart on closer examination.”

To demand such concessions of AI art generators would be to relegate machine creators to a lower tier in the eyes of the law than their human counterparts, he argues, a contention he says is evidenced by existing laws that govern copyright and reproduction.

And in a move that will doubtless infuriate artists even more, Castro stresses that such legal protections also must extend to protecting the rights of AI-generated art creators, and not just their human equivalents – again, to avoid the two-tier legal system he fears.

On the other hand, AI and its human operators are just as bound by existing laws as they are protected by them: DALL-E and the like do not constitute free license to forgery. Meanwhile, more light needs to be shed on potential abuses in the digital image domain that are already classed as illegal but could proliferate thanks to the new technology.

If that sounds confusing – welcome to the tangled twenty-first century, where tech innovation is giving rise to thorny issues once relegated to the preserve of science fiction.

AI: plagiarism or persecution?

First, for the uninitiated, a word about how generative AI models are trained. An AI art generator such as DALL-E takes millions of images as input during programming, or “training” sessions, learning to apply the information it has absorbed to subsequent user requests for original artwork, for instance: “paint me a landscape in the Impressionist style.”
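The core of the legal argument turns on what a trained model actually retains. A drastically simplified sketch – this is a toy illustration, not how DALL-E or any real system works – shows the principle: training aggregates statistics across many works, and generation samples something new near those statistics rather than copying any single original.

```python
import random

# Toy "training set": each work is a list of brightness values (0-255).
training_works = [
    [12, 200, 90, 30],
    [20, 210, 80, 40],
    [16, 190, 100, 35],
]

def train(works):
    """'Training' here just averages statistics across all works.
    The originals are not stored -- only aggregated parameters remain."""
    n = len(works)
    return [sum(w[i] for w in works) / n for i in range(len(works[0]))]

def generate(params, jitter=10, rng=None):
    """Sample a new 'work' near the learned statistics, not a copy."""
    rng = rng or random.Random(0)
    return [p + rng.uniform(-jitter, jitter) for p in params]

params = train(training_works)
new_work = generate(params)
# new_work resembles the training distribution but matches no original exactly.
```

Real generative models learn billions of parameters rather than four averages, but the distinction the sketch captures – learned parameters versus stored copies – is the same one Castro's argument leans on.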

Artists say this amounts to plagiarism, and that AI-model users should first ask the “explicit permission” of living artists whose work is used in training sessions. Failure to do so constitutes “theft” – so goes the argument, as cited by Castro in his rebuttal.

Such calls for AI users to request permission are unfounded, Castro counters, and prejudice AI content generators by effectively holding machines to a “higher standard than human creators.”

“Seeking inspiration and learning from others is not theft,” said Castro. “It is not theft if someone watches a video legally, and that video inspires them to film their own unique creation. Indeed, TikTok and other social media platforms are filled with such videos inspired by related content.”

He goes on: “Similarly, writers, musicians, and other artists learn their craft by observing past creations. In fact, all creative works are shaped by past works, as creators do not exist in a vacuum. The inspections, impressions, and inspirations of the world around them are what give rise to new ideas.”

"Seeking inspiration and learning from others is not theft. It is not theft if someone watches a video legally, and that video inspires them to film their own unique creation."

Daniel Castro, director of the Center for Data Innovation

The issue of copyright and how that dictates whether one must seek permission to use an artist’s work also comes under scrutiny in the CFDI report. Though copyright confers certain rights of reproduction on the holder, its author argues, these do not universally apply in cases where art is already on public display.

“The law does confer certain rights to copyright owners, such as the right to reproduce a work, the right to prepare derivative works, the right to perform a work publicly, and the right to display a work publicly,” said Castro. “But if they choose to display their work in public, others can use their works in certain ways without their permission.”

He cites as examples photographers who can legally take pictures of paintings on display in an art gallery, or sculptures and graffiti freely available for viewing in public spaces. The digital realm where many modern artists have chosen to display their own works, for instance on a website or blog, constitutes much the same, he implies.

“For example, photographers can take pictures of sculptures or graffiti in public places even when those works are protected by copyright,” Castro said. “Copyright prevents photographers from selling those images, but it does not require them to get permission from the copyright owner to take photos.”

The same applies to musicians learning from a recorded, copyrighted song, so long as they do not replicate signature riffs or melodies and try to pass these off as their own original creations.

“There is no intrinsic rationale for why users of generative AI systems would need to obtain permission to train on copyrighted content they have legal access to,” said Castro. “Musicians might practice a copyrighted song they heard on Spotify hundreds of times to learn to play an instrument, or use their well-honed auditory memory to recall elements of pieces they have heard before. Learning from legally accessed works does not violate a copyright owner’s exclusive reproduction and distribution rights.”

Law applies to machines too

That said, AI-generator artists do not get to have things all their own way either. Castro warns that using a machine to generate images in the style of a well-known human artist and then passing them off as original works of said creator is still forgery.

“While generative AI allows users to create art similar to other artists, it does not allow anyone to misrepresent the creator or the provenance of the work,” he said. “Just as it is illegal for artists, no matter how talented, to misrepresent their works as that of someone else, so too is it unlawful to use generative AI to misrepresent content as being created by another artist.”

Implying that AI tools could be used to proliferate the kind of fraudulent art that saw fake copies of artwork by Thomas Kinkade produced in China and Thailand at the height of his popularity, Castro adds: “Addressing this type of problem is a long-standing issue in the art world. Law enforcement can and should prosecute individuals who create frauds, and buyers should always conduct due diligence before purchasing.”

As such, he adds, AI-generator users should resist the temptation to get programs like DALL-E to produce identical or near-identical copies of artists’ works, instead staying within the safe parameters of inspiration.

“Generative AI may allow creators to produce works that have similar styles to existing copyrighted works, but they do not allow creators to produce identical or nearly identical works,” he said. “Copyright owners, including those of literary, musical, and artistic works, can claim infringement if someone produces a work that is substantially similar to their own because they have an exclusive right to produce derivative works.”

He cites music industry cases brought over the years, including that of rock band Queen and singer David Bowie, who pursued legal action against the rapper Vanilla Ice over the latter’s song Ice Ice Baby, which bore undeniable resemblance to Under Pressure.

"Generative AI may allow creators to produce works that have similar styles to existing copyrighted works, but they do not allow creators to produce identical or nearly identical works."

Castro

However, such protective rights must also apply to works created with the help of AI. Pointing to guidelines for machine-crafted art being discussed by the US Copyright Office, Castro suggests that the – relatively ancient – art of photography already points the way to how such regulations might look in the near future.

“Copyright protection for AI-generated content will likely function similarly to that of photographs, wherein a machine – that is to say, a camera – does much of the mechanical work in producing the initial image, but it is a variety of decisions by the human photographer – subject, composition, lighting, post-production edits, and so on – that shape the final result,” he said.

“Likewise, individuals who use AI tools to create content do more than just click a button, such as experimenting with different prompts, making multiple variations, and editing and combining final works. As generative AI becomes a mainstream tool used widely by content creators, policymakers should ensure copyright law fully protects their rights, both domestically and abroad, and offer guidance and clarity for those using AI tools.”

Deepfakes threaten intellectual property

But there is one area where AI generators must be policed stringently, Castro contends, and where there is clearly more work to be done. Publicity rights, the control a famous persona has over use of their personal image by others, are under greater threat than ever.

“The right of publicity is the IP [intellectual property] right that protects individuals from the unauthorized commercial use of their identity,” he said. “This right is especially important for celebrities, as it enables them to control how others use their likeness commercially, such as in advertisements or in film and TV.”

But though the problem may have become more acute in the digital age, it is one that predates the relatively recent arrival of working AI generator systems.

“While generative AI – specifically deepfake technology – makes it easier to create content that impersonates someone else, the underlying problem itself is not new,” Castro said. “Generative AI has not changed the fact that individuals should continue to enforce their publicity rights by bringing cases against those who violate their rights.”

"Generative AI also raises questions about who owns rights to certain character elements. For example, if a movie studio wants to create a sequel to a film, can it use generative AI to digitally recreate a character, or does the actor own those rights?"

Castro

However, he concedes that the advent of generative AI technologies has undeniably made an already thorny legal landscape thornier still.

“Generative AI also raises questions about who owns rights to certain character elements,” said Castro. “For example, if a movie studio wants to create a sequel to a film, can it use generative AI to digitally recreate a character, or does the actor own those rights? And does it matter how the film will depict the character, including whether the character might engage in activities or dialogue that could reflect negatively on the actor?”

Revenge porn and other unsavory uses of deepfake technology will also remain an enduring problem for the foreseeable future.

“Deepfake technology also makes it much easier to produce hyper-realistic fake nude and sexually explicit images and videos of individuals without their consent,” said Castro. “While this problem is not entirely new, the scale of the problem is much greater than in the past. Legislation is still needed in many jurisdictions to address distribution of nonconsensual intimate images and videos including those created by deepfakes.”

He added: “While more jurisdictions have laws prohibiting distribution of this type of content, only a few of them address fake content. Policymakers should update and expand these laws to better protect individuals.”

