
AI Tech Enables Industrial-Scale Intellectual-Property Theft, Say Critics

Grzegorz Rutkowski has studied the great masters of texture and light—Caravaggio, Rembrandt, Vermeer—and his ability to mimic their techniques has made him an in-demand painter of fantastical beasts and landscapes for the videogame industry.

But these days, instead of devoting all his time to painting in his sun-dappled studio near the picturesque medieval square in the town of Pieńsk, Poland, he’s spending ever more of it on Zoom calls, talking to lawyers, artists and others about the strange reason he is suddenly far more famous than he ever thought possible.

It turns out that Mr. Rutkowski’s distinctive style and choice of subject matter have made him among the most popular artists to copy using the kind of image-generating artificial-intelligence systems that have exploded in popularity in the past year. The two most prominent of these systems are Dall-E 2, made by San Francisco-based startup OpenAI, and Stable Diffusion, created by Stability AI, though there is an ever-growing number of competitors.

The result is that images made by AI in Mr. Rutkowski’s style are suddenly all over forums on the internet where users share works they have generated from text prompts. Mr. Rutkowski’s name has become such a popular prompt in such AI art generators that he recently decided to join a federal lawsuit seeking class-action status against several of the companies involved. 

Grzegorz Rutkowski’s style has been widely mimicked by AI-art software. Photo: Dorota Rutkowska

For these algorithms to copy Mr. Rutkowski’s style, the companies that built them first had to copy his work, and that of thousands of other artists, from the internet, and then use it to train their AIs. These are the basic mechanics of all so-called generative AI: first, a company finds or creates a large enough set of data, then it uses various algorithms to train software to produce specific text, images or code based on that data.
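That two-step pattern—gather a body of existing work, then train software to reproduce its statistical patterns—can be illustrated with a deliberately simple toy. The sketch below uses a word-level Markov chain, a model far cruder than the diffusion systems in this article, but one that follows the same collect-then-train mechanics; the short corpus string is a stand-in for the millions of scraped images or documents a real system would ingest.

```python
import random

# Step 1: gather a body of existing work. In a real system this would be
# millions of scraped images or documents; a short string stands in here.
corpus = "the quick brown fox jumps over the lazy dog the quick fox"

# Step 2: "train" by recording the patterns in that data. This toy model
# simply notes which word follows which (a word-level Markov chain).
words = corpus.split()
model = {}
for prev, nxt in zip(words, words[1:]):
    model.setdefault(prev, []).append(nxt)

# Generation: walk the learned transitions to produce new text that
# mimics the statistics of the source material without copying it verbatim.
random.seed(0)
out = ["the"]
for _ in range(8):
    out.append(random.choice(model[out[-1]]))
print(" ".join(out))
```

The toy makes the critics’ point concrete: the generator can produce nothing that is not, at some level, a recombination of the material it was trained on.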

What worried Mr. Rutkowski most was that art generated using his work threatened to bury his own original works in the places where it mattered: Google’s search index and repositories of online art.

Mr. Rutkowski’s story matters because it is, in many ways, representative of the way that generative AI has the potential to transform the production of all kinds of creative work, including novels, marketing copy, news articles, video, illustration and code. 

We’ve all been dealing for years with kinds of AI that excel in pattern recognition, enabling functions like recognizing family members’ faces in photo apps or flagging objectionable content before it reaches our social-media feeds. 

The algorithms that power generative AIs go one step further. Both kinds of AI require large sets of training data, but generative AI can synthesize new content from that data, not just recognize what already exists.

Critics of these systems include artists, coders, legal experts, and even some of the engineers who build them. They argue that the systems that make this possible, including text-generating ChatGPT from OpenAI, Microsoft’s GitHub Copilot for code, and art-generating AIs, all ingest content without securing permission from its creators. Then they use that content to create systems that actively compete with the individuals whose work they ingested. Without all that free content, gathered under what some argue are fair-use protections, these content-generating AIs could not exist.

Fans of these systems, including their builders, users and the usual menagerie of techno-optimists inclined to cheer any potentially disruptive innovation, argue that a generative AI is just doing some version of what a human does—it learns from existing works, then produces its own. They also say that, however one feels about how these systems accomplish it, this technology might be inevitable, since anyone with enough technical skill can build one.

Some who share that view are trying to create versions of the technology that incorporate ethical considerations. Musician, coder and academic Mathew Dryhurst has launched, with his wife, musician Holly Herndon, a service called Spawning that is designed to create AI art only using images from artists who have consented to the use of their work.

“This is not going away,” Mr. Dryhurst says. “As a society, we have to digest that all media is now training data, and any style—even a timbre in music—is going to be a non-protectable thing.”

Writers, coders and artists who embrace these systems can increase their productivity in the same way that automation has increased the productivity of countless workers in other fields since the dawn of the industrial revolution. Many, including novelists and coders, have already reported that AI can make them both more creative and more productive.

In addition, these systems can democratize the creation of content. People who can barely draw a circle can now have a hand in creating breathtaking works of art. Non-native speakers of a language can generate florid prose on any topic. And programmers who are under constant pressure to be more productive can generate chunks of working code with nothing but a short text prompt.

“Everyone is saying this is the end of the arts,” says Mr. Dryhurst. “I don’t think that’s even close to the truth.”

Those behind the systems are reluctant to speak openly about them, however, on account of court cases in the works against the companies that either help create this content or host it.

Grzegorz Rutkowski/Stable Diffusion

Those cases include the suit that Mr. Rutkowski joined as a plaintiff, against AI-art companies Midjourney and Stability AI, and DeviantArt, a popular forum for art that both hosts AI art and has created its own system for generating it. In that case, filed last month in U.S. District Court in San Francisco, the plaintiffs argue that these companies violated the copyright of tens of thousands of artists.

“The allegations in this suit represent a misunderstanding of how generative AI technology works and the law surrounding copyright,” says a spokesman for Stability AI. “We intend to defend ourselves and the vast potential generative AI has to expand the creative power of humanity.” DeviantArt and Midjourney did not respond to requests for comment.

Another proposed class-action suit, filed in November in the same federal court, targets Microsoft, its subsidiary GitHub, and OpenAI for their GitHub Copilot system. The plaintiffs argue that by producing code that doesn’t give attribution to the original authors whose code is used to generate Copilot’s results, the system violates various open-source licenses, as well as the Digital Millennium Copyright Act. The defendants have responded by asking the court to throw out the suit. Among their arguments: The plaintiffs—programmers who share their code on GitHub and elsewhere—haven’t demonstrated that Copilot harms them in any real way.

The stakes here could be high. As its proponents argue, generative AI might be the next big thing in tech, defining whole industries for decades. It also might be just another tool that makes people a bit more productive, and Microsoft’s products marginally more useful. Microsoft plans to invest billions in OpenAI, creator of Dall-E, ChatGPT and the OpenAI Codex, which powers GitHub Copilot. Whether any of this technology, especially a potential chat-powered search engine, represents a real threat to Google is unclear. In any case, ChatGPT is on pace to reach 100 million users faster than TikTok or Instagram did, according to a just-released report from UBS.

The potential of this technology is the reason companies like Microsoft are rushing to invest in and roll out generative-AI systems despite the risks in doing so, says Joseph Saveri, the lawyer whose firm is leading both the case against Microsoft over GitHub Copilot and the case against various parties over Stability AI’s AI-art generator, Stable Diffusion.

Mr. Saveri sees parallels between the anticompetitive behavior for which the U.S. Justice Department successfully sued Microsoft in the 1990s—like bundling its Internet Explorer web browser with its Windows operating system—and today. Now, as then, he sees Microsoft moving quickly to dominate a field it deems important to the next generation of the internet and computing.

Microsoft declined to comment specifically on the pending litigation over its Copilot service. But a spokesman for the company said that in general, the claims made about copyright infringement in the cases against generative-AI systems could apply to anyone who was trying to use AI or machine learning in their business processes. A decision in favor of the plaintiffs could “inhibit and chill a whole bunch of uses of these technologies by a whole bunch of players out there,” he adds.

Part of what courts do when deciding matters of fair use is to weigh the potential harms of a new way of using content against its benefits, says Inyoung Cheong, a legal scholar at the University of Washington. It could take years for all of the cases against generative-AI companies to conclude, she adds. 

Most of the ways generative AI is used might prove to be totally new, in areas where no one would previously have paid a human to perform the same function, says the Microsoft spokesman. And as in past cases deciding the fate of technologies that could be used to violate copyright but also had legitimate uses, such as the VCR, courts often decide in favor of the new technology on account of those other potential uses, he adds.

As court cases proceed, Microsoft and many of the startups involved will continue to make ever-more-powerful models. The next generation of ChatGPT is already on its way, and Microsoft has promised to incorporate the company’s technology into Word, Excel, Teams and other products. Stability AI in October raised $101 million, valuing the company at $1 billion in an investment climate most other startups have found unusually hostile. Other generative-AI startups are finding similar levels of enthusiasm among investors.


New legal precedents might curb these systems, or force their creators to change the contents of the databases they rely on, or push them to compensate the creators of the content without which these AIs could not exist. But whatever happens, it will come well after they are on their way to making a significant impact on the way content is created—and on who is able to make a living from it.

“AI right now is growing on the backs of artists,” says Mr. Rutkowski, “and I feel like my back was being used a lot.”


Write to Christopher Mims at [email protected]

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.
