Italy’s privacy regulator ordered a temporary ban on OpenAI’s ChatGPT, saying the artificial-intelligence chatbot has improperly collected and stored information, accelerating the rush by policy makers to roll out new AI rules.
The order calls on OpenAI to suspend processing the data of Italian users, which could effectively mean OpenAI must block access to its chatbot from Italy. The platform was still accessible on Friday afternoon.
Italy’s data protection authority said OpenAI had “no legal basis” for using the data it had amassed “to train the algorithms that power the platform.”
OpenAI didn’t immediately respond to a request for comment.
The regulator said it has opened an inquiry into OpenAI and gave the company 20 days to comply with European Union privacy rules or risk fines. The maximum fine under the EU’s General Data Protection Regulation is 4% of a company’s global annual revenue or 20 million euros, the equivalent of about $21.8 million, whichever is higher.
The Italian regulator also said there was no system to verify the age of users and stop children under 13 from using the chatbot, thus exposing them to “responses that are absolutely unsuitable to their degree of development and self-awareness.”
Complying with the regulator’s demands could entail OpenAI’s adding age checks, updating its privacy policies and prompting users with more detailed information about how their information may be used.
A more complicated dispute could emerge if the regulator insists that OpenAI take measures to exclude or remove information about identifiable people in Italy from its training data, or allow individuals to object to their inclusion—since tools like ChatGPT are trained on billions of documents from across the internet.
Scrutiny of AI tools, and of chatbots in particular, is growing. Earlier this week, a group of AI researchers and tech executives called for a six-month moratorium on the training of the next generation of AI tools to give the industry time to set safety standards.
News publishers, for their part, have begun examining the extent to which their content has been used to train tools like ChatGPT, as well as their legal options for being compensated, The Wall Street Journal has reported.
Regulators are also looking closely at AI. The EU is nearing the final stages of debate about a bill, first proposed in 2021, that would regulate artificial-intelligence usage across the bloc. It could include bans on some uses of facial recognition and could require makers of tools like ChatGPT to do risk assessments and verify the quality of the data they use to train their software.
The U.K. for its part this week released a white paper suggesting how it could empower its regulators to oversee the development of AI, with a focus on issues like tools’ safety, transparency and fairness.
Some businesses and other entities, including JPMorgan Chase & Co., have in recent months blocked access to ChatGPT from their local networks. Verizon Communications Inc. said it was doing so because of the potential for losing ownership of proprietary data.
New York City public schools in January banned the chatbot from their internet networks and school devices because of fears over cheating and harm to students’ learning.
Write to Margherita Stancati at margherita.stancati@wsj.com and Sam Schechner at Sam.Schechner@wsj.com
Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.