Italy’s privacy watchdog has temporarily banned the popular artificial intelligence service ChatGPT, after launching an investigation into the chatbot’s Microsoft-backed owner OpenAI.
The nation’s data protection authority said on Friday it would block access to the chatbot in Italy, while it examines the US company’s collection of personal information.
The move comes after a cyber security breach last week exposed user conversations and some financial details. The information exposed for a nine-hour period included first and last names, billing addresses, credit card types, credit card expiration dates and the last four digits of their credit cards, according to an email sent by OpenAI to an affected customer, and seen by the Financial Times.
The Rome-based watchdog said the US company, led by chief executive Sam Altman, would have 20 days to respond to the ban and illustrate what actions it had adopted to tackle the issues. If OpenAI fails to respond within the deadline it could face a fine of up to €20mn.
OpenAI did not immediately respond to a request for comment.
The move represents the first regulatory action against the popular chatbot, with policymakers across the world seeking to respond to the rise of generative AI services.
Experts have been concerned about the huge amount of data being hoovered up by the language models behind ChatGPT. ChatGPT had more than 100mn monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was being used by more than 1mn people in 169 countries within two weeks of its release in January.
OpenAI has previously said that it has resolved the cyber security issues related to the leak of information. However, it will be blocked from processing Italian users’ data through ChatGPT while the probe is under way.
The Italian regulator said it launched an investigation after noting the “absence of a legal basis that justifies the mass collection and storage of personal data, for the purpose of ‘training’ the algorithms” underlying ChatGPT.
It also said that, according to its internal analysis, ChatGPT did “not always provide accurate information”, leading to the misuse of personal information.
The regulator criticised OpenAI’s lack of a filter to verify that children under 13 were not using its service. Specifically, the watchdog claimed underage children were being exposed to content and information that was not appropriate for their “level of self-consciousness”.
This week the likes of Elon Musk and Yoshua Bengio, one of the founding fathers of modern AI methods, called for a six-month pause in developing systems more powerful than the newly launched GPT-4, citing major risks to society.
Some industry experts and insiders said the call was hypocritical and merely a way to allow AI “laggards to catch up” with OpenAI, at a time when large tech companies are competing aggressively to release AI products such as ChatGPT and Google’s Bard.
Currently, generative AI technologies fall under the regulatory purview of existing data and digital laws, such as the GDPR and the Digital Services Act, which cover some aspects of the technology.
However, the EU is preparing a regulation that will govern how AI is used in Europe, with companies that violate the bloc’s rules facing fines of up to €30mn or 6 per cent of global annual turnover, whichever is larger.