ChatGPT banned by Italian regulators over privacy concerns
The Italian Data Protection Authority today effectively ordered OpenAI LP to suspend the operations of its artificial intelligence chatbot ChatGPT in the country and opened a probe into how the company collects data.
The agency ordered that OpenAI stop processing the personal information of Italian users until the company respects the General Data Protection Regulation, the European Union’s privacy protection law.
The regulator said in a press release that it found OpenAI had no legal basis for the massive collection and processing of the personal data of Italian citizens to train its algorithms. The regulator also cited concerns that OpenAI currently has no controls in place to limit the service's use by underage users.
ChatGPT has become particularly popular just two months after its public launch, reaching more than 100 million users in January, according to a study published by UBS. Its technology has also been embedded in numerous products, including Microsoft Corp.’s Bing.
The underlying model is trained on massive amounts of data drawn from a large number of sources, which enables it to hold human-like conversations, answer questions, compose poetry and generate essays. The Italian regulator noted that OpenAI had not informed users whose data was collected that their information had been gathered.
Furthermore, the agency noted in the press release that OpenAI had suffered a data breach on March 20 that exposed some users' personal information and "prompts," the queries used to elicit answers and responses.
This news follows an open letter from the Center for AI and Digital Policy, an artificial intelligence-focused technology ethics group, calling on AI makers to suspend the creation of new generative AI models similar to the one underpinning ChatGPT. The letter, whose signatories included a number of AI researchers and OpenAI co-founder Elon Musk, noted that these models have a tendency toward "hallucinations," or producing misinformation, and could be put to malicious use.
That can be problematic, especially when it comes to named individuals. It is not publicly known exactly what training data ChatGPT was built on, but it is known to draw on information scraped from large portions of the public internet, so users can ask it questions about publicly known people. If ChatGPT produces incorrect information in response to a question about a real person, it could spread damaging misinformation about that person.
The Italian DPA echoed those concerns by mentioning its own tests, stating that sometimes ChatGPT would produce results that did not “match factual circumstances” and that the algorithm might be processing inaccurate personal data.
The regulator has given OpenAI 20 days to notify the agency of the measures it intends to implement to comply with the order. If it does not, the company could face fines of up to 4% of its global revenue.
Image: OpenAI