
Italy blocks the use of ChatGPT artificial intelligence
The Italian authorities announced on Friday their decision to block the chatbot ChatGPT, accused of failing to comply with personal data legislation and of lacking a system to verify the age of minor users.
This is a first in the Western world. Italy blocked the artificial intelligence chatbot ChatGPT on Friday (March 31) over data-use concerns, two months after banning another program marketed as a “virtual friend”.
In a press release, the Italian Personal Data Protection Authority warns that its decision has “immediate effect” and accuses the chatbot of not respecting European regulations and of not verifying the age of minor users.
This decision will result in “the temporary limitation of the processing of Italian user data vis-à-vis OpenAI”, the company behind the application, according to the document.
ChatGPT appeared in November and was quickly adopted by users impressed by its ability to clearly answer difficult questions, compose sonnets or write computer code. Funded by the computer giant Microsoft, which has added it to several of its services, it is sometimes presented as a potential competitor to the Google search engine.
Data loss
The Italian authority points out that on March 20 ChatGPT “suffered a loss of data concerning user conversations and information relating to payments by subscribers to the paid service”.
After initial reports, OpenAI briefly took the service offline, then acknowledged a bug in a third-party tool, since fixed, that affected 1.2% of its paying subscribers and an unspecified number of free users.
The authority also criticizes it for “the absence of an information notice for users whose data is collected by OpenAI, but above all the absence of a legal basis justifying the mass collection and storage of personal data for the purpose of ‘training’ the algorithms running the platform”.
Additionally, while the chatbot is aimed at people over the age of 13, the authority “emphasizes that the absence of any filter to verify users’ age exposes minors to responses absolutely unsuited to their level of development”.
The CNIL, France’s personal data watchdog, “has not received any complaint and has no similar procedure in progress”, it told AFP.
However, it has contacted its Italian counterpart “in order to discuss the findings that have been made” and aims to “clarify the legal framework in the coming months”.
Replika app banned
In early February, the Italian authority had blocked the Replika application, which lets users chat with a custom-made avatar, for similar reasons. Some users had complained of receiving overly suggestive messages and images bordering on sexual harassment.
This time, the authority again asks OpenAI to “communicate within 20 days the measures taken” to remedy the situation, “on pain of a fine of up to 20 million euros or up to 4% of annual worldwide turnover”, the maximum provided for under the European General Data Protection Regulation (GDPR).
This case shows that the GDPR, which has already resulted in billions of dollars in fines for tech giants, could also become the enemy of new content-generating AIs.
According to Nello Cristianini, professor at the University of Bath (UK), “the most important considerations” are “the use without proper legal basis of personal data for training models and the increasing possibility of seeing this data inaccurately reproduced”.
AI also feeds fears that go much deeper than the mere exploitation of personal data, and the European Union is currently preparing a draft regulation that could be finalized by early 2024, for application a few years later.
Europol warned earlier this week that criminals were ready to take advantage of artificial intelligence to commit fraud and other cybercrimes.
ChatGPT was also blocked soon after its release in several schools and universities around the world over fears of exam cheating, and some companies have advised their employees not to use the application.
“We have seen employees feed their company’s strategic plans into ChatGPT to ask it to produce a slide presentation. The fools! Because all that data goes into ChatGPT, which can regurgitate it if a competitor asks it for that company’s strategy,” explained Françoise Soulie Fogelman, adviser to the Hub FranceIA, during a conference.
On Wednesday, billionaire Elon Musk – one of the founders of OpenAI, whose board he later quit – and hundreds of global experts called for a six-month pause in research on AIs more powerful than GPT-4, the latest version of the software on which ChatGPT is based, launched in mid-March, citing “major risks for humanity”.
With AFP