Not long after ChatGPT, developed by the American company OpenAI, was released, Italy swooped in to block it. The block was imposed by the country's data protection authority, the Garante della Privacy.1

Several reasons were put forward for this decision. First, ChatGPT allegedly lacked a clear privacy policy for the collection of user data, putting the application at odds with Italian law and with the relevant European regulation, the GDPR. Second, there were allegedly no effective controls enforcing OpenAI's own condition of use that ChatGPT may only be used by persons over the age of 13.

It is no mystery that deeper considerations also lie behind this decision, which imposed a temporary block on the application for Italian users and threatened OpenAI with fines of up to 4% of its global turnover.2

The decision came a few days after OpenAI itself abruptly took ChatGPT offline for several hours on March 20, after around 1.2% of users had allegedly been exposed to risks to the protection of their data as a result of the autonomous operation of the artificial intelligence system.3

In essence, the machine risked mentioning the personal data of some users (including credit card and bank details) in its responses to other users' queries. This obviously amounts to the disclosure of sensitive data without consent, which is why OpenAI hastened to take the application offline until the bug was fixed.

Despite all this, following the block by the Garante della Privacy, the notice shown to anyone trying to reach the official site from Italy stated that the company believes it operates in full compliance with the GDPR and other relevant national regulations.

The fact is that the interaction between artificial intelligence and data protection is a very complex issue, and for this reason speaking of 'bugs' in relation to events such as that of March 20 is not entirely accurate. ChatGPT's dissemination of personal data, in which it used information about some people to answer the questions of others, is a manifestation of how machine learning applications work, including the large language model (LLM) technology on which the system is fundamentally based.

In fact, its answers are derived from generalisations over a gigantic corpus of conversations, articles, online content and so on, which has been fed to the machine to 'train' it to recognise meaningful patterns and connections and thus to develop the ability to produce appropriate, meaningful answers to given queries.

To this end, ChatGPT collects the messages users send it, so that it can improve itself by continually replenishing the material it works on.

This is why, for instance, its ability to answer questions about events that occurred after September 2021 (the cut-off of its original body of training data) improves as time passes. The system is thus able to use the input of anyone who communicates with it as a basis for developing new outputs for other users.
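To make the point concrete, here is a deliberately simplified sketch in Python (a toy illustration, not OpenAI's actual architecture or code): a minimal bigram 'language model' is trained on a tiny corpus that happens to contain one user's message with fictional payment details. Once that message has been absorbed into the training material, the model can reproduce fragments of it while generating an answer for somebody else.

```python
# Toy illustration: a bigram "language model" that memorises its training text.
# All data below is fictional; real LLMs are vastly more complex, but the
# principle -- whatever enters the training material can resurface -- is the same.
import random
from collections import defaultdict

corpus = [
    "the weather in rome is sunny today",
    "my credit card number is 1234 5678 9012 3456",  # one (fictional) user's message
    "the museum opens at nine in the morning",
]

# For each word, record every word observed to follow it in the corpus.
bigrams = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        bigrams[current_word].append(next_word)

def generate(seed: str, max_words: int = 10) -> str:
    """Generate text by repeatedly sampling a word seen to follow the current one."""
    word, output = seed, [seed]
    for _ in range(max_words):
        followers = bigrams.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A different user's prompt starting with "my" can surface the memorised digits.
print(generate("my"))
```

Real systems are of course far more sophisticated, generalising rather than merely copying, but the underlying point stands: whatever enters the material a model learns from can, in principle, resurface in its output to someone else.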

All of this is performed autonomously by the machine, and there is no way of knowing for certain what 'reasoning' it follows in deeming certain information appropriate or inappropriate. There is no straightforward way to teach ChatGPT that it is appropriate to communicate certain things and not others depending on the situation. Artificial intelligence is said to act as a 'black box': we know the inputs and the outputs, but not the algorithm that leads from one to the other.

This characteristic of digital systems with this degree of autonomy is inherently problematic. And herein lies the difficulty, for OpenAI and the competent authorities alike, in understanding how to deploy these powerful tools in society, on the one hand, and how to regulate their use, on the other.

Indeed, if an artificial intelligence application 'decides', without consulting anyone, to do something that ends up infringing someone's rights (to privacy, for instance), whose responsibility is it? Strictly speaking, neither the developers, nor the company behind the product, nor the users have intentionally done anything wrong. What we are talking about here is a potential responsibility gap between the wrongdoer (in this case the machine) and any party that can be held responsible.

To take a more down-to-earth example: suppose a fully autonomous car hits a pedestrian, and suppose the pedestrian is entirely blameless and the accident was caused by an unforeseen and totally unforeseeable system malfunction, so that it cannot be put down to simple negligence on the part of the developers.

Who should go to court for manslaughter? The passengers, who were not driving? The programmer, who could not have avoided the malfunction in any way? The company, which merely marketed the product after testing it properly?

Some suggest that we could attribute legal responsibility to artificial intelligence itself, granting it legal personhood as we already do with certain corporate entities, such as limited liability companies (LLCs). However, the comparison is not straightforward, and there are important differences between the two cases.

Whichever way one deals with these issues, what is certain is that it will become more and more necessary to craft regulation designed specifically for artificial intelligence. Although both the United States and the European Union have announced plans to discuss the principles on which to tackle the issue, the case of the ChatGPT block in Italy underlines that there is still much to be done. The attitude taken by the Garante della Privacy has proven too conservative, and ultimately it tries to skirt around the issue.

In fact, the crux of the matter is not that OpenAI is trying to evade the GDPR (indeed, it has announced important measures to meet the demands of the Italian authorities). The difficulty lies in the fact that the GDPR is obsolete where autonomous technologies such as these are concerned.

Burying one's head in the sand is a form of Luddism that does nothing but discourage companies from innovating and proposing solutions and improvements, on the one hand, and consumers from trusting such innovations and embracing them consciously and responsibly, on the other.

This article was authored by Emanuele Martinelli. Emanuele is a fellow with Young Voices Europe and an Italian PhD student at the University of Zurich. He researches the limits and modalities of applying AI technology to economic planning, and works as a proofreader and translator in the academic and literary sectors. Emanuele also works with Liberales Institut, a Swiss think tank.

1 'ChatGPT disabled in Italy: the issue of ChatGPT data and the reasons for the Privacy Guarantor's block'.
2 'Artificial intelligence, Privacy Guarantor blocks ChatGPT'.
3 MSN (n.d.), 'The ChatGPT bug exposed more private data than previously thought, OpenAI confirms'.