OpenAI’s geoblocking of ChatGPT in Italy


OpenAI has begun limiting access to its generative AI chatbot, ChatGPT, in Italy. This is not an April Fools’ Day prank.

This comes after the local data protection authority demanded on Friday that data collection and processing for the ChatGPT service be ceased in Italy.

If you have an Italian IP address and attempt to access ChatGPT, you will see a message from OpenAI saying that they “regret” having to tell you that they have blocked access to users in Italy “at the request” of the data protection authority, which they refer to as the Garante.

The message also notes that OpenAI is “temporarily pausing” membership renewals in Italy so that users aren’t charged while the service is down, and it promises refunds to anyone in the country who purchased a ChatGPT Plus subscription in the last month.

A virtual private network (VPN) connection to a server outside of Italy is a quick and easy way around OpenAI’s apparent geoblock on Italian IP addresses. Users who previously registered their ChatGPT account from an Italian IP address may also need to create a new account from a different country’s IP address to get around the block.
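To make the mechanism concrete, here is a minimal sketch of how an IP-based geoblock like the one described above typically works server-side. The country lookup is a hypothetical stub standing in for a real GeoIP database (such as MaxMind's); the IP addresses and function names are illustrative assumptions, not OpenAI's actual implementation.

```python
# Sketch of IP-based geoblocking: resolve the client IP to a country
# code, then compare against a blocklist before serving the request.

BLOCKED_COUNTRIES = {"IT"}  # ISO 3166-1 alpha-2 code for Italy

# Hypothetical IP-to-country table; a real service would query a
# GeoIP database at the network edge instead.
GEOIP_STUB = {
    "151.0.0.1": "IT",       # illustrative Italian address
    "93.184.216.34": "US",   # illustrative non-Italian address
}

def is_blocked(client_ip: str) -> bool:
    """Return True if this request should get the geoblock notice."""
    country = GEOIP_STUB.get(client_ip, "UNKNOWN")
    return country in BLOCKED_COUNTRIES
```

This also shows why a VPN defeats the block: the service only ever sees the VPN exit server’s IP address, so an Italian user routed through, say, a German server resolves to "DE" and passes the check.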

On Friday, the Garante stated it had launched an investigation into ChatGPT over concerns that OpenAI had unlawfully processed the personal data of Italian citizens, in violation of the EU’s General Data Protection Regulation (GDPR).

By all appearances, OpenAI has not informed the people whose data it found online and used to train its technology, such as by scraping information from discussion forums. Nor has it been fully transparent about the information used to train its most recent model, GPT-4. Under the GDPR’s transparency principles, both users and the individuals whose data was scraped should arguably have been informed, even if the training data was public (in the sense of being posted online).

The Garante also raised child-safety concerns in a statement released yesterday, pointing out the absence of any age verification feature to prevent minors from accessing inappropriate content.

The authority has also expressed doubts about the veracity of the chatbot’s data.

ChatGPT and other generative AI apps are prone to “hallucinating,” or making up false information about specific people. This is problematic in the EU, where people have a number of rights with respect to their data under the GDPR, including the right to have inaccurate data corrected. And it’s not at all clear that OpenAI has a mechanism in place through which individuals can ask the chatbot to stop producing falsehoods about them.

Our request for comment from the San Francisco-based firm regarding the Garante’s probe remains unanswered. However, in a public statement directed at Italian users who are barred from accessing the service, the company claims to comply with the GDPR and other privacy laws.

“Many of you have informed us that you find ChatGPT helpful for everyday tasks, and we look forward to making it available again shortly,” the statement reads. The company also writes, “We will engage with the Garante with the goal of restoring your access as soon as possible.”

OpenAI ends its statement on an optimistic note, but it’s unclear how the company plans to address the Garante’s compliance issues, which span a broad range of GDPR concerns.

The EU-wide law mandates “data protection by design and default,” which means that privacy-centric processes and principles must be built into any system that handles people’s data from the ground up. The inverse strategy of “data first, ask questions later” sits squarely at odds with that requirement.

Meanwhile, a data processor can be fined up to 4% of its annual worldwide turnover or €20M, whichever is higher, for confirmed breaches of the GDPR.
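The “whichever is higher” rule matters more than it may appear. A small worked example, with the turnover figures purely illustrative:

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR fine: the greater of EUR 20M
    or 4% of annual worldwide turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a firm with EUR 1B turnover, 4% is EUR 40M, so the
# percentage-based cap applies; below EUR 500M turnover,
# the flat EUR 20M ceiling is the binding one.
```

So for any company with annual turnover above €500M, the 4% cap exceeds the flat €20M figure.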

Since OpenAI has no legal establishment in the EU, it cannot rely on the GDPR’s “one-stop shop” mechanism: any of the bloc’s data protection authorities can oversee ChatGPT, meaning authorities in any member state can open an investigation and levy fines if they find a violation, each acting in its own territory and potentially in relatively short order. That leaves the company especially exposed under the GDPR, because it is ill-equipped to play the “forum shopping” game that other tech behemoths have used to postpone privacy enforcement in Europe.

Last but not least – this is a wake up call that #GDPR#Article8 Charter, data protection law in general & particularly in the EU IS APPLICABLE TO AI SYSTEMS today, right now, and it has important guardrails in place, if they are understood & applied. 18/🧵

— Dr. Gabriela Zanfir-Fortuna (@gabrielazanfir) March 31, 2023