Technology | May 24, 2023

Artificial intelligence regulation is on its way

GPT has revolutionized the entire world, not just the tech community. Experts say there are very few fields that the new generative algorithms will not radically change. And little by little, countries have begun recognizing the need to regulate this technology.

Italy was the first country to block ChatGPT. The country’s authorities found that, because of OpenAI’s questionable handling of user data, the company was in violation of European data protection law. Although ChatGPT was back in operation in Italy by the end of April, it’s clear that concerns remain and that the regulation of these conversational chatbots is only beginning.

While no other country has followed Italy’s lead, several nations, including Spain, France, Germany, and Canada, are assessing the service for similar problems. Authorities’ concerns center on two issues.

The first is the way the algorithm delivers its answers, which are often not only unreliable but may also infringe copyright. The second, much more complex, is how GPT is trained: by collecting information from the web. What information? The company doesn’t say exactly. According to OpenAI, the training data comes from “a variety of licensed, created and publicly available data sources, which may include personal information.” Let’s keep that phrase, “personal information,” in mind.

Why is the company so vague about how it trains its algorithm? One reason, experts say, is that it could get into trouble if the authorities find out. The General Data Protection Regulation (GDPR) requires any company processing the personal data of EU residents to have a lawful basis for doing so, such as the explicit consent of the individuals involved. And according to regulators, no one gave that consent to OpenAI.

But this goes far beyond what European regulators think. Companies such as Samsung and JPMorgan have banned their employees from using these tools because they fear OpenAI could gain access to confidential information. What would happen if, say, a Samsung developer ran proprietary code through GPT to find a bug? Could the team at OpenAI see that code? It’s disturbing even to imagine the answer.

What’s curious is that Microsoft is reportedly working on a new version of GPT aimed at large enterprises such as banks, insurance companies, and healthcare providers, one that would cost ten times more but would be more private. The first thing that comes to my mind is: shouldn’t they offer that privacy to all their users?

We’re still in the early stages, but just as the GDPR aims to protect Europeans’ private information from the sometimes-murky handling of their data, authorities will likely respond with either a new law or amendments to existing ones to control these algorithms, which, evidently, are here to stay.

By Axel Marazzi

Axel is a journalist who specializes in technology. He writes for outlets such as RED/ACCIÓN and Revista Anfibia, collaborates with the Inter-American Development Bank, and runs a newsletter, Observando, and a podcast, Idea Millonaria.
