Categories: Software | April 25, 2023

A brief history of Artificial Intelligence

For decades, we’ve heard stories involving robots that become conscious, begin to feel like humans, and either join them to help them improve society or reduce them to mere slaves. Robots have been the protagonists of movies, books, series, and all kinds of cultural products related mainly to science fiction. Examples abound, from the book I, Robot to The Jetsons.

And with the arrival of ChatGPT, it all came back to the surface. Now everyone’s talking about how this technology will revolutionize and change the world in ways we haven’t even imagined yet. But how did we get here? Let’s go over the history of artificial intelligence.

It all started with Alan Turing, an English mathematician who explored the logical possibility of artificial intelligence. According to him, just as humans use information and reasoning to solve problems, so can a machine. His research resulted in one of the most important papers in the history of computing, Computing Machinery and Intelligence, published in 1950.

Five years later, Allen Newell, Cliff Shaw, and Herbert Simon built a proof of concept called Logic Theorist, a program designed to mimic the way humans solve problems. It was presented at the Dartmouth Summer Research Project on Artificial Intelligence in 1956, and while it was incredibly basic and didn’t meet expectations, a consensus emerged: artificial intelligence could be developed.

Between 1957 and 1974, the field boomed. Computers could store much more information and became faster and, more importantly, cheaper. After Newell and Simon’s first attempt to show the promise of artificial intelligence, many others followed, so much so that government agencies such as the Defense Advanced Research Projects Agency (DARPA) began funding research and development. There was plenty of optimism.

But even though computers could store more information and had greater computational power, it still wasn’t enough to develop truly complex programs. Funding dried up, and the field entered what is known as the “AI winter,” a period of disenchantment and stagnation that lasted a decade.

By the mid-1980s, renewed interest in this kind of technology began to emerge, thanks especially to the work of John Hopfield and David Rumelhart, who popularized the neural network techniques that would come to be known as “deep learning.” With this fresh momentum, AI began receiving significant funding again, and not only in the United States; other countries jumped on the bandwagon too. The Japanese government, for example, invested no less than $400 million in what it called the “Fifth Generation Computer Project” between 1982 and 1990.

This gave rise to a flourishing era. The 1990s were a fruitful time for artificial intelligence, with milestones that became etched in the minds of millions of people. In 1997, Garry Kasparov lost a chess match to IBM’s Deep Blue, the first time a program had defeated a reigning world chess champion in a match. Around the same time, speech recognition software arrived on Windows, something unprecedented at the time.

Today we live in an era in which we can store virtually all the information that exists, so much that no human could ever process it. Artificial intelligence is now everywhere: streaming platforms use it to recommend what to watch, and other services use it to translate texts or summarize videos. I can’t think of a field in which it can’t be used. And it was only months ago that ChatGPT saw the light of day and began, little by little, to change the rules of the game in ways we could never have imagined. Everything is happening very, very fast.

Where do we go from here? I’m among those who believe that some of the most important chapters in the history of artificial intelligence are being written right now. No one fully understands where we are going, how these algorithms will evolve, and how societies and governments will embrace them while mitigating the potential risks and challenges they will bring. One thing’s clear: We must continue to research and develop in a responsible manner, making sure that we use AI to improve people’s lives.

By Axel Marazzi

Axel is a journalist specializing in technology. He writes for outlets such as RED/ACCIÓN and Revista Anfibia, collaborates with the Inter-American Development Bank, publishes a newsletter, Observando, and hosts a podcast, Idea Millonaria.
