Software | June 23, 2023

Can an algorithm be creative?

Less than a year ago, the tech world was upended. You probably already know what I’m talking about: ChatGPT. Being able to chat with an algorithm that can answer practically anything is groundbreaking. At the end of the day, there’s a reason why ChatGPT reached over 100 million users in two months. Compare that to TikTok: although that platform has grown exponentially, it took nine months to reach the same number.

The power of GPT and these types of generative models is out of the question, but from the moment it was released, another long-standing debate was reignited: What will happen to human creativity and originality in a world where computers can “create”? Will it be replaced by an algorithm? Will AI serve as a support? Will it write novels for us? Will its works end up in museums?

Let’s take it one step at a time. These generative models are trained on an enormous amount of data from the Internet. That data was created by humans. But what happens when we start filling the web with content created by algorithms? The question is practically rhetorical: future models will be trained on that data, and an endless loop will begin.

Many philosophers say that creativity is among the foundations of what makes us human. In fact, being “original” means having new and valuable ideas that did not exist before. But originality doesn’t just pop up out of nowhere. It stems from other people’s creations, from research, and from previous experience. So a generative technology like GPT could constrain human creativity rather than expand it. Writer and journalist Sigal Samuel captures the reason perfectly: these models “use the past to build the future.”

This is neither rare nor novel. Algorithms are constantly recommending the same things to all of us. Think about it: it’s always the same series, songs, reality shows, and books that go viral. We are all being influenced by algorithms. The difference is that these algorithms, at least until now, couldn’t create content the way generative models do. Recommending content is one thing; creating the content to recommend is something else entirely.

There are experts on both sides. Some say that GPT will eventually develop unique works that surpass the masterpieces created by humans. Others argue that, unless this technology can “feel” (how scary!), it will never create works that resonate with people the way human creations do.

And everyone wants to chime in.

On one side, we have intellectuals like Yuval Noah Harari, who says that this type of technology, by engaging with language, could hack “the operating system” of our civilization: “A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out an avalanche of new cultural artifacts.”

On the other hand, artists and people who contribute to the creation of cultural products have a different opinion: An algorithm could write a song or a play, but never like a human. That’s where Nick Cave stands:

“What ChatGPT is, in this instance, is replication as travesty. ChatGPT may be able to write a speech or an essay or a sermon or an obituary, but it cannot create a genuine song. It could perhaps in time create a song that is, on the surface, indistinguishable from an original, but it will always be a replication, a kind of burlesque.

Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend. ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become.”

I think I agree with Cave. The problem would be if an algorithm could “feel” or come close to feeling as we do. But that, at least for now, hasn’t happened.

By Axel Marazzi

Axel is a journalist who specializes in technology and writes for media such as RED/ACCIÓN, Revista Anfibia, and collaborates with the Inter-American Development Bank. He has a newsletter, Observando, and a podcast, Idea Millonaria.
