Since I was young, I’ve had family conversations about society’s “utopian” advances… about what the future would hold: computers that fit in the palm of your hand, back in 1986; robots that walked on two legs and kept their balance, in 1989. AI, too, has been advancing step by step for almost 40 years, but it faced two major problems: its purpose (very basic tools producing very basic results) and its cost (far too high for what it delivered). So AI has always been present, yet relegated to a mere “utopian advance” we could only glimpse in science fiction movies.
Today, AI has made a leap in both quantity and quality: it not only enables complex tools and complex results, like summarizing all of Don Quixote so that a 12-year-old can understand it, but its cost has also been democratized. Hardware with computing power similar to what’s used for 3D gaming can now offer very interesting solutions, without having to spend hundreds of millions of dollars on ultra-complex processing systems.
Currently, I follow AI daily for various reasons. One is that I’m passionate about it (truthfully, I’m passionate about any topic that helps us advance as a society, and AI is one of them). Another is that AI will clearly be incorporated into every new system we own: from a washing machine, a refrigerator, or a microwave to the house itself, letting us talk to it and automate it as no one could ever have imagined.
Today, it’s not difficult to find hundreds of “free” or “semi-free” tools that offer wonderful advantages based on artificial intelligence. They allow us to make videos, modify photographs, create songs, compose music, write texts, help us summarize, and design outlines to quickly and easily learn any topic we want.
AI Has a Cost
However, these tools have a cost, an intrinsic cost driven by the computing power we demand. From the moment we send an audio recording and ask for a transcript so we can “read” what was said in the conversation, there is a card somewhere drawing almost 800 W, generating heat that raises temperatures some 20 ºC and must be removed with cooling to keep the card from burning out, on top of the acquisition cost (the card, the equipment, the air conditioning, etc.). Multiply that by the number of simultaneous requests and the cost skyrockets. So how can companies like OpenAI, Google, or Amazon offer it at a fairly affordable price?
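To put a rough number on that running cost, here is a back-of-the-envelope sketch. Every figure in it is an assumption for illustration: the ~800 W draw comes from the paragraph above, while the electricity price and the cooling overhead factor are invented placeholders, not measured data.

```python
# Hedged sketch: rough monthly electricity cost of one GPU-class card.
# All figures are illustrative assumptions, not measured data.

POWER_KW = 0.8             # assumed draw: ~800 W per card
HOURS_PER_MONTH = 24 * 30  # running continuously
PRICE_PER_KWH = 0.20       # assumed electricity price, EUR/kWh
COOLING_OVERHEAD = 1.4     # assumed extra factor for air conditioning

# Energy consumed by the card plus its share of cooling, then its price.
energy_kwh = POWER_KW * HOURS_PER_MONTH * COOLING_OVERHEAD
monthly_cost = energy_kwh * PRICE_PER_KWH

print(f"{energy_kwh:.0f} kWh -> {monthly_cost:.2f} EUR/month per card")
```

Under these made-up assumptions a single card already burns well over a hundred euros a month in electricity alone, before hardware, staff, or bandwidth, which is the point the paragraph is making.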
Hence the idea that AI is a bubble in which costs are absorbed because it suits the companies to publicize the technology: get people accustomed to it, get them using it, and let them see how “cheap” it is at “low” rates that don’t cover costs. That’s why OpenAI is the company that brought AI to the public. Everyone knows ChatGPT and can afford to pay €20/month for a cheap system that works wonders… but few would pay what that system really costs, and that’s why OpenAI is caught between bankruptcy and overhauling its entire commercial model to cover its costs.
Charge as Much as Possible, but with Low Prices…
Charging for AI is not simple at all. Just look at any price list: it quotes a price per “token” (the smallest unit into which text is divided: a whole word, a punctuation mark, a subword such as half of a compound word, or even a special character) or per minute of audio to convert. Image pricing is even more curious, because it depends on the number of iterations, the resolution, the type of generation, and a long list of other parameters. It is practically impossible to forecast the cost of a given number of interactions.
If I build a system that transcribes and answers calls using artificial intelligence, what will my costs be? It all depends on the words that need to be transcribed, those used as input, those produced as output, those converted back to audio… everything hinges on many factors. And what if someone asks for something that triggers a very long answer? Bad news: get your wallet ready.
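The call-answering scenario above can be sketched as a simple cost function. Every price below is a made-up placeholder (not any provider’s real rate), chosen only to show how the bill splits into transcription, LLM input/output tokens, and text-to-speech, and how a long answer inflates it.

```python
# Hedged sketch: estimating the per-call cost of a hypothetical
# transcribe-and-answer system. All prices are invented for illustration.

AUDIO_PRICE_PER_MIN = 0.006     # assumed transcription price, EUR/minute
INPUT_PRICE_PER_1K = 0.0005     # assumed LLM input price, EUR per 1,000 tokens
OUTPUT_PRICE_PER_1K = 0.0015    # assumed LLM output price, EUR per 1,000 tokens
TTS_PRICE_PER_1K_CHARS = 0.015  # assumed text-to-speech price, EUR per 1,000 chars

def call_cost(minutes, input_tokens, output_tokens, reply_chars):
    """Sum the cost of each stage of one handled call."""
    transcription = minutes * AUDIO_PRICE_PER_MIN
    llm = (input_tokens / 1000) * INPUT_PRICE_PER_1K \
        + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K
    speech = (reply_chars / 1000) * TTS_PRICE_PER_1K_CHARS
    return transcription + llm + speech

# The same 2-minute call, with a short answer vs. a very long one:
print(call_cost(2, 800, 150, 600))     # brief reply
print(call_cost(2, 800, 3000, 12000))  # long reply: several times more expensive
```

Even with identical audio input, the second call costs several times the first, purely because of the answer’s length, which is exactly why forecasting costs per number of interactions is so hard.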
In the end, these pricing schemes are this strange precisely so providers can charge as much as possible (while appearing to charge the minimum) to cover costs. Even so, the real cost is very high, and the day will come when AI companies truly need to cover all of their costs and start raising prices on everything related to GPT models, image generation, and so on. Then we will see how some companies that grew thanks to those low prices, and did their business calculations at those prices, are left without users, because no one will be willing to pay what AI really costs.
When Will the Bubble Burst?
I don’t think anyone knows. Basically, it will burst when the funds that lent the money to invest in AI infrastructure start demanding returns. Right now we are in the stage the Silicon Valley tech world calls “creating the need.” Well, who considers ChatGPT a necessary tool? Who wouldn’t miss it? What designer or photographer doesn’t use Adobe’s AI to improve or modify their photographs? Today, I believe the “creating the need” stage is already complete, so I expect the price increases to come soon… possibly in 2025.
What is clear to me is that Elon Musk has just built a cluster of more than 100,000 NVIDIA H100 cards (each costing about €30,000) in less than 120 days to launch his own AI company. So I ask: what return-on-investment timeframe is that company working with? Who do you think is going to pay that cost? And most importantly, how much benefit must it bring us users for us to be willing to pay what they need us to pay to cover their costs and earn the profit they expect?
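The two figures quoted above already imply a striking total. This is simple arithmetic on the numbers from the paragraph (card count and per-card price as stated there; everything else, such as cooling, power, and buildings, is excluded):

```python
# Back-of-the-envelope total for the cluster described in the text:
# 100,000 cards at roughly 30,000 EUR each, hardware only.
cards = 100_000
price_per_card = 30_000  # EUR, figure quoted in the article
total = cards * price_per_card
print(f"{total:,} EUR")  # prints "3,000,000,000 EUR": three billion, cards alone
```

Three billion euros in cards alone, before electricity, cooling, or staff, is the scale of investment on which someone must eventually demand a return.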
This is why I believe that AI is currently a bubble, and within one or two years, we’ll see who remains standing and at what cost…