The Massachusetts Institute of Technology (MIT) hosted "Imagination in Action," an event dedicated to AI-driven business. Speaking there, Sam Altman, co-founder and CEO of the research lab OpenAI, suggested that the giant artificial intelligence models we know today are unlikely to grow much larger. We may already be seeing them at their maximum size.

One of the main obstacles to scaling large language models (LLMs) is the extremely high and volatile cost of training and subsequent deployment. GPU prices are a major factor. For example, training the popular AI chatbot ChatGPT reportedly required more than 10,000 such GPUs, and serving all user requests smoothly demands even more hardware. A single new Nvidia H100 GPU, designed for artificial intelligence and high-performance computing (HPC), can cost around $30,600. Ronen Dar, co-founder and CTO of Run:ai, believes that acquiring the computing resources needed to train larger LLMs will cost hundreds of millions of dollars. These are colossal sums, even for technology giants.
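To see where an estimate of that order comes from, the sketch below simply multiplies the GPU count and unit price quoted above. It is a back-of-the-envelope illustration using the article's own figures, not an actual cost model: it ignores networking, power, staffing, and the fact that most developers rent rather than buy such capacity.

```python
# Rough hardware cost estimate using the figures quoted in the article.
# Illustrative assumptions only, not a real training-cost model.

GPU_COUNT = 10_000          # reported number of GPUs used to train ChatGPT
H100_UNIT_PRICE = 30_600    # approximate price of one Nvidia H100, in USD

hardware_cost = GPU_COUNT * H100_UNIT_PRICE
print(f"Hardware alone: ~${hardware_cost:,} (~${hardware_cost / 1e6:.0f}M)")
# Output: Hardware alone: ~$306,000,000 (~$306M)
# Consistent with the "hundreds of millions of dollars" estimate cited above.
```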

Of course, the development of large AI models will not reach a dead end. "We will improve them in other ways," Sam Altman hinted in his speech. Given the sharp rise in LLM-related costs, developers are instead likely to focus on improving model architectures, advancing algorithmic methods, and making better use of data. In other words, the emphasis shifts from sheer scale to quality, and AI stands to benefit from it.