OpenAI is negotiating with investors and venturing into hardware production

It didn’t take long to identify the elephant in the room: the shortage of specialized chips produced by Nvidia, the problem everyone knows about but few dare to name. All the major technology companies working on artificial intelligence are, figuratively, waiting in line outside Nvidia’s warehouses. News emerged five months ago that OpenAI was considering its own chip production and seeking investors. The idea wasn’t unexpected; the question was how to raise the money: Sam Altman estimated that establishing new semiconductor fabs, along with the necessary infrastructure, could cost an incredible $7 trillion. Yesterday it was announced that MGX, an investment firm based in Abu Dhabi, had entered negotiations with Sam Altman over the production of semiconductors specialized for AI. As reported by the Financial Times, this news is significant in two ways: first, OpenAI, or Sam Altman independently, is venturing into hardware production and securing investors; second, it reflects the United Arab Emirates’ commitment to becoming a major player in the global artificial intelligence sector by injecting enormous sums of dollars.

Scarcity and Costs

OpenAI is not alone in facing the scarcity and high cost of graphics processing units (GPUs), which are crucial for running its AI models, including ChatGPT. The company, like its competitors, relies on Nvidia GPUs, which dominate with over 80 percent of the global market. Demand for the processors needed by OpenAI, Microsoft, Meta, and Google, recently joined by Apple, is such that it has lifted Nvidia’s market value above that of Amazon and Alphabet. Last year, Nvidia’s revenue increased by 125 percent, reaching $60 billion, and similar growth is predicted for this year. Sam Altman often emphasizes the shortage of advanced processors and the “staggering” costs of OpenAI’s operations. Running ChatGPT is an expensive business: each query costs approximately 4 cents, according to an analysis by Stacy Rasgon of the research firm Bernstein. If ChatGPT queries were to grow to a tenth of the volume of Google searches, it would require approximately $48.1 billion worth of GPUs up front and around $16 billion worth of chips annually to remain operational.
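The order of magnitude of Rasgon’s estimate can be sanity-checked with simple arithmetic. The sketch below is a rough back-of-the-envelope calculation, not Bernstein’s actual model: the 4-cents-per-query figure comes from the article, while the Google search volume (about 8.5 billion searches per day) is an assumed, commonly cited estimate.

```python
# Back-of-the-envelope check of the per-query economics cited in the article.
# ASSUMPTION (not from the article): Google handles ~8.5 billion searches/day.
GOOGLE_SEARCHES_PER_DAY = 8.5e9
COST_PER_QUERY = 0.04  # ~4 cents per ChatGPT query (Rasgon/Bernstein)

# "A tenth the volume of Google searches":
chatgpt_queries_per_day = GOOGLE_SEARCHES_PER_DAY / 10

daily_cost = chatgpt_queries_per_day * COST_PER_QUERY
annual_cost = daily_cost * 365

print(f"Queries per day:     {chatgpt_queries_per_day:,.0f}")
print(f"Daily serving cost:  ${daily_cost / 1e6:,.1f} million")
print(f"Annual serving cost: ${annual_cost / 1e9:,.1f} billion")
```

Under these assumptions the annual serving cost lands at roughly $12 billion, in the same ballpark as the $16 billion annual chip figure attributed to Bernstein.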

Attempts to Address Dependency

The underlying motivation, not only for OpenAI but for the others as well, lies in minimizing costs and reducing dependence on Nvidia’s specialized AI GPUs, and, perhaps more importantly, in securing a stable supply. Others are trying to solve the same problem. Although Google uses NVIDIA H100 GPUs for its Vertex AI cloud platform and has access to the NVIDIA DGX GH200 AI supercomputer, powered by NVIDIA GH200 Grace Hopper Superchips, it is also building its own hardware. Google has developed the TPU v5p (TPU, short for “tensor processing unit”), an enhanced version of its previous TPU technology designed to significantly accelerate AI training, and has invested an estimated $2 to $3 billion in its own chips. Amazon spent $200 million last year on one hundred thousand Nvidia chips, according to the New York Times, and continues to rely on Nvidia, using more than 16,000 GH200 Grace Hopper superchips. Although Amazon is developing its own Trainium2 and Graviton4 chips for training and running AI models, its demand for Nvidia GPUs remains significant.

The Nvidia chips that Microsoft uses or plans to use include the H100 and H200 Tensor Core GPUs, which power Microsoft’s Azure cloud and AI services. However, Microsoft is also developing its own custom AI chips, the Azure Maia AI Accelerator and the Azure Cobalt CPU, under a project codenamed Athena that began in 2019. Meta plans to purchase 350,000 Nvidia H100 GPUs this year, bringing its total computing power to the equivalent of roughly 600,000 H100 GPUs once its other chips are counted. Nvidia H100 GPUs reportedly cost around $25,000 to $30,000 per unit. Intel, too, is stirring: it has launched an independent company dedicated to generative artificial intelligence, Articul8.
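Combining the two figures reported above, the scale of Meta’s planned purchase is easy to estimate. This is a simple illustrative calculation based only on the article’s numbers (350,000 units at $25,000–$30,000 each), not a reported figure:

```python
# Rough cost of Meta's planned H100 purchase, using the article's
# reported quantity and per-unit price range.
units = 350_000
price_low, price_high = 25_000, 30_000  # USD per H100, as reported

spend_low = units * price_low    # 8.75 billion
spend_high = units * price_high  # 10.5 billion

print(f"Estimated spend: ${spend_low / 1e9:.2f}-{spend_high / 1e9:.2f} billion")
```

That is, on the order of $9–10 billion for the GPUs alone, before infrastructure, networking, and power.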

Nvidia remains firmly entrenched in all these companies, none of which will break free of the dependency overnight, however hard they try. It is especially unrealistic to expect a competitive new fab to be built quickly. Nvidia partners with companies like Foxconn, TSMC, ASML, and Synopsys, which lets it draw on the expertise, infrastructure, and manufacturing capabilities of industry-leading firms. Building a new chip factory requires strategic partnerships, cutting-edge manufacturing technology, enormous capital investment, and access to specialized talent. Moreover, seven trillion is seven trillion. OpenAI and Sam Altman have a tough job ahead of them.
