A recent report from Reuters reveals that OpenAI, the maker of ChatGPT, is moving away from Nvidia's AI chips for the first time. The company has begun renting Google's AI chips to power its services, including ChatGPT.
This marks a significant change for OpenAI, which has relied heavily on Nvidia GPUs to train and run its AI models. Until now, those GPUs have also been its main hardware for inference, the stage at which trained models are applied to real-world tasks. OpenAI is now leaning on Google's infrastructure as well to meet its growing computing needs.
Earlier this month, Reuters reported that OpenAI plans to use Google Cloud services, a move that hints at an unexpected partnership between two companies that compete directly in AI.
For Google, offering its tensor processing units (TPUs) to outside companies is part of a broader push. Once reserved mainly for Google's own projects, TPUs are now attracting clients such as Apple, as well as AI startups like Anthropic and Safe Superintelligence, both of which were founded by former OpenAI leaders.
The decision to rent Google's TPUs also marks a shift away from OpenAI's reliance on Microsoft's data centers. According to a report from The Information, TPUs could cost less than Nvidia's GPUs, so using Google's chips may help OpenAI cut costs, particularly for inference, where trained models serve responses to users in real time.
However, Google is not offering its most powerful TPUs to OpenAI. A Google Cloud employee told The Information that the company is withholding its top-tier chips from a direct rival in the fast-moving AI field.