
OpenAI has struck a surprising deal with Google Cloud for additional computing power, despite the two companies' rivalry in AI, news agency Reuters reported.
Traditionally reliant on Microsoft Azure, OpenAI is now diversifying its infrastructure, following similar partnerships with Oracle, CoreWeave, and SoftBank.
The agreement, finalized in May 2025, comes as OpenAI faces growing demand for compute power, especially after launching graphics-heavy features like Ghibli-style image generation. CEO Sam Altman even joked that the company's GPUs were "melting" under the pressure.
Google is offering its tensor processing units (TPUs) to OpenAI, marking a shift in strategy, as these chips were previously reserved largely for internal use. OpenAI is also working on custom AI chips, expected to roll out by 2026, to reduce its reliance on Nvidia GPUs.
Google's Tensor Processing Units (TPUs) and Nvidia's Graphics Processing Units (GPUs) are both designed for AI workloads, but they have distinct architectures and strengths.
TPUs are application-specific chips custom-built for AI tasks, especially deep-learning inference, while GPUs are general-purpose parallel processors originally designed for graphics that have become the standard for AI work. GPUs' flexibility and mature software ecosystem make them well suited to training complex AI models, whereas TPUs are optimized for the large matrix (tensor) operations at the heart of neural networks.
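The "tensor operations" both chip families accelerate are, at their core, large matrix multiplications. A minimal NumPy sketch of this kind of workload (purely illustrative; the shapes are arbitrary, and real accelerator code would use a framework such as JAX or PyTorch to dispatch the work to TPU or GPU hardware):

```python
import numpy as np

# A single transformer-style projection: multiply a batch of activations
# by a weight matrix. TPUs and GPUs both exist to run billions of these
# multiply-accumulate operations in parallel.
batch, d_in, d_out = 32, 1024, 4096  # hypothetical sizes for illustration
activations = np.random.rand(batch, d_in).astype(np.float32)
weights = np.random.rand(d_in, d_out).astype(np.float32)

output = activations @ weights  # matrix multiply; shape (32, 4096)
print(output.shape)
```

A model like those behind ChatGPT chains enormous numbers of such multiplications, which is why access to specialized accelerators matters so much.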
This deal strengthens Google Cloud's position as a neutral compute provider, even as Google competes in AI services. Meanwhile, parent company Alphabet plans to spend $75 billion on AI-related infrastructure in 2025.