While many companies racing to build AI models rely heavily on Nvidia’s AI accelerators, Google has pursued a different strategy by developing its own custom Tensor Processing Units (TPUs). The tech giant has now unveiled its eighth-generation TPUs, marking a significant leap beyond the seventh-gen Ironwood TPU announced in 2025.
These new TPUs are more than incremental upgrades. For the first time, Google is splitting the line into two variants tailored to different stages of the AI lifecycle: the TPU 8t for training and the TPU 8i for inference. Google positions the chips as essential for the so-called "agentic era," a phase in which AI systems are expected to operate more autonomously than ever before.
TPU 8t: Accelerating AI Model Training
Training frontier AI models is a resource-intensive process that traditionally spans months. Google's TPU 8t is engineered to compress that timeline dramatically: according to the company, the chip can cut training time from months to weeks, enabling faster iteration and deployment of advanced AI systems.
TPU 8i: Optimizing AI Inference
For the inference phase, where trained models generate outputs, Google introduces the TPU 8i. The chip is designed to improve efficiency and performance in production workloads, keeping AI-driven services fast and responsive. Together, the TPU 8t and TPU 8i form a cohesive platform for building and deploying next-generation AI agents.
Why the Agentic Era Demands New Hardware
Google argues that the agentic era represents a fundamental shift in AI capabilities. Unlike earlier systems focused on narrow tasks, agentic AI is expected to carry out complex, multi-step operations with greater autonomy. That evolution demands hardware that can meet higher computational loads while improving energy efficiency, a challenge Google aims to address with its new TPUs.