The development of Google's Tensor SoCs is steadily progressing, breaking away from Samsung's all-in-one foundry and packaging model.
Since the first Tensor SoC was featured in the Pixel 6 series in 2021, Google has used Samsung-produced chips as the core of its phones. However, next year’s tenth-generation Pixel will see a major change, with the Tensor G5 expected to be the first Pixel-specific chip produced by TSMC.
The Information reported in July last year that Google had reached an agreement with TSMC to produce fully customized Tensor SoCs for Pixel devices. If Google keeps its current naming scheme, the chip would be called the Tensor G5. Since that report, development of the Tensor SoC has made steady progress, including reports that KYEC has secured chip-testing orders, breaking away from Samsung's all-in-one foundry and packaging model.
Recently, another outlet, Business Korea, reported that the Tensor G5, due next year, will be built on TSMC's 3nm process, which is expected to significantly raise the performance of the Pixel series. The Tensor G3 in the current Pixel 8 series is built on Samsung's 4nm process, so a move to 3nm by the second half of 2025 looks all but inevitable.
This is not surprising: Apple already moved to 3nm with last year's iPhone 15 Pro series. More importantly, Qualcomm's and MediaTek's next-generation chips are expected to follow suit, so the Tensor G5 will not enjoy an exclusive process advantage within the non-Apple camp.
Business Korea also reported that Samsung is working hard to resolve yield and power-consumption issues, and that the upcoming Exynos 2500 is claimed to offer 10% to 20% lower power consumption and better heat dissipation than chips built on TSMC's 3nm process.
Apple has used its self-developed A-series chips since the iPhone 4 and has extended its custom M-series silicon across the entire Mac lineup. It has already been shipping 3nm chips in the iPhone and Mac for nearly a year, while its Android-camp competitors are only just moving to that node. With Tensor SoC production shifting to TSMC, the next Pixel devices are expected to see significant upgrades.
1. Google’s New Generation Cloud AI Chip
Google also introduced the TPUv5p, its latest generation of cloud AI chips, which it describes as its most powerful and cost-effective TPU to date. Each TPUv5p pod contains up to 8,960 chips, interconnected through high-bandwidth chip-to-chip links for rapid data transfer and high performance.
The new TPUv5p delivers strong AI performance: 459 teraFLOPS of bfloat16 compute or 918 teraOPS of Int8 compute per chip, 95GB of high-bandwidth memory, and 2.76TB/s of memory bandwidth. Compared with the previous-generation TPUv4, the TPUv5p doubles the floating-point throughput and triples the high-bandwidth memory capacity, which has drawn widespread attention in the AI field.
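To put those per-chip figures in context, here is a rough back-of-envelope sketch in Python of a full pod's theoretical peak compute, assuming ideal linear scaling across all 8,960 chips; real sustained throughput will be lower, and the constants simply restate the numbers quoted above.

```python
# Back-of-envelope peak compute for a full TPUv5p pod, assuming ideal
# linear scaling across every chip (sustained throughput will be lower).
CHIPS_PER_POD = 8_960          # chips per TPUv5p pod, as quoted above
BF16_TFLOPS_PER_CHIP = 459     # peak bfloat16 teraFLOPS per chip
INT8_TOPS_PER_CHIP = 918       # peak Int8 teraOPS per chip

pod_bf16_exaflops = CHIPS_PER_POD * BF16_TFLOPS_PER_CHIP / 1e6
pod_int8_exaops = CHIPS_PER_POD * INT8_TOPS_PER_CHIP / 1e6

print(f"Peak bfloat16: ~{pod_bf16_exaflops:.2f} exaFLOPS per pod")  # ~4.11
print(f"Peak Int8:     ~{pod_int8_exaops:.2f} exaOPS per pod")      # ~8.23
```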
Furthermore, the TPUv5p trains large language models (LLMs) 2.8 times faster and improves by about 50% over the TPUv5e. Google has also enhanced scalability, making the TPUv5p four times as scalable as the TPUv4. Overall, compared with the TPUv4, the TPUv5p offers twice the floating-point throughput, three times the memory capacity, 2.8 times faster LLM training, 1.9 times faster training of embedding-dense models, 2.25 times the memory bandwidth, and twice the chip-to-chip interconnect bandwidth.
Google attributes much of its success in AI to the combination of strong hardware and software. Its cloud AI supercomputer is an integrated set of components designed for modern AI workloads: performance-optimized compute, optimized storage, and liquid cooling work together to make full use of its enormous computing power and deliver industry-leading performance.
On the software side, Google has strengthened support for popular machine learning frameworks such as JAX, TensorFlow, and PyTorch, providing powerful tools and compilers that optimize for distributed architectures and make it easier and more efficient to develop and train complex models on different hardware platforms. Google has also developed multi-chip training and multi-host inference software to simplify scaling, training, and workload management.
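As a rough illustration of what that framework-level support looks like in practice, here is a minimal data-parallel training sketch using JAX's public sharding APIs (jax.sharding.Mesh, NamedSharding, PartitionSpec). The toy model, shapes, and learning rate are purely illustrative and not drawn from Google's materials; on a Cloud TPU VM the batch is split across the attached TPU chips, and the same code falls back to CPU devices for local testing.

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1-D mesh over all attached accelerators (TPU chips on a
# Cloud TPU VM, or CPU devices when testing locally).
devices = mesh_utils.create_device_mesh((jax.device_count(),))
mesh = Mesh(devices, axis_names=("data",))

batch_sharding = NamedSharding(mesh, P("data"))  # split the batch across devices
replicated = NamedSharding(mesh, P())            # copy parameters to every device

def loss_fn(w, x, y):
    # Toy linear model with a mean-squared-error loss.
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

@jax.jit
def train_step(w, x, y, lr=1e-2):
    # XLA partitions this computation across the mesh; the gradient
    # reduction over the "data" axis is inserted automatically.
    grads = jax.grad(loss_fn)(w, x, y)
    return w - lr * grads

key = jax.random.PRNGKey(0)
n = jax.device_count() * 8                       # batch size divisible by device count
w = jax.device_put(jax.random.normal(key, (128, 1)), replicated)
x = jax.device_put(jax.random.normal(key, (n, 128)), batch_sharding)
y = jax.device_put(jnp.zeros((n, 1)), batch_sharding)

w = train_step(w, x, y)
print("loss after one step:", loss_fn(w, x, y))
```

In principle, scaling this pattern to a larger TPU slice is mostly a matter of enlarging the mesh rather than rewriting the model code.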
Google's approach to AI is strongly supported by these hardware and software elements, pushing past various industry limitations. The newly released TPUv5p cloud AI chips and Google's AI supercomputers should open up more possibilities and opportunities for ongoing AI development, and these technologies are expected to further intensify competition and drive the field forward.