The 2025 Chip Showdown: Top Four Markets at Stake

The wave of artificial intelligence is surging and has the potential to reshape the semiconductor industry's competitive landscape. As one of the core links in the industry chain, chip design has become a key battleground. The global chip design industry is highly concentrated: according to TrendForce, the top ten chip design companies are expected to generate combined revenues of approximately $249.8 billion in 2024, with the top five contributing over 90% of the total. IC design giants such as Nvidia, AMD, Qualcomm, and MediaTek are actively positioning themselves around four key markets: smartphones, AI PCs, automobiles, and servers. At the same time, as AI drives growing demand for high-performance chips and market competition intensifies, collaboration across the industry chain is becoming a clear trend.

Smartphone Market: Significant Improvement in Flagship Chip AI Performance

AI and advanced process nodes have become the watchwords of the smartphone market, with Qualcomm and MediaTek, the two leading smartphone SoC vendors, racing to launch their latest flagship chips to meet demand.

Qualcomm’s flagship chip, the Snapdragon 8 Elite, focuses on performance, on-device AI, and the gaming ecosystem. Built on a second-generation 3nm process (TSMC N3E), it uses two custom Oryon prime cores clocked above 4.2GHz, delivering a 35% single-core performance improvement and handling heavy workloads such as game rendering and AI inference, plus six custom performance cores at 3.5GHz with a 40% energy-efficiency improvement, responsible for multitasking and background management.

To meet AI computing power demands, the Snapdragon 8 Elite integrates the sixth-generation AI engine (Hexagon NPU), delivering 73 TOPS (73 trillion operations per second), a 45% improvement over the previous generation.
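As a quick sanity check on the figures above (a back-of-the-envelope calculation, not a Qualcomm disclosure), the 45% uplift implies the previous-generation NPU delivered roughly 50 TOPS:

```python
# Back-of-the-envelope check using only the figures quoted above.
new_npu_tops = 73           # Hexagon NPU throughput, trillions of ops per second
gen_over_gen_uplift = 0.45  # "45% improvement over the previous generation"

# Implied previous-generation figure: new = old * (1 + uplift)
old_npu_tops = new_npu_tops / (1 + gen_over_gen_uplift)
print(f"Implied previous-generation NPU throughput: {old_npu_tops:.1f} TOPS")  # ~50.3
```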

For gaming, the Snapdragon 8 Elite is equipped with the Adreno 830 GPU, which supports hardware-accelerated ray tracing and global illumination and improves graphics rendering speed by 25%. Through Snapdragon Elite Gaming, it also enables Variable Rate Shading (VRS) and game super-resolution.

In April this year, MediaTek officially launched the Dimensity 9400+, a flagship 5G AI mobile chip offering generative and agentic AI capabilities and delivering a flagship smartphone experience that combines high intelligence, high performance, high energy efficiency, and low power consumption.

The Dimensity 9400+ adopts a second-generation full-large-core architecture, with an 8-core CPU, including one Arm Cortex-X925 ultra-large core with a frequency of up to 3.73GHz, three Cortex-X4 ultra-large cores, and four Cortex-A720 large cores.

The Dimensity 9400+ integrates MediaTek’s eighth-generation NPU 890 AI processor, which is among the first in the industry to support the DeepSeek-R1 inference model and enhanced speculative decoding (SpD+), improving AI inference speed by 20%. The chip is also equipped with a 12-core Arm Immortalis-G925 GPU, supporting MediaTek’s opacity micromap (OMM) ray-tracing engine and frame-rate doubling technology for richer visual effects and a smoother mobile gaming experience.

AI PC: A Key Battlefield for Chip Design Manufacturers

Driven by the explosive demand for AI computing power and the gradual recovery of the PC market, AI PCs have become a key battlefield for chip design manufacturers, including AMD, Nvidia, and others.

In March this year, AMD held the “AI PC” innovation summit, showcasing its AMD Ryzen AI PC ecosystem, including laptops, Mini PCs, and other products using AMD Ryzen AI Max, AMD Ryzen AI 300, and AMD Ryzen 9000HX series processors.

According to AMD, the Ryzen AI Max series processors offer workstation-class performance, integrating 16 “Zen 5” CPU cores, 40 AMD RDNA 3.5 graphics compute units, and an AMD XDNA 2 NPU with up to 50 TOPS of AI compute on a single chip. The processors also support up to 128GB of unified memory, with up to 96GB assignable to graphics, enabling smooth, reliable multitasking and the local operation of large-scale AI models.
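To make the “large-scale AI models” claim concrete, the rough sketch below estimates how much memory model weights alone require at different parameter counts and precisions; the model sizes and precisions are illustrative assumptions, not AMD figures, and real workloads also need headroom for activations and the KV cache:

```python
# Rough weight-memory estimate: bytes = parameters * bytes_per_parameter.
# Model sizes and precisions below are illustrative assumptions, not AMD figures.
GIB = 1024 ** 3

def weight_memory_gib(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the model weights, in GiB."""
    return params_billion * 1e9 * bytes_per_param / GIB

for params_b in (7, 13, 70):
    for label, bytes_pp in (("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)):
        gib = weight_memory_gib(params_b, bytes_pp)
        verdict = "fits" if gib <= 96 else "exceeds"
        print(f"{params_b:>3}B params @ {label}: {gib:6.1f} GiB -> {verdict} the 96GB graphics budget")
```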

The Ryzen 9000HX series introduces redesigned second-generation 3D V-Cache technology, which places the cache die beneath the processor cores for higher performance, lower temperatures, and higher clock frequencies. As the top product in the series, the Ryzen 9 9955HX3D is expected to be one of the fastest mobile processors for gamers and creators.

Nvidia’s strategy centers on the RTX 5090 GPU, which integrates 92 billion transistors, delivers 4,000 AI TOPS of compute, supports running large models of up to 200 billion parameters locally, and offers a 100% performance improvement over the previous generation. Built on a TSMC 4nm-class custom process with fourth-generation RT Cores, it improves the frame rate of AAA games such as Cyberpunk 2077 by 60% at 4K resolution.

Additionally, earlier this year MediaTek announced a collaboration with Nvidia to design the NVIDIA GB10 Grace Blackwell superchip, which will power NVIDIA Project DIGITS, Nvidia’s personal AI supercomputer aimed at delivering supercomputing-class performance on the desktop. The GB10-based device targets AI researchers, data scientists, and other scientific users.

Automobiles: Chips Empower the Development of Full Vehicle Intelligence

At the 2025 Shanghai Auto Show, a number of automotive chips made high-profile debuts, supporting the push toward full vehicle intelligence.

Qualcomm demonstrated its Snapdragon 8775 platform, which uses a heterogeneous CPU+NPU+GPU computing architecture. A single chip supports 4K multi-screen interaction, high-speed NOA (navigate-on-autopilot) assisted driving, and real-time body-domain control, with a system bandwidth of 154GB/s.
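For a rough sense of scale (an illustration with assumed display parameters, not Qualcomm data), the sketch below compares the quoted 154GB/s system bandwidth against the raw scan-out bandwidth of uncompressed 4K displays; in practice most of that bandwidth is consumed by CPU, GPU, and NPU traffic rather than display refresh:

```python
# Raw scan-out bandwidth per uncompressed display: width * height * bytes_per_pixel * fps.
width, height = 3840, 2160   # 4K UHD resolution (assumed)
bytes_per_pixel = 4          # 8-bit RGBA (assumed)
fps = 60                     # refresh rate (assumed)
system_bandwidth_gbs = 154   # figure quoted for the platform

per_display_gbs = width * height * bytes_per_pixel * fps / 1e9
print(f"~{per_display_gbs:.1f} GB/s of raw scan-out per 4K display")
print(f"~{system_bandwidth_gbs / per_display_gbs:.0f} displays' worth of scan-out "
      f"fits in 154 GB/s, before any compute traffic")
```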

Qualcomm also showcased key technologies developed with a wide range of automotive ecosystem partners, highlighting its latest progress in technology deployment, intelligent-experience upgrades, and ecosystem building. Qualcomm and Desay SV announced that they will jointly develop a series of combined driver-assistance solutions under a “same hardware, two sets of algorithms” cooperation model, accelerating the global automotive industry’s rollout and upgrade of safe, reliable driver-assistance functions.

Furthermore, Qualcomm partnered with Baojun, Leapmotor, Chery, SAIC Buick, FAW Hongqi, and others to showcase solutions based on the Snapdragon Ride platform at the show. Built around a range of SoCs, accelerators, and software stacks, Snapdragon Ride offers flexibility and scalability, providing driver-assistance solutions for models at different development stages and price segments, significantly reducing development complexity for partners and accelerating the adoption of driver-assistance features.

MediaTek launched its flagship automotive cockpit platform, the Dimensity C-X1. Built on an advanced 3nm process, it adopts the Armv9.2-A architecture and integrates an NVIDIA Blackwell-architecture GPU and deep learning accelerator, with dual AI engines forming a flexible computing architecture that provides ample AI performance for future smart-cockpit demands. On the entertainment side, the C-X1 incorporates NVIDIA RTX ray-tracing technology, bringing realistic lighting to game rendering for a clear, smooth in-car AAA gaming experience.

Paired with Nvidia’s advanced safety and AI processors (such as NVIDIA DRIVE AGX Thor), the Dimensity C-X1 cockpit platform forms a complete centralized computing solution capable of consolidating all of the vehicle’s domain processing.

Intel unveiled the second-generation Intel AI-enhanced Software Defined Vehicle (SDV) SoC at the Shanghai Auto Show, becoming the first in the automotive industry to adopt a multi-node chiplet architecture. Car manufacturers can customize computing, graphics, and AI functions according to their needs, reducing development costs and shortening time to market. Compared to the previous generation, the new product’s generative and multimodal AI performance can be up to 10 times higher, and graphics performance can be 3 times higher, delivering richer human-machine interface (HMI) experiences. Intel claims this flexible and future-oriented design will help car manufacturers create differentiated products, providing next-generation experiences for drivers and passengers while reducing power consumption and costs.

Servers: Nvidia and AMD Go Head-to-Head

Driven by both digital transformation and exploding demand for AI computing power, servers, and AI servers in particular, are becoming increasingly important.

Nvidia is committed to consolidating its advantages in data centers, AI, and HPC, and its server product lineup is characterized by multi-tier, full-scenario coverage. Its GPU accelerators include the H100/B100 series for AI training, based on the Hopper and Blackwell architectures, which integrate Transformer Engines and FP8 precision computing and offer three times the AI training performance of the A100. The B200 chip delivers 20 petaFLOPS of FP4 compute and supports training models with up to 100 trillion parameters. The lineup also includes the A40 and RTX 4090 for inference and graphics rendering, and the energy-efficiency-focused T4 and L4 series.

In CPUs and DPUs, the Arm-based Grace CPU connects to GPUs via NVLink-C2C, a cache-coherent link providing 900GB/s of bandwidth. The BlueField-4 DPU integrates ASAP2 and NVMe SNAP technologies for network and storage virtualization offload. Nvidia has also launched the DGX GB200 AI server system, which combines 72 B200 GPUs and 36 Grace CPUs, delivers 1.44 exaFLOPS of FP4 compute, and uses cold-plate liquid cooling.
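The rack-level figure is consistent with the per-chip numbers quoted above; a minimal cross-check:

```python
# Cross-check of the FP4 figures quoted above.
b200_pflops_fp4 = 20     # petaFLOPS per B200 at FP4
gpus_per_system = 72     # GPUs in the DGX GB200 system

system_eflops = b200_pflops_fp4 * gpus_per_system / 1000  # 1,000 PFLOPS = 1 EFLOPS
print(f"{gpus_per_system} x {b200_pflops_fp4} PFLOPS = {system_eflops:.2f} exaFLOPS (FP4)")
# -> 1.44 exaFLOPS, matching the system-level figure above
```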

At the GTC 2025 conference in March, Nvidia announced that the Blackwell Ultra architecture will arrive in the second half of 2025. Systems based on it, such as the GB300 NVL72, will combine 72 Blackwell Ultra GPUs with 36 Arm Neoverse-based NVIDIA Grace CPUs, delivering 1.5 times the compute (FLOPS) of the GB200 NVL72 along with 1.5 times the memory and twice the bandwidth. Nvidia partners including Cisco, Dell Technologies, HPE, and Lenovo are expected to launch Blackwell Ultra-based servers from the second half of 2025.

AMD, another major player, is also actively advancing in the server chip field. With the continuous iteration of its Zen architecture and EPYC processors, AMD’s server chip layout shows its ambition to lead the market.

AMD’s fifth-generation EPYC processor, based on the Zen 5 architecture and using TSMC’s 3nm process, improves single-threaded performance by 25% and energy efficiency by 30%. By integrating an AI acceleration engine and CXL 3.0 memory expansion, the architecture achieves performance leaps in AI inference and scientific computing scenarios.

Recently, AMD showcased the first core complex die (CCD) for its sixth-generation EPYC “Venice” processor, built on TSMC’s 2nm-class N2 process. The die has completed initial bring-up and testing, and Venice is expected to launch in 2026 as the first high-performance computing processor based on this advanced node.

The new Zen 6 architecture continues AMD’s pattern of delivering performance gains of more than 30% per generation. In addition, TSMC’s N2 process allows the chip to cut power consumption by up to 35% at the same performance, or to improve performance by 15% at the same power. AMD also said its fifth-generation EPYC processors have been validated at TSMC’s Arizona fab, and that the 192-core fifth-generation EPYC is already in commercial use in servers from Dell, HPE, and other partners, delivering twice the throughput of the previous generation in AI training scenarios.
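To put the quoted N2 power figure in concrete terms, the sketch below applies the “up to 35% lower power at the same performance” number to a hypothetical server fleet; the per-socket power and fleet size are illustrative assumptions, not AMD or TSMC figures:

```python
# Fleet-level effect of the quoted "up to 35% lower power at the same performance".
# The per-socket power and fleet size are assumptions for illustration only.
socket_power_w = 400     # hypothetical per-socket CPU power draw, in watts
sockets = 10_000         # hypothetical fleet size
power_reduction = 0.35   # TSMC N2 figure quoted above

baseline_mw = socket_power_w * sockets / 1e6
n2_mw = baseline_mw * (1 - power_reduction)
print(f"Baseline: {baseline_mw:.1f} MW, on N2: {n2_mw:.2f} MW "
      f"({baseline_mw - n2_mw:.2f} MW saved at equal performance)")
```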

AI-Driven: Deeper Collaboration in the Chip Industry Chain

As competition in the semiconductor industry intensifies and AI drives rapid gains in chip performance, collaboration among design houses, foundries, and packaging providers is clearly deepening, continuously pushing the limits of chip architecture and performance.

Currently, collaboration between chip design companies and wafer foundries is the most basic model: companies such as Nvidia, Qualcomm, AMD, and MediaTek rely on foundries such as TSMC, Samsung, GlobalFoundries, and UMC to manufacture their chips. As technology advances, this collaboration is moving toward deeper, more co-optimized forms, especially at advanced nodes and in advanced packaging.

Traditional monolithic system-on-chip (SoC) designs are approaching physical limits in terms of miniaturization, with sharply increased design and manufacturing costs and complexity. To continue improving performance and integration, the industry is shifting toward more advanced solutions, such as 3D integrated circuits (3D-IC), chiplets, and heterogeneous integration. These technologies require cross-field collaboration between design, process, packaging, and system experts. For example, achieving high-density interconnections and managing the thermal effects of complex multi-chip systems requires close cooperation between design companies, foundries, and packaging factories.
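One way to make the economics concrete is a simple defect-yield calculation: smaller dies yield better, and chiplets can be tested and binned as known-good dies before packaging. The sketch below uses a basic Poisson yield model (yield = exp(-defect density × die area)); the defect density and die areas are illustrative assumptions, not foundry figures, and packaging yield, which partly offsets the gain, is ignored:

```python
import math

# Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die areas are illustrative assumptions, not foundry figures.
defect_density = 0.1    # defects per cm^2 (assumed)
monolithic_cm2 = 6.0    # one large monolithic SoC die (assumed)
chiplet_cm2 = 1.5       # each of four chiplets covering the same total area (assumed)

def die_yield(area_cm2: float) -> float:
    return math.exp(-defect_density * area_cm2)

# Wafer area consumed per good product, assuming chiplets are tested individually
# and only known-good dies are packaged together (packaging yield ignored).
monolithic_area_per_good = monolithic_cm2 / die_yield(monolithic_cm2)
chiplet_area_per_good = 4 * chiplet_cm2 / die_yield(chiplet_cm2)

print(f"Monolithic die yield: {die_yield(monolithic_cm2):.1%}")   # ~54.9%
print(f"Chiplet die yield:    {die_yield(chiplet_cm2):.1%}")      # ~86.1%
print(f"Wafer area per good product: {monolithic_area_per_good:.1f} cm^2 (monolithic) "
      f"vs {chiplet_area_per_good:.1f} cm^2 (four chiplets)")
```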

Additionally, the explosive growth of AI and HPC applications has raised the requirements for dedicated, high-performance, and energy-efficient chips (GPU, NPU, AI accelerators). Downstream system manufacturers (such as cloud service providers and automobile manufacturers) are increasingly inclined to seek custom or semi-custom chip solutions for specific applications, directly driving closer cooperation between chip design companies, foundries, and even end customers. Achieving system and technology optimization collaboration has become the key to meeting these demands.

Conclusion

The global chip market faces both challenges and opportunities. As the core of the semiconductor industry chain, chip design is entering a new round of technological and ecosystem change. This quiet war is being fought across smartphones, AI PCs, automobiles, and servers, with technological innovation, product iteration, and industry-chain cooperation serving as manufacturers’ main weapons. Amid this competition among giants, what new changes will chip design and the broader semiconductor industry chain undergo? We shall wait and see.

Source: TrendForce


Disclaimer:

  1. This channel does not make any representations or warranties regarding the availability, accuracy, timeliness, effectiveness, or completeness of any information posted. It hereby disclaims any liability or consequences arising from the use of the information.
  2. This channel is non-commercial and non-profit. The re-posted content does not signify endorsement of its views or responsibility for its authenticity. It does not intend to constitute any other guidance. This channel is not liable for any inaccuracies or errors in the re-posted or published information, directly or indirectly.
  3. Some data, materials, text, images, etc., used in this channel are sourced from the internet, and all reposts are duly credited to their sources. If you discover any work that infringes on your intellectual property rights or personal legal interests, please contact us, and we will promptly modify or remove it.
