Can China’s 14nm–18nm AI Chip Concept Compete with Nvidia’s 4nm Giants?

A senior Chinese semiconductor expert has shared a bold idea about China’s future in AI hardware. Wei Shaojun, vice chairman of the China Semiconductor Industry Association and a professor at Tsinghua University, told an industry event that China could build powerful AI accelerators domestically using 14nm logic chiplets and 18nm DRAM. These are older technologies compared to Nvidia’s advanced chips: Nvidia’s latest Blackwell processors are made on a custom 4nm-class process at TSMC. Even with this gap, Wei claimed that such a 14nm–18nm AI chip could one day offer comparable performance.

Wei spoke at the ICC Global CEO Summit. He said the breakthrough would come from advanced 3D stacking. This is a technology that places chips on top of each other to improve speed and reduce power use.


Wei also talked about a “fully controllable domestic solution.” Under this concept, 14nm logic and 18nm DRAM would be joined using 3D hybrid bonding. He said this could help China reduce its reliance on foreign AI chips like Nvidia’s H20. However, he also admitted that this design does not exist yet, and there is no proof that it can be built with China’s current technology. For now, his comments remain hypothetical.

Even so, Wei described what such a chip might look like. He said it could offer 120 TFLOPS of performance, though he did not specify which precision he was referring to, which makes comparisons difficult. He added that it might consume only 60 watts of power, giving it a power efficiency of 2 TFLOPS per watt. Wei compared this to Intel’s Xeon CPUs, saying the hypothetical chip would be more efficient.

But this would still fall far behind Nvidia’s shipping products. Nvidia’s B200 delivers 10,000 NVFP4 TFLOPS at 1200W, or about 8.33 TFLOPS per watt. The newer B300 is even more efficient at 10.7 TFLOPS per watt. Both are several times more efficient than the theoretical Chinese design.
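The arithmetic behind these comparisons is simple enough to check directly. The sketch below uses only the figures reported in the article; note that the precisions differ (Wei did not specify one for his hypothetical chip), so this is a rough, not apples-to-apples, comparison.

```python
# Reproducing the article's performance-per-watt arithmetic.
# All figures are taken from the article itself; precisions are not
# comparable, so treat the ratio as a rough illustration only.

def tflops_per_watt(tflops: float, watts: float) -> float:
    """Power efficiency in TFLOPS per watt."""
    return tflops / watts

# Wei's hypothetical 14nm/18nm chip: 120 TFLOPS at 60 W
hypothetical = tflops_per_watt(120, 60)       # 2.0 TFLOPS/W

# Nvidia B200: 10,000 NVFP4 TFLOPS at 1200 W
b200 = tflops_per_watt(10_000, 1200)          # ~8.33 TFLOPS/W

print(f"Hypothetical chip: {hypothetical:.2f} TFLOPS/W")
print(f"Nvidia B200:       {b200:.2f} TFLOPS/W")
print(f"Efficiency gap:    {b200 / hypothetical:.1f}x")
```

Even on these charitable numbers, the efficiency gap is roughly 4x, before accounting for the precision mismatch.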

Wei focused on 3D hybrid bonding as the key enabling technology. This method replaces traditional solder bumps with direct copper-to-copper connections at pitches below 10 micrometers. That allows thousands of connections in a very small area and creates very fast communication between stacked chips.

He also discussed near-memory computing, which places memory extremely close to the processor to reduce latency and increase bandwidth. AMD’s 3D V-Cache is one example of this idea: it delivers around 2.5 TB/s of bandwidth, well above the per-stack bandwidth of today’s HBM3E memory.


Wei said that this approach could even scale to ZetaFLOPS performance in the future. But he did not explain how or when that might happen.

He also warned that Nvidia’s CUDA platform is a major challenge. Once developers rely on one system, it becomes very hard to switch to others. This problem affects not just China but also global competitors.

In the end, Wei’s message was clear. China hopes to build competitive AI chips. But for now, the ideas remain theoretical, and the gap with Nvidia is still large.


Onsa Mustafa

Onsa is a Software Engineer and a tech blogger who focuses on providing the latest information regarding the innovations happening in the IT world.
