After DeepSeek, Chinese AI expert calls for local alternative to Nvidia’s CUDA platform

The success of DeepSeek’s cost-efficient, high-performance models may have given China an edge over the US in the race for AI dominance. However, a Chinese AI expert has argued that true self-sufficiency in AI will come from developing homegrown alternatives to platforms such as Nvidia’s CUDA.

“DeepSeek has made an impact on the CUDA ecosystem, but it has not completely bypassed CUDA, as barriers remain… In the long run, we need to establish a set of controllable AI software tool systems that surpass CUDA,” Li Guojie, a senior AI researcher at the Chinese Academy of Sciences, was quoted as saying in a report by the South China Morning Post.

While acknowledging that the hardware capabilities of China’s AI accelerator chips were comparable to Nvidia’s offerings, the 81-year-old computer scientist reportedly identified the CUDA ecosystem as the chip giant’s true core strength.


Launched in 2006, CUDA is Nvidia’s parallel computing platform and developer toolkit, which lets programmers build and accelerate applications, including AI workloads, by drawing on the compute power of Nvidia’s GPU accelerator chips.
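By way of illustration (this sketch is not from the article, and the names in it are illustrative), a minimal CUDA program defines a “kernel” function that is compiled for the GPU and launched across thousands of lightweight threads, each handling one element of the data:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Minimal CUDA kernel: each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        // Unified (managed) memory is accessible from both CPU and GPU.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
        // Launch enough 256-thread blocks to cover all n elements.
        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();  // wait for the GPU before reading results
        printf("c[0] = %f\n", c[0]);  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Tooling like this, together with the libraries and developer base built on top of it over nearly two decades, is the ecosystem lock-in that Li identifies as Nvidia’s real moat.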

Li’s comments come as China rallies its tech industry to break away from the Nvidia ecosystem. During his visit to China in January this year, Nvidia CEO Jensen Huang revealed that around 1.5 million developers in China were using the company’s CUDA computing platform.

Commenting on the impact of DeepSeek’s purported breakthrough, Li said, “The emergence of DeepSeek has forced the AI community to seriously reconsider whether to keep burning money and gamble, or to seek a new way to optimise algorithms. DeepSeek’s achievements suggest that algorithmic and model architecture optimisation can also lead to miracles.”

The computer scientist also questioned OpenAI’s strategy to continue spending on computing resources in order to improve its AI models.


“In AI, the scaling law is viewed by some as an axiom … and companies like OpenAI and the US AI investment community have treated it like a winning formula. But the scaling law is not a scientifically verified principle like Newton’s laws; it is a generalisation based on the recent experiences of OpenAI and others in developing large language models,” Li said.

“Whether increasing the amount of training data will yield returns corresponding to investment will depend on actual outcomes in the future,” he added.
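For readers unfamiliar with the term (this context is not from Li’s remarks): a scaling law is an empirical curve fit, not a derived principle. One widely cited form, from DeepMind’s 2022 Chinchilla paper (Hoffmann et al.), models a language model’s training loss as

    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

where N is the number of model parameters, D the number of training tokens, and E, A, B, α, β are constants fitted to past training runs. As Li argues, the exponents are extrapolations from observed data rather than first principles, so there is no guarantee the returns hold at ever-larger scales.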
