Nvidia unveiled a next-generation artificial intelligence (AI) chip called Rubin this week ahead of Computex in Taipei. The chip, currently under development, will use 8 stacks of HBM4, the next generation of high-bandwidth memory. The platform will be available in 2026.
The new chip platform will include new graphics processors to train and run AI systems, as well as a central processor called "Vera," although Sunday's announcement provided few details.
The Rubin AI platform is named after the astronomer who discovered evidence of dark matter
In a keynote speech at National Taiwan University, Jensen Huang said that Nvidia sees the development of generative artificial intelligence as a new industrial revolution and expects to play a major role in it. The company also introduced new tools and software models, including Project G-Assist, an AI-based assistant. Nvidia has committed to releasing new AI chip models on an "annual cadence," Huang said.
Nvidia named its Rubin AI platform after Vera Florence Cooper Rubin, the astronomer whose observations provided key evidence for dark matter. Huang said the upcoming Rubin platform will use HBM4, the next generation of high-bandwidth memory.
The company also plans to introduce Rubin Ultra with 12 HBM4 stacks.
“Computational Inflation”
“We are seeing computational inflation,” Huang said on Sunday. As the amount of data that needs to be processed grows exponentially, traditional computing methods cannot keep up, and only Nvidia’s accelerated computing approach can bring costs down, Huang said. He touted a 98% cost saving and 97% reduction in power consumption with Nvidia’s technology, calling it “CEO math that’s not exact, but it’s correct.”
A Bloomberg report said Nvidia was a major beneficiary of a flood of artificial intelligence spending that made the company the world’s most valuable chipmaker. But now it plans to expand its customer base beyond the handful of cloud computing giants that generate most of its sales. The report added that Huang expects more companies and government agencies to adopt artificial intelligence as part of the expansion.