Nvidia CEO Jensen Huang unveiled the AI chipmaker's long-awaited new processor on Monday, saying tech giants like Microsoft and Google are already eagerly awaiting its arrival.
Huang made the announcement at the company's closely watched GPU Technology Conference, or GTC, which has been dubbed the "AI Woodstock" by employees and analysts alike. The annual conference in San Jose, California, came as Nvidia rides a Wall Street frenzy for AI stocks into 2024: the company has beaten high earnings expectations, become the first chipmaker to reach a market capitalization of $2 trillion, and passed established companies including Amazon to become the third most valuable company in the world.
Nvidia's growth has been driven by demand for its $40,000 H100 chips, which power the so-called large language models needed to run generative AI chatbots like OpenAI's ChatGPT. The chips are known as graphics processing units, or GPUs.
On Monday, Huang unveiled Nvidia's next-generation "Blackwell" graphics processor, named after mathematician David Blackwell, the first Black scientist admitted to the National Academy of Sciences. The Blackwell chip contains 208 billion transistors and will be able to run AI models and queries faster than its predecessors, Huang said. Blackwell succeeds Nvidia's highly sought-after H100 chip, which was named after computer scientist Grace Hopper and which Huang called "the world's most advanced GPU currently in production."
"Hopper is fantastic, but we need bigger GPUs," he said. "So ladies and gentlemen, I would like to introduce you to a very big GPU."
Huang said Microsoft, Google parent Alphabet and Oracle are among the tech giants preparing to adopt Blackwell. Microsoft and Meta are Nvidia's two biggest customers for its H100 chips.
Nvidia shares were flat on Monday, but are up more than 83% this year and more than 241% over the past 12 months.
During his keynote speech at Nvidia's conference on Monday, Huang announced new partnerships with design software makers Cadence, Ansys and Synopsys. Cadence, Huang said, is building a supercomputer with Nvidia GPUs. Huang also said Nvidia's AI foundry is working with SAP, ServiceNow and Snowflake.
Huang also gave a shoutout to Dell founder and CEO Michael Dell, whose company works with Nvidia, in the audience. Dell is expanding its AI offerings for customers, including new enterprise data storage built on Nvidia's AI infrastructure.
"Every company will have to build artificial intelligence factories," Huang said. "And it turns out Michael is here, and he'd be happy to take your order."
Huang also announced that Nvidia is creating a digital model of the Earth to predict weather patterns, using a new generative AI model called CorrDiff that can generate images with 12.5 times higher resolution than current models.
Huang said Nvidia's Omniverse computing platform now supports streaming to the Apple Vision Pro headset, and that Chinese electric vehicle maker BYD is adopting Nvidia's next-generation in-vehicle computer, Thor.
Huang concluded his two-hour speech accompanied by two robots, orange and green Star Wars-style BD droids, which he said are powered by Nvidia's Jetson computing systems and learned to walk using Nvidia's Isaac Sim.
Throughout the week at GTC, Nvidia researchers and executives will be joined by influential players in the artificial intelligence industry — including Brad Lightcap, OpenAI’s chief operating officer, and Arthur Mensch, CEO of OpenAI’s French rival, Mistral AI — to host sessions on topics ranging from innovation to ethics.
Microsoft and Meta are Nvidia's biggest customers for the H100 chip, with both tech giants reportedly spending $9 billion on the chips in 2023. Alphabet, Amazon and Oracle were also among the biggest spenders on the chips last year.
But the frenzy around Nvidia's H100 chips has raised concerns about shortages, and competitors looking to get ahead in the AI race have started building their own versions of the chips. Amazon has been developing two chips, Inferentia and Trainium, while Google has been working on its Tensor Processing Units.