Amazon.com’s cloud unit said Wednesday that it has partnered with artificial intelligence startup Hugging Face to make it easier to run thousands of AI models on Amazon’s custom computer chips.
Valued at $4.5 billion, Hugging Face has become a central hub for AI researchers and developers to share chatbots and other AI-based software, with support from Amazon, Alphabet’s Google and Nvidia, among others. It’s the main place where developers go to obtain and tinker with open-source AI models like Meta Platforms’ Llama 3.
However, once developers refine an open-source AI model, they typically want to use it to power a piece of software. On Wednesday, Amazon and Hugging Face said they have teamed up to enable this on a custom Amazon Web Services (AWS) chip called Inferentia2.
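In practice, this kind of deployment runs through Hugging Face's optimum-neuron library, which compiles models from the Hugging Face hub for AWS's Neuron chips. The following is a minimal sketch, assuming an AWS inf2 instance with the Neuron SDK and optimum-neuron installed; the model ID and the compiler settings (batch size, sequence length, core count) are illustrative choices, not details from the announcement.

```python
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

# Export (compile) an open-source model for Inferentia2.
# Compiler arguments here are illustrative; real values depend
# on the instance size and the workload.
model = NeuronModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",  # example model from the hub
    export=True,
    batch_size=1,
    sequence_length=2048,
    num_cores=2,
    auto_cast_type="fp16",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Run inference on the compiled model.
inputs = tokenizer("Custom AI chips are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```

Compilation happens once up front; the compiled artifact can then serve repeated inference requests, which is the usage pattern the partnership targets.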
“One thing that is very significant to us is efficiency – making sure that as many people as possible can create models and that they can run them in the most cost-effective way,” said Jeff Boudier, head of product and development at Hugging Face.
For its part, AWS hopes to attract more AI developers to its cloud services for delivering AI. Although Nvidia dominates the market for chips used to train AI models, AWS says its own chips can then run those trained models, a process called inference, at a lower cost over time.
“You train these models maybe once a month. But you may run inference against them tens of thousands of times an hour. This is where Inferentia2 really shines,” said Matt Wood, who oversees artificial intelligence products at AWS.