Microsoft is reportedly training its own artificial intelligence model to compete with models from Google and OpenAI, the latter being a company it has a multi-year, multi-billion dollar partnership with.
The tech giant’s new internal model, referred to internally as MAI-1, is led by Mustafa Suleyman, former head of AI at Google, The Information reports, citing people familiar with the matter. Microsoft hired Suleyman, who co-founded the AI startups DeepMind (acquired by Google in 2014) and Inflection, where he served as CEO, in March, bringing on most of Inflection’s employees to run its AI division and paying $650 million for the rights to Inflection’s intellectual property. The new model is distinct from the models Inflection previously released, the sources told The Information, though it may be built on Inflection’s training data and other technology.
Microsoft declined to comment on the report.
In a post on LinkedIn, Microsoft Chief Technology Officer Kevin Scott wrote that the company builds “big supercomputers to train AI models” and that OpenAI “uses these supercomputers to train frontier-defining models.”
“Each supercomputer we build for Open AI is a lot bigger than the one that preceded it, and each frontier model it trains is a lot more powerful than its predecessors,” Scott wrote. “We will continue to be on this path – building increasingly powerful supercomputers for Open AI to train the models that will set the pace for the whole field – well into the future.” Scott added that Microsoft has been building AI models of its own for years, and that some of these “models have names like Turing and MAI.”
Because it will be “significantly larger” than the smaller open-source models Microsoft has previously trained, MAI-1 will be expensive, requiring more computing power and training data, the sources told The Information. Compared with open-source models from Meta and Mistral, which have 70 billion parameters (the variables a model learns during training in order to make predictions), MAI-1 will reportedly have roughly 500 billion parameters. OpenAI’s most powerful model, GPT-4, is reported to have more than a trillion parameters.
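To see why parameter count drives cost, a rough back-of-envelope calculation (an illustration, not a figure from the report) helps: at 16-bit precision, each parameter takes about two bytes to store, and training requires several times more memory on top of that for optimizer state, gradients, and activations.

```python
# Back-of-envelope estimate of the memory needed just to hold model weights.
# Assumes 16-bit (2-byte) floating-point parameters; real training runs need
# several times more memory for optimizer state, gradients, and activations.

def weight_memory_gb(num_parameters: int, bytes_per_param: int = 2) -> float:
    """Approximate size of the raw weights in gigabytes."""
    return num_parameters * bytes_per_param / 1e9

for name, params in [
    ("Meta / Mistral open-source models", 70e9),
    ("MAI-1 (reported)", 500e9),
    ("GPT-4 (reported)", 1e12),
]:
    print(f"{name}: ~{weight_memory_gb(int(params)):,.0f} GB of weights")
```

At the reported 500 billion parameters, the weights alone would occupy around a terabyte, which is why training a model of this scale requires a large cluster of GPUs rather than a handful of machines.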
According to The Information, Microsoft may preview the model at its annual Build developer conference later this month. The publication added that the company has set aside a large cluster of servers equipped with Nvidia GPUs (graphics processing units), along with large amounts of data, to train the model.