We have already learned that Apple may tap Google's Gemini to power some of the upcoming AI features in iOS 18, but that hasn't stopped the tech giant from working on its own AI models. In a recently published research paper, Apple revealed more details about its approach to its new MM1 AI model.
Apple plans to use a diverse dataset that includes interleaved image-text documents, image-caption pairs, and text-only data to help train and develop MM1. This, Apple says, should allow MM1 to set a new standard in AI's ability to caption images, answer visual questions, and perform natural language inference. The idea seems to be to ensure the highest level of accuracy possible.
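To make the idea of a mixed pre-training diet concrete, here is a minimal sketch of how sampling from those three data sources might look. The mixture weights and the sampler below are illustrative assumptions, not Apple's actual implementation:

```python
import random

# Hypothetical mixture weights for the three data types the paper describes.
# The exact ratios here are assumptions for illustration only.
MIXTURE_WEIGHTS = {
    "interleaved_image_text": 0.45,  # documents with images woven into prose
    "image_caption_pairs": 0.45,     # (image, caption) pairs
    "text_only": 0.10,               # plain text to preserve language ability
}

def sample_batch(sources: dict, batch_size: int) -> list:
    """Build a training batch by first choosing each example's source type
    according to the mixture weights, then drawing from that source."""
    kinds = list(MIXTURE_WEIGHTS)
    weights = [MIXTURE_WEIGHTS[k] for k in kinds]
    batch = []
    for _ in range(batch_size):
        kind = random.choices(kinds, weights=weights, k=1)[0]
        if sources[kind]:
            batch.append((kind, sources[kind].pop()))
    return batch

# Toy stand-in data sources for demonstration.
sources = {
    "interleaved_image_text": [f"doc_{i}" for i in range(100)],
    "image_caption_pairs": [f"pair_{i}" for i in range(100)],
    "text_only": [f"text_{i}" for i in range(100)],
}
print(sample_batch(sources, 8))
```

Mixing in a slice of text-only data alongside the visual sources is what lets a multimodal model keep its plain-language skills sharp while it learns to ground them in images.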
This research approach allows Apple to experiment with several types of training data and model architectures, which should give the AI greater scope to understand and generate language based on both linguistic and visual cues.
Apple clearly hopes that by combining the training methods of other AI vendors with its own, it can achieve strong pre-training performance and competitive results, helping it catch up with companies that are already deeply committed to AI development, such as Google and OpenAI.
Apple has never been a stranger to paving its own way. The company is constantly finding new ways to tackle the same problems other companies face, including in how it designs its hardware and software. Whether you consider that a good thing is up to you, but the truth is that Apple's ongoing effort to build a reliable and competitive AI has always been about approaching the matter differently, and based on this paper, the company has found a distinctive way to do just that.
This paper, of course, is just our first real look at what Apple is doing to advance its AI capabilities. It will be interesting to see where things go.