Large language models (LLMs), trained on massive amounts of text, are the engines that power generative AI chatbots like ChatGPT and Google’s Gemini, and Opera has just become the first web browser to enable local LLM integration.
You may have already read about how you can run an LLM locally: the AI models are stored on your computer, so nothing needs to be sent to the cloud. It takes a reasonably capable machine to work well, but it’s better from a privacy standpoint – no one can snoop on your prompts or use your conversations to train the AI.
We’ve already seen Opera introduce various AI features. Now local LLMs join the list, and you have over 150 models to choose from.
Local LLMs in Opera
Before delving into your local LLM in Opera, there are a few things to consider. First of all, this is still in the experimental stage, so you may notice a bug or two. Second, you’ll need some free disk space – some LLMs are less than 2GB, but others on the list are over 40GB.
A larger LLM gives better answers but also takes more time to download and run. To some extent, the model’s performance will depend on the hardware configuration you’re running it on, so if you’re using an older machine, you may have to wait a few moments before you get anything back (and again, this is still in beta).
The Aria chatbot was already available in Opera – now local LLMs are available as well.
Source: Lifehacker
These local LLMs are a mix of models developed by big names (Google, Meta, Intel, Microsoft) and models created by independent researchers and developers. They’re free to download and use – not least because your own computer powers the LLM, so there are no running costs for the teams that developed them.
Note that some of these models are geared towards specific tasks, such as coding, and may not provide the general-knowledge answers you would expect from ChatGPT, Copilot, or Gemini. Each model comes with a description; read it before installing so you know what you’re getting.
Test it for yourself
At the time of writing, this feature is only available in early test versions of Opera, ahead of a wider rollout. If you want to try it, you’ll need to download and set up the developer version of Opera One. Once you’ve done this, open the left-hand side panel by clicking the Aria button (the small A symbol) and follow the instructions to set up the built-in AI bot (you’ll need to create or sign in to a free Opera account).
When Aria is ready to go, you should see a Select a local AI model box at the top: click this, then select Go to settings, and you’ll see a list of available LLMs along with some information about them. Select any LLM to see a list of versions (along with file sizes) and download buttons that will install them locally.
There are already over 150 LLMs to choose from.
Source: Lifehacker
If you want, you can set up multiple LLMs in Opera – just select the one you want to use in each chat via the drop-down menu at the top of the Aria window. If you don’t select a local LLM, the default (cloud-based) Aria chatbot will be used instead. You can always start a new chat by clicking the big + (plus) button in the upper right corner of the chat window.
Use these local LLMs as you would anything cloud-based: ask them to write on any topic and in any style, pose questions about life, the universe and everything, and get tips on whatever you like. Since these models don’t have access to the cloud, though, they won’t be able to look up anything recent or current on the internet.