Do you like the idea of artificial intelligence, but wish you could use it without trusting a giant server somewhere? You can run large language models locally, giving you something like ChatGPT that works completely offline.
Jan is a free, open source application that makes it simple to download several language models and start conversations with them. There are straightforward installers for Windows, macOS, and Linux. It isn't perfect: the models aren't necessarily as good as the latest from OpenAI or Google, and depending on how powerful your computer is, responses may take a while. On the other hand, you get to use this technology without the privacy and security concerns that come with an online AI service.
Source: Justin Pot
Once Jan is installed, you'll need to choose a model. If you're not sure which one, start with Mistral at the top of the list; you can always try something else later if you don't like the results. As soon as the model has downloaded, you can start chatting.
If you want, you can give the bot some general instructions in the right panel. The default is "You are a helpful assistant," but you can change it to whatever you like to give it a little more context. From there you can use the service much the same way as ChatGPT or Google Gemini. I asked for a brief summary of my latest article, and it did a decent job.
Source: Justin Pot
Again, if you don't like the results, or it takes too long to get them, try a few different models. All of them are free and tuned for different purposes: some are specifically designed for coding problems, for example, while others are optimized to run on computers without much processing power. It's a matter of finding what works best for you.
There's one more cool feature: while the application is running, it can also act as an OpenAI-compatible API server. If you don't know what that means, don't worry; it's a fairly nerdy thing. In short, it means you can use Jan with applications that would otherwise require a paid ChatGPT subscription: just enable the API server in settings and point those applications to the local address and port number instead of OpenAI's.
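To give a sense of what that looks like, here's a minimal sketch in Python using the official openai client, assuming Jan's API server is enabled and listening at its default local address (commonly http://localhost:1337/v1, but check the address and port shown in Jan's settings) and that you've already downloaded a Mistral model. The base URL and model ID below are placeholders; use whatever Jan displays on your machine.

```python
# Minimal sketch: pointing the OpenAI Python client at Jan's local API server.
# Assumptions: the API server is enabled in Jan's settings and listening at
# http://localhost:1337/v1 (substitute the address/port your copy shows),
# and the model ID matches one you've downloaded in Jan.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",  # Jan's local server, not api.openai.com
    api_key="not-needed-locally",         # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="mistral-ins-7b-q4",  # placeholder: use the model ID listed in Jan
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of running an LLM locally."},
    ],
)

print(response.choices[0].message.content)
```

Any application that lets you override the OpenAI base URL can be pointed at the same local address in the same way.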