With the rise of Mistral, an open-source and French language model, running such a model on my machine seemed like an exciting challenge. To make this process smoother, I opted for Ollama.
In this article, I will guide you step-by-step through installing Ollama on Windows and using it with various models, including Mistral, Llama2, and uncensored variants (models that answer without the usual ethical guardrails).
I will share my personal insights, having successfully completed this installation on my computer equipped with 16GB of DDR4 RAM, an Nvidia GTX1660 graphics card, and a Ryzen 5 3600 processor. While my setup is slightly dated, it proved sufficient for this project.
Since Ollama is only available on macOS and Linux, I'll show you how to install it via WSL, a workaround I turned to after a virtual machine didn't work for me. We'll cover common errors and their solutions, install Nvidia drivers for Linux, and share useful links for further exploration.
To begin installing Ollama on a Windows system, the first step is to install a Linux subsystem. In my case, I chose Ubuntu, available through the Microsoft Store.
It is crucial to check certain Windows features before proceeding. Ensure that Hyper-V, Windows Subsystem for Linux, and Virtual Machine Platform are all enabled under Windows features.
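If you prefer the command line over the Windows Features dialog, these features can also be enabled from an elevated PowerShell prompt. A sketch using DISM's standard feature names (verify the exact names on your Windows edition before running):

```shell
# Run in PowerShell as Administrator.
# Enables the components WSL depends on; a reboot is required afterwards.
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
dism.exe /online /enable-feature /featurename:Microsoft-Hyper-V /all /norestart
```

Restart Windows once all three commands report success.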
Once this verification is complete, proceed as follows:
Opening PowerShell as Administrator: This is necessary to execute commands that affect system settings.
Running the following commands: These commands will install and update the Windows Subsystem for Linux.
wsl.exe --install
wsl.exe --update
Handling potential errors: If an error occurs, virtualization might not be enabled in your computer's BIOS. To fix this, restart your computer, access the BIOS, and enable the Virtualization option.
Resolving persistent issues: If errors persist, Microsoft usually provides a link with detailed instructions to resolve them.
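Once the install and update commands have run, you can check the result from the same PowerShell window using standard `wsl.exe` flags:

```shell
# Confirm WSL is installed and healthy.
wsl.exe --status          # shows the default distribution and WSL version

# List registered distributions and whether each runs under WSL 1 or WSL 2.
wsl.exe --list --verbose
```

If Ubuntu appears in the list with version 2, you are ready to continue.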
Once the Linux subsystem is installed, Ubuntu will prompt you to create a user account. After setting up your account and accessing the terminal, the first step is to update the packages. To do this, run the following command:
sudo apt update && sudo apt upgrade
This step ensures your system has the latest updates and security patches.
After updating your system, you can proceed with installing Ollama. This process is relatively straightforward.
To install Ollama, the simplest option is the official install script:
curl https://ollama.ai/install.sh | sh
Once this step is complete, you can easily confirm Ollama's installation. Simply enter the following command in your terminal: ollama help.
In response, Ollama will display the full list of commands available, including but not limited to:
ollama run
ollama list
ollama rm
Next, visit Ollama's website and go to the section dedicated to models. Select the model of your choice and launch it using the ollama run command. For example, to use the Mistral model, you would enter: ollama run mistral. I also encourage you to explore uncensored models, such as llama2-uncensored. These models provide an entertaining experience, especially for asking questions ChatGPT cannot answer.
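Beyond the interactive chat prompt, the `ollama` CLI accepts a one-off prompt as an argument, and the background service exposes a local REST API (port 11434 by default). A minimal sketch, assuming the mistral model has already been pulled with ollama run mistral:

```shell
# One-off question from the command line, no interactive session needed.
ollama run mistral "Explain what WSL is in one sentence."

# The same model queried through Ollama's local HTTP API.
# "stream": false returns one JSON object instead of a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Explain what WSL is in one sentence.",
  "stream": false
}'
```

The API form is handy once you want to script the model from other tools rather than type into the terminal.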
If, after installing Ollama, you notice a "CPU ONLY MODE" message instead of "NVIDIA GPU installed" despite having an Nvidia graphics card, an additional step is required. You’ll need to install Nvidia drivers on your PC. These can be downloaded from Nvidia’s official website.
After installing the drivers on your PC, restart it and proceed to install the drivers in Ubuntu using the following command:
sudo apt-get install nvidia-driver-515
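After the driver installation, you can check from inside Ubuntu whether the GPU is actually visible before retrying Ollama; `nvidia-smi` ships with the Nvidia driver:

```shell
# From the Ubuntu/WSL terminal: confirm the driver can see the GPU.
# A table listing your card (here, a GTX 1660) means Ollama should
# now report "NVIDIA GPU installed" instead of "CPU ONLY MODE".
nvidia-smi
```

If the command is missing or reports no devices, revisit the Windows-side driver install and reboot before trying again.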