Run LLMs Locally using Ollama
A step-by-step guide to running large language models locally on your laptop.
Introduction
Since the release of ChatGPT, there has been a dramatic rise in the popularity of large language models (LLMs). Most people interact with LLMs through externally hosted APIs; Ollama instead allows you to host LLMs locally on your own laptop.
Ollama provides the ability to interact with open-source and customisable LLMs via a command line interface (CLI), REST API, or Jupyter Notebook. It is extremely simple to install and will have you interacting with local LLMs in a matter of minutes.
Installing Ollama
Ollama can be downloaded for macOS, Windows, and Linux. On Linux, it can be installed with the following command:
curl -fsSL https://ollama.com/install.sh | sh
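Once the installer or script finishes, you can check that Ollama is available from your terminal (a quick sanity check; the exact version printed will depend on your install):

ollama --version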
Once installed, run the following command in your CLI:
ollama run <MODEL_NAME>
This will download the model of your choice (if it is not already on your machine) and start an interactive chat session. The CLI is the simplest way to interact with LLMs through Ollama.
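For example, the command below pulls and chats with Llama 2 (used here purely as an illustration; any model name from the Ollama library works the same way):

ollama run llama2

Because Ollama also serves a REST API on localhost, the same model can be called from your own scripts instead of the interactive prompt. A minimal sketch using curl, assuming the default port 11434 and the llama2 model pulled above:

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

Setting "stream" to false returns the full response as a single JSON object rather than a stream of tokens.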
What models are available?
Ollama supports a wide range of state-of-the-art (SOTA) open-source LLMs. As of March 2024, the list of LLMs…