Run LLMs Locally using Ollama

A step-by-step guide to running large language models locally on your laptop.

Marc Matterson
3 min read · Mar 10, 2024
Ollama Logo (source: Ollama)

Introduction

Since the release of ChatGPT, there has been a drastic rise in the popularity of large language models (LLMs). Most people interact with LLMs through externally hosted APIs; Ollama instead allows you to host LLMs locally on your own laptop.

Ollama lets you interact with open-source and customisable LLMs via a command-line interface (CLI), a REST API, or a Jupyter notebook. It is extremely simple to install and will have you chatting with a local LLM in a matter of minutes.
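To give a flavour of the REST API, once Ollama is installed (installation is covered below) it serves requests on localhost, port 11434 by default. A minimal sketch of a request to the generate endpoint, using llama2 purely as an example model that has already been downloaded, looks like this:

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

The response comes back as JSON containing the generated text, which makes it easy to call a local model from any application that can make HTTP requests.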

Installing Ollama

Ollama can be downloaded for macOS and Windows from the Ollama website, or installed on Linux with the following command:

curl -fsSL https://ollama.com/install.sh | sh
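As a quick sanity check, you can confirm the installation by printing the installed version:

ollama --version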

Once installed, run the following command in your CLI:

ollama run <MODEL_NAME>

This will download your LLM of choice and initiate a conversation. The easiest approach to interacting with LLMs using Ollama is via the CLI.
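For example, to chat with Llama 2 (one of the models available in the Ollama library at the time of writing), a typical session looks roughly like this:

ollama run llama2
>>> Why is the sky blue?

The first run downloads the model weights (several gigabytes), after which the interactive prompt (>>>) appears; type /bye to end the session.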

What models are available?

Ollama supports many of the state-of-the-art (SOTA) open-source LLMs. As of March 2024, the list of LLMs…
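You can browse the full, regularly updated model library at https://ollama.com/library. To see which models you have already downloaded to your machine, run:

ollama list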
