Ollama Tutorial – Running Large Language Models Locally


Ollama is a platform for running multiple large language models (LLMs) locally. In addition, Ollama offers an API for accessing the text and code generation capabilities of the installed models over HTTP. In this blog article we show you how to install Ollama and add large language models locally. We also show which models Ollama offers directly and how you can access them in order to benefit from them.


Ollama Models

You can find the models that Ollama offers directly on the Ollama Library page. Below, a few interesting models are listed in the table:

Model Name     | Description                                                                                                                                     | Parameter Count
mistral        | Model released by Mistral AI, very popular and efficient for local use cases.                                                                   | 7B
mixtral        | A mixture-of-experts model that combines multiple expert LLMs in one, released by Mistral AI. Very popular and high quality, but requires GPU hardware with enough VRAM. | 8x7B
deepseek-coder | DeepSeek Coder is a coding model trained on two trillion code tokens.                                                                           | 6.7B and 33B
phi            | Language model by Microsoft with good language understanding and reasoning quality.                                                             | 2.7B
openchat       | A family of open-source models trained on a wide variety of data, surpassing ChatGPT on various benchmarks.                                     | 7B

Ollama Installation

Installing Ollama is very easy. You can find the installation packages on the Ollama download page (https://ollama.ai/download).

Ollama Linux Installation:


curl https://ollama.ai/install.sh | sh

Ollama macOS Installation Package:

https://ollama.ai/download/Ollama-darwin.zip

Ollama Usage

After installing Ollama, the ollama command is available locally. Running it without arguments prints the following help output:


Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Ollama Model Installation & First Run

With the following command we choose which model to run in Ollama. In this example we chose Mistral. If the model is not yet installed locally, Ollama downloads it first.


ollama run mistral

Once the model has been downloaded and started, we see the following prompt:


>>> Send a message (/? for help)

Now you can type a request into the Ollama console while a large language model is running on your local machine.


>>> Who are you and how can you help me?
 I am an artificial intelligence designed to assist and provide information on various topics, including astronomical questions such as the minimum and maximum distances between planets in our solar system like Ceres and Earth. By using accurate data from trusted sources, I aim to give you detailed and precise answers to your queries. If you have any question related to Ceres or any other topic, feel free to ask!

Other Important Ollama Commands

Listing all locally installed models:


ollama list

Removing a locally installed model:


ollama rm mistral

Ollama API

Ollama offers its own API, which is currently not compatible with the OpenAI interface. However, it provides a user-friendly experience, and some might even argue that it is simpler than working with the OpenAI interface. With this API, users can access several functions:

  • creating a model,
  • listing local models,
  • displaying model information,
  • copying a model,
  • deleting a model,
  • pulling a model,
  • pushing a model,
  • generating completions,
  • producing chat completions,
  • and generating embeddings.

To learn more about the Ollama API, please visit their documentation at https://github.com/jmorganca/ollama/blob/main/docs/api.md.
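As a small sketch of how the completion function can be used from code, the following Python snippet builds a request against the API's /api/generate endpoint. It assumes an Ollama server is listening on the default local port 11434; the request fields used here are the ones described in the API documentation linked above, and the helper name is our own.

```python
import json
import urllib.request

# Sketch of a call to Ollama's /api/generate endpoint.
# Assumes an Ollama server is running locally on the default port 11434.
def build_generate_request(model: str, prompt: str,
                           host: str = "http://localhost:11434") -> urllib.request.Request:
    payload = {
        "model": model,    # name of a locally installed model, e.g. "mistral"
        "prompt": prompt,  # text to complete
        "stream": False,   # return a single JSON object instead of a stream
    }
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# With a running server, the request can be sent like this:
# with urllib.request.urlopen(build_generate_request("mistral", "Why is the sky blue?")) as resp:
#     print(json.loads(resp.read())["response"])
```

The actual call is left commented out because it requires a running Ollama instance; the generated text is returned in the "response" field of the JSON reply.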

Conclusion

In conclusion, Ollama proves to be an innovative platform empowering developers and researchers alike by providing seamless execution of multiple local large language models. Its robust offering extends beyond just running these models on your machine. The remote API enables effortless integration with existing workflows and systems, enhancing productivity and accelerating development cycles.

As advancements continue within AI and natural language processing, platforms like Ollama pave the way towards accessible, efficient, and powerful utilization of cutting-edge technology. So why wait? Embrace the future and unlock the potential of large language models using Ollama today!
