
Ollama GitHub Action


GitHub Action to install ollama, pull a model and run the ollama server.

Both the ollama install and the model are cached between runs.

Example usage:

      - uses: pydantic/ollama-action@v3
        with:
          model: qwen2:0.5b

Tests can then connect to http://localhost:11434 to make requests to the ollama server. We use the qwen2:0.5b model here because it is very small and therefore quick to download. A fuller workflow sketch is shown below.
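As an illustration, a complete workflow built around this step might look like the following sketch. Only the pydantic/ollama-action step and the model input come from this README; the workflow name, the checkout step, the curl check against ollama's /api/tags endpoint, and the final test command are assumptions for the example.

    # Hypothetical workflow sketch: only the ollama-action step is prescribed above.
    name: CI

    on: [push]

    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4

          # Install ollama, pull the model and start the server
          # (the install and the model are cached between runs).
          - uses: pydantic/ollama-action@v3
            with:
              model: qwen2:0.5b

          # Assumed smoke test: ask the local server which models it has pulled.
          - run: curl -fsS http://localhost:11434/api/tags

          # Assumed test command; replace with your project's test runner.
          - run: pytest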

This action is used by pydantic-ai.