OLLAMA is a platform for running open large language models (LLMs) locally rather than through cloud-hosted services, letting you use and develop with these models in a simple, private, and cost-effective way.
Installing and running OLLAMA on the Discovery cluster might not be as straightforward as a local install, so the RC team put together instructions to help you use OLLAMA on the cluster.
1. Getting a JupyterLab session on Open OnDemand
You can request a Jupyterlab session using the following parameters, for example:
2. Open a terminal in JupyterLab
Select the blue plus (+) button in the upper right corner of JupyterLab to open the “Launcher,” then click “Terminal” under the “Other” heading to open a terminal.
3. Create a .sif file from the OLLAMA container in a terminal
Containers allow for portable software execution, and OLLAMA publishes an official container image on Docker Hub, a public container registry.
On the cluster, we use Singularity to run containers. The following commands pull the OLLAMA image from Docker Hub and convert it into a .sif file that Singularity can run.
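For example (the `module load` name is cluster-specific and may differ on Discovery; `docker://ollama/ollama` is the official image):

```shell
# Load the Singularity module (exact module name/version varies by cluster)
module load singularity

# Build a local .sif image from the official Ollama image on Docker Hub
singularity pull ollama.sif docker://ollama/ollama
```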
Now, you can run the OLLAMA server in this terminal:
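A sketch of the server command, assuming the ollama.sif file built in the previous step:

```shell
# Start the Ollama server inside the container; --nv passes the node's GPU
# through to the container (omit it on CPU-only nodes)
singularity exec --nv ollama.sif ollama serve
```

Leave this terminal open; the server keeps running in the foreground and serves requests from the other steps below.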
4. Run a specific model in another terminal
First, start another terminal in JupyterLab. You can then load and run a model (such as llama3.2:latest or deepseek-r1:1.5b) using:
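For example (assuming the ollama.sif image from step 3 and the server already running in the other terminal):

```shell
# Download (on first use) and chat with a model via the running server
singularity exec --nv ollama.sif ollama run llama3.2:latest
```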
This will create a command line prompt to interact with the model.
5. Accessing the model from Python
JupyterLab can also be used for development against the LLM running on the cluster. Note that you’ll need to unset the http_proxy and https_proxy environment variables so that requests to localhost are not routed through the cluster’s proxy.
First, in a new terminal session in JupyterLab, run the following command:
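A plausible sketch of this step (the `--nv` flag and the `ollama.sif` filename assume the setup from step 3; skip the `serve` line if the server from step 3 is still running):

```shell
# Clear the cluster proxy variables so traffic to localhost is not proxied
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY

# Start the Ollama server if it is not already running
singularity exec --nv ollama.sif ollama serve
```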
Then, in a new Jupyter notebook, paste the following code and run:
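A minimal, standard-library-only sketch of such a notebook cell, using Ollama’s REST API (the server listens on port 11434 by default; the model name `llama3.2:latest` is just an example and should match a model you have already pulled):

```python
import json
import os
import urllib.request

# The notebook kernel inherits the cluster proxy settings; clear them so
# requests to localhost reach the Ollama server directly.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
    os.environ.pop(var, None)

def ask_ollama(prompt, model="llama3.2:latest", host="http://localhost:11434"):
    """Send a prompt to the local Ollama server and return its reply text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example call (requires the server from the earlier steps to be running):
# print(ask_ollama("Why is the sky blue?"))
```

If you prefer a higher-level client, the `ollama` Python package wraps this same API, but the sketch above avoids any extra installs.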
Need More Help?
The RC team offers weekly virtual drop-in office hours, 1:1 consultations at your convenience, and recorded introductory HPC training. You can also send the RC team an email at rchelp@northeastern.edu. We are happy to work through any questions you may have!