Running Ollama locally

Installing Ollama locally is easily done using Docker, for example:

docker run -d -v "c:\temp\ollama:/root/.ollama" -p 11434:11434 --name ollama ollama/ollama
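Once the container is up, it is worth sanity-checking that the server is reachable before pulling any models. Ollama listens on port 11434 (mapped by the `-p` flag above) and answers plain HTTP on its root endpoint. A minimal Python check, assuming the default port mapping:

```python
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://localhost:11434",
                 timeout: float = 2.0) -> bool:
    """Return True if the Ollama server answers on its root endpoint."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # A healthy server answers with HTTP 200 and a short status message
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timed out: the server is not (yet) reachable
        return False

print(ollama_is_up())
```

If this prints `False`, check `docker ps` to confirm the container is running and the port mapping matches.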

Next we’ll want to pull in a model, for example phi3:

docker exec -it ollama ollama run phi3 

There are several phi3 variants: phi3:mini, phi3:medium and phi3:medium-128k (requires Ollama 0.1.39+).

Other options include mistral, llama2 or openhermes. Just replace phi3 in the commands above with your preferred model.
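Models that have already been pulled can also be listed over the REST API: a GET to `/api/tags` returns a JSON document containing a `models` array. A small sketch; the response shape shown in `sample` is an assumption based on Ollama's API and abridged to the `name` field:

```python
import json
import urllib.request

def parse_model_names(tags_json: str) -> list[str]:
    """Extract model names from the JSON body returned by GET /api/tags."""
    data = json.loads(tags_json)
    return [m["name"] for m in data.get("models", [])]

def list_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Ask a running Ollama server which models are installed."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return parse_model_names(resp.read().decode())

# Abridged example response shape, for illustration only:
sample = '{"models": [{"name": "phi3:latest"}, {"name": "mistral:latest"}]}'
print(parse_model_names(sample))  # ['phi3:latest', 'mistral:latest']
```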

Running the exec command drops us into a prompt where we can start a chat with the model.

Use “/bye” to exit the prompt.
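Besides the interactive prompt, the model can be queried programmatically on the same port via the REST API's `/api/generate` endpoint. A minimal non-streaming sketch; the payload and response fields used here (`model`, `prompt`, `stream`, `response`) are based on Ollama's API and may differ between versions:

```python
import json
import urllib.request

def build_payload(model: str, prompt: str) -> bytes:
    """Build a non-streaming request body for POST /api/generate."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, model: str = "phi3",
             base_url: str = "http://localhost:11434") -> str:
    """Send one prompt to a running Ollama server and return the completion."""
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream disabled, the whole completion arrives in one JSON body
        return json.loads(resp.read())["response"]

# e.g. print(generate("Why is the sky blue?"))
```

Setting `stream` to false keeps the example simple; by default the endpoint streams the reply as a sequence of JSON lines.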