# Ollama

## Installation
You need to install the `ollama` library (`pip install ollama`) to use this model in blendsql.
> **Note:** We consider Ollama models 'remote', since we're unable to access the underlying logits. As a result, we can only use Ollama for traditional (unconstrained) generation, not constrained generation (such as via the `options` arg in `LLMQA`).
## OllamaLLM

Bases: `RemoteModel`

Class for an Ollama model.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name_or_path` | `str` | Name of the Ollama model to load. See https://ollama.com/library | *required* |
| `host` | `Optional[str]` | Optional custom host to connect to, e.g. `'http://localhost:11434'` | `None` |
| `caching` | `bool` | Whether we access the model's cache | `True` |
Examples:

```python
from blendsql.models import OllamaLLM

# First, make sure your ollama server is running.
model = OllamaLLM("phi3")
```
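Beyond constructing the model, it can help to see what an unconstrained generation request against an Ollama server looks like. The sketch below is illustrative and uses the `ollama` client library directly rather than blendsql's internals; the helper name `ask_ollama`, the `"phi3"` model, and the default host are assumptions, not part of the blendsql API.

```python
# Illustrative sketch (not blendsql's implementation): sending a single
# unconstrained generation request to a running Ollama server.
# Assumes the server is up and the model (here "phi3") has been pulled.
def ask_ollama(prompt: str, model: str = "phi3",
               host: str = "http://localhost:11434") -> str:
    """Return the raw text completion from an Ollama server."""
    import ollama  # imported lazily so the helper is safe to define offline

    client = ollama.Client(host=host)
    response = client.generate(model=model, prompt=prompt)
    # Ollama returns only sampled text, never logits -- which is why
    # blendsql treats these models as 'remote' and unconstrained.
    return response["response"]
```

Because Ollama exposes only the sampled text over HTTP, there is no way to mask token logits step by step, so constrained decoding features are unavailable with this backend.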
Source code in blendsql/models/remote/_ollama.py