Python API
ollama_serve.main
Core helpers for interacting with the Ollama server.
- ollama_serve.main.ensure_model_and_server_ready(model: str, host: str = '127.0.0.1', port: int = 11434, timeout: float | None = None, retries: int | None = None, retry_delay: float | None = None) → bool
Ensure the server is running and the requested model is available.
- Parameters:
model – Model name to ensure is present (for example, “llama3:latest”).
host – Hostname or IP address to probe.
port – Port to probe.
timeout – Socket timeout in seconds.
retries – Number of attempts before returning False.
retry_delay – Sleep duration between retries in seconds.
- Returns:
True when the server is running and the model is available; otherwise False.
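A minimal usage sketch (not taken from the library's own examples): it assumes the package is importable as ollama_serve, that Ollama is installed locally, and that the default host and port are in use; the model name is illustrative.

>>> from ollama_serve.main import ensure_model_and_server_ready
>>> ensure_model_and_server_ready("llama3:latest")  # True once the server is up and the model is present
True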
- ollama_serve.main.install_model(model: str, host: str = '127.0.0.1', port: int = 11434, timeout: float | None = None, retries: int | None = None, retry_delay: float | None = None) → bool
Install a model if it is not already present in Ollama.
- Parameters:
model – Model name to install (for example, “llama3” or “llama3:latest”).
host – Hostname or IP address to probe.
port – Port to probe.
timeout – Socket timeout in seconds.
retries – Number of attempts before returning False.
retry_delay – Sleep duration between retries in seconds.
- Returns:
True when the model is already installed or installs successfully; otherwise False.
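For example, a sketch under the assumption that an Ollama server is already reachable on the default 127.0.0.1:11434; the model name is illustrative.

>>> from ollama_serve.main import install_model
>>> install_model("llama3")  # no-op if the model is already present, pulls it otherwise
True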
- ollama_serve.main.is_model_installed(model: str, host: str = '127.0.0.1', port: int = 11434, timeout: float | None = None, retries: int | None = None, retry_delay: float | None = None) → bool
Return True when the named model is present in Ollama.
- Parameters:
model – Model name to look up (for example, “llama3” or “llama3:latest”).
host – Hostname or IP address to probe.
port – Port to probe.
timeout – Socket timeout in seconds.
retries – Number of attempts before returning False.
retry_delay – Sleep duration between retries in seconds.
- Returns:
True when the model appears in the Ollama tags list; otherwise False.
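A doctest-style sketch, assuming a running Ollama server on the default host and port; the model names are placeholders and the output shown depends on which models have actually been pulled.

>>> from ollama_serve.main import is_model_installed
>>> is_model_installed("llama3:latest")
True
>>> is_model_installed("not-a-real-model")
False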
- ollama_serve.main.is_ollama_running(host: str = '127.0.0.1', port: int = 11434, timeout: float | None = None, retries: int | None = None, retry_delay: float | None = None) → bool
Return True when an Ollama server responds on the given host/port.
A lightweight HTTP request to the tags endpoint is used, rather than a bare socket probe, to avoid false positives when another service is bound to the same port.
- Parameters:
host – Hostname or IP address to probe.
port – Port to probe.
timeout – Socket timeout in seconds.
retries – Number of attempts before returning False.
retry_delay – Sleep duration between retries in seconds.
- Returns:
True when the Ollama tags endpoint responds; otherwise False.
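A hedged example with the default connection settings; the outputs shown assume a server is listening on 11434 and that nothing Ollama-like answers on the second port, which is chosen here purely for illustration.

>>> from ollama_serve.main import is_ollama_running
>>> is_ollama_running()                       # probe the default 127.0.0.1:11434
True
>>> is_ollama_running(port=11500, retries=1)  # illustrative unused port
False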
- ollama_serve.main.run_ollama_server(host: str = '127.0.0.1', port: int = 11434, timeout: float | None = None, retries: int | None = None, retry_delay: float | None = None) → bool
Start the Ollama server when it is not already running.
- Parameters:
host – Hostname or IP address to probe.
port – Port to probe.
timeout – Socket timeout in seconds.
retries – Number of attempts before returning False.
retry_delay – Sleep duration between retries in seconds.
- Returns:
True when the server is already running or successfully started; False when Ollama is not installed or fails to start.
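Putting the helpers together (a sketch, not canonical usage): it assumes Ollama is installed, the default host and port are used, and the model name is an example.

>>> from ollama_serve.main import run_ollama_server, install_model
>>> if run_ollama_server():            # reuse a running server or start a new one
...     install_model("llama3:latest")
...
True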