Installation Guide
This guide covers all aspects of installing and setting up Ollama Toolkit.
Introduction
Following the Eidosian approach, each step is minimized yet thorough:
Validate prerequisites
Install efficiently
Verify without redundancies
Prerequisites
Python 3.6 or higher
pip (Python package installer)
Ollama (version 0.1.11 or later recommended; installed automatically if not present)
2+ GB RAM for running small models
8+ GB RAM recommended for larger models
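The prerequisites above can be checked programmatically. A minimal sketch using only the standard library (the RAM requirements are not checked here, and the `check_prerequisites` helper is illustrative, not part of the toolkit):

```python
import shutil
import sys


def check_prerequisites():
    """Return a list of (name, ok) pairs for the basic prerequisites."""
    checks = []
    # Python 3.6 or higher
    checks.append(("python>=3.6", sys.version_info >= (3, 6)))
    # pip importable in the current interpreter
    try:
        import pip  # noqa: F401
        checks.append(("pip", True))
    except ImportError:
        checks.append(("pip", False))
    # ollama binary on PATH (optional: the toolkit can install it for you)
    checks.append(("ollama", shutil.which("ollama") is not None))
    return checks


for name, ok in check_prerequisites():
    print(f"{name}: {'OK' if ok else 'missing'}")
```

A missing `ollama` entry is not fatal, since the toolkit can install Ollama automatically (see below).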
Basic Installation
Install the package directly from PyPI:
pip install ollama-toolkit
Development Installation
For development or to access the latest features:
# Clone the repository
git clone https://github.com/Ace1928/ollama_toolkit.git
cd ollama_toolkit
# Install in development mode
pip install -e .
Automatic Ollama Setup
Ollama Toolkit can automatically install and manage Ollama for you:
from ollama_toolkit.utils.common import ensure_ollama_running
# This will install Ollama if needed and start the server
is_running, message = ensure_ollama_running()
if is_running:
    print(f"Ollama is ready: {message}")
else:
    print(f"Could not start Ollama: {message}")
Manual Ollama Installation
If you prefer to install Ollama manually:
Linux
curl -fsSL https://ollama.com/install.sh | sh
macOS
curl -fsSL https://ollama.com/install.sh | sh
Windows
Download the installer from: https://ollama.com/download/windows
Verifying Installation
Verify the installation with:
from ollama_toolkit import OllamaClient
client = OllamaClient()
version = client.get_version()
print(f"Connected to Ollama version: {version['version']}")
# List available models
models = client.list_models()
print("Available models:", [model.get("name") for model in models.get("models", [])])
Dependencies
The package automatically installs these dependencies:
requests: For HTTP communication
aiohttp: For asynchronous requests
colorama: For terminal coloring
numpy: For array manipulation (optional, for embedding operations)
Configuration
No configuration is necessary to get started, but you can customize the client:
from ollama_toolkit import OllamaClient
# Custom configuration
client = OllamaClient(
    base_url="http://localhost:11434/",  # Custom Ollama API URL
    timeout=300,                         # Request timeout in seconds
    max_retries=3,                       # Connection retry attempts
    retry_delay=1.0,                     # Delay between retries
    cache_enabled=True,                  # Enable response caching
    cache_ttl=300.0                      # Cache time-to-live in seconds
)
Troubleshooting Installation
If you encounter issues during installation:
Ollama not found: Ensure Ollama is installed and in your PATH
Connection errors: Check if the Ollama server is running (ollama serve)
Python version: Verify you're using Python 3.6+
Permission issues: Try installing with sudo or use a virtual environment
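A quick way to tell "Ollama not found" apart from "server not running" is a small diagnostic script. This sketch uses only the standard library and assumes the default port 11434; the `diagnose` helper is illustrative, not part of the toolkit:

```python
import shutil
import urllib.error
import urllib.request


def diagnose(base_url="http://localhost:11434"):
    """Return a list of human-readable findings about the local Ollama setup."""
    findings = []
    # Is the ollama binary on PATH?
    if shutil.which("ollama") is None:
        findings.append("ollama binary not found on PATH")
    else:
        findings.append("ollama binary found")
    # Is the server answering on the default endpoint?
    try:
        with urllib.request.urlopen(f"{base_url}/api/version", timeout=5) as resp:
            findings.append(f"server responding (HTTP {resp.status})")
    except (urllib.error.URLError, OSError):
        findings.append("server not responding; try running `ollama serve`")
    return findings


for line in diagnose():
    print(line)
```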
For more detailed troubleshooting, see the Troubleshooting Guide.
Advanced Details
For advanced deployments and documentation generation in CI/CD:
Use pinned dependencies in a dedicated "docs" or "build" environment.
Optionally integrate with GitHub Actions to automatically build docs on push or PR events.
Provide references in your docstrings for cross-linking function and class usage (especially with autolinking in Sphinx or MkDocs).
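A pinned docs environment could be set up along these lines; the package names and versions below are illustrative, not the toolkit's actual pins:

```shell
# Write a hypothetical pinned requirements file for doc builds
cat > docs-requirements.txt <<'EOF'
sphinx==7.2.6
myst-parser==2.0.0
EOF

# Build docs in an isolated environment (commands shown for reference):
#   python3 -m venv .venv-docs
#   .venv-docs/bin/pip install -r docs-requirements.txt
#   .venv-docs/bin/sphinx-build docs docs/_build
```

In CI, the same requirements file can be installed by a GitHub Actions job triggered on push or pull-request events.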