A Model Context Protocol (MCP) server for integrating with Ollama local LLM instances.
Built with: MCP TypeScript Template
- Local LLM Integration - Connect to your local Ollama instance
- TypeScript with ES Modules - Modern JavaScript with full type safety
- Comprehensive Testing - Vitest with coverage reporting
- Code Quality - Biome for linting and formatting
- Automated Publishing - Semantic versioning and NPM publishing
- Development Tools - Hot reload, watch mode, and CLI support
This server is currently under development. When complete, it will provide tools for:
- Model Management - List, pull, and manage Ollama models
- Text Generation - Generate completions and chat responses
- Streaming Support - Real-time streaming responses
- Configuration - Customizable model parameters and settings
`list_models`: List all available models in your Ollama instance.
Response: Array of model objects with names, sizes, and metadata.
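For orientation, a rough TypeScript shape for one entry in that array is sketched below; the field names mirror Ollama's /api/tags response, but treat them as assumptions rather than a guaranteed schema.
// Assumed shape of a single list_models entry (field names follow Ollama's /api/tags payload).
interface OllamaModelEntry {
  name: string;        // e.g. "llama2:latest"
  size: number;        // model size in bytes
  digest: string;      // content digest of the model blob
  modified_at: string; // ISO 8601 timestamp of last modification
}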
`generate_completion`: Generate text completions using a specified model.
Parameters:
- `model` (string, required): Model name (e.g., "llama2", "codellama")
- `prompt` (string, required): Input prompt for completion
- `max_tokens` (number, optional): Maximum tokens to generate
- `temperature` (number, optional): Sampling temperature (0.0-1.0)
- `stream` (boolean, optional): Enable streaming responses
`chat_completion`: Generate chat responses with conversation context.
Parameters:
- `model` (string, required): Model name
- `messages` (array, required): Array of message objects
- `max_tokens` (number, optional): Maximum tokens to generate
- `temperature` (number, optional): Sampling temperature
`pull_model`: Download and install a new model from the Ollama library.
Parameters:
- `model` (string, required): Model name to download
- `insecure` (boolean, optional): Allow insecure connections
Remove a model from your local Ollama instance.
Parameters:
- `model` (string, required): Model name to delete
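A call sketch for removal, in the same style as the usage examples further below; note that the tool name `delete_model` is an assumption, as this README does not spell it out.
// Remove a local model ('delete_model' is an assumed tool name, not confirmed by this README)
await mcpClient.callTool('delete_model', {
  model: 'llama2'
});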
`model_info`: Get detailed information about a specific model.
Parameters:
- `model` (string, required): Model name to query
- Ollama installed and running locally
- Node.js 18 or higher
- At least one Ollama model installed
Visit https://ollama.ai to download and install Ollama for your platform.
After installation, pull a model:
ollama pull llama2
Start the Ollama service:
ollama serve
npm install -g mcp-ollama
yarn global add mcp-ollama
git clone https://github.com/Mearman/mcp-ollama.git
cd mcp-ollama
yarn install
yarn build
Add to your MCP client configuration:
{
"mcpServers": {
"ollama": {
"command": "mcp-ollama",
"args": [],
"env": {
"OLLAMA_HOST": "http://localhost:11434"
}
}
}
}
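If you built from source instead of installing the package globally, the same configuration can point at the local build; the `dist/index.js` path below is an assumption about the build output, so adjust it to whatever `yarn build` actually emits.
{
  "mcpServers": {
    "ollama": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-ollama/dist/index.js"],
      "env": {
        "OLLAMA_HOST": "http://localhost:11434"
      }
    }
  }
}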
Configure the Ollama connection using environment variables:
- `OLLAMA_HOST`: Ollama server URL (default: `http://localhost:11434`)
- `OLLAMA_TIMEOUT`: Request timeout in milliseconds (default: `30000`)
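As a rough illustration of how these variables are intended to be used (a sketch of a client of the Ollama HTTP API, not the server's actual internals), they can be resolved with the documented defaults and checked against Ollama's /api/tags model-listing endpoint:
// Sketch: resolve connection settings from the environment, falling back to the documented defaults.
const OLLAMA_HOST = process.env.OLLAMA_HOST ?? 'http://localhost:11434';
const OLLAMA_TIMEOUT = Number(process.env.OLLAMA_TIMEOUT ?? 30000);

// Reachability check against Ollama's model-listing endpoint (requires Node 18+ for global fetch).
async function pingOllama(): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), OLLAMA_TIMEOUT);
  try {
    const res = await fetch(`${OLLAMA_HOST}/api/tags`, { signal: controller.signal });
    return res.ok;
  } catch {
    return false;
  } finally {
    clearTimeout(timer);
  }
}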
# Start development server with hot reload
yarn dev
# Run tests
yarn test
# Run tests in watch mode
yarn test:watch
# Build the project
yarn build
# Run linting
yarn lint
# Auto-fix linting issues
yarn lint:fix
// Using the MCP client
const response = await mcpClient.callTool('generate_completion', {
model: 'llama2',
prompt: 'Explain quantum computing in simple terms',
max_tokens: 200,
temperature: 0.7
});
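The exact payload depends on the server's implementation, but MCP tool results generally arrive as a content array; assuming this server returns the completion as text items, the generated text can be read like this:
// MCP tool results come back as a content array; assuming text items, join them into one string.
const completionText = response.content
  .filter((item) => item.type === 'text')
  .map((item) => item.text)
  .join('');
console.log(completionText);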
const response = await mcpClient.callTool('chat_completion', {
model: 'llama2',
messages: [
{ role: 'user', content: 'What is the capital of France?' },
{ role: 'assistant', content: 'The capital of France is Paris.' },
{ role: 'user', content: 'What is its population?' }
],
max_tokens: 100
});
// List available models
const models = await mcpClient.callTool('list_models', {});
// Pull a new model
await mcpClient.callTool('pull_model', {
model: 'codellama'
});
// Get model information
const info = await mcpClient.callTool('model_info', {
model: 'llama2'
});
- Implement core Ollama API integration
- Add model management tools (list, pull, delete)
- Implement text generation tools
- Add chat completion support
- Support streaming responses
- Add comprehensive error handling
- Implement configuration validation
- Add model parameter customization
- Support for embeddings and vector operations
- Add performance monitoring and metrics
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit changes: `git commit -m 'feat: add amazing feature'`
- Push to branch: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the CC BY-NC-SA 4.0 license.
- Ollama - Run large language models locally
- MCP TypeScript SDK
- MCP Specification
- MCP Template