Adding SSE and Streamable HTTP transports #2


Merged
merged 2 commits into from May 16, 2025
34 changes: 27 additions & 7 deletions README.md
@@ -20,12 +20,13 @@ This implementation was adapted from the [Model Context Protocol quickstart guid
## Features

- 🌐 **Multi-Server Support**: Connect to multiple MCP servers simultaneously
- 🚀 **Multiple Transport Types**: Supports STDIO, SSE, and Streamable HTTP server connections
- 🎨 **Rich Terminal Interface**: Interactive console UI
- 🛠️ **Tool Management**: Enable/disable specific tools or entire servers during chat sessions
- 🧠 **Context Management**: Control conversation memory with configurable retention settings
- 🔄 **Cross-Language Support**: Seamlessly work with both Python and JavaScript MCP servers
- 🔍 **Auto-Discovery**: Automatically find and use Claude's existing MCP server configurations
- 🎛️ **Dynamic Model Switching**: Switch between any installed Ollama model without restarting
- 💾 **Configuration Persistence**: Save and load tool preferences between sessions
- 📊 **Usage Analytics**: Track token consumption and conversation history metrics
- 🔌 **Plug-and-Play**: Works immediately with standard MCP-compliant tool servers
@@ -75,6 +76,9 @@ If you don't provide any options, the client will use auto-discovery mode to fin
- `--servers-json`: Path to a JSON file with server configurations.
- `--auto-discovery`: Auto-discover servers from Claude's default config file (default behavior if no other options provided).

> Note: Claude's configuration file is typically located at:
> `~/Library/Application Support/Claude/claude_desktop_config.json`
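To sanity-check what auto-discovery will pick up, that file can be inspected with a few lines of Python (a sketch: the macOS path above is assumed, and the snippet falls back gracefully if the file is absent):

```python
import json
from pathlib import Path

# macOS default location of Claude's config (see the note above);
# adjust the path on other platforms.
config_path = (Path.home() / "Library" / "Application Support"
               / "Claude" / "claude_desktop_config.json")

if config_path.exists():
    config = json.loads(config_path.read_text())
    # Print the names of the configured MCP servers
    print(sorted(config.get("mcpServers", {})))
else:
    print(f"No Claude config found at {config_path}")
```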

#### Model Options:
- `--model`: Ollama model to use (default: "qwen2.5:7b")

@@ -157,35 +161,51 @@ The configuration saves:

## Server Configuration Format

The JSON configuration file supports STDIO, SSE, and Streamable HTTP server types:

```json
{
  "mcpServers": {
    "stdio-server": {
      "command": "command-to-run",
      "args": ["arg1", "arg2", "..."],
      "env": {
        "ENV_VAR1": "value1",
        "ENV_VAR2": "value2"
      },
      "disabled": false
    },
    "sse-server": {
      "type": "sse",
      "url": "http://localhost:8000/sse",
      "headers": {
        "Authorization": "Bearer your-token-here"
      },
      "disabled": false
    },
    "http-server": {
      "type": "streamable_http",
      "url": "http://localhost:8000/mcp",
      "headers": {
        "X-API-Key": "your-api-key-here"
      },
      "disabled": false
    }
  }
}
```

> Note: If you specify a URL without a type, the client will default to using Streamable HTTP transport.
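The defaulting rule in the note can be sketched as a small helper (the function name is hypothetical; it mirrors the selection logic this PR adds to `discovery.py`):

```python
from typing import Any, Dict

def infer_server_type(config: Dict[str, Any]) -> str:
    """Pick the transport for a server entry: an explicit "type" wins,
    a bare "url" implies Streamable HTTP, and anything else is STDIO."""
    if "type" in config:
        return config["type"]
    if "url" in config:
        return "streamable_http"
    return "config"  # STDIO servers launched via "command"

print(infer_server_type({"url": "http://localhost:8000/mcp"}))                  # streamable_http
print(infer_server_type({"type": "sse", "url": "http://localhost:8000/sse"}))   # sse
print(infer_server_type({"command": "uvx", "args": ["some-server"]}))           # config
```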

## Compatible Models

The following Ollama models work well with tool use:

- qwen2.5
- llama3.3
- llama3.2
- qwen3
- llama3.1
- mistral

For a complete list of Ollama models with tool use capabilities, visit the [official Ollama models page](https://ollama.com/search?c=tools).
4 changes: 2 additions & 2 deletions cli-package/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "ollmcp"
version = "0.4.0"
description = "CLI for MCP Client for Ollama - An easy-to-use command for interacting with Ollama through MCP"
readme = "README.md"
requires-python = ">=3.10"
@@ -9,7 +9,7 @@ authors = [
{name = "Jonathan Löwenstern"}
]
dependencies = [
    "mcp-client-for-ollama==0.4.0"
]

[project.scripts]
2 changes: 1 addition & 1 deletion mcp_client_for_ollama/__init__.py
@@ -1,3 +1,3 @@
"""MCP Client for Ollama package."""

__version__ = "0.4.0"
96 changes: 88 additions & 8 deletions mcp_client_for_ollama/server/connector.py
@@ -4,14 +4,17 @@
initialization, and communication.
"""

import os
import shutil
from contextlib import AsyncExitStack
from typing import Dict, List, Any, Optional, Tuple
from rich.console import Console
from rich.panel import Panel
from mcp import ClientSession, Tool
from mcp.client.stdio import stdio_client, StdioServerParameters
from mcp.client.sse import sse_client

from .discovery import process_server_paths, parse_server_configs, auto_discover_servers

@@ -107,20 +110,61 @@ async def _connect_to_server(self, server: Dict[str, Any]) -> bool:
        self.console.print(f"[cyan]Connecting to server: {server_name}[/cyan]")

        try:
            server_type = server.get("type", "script")
            session = None

            # Connect based on server type
            if server_type == "sse":
                # Connect to SSE server
                url = self._get_url_from_server(server)
                if not url:
                    self.console.print(f"[red]Error: SSE server {server_name} missing URL[/red]")
                    return False

                headers = self._get_headers_from_server(server)

                # Connect using SSE transport
                sse_transport = await self.exit_stack.enter_async_context(sse_client(url, headers=headers))
                read_stream, write_stream = sse_transport
                session = await self.exit_stack.enter_async_context(ClientSession(read_stream, write_stream))

            elif server_type == "streamable_http":
                # Connect to Streamable HTTP server
                url = self._get_url_from_server(server)
                if not url:
                    self.console.print(f"[red]Error: HTTP server {server_name} missing URL[/red]")
                    return False

                headers = self._get_headers_from_server(server)

                # In MCP 1.9.0, use the SSE client for HTTP connections as well,
                # since the dedicated HTTP client is no longer available
                self.console.print(f"[yellow]Note: Using SSE client for HTTP connection to {server_name}[/yellow]")
                transport = await self.exit_stack.enter_async_context(sse_client(url, headers=headers))
                read_stream, write_stream = transport
                session = await self.exit_stack.enter_async_context(ClientSession(read_stream, write_stream))

            elif server_type == "script":
                # Connect to script-based server using STDIO
                server_params = self._create_script_params(server)
                if server_params is None:
                    return False

                stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
                read_stream, write_stream = stdio_transport
                session = await self.exit_stack.enter_async_context(ClientSession(read_stream, write_stream))

            else:
                # Connect to config-based server using STDIO
                server_params = self._create_config_params(server)
                if server_params is None:
                    return False

                stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
                read_stream, write_stream = stdio_transport
                session = await self.exit_stack.enter_async_context(ClientSession(read_stream, write_stream))

            # Initialize the session
            await session.initialize()

            # Store the session
@@ -304,3 +348,39 @@ def disable_all_tools(self):
        """Disable all available tools"""
        for tool_name in self.enabled_tools:
            self.enabled_tools[tool_name] = False

    def _get_url_from_server(self, server: Dict[str, Any]) -> Optional[str]:
        """Extract the URL from a server configuration.

        Args:
            server: Server configuration dictionary

        Returns:
            URL string, or None if not found
        """
        # Try to get the URL directly from the server dict
        url = server.get("url")

        # If not there, try the config subdict
        if not url and "config" in server:
            url = server["config"].get("url")

        return url

    def _get_headers_from_server(self, server: Dict[str, Any]) -> Dict[str, str]:
        """Extract headers from a server configuration.

        Args:
            server: Server configuration dictionary

        Returns:
            Dictionary of headers
        """
        # Try to get the headers directly from the server dict
        headers = server.get("headers", {})

        # If not there, try the config subdict
        if not headers and "config" in server:
            headers = server["config"].get("headers", {})

        return headers
28 changes: 24 additions & 4 deletions mcp_client_for_ollama/server/discovery.py
@@ -66,12 +66,32 @@ def parse_server_configs(config_path: str) -> List[Dict[str, Any]]:
        # Skip disabled servers
        if config.get('disabled', False):
            continue

        # Determine the server type
        server_type = "config"  # Default type for STDIO servers

        # Check for URL-based server types (sse or streamable_http)
        if "type" in config:
            # Type is explicitly specified in the config
            server_type = config["type"]
        elif "url" in config:
            # URL exists but no type; default to streamable_http
            server_type = "streamable_http"

        # Create the server config object
        server = {
            "type": server_type,
            "name": name,
            "config": config
        }

        # For URL-based servers, add direct access to URL and headers
        if server_type in ["sse", "streamable_http"]:
            server["url"] = config.get("url")
            if "headers" in config:
                server["headers"] = config.get("headers")

        all_servers.append(server)

    return all_servers

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "mcp-client-for-ollama"
version = "0.4.0"
description = "MCP Client for Ollama - A client for connecting to Model Context Protocol servers using Ollama"
readme = "README.md"
requires-python = ">=3.10"
2 changes: 1 addition & 1 deletion uv.lock
