Update tests to be compatible with new OpenAI, MistralAI and MCP versions #2094

Status: Open. Wants to merge 4 commits into main.
5 changes: 4 additions & 1 deletion docs/mcp/server.md
@@ -117,7 +117,10 @@ async def sampling_callback(
         SamplingMessage(
             role='user',
             content=TextContent(
-                type='text', text='write a poem about socks', annotations=None
+                type='text',
+                text='write a poem about socks',
+                annotations=None,
+                meta=None,
             ),
         )
     ]
9 changes: 8 additions & 1 deletion pydantic_ai_slim/pydantic_ai/mcp.py
Comment from medaminezghal (Contributor, Author), Jun 28, 2025: mcp.types.Content is deprecated.

Comment from medaminezghal (Contributor, Author): Adding ResourceLink according to this test.

@@ -208,7 +208,7 @@ async def _sampling_callback(
)

     def _map_tool_result_part(
-        self, part: mcp_types.Content
+        self, part: mcp_types.ContentBlock
     ) -> str | messages.BinaryContent | dict[str, Any] | list[Any]:
         # See https://github.com/jlowin/fastmcp/blob/main/docs/servers/tools.mdx#return-values

@@ -239,6 +239,13 @@ def _map_tool_result_part(
                 )
             else:
                 assert_never(resource)
+        elif isinstance(part, mcp_types.ResourceLink):
+            return {
+                'type': 'resource_link',
+                'uri': part.uri,
+                'name': part.name,
+                'mimeType': part.mimeType,
+            }
         else:
             assert_never(part)

Comment from DouweM (Contributor): Is the expectation that this resource link is sent straight to the model as JSON, or are we supposed to handle it somehow and pass the actual resource? If it's supposed to be JSON, we can use part.model_dump() here.

Reply from medaminezghal (Contributor, Author): @DouweM I'm not sure either, so I opened a question on GitHub about the new ResourceLink, and that's what I got.
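The new branch can be sketched stdlib-only. FakeResourceLink below is a hypothetical stub standing in for mcp.types.ResourceLink; it only carries the three attributes the mapping reads, and map_resource_link mirrors the dict the PR returns for a resource_link part.

```python
# Stand-in sketch of the new ResourceLink branch in _map_tool_result_part.
# FakeResourceLink is a hypothetical stub, not the real mcp.types.ResourceLink.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class FakeResourceLink:
    uri: str
    name: str
    mimeType: Optional[str] = None


def map_resource_link(part: FakeResourceLink) -> dict[str, Any]:
    # Same JSON shape the PR returns for a resource_link tool-result part.
    return {
        'type': 'resource_link',
        'uri': part.uri,
        'name': part.name,
        'mimeType': part.mimeType,
    }


link = FakeResourceLink(uri='file:///tmp/notes.txt', name='notes', mimeType='text/plain')
print(map_resource_link(link)['type'])  # resource_link
```

Whether the model should receive this raw JSON or the resolved resource is exactly the open question in the thread above; part.model_dump() would produce a superset of these keys if raw JSON is the intent.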

3 changes: 3 additions & 0 deletions pydantic_ai_slim/pydantic_ai/models/__init__.py
@@ -192,6 +192,7 @@
     'gpt-4o-audio-preview',
     'gpt-4o-audio-preview-2024-10-01',
     'gpt-4o-audio-preview-2024-12-17',
+    'gpt-4o-audio-preview-2025-06-03',
     'gpt-4o-mini',
     'gpt-4o-mini-2024-07-18',
     'gpt-4o-mini-audio-preview',
@@ -242,6 +243,7 @@
     'o3-mini',
     'o3-mini-2025-01-31',
     'openai:chatgpt-4o-latest',
+    'openai:codex-mini-latest',
     'openai:gpt-3.5-turbo',
     'openai:gpt-3.5-turbo-0125',
     'openai:gpt-3.5-turbo-0301',
Expand Down Expand Up @@ -274,6 +276,7 @@
     'openai:gpt-4o-audio-preview',
     'openai:gpt-4o-audio-preview-2024-10-01',
     'openai:gpt-4o-audio-preview-2024-12-17',
+    'openai:gpt-4o-audio-preview-2025-06-03',
     'openai:gpt-4o-mini',
     'openai:gpt-4o-mini-2024-07-18',
     'openai:gpt-4o-mini-audio-preview',
4 changes: 2 additions & 2 deletions pydantic_ai_slim/pyproject.toml
@@ -61,7 +61,7 @@ dependencies = [
 # WARNING if you add optional groups, please update docs/install.md
 logfire = ["logfire>=3.11.0"]
 # Models
-openai = ["openai>=1.76.0"]
+openai = ["openai>=1.86.0"]
 cohere = ["cohere>=5.13.11; platform_system != 'Emscripten'"]
 vertexai = ["google-auth>=2.36.0", "requests>=2.32.2"]
 google = ["google-genai>=1.15.0"]
@@ -75,7 +75,7 @@ tavily = ["tavily-python>=0.5.0"]
 # CLI
 cli = ["rich>=13", "prompt-toolkit>=3", "argcomplete>=3.5.0"]
 # MCP
-mcp = ["mcp>=1.9.4; python_version >= '3.10'"]
+mcp = ["mcp>=1.10.0; python_version >= '3.10'"]
 # Evals
 evals = ["pydantic-evals=={{ version }}"]
 # A2A
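The version bumps above tighten the minimum floors. A naive sketch of what such a floor means (release segments only; real resolvers follow PEP 440 via the `packaging` library, so treat this as an illustration, not the actual pip logic):

```python
# Naive version-floor check: compare dotted release segments as int tuples.
# Real specifiers (PEP 440) also handle pre/post/dev releases; this does not.
def parse(v: str) -> tuple:
    return tuple(int(x) for x in v.split('.'))


def at_least(installed: str, minimum: str) -> bool:
    return parse(installed) >= parse(minimum)


# openai 1.86.0 satisfies both the old (>=1.76.0) and new (>=1.86.0) floor...
print(at_least('1.86.0', '1.76.0'), at_least('1.86.0', '1.86.0'))  # True True
# ...but mcp 1.9.4 no longer satisfies the new >=1.10.0 floor.
print(at_least('1.9.4', '1.10.0'))  # False
```

Note the tuple comparison is what makes 1.9.4 < 1.10.0 come out right: (1, 9, 4) < (1, 10, 0) numerically, even though '1.9.4' > '1.10.0' as strings.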
2 changes: 1 addition & 1 deletion tests/models/test_mistral.py
@@ -123,7 +123,7 @@ def completion_message(
     return MistralChatCompletionResponse(
         id='123',
         choices=[MistralChatCompletionChoice(finish_reason='stop', index=0, message=message)],
-        created=1704067200 if with_created else None,  # 2024-01-01
+        created=1704067200 if with_created else 0,  # 2024-01-01
         model='mistral-large-123',
         object='chat.completion',
         usage=usage or MistralUsageInfo(prompt_tokens=1, completion_tokens=1, total_tokens=1),
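The inline comment's claim that 1704067200 is 2024-01-01 checks out with the stdlib:

```python
# Stdlib check that the magic timestamp in the test is the date the
# inline comment claims.
from datetime import datetime, timezone

created = datetime.fromtimestamp(1704067200, tz=timezone.utc)
print(created.isoformat())  # 2024-01-01T00:00:00+00:00
```

The change itself (fallback of 0 instead of None) reflects that the newer MistralAI SDK no longer accepts None for `created`; the epoch 0 sentinel is the closest stand-in for "absent" in the test fixture.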
2 changes: 1 addition & 1 deletion tests/test_mcp.py
@@ -982,7 +982,7 @@ async def test_client_sampling():
         {
             'meta': None,
             'role': 'assistant',
-            'content': {'type': 'text', 'text': 'sampling model response', 'annotations': None},
+            'content': {'type': 'text', 'text': 'sampling model response', 'annotations': None, 'meta': None},
             'model': 'test',
             'stopReason': None,
         }
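The snapshot update mirrors the docs change earlier in this PR: after the mcp bump, serialized text content carries a 'meta' key next to 'annotations'. A plain-dict sketch of the updated expectation (the snapshot's shape, not the mcp serialization code itself):

```python
# Plain-dict sketch of the updated snapshot shape in test_client_sampling.
expected = {
    'meta': None,
    'role': 'assistant',
    'content': {'type': 'text', 'text': 'sampling model response', 'annotations': None, 'meta': None},
    'model': 'test',
    'stopReason': None,
}
# The content block now has four keys, 'meta' included.
print(sorted(expected['content']))  # ['annotations', 'meta', 'text', 'type']
```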