Add o1 to verified models #6642

Merged
2 commits merged into main on Feb 6, 2025

Conversation

@mamoodi (Collaborator) commented Feb 6, 2025

End-user friendly description of the problem this fixes or functionality that this introduces

  • Include this change in the Release Notes. If checked, you must provide an end-user friendly description for your change below

Add o1 to verified models


Give a summary of what the PR does, explaining any non-trivial design decisions

Tested setting the provider to OpenAI and the model to o1, and it works. Do we want it in the "verified models"?


Link of any specific issues this addresses
#6030


To run this PR locally, use the following command:

docker run -it --rm \
  -p 3000:3000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --add-host host.docker.internal:host-gateway \
  -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:69a2435-nikolaik \
  --name openhands-app-69a2435 \
  docker.all-hands.dev/all-hands-ai/openhands:69a2435

@mamoodi (Collaborator, Author) commented Feb 6, 2025

Does it need to be added anywhere in this file (llm.py)?

# excerpt from llm.py; the exception classes below come from litellm
from litellm.exceptions import (
    APIConnectionError,
    APIError,
    InternalServerError,
    RateLimitError,
    ServiceUnavailableError,
)

LLM_RETRY_EXCEPTIONS: tuple[type[Exception], ...] = (
    APIConnectionError,
    # FIXME: APIError is useful on 502 from a proxy for example,
    # but it also retries on other errors that are permanent
    APIError,
    InternalServerError,
    RateLimitError,
    ServiceUnavailableError,
)
# cache prompt supporting models
# remove this when gemini and deepseek are supported
CACHE_PROMPT_SUPPORTED_MODELS = [
    'claude-3-5-sonnet-20241022',
    'claude-3-5-sonnet-20240620',
    'claude-3-5-haiku-20241022',
    'claude-3-haiku-20240307',
    'claude-3-opus-20240229',
]
# function calling supporting models
FUNCTION_CALLING_SUPPORTED_MODELS = [
    'claude-3-5-sonnet',
    'claude-3-5-sonnet-20240620',
    'claude-3-5-sonnet-20241022',
    'claude-3.5-haiku',
    'claude-3-5-haiku-20241022',
    'gpt-4o-mini',
    'gpt-4o',
    'o1-2024-12-17',
    'o3-mini-2025-01-31',
    'o3-mini',
]
# visual browsing tool supported models
# This flag is needed since gpt-4o and gpt-4o-mini do not allow passing image_urls with role='tool'
VISUAL_BROWSING_TOOL_SUPPORTED_MODELS = [
    'claude-3-5-sonnet',
    'claude-3-5-sonnet-20240620',
    'claude-3-5-sonnet-20241022',
    'o1-2024-12-17',
]
# models that accept the reasoning_effort parameter
REASONING_EFFORT_SUPPORTED_MODELS = [
    'o1-2024-12-17',
    'o1',
    'o3-mini-2025-01-31',
    'o3-mini',
]
# models that do not support the stop parameter
MODELS_WITHOUT_STOP_WORDS = [
    'o1-mini',
    'o1-preview',
]
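
For illustration, a minimal sketch of how a capability list like these is typically consulted. This is an assumption about the lookup, not the actual llm.py code; model_supports is a hypothetical helper:

def model_supports(model: str, supported: list[str]) -> bool:
    """Drop any provider prefix (e.g. 'openai/o1' -> 'o1'), then
    check membership in a capability list."""
    bare_name = model.split('/')[-1]
    return bare_name in supported

# 'o1' already appears in REASONING_EFFORT_SUPPORTED_MODELS above, so a
# lookup like this would succeed without further llm.py changes:
print(model_supports('openai/o1', REASONING_EFFORT_SUPPORTED_MODELS))  # True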

@enyst (Collaborator) left a comment
I don't think it needs to be in more places in llm.py.

Maybe we can remove o1-preview from verified, if we add o1?

@mamoodi (Collaborator, Author) commented Feb 6, 2025

> I don't think it needs to be in more places in llm.py.
>
> Maybe we can remove o1-preview from verified, if we add o1?

You know better. Should I remove it?

@enyst (Collaborator) commented Feb 6, 2025

Yes, I don't think there's much use for it anymore, and anyway people can add it themselves in advanced settings.

mamoodi merged commit ff48f8b into main on Feb 6, 2025
15 checks passed
mamoodi deleted the mh/add-o1 branch on February 6, 2025 at 21:38
adityasoni9998 pushed a commit to adityasoni9998/OpenHands that referenced this pull request on Feb 7, 2025
chuckbutkus pushed a commit that referenced this pull request on Feb 7, 2025