
Google Gemini 2.0 flash has wider range of temperature. #7696


Closed
5 tasks done
anilgulecha opened this issue Feb 14, 2025 · 7 comments · Fixed by #7703 or #7748
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@anilgulecha

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const baseLLM = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash",
  temperature: 1,
  maxRetries: 1,
  maxConcurrency: 1,
});

Error Message and Stack Trace (if applicable)

Error: temperature must be in the range of [0.0,1.0]
at new ChatGoogleGenerativeAI

Description

https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-google-genai/src/chat_models.ts#L647

The code assumes the temperature must be between 0 and 1, but the latest model, gemini-2.0-flash, supports values up to 2.
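As a minimal, self-contained sketch of the widened check (not the library's actual code; the helper name and the configurable upper bound are assumptions for illustration):

```typescript
// Hypothetical helper illustrating a model-aware temperature check.
// gemini-2.0-flash accepts [0.0, 2.0]; the current validation caps at 1.0.
function validateTemperature(
  temperature: number | undefined,
  max = 2
): number | undefined {
  if (temperature == null) return temperature; // unset is allowed
  if (temperature < 0 || temperature > max) {
    throw new Error(
      `\`temperature\` must be in the range of [0.0,${max.toFixed(1)}]`
    );
  }
  return temperature;
}

validateTemperature(1.5); // accepted under the [0.0, 2.0] bound
```

Passing the maximum as a parameter keeps the same helper usable for models with different documented ranges.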

System Info

I'm on

    "@langchain/core": "^0.3.39",
    "@langchain/google-genai": "^0.1.8",
    "@langchain/langgraph": "^0.2.46"

dosubot bot commented Feb 14, 2025

Hello, @anilgulecha! I'm here to assist you with bugs, questions, and becoming a contributor. Let's squash those bugs together! 🤖

To resolve the issue with the gemini-2.0-flash model's temperature parameter, you need to update the validation logic in the LangChain.js code. The current implementation enforces a range of [0.0, 1.0], but the gemini-2.0-flash model supports values up to 2. You can modify the code as follows:

In the ChatGoogleGenerativeAI constructor, update the condition to:

this.temperature = fields?.temperature ?? this.temperature;
if (this.temperature && (this.temperature < 0 || this.temperature > 2)) {
  throw new Error("`temperature` must be in the range of [0.0,2.0]");
}

This change will allow the temperature parameter to accept values up to 2, accommodating the gemini-2.0-flash model's requirements [1][2].

To continue talking to Dosu, mention @dosu.



@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Feb 14, 2025
@afirstenberg
Contributor

You may also wish to switch to the @langchain/google-webauth package which has this change already applied.

@anilgulecha
Author

Hi, how can I pull this locally? Currently I see only 0.1.8 available on npm:
https://www.npmjs.com/package/@langchain/google-genai

When will a newer version be tagged?

Regards

@jacoblee93
Collaborator

Ah I think this was only fixed for vertex, need to do another release for Gen AI!

@jacoblee93 jacoblee93 reopened this Feb 22, 2025
@afirstenberg
Contributor

Ah I think this was only fixed for vertex, need to do another release for Gen AI!

This had been fixed in Vertex/common for a while.
These changes were for genai.
(They just came at the same time I did a bunch of other work for Vertex)

I think all you need to do is cut a new release for google-genai.

@jacoblee93
Collaborator

Oh my mistake! Will do later today

@jacoblee93
Collaborator

Sorry about the delay, just cut!
