🐛 [firebase_vertexai] Feature Request: Support thinkingBudget configuration for Gemini 2.5 Flash in Flutter #17368


Open
nmarafo opened this issue May 19, 2025 · 1 comment
Labels
Needs Attention (This issue needs maintainer attention.) · plugin: vertexai (label issues for vertexai plugin) · type: enhancement (New feature or request)

Comments


nmarafo commented May 19, 2025

Hi team,

First of all, thank you for your amazing work on this library. I’d like to request support for a new feature introduced in the Gemini 2.5 Flash model — the ability to configure the internal "thinking" process via the thinkingBudget parameter.

As described in the official documentation, Gemini 2.5 Flash models support an internal reasoning process that can be tuned by setting the thinkingBudget (an integer between 0 and 24,576). This parameter gives the model guidance on how many tokens it can use internally to “think” before generating the final response.

Why is this important?
This feature is crucial for advanced tasks such as:

Complex code generation

Multi-step problem solving in math or logic

Structured data analysis and reasoning

Use cases where we want to trade off latency vs. reasoning depth

For example, setting thinkingBudget: 0 disables the internal reasoning (faster response), while higher values allow more in-depth reasoning (better quality for complex tasks).
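Since the documented range is fixed (0 to 24,576 tokens), callers could validate the value before sending a request. The helper below is purely illustrative and not part of any SDK; only the range comes from the documentation:

```python
# Illustrative helper (not part of any SDK): validate a requested
# thinkingBudget against the documented range for Gemini 2.5 Flash.
MIN_THINKING_BUDGET = 0
MAX_THINKING_BUDGET = 24_576  # documented upper bound

def validate_thinking_budget(budget: int) -> int:
    """Return the budget unchanged if valid, otherwise raise ValueError."""
    if not isinstance(budget, int):
        raise ValueError(f"thinkingBudget must be an int, got {type(budget).__name__}")
    if not MIN_THINKING_BUDGET <= budget <= MAX_THINKING_BUDGET:
        raise ValueError(
            f"thinkingBudget must be between {MIN_THINKING_BUDGET} "
            f"and {MAX_THINKING_BUDGET}, got {budget}"
        )
    return budget
```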

What we need
Please consider adding support for configuring the thinkingBudget parameter in the Flutter wrapper/library for Gemini. Ideally, it could be passed through a GenerateContentConfig (or similar config object), as is done in the Python API:

from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",
    contents="Explain the Occam's Razor concept and provide everyday examples of it",
    config=types.GenerateContentConfig(
        # Allow up to 1024 tokens of internal reasoning before answering.
        thinking_config=types.ThinkingConfig(thinking_budget=1024)
    ),
)

Final thoughts
Having access to this parameter in Flutter will enable developers to optimize Gemini's performance depending on their specific app needs. It’s especially valuable in educational, scientific, and reasoning-intensive applications.

Thanks again for your hard work! Looking forward to your feedback.

Best regards,
Norberto

@nmarafo nmarafo added type: enhancement New feature or request Needs Attention This issue needs maintainer attention. labels May 19, 2025
@SelaseKay SelaseKay added the plugin: vertexai label issues for vertexai plugin label May 20, 2025

davidpryor commented May 20, 2025

I have implemented this in a fork so we could test some of the new models with thinking turned off. I didn't mirror the config options 1:1 (enable/disable thinking AND thinking budget), but setting thinkingBudget to 0 effectively disables thinking.

I mimicked the config nesting already in the package and the API turned out like this:

FirebaseAI.vertexAI(...).generativeModel(
      model: "gemini-2.5-flash-preview-04-17",
      generationConfig: GenerationConfig(
        ...,
        thinkingConfig: ThinkingConfig(
          thinkingBudget: 0,
        ),
      ),
    );

and I added the thinking token count to the UsageMetadata model:

  UsageMetadata._(
      {this.promptTokenCount,
      this.candidatesTokenCount,
      this.totalTokenCount,
      this.promptTokensDetails,
      this.candidatesTokensDetails,
      this.thoughtsTokenCount});
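As a rough illustration of how the extra field composes, here is a Python mirror of the proposed shape (field names follow the Dart snippet above; the claim that thinking tokens count toward output, like candidate tokens, is my assumption based on how the Gemini API bills thinking models, not something this package guarantees):

```python
# Hypothetical mirror of the proposed UsageMetadata shape, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UsageMetadata:
    prompt_token_count: Optional[int] = None
    candidates_token_count: Optional[int] = None
    thoughts_token_count: Optional[int] = None  # new field proposed above
    total_token_count: Optional[int] = None

    def output_tokens(self) -> int:
        # Assumption: thinking tokens are counted as output alongside candidates.
        return (self.candidates_token_count or 0) + (self.thoughts_token_count or 0)
```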

I am also waiting for implicit caching to be enabled for Vertex AI so I can add that to the UsageMetadata model as well.
I'm happy to refine this and put up a PR if that would be useful.
