docs(genai): Add Dedicated Samples for Flash-Lite/Pro #13254
base: main
Conversation
Here is the summary of changes. You are about to add 2 region tags.
This comment is generated by snippet-bot.
Hello @holtskinner, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request adds two new Python files, `textgen_with_txt_2_0_flash_lite.py` and `textgen_with_txt_2_0_pro.py`, to the `genai/text_generation` directory. These files provide dedicated samples for using the Gemini 2.0 Flash Lite and Gemini 2.0 Pro models for text generation. Each file contains a `generate_content` function that initializes a `genai.Client`, calls the `generate_content` method with the appropriate model name and a sample question, and prints the response. The code is also wrapped in start/end tags for documentation purposes.
Highlights
- New Samples: Adds dedicated samples for the Gemini 2.0 Flash Lite and Gemini 2.0 Pro models.
- Text Generation: Demonstrates how to use the `generate_content` method to generate text from a given prompt.
- Model Specification: Shows how to specify the model to use for text generation (`gemini-2.0-flash-lite` and `gemini-2.0-pro-exp-02-05`).
Changelog
- `genai/text_generation/textgen_with_txt_2_0_flash_lite.py`
  - Added a new file demonstrating text generation with the Gemini 2.0 Flash Lite model.
  - The file includes a `generate_content` function that sends a prompt to the model and prints the response.
  - Uses the `googlegenaisdk_textgen_with_txt_2_0_flash_lite` tag for documentation.
- `genai/text_generation/textgen_with_txt_2_0_pro.py`
  - Added a new file demonstrating text generation with the Gemini 2.0 Pro model.
  - The file includes a `generate_content` function that sends a prompt to the model and prints the response.
  - Uses the `googlegenaisdk_textgen_with_txt_2_0_pro` tag for documentation.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
The models awaken, with prompts in their core,
Generating answers, like never before.
From Flash Lite's speed to Pro's deeper dive,
AI's creations, continue to thrive.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
The pull request adds two new Python files, `textgen_with_txt_2_0_flash_lite.py` and `textgen_with_txt_2_0_pro.py`, which demonstrate text generation using the Gemini 2.0 Flash Lite and Pro models, respectively. The code is straightforward and easy to understand. However, there are a few areas that could be improved.
Summary of Findings
- Model Name Hardcoding: The model names are hardcoded in the scripts. Consider making them configurable via environment variables or command-line arguments for better flexibility.
- Error Handling: The code lacks explicit error handling. Adding try-except blocks would make the code more robust.
- Copyright Year: The copyright year is set to 2025. Please ensure this is the correct and intended year.
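The first finding above (configurable model names) can be sketched without touching the API at all. This is a hypothetical pattern, not part of the SDK: the environment-variable name `GEMINI_MODEL` is an assumption for illustration, with the sample's current model as the fallback.

```python
import os

# Assumed variable name for illustration; the samples do not define it.
DEFAULT_MODEL = "gemini-2.0-flash-lite"


def resolve_model() -> str:
    # Read the model from the environment, falling back to the sample default.
    return os.environ.get("GEMINI_MODEL", DEFAULT_MODEL)


os.environ.pop("GEMINI_MODEL", None)
print(resolve_model())  # falls back to gemini-2.0-flash-lite

os.environ["GEMINI_MODEL"] = "gemini-2.0-pro-exp-02-05"
print(resolve_model())  # now gemini-2.0-pro-exp-02-05
```

The resolved name would then be passed as the `model` argument in place of the hardcoded string.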
Merge Readiness
The pull request introduces new sample code that is generally well-structured and easy to understand. However, the hardcoded model names and lack of error handling are potential issues that should be addressed before merging. I am unable to directly approve this pull request, and recommend that others review and approve this code before merging. At a minimum, the high severity issues should be addressed before merging.
```python
    model="gemini-2.0-flash-lite",
    contents="How does AI work?",
)
```
The model name is hardcoded here. Consider making it configurable via an environment variable or command-line argument to allow users to easily switch between models without modifying the code.
Suggested change:

```python
    model=os.environ.get("GEMINI_MODEL", "gemini-2.0-flash-lite"),
    contents="How does AI work?",
)
```
```python
    model="gemini-2.0-pro-exp-02-05",
    contents="How does AI work?",
)
```
The model name is hardcoded here. Consider making it configurable via an environment variable or command-line argument to allow users to easily switch between models without modifying the code.
Suggested change:

```python
    model=os.environ.get("GEMINI_MODEL", "gemini-2.0-pro-exp-02-05"),
    contents="How does AI work?",
)
```
@@ -0,0 +1,36 @@
# Copyright 2025 Google LLC |
@gemini-code-assist This is a new file, so 2025 is the correct value in year 2025.
```python
client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash-lite",
    contents="How does AI work?",
)
print(response.text)
```
Consider adding a try-except block to handle potential exceptions during the API call. This will make the code more robust.
Suggested change:

```python
try:
    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.0-flash-lite",
        contents="How does AI work?",
    )
except Exception as e:
    print(f"An error occurred: {e}")
    return ""
print(response.text)
```
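The try/except pattern suggested above can be demonstrated end to end with a stub in place of the API call, so it runs offline. The stub `call_model` is an assumption for illustration; in the real sample the guarded call is `client.models.generate_content(...)`, which can raise on network, auth, or quota errors.

```python
def call_model(fail: bool = False) -> str:
    """Stand-in for the generate_content API call."""
    if fail:
        raise RuntimeError("quota exceeded")
    return "AI works by learning patterns from data."


def generate_content(fail: bool = False) -> str:
    try:
        text = call_model(fail)
    except Exception as e:
        # Surface the error and return an empty string, as the suggestion does.
        print(f"An error occurred: {e}")
        return ""
    print(text)
    return text


generate_content()           # normal path: prints the stub's text
generate_content(fail=True)  # error path: prints the error, returns ""
```

Catching the narrowest exception types the SDK actually raises would be preferable to a bare `except Exception` in production code.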
@@ -0,0 +1,36 @@
# Copyright 2025 Google LLC |
```python
client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",
    contents="How does AI work?",
)
print(response.text)
```
Consider adding a try-except block to handle potential exceptions during the API call. This will make the code more robust.
Suggested change:

```python
try:
    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.0-pro-exp-02-05",
        contents="How does AI work?",
    )
except Exception as e:
    print(f"An error occurred: {e}")
    return ""
print(response.text)
```
|
```python
def generate_content() -> str:
    # [START googlegenaisdk_textgen_with_txt_2_0_flash_lite]
```
Please use `googlegenaisdk_textgen_with_txt_flash_lite`.
No description provided.