Commit eda7008

docs: update import paths for Gemini README

vertex-sdk-bot authored and copybara-github committed

FUTURE_COPYBARA_INTEGRATE_REVIEW=#3966 from googleapis:release-please--branches--main 98e236c
PiperOrigin-RevId: 647201359

1 parent ef5aeda commit eda7008

File tree: 4 files changed (+706, -0 lines changed)

@@ -0,0 +1,188 @@

# Vertex Generative AI SDK for Python

The Vertex Generative AI SDK helps developers use Google's generative AI
[Gemini models](http://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/overview)
and [PaLM language models](http://cloud.google.com/vertex-ai/docs/generative-ai/language-model-overview)
to build AI-powered features and applications.
The SDK supports use cases such as the following:

- Generate text from text, images, and videos (multimodal generation)
- Build stateful multi-turn conversations (chat)
- Function calling

## Installation

To install the
[google-cloud-aiplatform](https://pypi.org/project/google-cloud-aiplatform/)
Python package, run the following command:

```shell
pip3 install --upgrade --user "google-cloud-aiplatform>=1.38"
```
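
The examples that follow assume the SDK has been initialized for your Google Cloud project. A minimal sketch, assuming you substitute your own project ID and region for the placeholder values:

```python
import vertexai

# Placeholders: replace with your own project ID and region.
vertexai.init(project="your-project-id", location="us-central1")
```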

## Usage

For detailed instructions, see [quickstart](http://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/quickstart-multimodal) and [Introduction to multimodal classes in the Vertex AI SDK](http://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/sdk-for-gemini/gemini-sdk-overview-reference).

#### Imports:
```python
from vertexai.generative_models import GenerativeModel, Image, Content, Part, Tool, FunctionDeclaration, GenerationConfig
```

#### Basic generation:
```python
from vertexai.generative_models import GenerativeModel
model = GenerativeModel("gemini-pro")
print(model.generate_content("Why is the sky blue?"))
```
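
The `GenerationConfig` class included in the imports above can be passed to `generate_content` to control output parameters such as temperature and length. A minimal sketch; the values below are illustrative, not recommendations:

```python
from vertexai.generative_models import GenerativeModel, GenerationConfig

model = GenerativeModel("gemini-pro")
response = model.generate_content(
    "Why is the sky blue?",
    # Illustrative values only.
    generation_config=GenerationConfig(temperature=0.2, max_output_tokens=256),
)
print(response.text)
```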

#### Using images and videos
```python
from vertexai.generative_models import GenerativeModel, Image, Part
vision_model = GenerativeModel("gemini-pro-vision")

# Local image
image = Image.load_from_file("image.jpg")
print(vision_model.generate_content(["What is shown in this image?", image]))

# Image from Cloud Storage
image_part = Part.from_uri("gs://download.tensorflow.org/example_images/320px-Felis_catus-cat_on_snow.jpg", mime_type="image/jpeg")
print(vision_model.generate_content([image_part, "Describe this image."]))

# Text and video
video_part = Part.from_uri("gs://cloud-samples-data/video/animals.mp4", mime_type="video/mp4")
print(vision_model.generate_content(["What is in the video?", video_part]))
```

#### Chat
```python
from vertexai.generative_models import GenerativeModel, Image
vision_model = GenerativeModel("gemini-ultra-vision")
vision_chat = vision_model.start_chat()
image = Image.load_from_file("image.jpg")
print(vision_chat.send_message(["I like this image.", image]))
print(vision_chat.send_message("What things do I like?"))
```
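
Text-only chat works the same way; a minimal sketch using the `gemini-pro` model (the prompts are illustrative) that shows the session carrying earlier turns as context:

```python
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-pro")
chat = model.start_chat()
# The chat session carries previous turns forward automatically.
print(chat.send_message("My favorite color is teal."))
print(chat.send_message("What is my favorite color?"))
```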

#### System instructions
```python
from vertexai.generative_models import GenerativeModel
model = GenerativeModel(
    "gemini-1.0-pro",
    system_instruction=[
        "Talk like a pirate.",
        "Don't use rude words.",
    ],
)
print(model.generate_content("Why is the sky blue?"))
```

#### Function calling

```python
from vertexai.generative_models import GenerativeModel, FunctionDeclaration, Part, Tool

# First, create tools that the model can use to answer your questions.
# Describe a function by specifying its schema (JsonSchema format)
get_current_weather_func = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
            },
            "unit": {
                "type": "string",
                "enum": [
                    "celsius",
                    "fahrenheit",
                ]
            }
        },
        "required": [
            "location"
        ]
    },
)
# Tool is a collection of related functions
weather_tool = Tool(
    function_declarations=[get_current_weather_func],
)

# Use tools in chat:
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)
chat = model.start_chat()
# Send a message to the model. The model will respond with a function call.
print(chat.send_message("What is the weather like in Boston?"))
# Then send a function response to the model. The model will use it to answer.
print(chat.send_message(
    Part.from_function_response(
        name="get_current_weather",
        response={
            "content": {"weather": "super nice"},
        }
    ),
))
```
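
In practice you usually inspect which function the model requested and run your own implementation before sending the function response shown above. A hedged sketch that continues the `chat` session from the previous block; the attribute path used to reach the function call is an assumption about the SDK's response structure and may differ between versions:

```python
# Continues the `chat` session created in the function calling example above.
response = chat.send_message("What is the weather like in Boston?")

# Assumption: the requested call is exposed on the first candidate's first part.
function_call = response.candidates[0].content.parts[0].function_call
print(function_call.name)  # expected: "get_current_weather"
print(function_call.args)  # arguments the model chose, e.g. the location
```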

#### Automatic function calling

```python
from vertexai.preview.generative_models import GenerativeModel, Tool, FunctionDeclaration, AutomaticFunctionCallingResponder

# First, create functions that the model can use to answer your questions.
def get_current_weather(location: str, unit: str = "centigrade"):
    """Gets weather in the specified location.

    Args:
        location: The location for which to get the weather.
        unit: Optional. Temperature unit. Can be Centigrade or Fahrenheit. Defaults to Centigrade.
    """
    return dict(
        location=location,
        unit=unit,
        weather="Super nice, but maybe a bit hot.",
    )

# Infer function schema
get_current_weather_func = FunctionDeclaration.from_func(get_current_weather)
# Tool is a collection of related functions
weather_tool = Tool(
    function_declarations=[get_current_weather_func],
)

# Use tools in chat:
model = GenerativeModel(
    "gemini-pro",
    # You can specify tools when creating a model to avoid having to send them with every request.
    tools=[weather_tool],
)

# Activate automatic function calling:
afc_responder = AutomaticFunctionCallingResponder(
    # Optional:
    max_automatic_function_calls=5,
)
chat = model.start_chat(responder=afc_responder)
# Send a message to the model. The model will respond with a function call.
# The SDK will automatically call the requested function and respond to the model.
# The model will use the function call response to answer the original question.
print(chat.send_message("What is the weather like in Boston?"))
```

## Documentation

You can find complete documentation for the Vertex AI SDKs and the Gemini model in the Google Cloud [documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview).

## Contributing

See [Contributing](https://github.com/googleapis/python-aiplatform/blob/main/CONTRIBUTING.rst) for more information on contributing to the Vertex AI Python SDK.

## License

The contents of this repository are licensed under the [Apache License, version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
