v1.88.0


1.88.0 (2025-04-09)

Features

  • Add a module-level function to create a Gemini template config for single-turn Gemini examples without having to explicitly construct the Gemini example. (126d10c)
  • Add autoscaling_target_request_count_per_minute option to Preview model deployment on the Endpoint and Model classes; see the deployment sketch after this list. (0f1f10a)
  • Add Gen AI logging public preview API (1589f66)
  • Add new template for AgentEngine (0478f10)
  • Allow creating multimodal datasets without explicitly specifying a BigQuery dataset/table. (f5043a6)
  • Allow EvalTask to take its dataset as a Google Sheet, simplifying the rubric-revision critical user journey (CUJ); see the evaluation sketch after this list. (15df1f6)
  • Allow table targets in multi-region datasets when creating multimodal datasets (2d7bc32)
  • Check whether the rubrics column is present before converting the list of rubrics to a string (6c1569b)
  • Convert the list of rubrics to a string before sending the API request. This lets users keep the default parse_rubrics function (which returns a list of rubrics) with customized prompts, and also bring their own rubrics as a list. (9f21b73)
  • GenAI Evaluation: Release GenAI Evaluation SDK autorater metric configuration utils to the vertexai.preview module. (f816d5a)
  • GenAI Evaluation: Release GenAI Evaluation SDK parsing of rubric-generation responses with additional fields to the vertexai.preview module. (79ca86a)
  • GenAI Evaluation: Release GenAI Evaluation SDK rubric-based evaluation to the vertexai.preview module. (bb07581)
  • Add Model Optimizer SDK support (f257298)
  • Track the output path for metrics_table in experiment metadata; if an output bucket is specified but no file name is given, a unique file name is generated (be2c99f)
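
For the new request-based autoscaling option, the following is a minimal sketch, assuming the preview deployment path accepts the option as a deploy() keyword in the same style as the GA Model.deploy() signature; the project, region, model ID, and machine type are placeholders, not taken from this release.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Placeholder model resource name; substitute a real uploaded model.
model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/1234567890"
)

endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=1,
    max_replica_count=5,
    # New in 1.88.0 (Preview, per the note above): scale replicas on
    # request throughput instead of CPU utilization or accelerator
    # duty cycle. The exact keyword placement is an assumption here.
    autoscaling_target_request_count_per_minute=60,
)
```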
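For the Google Sheets support in EvalTask, a minimal sketch, assuming a readable sheet URL is passed where a pandas DataFrame or table URI previously went; the sheet URL, metric name, and model name are illustrative, not from this release.

```python
import vertexai
from vertexai.generative_models import GenerativeModel
from vertexai.preview.evaluation import EvalTask

vertexai.init(project="my-project", location="us-central1")

# The dataset argument previously took a pandas DataFrame or a
# BigQuery/Cloud Storage URI; per this release it can also be a
# Google Sheets URL (placeholder sheet ID below).
eval_task = EvalTask(
    dataset="https://docs.google.com/spreadsheets/d/SHEET_ID",
    metrics=["exact_match"],
)

# Run the evaluation against an illustrative Gemini model.
result = eval_task.evaluate(model=GenerativeModel("gemini-1.5-pro"))
print(result.summary_metrics)
```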

Bug Fixes

  • Prepend the question tag to the rubric-critiquing response after editing the predefined prompts to end with <question>, avoiding parsing errors (0abd6ad)