# [Fix] Make `spark_version` field optional to work with defaults in policies #4643
## Conversation
This fails with:

I suspect the field needs to be marked as optional in the OpenAPI spec for clusters to make this work.
```diff
@@ -399,7 +399,7 @@ type Cluster struct {
 	ClusterID   string `json:"cluster_id,omitempty"`
 	ClusterName string `json:"cluster_name,omitempty"`

-	SparkVersion string `json:"spark_version"`
+	SparkVersion string `json:"spark_version,omitempty"`
```
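To see why the tag matters, consider how Go's `encoding/json` handles the zero value. A minimal, self-contained sketch using trimmed stand-ins for the real struct (the type names below are illustrative, not the provider's):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed stand-ins for the real Cluster struct, for illustration only.
type withoutOmitEmpty struct {
	SparkVersion string `json:"spark_version"`
}

type withOmitEmpty struct {
	SparkVersion string `json:"spark_version,omitempty"`
}

func main() {
	before, _ := json.Marshal(withoutOmitEmpty{})
	after, _ := json.Marshal(withOmitEmpty{})
	fmt.Println(string(before)) // {"spark_version":""}
	fmt.Println(string(after))  // {}
}
```

Without `omitempty`, an unset field is serialized as `"spark_version": ""`, which the API sees as an explicit (and invalid) value; with the tag, the key is dropped from the request body entirely, leaving room for the policy default to apply.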
Without this change, the jobs unit tests failed with:

```
panic: new_cluster: inconsistency: spark_version is optional, default is empty, but has no omitempty
```

Reflection on the 2.0 structs flagged the incompatibility between an optional field in the schema without a default and a mandatory struct field. This struct is no longer used directly by a resource.
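The check behind this panic is, in spirit, a reflection pass over the JSON struct tags. A minimal sketch of such a pass, assuming a hypothetical `checkOmitEmpty` helper and a simplified way of knowing which fields the schema marks optional (the provider's actual implementation differs):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

type Cluster struct {
	SparkVersion string `json:"spark_version"` // optional in schema, but no omitempty
}

// checkOmitEmpty panics when a field that the schema marks optional (with an
// empty default) lacks the omitempty JSON tag. Hypothetical sketch, not the
// provider's real code.
func checkOmitEmpty(t reflect.Type, optional map[string]bool) {
	for i := 0; i < t.NumField(); i++ {
		tag := t.Field(i).Tag.Get("json")
		name := strings.Split(tag, ",")[0]
		if optional[name] && !strings.Contains(tag, "omitempty") {
			panic(fmt.Sprintf("inconsistency: %s is optional, default is empty, but has no omitempty", name))
		}
	}
}

func main() {
	// Panics with a message similar to the one quoted above.
	checkOmitEmpty(reflect.TypeOf(Cluster{}), map[string]bool{"spark_version": true})
}
```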
LGTM, thanks for doing this Pieter!
If integration tests don't run automatically, an authorized user can trigger them manually. Checks will be approved automatically on success.
## Release v1.75.0

### New Features and Improvements

* Add support for `power_bi_task` in jobs ([#4647](#4647))
* Add support for `dashboard_task` in jobs ([#4646](#4646))
* Add `compute_mode` to `databricks_mws_workspaces` to support creating serverless workspaces ([#4670](#4670))
* Make `spark_version` optional in the context of jobs such that a cluster policy can provide a default value ([#4643](#4643))

### Documentation

* Document `performance_target` in `databricks_job` ([#4651](#4651))
* Add more examples for `databricks_model_serving` ([#4658](#4658))
* Document `on_streaming_backlog_exceeded` in email/webhook notifications in `databricks_job` ([#4660](#4660))
* Refresh `spark_python_task` option in `databricks_job` ([#4666](#4666))

### Exporter

* Emit files installed with `%pip install` in Python notebooks ([#4664](#4664))
* Correctly handle account-level identities when generating the code ([#4650](#4650))
* Add export of dashboard tasks in `databricks_job` ([#4665](#4665))
* Add export of PowerBI tasks in `databricks_job` ([#4668](#4668))
* Add `Ignore` implementation for `databricks_grants` to fix issue with wrongly generated dependencies ([#4650](#4650))
* Improve handling of `owner` for UC resources ([#4669](#4669))
## Changes

Upgrade TF provider to 1.75.0. Includes databricks/terraform-provider-databricks#4643, which fixes #2755.
## Changes

A cluster policy can enforce a specific `spark_version` and set it as the default. This mechanism allows for a centralized choice of the Databricks Runtime version across all jobs. To allow a job to inherit this field from a policy, it must be configured as optional in the schema; a sketch of the policy side follows below.

The job resource referred to `JobSettings` and `JobSettingsResource` with a mix of `js` and `jsr` variable names. This PR updates references to `JobSettingsResource` to consistently use `jsr`.
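For context, the policy side of this mechanism is a policy definition that pins the attribute. A sketch of such a definition, embedded as a Go string for consistency with the diff above; the attribute/type keys follow the cluster-policy definition format, but the specific runtime version is an illustrative assumption:

```go
package main

import "fmt"

// policyDefinition fixes spark_version so clusters created under the policy
// inherit it. The version string here is an illustrative assumption.
const policyDefinition = `{
  "spark_version": {
    "type": "fixed",
    "value": "15.4.x-scala2.12"
  }
}`

func main() {
	// A job whose new_cluster omits spark_version (possible once the field
	// carries omitempty) picks up this value from the policy.
	fmt.Println(policyDefinition)
}
```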
## Tests

- `make test` run locally
- relevant change in `docs/` folder