[Fix] Make spark_version field optional to work with defaults in policies (#4643)
## Changes
A cluster policy can enforce a specific `spark_version` and set it as the default. This allows the Databricks Runtime version to be chosen centrally for all jobs. For a job to inherit this field from a policy, the field must be configured as optional in the provider schema, as in the sketch below.
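For illustration only (not part of this PR), a minimal sketch of the configuration this change enables. The policy name, node type, runtime version, and notebook path are placeholder values:

```hcl
# Hypothetical policy that pins spark_version. A "fixed" policy element is
# applied automatically, so clusters governed by this policy can omit the field.
resource "databricks_cluster_policy" "runtime_pin" {
  name = "centralized-runtime"
  definition = jsonencode({
    "spark_version" : {
      "type"  : "fixed",
      "value" : "14.3.x-scala2.12"
    }
  })
}

resource "databricks_job" "example" {
  name = "job-with-policy-default"

  job_cluster {
    job_cluster_key = "main"
    new_cluster {
      policy_id    = databricks_cluster_policy.runtime_pin.id
      num_workers  = 1
      node_type_id = "i3.xlarge"
      # spark_version is intentionally omitted; with this change the
      # provider accepts that, and the policy supplies the value.
    }
  }

  task {
    task_key        = "main"
    job_cluster_key = "main"
    notebook_task {
      notebook_path = "/Shared/example"
    }
  }
}
```

Before this change, omitting `spark_version` here would fail schema validation even though the policy already determines the value.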
The job resource referred to `JobSettings` and `JobSettingsResource` through an inconsistent mix of `js` and `jsr` variable names. This PR also renames all `JobSettingsResource` references to `jsr`.
## Tests
- [x] `make test` run locally
- [ ] relevant change in `docs/` folder
---------
Co-authored-by: Tanmay Rustagi <[email protected]>
`NEXT_CHANGELOG.md` (+2 -1):
```diff
@@ -7,7 +7,8 @@
 * Add support for `power_bi_task` in jobs ([#4647](https://github.com/databricks/terraform-provider-databricks/pull/4647))
 * Add support for `dashboard_task` in jobs ([#4646](https://github.com/databricks/terraform-provider-databricks/pull/4646))
 * Add `compute_mode` to `databricks_mws_workspaces` to support creating serverless workspaces ([#4670](https://github.com/databricks/terraform-provider-databricks/pull/4670)).
-
+* Make `spark_version` optional in the context of jobs such that a cluster policy can provide a default value ([#4643](https://github.com/databricks/terraform-provider-databricks/pull/4643))
+
```